MIT News - School of Science
China-based emissions of three potent climate-warming greenhouse gases spiked in past decade

Two studies pinpoint their likely industrial sources and mitigation opportunities.

When it comes to heating up the planet, not all greenhouse gases are created equal. They vary widely in their global warming potential (GWP), a measure of how much infrared thermal radiation a greenhouse gas would absorb over a given time frame once it enters the atmosphere. For example, measured over a 100-year period, the GWP of methane is about 28 times that of carbon dioxide (CO2), and the GWPs of a class of greenhouse gases known as perfluorocarbons (PFCs) are thousands of times that of CO2. The lifespans in the atmosphere of different greenhouse gases also vary widely. Methane persists in the atmosphere for around 10 years; CO2 for over 100 years; and PFCs for up to tens of thousands of years.
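
The GWP comparison above is exactly how emissions of different gases are put on a common scale: tonnes of gas multiplied by its 100-year GWP gives tonnes of CO2-equivalent. A minimal sketch of that arithmetic (the methane value comes from this article; the PFC-14 figure is an illustrative order-of-magnitude stand-in, not a value from the studies):

```python
# 100-year global warming potentials, relative to CO2.
GWP_100 = {
    "CO2": 1,
    "CH4": 28,       # methane, per the article
    "PFC-14": 7000,  # illustrative "thousands of times CO2" value, not from the studies
}

def co2_equivalent_tonnes(gas: str, tonnes: float) -> float:
    """Convert tonnes of a greenhouse gas to tonnes of CO2-equivalent."""
    return tonnes * GWP_100[gas]

# One tonne of methane warms like ~28 tonnes of CO2 over a century;
# one tonne of a PFC like several thousand tonnes.
print(co2_equivalent_tonnes("CH4", 1.0))     # 28.0
print(co2_equivalent_tonnes("PFC-14", 1.0))  # 7000.0
```

This is why even small absolute emissions of PFCs matter for climate targets: the multiplier is enormous, and unlike methane, the gas essentially never leaves the atmosphere on human timescales.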

Given the high GWPs and lifespans of PFCs, their emissions could pose a major roadblock to achieving the aspirational goal of the Paris Agreement on climate change — to limit the increase in global average surface temperature to 1.5 degrees Celsius above preindustrial levels. Now, two new studies based on atmospheric observations inside China and high-resolution atmospheric models show a rapid rise in Chinese emissions over the last decade (2011 to 2020 or 2021) of three PFCs: tetrafluoromethane (PFC-14) and hexafluoroethane (PFC-116) (results in PNAS), and perfluorocyclobutane (PFC-318) (results in Environmental Science & Technology).

Both studies find that Chinese emissions have played a dominant role in driving up global emission levels for all three PFCs.

The PNAS study identifies substantial PFC-14 and PFC-116 emission sources in the less-populated western regions of China from 2011 to 2021, likely due to the concentration of the aluminum industry in these regions. The semiconductor industry also contributes to some of the emissions detected in the more economically developed eastern regions. These emissions are byproducts of aluminum smelting, or occur during the use of the two PFCs in the production of semiconductors and flat panel displays. During the observation period, emissions of both gases in China rose by 78 percent, accounting for most of the increase in global emissions of these gases.

The ES&T study finds that during 2011-20, Chinese PFC-318 emissions rose by 70 percent, contributing more than half of the global emissions increase of this gas, and that these emissions originated primarily in eastern China. The regions with high emissions of PFC-318 in China overlap with geographical areas densely populated with factories that produce polytetrafluoroethylene (PTFE, commonly used for nonstick cookware coatings), implying that PTFE factories are major sources of PFC-318 emissions in China. In these factories, PFC-318 is formed as a byproduct.

“Using atmospheric observations from multiple monitoring sites, we not only determined the magnitudes of PFC emissions, but also pinpointed the possible locations of their sources,” says Minde An, a postdoc at the MIT Center for Global Change Science (CGCS), and corresponding author of both studies. “Identifying the actual source industries contributing to these PFC emissions, and understanding the reasons for these largely byproduct emissions, can provide guidance for developing region- or industry-specific mitigation strategies.”

“These three PFCs are largely produced as unwanted byproducts during the manufacture of otherwise widely used industrial products,” says MIT professor of atmospheric sciences Ronald Prinn, director of both the MIT Joint Program on the Science and Policy of Global Change and CGCS, and a co-author of both studies. “Phasing out emissions of PFCs as early as possible is highly beneficial for achieving global climate mitigation targets and is likely achievable by recycling programs and targeted technological improvements in these industries.”

Findings in both studies were obtained, in part, from atmospheric observations collected from nine stations within a Chinese network, including one station from the Advanced Global Atmospheric Gases Experiment (AGAGE) network. For comparison, global total emissions were determined from five globally distributed, relatively unpolluted “background” AGAGE stations, as reported in the latest United Nations Environment Program and World Meteorological Organization Ozone Assessment report.

Math program promotes global community for at-risk Ukrainian high schoolers

“Our hope is that our students grow and mature as scholars and help rebuild the intellectual potential of Ukraine after the devastating war.”

When Sophia Breslavets first heard about Yulia’s Dream, the MIT Department of Mathematics’ Program for Research in Mathematics, Engineering, and Science (PRIMES) for Ukrainian students, Russia had just invaded her country, and she and her family lived in a town 20 miles from the Russian border.

Breslavets had attended a school that emphasized mathematics and physics, took math classes on weekends and during summer breaks, and competed in math Olympiads. “Math was really present in our lives,” she says. 

But the war shifted her studies online. “It still wasn’t like a fully functioning online school,” she recalls. “You can’t socialize.”

So she was grateful to be accepted to the MIT program in 2022. “Yulia’s Dream was a great thing to happen to me personally, because in the beginning, when the war was just starting, I didn't know what to do. This was just a great thing to take your mind off of what's going on outside your window, and you can just kind of get yourself into that and know that you have some work to do, and that was huge.”

Second time around

Breslavets just finished her second year in the online enrichment program, which offers Ukrainian high schoolers small-group math instruction, in both their native language and English, from mentors around the world. Students wrap up the program by presenting their papers at a conference; several of those papers are published on the arXiv. This year’s conference featured a guest talk by Professor Pavlo Pylyavskyy of the University of Minnesota Twin Cities, who discussed “Incidences and Tilings,” joint work with Professor Sergey Fomin of the University of Michigan.

The PRIMES program first organized Yulia’s Dream in 2022, named in memory of Yulia Zdanovska, a talented mathematician and computer scientist who was a teacher with Teach for Ukraine. She was 21 when she was killed in 2022 during Russian shelling in her home city of Kharkiv.

The program fulfills one of PRIMES’s goals: exposing students to the world community of research mathematics by connecting them with early-career mentors. Students are referred by Ukrainian math teachers and by leaders at math competitions and math camps, and must then solve a challenging entrance problem set.

Yulia’s Dream is coordinated by Dmytro Matvieievskyi, a postdoc at the Kavli Institute in Tokyo, who graduated from School #27 of Kharkiv and won a bronze medal at the 2012 International Math Olympiad (IMO) as a member of the Ukrainian team.

In its first year, from 2022 to 2023, the program drew 48 students in Phase I (reading) and 33 students in Phase II (reading and research). “Our expectation for 2022-23 was that each of six research groups would produce a research paper, and they all did, and one group continued working and produced an extra paper a few months after, for a total of seven papers. Three papers are now on the arXiv, which is a mark of quality. This went beyond our expectations.”

This past year, the program provided guided reading and research supervision to 32 students. “We conduct thorough selection and provide opportunities to all Ukrainian students capable of doing advanced reading and/or research at the requisite level,” says PRIMES’s director Slava Gerovitch PhD ’99.

MIT pipeline

Several students participated in both years, and at least two have been accepted to MIT.

One of those students is two-time Yulia’s Dream participant Nazar Korniichuk, who had attended a high school in Kyiv that specialized in mathematics and physics when his education was disrupted by the war. 

“I was confused and did not know which way I should go,” he recalls. “But then I saw the program Yulia's Dream, and the desire to try real mathematical research ignited.”

In his first year in the program, participation was a challenge. “On the one hand, it was very difficult, because in certain periods there was no electricity and no water. There was always stress and uncertainty about tomorrow. But on the other hand, because there was a war, it motivated me to do mathematics even more, especially during periods when there was no electricity or water.”

He did complete his paper, with Kostiantyn Molokanov and Severyn Khomych, and with mentor Darij Grinberg PhD ’16, a professor of mathematics at Drexel University: “The Pak–Postnikov and Naruse skew hook length formulas: A new proof” (2 Oct 2023; 27 Oct 2023).

Korniichuk completed his second round from his new home in Newton, Massachusetts, where his family migrated last summer. At the recent conference, he presented his paper with co-authors Kostiantyn Molokanov and Severyn Khomych, “Affine root systems via Lyndon words,” which they worked on with mentor Professor Oleksandr Tsymbaliuk of Purdue University.

“Yulia’s Dream was a very unique experience for me,” says Korniichuk, who plans to study math and computer science at MIT. “I had the opportunity to work on a difficult topic for a long time and then take part in writing an article. Although these years have been difficult, this program encouraged me to go forward.”

Real research

What makes the program work is providing a university level of instruction in mathematics research, to prepare high school students for top mathematics programs. In this case, it provides Ukrainian students an alternative route to reach their educational goals.

The core philosophy of the Yulia’s Dream experience is to provide “the best possible approximation to real mathematical research,” math professor and PRIMES chief research advisor Pavel Etingof told attendees at the 2024 conference. Etingof was born in Ukraine.

“In particular, all projects have to be real — i.e., of interest to professional research mathematicians — and the reading groups should be a bridge towards real mathematics as well. Also, the time frame of Yulia’s Dream is closer to that of real mathematical research than it is in any other high school research program: the students work on their projects for a whole year!”

Other principles include an emphasis on writing and collaboration, with students working on teams with undergraduates, graduate students, postdocs, and faculty. There is also an emphasis on computer-assisted math, which “not only allows participation of high school students as equal members of our research teams, but also allows them to grasp abstract mathematical notions more easily,” says Etingof. “If such notions (such as group, ring, module, etc.) have an incarnation in the familiar digital world, they are less scary.”

Breslavets says that she especially appreciates the collaboration part of the program. Now 16, Breslavets just finished her second year with Yulia’s Dream, and with Andrii Smutchak presented “Double groupoids,” as mentored by University of Alberta professor Harshit Yadav. She says that they began working on the paper in October, and it took about three months to write. 

This year’s session was easier for her to participate in, because in summer 2022, her parents found her a host family in Connecticut so that she could transfer to St. Bernard’s School. Even with her new school’s great curriculum, she is grateful for the Yulia’s Dream program.

“Our high school program is considered to be advanced, and we have a class that’s called math research, but it’s definitely not the same, because [with Yulia’s Dream] you're working with people who actually do that for a living,” she says. “I learned a lot from both of my mentors. It’s so collaborative. They can give you feedback, and they can be honest about it.”  

She says she misses her Ukrainian math community, which drifted apart after the Covid-19 pandemic and because of the war, but reports finding a new one with Yulia’s Dream. “I actually met a lot of new people,” she says.

Group collaboration is a huge goal for PRIMES director Slava Gerovitch.

“Yulia’s Dream reflects the international nature of the mathematical community, with the mentors coming from different countries and working together with the students to advance knowledge for the whole of humanity. Our hope is that our students grow and mature as scholars and help rebuild the intellectual potential of Ukraine after the devastating war,” says Gerovitch.

Applications for next year’s program are now open. Math graduate students and postdocs are also invited to apply to be a mentor. Weekly meetings begin in October, and culminate in a June 2025 conference to present papers.

Astronomers spot a highly “eccentric” planet on its way to becoming a hot Jupiter

The planet’s wild orbit offers clues to how such large, hot planets take shape.

Hot Jupiters are some of the most extreme planets in the galaxy. These scorching worlds are as massive as Jupiter, and they swing wildly close to their star, whirling around in a few days compared to our own gas giant’s leisurely 4,000-day orbit around the sun.

Scientists suspect, though, that hot Jupiters weren’t always so hot and in fact may have formed as “cold Jupiters,” in more frigid, distant environs. But how they evolved to be the star-hugging gas giants that astronomers observe today is a big unknown.

Now, astronomers at MIT, Penn State University, and elsewhere have discovered a hot Jupiter “progenitor” — a sort of juvenile planet that is in the midst of becoming a hot Jupiter. And its orbit is providing some answers to how hot Jupiters evolve.

The new planet, which astronomers labeled TIC 241249530 b, orbits a star that is about 1,100 light-years from Earth. The planet circles its star in a highly “eccentric” orbit, meaning that it comes extremely close to the star before slinging far out, then doubling back, in a narrow, elliptical circuit. If the planet were part of our solar system, it would come 10 times closer to the sun than Mercury does, before hurtling out, just past Earth, then back around. By the scientists’ estimates, the planet’s stretched-out orbit has the highest eccentricity of any planet detected to date.
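
The description above is enough to put a rough number on that eccentricity: for an ellipse with closest approach r_p and farthest point r_a, the eccentricity is e = (r_a − r_p)/(r_a + r_p). A back-of-envelope sketch, treating the article's comparisons as literal distances (one-tenth of Mercury's ~0.39 AU, and roughly Earth's 1 AU; both are illustrative assumptions, not figures from the paper):

```python
def eccentricity(r_peri: float, r_apo: float) -> float:
    """Eccentricity of an elliptical orbit from its closest and farthest distances."""
    return (r_apo - r_peri) / (r_apo + r_peri)

# Illustrative distances in AU, read off the article's comparisons:
# ~10x closer than Mercury (0.39 AU) at closest, just past Earth (1 AU) at farthest.
e = eccentricity(r_peri=0.039, r_apo=1.0)
print(round(e, 3))  # ~0.925 -- extremely stretched (0 = circle, values near 1 = cigar-shaped)
```

For comparison, Earth's orbital eccentricity is about 0.017, and even Mercury's, the most eccentric in the solar system, is about 0.21.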

The new planet’s orbit is also unique in its “retrograde” orientation. Unlike the Earth and other planets in the solar system, which orbit in the same direction as the sun spins, the new planet travels in a direction that is counter to its star’s rotation.

The team ran simulations of orbital dynamics and found that the planet’s highly eccentric and retrograde orbit is a sign that it is likely evolving into a hot Jupiter, through “high-eccentricity migration” — a process by which a planet’s orbit wobbles and progressively shrinks as it interacts with another star or planet on a much wider orbit.

In the case of TIC 241249530 b, the researchers determined that the planet orbits around a primary star that itself orbits around a secondary star, as part of a stellar binary system. The interactions between the two orbits — of the planet and its star — have caused the planet to gradually migrate closer to its star over time.

The planet’s orbit is currently elliptical in shape, and the planet takes about 167 days to complete a lap around its star. The researchers predict that in 1 billion years, the planet will migrate into a much tighter, circular orbit, at which point it will circle its star every few days. The planet will then have fully evolved into a hot Jupiter.

“This new planet supports the theory that high eccentricity migration should account for some fraction of hot Jupiters,” says Sarah Millholland, assistant professor of physics in MIT’s Kavli Institute for Astrophysics and Space Research. “We think that when this planet formed, it would have been a frigid world. And because of the dramatic orbital dynamics, it will become a hot Jupiter in about a billion years, with temperatures of several thousand kelvin. So it’s a huge shift from where it started.”

Millholland and her colleagues have published their findings today in the journal Nature. Her co-authors are MIT undergraduate Haedam Im, lead author Arvind Gupta of Penn State University and NSF NOIRLab, and collaborators at multiple other universities, institutions, and observatories.

“Radical seasons”

The new planet was first spotted in data taken by NASA’s Transiting Exoplanet Survey Satellite (TESS), an MIT-led mission that monitors the brightness of nearby stars for “transits,” or brief dips in starlight that could signal the presence of a planet passing in front of, and temporarily blocking, a star’s light.

On Jan. 12, 2020, TESS picked up a possible transit of the star TIC 241249530. Gupta and his colleagues at Penn State determined that the transit was consistent with a Jupiter-sized planet crossing in front of the star. They then acquired measurements from other observatories of the star’s radial velocity, which measures the star’s wobble, or the degree to which it moves back and forth, in response to other nearby objects that might gravitationally tug on it.

Those measurements confirmed that a Jupiter-sized planet was orbiting the star and that its orbit was highly eccentric, bringing the planet extremely close to the star before flinging it far out.

Prior to this detection, astronomers had known of only one other planet, HD 80606 b, that was thought to be an early hot Jupiter. That planet, discovered in 2001, held the record for having the highest eccentricity, until now.

“This new planet experiences really dramatic changes in starlight throughout its orbit,” Millholland says. “There must be really radical seasons and an absolutely scorched atmosphere every time it passes close to the star.”

“Dance of orbits”

How could a planet have fallen into such an extreme orbit? And how might its eccentricity evolve over time? For answers, Im and Millholland ran simulations of planetary orbital dynamics to model how the planet may have evolved throughout its history and how it might carry on over hundreds of millions of years.

The team modeled the gravitational interactions between the planet, its star, and the second nearby star. Gupta and his colleagues had observed that the two stars orbit each other in a binary system, while the planet is simultaneously orbiting the closer star. The configuration of the two orbits is somewhat like a circus performer twirling a hula hoop around her waist, while spinning a second hula hoop around her wrist.

Millholland and Im ran multiple simulations, each with a different set of starting conditions, to see which condition, when run forward over several billions of years, produced the configuration of planetary and stellar orbits that Gupta’s team observed in the present day. They then ran the best match even further into the future to predict how the system will evolve over the next several billion years.

These simulations revealed that the new planet is likely in the midst of evolving into a hot Jupiter: Several billion years ago, the planet formed as a cold Jupiter, far from its star, in a region cold enough to condense and take shape. Newly formed, the planet likely orbited the star in a circular path. This conventional orbit, however, gradually stretched and grew eccentric, as it experienced gravitational forces from the star’s misaligned orbit with its second, binary star.

“It’s a pretty extreme process in that the changes to the planet’s orbit are massive,” Millholland says. “It’s a big dance of orbits that’s happening over billions of years, and the planet’s just going along for the ride.”

In another billion years, the simulations show that the planet’s orbit will stabilize in a close-in, circular path around its star.

“Then, the planet will fully become a hot Jupiter,” Millholland says.

The team’s observations, along with their simulations of the planet’s evolution, support the theory that hot Jupiters can form through high eccentricity migration, a process by which a planet gradually moves into place via extreme changes to its orbit over time.

“It’s clear not only from this, but other statistical studies too, that high eccentricity migration should account for some fraction of hot Jupiters,” Millholland notes. “This system highlights how incredibly diverse exoplanets can be. They are mysterious other worlds that can have wild orbits that tell a story of how they got that way and where they’re going. For this planet, it hasn’t quite finished its journey yet.”

“It is really hard to catch these hot Jupiter progenitors ‘in the act’ as they undergo their super eccentric episodes, so it is very exciting to find a system that undergoes this process,” says Smadar Naoz, a professor of physics and astronomy at the University of California at Los Angeles, who was not involved with the study. “I believe that this discovery opens the door to a deeper understanding of the birth configuration of the exoplanetary system.”

Study reveals how an anesthesia drug induces unconsciousness

Propofol, a drug commonly used for general anesthesia, derails the brain’s normal balance between stability and excitability.

There are many drugs that anesthesiologists can use to induce unconsciousness in patients. Exactly how these drugs cause the brain to lose consciousness has been a longstanding question, but MIT neuroscientists have now answered that question for one commonly used anesthesia drug.

Using a novel technique for analyzing neuron activity, the researchers discovered that the drug propofol induces unconsciousness by disrupting the brain’s normal balance between stability and excitability. The drug causes brain activity to become increasingly unstable, until the brain loses consciousness.

“The brain has to operate on this knife’s edge between excitability and chaos. It’s got to be excitable enough for its neurons to influence one another, but if it gets too excitable, it spins off into chaos. Propofol seems to disrupt the mechanisms that keep the brain in that narrow operating range,” says Earl K. Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.

The new findings, reported today in Neuron, could help researchers develop better tools for monitoring patients as they undergo general anesthesia.

Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study. MIT graduate student Adam Eisen and MIT postdoc Leo Kozachkov are the lead authors of the paper.

Losing consciousness

Propofol is a drug that binds to GABA receptors in the brain, inhibiting neurons that have those receptors. Other anesthesia drugs act on different types of receptors, and the mechanism for how all of these drugs produce unconsciousness is not fully understood.

Miller, Fiete, and their students hypothesized that propofol, and possibly other anesthesia drugs, interfere with a brain state known as “dynamic stability.” In this state, neurons have enough excitability to respond to new input, but the brain is able to quickly regain control and prevent them from becoming overly excited.

Previous studies of how anesthesia drugs affect this balance have found conflicting results: Some suggested that during anesthesia, the brain shifts toward becoming too stable and unresponsive, which leads to loss of consciousness. Others found that the brain becomes too excitable, leading to a chaotic state that results in unconsciousness.

Part of the reason for these conflicting results is that dynamic stability has been difficult to measure accurately in the brain. Measuring it as consciousness is lost would help researchers determine whether unconsciousness results from too much or too little stability.

In this study, the researchers analyzed electrical recordings made in the brains of animals that received propofol over an hour-long period, during which they gradually lost consciousness. The recordings were made in four areas of the brain that are involved in vision, sound processing, spatial awareness, and executive function.

These recordings covered only a tiny fraction of the brain’s overall activity, so to overcome that, the researchers used a technique called delay embedding. This technique allows researchers to characterize dynamical systems from limited measurements by augmenting each measurement with measurements that were recorded previously.
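
Delay embedding itself is simple to sketch: each time point of a single recorded channel is stacked with time-shifted copies of itself, turning one measurement into a higher-dimensional state vector from which the system's dynamics can be characterized. A minimal illustration on a toy signal (this is the generic technique, not the study's actual analysis pipeline):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Build a delay-embedded matrix from a 1-D signal: each row stacks x at
    `dim` offsets spaced `tau` samples apart, giving one reconstructed state
    vector per usable time point."""
    n = len(x) - (dim - 1) * tau  # number of rows with all delays available
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# A single noisy-free sine channel stands in for one neural recording.
t = np.linspace(0, 8 * np.pi, 500)
x = np.sin(t)

emb = delay_embed(x, dim=3, tau=10)
print(emb.shape)  # (480, 3): a 3-D reconstructed state at each of 480 time points
```

From the embedded states, one can then estimate how quickly the system returns to baseline after a perturbation, which is the kind of stability measure the paragraph above describes.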

Using this method, the researchers were able to quantify how the brain responds to sensory inputs, such as sounds, or to spontaneous perturbations of neural activity.

In the normal, awake state, neural activity spikes after any input, then returns to its baseline activity level. However, once propofol dosing began, the brain started taking longer to return to its baseline after these inputs, remaining in an overly excited state. This effect became more and more pronounced until the animals lost consciousness.

This suggests that propofol’s inhibition of neuron activity leads to escalating instability, which causes the brain to lose consciousness, the researchers say.

Better anesthesia control

To see if they could replicate this effect in a computational model, the researchers created a simple neural network. When they increased the inhibition of certain nodes in the network, as propofol does in the brain, network activity became destabilized, similar to the unstable activity the researchers saw in the brains of animals that received propofol.

“We looked at a simple circuit model of interconnected neurons, and when we turned up inhibition in that, we saw a destabilization. So, one of the things we’re suggesting is that an increase in inhibition can generate instability, and that is subsequently tied to loss of consciousness,” Eisen says.

As Fiete explains, “This paradoxical effect, in which boosting inhibition destabilizes the network rather than silencing or stabilizing it, occurs because of disinhibition. When propofol boosts the inhibitory drive, this drive inhibits other inhibitory neurons, and the result is an overall increase in brain activity.”
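
That disinhibition account can be caricatured in a toy rate model, where a "propofol" parameter g scales a tonic inhibitory drive onto the circuit's inhibitory population: at modest g the circuit settles to a stable level of activity, while at large g the inhibitory population is suppressed and the self-exciting population runs away. This is a hand-built illustration of the general principle, with made-up weights, not the network model used in the study:

```python
def simulate(g, steps=600, dt=0.1):
    """Euler-integrate a 2-population rate model: an excitatory population E with
    self-excitation, and an inhibitory population I that keeps E in check but whose
    own input is suppressed by a tonic inhibitory drive scaled by g."""
    w_ee, w_ei, w_ie = 1.2, 1.0, 1.0  # E self-coupling, E->I, and I->E weights
    tonic = 2.0                        # inhibition onto the inhibitory population
    i_max = 3.0                        # saturation: I cannot fire arbitrarily fast
    E, I = 1.0, 0.0
    for _ in range(steps):
        dE = -E + max(w_ee * E - w_ie * I + 1.0, 0.0)
        dI = -I + min(max(w_ei * E - g * tonic, 0.0), i_max)
        E, I = E + dt * dE, I + dt * dI
    return E

print(round(simulate(g=1.0), 2))  # 3.75: inhibition balances the self-excitation
print(simulate(g=4.0) > 100)      # True: I is silenced and E grows without bound
```

With these weights the excitatory population is unstable on its own (its self-coupling exceeds its leak), so stability depends entirely on the inhibitory population tracking it; once the g-scaled drive pushes I against its floor and ceiling, no balancing fixed point exists and activity escalates, mirroring the destabilization described above.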

The researchers suspect that other anesthetic drugs, which act on different types of neurons and receptors, may converge on the same effect through different mechanisms — a possibility that they are now exploring.

If this turns out to be true, it could be helpful to the researchers’ ongoing efforts to develop ways to more precisely control the level of anesthesia that a patient is experiencing. These systems, which Miller is working on with Emery Brown, the Edward Hood Taplin Professor of Medical Engineering at MIT, work by measuring the brain’s dynamics and then adjusting drug dosages accordingly, in real time.

“If you find common mechanisms at work across different anesthetics, you can make them all safer by tweaking a few knobs, instead of having to develop safety protocols for all the different anesthetics one at a time,” Miller says. “You don’t want a different system for every anesthetic they’re going to use in the operating room. You want one that’ll do it all.”

The researchers also plan to apply their technique for measuring dynamic stability to other brain states, including neuropsychiatric disorders.

“This method is pretty powerful, and I think it’s going to be very exciting to apply it to different brain states, different types of anesthetics, and also other neuropsychiatric conditions like depression and schizophrenia,” Fiete says.

The research was funded by the Office of Naval Research, the National Institute of Mental Health, the National Institute of Neurological Disorders and Stroke, the National Science Foundation Directorate for Computer and Information Science and Engineering, the Simons Center for the Social Brain, the Simons Collaboration on the Global Brain, the JPB Foundation, the McGovern Institute, and the Picower Institute. 

Q&A: Helping young readers explore curiosity about rocks through discovery and play

“All the Rocks We Love” is a new picture book by MIT Professor Taylor Perron and Lisa Varchol Perron.

It’s no secret that children love rocks: playing on them, stacking them, even sneaking them home in pockets. This universal curiosity about the world around us is what inspires psychotherapist and author Lisa Varchol Perron when writing books for young readers.

While in talks with publishers, an editor asked if she’d be interested in co-authoring a book with her husband, Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences Taylor Perron. The result was the picture book “All the Rocks We Love,” with illustrations by David Scheirer. The book introduces its many rocks through play and discovery, two aspects that have been part of the story since its inception. While aimed at readers aged 3-6, the book also includes back-matter explainers about the rocks in the story, giving older readers a chance to learn more.

Lisa and Taylor took a moment to talk about the writing process, working together, and tapping into our innate sense of curiosity as a means of education.

Q: Were either of you the kind of kid who had to pick up all the rocks you saw?

Lisa: Absolutely. I’ve always been intrigued by rocks. Our kids are, too; they love exploring, scrambling on rocks, looking on pebble beaches.

Taylor: That means we end up needing to check pockets before we put things in the laundry. Often my pockets.

Q: What has it been like formally collaborating on something?

Lisa: We’ve really enjoyed it. We started by brainstorming the rocks that we would cover in the book, and we wanted to emphasize the universality of kids’ love for rocks. So we decided not to have a main character, but to have a variety of kids each interacting with a different rock in a special way.

Taylor: Which is a natural thing to do, because we wanted to have a wide variety of rocks that are not necessarily always found in the same place. It made sense to have a lot of different geographic settings from around the world with different kids in all of those places.

Lisa: We spent a lot of time talking about where that would be, what those rocks would be, and what was appealing about different rocks, both in terms of play and their appearance. We wanted visual variability to help readers differentiate the rocks presented. The illustrator, David Scheirer, does such beautiful watercolors. It’s like you can reach in and pick up some of the rocks from the book, because they have this incredible, tangible quality.

Q: Going into that creative process, Taylor, what was it like working with the artist, finding that balance between accuracy and artistic expression?

Taylor: That was an interesting process. Something that not everyone realizes about picture books is that you’re not necessarily creating the text and the art at the same time; in this case, the text was there first and art came later. David is such an amazing artist of natural materials that I think things worked out really, really well. For example, there’s a line that says that mica schist sparkles in the sun, and so you want to make sure that you can see that in the illustration, and I think David did that wonderfully. We had an opportunity to provide some feedback and iterate to refine some of the geological details in a few spots.

Q: Lisa, you focus a lot on nature and science in your books. Why focus on these topics in children’s literature?

Lisa: We spend a lot of time outside, and I always have questions. One of the great things about being married to Taylor is that I have a walking encyclopedia about earth science. I really enjoy sharing that sense of wonder with kids through school visits or library read-alouds. I love seeing how much they know, how delighted they are in sharing what they know, or what questions they have.

Taylor: Most of the time when I think about education, it’s university education. I taught our introductory geology class for about 10 years with [Department of Earth, Atmospheric, and Planetary Sciences professor] Oli Jagoutz, and so had a lot of opportunities to interact with students who were coming out of a wide variety of secondary education circumstances in the U.S. and elsewhere. And that made me think a lot about what we could do to introduce students to earth sciences even earlier and give them more excitement at a younger age. [The book] presented a really nice opportunity to have a reach into educational environments beyond what I do in the classroom.

Q: Informal education like this is important for students coming into research and academia. Taylor, how has it influenced your own research and teaching?

Taylor: At first glance, it seems pretty different. And yet, going back to that initial discussion we had with the editors about what this book should be, one theme that clearly emerged from that was the joy of discovery and the joy of play.

In the classroom, joy of discovery is still very much something that can excite people at any age. And so, teaching students, even MIT students who already know a lot, showing them new things either in the classroom or in the field, is something that I’ll remember to prioritize even more in the future.

And, while not exactly the joy of play, students at MIT love hands-on, project-based learning; something that’s beyond seeing it on a slide, or that helps the picture leap off the page.

Q: Would you two consider working together again on a project?

Taylor: Yes, absolutely. We collaborate all the time: we collaborate on dinner, collaborate on kid pickups and drop-offs ...

Lisa: [Laughing] On a picture book, as well, we would definitely love to collaborate again. We’re always brainstorming ideas; I think we have fun doing that.

Taylor: Going through the process once has made it clear how complementary our skills are. We’re excited to get started on the next one.

Q: Who are you hoping reads the book?

Lisa: Anyone interested in learning more about rocks or tapping into their love of exploring outdoors. At all ages, we can continue to cultivate a sense of curiosity. And I hope the book gives whoever reads it an increased appreciation for the earth, because that is the first step in really caring for our planet.

Taylor: I would be happy if children and their parents read it and are inspired to discover something outdoors or in nature that they might have overlooked before, whether or not that’s rocks. Sometimes you can look over a landscape and think that it’s mundane, but there’s almost always a story there, either in the rocks, the other natural forces that have shaped it, or biological processes occurring there.

Q: The most important question, and this is for both of you: Which rock in the book is your favorite?

Lisa: I am fascinated by fossils, so I would say limestone with fossils. I feel like I'm looking back through time.

Taylor: It's a tough one; the mica schist reminds me of where I grew up in the Green Mountains of Vermont. So that’s my favorite for sentimental reasons.

The book is available for purchase on July 16 through most major booksellers. Lisa reminds people to also consider checking it out from their local library.

Q&A: What past environmental success can teach us about solving the climate crisis

In a new book, Professor Susan Solomon uses previous environmental successes as a source of hope and guidance for mitigating climate change.

Susan Solomon, MIT professor of Earth, atmospheric, and planetary sciences (EAPS) and of chemistry, played a critical role in understanding how a class of chemicals known as chlorofluorocarbons were creating a hole in the ozone layer. Her research was foundational to the creation of the Montreal Protocol, an international agreement established in the 1980s that phased out products releasing chlorofluorocarbons. Since then, scientists have documented signs that the ozone hole is recovering thanks to these measures.

Having witnessed this historical process first-hand, Solomon, the Lee and Geraldine Martin Professor of Environmental Studies, is aware of how people can come together to make successful environmental policy happen. Using her story, as well as other examples of success — including combating smog, getting rid of DDT, and more — Solomon draws parallels from then to now as the climate crisis comes into focus in her new book, “Solvable: How We Healed the Earth and How We Can Do It Again.”

Solomon took a moment to talk about why she picked the stories in her book, the students who inspired her, and why we need hope and optimism now more than ever.

Q: You have first-hand experience seeing how we’ve altered the Earth, as well as the process of creating international environmental policy. What prompted you to write a book about your experiences?

A: Lots of things, but one of the main ones is what I see in teaching. I have taught a class called Science, Politics and Environmental Policy for many years here at MIT. Because my emphasis is always on how we’ve actually fixed problems, students come away from that class feeling hopeful, like they really want to stay engaged with the problem.

It strikes me that students today have grown up in a very contentious and difficult era in which they feel like nothing ever gets done. But stuff does get done, even now. Looking at how we did things so far really helps you to see how we can do things in the future.

Q: In the book, you use five different stories as examples of successful environmental policy, and then end talking about how we can apply these lessons to climate change. Why did you pick these five stories?

A: I picked some of them because I’m closer to those problems in my own professional experience, like ozone depletion and smog. I did other issues partly because I wanted to show that even in the 21st century, we’ve actually got some stuff done — that’s the story of the Kigali Amendment to the Montreal Protocol, which is a binding international agreement on some greenhouse gases.

Another chapter is on DDT. One of the reasons I included that is because it had an enormous effect on the birth of the environmental movement in the United States. Plus, that story allows you to see how important the environmental groups can be.

Lead in gasoline and paint is the other one. I find it a very moving story because the idea that we were poisoning millions of children and not even realizing it is so very, very sad. But it’s so uplifting that we did figure out the problem, and it happened partly because of the civil rights movement, which made us aware that the problem was striking minority communities much more than non-minority communities.

Q: What surprised you the most during your research for the book?

A: One of the things that I didn’t realize, and should have, was the outsized role played by one single senator, Ed Muskie of Maine. He made pollution control his big issue and devoted incredible energy to it. He clearly had the passion and wanted to do it for many years, but until other factors helped him, he couldn’t. That's where I began to understand the role of public opinion and the way in which policy is only possible when public opinion demands change.

Another thing about Muskie was the way in which his engagement with these issues demanded that science be strong. When I read what he put into congressional testimony I realized how highly he valued the science. Science alone is never enough, but it’s always necessary. Over the years, science got a lot stronger, and we developed ways of evaluating what the scientific wisdom across many different studies and many different views actually is. That’s what scientific assessment is all about, and it’s crucial to environmental progress.

Q: Throughout the book you argue that for environmental action to succeed, three things must be met, which you call the three Ps: a threat must be personal, perceptible, and practical. Where did this idea come from?

A: My observations. You have to perceive the threat: In the case of the ozone hole, you could perceive it because those false-color images of the ozone loss were so easy to understand, and it was personal because few things are scarier than cancer, and a reduced ozone layer leads to too much sun, increasing skin cancers. Science plays a role in communicating what can be readily understood by the public, and that’s important to them perceiving it as a serious problem.

Nowadays, we certainly perceive the reality of climate change. We also see that it’s personal. People are dying because of heat waves in much larger numbers than they used to; there are horrible problems in the Boston area, for example, with flooding and sea level rise. People perceive the reality of the problem and they feel personally threatened.

The third P is practical: People have to believe that there are practical solutions. It’s interesting to watch how the battle for hearts and minds has shifted. There was a time when the skeptics would just attack the whole idea that the climate was changing. Eventually, they decided ‘we better accept that because people perceive it, so let’s tell them that it’s not caused by human activity.’ But it’s clear enough now that human activity does play a role. So they’ve moved on to attacking that third P, that somehow it’s not practical to have any kind of solutions. This is progress! So what about that third P?

What I tried to do in the book is to point out some of the ways in which the problem has also become eminently practical to deal with in the last 10 years, and will continue to move in that direction. We’re right on the cusp of success, and we just have to keep going. People should not give in to eco despair; that’s the worst thing you could do, because then nothing will happen. If we continue to move at the rate we have, we will certainly get to where we need to be.

Q: That ties in very nicely with my next question. The book is very optimistic; what gives you hope?

A: I’m optimistic because I’ve seen so many examples of where we have succeeded, and because I see so many signs of movement right now that are going to push us in the same direction.

If we had kept conducting business as usual as we had been in the year 2000, we’d be looking at 4 degrees of future warming. Right now, I think we're looking at 3 degrees. I think we can get to 2 degrees. We have to really work on it, and we have to get going seriously in the next decade, but globally right now over 30 percent of our energy is from renewables. That's fantastic! Let’s just keep going.

Q: Throughout the book, you show that environmental problems won’t be solved by individual actions alone, but require policy and technology to drive change. What individual actions can people take to help push for those bigger changes?

A: A big one is choosing to eat more sustainably, and choosing alternative transportation methods like public transportation or reducing the number of trips that you make. Older people usually have retirement investments; you can shift them over to social choice funds and away from index funds that end up funding companies that you might not be interested in. You can use your money to put pressure: Amazon has been under a huge amount of pressure to cut down on their plastic packaging, mainly coming from consumers. They’ve just announced they’re not going to use those plastic pillows anymore. I think you can see lots of ways in which people really do matter, and we can matter more.

Q: What do you hope people take away from the book?

A: Hope for their future and resolve to do the best they can getting engaged with it.

Study finds health risks in switching ships from diesel to ammonia fuel

Ammonia could be a nearly carbon-free maritime fuel, but without new emissions regulations, its effects on air quality could significantly harm human health.

As container ships the size of city blocks cross the oceans to deliver cargo, their huge diesel engines emit large quantities of air pollutants that drive climate change and have human health impacts. It has been estimated that maritime shipping accounts for almost 3 percent of global carbon dioxide emissions and the industry’s negative impacts on air quality cause about 100,000 premature deaths each year.

Decarbonizing shipping to reduce these detrimental effects is a goal of the International Maritime Organization, a U.N. agency that regulates maritime transport. One potential solution is switching the global fleet from fossil fuels to sustainable fuels such as ammonia, which could be nearly carbon-free when considering its production and use.

But in a new study, an interdisciplinary team of researchers from MIT and elsewhere caution that burning ammonia for maritime fuel could worsen air quality further and lead to devastating public health impacts, unless it is adopted alongside strengthened emissions regulations.

Ammonia combustion generates nitrous oxide (N2O), a greenhouse gas that is about 300 times more potent than carbon dioxide. It also emits nitrogen in the form of nitrogen oxides (NO and NO2, referred to as NOx), and unburnt ammonia may slip out, which eventually forms fine particulate matter in the atmosphere. These tiny particles can be inhaled deep into the lungs, causing health problems like heart attacks, strokes, and asthma.
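To make "about 300 times more potent" concrete: greenhouse gases are commonly compared on a CO2-equivalent basis by multiplying the emitted mass by the gas's global warming potential (GWP). A minimal sketch, using the article's approximate figure for N2O (exact GWP values depend on the time horizon and the assessment used):

```python
# CO2-equivalent comparison via 100-year global warming potentials (GWPs).
# The N2O value is the article's approximate figure, for illustration only;
# precise values vary by time horizon and IPCC assessment report.

GWP_100 = {"CO2": 1, "N2O": 300}  # approximate, illustrative

def co2_equivalent(gas: str, tonnes: float) -> float:
    """Convert a mass of greenhouse gas to tonnes of CO2-equivalent."""
    return tonnes * GWP_100[gas]

# One tonne of N2O warms roughly as much as 300 tonnes of CO2 over a century.
print(co2_equivalent("N2O", 1.0))
```

This is why even small amounts of N2O slipping from ammonia combustion can erode much of the climate benefit of eliminating CO2.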

The new study indicates that, under current legislation, switching the global fleet to ammonia fuel could cause up to about 600,000 additional premature deaths each year. However, with stronger regulations and cleaner engine technology, the switch could lead to about 66,000 fewer premature deaths than currently caused by maritime shipping emissions, with far less impact on global warming.

“Not all climate solutions are created equal. There is almost always some price to pay. We have to take a more holistic approach and consider all the costs and benefits of different climate solutions, rather than just their potential to decarbonize,” says Anthony Wong, a postdoc in the MIT Center for Global Change Science and lead author of the study.

His co-authors include Noelle Selin, an MIT professor in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences (EAPS); Sebastian Eastham, a former principal research scientist who is now a senior lecturer at Imperial College London; Christine Mounaïm-Rouselle, a professor at the University of Orléans in France; Yiqi Zhang, a researcher at the Hong Kong University of Science and Technology; and Florian Allroggen, a research scientist in the MIT Department of Aeronautics and Astronautics. The research appears this week in Environmental Research Letters.

Greener, cleaner ammonia

Traditionally, ammonia is made by stripping hydrogen from natural gas and then combining it with nitrogen at extremely high temperatures. This process is often associated with a large carbon footprint. The maritime shipping industry is betting on the development of “green ammonia,” which is produced by using renewable energy to make hydrogen via electrolysis and to generate heat.

“In theory, if you are burning green ammonia in a ship engine, the carbon emissions are almost zero,” Wong says.

But even the greenest ammonia generates nitrous oxide (N2O) and nitrogen oxides (NOx) when combusted, and some of the ammonia may slip out, unburnt. This nitrous oxide would escape into the atmosphere, where the greenhouse gas would remain for more than 100 years. At the same time, the nitrogen emitted as NOx and ammonia would fall to Earth, damaging fragile ecosystems. As these emissions are digested by bacteria, additional N2O is produced.

NOx and ammonia also mix with gases in the air to form fine particulate matter. A primary contributor to air pollution, fine particulate matter kills an estimated 4 million people each year.

“Saying that ammonia is a ‘clean’ fuel is a bit of an overstretch. Just because it is carbon-free doesn’t necessarily mean it is clean and good for public health,” Wong says.

A multifaceted model

The researchers wanted to paint the whole picture, capturing the environmental and public health impacts of switching the global fleet to ammonia fuel. To do so, they designed scenarios to measure how pollutant impacts change under certain technology and policy assumptions.

From a technological point of view, they considered two ship engines. The first burns pure ammonia, which generates higher levels of unburnt ammonia but emits fewer nitrogen oxides. The second engine technology involves mixing ammonia with hydrogen to improve combustion and optimize the performance of a catalytic converter, which controls both nitrogen oxides and unburnt ammonia pollution.

They also considered three policy scenarios: current regulations, which only limit NOx emissions in some parts of the world; a scenario that adds ammonia emission limits over North America and Western Europe; and a scenario that adds global limits on ammonia and NOx emissions.

The researchers used a ship track model to calculate how pollutant emissions change under each scenario and then fed the results into an air quality model. The air quality model calculates the impact of ship emissions on particulate matter and ozone pollution. Finally, they estimated the effects on global public health.

One of the biggest challenges came from a lack of real-world data, since no ammonia-powered ships are yet sailing the seas. Instead, the researchers relied on experimental ammonia combustion data from collaborators to build their model.

“We had to come up with some clever ways to make that data useful and informative to both the technology and regulatory situations,” Wong says.

A range of outcomes

In the end, they found that with no new regulations and ship engines that burn pure ammonia, switching the entire fleet would cause 681,000 additional premature deaths each year.

“While a scenario with no new regulations is not very realistic, it serves as a good warning of how dangerous ammonia emissions could be. And unlike NOx, ammonia emissions from shipping are currently unregulated,” Wong says.

However, even without new regulations, using cleaner engine technology would cut the number of premature deaths down to about 80,000, which is about 20,000 fewer than are currently attributed to maritime shipping emissions. With stronger global regulations and cleaner engine technology, the number of people killed by air pollution from shipping could be reduced by about 66,000.
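The scenarios reported above can be tallied against the roughly 100,000 premature deaths per year currently attributed to shipping. A minimal sketch using the article's approximate figures (not the paper's underlying dataset):

```python
# Annual premature deaths under each scenario, compared with the ~100,000
# currently attributed to maritime shipping emissions. All figures are the
# article's approximate values, used here for illustration.

CURRENT_BASELINE = 100_000

scenarios = {
    "pure ammonia, no new regulations": CURRENT_BASELINE + 681_000,
    "cleaner engines, no new regulations": 80_000,
    "cleaner engines, strong global regulations": CURRENT_BASELINE - 66_000,
}

for name, deaths in scenarios.items():
    change = deaths - CURRENT_BASELINE
    print(f"{name}: ~{deaths:,} deaths/year ({change:+,} vs. today)")
```

The comparison makes the study's point visible at a glance: the same fuel switch can range from catastrophic to beneficial depending on engine technology and regulation.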

“The results of this study show the importance of developing policies alongside new technologies,” Selin says. “There is a potential for ammonia in shipping to be beneficial for both climate and air quality, but that requires that regulations be designed to address the entire range of potential impacts, including both climate and air quality.”

Ammonia’s air quality impacts would not be felt uniformly across the globe, and addressing them fully would require coordinated strategies across very different contexts. Most premature deaths would occur in East Asia, since air quality regulations are less stringent in this region. Higher levels of existing air pollution cause the formation of more particulate matter from ammonia emissions. In addition, shipping volume over East Asia is far greater than elsewhere on Earth, compounding these negative effects.

In the future, the researchers want to continue refining their analysis. They hope to use these findings as a starting point to urge the marine industry to share engine data they can use to better evaluate air quality and climate impacts. They also hope to inform policymakers about the importance and urgency of updating shipping emission regulations.

This research was funded by the MIT Climate and Sustainability Consortium.

Empowering future innovators through a social impact lens

The IDEAS Social Innovation Challenge helps students hone their entrepreneurship skills to create viable ventures for public good.

What if testing for Lyme disease were as simple as dropping a tick in a test tube at home, waiting a few minutes, and looking for a change of color?

MIT Sloan Fellow and physician Erin Dawicki is making it happen, as part of her aspiration to make Lyme testing accessible, affordable, and widespread. She noticed a troubling increase in undetected Lyme disease in her practice and collaborated with fellow MIT students to found Lyme Alert, a startup that has created the first truly at-home Lyme screening kit using nanotechnology.

Lyme Alert focuses on social impact in its mission to deliver faster diagnoses while using its technology to track disease spread. Participating in the 2024 IDEAS Social Innovation Challenge (IDEAS) helped the team refine their solution while keeping impact at the heart of their work. They ultimately won the top prize at the program’s award ceremony in the spring.

Over the past 23 years, IDEAS has fostered a community in which hundreds of entrepreneurial students have developed their innovation skills in collaboration with affected stakeholders, experienced entrepreneurs, and a supportive network of alumni, classmates, and mentors. The 14 teams in the 2024 IDEAS cohort join over 200 alumni teams — many still in operation today — that have received over $1.5 million in seed funding since 2001.

“IDEAS is a great example of experiential learning at MIT: empowering students to ask good questions, explore new frameworks, and propose sustainable interventions to urgent challenges alongside community partners," says Lauren Tyger, assistant dean of social innovation at the Priscilla King Gray Public Service Center (PKG Center) at MIT.

As MIT’s premier social impact incubator housed within the PKG Center, IDEAS prepares students to take their early-stage ideas to the next level. Teams learn how to develop relationships with constituents affected by social issues, propose interventions that yield measurable impact, and create effective social enterprise models.

“This program undoubtedly opened my eyes to the intersection of social impact and entrepreneurship, fields I previously thought to be mutually exclusive,” says Srihitha Dasari, a rising junior in brain and cognitive sciences and co-founder of another award-winning team, PuntoSalud. “It not only provided me with vital skills to advance my own interests in the startup ecosystem, but expanded my network in order to enact change.”

Shaping the “leaders of tomorrow”

Over the course of one semester, IDEAS teams participate in iterative workshops, refine their ideas with mentors, and pitch their solutions to peers and judges. The process helps students transform their concepts into social innovations in health care, finance, climate, education, and many more fields.

The program culminates in an awards ceremony at the MIT Museum, where teams share their final products. This year’s showcase featured a keynote address from Christine Ortiz, professor of materials science and engineering. Her passion for socially-directed science and technology aligns with IDEAS’ focus on social impact.

“I was grateful to be a part of the journey for these 14 teams,” Ortiz says. “IDEAS speaks to the core of what MIT needs: innovators capable of thinking critically about problems within their communities.”

Five teams are selected for awards of $6,000 to $20,000 by a group of experts across a variety of industries who volunteer as judges, and two additional award grants of $2,500 are given to teams that received the most votes through the MIT Solve initiative’s IDEAS virtual showcase.

The teams that received awards this year are: Lyme Alert, which created the first truly at-home tick testing kit for Lyme disease; My Sister’s Keeper, which aims to establish a professional leadership incubator designed specifically for Muslim immigrant women in the United States; Sakhi - Simppl, which created a WhatsApp chatbot that generates responses grounded in accurate, verified knowledge from international health agencies; BendShelters, which provides sustainable, modular, and easily deployable bamboo shelters for displaced populations in Myanmar, a Southeast Asian country under a dictatorship; PuntoSalud, an AI-powered virtual health messaging system that delivers personalized, trustworthy information sourced directly from local hospitals in Argentina; ONE Community, which provides a digital network through which businesses in India at risk of displacement can connect with more customers and partners to ensure sustained and resilient growth; and Mudzi Cooking Project, a social enterprise tackling the challenges faced by women in Chisinga, Malawi, who struggle to access firewood.

As a member of the Science Hub, the PKG Center worked with corporate partner Amazon, which sponsored the top five awards for the first time in 2024. The inaugural Amazon Prizes for Social Good honored the teams’ efforts to use tech to solve social issues.

“Clearly, these students are inspired to give rather than to take, and their work distinguishes them all as the leaders of tomorrow,” says Tye Brady, chief technologist at Amazon Robotics.

All of the teams will refine their ideas over the summer and report back by the start of the next academic year. Additionally, for a period of 16 months, the teams that won awards will continue to receive guidance from the PKG Center and from a founder support network shared with the 2023 cohort of IDEAS grantees.

Tapping MIT’s innovation ecosystem

IDEAS is just one of the PKG Center’s programs that provide opportunities for students to focus on social impact. In tandem with other Institute resources for student innovators, PKG enables students to apply their innovation skills to solve real-world problems while supporting community-informed solutions to systemic challenges.

“The PKG Center is a valued partner in enabling the growing numbers of students who aspire to create impact-focused ventures,” says Don Shobrys, director of MIT Venture Mentoring Service.

In order to make those ventures successful, Tyger explains, “IDEAS teaches students frameworks to deeply understand the systems around a challenge, get to know who’s already addressing it, find gaps, and then design and implement something that will uniquely and sustainably address the challenge. Rather than optimizing for profit alone, IDEAS helps students learn how to optimize for what can produce the most social good or reduce the most harm.”

Tyger notes that although IDEAS’ emphasis on social impact is somewhat unique, it is complemented by MIT’s rich entrepreneurship ecosystem. “There are many resources and people who are incredibly generous with their time — and who above all do it because they know we are all supporting the growth of students,” she says.

This year’s program partners included MIT Sandbox and Arts Startup Incubator, which co-hosted informational sessions for applicants in the fall; BU Law Clinic, D-Lab, and Systems-Awareness Lab leaders, who served as guest speakers throughout the spring; Venture Mentoring Service, which matched teams with mentors; entrepreneurs-in-residence from the Martin Trust Center for MIT Entrepreneurship, who judged final pitches and advised teams; DesignX and the Center for Development and Entrepreneurship at MIT (formerly the Legatum Center), which provided additional support to several teams; MIT Solve, which hosted the teams on their voting platform; and MIT Innovation HQ, which provided space for students to meet one another and exchange ideas.

While IDEAS projects are designed to be a means of transformative change for public good, many students say that the program is transformative for them, as well. “Before IDEAS, I didn’t see myself as an innovator — just someone passionate about solving a problem that I’d heard people face across diseases,” reflects Anika Wadhera, a rising senior in biological engineering and co-founder of Chronolog Health, a platform revolutionizing chronic illness management. “Now I feel much more confident in my ability to actually make a difference by better understanding the different stakeholders and the factors that are necessary to make a transformative solution.”

A new strategy to cope with emotional stress

A study by MIT scientists supports “social good” as a cognitive approach to dealing with highly stressful events.

Some people, especially those in public service, perform admirable feats: Think of health-care workers fighting to keep patients alive or first responders arriving at the scene of a car crash. But the emotional weight can become a mental burden. Research has shown that emergency personnel are at elevated risk for mental health challenges like post-traumatic stress disorder. How can people undergo such stressful experiences and also maintain their well-being?

A new study from the McGovern Institute for Brain Research at MIT revealed that a cognitive strategy focused on social good may be effective in helping people cope with distressing events. The research team found that the approach was comparable to another well-established emotion regulation strategy, unlocking a new tool for dealing with highly adverse situations.

“How you think can improve how you feel,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT, who is a senior author of the paper. “This research suggests that the social good approach might be particularly useful in improving well-being for those constantly exposed to emotionally taxing events.”

The study, published today in PLOS ONE, is the first to examine the efficacy of this cognitive strategy. Nancy Tsai, a postdoc in Gabrieli’s lab at the McGovern Institute, is the lead author of the paper.

Emotion regulation tools

Emotion regulation is the ability to mentally reframe how we experience emotions — a skill critical to maintaining good mental health. Doing so can make one feel better when dealing with adverse events, and emotion regulation has been shown to boost emotional, social, cognitive, and physiological outcomes across the lifespan.

One emotion regulation strategy is “distancing,” where a person copes with a negative event by imagining it as happening far away, a long time ago, or from a third-person perspective. Distancing has been well-documented as a useful cognitive tool, but it may be less effective in certain situations, especially ones that are socially charged — like a firefighter rescuing a family from a burning home. Rather than distancing themselves, a person may instead be forced to engage directly with the situation.

“In these cases, the ‘social good’ approach may be a powerful alternative,” says Tsai. “When a person uses the social good method, they view a negative situation as an opportunity to help others or prevent further harm.” For example, a firefighter experiencing emotional distress might focus on the fact that their work enables them to save lives. The idea had yet to be backed by scientific investigation, so Tsai and her team, alongside Gabrieli, saw an opportunity to rigorously probe this strategy.

A novel study

The MIT researchers recruited a cohort of adults and had them complete a questionnaire to gather information including demographics, personality traits, and current well-being, as well as how they regulated their emotions and dealt with stress. The cohort was randomly split into two groups: a distancing group and a social good group. In the online study, each group was shown a series of images that were either neutral (such as fruit) or contained highly aversive content (such as bodily injury). Participants were fully informed of the kinds of images they might see and could opt out of the study at any time.

Each group was asked to use their assigned cognitive strategy to respond to half of the negative images. For example, while looking at a distressing image, a person in the distancing group could have imagined that it was a screenshot from a movie. Conversely, a subject in the social good group might have responded to the image by envisioning that they were a first responder saving people from harm. For the other half of the negative images, participants were asked to only look at them and pay close attention to their emotions. The researchers asked the participants how they felt after each image was shown.

Social good as a potent strategy

The MIT team found that both the distancing and social good approaches helped diminish negative emotions. Participants reported feeling better when they used these strategies after viewing adverse content compared to when they did not, and stated that both strategies were easy to implement.

The results also revealed that, overall, distancing yielded a stronger effect. Importantly, however, Tsai and Gabrieli believe that this study offers compelling evidence for social good as a powerful method better suited to situations when people cannot distance themselves, like rescuing someone from a car crash, “which is more probable for people in the real world,” notes Tsai. Moreover, the team discovered that people who most successfully used the social good approach were more likely to view stress as enhancing rather than debilitating. Tsai says this link may point to psychological mechanisms that underlie both emotion regulation and how people respond to stress.

Additionally, the results showed that older adults used the cognitive strategies more effectively than younger adults. The team suspects that this is probably because, as prior research has shown, older adults are more adept at regulating their emotions, likely due to having greater life experiences. The authors note that successful emotion regulation also requires cognitive flexibility, or having a malleable mindset to adapt well to different situations.

“This is not to say that people, such as physicians, should reframe their emotions to the point where they fully detach themselves from negative situations,” says Gabrieli. “But our study shows that the social good approach may be a potent strategy to combat the immense emotional demands of certain professions.”

The MIT team says that future studies are needed to further validate this work, and that such research is promising because it could uncover new cognitive tools to equip individuals to take care of themselves as they bravely assume the challenge of taking care of others.

Study: Weaker ocean circulation could enhance CO2 buildup in the atmosphere

New findings challenge current thinking on the ocean’s role in storing carbon.

As climate change advances, the ocean’s overturning circulation is predicted to weaken substantially. With such a slowdown, scientists estimate the ocean will pull down less carbon dioxide from the atmosphere. However, a slower circulation should also dredge up less carbon from the deep ocean that would otherwise be released back into the atmosphere. On balance, then, the ocean should retain its role as a net absorber of carbon dioxide from the atmosphere, though at a slower pace.

However, a new study by an MIT researcher finds that scientists may have to rethink the relationship between the ocean’s circulation and its long-term capacity to store carbon. As the ocean’s circulation weakens, the ocean could instead release more carbon from the deep into the atmosphere.

The reason has to do with a previously uncharacterized feedback between the ocean’s available iron, upwelling carbon and nutrients, surface microorganisms, and a little-known class of molecules known generally as “ligands.” When the ocean circulates more slowly, all these players interact in a self-perpetuating cycle that ultimately increases the amount of carbon that the ocean outgases back to the atmosphere.

“By isolating the impact of this feedback, we see a fundamentally different relationship between ocean circulation and atmospheric carbon levels, with implications for the climate,” says study author Jonathan Lauderdale, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “What we thought is going on in the ocean is completely overturned.”

Lauderdale says the findings show that “we can’t count on the ocean to store carbon in the deep ocean in response to future changes in circulation. We must be proactive in cutting emissions now, rather than relying on these natural processes to buy us time to mitigate climate change.”

His study appears today in the journal Nature Communications.

Box flow

In 2020, Lauderdale led a study that explored ocean nutrients, marine organisms, and iron, and how their interactions influence the growth of phytoplankton around the world. Phytoplankton are microscopic, plant-like organisms that live on the ocean surface and consume a diet of carbon and nutrients that upwell from the deep ocean and iron that drifts in from desert dust.

The more phytoplankton that can grow, the more carbon dioxide they can absorb from the atmosphere via photosynthesis, and this plays a large role in the ocean’s ability to sequester carbon.

For the 2020 study, the team developed a simple “box” model, representing conditions in different parts of the ocean as general boxes, each with a different balance of nutrients, iron, and ligands — organic molecules that are thought to be byproducts of phytoplankton. The team modeled a general flow between the boxes to represent the ocean’s larger circulation — the way seawater sinks, then is buoyed back up to the surface in different parts of the world.

This modeling revealed that, even if scientists were to “seed” the oceans with extra iron, that iron wouldn’t have much of an effect on global phytoplankton growth. The reason was due to a limit set by ligands. It turns out that, if left on its own, iron is insoluble in the ocean and therefore unavailable to phytoplankton. Iron only becomes soluble at “useful” levels when linked with ligands, which keep iron in a form that plankton can consume. Lauderdale found that adding iron to one ocean region to consume additional nutrients robs other regions of nutrients that phytoplankton there need to grow. This lowers the production of ligands and the supply of iron back to the original ocean region, limiting the amount of extra carbon that would be taken up from the atmosphere.

Unexpected switch

Once the team published their study, Lauderdale worked the box model into a publicly accessible form, adding ocean-atmosphere carbon exchange and extending the boxes to represent more diverse environments, such as conditions similar to the Pacific, the North Atlantic, and the Southern Ocean. In the process, he tested other interactions within the model, including the effect of varying ocean circulation.

He ran the model with different circulation strengths, expecting to see less atmospheric carbon dioxide with weaker ocean overturning — a relationship that previous studies have supported, dating back to the 1980s. But what he found instead was a clear and opposite trend: The weaker the ocean’s circulation, the more CO2 built up in the atmosphere.

“I thought there was some mistake,” Lauderdale recalls. “Why were atmospheric carbon levels trending the wrong way?”

When he checked the model, he found that the parameter describing ocean ligands had been left “on” as a variable. In other words, the model was calculating ligand concentrations as changing from one ocean region to another.

On a hunch, Lauderdale turned this parameter “off,” which set ligand concentrations as constant in every modeled ocean environment, an assumption that many ocean models typically make. That one change reversed the trend, back to the assumed relationship: A weaker circulation led to reduced atmospheric carbon dioxide. But which trend was closer to the truth?

Lauderdale looked to the scant available data on ocean ligands to see whether their concentrations were more constant or variable in the actual ocean. He found confirmation in GEOTRACES, an international program that coordinates measurements of trace elements and isotopes across the world’s oceans, whose data scientists can use to compare concentrations from region to region. Indeed, the molecules’ concentrations varied. If ligand concentrations do change from one region to another, then his surprising new result was likely representative of the real ocean: A weaker circulation leads to more carbon dioxide in the atmosphere.

“It’s this one weird trick that changed everything,” Lauderdale says. “The ligand switch has revealed this completely different relationship between ocean circulation and atmospheric CO2 that we thought we understood pretty well.”

Slow cycle

To see what might explain the overturned trend, Lauderdale analyzed biological activity and carbon, nutrient, iron, and ligand concentrations from the ocean model under different circulation strengths, comparing scenarios where ligands were variable or constant across the various boxes.

This revealed a new feedback: The weaker the ocean’s circulation, the less carbon and nutrients the ocean pulls up from the deep. Any phytoplankton at the surface would then have fewer resources to grow and would produce fewer byproducts (including ligands) as a result. With fewer ligands available, less iron at the surface would be usable, further reducing the phytoplankton population. There would then be fewer phytoplankton available to absorb carbon dioxide from the atmosphere and consume upwelled carbon from the deep ocean.
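
The chain of effects described above can be sketched as a toy fixed-point calculation. This is a minimal cartoon under stated assumptions, not Lauderdale's actual box model: every constant below is an illustrative placeholder, and the sketch demonstrates only the direction of the feedback (weaker circulation means less production, fewer ligands, less usable iron, and a less efficient biological pump), not the magnitude of any CO2 change.

```python
def pump_efficiency(circ, variable_ligands=True, n_iter=500):
    """Equilibrium efficiency of a cartoon biological pump.

    circ: overturning strength (arbitrary units). All parameters are
    illustrative placeholders, not values from the published model.
    """
    K = 0.5       # ligand level at which iron usability is half-saturated
    L0 = 0.05     # background ligand concentration
    alpha = 1.0   # ligands produced per unit of biological production
    mu = 1.0      # biological uptake rate
    L = 1.0       # ligand concentration (held fixed if not variable)
    e = 0.5       # fraction of upwelled carbon re-fixed by phytoplankton
    for _ in range(n_iter):  # relax toward the fixed point
        if variable_ligands:
            production = circ * mu * e / (circ + mu * e)  # export flux
            L = L0 + alpha * production  # ligands track biological output
        e = L / (L + K)  # usable iron limits pump efficiency
    return e

# With ligands held constant, efficiency is independent of circulation;
# with variable ligands, weaker circulation drags efficiency down,
# leaving more of the upwelled carbon to outgas to the atmosphere.
print(pump_efficiency(1.0), pump_efficiency(0.3))
```

The two-line difference between the constant and variable settings plays the role of the "ligand switch": holding ligands fixed decouples pump efficiency from circulation strength, while letting them vary closes the self-perpetuating loop.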

“My work shows that we need to look more carefully at how ocean biology can affect the climate,” Lauderdale points out. “Some climate models predict a 30 percent slowdown in the ocean circulation due to melting ice sheets, particularly around Antarctica. This huge slowdown in overturning circulation could actually be a big problem: In addition to a host of other climate issues, not only would the ocean take up less anthropogenic CO2 from the atmosphere, but that could be amplified by a net outgassing of deep ocean carbon, leading to an unanticipated increase in atmospheric CO2 and unexpected further climate warming.” 

MIT researchers introduce generative AI for databases

This new tool offers an easier way for people to analyze complex tabular data.

A new tool makes it easier for database users to perform complicated statistical analyses of tabular data without the need to know what is going on behind the scenes.

GenSQL, a generative AI system for databases, could help users make predictions, detect anomalies, guess missing values, fix errors, or generate synthetic data with just a few keystrokes.

For instance, if the system were used to analyze medical data from a patient who has always had high blood pressure, it could catch a blood pressure reading that is low for that particular patient but would otherwise be in the normal range.
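
The patient example can be made concrete with a small sketch. This is not GenSQL's actual interface or model family; the helper below is hypothetical and the readings are made up. It simply illustrates why a per-patient model can catch what a population-wide range check misses.

```python
from statistics import mean, stdev

def check_reading(history, reading, z_threshold=3.0, normal_range=(90, 120)):
    """Return (within_population_range, anomalous_for_patient).

    history: one patient's past systolic readings (hypothetical data).
    A simple per-patient Gaussian fit stands in for a richer model.
    """
    within_population_range = normal_range[0] <= reading <= normal_range[1]
    mu, sigma = mean(history), stdev(history)
    anomalous_for_patient = abs(reading - mu) / sigma > z_threshold
    return within_population_range, anomalous_for_patient

# A patient who always runs high: 112 mmHg is unremarkable for the
# population, yet far outside this patient's own distribution.
history = [148, 152, 150, 147, 151, 149, 153, 150]
print(check_reading(history, 112))  # (True, True): in range, yet anomalous
```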

GenSQL automatically integrates a tabular dataset and a generative probabilistic AI model, which can account for uncertainty and adjust its decision-making based on new data.

Moreover, GenSQL can be used to produce and analyze synthetic data that mimic the real data in a database. This could be especially useful in situations where sensitive data cannot be shared, such as patient health records, or when real data are sparse.

This new tool is built on top of SQL, a programming language for database creation and manipulation that was introduced in the late 1970s and is used by millions of developers worldwide.

“Historically, SQL taught the business world what a computer could do. They didn’t have to write custom programs, they just had to ask questions of a database in high-level language. We think that, when we move from just querying data to asking questions of models and data, we are going to need an analogous language that teaches people the coherent questions you can ask a computer that has a probabilistic model of the data,” says Vikash Mansinghka ’05, MEng ’09, PhD ’09, senior author of a paper introducing GenSQL and a principal research scientist and leader of the Probabilistic Computing Project in the MIT Department of Brain and Cognitive Sciences.

When the researchers compared GenSQL to popular, AI-based approaches for data analysis, they found that it was not only faster but also produced more accurate results. Importantly, the probabilistic models used by GenSQL are explainable, so users can read and edit them.

“Looking at the data and trying to find some meaningful patterns by just using some simple statistical rules might miss important interactions. You really want to capture the correlations and the dependencies of the variables, which can be quite complicated, in a model. With GenSQL, we want to enable a large set of users to query their data and their model without having to know all the details,” adds lead author Mathieu Huot, a research scientist in the Department of Brain and Cognitive Sciences and member of the Probabilistic Computing Project.

They are joined on the paper by Matin Ghavami and Alexander Lew, MIT graduate students; Cameron Freer, a research scientist; Ulrich Schaechtle and Zane Shelby of Digital Garage; Martin Rinard, an MIT professor in the Department of Electrical Engineering and Computer Science and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Feras Saad ’15, MEng ’16, PhD ’22, an assistant professor at Carnegie Mellon University. The research was recently presented at the ACM Conference on Programming Language Design and Implementation.

Combining models and databases

SQL, which stands for structured query language, is a programming language for storing and manipulating information in a database. In SQL, people can ask questions about data using keywords, such as by summing, filtering, or grouping database records.

However, querying a model can provide deeper insights, since models can capture what data imply for an individual. For instance, a female developer who wonders if she is underpaid is likely more interested in what salary data mean for her individually than in trends from database records.

The researchers noticed that SQL didn’t provide an effective way to incorporate probabilistic AI models, but at the same time, approaches that use probabilistic models to make inferences didn’t support complex database queries.

They built GenSQL to fill this gap, enabling someone to query both a dataset and a probabilistic model using a straightforward yet powerful formal programming language.

A GenSQL user uploads their data and probabilistic model, which the system automatically integrates. Then, she can run queries on data that also get input from the probabilistic model running behind the scenes. This not only enables more complex queries but can also provide more accurate answers.

For instance, a query in GenSQL might be something like, “How likely is it that a developer from Seattle knows the programming language Rust?” Just looking at a correlation between columns in a database might miss subtle dependencies. Incorporating a probabilistic model can capture more complex interactions.   
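
One way a raw column correlation can mislead, and even point the wrong way, is Simpson's paradox. The numbers below are invented for illustration (they are not from the paper), but they show why conditioning on another variable, which is in effect what querying a probabilistic model does, can reverse the marginal answer.

```python
from collections import defaultdict

# Made-up developer records: (city, role, knows_rust).
rows = (
      [("seattle", "web", True)] * 20 + [("seattle", "web", False)] * 80
    + [("seattle", "systems", True)] * 16 + [("seattle", "systems", False)] * 4
    + [("other", "web", True)] * 2 + [("other", "web", False)] * 18
    + [("other", "systems", True)] * 70 + [("other", "systems", False)] * 30
)

def rust_rate(records):
    """Fraction of records where the developer knows Rust."""
    return sum(knows for _, _, knows in records) / len(records)

by_city, by_city_role = defaultdict(list), defaultdict(list)
for city, role, knows in rows:
    by_city[city].append((city, role, knows))
    by_city_role[(city, role)].append((city, role, knows))

marginal = {city: rust_rate(recs) for city, recs in by_city.items()}
conditional = {key: rust_rate(recs) for key, recs in by_city_role.items()}

# Marginally, Seattle looks less Rust-savvy (0.30 vs 0.60) because it has
# mostly web developers; conditioned on role, Seattle is higher in BOTH roles.
print(marginal, conditional)
```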

Plus, the probabilistic models GenSQL utilizes are auditable, so people can see which data the model uses for decision-making. In addition, these models provide measures of calibrated uncertainty along with each answer.

For instance, with this calibrated uncertainty, if one queries the model for predicted outcomes of different cancer treatments for a patient from a minority group that is underrepresented in the dataset, GenSQL would tell the user that it is uncertain, and how uncertain it is, rather than overconfidently advocating for the wrong treatment.

Faster and more accurate results

To evaluate GenSQL, the researchers compared their system to popular baseline methods that use neural networks. GenSQL was between 1.7 and 6.8 times faster than these approaches, executing most queries in a few milliseconds while providing more accurate results.

They also applied GenSQL in two case studies: one in which the system identified mislabeled clinical trial data and the other in which it generated accurate synthetic data that captured complex relationships in genomics.

Next, the researchers want to apply GenSQL more broadly to conduct large-scale modeling of human populations. With GenSQL, they can generate synthetic data to draw inferences about things like health and salary while controlling what information is used in the analysis.

They also want to make GenSQL easier to use and more powerful by adding new optimizations and automation to the system. In the long run, the researchers want to enable users to make natural language queries in GenSQL. Their goal is to eventually develop a ChatGPT-like AI expert one could talk to about any database, which grounds its answers using GenSQL queries.   

This research is funded, in part, by the Defense Advanced Research Projects Agency (DARPA), Google, and the Siegel Family Foundation.

What is language for?

Drawing on evidence from neurobiology, cognitive science, and corpus linguistics, MIT researchers make the case that language is a tool for communication, not for thought.

Language is a defining feature of humanity, and for centuries, philosophers and scientists have contemplated its true purpose. We use language to share information and exchange ideas — but is it more than that? Do we use language not just to communicate, but to think?

In the June 19 issue of the journal Nature, McGovern Institute for Brain Research neuroscientist Evelina Fedorenko and colleagues argue that we do not. Language, they say, is primarily a tool for communication.

Fedorenko acknowledges that there is an intuitive link between language and thought. Many people experience an inner voice that seems to narrate their own thoughts. And it’s not unreasonable to expect that well-spoken, articulate individuals are also clear thinkers. But as compelling as these associations can be, they are not evidence that we actually use language to think.

“I think there are a few strands of intuition and confusions that have led people to believe very strongly that language is the medium of thought,” she says. “But when they are pulled apart thread by thread, they don’t really hold up to empirical scrutiny.”

Separating language and thought

For centuries, language’s potential role in facilitating thinking was nearly impossible to evaluate scientifically. But neuroscientists and cognitive scientists now have tools that enable a more rigorous consideration of the idea. Evidence from both fields, which Fedorenko, MIT brain and cognitive scientist and linguist Edward Gibson, and University of California at Berkeley cognitive scientist Steven Piantadosi review in their Nature Perspective, supports the idea that language is a tool for communication, not for thought.

“What we’ve learned by using methods that actually tell us about the engagement of the linguistic processing mechanisms is that those mechanisms are not really engaged when we think,” Fedorenko says. Also, she adds, “you can take those mechanisms away, and it seems that thinking can go on just fine.”

Over the past 20 years, Fedorenko and other neuroscientists have advanced our understanding of what happens in the brain as it generates and understands language. Now, using functional MRI to find parts of the brain that are specifically engaged when someone reads or listens to sentences or passages, they can reliably identify an individual’s language-processing network. Then they can monitor those brain regions while the person performs other tasks, from solving a sudoku puzzle to reasoning about other people’s beliefs.

“Pretty much everything we’ve tested so far, we don’t see any evidence of the engagement of the language mechanisms,” Fedorenko says. “Your language system is basically silent when you do all sorts of thinking.”

That’s consistent with observations from people who have lost the ability to process language due to an injury or stroke. Severely affected patients can be completely unable to process words, yet this does not interfere with their ability to solve math problems, play chess, or plan for future events. “They can do all the things that they could do before their injury. They just can’t take those mental representations and convert them into a format which would allow them to talk about them with others,” Fedorenko says. “If language gives us the core representations that we use for reasoning, then … destroying the language system should lead to problems in thinking as well, and it really doesn’t.”

Conversely, intellectual impairments are not always associated with language impairment; people with intellectual disability disorders or neuropsychiatric disorders that limit their ability to think and reason do not necessarily have problems with basic linguistic functions. Just as language does not appear to be necessary for thought, Fedorenko and colleagues conclude that it is also not sufficient to produce clear thinking.

Language optimization

In addition to arguing that language is unlikely to be used for thinking, the scientists considered its suitability as a communication tool, drawing on findings from linguistic analyses. Analyses across dozens of diverse languages, both spoken and signed, have found recurring features that make them easy to produce and understand. “It turns out that pretty much any property you look at, you can find evidence that languages are optimized in a way that makes information transfer as efficient as possible,” Fedorenko says.

That’s not a new idea, but it has held up as linguists analyze larger corpora across more diverse sets of languages, which has become possible in recent years as the field has assembled corpora that are annotated for various linguistic features. Such studies find that across languages, sounds and words tend to be pieced together in ways that minimize effort for the language producer without muddling the message. For example, commonly used words tend to be short, while words whose meanings depend on one another tend to cluster close together in sentences. Likewise, linguists have noted features that help languages convey meaning despite potential “signal distortions,” whether due to attention lapses or ambient noise.

“All of these features seem to suggest that the forms of languages are optimized to make communication easier,” Fedorenko says, pointing out that such features would be irrelevant if language were primarily a tool for internal thought.

“Given that languages have all these properties, it’s likely that we use language for communication,” she says. She and her coauthors conclude that as a powerful tool for transmitting knowledge, language reflects the sophistication of human cognition — but does not give rise to it. 

Summer 2024 reading from MIT

MIT News rounds up recent titles from Institute faculty and staff.

MIT faculty and staff authors have published a plethora of books, chapters, and other literary contributions in the past year. The following titles represent some of their works published in the past 12 months. In addition to links for each book from its publisher, the MIT Libraries has compiled a helpful list of the titles held in its collections.

Looking for more literary works from the MIT community? Enjoy our book lists from 2023, 2022, and 2021.

Happy reading!

Novel, memoir, and poetry

“Seizing Control: Managing Epilepsy and Others’ Reactions to It — A Memoir” (Haley’s, 2023)
By Laura Beretsky, grant writer in the MIT Introduction to Technology, Engineering, and Science (MITES) program

Beretsky’s memoir, “Seizing Control,” details her journey with epilepsy, discrimination, and a major surgical procedure to reduce her seizures. After two surgical interventions, she has been seizure-free for eight years, though she notes she will always live with epilepsy.

“Sky. Pond. Mouth.” (Yas Press, 2024)
By Kevin McLellan, staff member in MIT’s Program in Art, Culture, and Technology

In this book of poetry, physical and emotional qualities free-range between the animate and inanimate as though the world is written with dotted lines. With chiseled line breaks, intriguing meta-poetic levels, and punctuation like seed pods, McLellan’s poems, if we look twice, might flourish outside the book’s margin, past the grow light of the screen, even (especially) other borderlines we haven’t begun to imagine.

Science and engineering

“The Visual Elements: Handbooks for Communicating Science and Engineering” (University of Chicago Press, 2023 and 2024)
By Felice Frankel, research scientist in chemical engineering

Each of the two books in the “Visual Elements” series focuses on a different aspect of scientific visual communication: photography on one hand and design on the other. Their unifying goal is to provide guidance for scientists and engineers who must communicate their work with the public, for grant applications, journal submissions, conference or poster presentations, and funding agencies. The books show researchers the importance of presenting their work in clear, concise, and appealing ways that also maintain scientific integrity.

“A Book of Waves” (Duke University Press, 2023)
By Stefan Helmreich, professor of anthropology

In this book, Helmreich examines ocean waves as forms of media that carry ecological, geopolitical, and climatological news about our planet. Drawing on ethnographic work with oceanographers and coastal engineers in the Netherlands, the United States, Australia, Japan, and Bangladesh, he details how scientists at sea and in the lab apprehend waves’ materiality through abstractions, seeking to capture in technical language these avatars of nature at once periodic and irreversible, wild and pacific, ephemeral and eternal.

“An Introduction to System Safety Engineering” (MIT Press, 2023)
By Nancy G. Leveson, professor of aeronautics and astronautics

Preventing accidents and losses in complex systems requires a holistic perspective that can accommodate unprecedented types of technology and design. Leveson’s book covers the history of safety engineering; explores risk, ethics, legal frameworks, and policy implications; and explains why accidents happen and how to mitigate risks in modern, software-intensive systems. It includes accounts of well-known accidents like the Challenger and Columbia space shuttle disasters, Deepwater Horizon oil spill, and Chernobyl and Fukushima nuclear accidents, examining their causes and how to prevent similar incidents in the future.

“Solvable: How We Healed the Earth, and How We Can Do It Again” (University of Chicago Press, 2024)
By Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry

We solved planet-threatening problems before, Solomon argues, and we can do it again. She knows firsthand what those solutions entail, as she gained international fame as the leader of a 1986 expedition to Antarctica, making discoveries that were key to healing the damaged ozone layer. She saw a path from scientific and public awareness to political engagement, international agreement, industry involvement, and effective action. Solomon connects this triumph to the stories of other past environmental victories — against ozone depletion, smog, pesticides, and lead — to extract the essential elements of what makes change possible.

Culture, humanities, and social sciences

“Political Rumors: Why We Accept Misinformation and How to Fight It” (Princeton University Press, 2023)
By Adam Berinsky, professor of political science

Political rumors pollute the political landscape. But if misinformation crowds out the truth, how can democracy survive? Berinsky examines why political rumors exist and persist despite their unsubstantiated and refuted claims, who is most likely to believe them, and how to combat them. He shows that a tendency toward conspiratorial thinking and vehement partisan attachment fuel belief in rumors. Moreover, in fighting misinformation, it is as important to target the undecided and the uncertain as it is the true believers.

“Laws of the Land: Fengshui and the State in Qing Dynasty China” (Princeton University Press, 2023)
By Tristan Brown, assistant professor of history

In “Laws of the Land,” Brown tells the story of the important roles — especially legal ones — played by fengshui in Chinese society during China’s last imperial dynasty, the Manchu Qing (1644–1912). Employing archives from Mainland China and Taiwan that have only recently become available, this is the first book to document fengshui’s invocations in Chinese law during the Qing dynasty.

“Trouble with Gender: Sex Facts, Gender Fictions” (Polity, 2024)
By Alex Byrne, professor of philosophy

MIT philosopher Alex Byrne knows that within his field, he’s very much in the minority when it comes to his views on sex and gender. In “Trouble with Gender,” Byrne suggests that some ideas regarding sex and gender have not been properly examined by philosophers, and he argues for a reasoned and civil conversation on the topic.

“Life at the Center: Haitians and Corporate Catholicism in Boston” (University of California Press, 2024)
By Erica Caple James, professor of medical anthropology and urban studies

In “Life at the Center,” James traces how faith-based and secular institutions in Boston have helped Haitian refugees and immigrants attain economic independence, health, security, and citizenship in the United States. The culmination of more than a decade of advocacy and research on behalf of the Haitians in Boston, this groundbreaking work exposes how Catholic corporations have strengthened — but also eroded — Haitians’ civic power.

“Portable Postsocialisms: New Cuban Mediascapes after the End of History” (University of Texas Press, 2024)
By Paloma Duong, associate professor of media studies/writing

Why does Cuban socialism endure as an object of international political desire, while images of capitalist markets consume Cuba’s national imagination? “Portable Postsocialisms” calls on a vast multimedia archive to offer a groundbreaking cultural interpretation of Cuban postsocialism. Duong examines songs, artworks, advertisements, memes, literature, jokes, and networks that refuse exceptionalist and exoticizing visions of Cuba.

“They All Made Peace — What Is Peace?” (University of Chicago Press, 2023)
Chapter by Lerna Ekmekcioglu, professor of history and director of the Program in Women’s and Gender Studies

In her chapter, Ekmekcioglu contends that the Treaty of Lausanne, which followed the first world war, is an often-overlooked event of great historical significance for Armenians. The treaty became the “birth certificate” of modern Turkey, but there was no redress for Armenians. The chapter uses new research to reconstruct the dynamics of the treaty negotiations, illuminating both Armenians’ struggles as well as the international community’s struggles to deliver consistent support for multiethnic, multireligious states.

“We’ve Got You Covered: Rebooting American Health Care” (Portfolio, 2023)
By Amy Finkelstein, professor of economics, and Liran Einav

Few of us need convincing that the American health insurance system needs reform. But many existing proposals miss the point, focusing on expanding one relatively successful piece of the system or building in piecemeal additions. As Finkelstein and Einav point out, our health care system was never deliberately designed, but rather pieced together to deal with issues as they became politically relevant. The result is a sprawling, arbitrary, and inadequate mess that has left 30 million Americans without formal insurance. It’s time, the authors argue, to tear it all down and rebuild, sensibly and deliberately.

“At the Pivot of East and West: Ethnographic, Literary and Filmic Arts” (Duke University Press, 2023)
By Michael M.J. Fischer, professor of anthropology and of science and technology studies

In his latest book, Fischer examines documentary filmmaking and literature from Southeast Asia and Singapore for their para-ethnographic insights into politics, culture, and aesthetics. Continuing his project of applying anthropological thinking to the creative arts, Fischer shows how art and fiction trace the ways in which taken-for-granted common sense changes over time, speak to the transnational present, and track signals of the future before they surface in public awareness.

“Lines Drawn across the Globe” (McGill-Queen's University Press, 2023)
By Mary Fuller, professor of literature and chair of the faculty

Around 1600, English geographer and cleric Richard Hakluyt published a 2,000-page collection of travel narratives, royal letters, ships’ logs, maps, and more from over 200 voyages. In "Lines Drawn across the Globe," Fuller traces the history of the book’s compilation and gives order and meaning to its diverse contents. From Sierra Leone to Iceland, from Spanish narratives of New Mexico to French accounts of the Saint Lawrence and Portuguese accounts of China, Hakluyt’s shaping of the book provides a conceptual map of the world’s regions and of England’s real and imagined relations to them.

“The Rise and Fall of the EAST: How Exams, Autocracy, Stability, and Technology Brought China Success, and Why They Might Lead to Its Decline” (Yale University Press, 2023)
By Yasheng Huang, the Epoch Foundation Professor of International Management and professor of global economics and management

According to Huang, the world is seeing a repeat of Chinese history during which restrictions on economic and political freedom created economic stagnation. The bottom line: “Without academic collaboration, without business collaboration, without technological collaborations, the pace of Chinese technological progress is going to slow down dramatically.”

“The Long First Millennium: Affluence, Architecture, and Its Dark Matter Economy” (Routledge, 2023)
By Mark Jarzombek, professor of the history and theory of architecture

Jarzombek’s book argues that long-distance trade in luxury items — such as diamonds, gold, cinnamon, scented woods, ivory, and pearls, all of which require little overhead in their acquisition and were relatively easy to transport — played a foundational role in the creation of what we would call “global trade” in the first millennium CE. The book coins the term “dark matter economy” to better describe this complex — though mostly invisible — relationship to normative realities. “The Long First Millennium” will appeal to students, scholars, and anyone interested in the effect of trade on medieval society.

“World Literature in the Soviet Union” (Academic Studies Press, 2023)
Chapter by Maria Khotimsky, senior lecturer in Russian

Khotimsky’s chapter, “The Treasure Trove of World Literature: Shaping the Concept of World Literature in Post-Revolutionary Russia,” examines Vsemirnaia Literatura (World Literature), an early Soviet publishing house founded in 1919 in Petersburg that advanced an innovative canon of world literature beyond the European tradition. It analyzes the publishing house’s views on translation, focusing on book prefaces that reveal a search for a new evaluative system, an adaptation to changing socio-cultural norms, and a reassessment of the roles of readers, critics, and the very endeavor of translation.

“Dare to Invent the Future: Knowledge in the Service of and Through Problem-Solving” (MIT Press, 2023)
By Clapperton Chakanetsa Mavhunga, professor of science, technology, and society

In this provocative book — the first in a trilogy — Chakanetsa Mavhunga argues that our critical thinkers must become actual thinker-doers. Taking its title from one of Thomas Sankara’s most inspirational speeches, “Dare to Invent the Future” looks for moments in Africa’s story where precedents of critical thought and knowledge in service of problem-solving are evident to inspire readers to dare to invent such a knowledge system.

“Death, Dominance, and State-Building: The US in Iraq and the Future of American Military Intervention” (Oxford University Press, 2024)
By Roger Petersen, the Arthur and Ruth Sloan Professor of Political Science

“Death, Dominance, and State-Building” provides the first comprehensive analytic history of post-invasion Iraq. Although the war is almost universally derided as one of the biggest foreign policy blunders of the post-Cold War era, Petersen argues that the course and conduct of the conflict are poorly understood. The book applies an accessible framework to a variety of case studies across time and region. It concludes by drawing lessons relevant to future American military interventions.

Technology, systems, and society

“Code Work: Hacking Across the U.S./México Techno-Borderlands” (Princeton University Press, 2023)
By Héctor Beltrán, assistant professor of anthropology

In this book, Beltrán examines Mexican and Latinx coders’ personal strategies of self-making as they navigate a transnational economy of tech work. Beltrán shows how these hackers apply concepts from the coding world to their lived experiences, deploying batches, loose coupling, iterative processing (looping), hacking, prototyping, and full-stack development in their daily social interactions — at home, in the workplace, on the dating scene, and in their understanding of the economy, culture, and geopolitics.

“Unmasking AI: My Mission to Protect What is Human in a World of Machines” (Penguin Random House, 2023)
By Joy Buolamwini SM ’17, PhD ’22, member of the Media Lab Director’s Circle

To many it may seem like recent developments in artificial intelligence emerged out of nowhere to pose unprecedented threats to humankind. But to Buolamwini, this moment has been a long time in the making. “Unmasking AI” is the remarkable story of how Buolamwini uncovered what she calls “the coded gaze” — evidence of encoded discrimination and exclusion in tech products. She shows how racism, sexism, colorism, and ableism can overlap and render broad swaths of humanity “excoded” and therefore vulnerable in a world rapidly adopting AI tools.

“Counting Feminicide: Data Feminism in Action” (MIT Press, 2024)
By Catherine D’Ignazio, associate professor of urban science and planning

“Counting Feminicide” brings to the fore the work of data activists across the Americas who are documenting feminicide, and challenging the reigning logic of data science by centering care, memory, and justice in their work. D’Ignazio describes the creative, intellectual, and emotional labor of feminicide data activists who are at the forefront of a data ethics that rigorously and consistently takes power and people into account.

“Rethinking Cyber Warfare: The International Relations of Digital Disruption” (Oxford University Press, 2024)
By R. David Edelman, research fellow at the MIT Center for International Studies

Fifteen years into the era of “cyber warfare,” are we any closer to understanding the role a major cyberattack would play in international relations — or to preventing one? Uniquely spanning disciplines and enriched by the insights of a leading practitioner, Edelman provides a fresh understanding of the role that digital disruption plays in contemporary international security.

“Model Thinking for Everyday Life: How to Make Smarter Decisions” (INFORMS, 2023)
By Richard Larson, professor post-tenure in the Institute for Data, Systems, and Society

Decisions are a part of everyday life, whether simple or complex. It’s all too easy to jump to Google for the answers, but where does that take us? We’re losing the ability to think critically and decide for ourselves. In this book, Larson asks readers to undertake a major mind shift in our everyday thought processes. Model thinking develops our critical thinking skills, using a framework of conceptual and mathematical tools to help guide us to full comprehension and better decisions.

“Future[tectonics]: Exploring the intersection between technology, architecture and urbanism” (Parametric Architecture, 2024)
Chapter by Jacob Lehrer, project coordinator in the Department of Mathematics

In his chapter, “Garbage In, Garbage Out: How Language Models Can Reinforce Biases,” Lehrer discusses how inherent bias is baked into large data sets, like those used to train massive AI algorithms, and how society will need to reconcile with the inherent biases built into systems of power. He also attempts to reconcile with it himself, delving into the mathematics behind these systems.

“Music and Mind: Harnessing the Arts for Health and Wellness” (Penguin Random House, 2024)
Chapter by Tod Machover, the Muriel R. Cooper Professor of Music and Media; Rébecca Kleinberger SM ’14, PhD ’20; and Alexandra Rieger SM ’18, doctoral candidate in media arts and sciences

In their chapter, “Composing the Future of Health,” the co-authors discuss their approach to combining scientific research, technology innovation, and new composing strategies to create evidence-based, emotionally potent music that can delight and heal.

“The Heart and the Chip: Our Bright Future with Robots” (W. W. Norton and Company, 2024)
By Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of the Computer Science and Artificial Intelligence Laboratory; and Gregory Mone

In “The Heart and the Chip,” Rus and Mone provide an overview of the interconnected fields of robotics, artificial intelligence, and machine learning, and reframe the way we think about intelligent machines while weighing the moral and ethical consequences of their role in society. Robots aren’t going to steal our jobs, they argue; they’re going to make us more capable, productive, and precise.

Education, business, finance, and social impact

“Disciplined Entrepreneurship Startup Tactics: 15 Tactics to Turn Your Business Plan Into a Business” (Wiley, 2024)
By Paul Cheek, executive director and entrepreneur in residence at the Martin Trust Center for MIT Entrepreneurship and senior lecturer in the MIT Sloan School of Management, with foreword by Bill Aulet, professor of the practice of entrepreneurship at MIT Sloan and managing director of the Martin Trust Center

Cheek provides a hands-on, practical roadmap for transforming one great idea into a functional, funded, and staffed startup. Readers will find ground-level, down-and-dirty entrepreneurial tactics — like how to conduct advanced primary market research, market and sell to your first customers, and take a scrappy approach to building your first products — that keep young firms growing and maximize impact with limited resources.

“Organic Social Media: How to Build Flourishing Online Communities” (KoganPage, 2023)
By Jenny Li Fowler, director of social media strategy in the Institute Office of Communications

In “Organic Social Media,” Fowler outlines the important steps that social media managers need to take to enhance an organization's broader growth objectives. Fowler breaks down the key questions to help readers determine the best platforms to invest in, how they can streamline approval processes, and other essential strategic steps to create an organic following on social platforms.

“From Intention to Impact: A Practical Guide to Diversity, Equity, and Inclusion” (MIT Press, 2024)
By Malia Lazu, lecturer in the MIT Sloan School of Management

In her new book, Lazu draws on her background as a community organizer, her corporate career as a bank president, and now her experience as a leading consultant to explain what has been holding organizations back and what they can do to become more inclusive and equitable. “From Intention to Impact” goes beyond “feel good” PR-centric actions to showcase the real work that must be done to create true and lasting change.

“The AFIRE Guide to U.S. Real Estate Investing” (Afire and McGraw Hill, 2024)
Chapter by Jacques Gordon, lecturer in the MIT Center for Real Estate

In his chapter, “The Broker and the Investment Advisor: A wide range of options,” Gordon discusses important financial topics including information for lenders and borrowers, joint ventures, loans and debt, comingled funds, bankruptcy, and Islamic finance.

“The Geek Way: The Radical Mindset That Drives Extraordinary Results” (Hachette Book Group, 2023)
By Andrew McAfee, principal research scientist and co-director of the MIT Initiative on the Digital Economy

The geek way of management delivers excellent performance while offering employees a work environment that features high levels of autonomy and empowerment. In what Eric Schmidt calls a “handbook for disruptors,” “The Geek Way” reveals a new way to get big things done. It will change the way readers think about work, teams, projects, and culture, and give them the insight and tools to harness our human superpowers of learning and cooperation.

“Iterate: The Secret to Innovation in Schools” (Teaching Systems Lab, 2023)
By Justin Reich, associate professor in comparative media studies/writing

In “Iterate,” Reich delivers an insightful bridge between contemporary educational research and classroom teaching, showing readers how to leverage the cycle of experiment and experience to create a compelling and engaging learning environment. Readers learn how to employ a process of continuous improvement and tinkering to develop exciting new programs, activities, processes, and designs.

“red helicopter — a parable for our times: lead change with kindness (plus a little math)” (HarperCollins, 2024)
By James Rhee, senior lecturer in the MIT Sloan School of Management

Is it possible to be successful and kind? To lead a company or organization with precision and compassion? To honor who we are in all areas of our lives? While eloquently sharing a story of personal and professional success, Rhee presents a comforting yet bold solution to the dissatisfaction and worry we all feel in a chaotic and sometimes terrifying world.

“Routes to Reform: Education Politics in Latin America” (Oxford University Press, 2024)
By Ben Ross Schneider, the Ford International Professor of Political Science and faculty director of the MIT-Chile Program and MISTI Chile

In “Routes to Reform,” Ben Ross Schneider examines education policy throughout Latin America to show that reforms to improve learning — especially making teacher careers more meritocratic and less political — are possible. He demonstrates that contrary to much established theory, reform outcomes in Latin America depended less on institutions and broad coalitions, and more on micro-level factors like civil society organizations, teacher unions, policy networks, and technocrats.

“Wiring the Winning Organization: Liberating Our Collective Greatness through Slowification, Simplification, and Amplification” (IT Revolution, 2023)
By Steven J. Spear, senior lecturer in system dynamics at the MIT Sloan School of Management, and Gene Kim

Organizations succeed when they design their processes, routines, and procedures to encourage employees to problem-solve and contribute to a common purpose. DevOps, Lean, and Agile got us part of the way. Now with “Wiring the Winning Organization,” Spear and Kim introduce a new theory of organizational management: Organizations win by using three mechanisms to slowify, simplify, and amplify, which systematically moves problem-solving from high-risk danger zones to low-risk winning zones.

“Oxford Research Encyclopedia of Economics and Finance” (Oxford University Press, 2024)
Chapter by Annie Thompson, lecturer in the MIT Center for Real Estate; Walter Torous, senior lecturer at the MIT Center for Real Estate; and William Torous

In their chapter, “What Causes Residential Mortgage Defaults?” the authors assess the voluminous research investigating why households default on their residential mortgages. A particular focus is oriented towards critically evaluating the recent application of causal statistical inference to residential defaults on mortgages.

“Data Is Everybody’s Business: The Fundamentals of Data Monetization” (MIT Press, 2023)
By Barbara H. Wixom, principal research scientist at the MIT Sloan Center for Information Systems Research (MIT CISR); Leslie Owens, senior lecturer at the MIT Sloan School of Management and former executive director of MIT CISR; and Cynthia M. Beath

In “Data Is Everybody’s Business,” the authors offer a clear and engaging way for people across the entire organization to understand data monetization and make it happen. The authors identify three viable ways to convert data into money — improving work with data, wrapping products with data, and selling information offerings — and explain when to pursue each and how to succeed.

Arts, architecture, planning, and design

“The Routledge Handbook of Museums, Heritage, and Death” (Routledge, 2023)
Chapter by Laura Anderson Barbata, lecturer in MIT’s Program in Art, Culture, and Technology

This book provides an examination of death, dying, and human remains in museums and heritage sites around the world. In her chapter, “Julia Pastrana’s Long Journey Home,” Barbata describes the case of Julia Pastrana (1834-1860), an indigenous Mexican opera singer who suffered from hypertrichosis terminalis and hyperplasia gingival. Due to her appearance, Pastrana was exploited and exhibited for over 150 years: during her lifetime and, in an embalmed state, long after her early death. Barbata sheds light on the ways in which the systems that justified Pastrana’s exploitation continue to operate today.

“Emergency INDEX: An Annual Document of Performance Practice, vol. 10” (Ugly Duckling Press, 2023)
Chapter by Gearoid Dolan, staff member in MIT’s Program in Art, Culture, and Technology

This “bible of performance art activity” documents performance projects from around the world. Dolan’s chapter describes “Protest ReEmbodied,” a performance that took place online during Covid-19 lockdown. The performance was a live version of the ongoing “Protest ReEmbodied” project, an app that individuals can download and run on their computers to perform on camera, inserted into protest footage.

“Land Air Sea: Architecture and Environment in the Early Modern Era” (Brill, 2023)
Chapter by Caroline Murphy, the Clarence H. Blackall Career Development Assistant Professor in the Department of Architecture

“Land Air Sea” positions the long Renaissance and 18th century as being vital for understanding how many of the concerns present in contemporary debates on climate change and sustainability originated in earlier centuries. Murphy’s chapter examines how Girolamo di Pace da Prato, a state engineer in the Duchy of Florence, understood and sought to mitigate the problems of alluvial flooding in the mid-sixteenth century, an era of exceptional aquatic and environmental volatility.


“Made Here: Recipes and Reflections From NYC’s Asian Communities” (Send Chinatown Love, 2023)
Chapter by Robin Zhang, postdoc in mathematics, and Diana Le

In their chapter, “Flushing: The Melting Pot’s Melting Pot,” the authors explore how Flushing, New York — whose Chinatown is the largest and fastest growing in the world — earned the title of the “melting pot’s melting pot” through its cultural history. Readers will walk down its streets past snack stalls, fabric stores, language schools, hair salons, churches, and shrines, hearing English interspersed with Korean, several dialects of Chinese, Hindi, Bengali, Urdu, and hundreds of the other fibers that make up Flushing’s complex ethnolinguistic fabric.

Pioneering the future of materials extraction

MIT spinout SiTration looks to disrupt industries with a revolutionary process for recovering and extracting critical materials.

The next time you cook pasta, imagine that you are cooking spaghetti, rigatoni, and seven other varieties all together, and they need to be separated onto 10 different plates before serving. A colander can remove the water — but you still have a mound of unsorted noodles.
Now imagine that this had to be done for thousands of tons of pasta a day. That gives you an idea of the scale of the problem facing Brendan Smith PhD ’18, co-founder and CEO of SiTration, a startup formed out of MIT’s Department of Materials Science and Engineering (DMSE) in 2020.
SiTration, which raised $11.8 million in seed capital led by venture capital firm 2150 earlier this month, is revolutionizing the extraction and refining of copper, cobalt, nickel, lithium, precious metals, and other materials critical to manufacturing clean-energy technologies such as electric motors, wind turbines, and batteries. Its initial target applications are recovering the materials from complex mining feed streams, spent lithium-ion batteries from electric vehicles, and various metals refining processes.
The company’s breakthrough lies in a new silicon membrane technology that can be adjusted to efficiently recover disparate materials, providing a more sustainable and economically viable alternative to conventional, chemically intensive processes. Think of a colander with adjustable pores to strain different types of pasta. SiTration’s technology has garnered interest from industry players, including mining giant Rio Tinto.
Some observers may question whether targeting such different industries could cause the company to lose focus. “But when you dig into these markets, you discover there is actually a significant overlap in how all of these materials are recovered, making it possible for a single solution to have impact across verticals,” Smith says.

Powering up materials recovery

Conventional methods of extracting critical materials in mining, refining, and recycling lithium-ion batteries involve heavy use of chemicals and heat, which harm the environment. Typically, raw ore from mines or spent batteries are ground into fine particles before being dissolved in acid or incinerated in a furnace. Afterward, they undergo intensive chemical processing to separate and purify the valuable materials.
“It requires as much as 10 tons of chemical input to produce one ton of critical material recovered from the mining or battery recycling feedstock,” says Smith. Operators can then sell the recaptured materials back into the supply chain, but suffer from wide swings in profitability due to uncertain market prices. Lithium prices have been the most volatile, having surged more than 400 percent before tumbling back to near-original levels in the past two years. Despite their poor economics and negative environmental impact, these processes remain the state of the art today.
By contrast, SiTration is electrifying the critical-materials recovery process, improving efficiency, producing less chemical waste, and reducing the use of chemicals and heat. What’s more, the company’s processing technology is built to be highly adaptable, so it can handle all kinds of materials.
The core technology is based on work done at MIT to develop a novel type of membrane made from silicon, which is durable enough to withstand harsh chemicals and high temperatures while conducting electricity. It’s also highly tunable, meaning it can be modified or adjusted to suit different conditions or target specific materials.
SiTration’s technology also incorporates electro-extraction, a technique that uses electrochemistry to further isolate and extract specific target materials. This powerful combination of methods in a single system makes it more efficient and effective at isolating and recovering valuable materials, Smith says. So depending on what needs to be separated or extracted, the filtration and electro-extraction processes are adjusted accordingly.
“We can produce membranes with pore sizes from the molecular scale up to the size of a human hair in diameter, and everything in between. Combined with the ability to electrify the membrane and separate based on a material’s electrochemical properties, this tunability allows us to target a vast array of different operations and separation applications across industrial fields,” says Smith.
Efficient access to materials like lithium, cobalt, and copper — and precious metals like platinum, gold, silver, palladium, and rare-earth elements — is key to unlocking innovation in business and sustainability as the world moves toward electrification and away from fossil fuels.

“This is an era when new materials are critical,” says Professor Jeffrey Grossman, co-founder and chief scientist of SiTration and the Morton and Claire Goulder and Family Professor in Environmental Systems at DMSE. “For so many technologies, they’re both the bottleneck and the opportunity, offering tremendous potential for non-incremental advances. And the role they’re having in commercialization and in entrepreneurship cannot be overstated.”

SiTration’s commercial frontier

Smith became interested in separation technology in 2013 as a PhD student in Grossman’s DMSE research group, which has focused on the design of new membrane materials for a range of applications. The two shared a curiosity about separation of critical materials and a hunger to advance the technology. After years of study under Grossman’s mentorship, and with support from several MIT incubators and foundations including the Deshpande Center for Technological Innovation, the Kavanaugh Fellowship, MIT Sandbox, and Venture Mentoring Service, Smith was ready to officially form SiTration in 2020. Grossman has a seat on the board and plays an active role as a strategic and technical advisor.
Grossman is involved in several MIT spinoffs and embraces the different imperatives of research versus commercialization. “At SiTration, we’re driving this technology to work at scale. There’s something super exciting about that goal,” he says. “The challenges that come with scaling are very different than the challenges that come in a university lab.” At the same time, although not every research breakthrough becomes a commercial product, open-ended, curiosity-driven knowledge pursuit holds its own crucial value, he adds.

It has been rewarding for Grossman to see his technically gifted student and colleague develop a host of other skills the role of CEO demands. Early on, the most pressing activities on Smith’s agenda became getting out to the market to talk about the technology with potential partners, putting together a dynamic team, discovering the challenges facing industry, and drumming up support.
“What’s most fun to me about being a CEO of an early-stage startup is that there are 100 different factors, most people-oriented, that you have to navigate every day. Each stakeholder has different motivations and objectives. And you basically try to fit that all together, to create value for our partners and customers, the company, and for society,” says Smith. “You start with just an idea, and you have to keep leveraging that to form a more and more tangible product, to multiply and progress commercial relationships, and do it all at an ever-expanding scale.”
MIT DNA runs deep in the nine-person company, with DMSE grad and former Grossman student Jatin Patil as director of product; Ahmed Helal, from MIT’s Department of Mechanical Engineering, as vice president of research and development; Daniel Bregante, from the Department of Chemistry, as VP of technology; and Sarah Melvin, from the departments of Physics and Political Science, as VP of strategy and operations. Melvin is the first hire devoted to business development. Smith plans to continue expanding the team following the closing of the company’s seed round.  

Strategic alliances

Being a good communicator was important when it came to securing funding, Smith says. SiTration received $2.35 million in pre-seed funding in 2022 led by Azolla Ventures, which reserves its $239 million in investment capital for startups that would not otherwise easily obtain funding. “We invest only in solution areas that can achieve gigaton-scale climate impact by 2050,” says Matthew Nordan, a general partner at Azolla and now SiTration board member. The MIT-affiliated E14 Fund also contributed to the pre-seed round; Azolla and E14 both participated in the recent seed funding round.
“Brendan demonstrated an extraordinary ability to go from being a thoughtful scientist to a business leader and thinker who has punched way above his weight in engaging with customers and recruiting a well-balanced team and navigating tricky markets,” says Nordan.
One of SiTration’s first partnerships is with Rio Tinto, one of the largest mining companies in the world. As SiTration evaluated various use cases in its early days, identifying critical materials as its target market, Rio Tinto was looking for partners to recover valuable metals such as cobalt and copper from the wastewater generated at mines. These metals were typically trapped in the water, creating harmful waste and resulting in lost revenue.
“We thought this was a great innovation challenge and posted it on our website to scout for companies to partner with who can help us solve this water challenge,” said Nick Gurieff, principal advisor for mine closure, in an interview with MIT’s Industrial Liaison Program in 2023.
At SiTration, mining was not yet a market focus, but Smith couldn’t help noticing that Rio Tinto’s needs were in alignment with what his young company offered. SiTration submitted its proposal in August 2022.
Gurieff said SiTration’s tunable membrane set it apart. The companies formed a business partnership in June 2023, with SiTration adjusting its membrane to handle mine wastewater and incorporating Rio Tinto feedback to refine the technology. After running tests with water from mine sites, SiTration will begin building a small-scale critical-materials recovery unit, followed by larger-scale systems processing up to 100 cubic meters of water an hour.

SiTration’s focused technology development with Rio Tinto puts it in a good position for future market growth, Smith says. “Every ounce of effort and resource we put into developing our product is geared towards creating real-world value. Having an industry-leading partner constantly validating our progress is a tremendous advantage.”

It has been a long journey from the days when Smith began tinkering with tiny holes in silicon in Grossman’s DMSE lab. Now, they work together as business partners who are scaling up technology to meet a global need. Their joint passion for applying materials innovation to tough problems has served them well. “Materials science and engineering is an engine for a lot of the innovation that is happening today,” Grossman says. “When you look at all of the challenges we face to make the transition to a more sustainable planet, you realize how many of these are materials challenges.”

Scientists observe record-setting electron mobility in a new crystal film

The newly synthesized material could be the basis for wearable thermoelectric and spintronic devices.

A material with a high electron mobility is like a highway without traffic. Any electrons that flow into the material experience a commuter’s dream, breezing through without any obstacles or congestion to slow or scatter them off their path.

The higher a material’s electron mobility, the more efficient its electrical conductivity, and the less energy is lost or wasted as electrons zip through. Advanced materials that exhibit high electron mobility will be essential for more efficient and sustainable electronic devices that can do more work with less power.
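The link between mobility and conduction efficiency can be made concrete with the standard Drude relation, σ = n·e·μ, where n is the carrier density, e the elementary charge, and μ the mobility. The sketch below uses hypothetical round numbers purely to illustrate the scaling; they are not values from the study.

```python
# Drude relation: conductivity sigma = n * e * mu.
# The carrier density and mobility below are hypothetical round numbers,
# chosen only to illustrate the scaling -- not values from the study.

E_CHARGE = 1.602176634e-19  # elementary charge, in coulombs (CODATA value)

def conductivity(n_per_m3, mobility_m2_per_vs):
    """Return conductivity in siemens per meter: sigma = n * e * mu."""
    return n_per_m3 * E_CHARGE * mobility_m2_per_vs

n = 1e26    # carriers per cubic meter (hypothetical)
mu = 0.1    # mobility in m^2/(V*s), i.e. 1,000 cm^2/(V*s) (hypothetical)

sigma = conductivity(n, mu)  # about 1.6e6 S/m for these inputs
```

Because σ scales linearly with μ at fixed carrier density, doubling the mobility doubles the conductivity: the same current flows with less dissipated energy, which is exactly why high-mobility films matter for efficient electronics.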

Now, physicists at MIT, the Army Research Lab, and elsewhere have achieved a record-setting level of electron mobility in a thin film of ternary tetradymite — a class of mineral that is naturally found in deep hydrothermal deposits of gold and quartz.

For this study, the scientists grew pure, ultrathin films of the material, in a way that minimized defects in its crystalline structure. They found that this nearly perfect film — much thinner than a human hair — exhibits the highest electron mobility in its class.

The team was able to estimate the material’s electron mobility by detecting quantum oscillations when an electric current passes through. These oscillations are a signature of the quantum mechanical behavior of electrons in a material. The researchers detected a particular rhythm of oscillations characteristic of an electron mobility higher than that of any ternary thin film of this class measured to date.

“Before, what people had achieved in terms of electron mobility in these systems was like traffic on a road under construction — you’re backed up, you can’t drive, it’s dusty, and it’s a mess,” says Jagadeesh Moodera, a senior research scientist in MIT’s Department of Physics. “In this newly optimized material, it’s like driving on the Mass Pike with no traffic.”

The team’s results, which appear today in the journal Materials Today Physics, point to ternary tetradymite thin films as a promising material for future electronics, such as wearable thermoelectric devices that efficiently convert waste heat into electricity. (Tetradymites are the active materials that cause the cooling effect in commercial thermoelectric coolers.) The material could also be the basis for spintronic devices, which process information using an electron’s spin, using far less power than conventional silicon-based devices.

The study also uses quantum oscillations as a highly effective tool for measuring a material’s electronic performance.

“We are using this oscillation as a rapid test kit,” says study author Hang Chi, a former research scientist at MIT who is now at the University of Ottawa. “By studying this delicate quantum dance of electrons, scientists can start to understand and identify new materials for the next generation of technologies that will power our world.”

Chi and Moodera’s co-authors include Patrick Taylor, formerly of MIT Lincoln Laboratory, along with Owen Vail and Harry Hier of the Army Research Lab, and Brandi Wooten and Joseph Heremans of Ohio State University.

Beam down

The name “tetradymite” derives from the Greek “tetra” for “four,” and “dymite,” meaning “twin.” Both terms describe the mineral’s crystal structure, which consists of rhombohedral crystals that are “twinned” in groups of four — i.e. they have identical crystal structures that share a side.

Tetradymites comprise combinations of bismuth, antimony, tellurium, sulfur, and selenium. In the 1950s, scientists found that tetradymites exhibit semiconducting properties that could be ideal for thermoelectric applications: The mineral in its bulk crystal form was able to passively convert heat into electricity.

Then, in the 1990s, the late Institute Professor Mildred Dresselhaus proposed that the mineral’s thermoelectric properties might be significantly enhanced, not in its bulk form but within its microscopic, nanometer-scale surface, where the interactions of electrons are more pronounced. (Heremans happened to work in Dresselhaus’ group at the time.)

“It became clear that when you look at this material long enough and close enough, new things will happen,” Chi says. “This material was identified as a topological insulator, where scientists could see very interesting phenomena on their surface. But to keep uncovering new things, we have to master the material growth.”

To grow thin films of pure crystal, the researchers employed molecular beam epitaxy — a method by which a beam of molecules is fired at a substrate, typically in a vacuum, and with precisely controlled temperatures. When the molecules deposit on the substrate, they condense and build up slowly, one atomic layer at a time. By controlling the timing and type of molecules deposited, scientists can grow ultrathin crystal films in exact configurations, with few if any defects.

“Normally, bismuth and tellurium can interchange their position, which creates defects in the crystal,” co-author Taylor explains. “The system we used to grow these films came down with me from MIT Lincoln Laboratory, where we use high purity materials to minimize impurities to undetectable limits. It is the perfect tool to explore this research.”

Free flow

The team grew thin films of ternary tetradymite, each about 100 nanometers thick. They then tested the films’ electronic properties by looking for Shubnikov-de Haas quantum oscillations — a phenomenon discovered by physicists Lev Shubnikov and Wander de Haas, who found that a material’s electrical conductivity can oscillate when the material is exposed to a strong magnetic field at low temperatures. This effect occurs because the material’s electrons fill up specific energy levels that shift as the magnetic field changes.

Such quantum oscillations serve as a signature of a material’s electronic structure and of the ways in which its electrons behave and interact. Most notably for the MIT team, the oscillations can reveal a material’s electron mobility: if oscillations are present, the material’s electrical resistance must be able to change, which in turn implies that its electrons are mobile and can be made to flow easily.

The team looked for signs of quantum oscillations in their new films, by first exposing them to ultracold temperatures and a strong magnetic field, then running an electric current through the film and measuring the voltage along its path, as they tuned the magnetic field up and down.
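The measurement described above exploits the fact that Shubnikov-de Haas oscillations are periodic in the inverse magnetic field, 1/B. A minimal sketch of how such data can be analyzed (not the authors' actual analysis code; the sweep range and oscillation frequency below are illustrative placeholders) is to sample resistance uniformly in 1/B and take a Fourier transform, whose dominant peak gives the oscillation frequency F in tesla:

```python
import numpy as np

def sdh_frequency(inv_B, resistance):
    """Return the dominant oscillation frequency (in tesla) of R vs. 1/B."""
    step = inv_B[1] - inv_B[0]                    # uniform spacing in 1/B
    signal = resistance - resistance.mean()       # strip the non-oscillating background
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=step)  # frequencies in tesla
    return freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin

# Synthetic data with a single made-up oscillation frequency F = 150 T,
# mimicking a field sweep from 14 T down to 2 T.
F_true = 150.0
inv_B = np.linspace(1 / 14.0, 1 / 2.0, 4096)
R = 10.0 + 0.05 * np.cos(2 * np.pi * F_true * inv_B)

F_est = sdh_frequency(inv_B, R)

# Assuming a spin-degenerate 2D Fermi surface, the Onsager relation links
# the frequency to a sheet carrier density: n = 2eF/h.
e, h = 1.602e-19, 6.626e-34
n_2d = 2 * e * F_est / h   # carriers per square meter
```

Real data would also require subtracting a smooth magnetoresistance background and interpolating measurements onto an even 1/B grid before the transform.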

“It turns out, to our great joy and excitement, that the material’s electrical resistance oscillates,” Chi says. “Immediately, that tells you that this has very high electron mobility.”

Specifically, the team estimates that the ternary tetradymite thin film exhibits an electron mobility of 10,000 cm²/V·s — the highest mobility of any ternary tetradymite film yet measured. The team suspects that the film’s record mobility has something to do with its low defects and impurities, which they were able to minimize with their precise growth strategies. The fewer a material’s defects, the fewer obstacles an electron encounters, and the more freely it can flow.
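To get a feel for what a mobility of 10,000 cm²/V·s means physically, the Drude relation μ = eτ/m* converts it into a mean time between scattering events. The effective mass used below is an assumed illustrative value, not a number from the study:

```python
# Back-of-envelope sketch: scattering time implied by the reported mobility,
# via the Drude relation mu = e * tau / m_eff.
e = 1.602e-19          # electron charge, C
m_e = 9.109e-31        # free-electron mass, kg
mu = 10_000 * 1e-4     # 10,000 cm^2/V-s converted to m^2/V-s (= 1.0)
m_eff = 0.1 * m_e      # hypothetical effective mass, chosen for illustration

tau = mu * m_eff / e   # mean time between scattering events, in seconds
# tau comes out to roughly 6e-13 s — about half a picosecond of free flight,
# consistent with the "highway without traffic" picture above.
```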

“This is showing it’s possible to go a giant step further, when properly controlling these complex systems,” Moodera says. “This tells us we’re in the right direction, and we have the right system to proceed further, to keep perfecting this material down to even much thinner films and proximity coupling for use in future spintronics and wearable thermoelectric devices.”

This research was supported in part by the Army Research Office, National Science Foundation, Office of Naval Research, Canada Research Chairs Program and Natural Sciences and Engineering Research Council of Canada.

Creating the crossroads

Through academia and industry, Gevorg Grigoryan PhD ’07 says there is no right path — just the path that works for you.

A few years ago, Gevorg Grigoryan PhD ’07, then a professor at Dartmouth College, had been pondering an idea for data-driven protein design for therapeutic applications. Unsure how to move forward with launching that concept into a company, he dug up an old syllabus from an entrepreneurship course he took during his PhD at MIT and decided to email the instructor for the class.

He labored over the email for hours. It went from a few sentences to three pages, then back to a few sentences. Grigoryan finally hit send in the wee hours of the morning.

Just 15 minutes later, he received a response from Noubar Afeyan PhD ’87, the CEO and co-founder of venture capital company Flagship Pioneering (and the commencement speaker for the 2024 OneMIT Ceremony).

That ultimately led Grigoryan, Afeyan, and others to co-found Generate:Biomedicines, where Grigoryan now serves as chief technology officer.

“Success is defined by who is evaluating you,” Grigoryan says. “There is no right path — the best path for you is the one that works for you.”

Generalizing principles and improving lives

Generate:Biomedicines is the culmination of decades of advancements in machine learning, biological engineering, and medicine. Until recently, de novo design of a protein was extremely labor intensive, requiring months or years of computational methods and experiments.

“Now, we can just push a button and have a generative model spit out a new protein with close to perfect probability it will actually work. It will fold. It will have the structure you’re intending,” Grigoryan says. “I think we’ve unearthed these generalizable principles for how to approach understanding complex systems, and I think it’s going to keep working.”

Drug development was an obvious application for his work early on. Grigoryan says part of the reason he left academia — at least for now — is the resources available for this cutting-edge work.

“Our space has a rather exciting and noble reason for existing,” he says. “We’re looking to improve human lives.”

Mixing disciplines

Mixed-discipline STEM majors are increasingly common, but when Grigoryan was an undergraduate, little-to-no infrastructure existed for such an education. 

“There was this emerging intersection between physics, biology, and computational sciences,” Grigoryan recalls. “It wasn’t like there was this robust discipline at the intersection of those things — but I felt like there could be, and maybe I could be part of creating one.”

He majored in biochemistry and computer science, much to the confusion of his advisors for each major. This was so unprecedented that there wasn’t even guidance for which group he should walk with at graduation.

Heading to Cambridge

Grigoryan admits his decision to attend MIT in the Department of Biology wasn’t systematic.

“I was like, ‘MIT sounds great — strong faculty, good techie school, good city. I’m sure I’ll figure something out,’” he says. “I can’t emphasize enough how important and formative those years at MIT were to who I ultimately became as a scientist.”

He worked with Amy Keating, then a junior faculty member, now head of the Department of Biology, modeling protein-protein interactions. The work involved physics, math, chemistry, and biology. The computational and systems biology PhD program was still a few years away, but the developing field was being recognized as important.

Keating remains an advisor and confidant to this day. Grigoryan also commends her for her commitment to mentoring while balancing the demands of a faculty position — acquiring funding, running a research lab, and teaching.

“It’s hard to make time to truly advise and help your students grow, but Amy is someone who took it very seriously and was very intentional about it,” Grigoryan says. “We spent a lot of time discussing ideas and doing science. The kind of impact that one can have through mentorship is hard to overestimate.”

Grigoryan next pursued a postdoc at the University of Pennsylvania with William “Bill” DeGrado, continuing to focus on protein design while gaining more experience in experimental approaches and exposure to thinking about proteins differently.

Just by examining them, DeGrado had an intuitive understanding of molecules — anticipating their functionality or what mutations would disrupt that functionality. His predictive skill surpassed the abilities of computer modeling at the time.

Grigoryan began to wonder: Could computational models use prior observations to be at least as predictive as someone who spent a lot of time considering and observing the structure and function of those molecules?

Grigoryan next went to Dartmouth for a faculty position in computer science with cross-appointments in biology and chemistry to explore that question.

Balancing industry and academia

Much of science is about trial and error, but early on, Grigoryan showed that accurate predictions of proteins and how they would bind, bond, and behave didn’t require starting from first principles. Models became more accurate by solving more structures and taking more binding measurements.

Grigoryan credits the leaders at Flagship Pioneering for their initial confidence in the possible applications for this concept — more bullish, at the time, than Grigoryan himself.

He spent four years splitting his time between Dartmouth and Cambridge and ultimately decided to leave academia altogether.

“It was inevitable because I was just so in love with what we had built at Generate,” he says. “It was so exciting for me to see this idea come to fruition.”

Pause or grow

Grigoryan says the most important thing for a company is to scale at the right time, to balance “hitting the iron while it’s hot” while considering the readiness of the company, the technology, and the market.

But even successful growth creates its own challenges.

When there are fewer than two dozen people, aligning strategies across a company is straightforward: Everyone can be in the room. However, growth — say, expanding to 200 employees — requires more deliberate communication and a balance between agility and maintaining the company’s culture and identity.

“Growing is tough,” he says. “And it takes a lot of intentional effort, time, and energy to ensure a transparent culture that allows the team to thrive.”

Grigoryan’s time in academia was invaluable for learning that “everything is about people” — but academia and industry require different mindsets.

“Being a PI [principal investigator] is about creating a lane for each of your trainees, where they’re essentially somewhat independent scientists,” he says. “In a company, by construction, you are bound by a set of common goals, and you have to value your work by the amount of synergy that it has with others, as opposed to what you can do only by yourself.” 

Scientists use computational modeling to guide a difficult chemical synthesis

Using this new approach, researchers could develop drug compounds with unique pharmaceutical properties.

Researchers from MIT and the University of Michigan have discovered a new way to drive chemical reactions that could generate a wide variety of compounds with desirable pharmaceutical properties.

These compounds, known as azetidines, are characterized by four-membered rings that include nitrogen. Azetidines have traditionally been much more difficult to synthesize than five-membered nitrogen-containing rings, which are found in many FDA-approved drugs.

The reaction that the researchers used to create azetidines is driven by a photocatalyst that excites the molecules from their ground energy state. Using computational models that they developed, the researchers were able to predict compounds that can react with each other to form azetidines using this kind of catalysis.

“Going forward, rather than using a trial-and-error process, people can prescreen compounds and know beforehand which substrates will work and which ones won’t,” says Heather Kulik, an associate professor of chemistry and chemical engineering at MIT.

Kulik and Corinna Schindler, a professor of chemistry at the University of Michigan, are the senior authors of the study, which appears today in Science. Emily Wearing, who recently completed her graduate studies at the University of Michigan, is the lead author of the paper. Other authors include University of Michigan postdoc Yu-Cheng Yeh, MIT graduate student Gianmarco Terrones, University of Michigan graduate student Seren Parikh, and MIT postdoc Ilia Kevlishvili.

Light-driven synthesis

Many naturally occurring molecules, including vitamins, nucleic acids, enzymes, and hormones, contain five-membered nitrogen-containing rings, also known as nitrogen heterocycles. These rings are also found in more than half of all FDA-approved small-molecule drugs, including many antibiotics and cancer drugs.

Four-membered nitrogen heterocycles, which are rarely found in nature, also hold potential as drug compounds. However, only a handful of existing drugs, including penicillin, contain four-membered heterocycles, in part because these four-membered rings are much more difficult to synthesize than five-membered heterocycles.

In recent years, Schindler’s lab has been working on synthesizing azetidines using light to drive a reaction that combines two precursors, an alkene and an oxime. These reactions require a photocatalyst, which absorbs light and passes the energy to the reactants, making it possible for them to react with each other.

“The catalyst can transfer that energy to another molecule, which moves the molecules into excited states and makes them more reactive. This is a tool that people are starting to use to make it possible to make certain reactions occur that wouldn't normally occur,” Kulik says.

Schindler’s lab found that while this reaction sometimes worked well, other times it did not, depending on which reactants were used. They enlisted Kulik, an expert in developing computational approaches to modeling chemical reactions, to help them figure out how to predict when these reactions will occur.

The two labs hypothesized that whether a particular alkene and oxime will react together in a photocatalyzed reaction depends on a property known as the frontier orbital energy match. Electrons that surround the nucleus of an atom exist in orbitals, and quantum mechanics can be used to predict the shape and energies of these orbitals. For chemical reactions, the most important electrons are those in the outermost, highest energy (“frontier”) orbitals, which are available to react with other molecules.

Kulik and her students used density functional theory, which uses the Schrödinger equation to predict where electrons could be and how much energy they have, to calculate the orbital energy of these outermost electrons.

These energy levels are also affected by other groups of atoms attached to the molecule, which can change the properties of the electrons in the outermost orbitals.

Once those energy levels are calculated, the researchers can identify reactants that have similar energy levels when the photocatalyst boosts them into an excited state. When the excited states of an alkene and an oxime are closely matched, less energy is required to boost the reaction to its transition state — the point at which the reaction has enough energy to go forward to form products.
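The screening logic described above — pair up reactants whose excited-state frontier orbital energies are closely matched — can be sketched in a few lines. The energies and cutoff below are made-up placeholder values for illustration, not data from the study (which computed the real energies with density functional theory):

```python
# Hypothetical excited-state frontier orbital energies, in eV.
ALKENES = {"alkene_A": -5.2, "alkene_B": -4.6, "alkene_C": -6.1}
OXIMES = {"oxime_X": -5.0, "oxime_Y": -6.0}
THRESHOLD = 0.3  # eV; assumed maximum allowed energy mismatch

def predicted_pairs(alkenes, oximes, threshold):
    """Return (alkene, oxime) pairs whose frontier orbital energies match."""
    return [
        (a, o)
        for a, ea in alkenes.items()
        for o, eo in oximes.items()
        if abs(ea - eo) <= threshold
    ]

print(predicted_pairs(ALKENES, OXIMES, THRESHOLD))
```

Once the expensive quantum-chemistry step of computing the energies is done, a screen like this costs essentially nothing per candidate pair — which is why the predictions take "a matter of seconds."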

Accurate predictions

After calculating the frontier orbital energies for 16 different alkenes and nine oximes, the researchers used their computational model to predict whether 27 different alkene-oxime pairs would react together to form an azetidine. With the calculations in hand, these predictions can be made in a matter of seconds.

The researchers also modeled a factor that influences the overall yield of the reaction: a measure of how available the carbon atoms in the oxime are to participate in chemical reactions.

The model’s predictions suggested that some of these reactions won’t occur or won’t give a high enough yield. However, the model also correctly predicted that a significant number of the reactions would work.

“Based on our model, there's a much wider range of substrates for this azetidine synthesis than people thought before. People didn't really think that all of this was accessible,” Kulik says.

Of the 27 combinations that they studied computationally, the researchers tested 18 reactions experimentally, and they found that most of their predictions were accurate. Among the compounds they synthesized were derivatives of two drug compounds that are currently FDA-approved: amoxapine, an antidepressant, and indomethacin, a pain reliever used to treat arthritis.

This computational approach could help pharmaceutical companies predict molecules that will react together to form potentially useful compounds, before spending a lot of money to develop a synthesis that might not work, Kulik says. She and Schindler are continuing to work together on other kinds of novel syntheses, including the formation of compounds with three-membered rings.

“Using photocatalysts to excite substrates is a very active and hot area of development, because people have exhausted what you can do on the ground state or with radical chemistry,” Kulik says. “I think this approach is going to have a lot more applications to make molecules that are normally thought of as really challenging to make.”

CHARMed collaboration creates a potent therapy candidate for fatal prion diseases

A new gene-silencing tool shows promise as a future therapy against prion diseases and paves the way for new approaches to treating disease.

Drug development is typically slow: The pipeline from basic research discoveries that provide the basis for a new drug to clinical trials and then production of a widely available medicine can take decades. But decades can feel impossibly far off to someone who currently has a fatal disease. Broad Institute of MIT and Harvard Senior Group Leader Sonia Vallabh is acutely aware of that race against time, because the topic of her research is a neurodegenerative and ultimately fatal disease — fatal familial insomnia, a type of prion disease — that she will almost certainly develop as she ages. 

Vallabh and her husband, Eric Minikel, switched careers and became researchers after they learned that Vallabh carries a disease-causing version of the prion protein gene and that there is no effective therapy for fatal prion diseases. The two now run a lab at the Broad Institute, where they are working to develop drugs that can prevent and treat these diseases, and their deadline for success is not based on grant cycles or academic expectations but on the ticking time bomb in Vallabh’s genetic code.

That is why Vallabh was excited to discover, when she entered into a collaboration with Whitehead Institute for Biomedical Research member Jonathan Weissman, that Weissman’s group likes to work at full throttle. In less than two years, Weissman, Vallabh, and their collaborators have developed a set of molecular tools called CHARMs that can turn off disease-causing genes such as the prion protein gene — as well as, potentially, genes coding for many other proteins implicated in neurodegenerative and other diseases — and they are refining those tools to be good candidates for use in human patients. Although the tools still have many hurdles to pass before the researchers will know if they work as therapeutics, the team is encouraged by the speed with which they have developed the technology thus far.

“The spirit of the collaboration since the beginning has been that there was no waiting on formality,” Vallabh says. “As soon as we realized our mutual excitement to do this, everything was off to the races.”

Co-corresponding authors Weissman and Vallabh and co-first authors Edwin Neumann, a graduate student in Weissman’s lab, and Tessa Bertozzi, a postdoc in Weissman’s lab, describe CHARM — which stands for Coupled Histone tail for Autoinhibition Release of Methyltransferase — in a paper published today in the journal Science.

“With the Whitehead and Broad Institutes right next door to each other, I don’t think there’s any better place than this for a group of motivated people to move quickly and flexibly in the pursuit of academic science and medical technology,” says Weissman, who is also a professor of biology at MIT and a Howard Hughes Medical Institute Investigator. “CHARMs are an elegant solution to the problem of silencing disease genes, and they have the potential to have an important position in the future of genetic medicines.”

To treat a genetic disease, target the gene

Prion disease, which leads to swift neurodegeneration and death, is caused by the presence of misshapen versions of the prion protein. These cause a cascade effect in the brain: the faulty prion proteins deform other proteins, and together these proteins not only stop functioning properly but also form toxic aggregates that kill neurons. The most famous type of prion disease, known colloquially as mad cow disease, is infectious, but other forms of prion disease can occur spontaneously or be caused by faulty prion protein genes.

Most conventional drugs work by targeting a protein. CHARMs, however, work further upstream, turning off the gene that codes for the faulty protein so that the protein never gets made in the first place. CHARMs do this by epigenetic editing, in which a chemical tag gets added to DNA in order to turn off or silence a target gene. Unlike gene editing, epigenetic editing does not modify the underlying DNA — the gene itself remains intact. However, like gene editing, epigenetic editing is stable, meaning that a gene switched off by CHARM should remain off. This would mean patients would only have to take CHARM once, as opposed to protein-targeting medications that must be taken regularly as the cells’ protein levels replenish.

Research in animals suggests that the prion protein isn’t necessary in a healthy adult, and that in cases of disease, removing the protein improves or even eliminates disease symptoms. In a person who hasn’t yet developed symptoms, removing the protein should prevent disease altogether. In other words, epigenetic editing could be an effective approach for treating genetic diseases such as inherited prion diseases. The challenge is creating a new type of therapy.

Fortunately, the team had a good template for CHARM: a research tool called CRISPRoff that Weissman’s group previously developed for silencing genes. CRISPRoff uses building blocks from CRISPR gene editing technology, including the guide protein Cas9 that directs the tool to the target gene. CRISPRoff silences the targeted gene by adding methyl groups, chemical tags that prevent the gene from being transcribed, or read into RNA, and so from being expressed as protein. When the researchers tested CRISPRoff’s ability to silence the prion protein gene, they found that it was effective and stable.

Several of its properties, though, prevented CRISPRoff from being a good candidate for a therapy. The researchers’ goal was to create a tool based on CRISPRoff that was just as potent but also safe for use in humans, small enough to deliver to the brain, and designed to minimize the risk of silencing the wrong genes or causing side effects.

From research tool to drug candidate

Led by Neumann and Bertozzi, the researchers began engineering and applying their new epigenome editor. The first problem that they had to tackle was size, because the editor needs to be small enough to be packaged and delivered to specific cells in the body. Delivering genes into the human brain is challenging; many clinical trials have used adeno-associated viruses (AAVs) as gene-delivery vehicles, but these are small and can only contain a small amount of genetic code. CRISPRoff is far too big; the code for Cas9 alone takes up most of the available space.

The Weissman lab researchers decided to replace Cas9 with a much smaller zinc finger protein (ZFP). Like Cas9, ZFPs can serve as guide proteins to direct the tool to a target site in DNA. ZFPs are also common in human cells, meaning they are less likely to trigger an immune response against themselves than the bacterial Cas9.

Next, the researchers had to design the part of the tool that would silence the prion protein gene. At first, they used part of a methyltransferase, a molecule that adds methyl groups to DNA, called DNMT3A. However, in the particular configuration needed for the tool, the molecule was toxic to the cell. The researchers focused on a different solution: Instead of delivering DNMT3A from outside the cell as part of the therapy, the tool recruits the cell’s own DNMT3A to the prion protein gene. This freed up precious space inside the AAV vector and prevented toxicity.

The researchers also needed to activate DNMT3A. In the cell, DNMT3A is usually inactive until it interacts with certain partner molecules. This default inactivity prevents accidental methylation of genes that need to remain turned on. Neumann came up with an ingenious way around this by combining sections of DNMT3A’s partner molecules and connecting these to ZFPs that bring them to the prion protein gene. When the cell’s DNMT3A comes across this combination of parts, it activates, silencing the gene.

“From the perspectives of both toxicity and size, it made sense to recruit the machinery that the cell already has; it was a much simpler, more elegant solution,” Neumann says. “Cells are already using methyltransferases all of the time, and we’re essentially just tricking them into turning off a gene that they would normally leave turned on.”

Testing in mice showed that ZFP-guided CHARMs could eliminate more than 80 percent of the prion protein in the brain, while previous research has shown that as little as 21 percent elimination can improve symptoms.

Once the researchers knew that they had a potent gene silencer, they turned to the problem of off-target effects. The genetic code for a CHARM that gets delivered to a cell will keep producing copies of the CHARM indefinitely. However, after the prion protein gene is switched off, there is no benefit to this, only more time for side effects to develop, so they tweaked the tool so that after it turns off the prion protein gene, it then turns itself off.

Meanwhile, a complementary project from Broad Institute scientist and collaborator Benjamin Deverman’s lab, focused on brain-wide gene delivery and published in Science on May 17, has brought the CHARM technology one step closer to being ready for clinical trials. Although naturally occurring types of AAV have been used for gene therapy in humans before, they do not enter the adult brain efficiently, making it impossible to treat a whole-brain disease like prion disease. Tackling the delivery problem, Deverman’s group has designed an AAV vector that can get into the brain more efficiently by leveraging a pathway that naturally shuttles iron into the brain. Engineered vectors like this one make a therapy like CHARM one step closer to reality.

Thanks to these creative solutions, the researchers now have a highly effective epigenetic editor that is small enough to deliver to the brain, and that appears in cell culture and animal testing to have low toxicity and limited off-target effects.

“It’s been a privilege to be part of this; it’s pretty rare to go from basic research to therapeutic application in such a short amount of time,” Bertozzi says. “I think the key was forming a collaboration that took advantage of the Weissman lab’s tool-building experience, the Vallabh and Minikel lab’s deep knowledge of the disease, and the Deverman lab’s expertise in gene delivery.”

Looking ahead

With the major elements of the CHARM technology solved, the team is now fine-tuning their tool to make it more effective, safer, and easier to produce at scale, as will be necessary for clinical trials. They have already made the tool modular, so that its various pieces can be swapped out and future CHARMs won’t have to be programmed from scratch. CHARMs are also currently being tested as therapeutics in mice. 

The path from basic research to clinical trials is a long and winding one, and the researchers know that CHARMs still have a way to go before they might become a viable medical option for people with prion diseases, including Vallabh, or other diseases with similar genetic components. However, with a strong therapy design and promising laboratory results in hand, the researchers have good reason to be hopeful. They continue to work at full throttle, intent on developing their technology so that it can save patients’ lives not someday, but as soon as possible.

“UnrulyArt” creates joy and engagement, regardless of ability

Researchers and staff from MIT, including from the Simons Center for the Social Brain, collaborated with schoolchildren with special needs to create art, have fun, and learn from each other.

An unmistakable takeaway from sessions of “UnrulyArt” is that all those “-n’ts” — can’t, needn’t, shouldn’t, won’t — which can lead people to exclude children with disabilities or cognitive, social, and behavioral impairments from creative activities, aren’t really rules. They are merely assumptions and stigmas.

When a session ends and the paint that was once flying is now just drying, the rewards that emerge are more than the individual works the children and their volunteer helpers created. There is also the joy and the intellectual engagement that may have been experienced differently but could nevertheless be shared equally between the children and the volunteers.

When MIT professor Pawan Sinha first launched UnrulyArt in 2012, his motivation was to share the joy and fulfillment he personally found in art with children in India who had just gained their sense of sight through a program he founded called Project Prakash.

“I felt that this is an activity that may also be fun for children who have not had an opportunity to engage in art,” says Sinha, professor of vision and computational neuroscience in the Department of Brain and Cognitive Sciences (BCS). “Children with disabilities are especially deprived in this context. Societal attitudes toward art can keep it away from children who suffer from different kinds of cognitive, sensory, or motoric challenges.”

Margaret Kjelgaard, an assistant professor at Bridgewater State University and Sinha’s longtime colleague in autism research and in convening UnrulyArt sessions, says that the point of the art is the experience of creation, not demonstrations of skill.

“It’s not about fine art and being precise,” says Kjelgaard, whose autistic son had a blast participating in his own UnrulyArt session a decade ago and still enjoys art. “It’s about just creating beautiful things without constraint.”

UnrulyArt’s ability to edify both children with developmental disabilities and the scientists who study their conditions aligns closely with the mission of the Simons Center for the Social Brain (SCSB), says Director Mriganka Sur. That’s why SCSB sponsored and helped to staff four sessions of UnrulyArt recently in Belmont and Burlington, Massachusetts.

“As an academic research center, SCSB activities focus mainly on science and scientists,” says Sur, the Newton Professor in BCS and The Picower Institute for Learning and Memory at MIT. “Our team thought this would be a wonderful opportunity for us to do something outside the box.”

Getting unruly

At a session in a small event hall in Burlington, SCSB postdocs and administrators and members of Sinha’s lab laid down tarps and set up stations of materials for dozens of elementary school children from the LABBB Educational Collaborative, which provides special education services to schoolchildren from ages 3 through 22 from local communities. In all, UnrulyArt hosted approximately 60 children across four sessions earlier this spring, says program director Donna Goodell.

“It’s also a wonderful social opportunity as we bring different cohorts of students together to participate,” she notes.

With the room set up, kids came right in to get unruly with the facilitation of volunteers. Some children painted on sheets of paper at tables, as any other children would. Other children opted to skate around on globs of paint on a huge piece of paper on the floor. Many others, including some in wheelchairs who struggled to hold a brush, were aided by materials and techniques cleverly conceived to enable aesthetic results.

For instance, children of all abilities could drop dollops of paint on paper that, when folded over, created a symmetric design. Others freely slathered paints on boards that had been pre-masked with tape so that when the tape was removed, the final image took on the hidden structure. Yet others did the same with smaller boards where removal of a heart-shaped mask revealed a heart of a different color.

One youngster sitting on the floor with Sinha Lab graduate student Charlie Shvartsman was elated to learn that he was free to drop paint on paper and then slap it hard with his hands.

Researcher reflections

The volunteers worked hard, not only setting up and facilitating but also drying paintings and cleaning up after each session. Several of them expressed a deep sense of personal and intellectual reward from the experience.

“I paint as a hobby and wanted to experience how children on the autism spectrum react to the media, which I find very relaxing,” says Chhavi Sood, a Simons Fellow in the lab of Menicon Professor Troy Littleton in BCS, the Department of Biology, and The Picower Institute.

Sood works with fruit flies to study the molecular mechanisms by which mutation in an autism-associated gene affects neural circuit connections.

“[UnrulyArt] puts a human face to the condition and makes me appreciate the diversity of the autism spectrum,” she says. “My work is far from behavioral studies. This experience broadened my understanding of how autism spectrum disorder can manifest differently in people.”

Simons Fellow Tomoe Ishikawa, who works in the lab of BCS and Picower Institute Associate Professor Gloria Choi, says she, too, benefited from the chance to observe the children’s behavior as she helped them. She says she saw exciting moments of creativity, but also notable moments where self-control seemed challenging. As she is studying social behavior using mouse models in the lab, she says UnrulyArt helped increase her motivation to discover new therapies that could help autistic children with behavioral challenges.

Suayb Arslan, a visiting scholar in Sinha’s lab who studies human visual perception, saw many connections between his work and what unfolded at UnrulyArt. This was visual art, after all, but he also noted the importance of creativity in many facets of life, including research. And Arslan valued the chance to work with children with different challenges to see how they processed what they were seeing.

He anticipated that the experience would be so valuable that he came with his wife Beyza and his daughter Reyyan, who made several creations alongside the other kids. Reyyan, he says, is enrolled in a preschool program in Cambridge that by design includes typically developing children like her with kids with various challenges and differences.

“I think that it’s important that she be around these kids to sit down together with them and enjoy the time with them, have fun with them and with the colors,” Arslan says.

What happens during the first moments of butterfly scale formation

New findings could help engineers design materials for light and heat management.

A butterfly’s wing is covered in hundreds of thousands of tiny scales like miniature shingles on a paper-thin roof. A single scale is as small as a speck of dust yet surprisingly complex, with a corrugated surface of ridges that help to wick away water, manage heat, and reflect light to give a butterfly its signature shimmer.

MIT researchers have now captured the initial moments during a butterfly’s metamorphosis, as an individual scale begins to develop this ridged pattern. The researchers used advanced imaging techniques to observe the microscopic features on a developing wing, while the butterfly transformed in its chrysalis.

The team continuously imaged individual scales as they grew out from the wing’s membrane. These images reveal for the first time how a scale’s initially smooth surface begins to wrinkle to form microscopic, parallel undulations. The ripple-like structures eventually grow into finely patterned ridges, which define the functions of an adult scale.

The researchers found that the scale’s transition to a corrugated surface is likely a result of “buckling” — a general mechanism that describes how a smooth surface wrinkles as it grows within a confined space.

“Buckling is an instability, something that we usually don’t want to happen as engineers,” says Mathias Kolle, associate professor of mechanical engineering at MIT. “But in this context, the organism uses buckling to initiate the growth of these intricate, functional structures.”

The team is working to visualize more stages of butterfly wing growth in hopes of revealing clues to how they might design advanced functional materials in the future.

“Given the multifunctionality of butterfly scales, we hope to understand and emulate these processes, with the aim of sustainably designing and fabricating new functional materials. These materials would exhibit tailored optical, thermal, chemical, and mechanical properties for textiles, building surfaces, vehicles — really, for generally any surface that needs to exhibit characteristics that depend on its micro- and nanoscale structure,” Kolle adds.

The team has published their results in a study appearing today in the journal Cell Reports Physical Science. The study’s co-authors include first author and former MIT postdoc Jan Totz, joint first author and postdoc Anthony McDougal, graduate student Leonie Wagner, former postdoc Sungsam Kang, professor of mechanical engineering and biomedical engineering Peter So, professor of mathematics Jörn Dunkel, and professor of material physics and chemistry Bodo Wilts of the University of Salzburg.

A live transformation

In 2021, McDougal, Kolle and their colleagues developed an approach to continuously capture microscopic details of wing growth in a butterfly during its metamorphosis. Their method involved carefully cutting through the insect’s paper-thin chrysalis and peeling away a small square of cuticle to reveal the wing’s growing membrane. They placed a small glass slide over the exposed area, then used a microscope technique developed by team member Peter So to capture continuous images of scales as they grew out of the wing membrane.

They applied the method to observe Vanessa cardui, a butterfly commonly known as a Painted Lady, which the team chose for its scale architecture, common to most lepidopteran species. They observed that Painted Lady scales grew along a wing membrane in precise, overlapping rows, like shingles on a rooftop. Those images provided scientists with the most continuous visualization of live butterfly wing scale growth at the microscale to date.

Four images show the butterfly; the butterfly’s scales; the ridges of a single scale; and an extreme closeup of a few ridges.

In their new study, the team used the same approach to focus on a specific time window during scale development, to capture the initial formation of the finely structured ridges that run along a single scale in a living butterfly. Scientists know that these ridges, which run parallel to each other along the length of a single scale, like stripes in a patch of corduroy, enable many of the functions of the wing scales.

Since little is known about how these ridges are formed, the MIT team aimed to record the continuous formation of ridges in a live, developing butterfly, and decipher the organism’s ridge formation mechanisms.

“We watched the wing develop over 10 days, and got thousands of measurements of how the surfaces of scales changed on a single butterfly,” McDougal says. “We could see that early on, the surface is quite flat. As the butterfly grows, the surface begins to pop up a little bit, and then at around 41 percent of development, we see this very regular pattern of completely popped up protoridges. This whole process happens over about five hours and lays the structural foundation for the subsequent expression of patterned ridges.”

Pinned down

What might be causing the initial ridges to pop up in precise alignment? The researchers suspected that buckling might be at play. Buckling is a mechanical process by which a material bows in on itself when subjected to compressive forces. For instance, an empty soda can buckles when squeezed from the top down. A material can also buckle as it grows, if it is constrained, or pinned in place.

Scientists have noted that, as the cell membrane of a butterfly’s scale grows, it is effectively pinned in certain places by actin bundles — long filaments that run under the growing membrane and act as a scaffold to support the scale as it takes shape. Scientists have hypothesized that actin bundles constrain a growing membrane, similar to ropes around an inflating hot air balloon. As the butterfly’s wing scale grows, they proposed, it would bulge out between the underlying actin filaments, buckling in a way that forms a scale’s initial, parallel ridges.

To test this idea, the MIT team looked to a theoretical model that describes the general mechanics of buckling. They incorporated image data into the model, such as measurements of a scale membrane’s height at various early stages of development, and various spacings of actin bundles across a growing membrane. They then ran the model forward in time to see whether its underlying principles of mechanical buckling would produce the same ridge patterns that the team observed in the actual butterfly.
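The intuition behind growth-induced buckling between pins can be sketched with a small-slope estimate. The toy calculation below is our own illustration under simplifying assumptions (a membrane segment pinned at both ends that buckles into a single sine arc), not the team’s model, which is a full mechanical simulation; the function name is hypothetical.

```python
import math

def bulge_amplitude(pin_spacing, growth_strain):
    """Toy estimate of the out-of-plane bulge between two pinning actin
    bundles. Assume a membrane segment of rest length L grows by a
    fraction eps and buckles into one sine arc y = A*sin(pi*x/L).
    Small-slope arc length: L_arc ~ L + pi**2 * A**2 / (4*L),
    so setting the excess length eps*L equal to the second term gives
    A ~ (2/pi) * L * sqrt(eps). Illustrative only.
    """
    L = pin_spacing
    return (2.0 / math.pi) * L * math.sqrt(growth_strain)
```

The estimate captures the qualitative point in the text: the closer the pins (smaller L) and the less the growth, the smaller the bulge, so the spacing of the actin bundles sets the scale of the initial undulations.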

“With this modeling, we showed that we could go from a flat surface to a more undulating surface,” Kolle says. “In terms of mechanics, this indicates that buckling of the membrane is very likely what’s initiating the formation of these amazingly ordered ridges.”

“We want to learn from nature, not only how these materials function, but also how they’re formed,” McDougal says. “If you want to for instance make a wrinkled surface, which is useful for a variety of applications, this gives you two really easy knobs to tune, to tailor how those surfaces are wrinkled. You could either change the spacing of where that material is pinned, or you could change the amount of material that you grow between the pinned sections. And we saw that the butterfly is using both of these strategies.”

This research was supported, in part, by the International Human Frontier Science Program Organization, the National Science Foundation, the Humboldt Foundation, and the Alfred P. Sloan Foundation.

Professor Emerita Mary-Lou Pardue, pioneering cellular and molecular biologist, dies at 90

Known for her rigorous approach to science and her influential research, Pardue paved the way for women in science at MIT and beyond.

Professor Emerita Mary-Lou Pardue, an influential faculty member in the MIT Department of Biology, died on June 1. She was 90.

Early in her career, Pardue developed a technique called in situ hybridization with her PhD advisor, Joseph Gall, which allows researchers to localize genes on chromosomes. This led to many discoveries, including critical advances in developmental biology, our understanding of embryonic development, and the structure of chromosomes. She also studied the remarkably complex ways organisms respond to stress, such as heat shock, and discovered how telomeres, the ends of chromosomes, differ in fruit flies from those of other eukaryotic organisms during cell division.

“The reason she was a professor at MIT, and why she was doing research, was first and foremost because she wanted to answer questions and make discoveries,” says longtime colleague and Professor Emerita Terry Orr-Weaver. “She had her feet cemented in a love of biology.”

In 1983, Pardue was the first woman in the School of Science at MIT to be inducted into the National Academy of Sciences. She chaired the Section of Genetics from 1991 to 1994 and served as a council member from 1995 to 1998. Among other honors, she was named a fellow of the American Academy of Arts and Sciences, where she served as a council member, and a fellow of the American Association for the Advancement of Science. She also served on numerous editorial boards and review panels, and as the vice president, president, and chair of the Genetics Society of America and president of the American Society for Cell Biology.

In the 1990s, Pardue was also one of 16 senior women on MIT’s science faculty who co-signed a letter to the dean of science claiming bias against women scientists at the Institute at the time. As a result of this letter and a subsequent study of conditions for women at the Institute, MIT in 1999 publicly admitted to having discriminated against its female faculty, and made plans to rectify the problem — a process that ultimately served as a model for academic institutions around the nation. 

Her graduate students and postdocs included Alan Spradling, Matthew Scott, Tom Cech, Paul Lasko, and Joan Ruderman.

In the minority

Pardue was born on Sept. 15, 1933, in Lexington, Kentucky. She received a BS in biology from the College of William and Mary in 1955, and she earned an MS in radiation biology from the University of Tennessee in 1959. In 1970, she received a PhD in biology for her work with Gall at Yale University.

Pardue’s career was inextricably linked to the slowly rising number of women with advanced degrees in science. During her early years as a graduate student at Yale, there were a few women with PhDs — but none held faculty positions. Indeed, Pardue assumed she would spend her career as a senior scientist working in someone else’s lab, rather than running her own.

Pardue was an avid hiker and loved to travel and spend time outdoors. She scaled peaks from the White Mountains to the Himalayas and pursued postdoctoral work in Europe at the University of Edinburgh. She was delighted to receive invitations to give faculty search seminars for the opportunity to travel to institutions across the United States — including an invitation to visit MIT.

MIT had initially rejected her job application, although the department quickly realized it had erred in missing the opportunity to recruit the talented Pardue. In the end, she spent more than 30 years as a professor in Cambridge, Massachusetts.

When Pardue joined, the biology department had two female faculty members, Lisa Steiner and Annamaria Torriani-Gorini — more women than at any other academic institution Pardue had interviewed. Pardue became an associate professor of biology in 1972, a professor in 1980, and the Boris Magasanik Professor of Biology in 1995.

“The person who made a difference”

Pardue was known for her rigorous approach to science as well as her bright smile and support of others.

When Graham Walker, the American Cancer Society and Howard Hughes Medical Institute (HHMI) professor, joined the department in 1976, he recalled an event for meeting graduate students at which he was repeatedly mistaken for a student himself. Pardue parked herself by his side and took on the task of introducing the newest faculty member.

“Mary-Lou had an art for taking care of people,” Walker says. “She was a wonderful colleague and a close friend.”

As a young faculty member, Troy Littleton — now a professor of biology, the Menicon Professor of Neuroscience, and investigator at the Picower Institute for Learning and Memory — had his first experience teaching with Pardue for an undergraduate project lab course.

“Observing how Mary-Lou was able to get the students excited about basic research was instrumental in shaping my teaching skills,” Littleton says. “Her passion for discovery was infectious, and the students loved working on basic research questions under her guidance.”

She was also a mentor for fellow women joining the department, including E.C. Whitehead Professor of Biology and HHMI investigator Tania A. Baker, who joined the department in 1992, and Orr-Weaver, the first female faculty member to join the Whitehead Institute in 1987.

“She was seriously respected as a woman scientist — as a scientist,” recalls Nancy Hopkins, the Amgen Professor of Biology Emerita. “For women of our generation, there were no role models ahead of us, and so to see that somebody could do it, and have that kind of respect, was really inspiring.”

Hopkins first encountered Pardue’s work on in situ hybridization as a graduate student. Although it wasn’t Hopkins’s field, she remembers being struck by the implications: a leap in science that might today be compared to the advances made possible by CRISPR gene-editing technology.

“The questions were very big, but the technology was small,” Hopkins says. “That you could actually do these kinds of things was kind of a miracle.”

Pardue was the person who called to give Hopkins the news that she had been elected to the National Academy of Sciences. They hadn’t worked together to that point, but Hopkins felt like Pardue had been looking out for her, and was very excited on her behalf.

Later, though, Hopkins was initially hesitant to reach out to Pardue to discuss the discrimination Hopkins had experienced as a faculty member at MIT; Pardue seemed so successful that surely her gender had not held her back. Hopkins found that women, in general, didn’t discuss the ways they had been undervalued; it was humiliating to admit to being treated unfairly.

Hopkins drafted a letter about the systemic and invisible discrimination she had experienced — but Hopkins, ever the scientist, needed a reviewer.

At a table in the corner of Rebecca’s Café, a now-defunct eatery, Pardue read the letter — and declared she’d like to sign it and take it to the dean of the School of Science.

“I knew the world had changed in that instant,” Hopkins says. “She’s the person who made the difference. She changed my life, and changed, in the end, MIT.”

MIT and the status of women

It was only when some of the tenured women faculty of the School of Science came together that they discovered their experiences were similar. Hopkins, Pardue, Orr-Weaver, Steiner, Susan Carey, Sylvia Ceyer, Sallie “Penny” Chisholm, Suzanne Corkin, Mildred Dresselhaus, Ann Graybiel, Ruth Lehmann, Marcia McNutt, Molly Potter, Paula Malanotte-Rizzoli, Leigh Royden, and Joanne Stubbe ultimately signed a letter to Robert Birgeneau, then the dean of science.

Their efforts led to a Committee on the Status of Women Faculty in 1995, the report for which was made public in 1999. The report documented pervasive bias against women across the School of Science. In response, MIT ultimately worked to improve the working conditions of women scientists across the Institute. These efforts reverberated at academic institutions across the country.

Walker notes that creating real change requires a monumental effort of political and societal pressure — but it also requires outstanding individuals whose work surpasses the barriers holding them back.

“When Mary-Lou came to MIT, there weren’t many cracks in the glass ceiling,” he says. “I think she, in many ways, was a leader in helping to change the status of women in science by just being who she was.”

Later years

Kerry Kelley, now a research laboratory operations manager in the Yilmaz Lab at the Koch Institute for Integrative Cancer Research, joined Pardue as a technical lab assistant in 2008, Kelley’s first job at MIT. Pardue, throughout her career, was committed to hands-on work, preparing her own slides whenever possible.

“One of the biggest things I learned from her was mistakes aren’t always mistakes. If you do an experiment, and it doesn’t turn out the way you had hoped, there’s something there that you can learn from,” Kelley says. She recalls a frequent refrain with a smile: “‘It’s research. What do you do? Re-search.’”

Their birthdays were on consecutive days in September; Pardue would mark the occasion for both at Legal Seafoods in Kendall Square with bluefish, white wine, and lab members and collaborators including Kelley, Karen Traverse, and the late Paul Gregory DeBaryshe.

In the years before her death, Pardue resided at Youville House Assisted Living in Cambridge, where Kelley would often visit.

“I was sad to hear of the passing of Mary-Lou, whose seminal work expanded our understanding of chromosome structure and cellular responses to environmental stresses over more than three decades at MIT. Mary-Lou was an exceptional person who was known as a gracious mentor and a valued teacher and colleague,” says Amy Keating, head of the Department of Biology, the Jay A. Stein (1968) Professor of Biology, and professor of biological engineering. “She was kind to everyone, and she is missed by our faculty and staff. Women at MIT and beyond, including me, owe a huge debt to Mary-Lou, Nancy Hopkins, and their colleagues who so profoundly advanced opportunities for women in science.”

She is survived by a niece and nephew, Sarah Gibson and Todd Pardue.

Study: Titan’s lakes may be shaped by waves

MIT researchers find wave activity on Saturn’s largest moon may be strong enough to erode the coastlines of lakes and seas.

Titan, Saturn’s largest moon, is the only planetary body in the solar system besides our own that currently hosts active rivers, lakes, and seas. Titan’s otherworldly river systems are thought to be filled with liquid methane and ethane that flows into wide lakes and seas, some as large as the Great Lakes on Earth.

The existence of Titan’s large seas and smaller lakes was confirmed in 2007, with images taken by NASA’s Cassini spacecraft. Since then, scientists have pored over those and other images for clues to the moon’s mysterious liquid environment.

Now, MIT geologists have studied Titan’s shorelines and shown through simulations that the moon’s large seas have likely been shaped by waves. Until now, scientists have found indirect and conflicting signs of wave activity, based on remote images of Titan’s surface.

The MIT team took a different approach to investigate the presence of waves on Titan, by first modeling the ways in which a lake can erode on Earth. They then applied their modeling to Titan’s seas to determine what form of erosion could have produced the shorelines in Cassini’s images. Waves, they found, were the most likely explanation.

The researchers emphasize that their results are not definitive; to confirm that there are waves on Titan will require direct observations of wave activity on the moon’s surface.

“We can say, based on our results, that if the coastlines of Titan’s seas have eroded, waves are the most likely culprit,” says Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT. “If we could stand at the edge of one of Titan’s seas, we might see waves of liquid methane and ethane lapping on the shore and crashing on the coasts during storms. And they would be capable of eroding the material that the coast is made of.”

Perron and his colleagues, including first author Rose Palermo PhD ’22, a former MIT-WHOI Joint Program graduate student and current research geologist at the U.S. Geological Survey, have published their study today in Science Advances. Their co-authors include MIT Research Scientist Jason Soderblom; former MIT postdoc Sam Birch, now an assistant professor at Brown University; Andrew Ashton at the Woods Hole Oceanographic Institution; and Alexander Hayes of Cornell University.

“Taking a different tack”

The presence of waves on Titan has been a somewhat controversial topic ever since Cassini spotted bodies of liquid on the moon’s surface.

“Some people who tried to see evidence for waves didn’t see any, and said, ‘These seas are mirror-smooth,’” Palermo says. “Others said they did see some roughness on the liquid surface but weren’t sure if waves caused it.”

Knowing whether Titan’s seas host wave activity could give scientists information about the moon’s climate, such as the strength of the winds that could whip up such waves. Wave information could also help scientists predict how the shape of Titan’s seas might evolve over time.

Rather than look for direct signs of wave-like features in images of Titan, Perron says the team had to “take a different tack, and see, just by looking at the shape of the shoreline, if we could tell what’s been eroding the coasts.”

Titan’s seas are thought to have formed as rising levels of liquid flooded a landscape crisscrossed by river valleys. The researchers zeroed in on three scenarios for what could have happened next: no coastal erosion; erosion driven by waves; and “uniform erosion,” driven either by “dissolution,” in which liquid passively dissolves a coast’s material, or a mechanism in which the coast gradually sloughs off under its own weight.

The researchers simulated how various shoreline shapes would evolve under each of the three scenarios. To simulate wave-driven erosion, they took into account a variable known as “fetch,” which describes the physical distance from one point on a shoreline to the opposite side of a lake or sea.

“Wave erosion is driven by the height and angle of the wave,” Palermo explains. “We used fetch to approximate wave height because the bigger the fetch, the longer the distance over which wind can blow and waves can grow.”

To test how shoreline shapes would differ between the three scenarios, the researchers started with a simulated sea with flooded river valleys around its edges. For wave-driven erosion, they calculated the fetch distance from every single point along the shoreline to every other point, and converted these distances to wave heights. Then, they ran their simulation to see how waves would erode the starting shoreline over time. They compared this to how the same shoreline would evolve under erosion driven by uniform erosion. The team repeated this comparative modeling for hundreds of different starting shoreline shapes.
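To make the comparative setup concrete, here is a deliberately simplified sketch of fetch-driven shoreline retreat. It is our own illustration, not the authors’ code: it crudely approximates fetch as the mean straight-line distance to all other shoreline points, whereas the actual model computes line-of-sight fetch and converts it to wave height.

```python
import math

def simulate_wave_erosion(shoreline, steps=50, rate=1e-3):
    """Toy sketch of fetch-driven erosion of a closed lake shoreline.

    shoreline: list of (x, y) points on a closed boundary.
    Each step, every point retreats landward (away from the lake
    centroid) by an amount proportional to its fetch, approximated
    here as its mean distance to all other shoreline points.
    """
    pts = [list(p) for p in shoreline]
    n = len(pts)
    for _ in range(steps):
        # centroid of the lake, used to define the landward direction
        cx = sum(p[0] for p in pts) / n
        cy = sum(p[1] for p in pts) / n
        fetches = []
        for i, (x, y) in enumerate(pts):
            d = [math.hypot(x - q[0], y - q[1])
                 for j, q in enumerate(pts) if j != i]
            fetches.append(sum(d) / len(d))
        for p, f in zip(pts, fetches):
            dx, dy = p[0] - cx, p[1] - cy
            norm = math.hypot(dx, dy) or 1.0
            # retreat faster where fetch (hence wave energy) is larger
            p[0] += rate * f * dx / norm
            p[1] += rate * f * dy / norm
    return [tuple(p) for p in pts]
```

Run on an elongated lake, points facing the long axis see the largest fetch and retreat fastest, which is the qualitative signature the study uses: wave erosion preferentially smooths long-fetch stretches while leaving flooded valleys narrow and rough.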

They found that the end shapes were very different depending on the underlying mechanism. Most notably, uniform erosion produced inflated shorelines that widened evenly all around, even in the flooded river valleys, whereas wave erosion mainly smoothed the parts of the shorelines exposed to long fetch distances, leaving the flooded valleys narrow and rough.

“We had the same starting shorelines, and we saw that you get a really different final shape under uniform erosion versus wave erosion,” Perron says. “They all kind of look like the Flying Spaghetti Monster because of the flooded river valleys, but the two types of erosion produce very different endpoints.”

The team checked their results by comparing their simulations to actual lakes on Earth. They found the same difference in shape between Earth lakes known to have been eroded by waves and lakes affected by uniform erosion, such as dissolving limestone.

A shore’s shape

Their modeling revealed clear, characteristic shoreline shapes, depending on the mechanism by which they evolved. The team then wondered: Where would Titan’s shorelines fit, within these characteristic shapes?

In particular, they focused on four of Titan’s largest, most well-mapped seas: Kraken Mare, which is comparable in size to the Caspian Sea; Ligeia Mare, which is larger than Lake Superior; Punga Mare, which is longer than Lake Victoria; and Ontario Lacus, which is about 20 percent the size of its terrestrial namesake.

The team mapped the shorelines of each Titan sea using Cassini’s radar images, and then applied their modeling to each of the sea’s shorelines to see which erosion mechanism best explained their shape. They found that all four seas fit solidly in the wave-driven erosion model, meaning that waves produced shorelines that most closely resembled Titan’s four seas.

“We found that if the coastlines have eroded, their shapes are more consistent with erosion by waves than by uniform erosion or no erosion at all,” Perron says.

Juan Felipe Paniagua-Arroyave, associate professor in the School of Applied Sciences and Engineering at EAFIT University in Colombia, says the team’s results are “unlocking new avenues of understanding.”

“Waves are ubiquitous on Earth’s oceans. If Titan has waves, they would likely dominate the surface of lakes,” says Paniagua-Arroyave, who was not involved in the study. “It would be fascinating to see how Titan’s winds create waves, not of water, but of exotic liquid hydrocarbons.”

The researchers are working to determine how strong Titan’s winds must be in order to stir up waves that could repeatedly chip away at the coasts. They also hope to decipher, from the shape of Titan’s shorelines, the directions from which the wind predominantly blows.

“Titan presents this case of a completely untouched system,” Palermo says. “It could help us learn more fundamental things about how coasts erode without the influence of people, and maybe that can help us better manage our coastlines on Earth in the future.”

This work was supported, in part, by NASA, the National Science Foundation, the U.S. Geological Survey, and the Heising-Simons Foundation.

Microscope system sharpens scientists’ view of neural circuit connections

A newly described technology improves the clarity and speed of using two-photon microscopy to image synapses in the living brain.

The brain’s ability to learn comes from “plasticity,” in which neurons constantly edit and remodel the tiny connections called synapses that they make with other neurons to form circuits. To study plasticity, neuroscientists seek to track it at high resolution across whole cells, but plasticity unfolds too quickly for slow microscopes to keep pace, and brain tissue is notorious for scattering light and making images fuzzy. In an open-access paper in Scientific Reports, a collaboration of MIT engineers and neuroscientists describes a new microscopy system designed for fast, clear, and frequent imaging of the living brain.

The system, called “multiline orthogonal scanning temporal focusing” (mosTF), works by scanning brain tissue with lines of light in perpendicular directions. As with other live brain imaging systems that rely on “two-photon microscopy,” this scanning light “excites” photon emission from brain cells that have been engineered to fluoresce when stimulated. The new system proved in the team’s tests to be eight times faster than a two-photon scope that goes point by point, and proved to have a four-fold better signal-to-background ratio (a measure of the resulting image clarity) than a two-photon system that just scans in one direction.

“Tracking rapid changes in circuit structure in the context of the living brain remains a challenge,” says co-author Elly Nedivi, the William R. (1964) and Linda R. Young Professor of Neuroscience in The Picower Institute for Learning and Memory and MIT’s departments of Biology and Brain and Cognitive Sciences. “While two-photon microscopy is the only method that allows high-resolution visualization of synapses deep in scattering tissue, such as the brain, the required point-by-point scanning is mechanically slow. The mosTF system significantly reduces scan time without sacrificing resolution.”

Scanning a whole line of a sample is inherently faster than just scanning one point at a time, but it kicks up a lot of scattering. To manage that scattering, some scope systems just discard scattered photons as noise, but then they are lost, says lead author Yi Xue SM ’15, PhD ’19, an assistant professor at the University of California at Davis and a former graduate student in the lab of corresponding author Peter T.C. So, professor of mechanical engineering and biological engineering at MIT. Newer single-line systems and mosTF instead produce a stronger signal (thereby resolving smaller and fainter features of stimulated neurons) by algorithmically reassigning scattered photons back to their origin. In a two-dimensional image, that process is better accomplished using the information produced by a two-dimensional, perpendicular-direction system such as mosTF than by a one-dimensional, single-direction system, Xue says.

“Our excitation light is a line, rather than a point — more like a light tube than a light bulb — but the reconstruction process can only reassign photons to the excitation line and cannot handle scattering within the line,” Xue explains. “Therefore, scattering correction is only performed along one dimension for a 2D image. To correct scattering in both dimensions, we need to scan the sample and correct scattering along the other dimension as well, resulting in an orthogonal scanning strategy.”

In the study the team tested their system head-to-head against a point-by-point scope (a two-photon laser scanning microscope — TPLSM) and a line-scanning temporal focusing microscope (lineTF). They imaged fluorescent beads through water and through a lipid-infused solution that better simulates the kind of scattering that arises in biological tissue. In the lipid solution, mosTF produced images with a 36-times better signal-to-background ratio than lineTF.

For a more definitive proof, Xue worked with Josiah Boivin in the Nedivi lab to image neurons in the brain of a live, anesthetized mouse, using mosTF. Even in this much more complex environment, where the pulsations of blood vessels and the movement of breathing provide additional confounds, the mosTF scope still achieved a four-fold better signal-to-background ratio. Importantly, it was able to reveal the features where many synapses dwell: the spines that protrude along the vine-like processes, or dendrites, that grow out of the neuron cell body. Monitoring plasticity requires being able to watch those spines grow, shrink, come, and go across the entire cell, Nedivi says.

“Our continued collaboration with the So lab and their expertise with microscope development has enabled in vivo studies that are unapproachable using conventional, out-of-the-box two-photon microscopes,” she adds.

So says he is already planning further improvements to the technology.

“We’re continuing to work toward the goal of developing even more efficient microscopes to look at plasticity even more efficiently,” he says. “The speed of mosTF is still limited by needing to use high-sensitivity, low-noise cameras that are often slow. We are now working on a next-generation system with new types of detectors, such as hybrid photomultiplier or avalanche photodiode arrays, that are both sensitive and fast.”

In addition to Xue, So, Boivin, and Nedivi, the paper’s other authors are Dushan Wadduwage and Jong Kang Park.

The National Institutes of Health, Hamamatsu Corp., Samsung Advanced Institute of Technology, Singapore-MIT Alliance for Research and Technology Center, Biosystems and Micromechanics, The Picower Institute for Learning and Memory, The JPB Foundation, and The Center for Advanced Imaging at Harvard University provided support for the research.

Technologies enable 3D imaging of whole human brain hemispheres at subcellular resolution

Three innovations by an MIT-based team enable high-resolution, high-throughput imaging of human brain tissue at a full range of scales, and mapping connectivity of neurons at single-cell resolution.

Observing anything and everything within the human brain, no matter how large or small, while it is fully intact has been an out-of-reach dream of neuroscience for decades. But in a new study in Science, an MIT-based team describes a technology pipeline that enabled them to finely process, richly label, and sharply image full hemispheres of the brains of two donors — one with Alzheimer’s disease and one without — at high resolution and speed.

“We performed holistic imaging of human brain tissues at multiple resolutions, from single synapses to whole brain hemispheres, and we have made that data available,” says senior and corresponding author Kwanghun Chung, associate professor in the MIT departments of Chemical Engineering and Brain and Cognitive Sciences and a member of The Picower Institute for Learning and Memory and the Institute for Medical Engineering and Science. “This technology pipeline really enables us to analyze the human brain at multiple scales. Potentially this pipeline can be used for fully mapping human brains.”

The new study does not present a comprehensive map or atlas of the entire brain, in which every cell, circuit, and protein is identified and analyzed. But with full hemispheric imaging, it demonstrates an integrated suite of three technologies to enable that and other long-sought neuroscience investigations. The research provides a “proof of concept” by showing numerous examples of what the pipeline makes possible, including sweeping landscapes of thousands of neurons within whole brain regions; diverse forests of cells, each in individual detail; and tufts of subcellular structures nestled among extracellular molecules. The researchers also present a rich variety of quantitative analytical comparisons focused on a chosen region within the Alzheimer’s and non-Alzheimer’s hemispheres.

The importance of being able to image whole hemispheres of human brains intact and down to the resolution of individual synapses (the teeny connections that neurons forge to make circuits) is two-fold for understanding the human brain in health and disease, Chung says.

Superior samples

On one hand, it will enable scientists to conduct integrated explorations of questions using the same brain, rather than having to (for example) observe different phenomena in different brains, which can vary significantly, and then try to construct a composite picture of the whole system. A key feature of the new technology pipeline is that analysis doesn’t degrade the tissue. On the contrary, it makes the tissues extremely durable and repeatedly re-labelable to highlight different cells or molecules as needed for new studies for potentially years on end. In the paper, Chung’s team demonstrates using 20 different antibody labels to highlight different cells and proteins, but they are already expanding that to a hundred or more.

“We need to be able to see all these different functional components — cells, their morphology and their connectivity, subcellular architectures, and their individual synaptic connections — ideally within the same brain, considering the high individual variabilities in the human brain and considering the precious nature of human brain samples,” Chung says. “This technology pipeline really enables us to extract all these important features from the same brain in a fully integrated manner.”

On the other hand, the pipeline’s relatively high scalability and throughput (imaging a whole brain hemisphere once it is prepared takes 100 hours, rather than many months) means that it is possible to create many samples to represent different sexes, ages, disease states, and other factors that can enable robust comparisons with increased statistical power. Chung says he envisions creating a brain bank of fully imaged brains that researchers could analyze and re-label as needed for new studies to make more of the kinds of comparisons he and co-authors made with the Alzheimer’s and non-Alzheimer’s hemispheres in the new paper.

Three key innovations

Chung says the biggest challenge he faced in achieving the advances described in the paper was building a team at MIT that included three especially talented young scientists, each a co-lead author of the paper because of their key roles in producing the three major innovations. Ji Wang, a mechanical engineer and former postdoc, developed the “Megatome,” a device for slicing intact human brain hemispheres so finely that there is no damage to them. Juhyuk Park, a materials engineer and former postdoc, developed the chemistry that makes each brain slice clear, flexible, durable, expandable, and quickly, evenly, and repeatedly labelable — a technology called “mELAST.” Webster Guan, a former MIT chemical engineering graduate student with a knack for software development, created a computational system called “UNSLICE” that can seamlessly reunify the slabs to reconstruct each hemisphere in full 3D, down to the precise alignment of individual blood vessels and neural axons (the long strands they extend to forge connections with other neurons).

No technology allows for imaging whole human brain anatomy at subcellular resolution without first slicing it, because the brain is very thick (3,000 times the volume of a mouse brain) and opaque. But in the Megatome, tissue remains undamaged because Wang, who is now at a company Chung founded called LifeCanvas Technologies, engineered its blade to vibrate side-to-side faster, and yet sweep wider, than previous vibratome slicers. Meanwhile, she also crafted the instrument to stay perfectly within its plane, Chung says. The result is slices that don’t lose anatomical information at their separation or anywhere else. And because the vibratome cuts relatively quickly and can cut thicker (and therefore fewer) slabs of tissue, a whole hemisphere can be sliced in a day, rather than months.

A major reason why slabs in the pipeline can be thicker comes from mELAST. Park engineered the hydrogel that infuses the brain sample to make it optically clear, virtually indestructible, and compressible and expandable. Combined with other chemical engineering technologies developed in recent years in Chung’s lab, the samples can then be evenly and quickly infused with the antibody labels that highlight cells and proteins of interest. Using a light sheet microscope the lab customized, a whole hemisphere can be imaged down to individual synapses in about 100 hours, the authors report in the study. Park is now an assistant professor at Seoul National University in South Korea.

“This advanced polymeric network, which fine-tunes the physicochemical properties of tissues, enabled multiplexed multiscale imaging of the intact human brains,” Park says.

After each slab has been imaged, the task is then to restore an intact picture of the whole hemisphere computationally. Guan’s UNSLICE does this at multiple scales. For instance, at the middle, or “meso” scale, it algorithmically traces blood vessels coming into one layer from adjacent layers and matches them. But it also takes an even finer approach. To further register the slabs, the team purposely labeled neighboring neural axons in different colors (like the wires in an electrical fixture). That enabled UNSLICE to match layers up based on tracing the axons, Chung says. Guan is also now at LifeCanvas.

In the study, the researchers present a litany of examples of what the pipeline can do. The very first figure demonstrates that the imaging allows one to richly label a whole hemisphere and then zoom in from the wide scale of brainwide structures to the level of circuits, then individual cells, and then subcellular components, such as synapses. Other images and videos demonstrate how diverse the labeling can be, revealing long axonal connections and the abundance and shape of different cell types including not only neurons but also astrocytes and microglia.

Exploring Alzheimer’s

For years, Chung has collaborated with co-author Matthew Frosch, an Alzheimer’s researcher and director of the brain bank at Massachusetts General Hospital, to image and understand Alzheimer’s disease brains. With the new pipeline established they began an open-ended exploration, first noticing where within a slab of tissue they saw the greatest loss of neurons in the disease sample compared to the control. From there, they followed their curiosity — as the technology allowed them to do — ultimately producing a series of detailed investigations described in the paper.

“We didn’t lay out all these experiments in advance,” Chung says. “We just started by saying, ‘OK, let’s image this slab and see what we see.’ We identified brain regions with substantial neuronal loss, so let’s see what’s happening there. ‘Let’s dive deeper.’ So we used many different markers to characterize and see the relationships between pathogenic factors and different cell types.

“This pipeline allows us to have almost unlimited access to the tissue,” Chung says. “We can always go back and look at something new.”

They focused most of their analysis in the orbitofrontal cortex within each hemisphere. One of the many observations they made was that synapse loss was concentrated in areas where there was direct overlap with amyloid plaques. Outside of areas of plaques the synapse density was as high in the brain with Alzheimer’s as in the one without the disease.

With just two samples, Chung says, the team is not offering any conclusions about the nature of Alzheimer’s disease, of course, but the point of the study is that the capability now exists to fully image and deeply analyze whole human brain hemispheres to enable exactly that kind of research.

Notably, the technology applies equally well to many other tissues in the body, not just brains.

“We envision that this scalable technology platform will advance our understanding of the human organ functions and disease mechanisms to spur development of new therapies,” the authors conclude.

In addition to Park, Wang, Guan, Chung, and Frosch, the paper’s other authors are Lars A. Gjesteby, Dylan Pollack, Lee Kamentsky, Nicholas B. Evans, Jeff Stirman, Xinyi Gu, Chuanxi Zhao, Slayton Marx, Minyoung E. Kim, Seo Woo Choi, Michael Snyder, David Chavez, Clover Su-Arcaro, Yuxuan Tian, Chang Sin Park, Qiangge Zhang, Dae Hee Yun, Mira Moukheiber, Guoping Feng, X. William Yang, C. Dirk Keene, Patrick R. Hof, Satrajit S. Ghosh, and Laura J. Brattain.

The main funding for the work came from the National Institutes of Health, The Picower Institute for Learning and Memory, The JPB Foundation, and the NCSOFT Cultural Foundation.

With programmable pixels, novel sensor improves imaging of neural activity

New camera chip design allows for optimizing each pixel’s timing to maximize signal-to-noise ratio when tracking real-time visual indicator of neural voltage.

Neurons communicate electrically, so to understand how they produce such brain functions as memory, neuroscientists must track how their voltage changes — sometimes subtly — on the timescale of milliseconds. In a new open-access paper in Nature Communications, MIT researchers describe a novel image sensor with the capability to substantially increase that ability.

The invention, led by Jie Zhang, a postdoc in the lab of Matt Wilson, the Sherman Fairchild Professor at MIT and a member of The Picower Institute for Learning and Memory, is a new take on the standard “CMOS” (complementary metal-oxide semiconductor) technology used in scientific imaging. In that standard approach, all pixels turn on and off at the same time — a configuration with an inherent trade-off in which fast sampling means capturing less light. The new chip enables each pixel’s timing to be controlled individually. That arrangement provides a “best of both worlds” in which neighboring pixels can essentially complement each other to capture all the available light without sacrificing speed.

In experiments described in the study, Zhang and Wilson’s team demonstrates how “pixelwise” programmability enabled them to improve visualization of neural voltage “spikes,” which are the signals neurons use to communicate with each other, and even the more subtle, momentary fluctuations in their voltage that constantly occur between those spiking events.

“Measuring with single-spike resolution is really important as part of our research approach,” says senior author Wilson, a professor in MIT’s departments of Biology and Brain and Cognitive Sciences (BCS), whose lab studies how the brain encodes and refines spatial memories both during wakeful exploration and during sleep. “Thinking about the encoding processes within the brain, single spikes and the timing of those spikes is important in understanding how the brain processes information.”

For decades, Wilson has helped to drive innovations in the use of electrodes to tap into neural electrical signals in real time, but like many researchers he has also sought visual readouts of electrical activity because they can highlight large areas of tissue and still show which exact neurons are electrically active at any given moment. Being able to identify which neurons are active can enable researchers to learn which types of neurons are participating in memory processes, providing important clues about how brain circuits work.

In recent years, neuroscientists including co-senior author Ed Boyden, the Y. Eva Tan Professor of Neurotechnology in BCS and the McGovern Institute for Brain Research and a Picower Institute affiliate, have worked to meet that need by inventing “genetically encoded voltage indicators” (GEVIs) that make cells glow as their voltage changes in real time. But as Zhang and Wilson have tried to employ GEVIs in their research, they’ve found that conventional CMOS image sensors were missing a lot of the action. If they operated too fast, they wouldn’t gather enough light. If they operated too slowly, they’d miss rapid changes.

But image sensors have such fine resolution that many pixels are really looking at essentially the same place on the scale of a whole neuron, Wilson says. Recognizing that there was resolution to spare, Zhang applied his expertise in sensor design to invent an image sensor chip that would enable neighboring pixels to each have their own timing. Faster ones could capture rapid changes. Slower-working ones could gather more light. No action or photons would be missed. Zhang also cleverly engineered the required control electronics so they barely cut into the space available for light-sensitive elements on a pixel. This ensured the sensor’s high sensitivity under low-light conditions, Zhang says.

In the study the researchers demonstrated two ways in which the chip improved imaging of voltage activity of mouse hippocampus neurons cultured in a dish. They ran their sensor head-to-head against an industry standard scientific CMOS image sensor chip.

In the first set of experiments, the team sought to image the fast dynamics of neural voltage. On the conventional CMOS chip, each pixel had a zippy 1.25 ms exposure time. On the pixelwise sensor, each pixel in neighboring groups of four stayed on for 5 ms, but their start times were staggered so that each one turned on and off 1.25 ms later than the next. In the study, the team shows that each pixel, because it was on longer, gathered more light, but because one of the four was capturing a new view every 1.25 ms, the group as a whole matched the fast temporal resolution. The result was a doubling of the signal-to-noise ratio for the pixelwise chip. This achieves high temporal resolution at a fraction of the sampling rate of conventional CMOS chips, Zhang says.
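
The staggering scheme can be illustrated with a toy calculation (a simplified sketch with invented numbers, not the authors’ reconstruction code): four pixels each integrate light for 5 ms, with start times offset by 1.25 ms, so interleaving their readouts yields one long-exposure sample every 1.25 ms.

```python
# Toy model of pixelwise staggered exposures (hypothetical numbers).
# Four neighboring pixels watch one scene point; each integrates light
# for 5 ms (4 steps of 1.25 ms), with start times offset by one step.
# Interleaving their readouts gives a new sample every 1.25 ms while
# each sample collects 4x the light of a single 1.25 ms exposure.

STEP_MS = 1.25      # stagger between neighboring pixels
WINDOW_STEPS = 4    # each exposure spans 4 steps = 5 ms

def staggered_readout(light):
    """light: photons arriving per 1.25 ms step at the shared scene point.
    Returns the interleaved samples, one per step: each is the sum over
    the 5 ms window of whichever pixel closed its shutter at that step."""
    return [
        sum(light[t:t + WINDOW_STEPS])
        for t in range(len(light) - WINDOW_STEPS + 1)
    ]

flash = [5, 1, 1, 1, 1, 1]   # a brief voltage-driven flash at step 0
samples = staggered_readout(flash)
# The flash shows up only in the windows that contain step 0, so the
# fast event is localized in time despite the long 5 ms exposures.
```

Each interleaved sample integrates four times the light of a single 1.25 ms exposure, which is the source of the signal-to-noise gain the team reports.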

Moreover, the pixelwise chip detected neural spiking activities that the conventional sensor missed. And when the researchers compared the performance of each kind of sensor against the electrical readings made with a traditional patch clamp electrode, they found that the staggered pixelwise measurements better matched that of the patch clamp.

In the second set of experiments, the team sought to demonstrate that the pixelwise chip could capture both the fast dynamics and also the slower, more subtle “subthreshold” voltage variances neurons exhibit. To do so they varied the exposure durations of neighboring pixels in the pixelwise chip, ranging from 15.4 ms down to just 1.9 ms. In this way, fast pixels sampled every quick change (albeit faintly), while slower pixels integrated enough light over time to track even subtle slower fluctuations. By integrating the data from each pixel, the chip was indeed able to capture both fast spiking and slower subthreshold changes, the researchers reported.

The experiments with small clusters of neurons in a dish were only a proof of concept, Wilson says. His lab’s ultimate goal is to conduct brain-wide, real-time measurements of activity in distinct types of neurons in animals even as they are freely moving about and learning how to navigate mazes. The development of GEVIs, and of image sensors like the pixelwise chip that can successfully take advantage of what they show, is crucial to making that goal feasible.

“That’s the idea of everything we want to put together: large-scale voltage imaging of genetically tagged neurons in freely behaving animals,” Wilson says.

To achieve this, Zhang adds, “We are already working on the next iteration of chips with lower noise, higher pixel counts, time-resolution of multiple kHz, and small form factors for imaging in freely behaving animals.”

The research is advancing pixel by pixel.

In addition to Zhang, Wilson, and Boyden, the paper’s other authors are Jonathan Newman, Zeguan Wang, Yong Qian, Pedro Feliciano-Ramos, Wei Guo, Takato Honda, Zhe Sage Chen, Changyang Linghu, Ralph-Etienne Cummings, and Eric Fossum.

The Picower Institute, The JPB Foundation, the Alana Foundation, The Louis B. Thalheimer Fund for Translational Research, the National Institutes of Health, HHMI, Lisa Yang, and John Doerr provided support for the research.

Featured video: Researchers discuss queer visibility in academia

In “Scientific InQueery,” LGBTQ+ MIT faculty and graduate students describe finding community and living their authentic lives in the research enterprise.

“My identity as a scientist and my identity as a gay man are not contradictory, but complementary,” says Jack Forman, PhD candidate in media arts and sciences and co-lead of LGBTQ+ Grad, a student group run by and for LGBTQ+ grad students and postdocs at MIT.

He and co-leads Miranda Dawson and Tunahan Aytas ’23 recently interviewed queer MIT faculty about their experiences and the importance of visibility in “Scientific InQueery,” a video meant to inspire young LGBTQ+ academics to take pride in the intersections of their identities and their academic work.

“In professional settings, people need to create spaces for researchers to be able to discuss their scientific work and also be queer,” says Nergis Mavalvala, the Curtis and Kathleen Marble Professor of Astrophysics and dean of the MIT School of Science. “That [space] gives a sense of safety [to say] ‘I can be successful in my profession; I can be queer; and I can be out here flying my rainbow flag.’”

“As queer graduate students, we find community in our peers. However, as one progresses up the academic ladder, it can be harder to find examples of queer people in higher positions. Bringing visibility to the queer faculty helps younger queer academics find a greater sense of community,” says Dawson, a PhD student in MIT’s Department of Biological Engineering. In her years as co-lead of LGBTQ+ Grad, she has been a visible advocate for LGBTQ+ graduate students across MIT.

“We would love it if a young queer person with curiosity and a love for learning saw this video and realized that they belong here, at a place like MIT,” says Dawson.

In addition to Aytas, Dawson, Forman, and Mavalvala, the video features Sebastian Lourido, associate professor of biology; Lorna Gibson, professor of materials science and engineering; and Bryan Bryson, associate professor of biological engineering.

Scientists preserve DNA in an amber-like polymer

With their “T-REX” method, DNA embedded in the polymer could be used for long-term storage of genomes or digital data such as photos and music.

In the movie “Jurassic Park,” scientists extracted DNA that had been preserved in amber for millions of years, and used it to create a population of long-extinct dinosaurs.

Inspired partly by that film, MIT researchers have developed a glassy, amber-like polymer that can be used for long-term storage of DNA, whether entire human genomes or digital files such as photos.

Most current methods for storing DNA require freezing temperatures, so they consume a great deal of energy and are not feasible in many parts of the world. In contrast, the new amber-like polymer can store DNA at room temperature while protecting the molecules from damage caused by heat or water.

The researchers showed that they could use this polymer to store DNA sequences encoding the theme music from Jurassic Park, as well as an entire human genome. They also demonstrated that the DNA can be easily removed from the polymer without damaging it.

“Freezing DNA is the number one way to preserve it, but it’s very expensive, and it’s not scalable,” says James Banal, a former MIT postdoc. “I think our new preservation method is going to be a technology that may drive the future of storing digital information on DNA.”

Banal and Jeremiah Johnson, the A. Thomas Geurtin Professor of Chemistry at MIT, are the senior authors of the study, published yesterday in the Journal of the American Chemical Society. Former MIT postdoc Elizabeth Prince and MIT postdoc Ho Fung Cheng are the lead authors of the paper.

Capturing DNA

DNA, a very stable molecule, is well-suited for storing massive amounts of information, including digital data. Digital storage systems encode text, photos, and other kinds of information as a series of 0s and 1s. This same information can be encoded in DNA using the four nucleotides that make up the genetic code: A, T, G, and C. For example, G and C could be used to represent 0 while A and T represent 1.
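
That mapping can be sketched in a few lines of Python (a hypothetical illustration only; practical DNA storage codecs also add error correction and avoid sequences that are hard to synthesize):

```python
# One-bit-per-base scheme from the article: 0 -> G or C, 1 -> A or T.
# Alternating the choice within each pair is a simple (hypothetical)
# way to avoid long single-base runs, which are hard to synthesize.

def encode(bits):
    zeros, ones = "GC", "AT"
    return "".join((zeros if b == 0 else ones)[i % 2] for i, b in enumerate(bits))

def decode(seq):
    return [0 if base in "GC" else 1 for base in seq]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
seq = encode(bits)        # "ACATGCAC"
assert decode(seq) == bits
```

Because each base carries a bit and bases are packed at molecular density, this is what allows the often-quoted claim that a small volume of DNA can hold enormous amounts of data.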

DNA offers a way to store this digital information at very high density: In theory, a coffee mug full of DNA could store all of the world’s data. DNA is also very stable and relatively easy to synthesize and sequence.

In 2021, Banal and his postdoc advisor, Mark Bathe, an MIT professor of biological engineering, developed a way to store DNA in particles of silica, which could be labeled with tags that revealed the particles’ contents. That work led to a spinout called Cache DNA.

One downside to that storage system is that it takes several days to embed DNA into the silica particles. Furthermore, removing the DNA from the particles requires hydrofluoric acid, which can be hazardous to workers handling the DNA.

To come up with alternative storage materials, Banal began working with Johnson and members of his lab. Their idea was to use a type of polymer known as a degradable thermoset, which consists of polymers that form a solid when heated. The material also includes cleavable links that can be easily broken, allowing the polymer to be degraded in a controlled way.

“With these deconstructable thermosets, depending on what cleavable bonds we put into them, we can choose how we want to degrade them,” Johnson says.

For this project, the researchers decided to make their thermoset polymer from styrene and a cross-linker, which together form an amber-like thermoset called cross-linked polystyrene. This thermoset is also very hydrophobic, so it can prevent moisture from getting in and damaging the DNA. To make the thermoset degradable, the styrene monomers and cross-linkers are copolymerized with monomers called thionolactones. These links can be broken by treating them with a molecule called cysteamine.

Because styrene is so hydrophobic, the researchers had to come up with a way to entice DNA — a hydrophilic, negatively charged molecule — into the styrene.

To do that, they identified a combination of three monomers that they could turn into polymers that dissolve DNA by helping it interact with styrene. Each of the monomers has different features that cooperate to get the DNA out of water and into the styrene. There, the DNA forms spherical complexes, with charged DNA in the center and hydrophobic groups forming an outer layer that interacts with styrene. When heated, this solution becomes a solid glass-like block, embedded with DNA complexes.

The researchers dubbed their method T-REX (Thermoset-REinforced Xeropreservation). The process of embedding DNA into the polymer network takes a few hours, but that could become shorter with further optimization, the researchers say.

To release the DNA, the researchers first add cysteamine, which cleaves the bonds holding the polystyrene thermoset together, breaking it into smaller pieces. Then, a detergent called SDS can be added to remove the DNA from polystyrene without damaging it.

Storing information

Using these polymers, the researchers showed that they could encapsulate DNA of varying length, from tens of nucleotides up to an entire human genome (more than 50,000 base pairs). They were able to store DNA encoding the Emancipation Proclamation and the MIT logo, in addition to the theme music from “Jurassic Park.”

After storing the DNA and then removing it, the researchers sequenced it and found that no errors had been introduced, which is a critical feature of any digital data storage system.

The researchers also showed that the thermoset polymer can protect DNA from temperatures up to 75 degrees Celsius (167 degrees Fahrenheit). They are now working on ways to streamline the process of making the polymers and forming them into capsules for long-term storage.

Cache DNA, a company started by Banal and Bathe, with Johnson as a member of the scientific advisory board, is now working on further developing DNA storage technology. The earliest application they envision is storing genomes for personalized medicine, and they also anticipate that these stored genomes could undergo further analysis as better technology is developed in the future.

“The idea is, why don’t we preserve the master record of life forever?” Banal says. “Ten years or 20 years from now, when technology has advanced way more than we could ever imagine today, we could learn more and more things. We’re still in the very infancy of understanding the genome and how it relates to disease.”

The research was funded by the National Science Foundation.

Bob Prior: A deep legacy of cultivating books at the MIT Press

After 36 years and hundreds of titles, the executive editor reflects on his career as a “champion of rigorous and brilliant scholarship.”

In his first years as an acquisitions editor at the MIT Press in the late 1980s, Bob Prior helped handle a burgeoning computer science list.

Thirty-six years later, Prior has edited hundreds of trade and scholarly books in areas as diverse as neuroscience, natural history, electronic privacy, evolution, and design — including a single novel that he was able to sneak onto his otherwise entirely nonfiction list. In more recent years as executive editor for biomedical science, neuroscience, and trade science, his work has focused on general-interest science books with an emphasis on the life sciences, neuroscience, and natural history.

Prior argues, though, that his work — while fundamentally remaining the same — has always felt keenly different from year to year. “You utilize the same kind of skills, but in service of very different authors and very different projects,” he says.

And after a career at the press spanning three-plus decades, Prior is set to retire at the end of June.

He will leave behind an incredible legacy — especially as “a master networker, an astute acquiring editor, and a champion of rigorous and brilliant scholarship,” says Bill Smith, director of sales and marketing at the MIT Press. “His curious mind is always on the lookout for brilliant scientists and authors who have something to say to the wider world.”

“I’ve always valued his excellent instincts and competitive drive as an acquisitions editor, and his passion for his work and for the research in the fields that he works on,” says Amy Brand, director and publisher of the press. “In recent years, he’s been very generous in providing astute guidance to me and other press colleagues on specific projects and our overall acquisitions program.”

“Best of all, Bob is a boundary pusher, constantly questioning the preconceptions of what a smart, general reader book can be,” says Smith.

For Prior, some of his favorite projects over his career at the press have been some of the most personal.

One of the books he is proudest of having worked on is “The Autobiography of a Transgender Scientist,” written by lauded neuroscientist Ben Barres and finished just prior to his death from pancreatic cancer in 2017. Prior was tasked with editing Barres’s book posthumously.

“It is an incredibly personal story; he talks about his experiences being an undergrad at MIT, his transition, and the challenges of his life,” says Prior. With the diligent care that is a hallmark of Prior’s work as an editor, he helped bring Barres’s final work to the public eye. “It’s a book I am very proud of because of Ben’s legacy and the person he was, and because every person I know who has read it has been transformed by it in some way,” Prior says. “The book has strongly impacted my view of the world.”

Other books acquired by Prior over the course of his career include “The Laws of Simplicity,” by John Maeda; “The Distracted Mind: Ancient Brains in a High-Tech World,” by Adam Gazzaley and Larry D. Rosen; “Consciousness: Confessions of a Romantic Reductionist,” by Christof Koch; “Blueprint: How DNA Makes Us Who We Are,” by Robert Plomin; and “The Alchemy of Us: How Humans and Matter Transformed One Another,” by Ainissa Ramirez.

According to Gita Manaktala, executive editor at large at the MIT Press, Prior’s dedication and incredible success throughout his career are no coincidence. “For nearly 40 years, Bob Prior has shown us how to cultivate books by scientists and technologists,” Manaktala says. Each week Prior writes to a list of people he has never met but whose work he admires. Sometimes he hears back, but just as often he does not, Manaktala says — or at least not right away. Even so, Prior has never given up, knowing that books and relationships take time and effort to build.

“His sustained interest in people, ideas, and their impact on the world is what makes a great editor,” Manaktala adds. “Bob has helped to grow hundreds of essential books from small seeds. The world of ideas is a richer, greener, and more fertile place for his efforts.”

“I’ll personally miss him and his insights a great deal,” says Brand.

“While my life after MIT Press will be full with family, friends, and meaningful work in my community, I will definitely miss the world of publishing and chasing down great authors,” Prior says of his 36 years at the press. “What I will miss the most are my incredible colleagues; what an amazing place to make a career.”

Symposium highlights scale of mental health crisis and novel methods of diagnosis and treatment

Co-hosted by the McGovern Institute, MIT Open Learning, and others, the symposium stressed emerging technologies in advancing understanding of mental health and neurological conditions.

Digital technologies, such as smartphones and machine learning, have revolutionized education. At the McGovern Institute for Brain Research’s 2024 Spring Symposium, “Transformational Strategies in Mental Health,” experts from across the sciences — including psychiatry, psychology, neuroscience, computer science, and others — agreed that these technologies could also play a significant role in advancing the diagnosis and treatment of mental health disorders and neurological conditions.

Co-hosted by the McGovern Institute, MIT Open Learning, McLean Hospital, the Poitras Center for Psychiatric Disorders Research at MIT, and the Wellcome Trust, the symposium raised the alarm about the rise in mental health challenges and showcased the potential for novel diagnostic and treatment methods.

John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology at MIT, kicked off the symposium with a call for an effort on par with the Manhattan Project, which in the 1940s saw leading scientists collaborate to do what seemed impossible. While the challenge of mental health is quite different, Gabrieli stressed, the complexity and urgency of the issue are similar. In his later talk, “How can science serve psychiatry to enhance mental health?,” he noted a 35 percent rise in teen suicide deaths between 1999 and 2000 and, between 2007 and 2015, a 100 percent increase in emergency room visits for youths ages 5 to 18 who experienced a suicide attempt or suicidal ideation.

“We have no moral ambiguity, but all of us speaking today are having this meeting in part because we feel this urgency,” said Gabrieli, who is also a professor of brain and cognitive sciences, the director of the Integrated Learning Initiative (MITili) at MIT Open Learning, and a member of the McGovern Institute. “We have to do something together as a community of scientists and partners of all kinds to make a difference.”

An urgent problem

In 2021, U.S. Surgeon General Vivek Murthy issued an advisory on the increase in mental health challenges in youth; in 2023, he issued another, warning of the effects of social media on youth mental health. At the symposium, Susan Whitfield-Gabrieli, a research affiliate at the McGovern Institute and a professor of psychology and director of the Biomedical Imaging Center at Northeastern University, cited these recent advisories, saying they underscore the need to “innovate new methods of intervention.”

Other symposium speakers also highlighted evidence of growing mental health challenges for youth and adolescents. Christian Webb, associate professor of psychology at Harvard Medical School, stated that by the end of adolescence, 15-20 percent of teens will have experienced at least one episode of clinical depression, with girls facing the highest risk. Most teens who experience depression receive no treatment, he added.

Adults who experience mental health challenges need new interventions, too. John Krystal, the Robert L. McNeil Jr. Professor of Translational Research and chair of the Department of Psychiatry at Yale University School of Medicine, pointed to the limited efficacy of antidepressants, which typically take about two months to have an effect on the patient. Patients with treatment-resistant depression face a 75 percent likelihood of relapse within a year of starting antidepressants. Treatments for other mental health disorders, including bipolar and psychotic disorders, have serious side effects that can deter patients from adherence, said Virginie-Anne Chouinard, director of research at McLean OnTrack™, a program for first-episode psychosis at McLean Hospital.

New treatments, new technologies

Emerging technologies, including smartphone technology and artificial intelligence, are key to the interventions that symposium speakers shared.

In a talk on AI and the brain, Dina Katabi, the Thuan and Nicole Pham Professor of Electrical Engineering and Computer Science at MIT, discussed novel ways to detect Parkinson’s and Alzheimer's, among other diseases. Early-stage research involved developing devices that can analyze how movement within a space impacts the surrounding electromagnetic field, as well as how wireless signals can detect breathing and sleep stages.

“I realize this may sound like la-la land,” Katabi said. “But it’s not! This device is used today by real patients, enabled by a revolution in neural networks and AI.”

Parkinson’s disease often cannot be diagnosed until significant impairment has already occurred. In a set of studies, Katabi’s team collected data on nocturnal breathing and trained a custom neural network to detect occurrences of Parkinson’s. They found the network was over 90 percent accurate in its detection. Next, the team used AI to analyze two sets of breathing data collected from patients at a six-year interval. Could their custom neural network identify patients who did not have a Parkinson’s diagnosis on the first visit, but subsequently received one? The answer was largely yes: Machine learning identified 75 percent of patients who would go on to receive a diagnosis.

Detecting high-risk patients at an early stage could make a substantial difference for intervention and treatment. Similarly, research by Jordan Smoller, professor of psychiatry at Harvard Medical School and director of the Center for Precision Psychiatry at Massachusetts General Hospital, demonstrated that an AI-aided suicide risk prediction model could detect 45 percent of suicide attempts or deaths with 90 percent specificity, about two to three years in advance.
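The two figures quoted are standard screening metrics. As a quick illustration (with made-up counts, not data from the study), sensitivity and specificity can each be computed from two cells of a confusion matrix:

```python
# Sensitivity and specificity from confusion-matrix counts. The numbers
# below are hypothetical, chosen only to mirror the percentages quoted
# in the article; they are not data from the study.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true events the model detects (also called recall)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of non-events the model correctly rules out."""
    return tn / (tn + fp)

# Hypothetical screen of 10,000 patients, 100 of whom had an event:
tp, fn = 45, 55        # events caught vs. missed
tn, fp = 8910, 990     # non-events cleared vs. false alarms

print(sensitivity(tp, fn))   # 0.45  -> "detects 45 percent of attempts"
print(specificity(tn, fp))   # 0.9   -> "with 90 percent specificity"
```

High specificity matters in this setting because even a small false-alarm rate, applied to a large patient population, produces many flagged patients who were never at risk.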

Other presentations, including a series of lightning talks, shared new and emerging treatments, such as the use of ketamine to treat depression; the use of smartphones, including daily text surveys and mindfulness apps, in treating depression in adolescents; metabolic interventions for psychotic disorders; the use of machine learning to detect impairment from THC intoxication; and family-focused treatment, rather than individual therapy, for youth depression.

Advancing understanding

The frequency and severity of adverse mental health events for children, adolescents, and adults demonstrate the necessity of funding for mental health research — and the open sharing of these findings.

Niall Boyce, head of mental health field building at the Wellcome Trust — a global charitable foundation dedicated to using science to solve urgent health challenges — outlined the foundation’s funding philosophy of supporting research that is “collaborative, coherent, and focused” and centers on “What is most important to those most affected?” Wellcome research managers Anum Farid and Tayla McCloud stressed the importance of projects that involve people with lived experience of mental health challenges and “blue sky thinking” that takes risks and can advance understanding in innovative ways. Wellcome requires that all published research resulting from its funding be open and accessible in order to maximize its benefits.

Whether through therapeutic models, pharmaceutical treatments, or machine learning, symposium speakers agreed that transformative approaches to mental health call for collaboration and innovation.

“Understanding mental health requires us to understand the unbelievable diversity of humans,” Gabrieli said. “We have to use all the tools we have now to develop new treatments that will work for people for whom our conventional treatments don’t.”

Just thinking about a location activates mental maps in the brain

MIT neuroscientists have found that the brain uses the same cognitive representations whether navigating through space physically or mentally.

As you travel your usual route to work or the grocery store, your brain engages cognitive maps stored in your hippocampus and entorhinal cortex. These maps store information about paths you have taken and locations you have been to before, so you can navigate whenever you go there.

New research from MIT has found that such mental maps also are created and activated when you merely think about sequences of experiences, in the absence of any physical movement or sensory input. In an animal study, the researchers found that the entorhinal cortex harbors a cognitive map of what animals experience while they use a joystick to browse through a sequence of images. These cognitive maps are then activated when thinking about these sequences, even when the images are not visible.

This is the first study to show the cellular basis of mental simulation and imagination in a nonspatial domain through activation of a cognitive map in the entorhinal cortex.

“These cognitive maps are being recruited to perform mental navigation, without any sensory input or motor output. We are able to see a signature of this map presenting itself as the animal is going through these experiences mentally,” says Mehrdad Jazayeri, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

McGovern Institute Research Scientist Sujaya Neupane is the lead author of the paper, which appears today in Nature. Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and director of the K. Lisa Yang Integrative Computational Neuroscience Center, is also an author of the paper.

Mental maps

A great deal of work in animal models and humans has shown that representations of physical locations are stored in the hippocampus, a small seahorse-shaped structure, and the nearby entorhinal cortex. These representations are activated whenever an animal moves through a space that it has been in before, just before it traverses the space, or when it is asleep.

“Most prior studies have focused on how these areas reflect the structures and the details of the environment as an animal moves physically through space,” Jazayeri says. “When an animal moves in a room, its sensory experiences are nicely encoded by the activity of neurons in the hippocampus and entorhinal cortex.”

In the new study, Jazayeri and his colleagues wanted to explore whether these cognitive maps are also built and then used during purely mental run-throughs or imagining of movement through nonspatial domains.

To explore that possibility, the researchers trained animals to use a joystick to trace a path through a sequence of images (“landmarks”) spaced at regular temporal intervals. During the training, the animals were shown only a subset of pairs of images but not all the pairs. Once the animals had learned to navigate through the training pairs, the researchers tested if animals could handle the new pairs they had never seen before.

One possibility is that animals do not learn a cognitive map of the sequence, and instead solve the task using a memorization strategy. If so, they would be expected to struggle with the new pairs. Instead, if the animals were to rely on a cognitive map, they should be able to generalize their knowledge to the new pairs.

“The results were unequivocal,” Jazayeri says. “Animals were able to mentally navigate between the new pairs of images from the very first time they were tested. This finding provided strong behavioral evidence for the presence of a cognitive map. But how does the brain establish such a map?”

To address this question, the researchers recorded from single neurons in the entorhinal cortex as the animals performed this task. Neural responses had a striking feature: As the animals used the joystick to navigate between two landmarks, neurons featured distinctive bumps of activity associated with the mental representation of the intervening landmarks.

“The brain goes through these bumps of activity at the expected time when the intervening images would have passed by the animal’s eyes, which they never did,” Jazayeri says. “And the timing between these bumps, critically, was exactly the timing that the animal would have expected to reach each of those, which in this case was 0.65 seconds.”

The researchers also showed that the speed of the mental simulation was related to the animals’ performance on the task: When they were a little late or early in completing the task, their brain activity showed a corresponding change in timing. The researchers also found evidence that the mental representations in the entorhinal cortex don’t encode specific visual features of the images, but rather the ordinal arrangement of the landmarks.

A model of learning

To further explore how these cognitive maps may work, the researchers built a computational model to mimic the brain activity that they found and demonstrate how it could be generated. They used a type of model known as a continuous attractor model, which was originally developed to model how the entorhinal cortex tracks an animal’s position as it moves, based on sensory input.

The researchers customized the model by adding a component that was able to learn the activity patterns generated by sensory input. This model was then able to learn to use those patterns to reconstruct those experiences later, when there was no sensory input.

“The key element that we needed to add is that this system has the capacity to learn bidirectionally by communicating with sensory inputs. Through the associational learning that the model goes through, it will actually recreate those sensory experiences,” Jazayeri says.
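The core idea of a continuous attractor network can be sketched in a few lines. The code below is a minimal, illustrative version (its parameters and pure-Python formulation are invented for illustration, not taken from the study): neurons arranged on a ring excite their near neighbors and inhibit everyone else, so a localized “bump” of activity persists after the cue that created it is gone.

```python
import math

# Minimal ring-attractor sketch (illustrative parameters, not the
# study's model). Neurons sit on a ring; each excites near neighbors
# (Gaussian in ring distance) and uniformly inhibits the rest. A brief
# cue creates a bump of activity that then sustains itself with no
# further input.

N = 64

def ring_dist(i: int, j: int) -> int:
    d = abs(i - j)
    return min(d, N - d)

# Recurrent weights: local excitation minus global inhibition.
W = [[math.exp(-ring_dist(i, j) ** 2 / (2 * 4.0 ** 2)) - 0.15
      for j in range(N)] for i in range(N)]

# Brief external cue centered on neuron 20, then no input at all.
r = [1.0 if ring_dist(i, 20) <= 2 else 0.0 for i in range(N)]

for _ in range(50):
    drive = [sum(W[i][j] * r[j] for j in range(N)) for i in range(N)]
    r = [max(0.0, x) for x in drive]            # firing rates can't go negative
    norm = math.sqrt(sum(x * x for x in r)) or 1.0
    r = [x / norm for x in r]                   # keep total activity fixed

print(r.index(max(r)))   # → 20: the bump stays where the cue was
```

The study’s model additionally learns associations with sensory input so the network can replay experienced sequences; this sketch shows only the self-sustaining bump that such models build on.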

The researchers now plan to investigate what happens in the brain if the landmarks are not evenly spaced, or if they’re arranged in a ring. They also hope to record brain activity in the hippocampus and entorhinal cortex as the animals first learn to perform the navigation task.

“Seeing the memory of the structure become crystallized in the mind, and how that leads to the neural activity that emerges, is a really valuable way of asking how learning happens,” Jazayeri says.

The research was funded by the Natural Sciences and Engineering Research Council of Canada, the Québec Research Funds, the National Institutes of Health, and the Paul and Lilah Newton Brain Science Award.

Nancy Kanwisher, Robert Langer, and Sara Seager named Kavli Prize Laureates

MIT scientists honored in each of the three Kavli Prize categories: neuroscience, nanoscience, and astrophysics, respectively.

MIT faculty members Nancy Kanwisher, Robert Langer, and Sara Seager are among eight researchers worldwide to receive this year’s Kavli Prizes.

A partnership among the Norwegian Academy of Science and Letters, the Norwegian Ministry of Education and Research, and the Kavli Foundation, the Kavli Prizes are awarded every two years to “honor scientists for breakthroughs in astrophysics, nanoscience and neuroscience that transform our understanding of the big, the small and the complex.” The laureates in each field will share $1 million.

Understanding recognition of faces

Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and an investigator at the McGovern Institute for Brain Research, has been awarded the 2024 Kavli Prize in Neuroscience with Doris Tsao, professor in the Department of Molecular and Cell Biology at the University of California at Berkeley, and Winrich Freiwald, the Denise A. and Eugene W. Chinery Professor at the Rockefeller University.

Kanwisher, Tsao, and Freiwald discovered a specialized system within the brain to recognize faces. Their discoveries have provided basic principles of neural organization and a starting point for further research on how the processing of visual information is integrated with other cognitive functions.

Kanwisher was the first to prove that a specific area in the human neocortex is dedicated to recognizing faces, now called the fusiform face area. Using functional magnetic resonance imaging, she found individual differences in the location of this area and devised an analysis technique to effectively localize specialized functional regions in the brain. This technique is now widely used and applied to domains beyond the face recognition system. 

Integrating nanomaterials for biomedical advances

Robert Langer, the David H. Koch Institute Professor, has been awarded the 2024 Kavli Prize in Nanoscience with Paul Alivisatos, president of the University of Chicago and John D. MacArthur Distinguished Service Professor in the Department of Chemistry, and Chad Mirkin, professor of chemistry at Northwestern University.

Langer, Alivisatos, and Mirkin each revolutionized the field of nanomedicine by demonstrating how engineering at the nano scale can advance biomedical research and application. Their discoveries contributed foundationally to the development of therapeutics, vaccines, bioimaging, and diagnostics.

Langer was the first to develop nanoengineered materials that enabled the controlled release, or regular flow, of drug molecules. This capability has had an immense impact on the treatment of a range of diseases, such as aggressive brain cancer, prostate cancer, and schizophrenia. His work also showed that tiny particles, containing protein antigens, can be used in vaccination, and was instrumental in the development of messenger RNA vaccine delivery.

Searching for life beyond Earth

Sara Seager, the Class of 1941 Professor of Planetary Sciences in the Department of Earth, Atmospheric and Planetary Sciences and a professor in the departments of Physics and of Aeronautics and Astronautics, has been awarded the 2024 Kavli Prize in Astrophysics along with David Charbonneau, the Fred Kavli Professor of Astrophysics at Harvard University.

Seager and Charbonneau are recognized for discoveries of exoplanets and the characterization of their atmospheres. They pioneered methods for the detection of atomic species in planetary atmospheres and the measurement of their thermal infrared emission, setting the stage for finding the molecular fingerprints of atmospheres around both giant and rocky planets. Their contributions have been key to the enormous progress seen in the last 20 years in the exploration of myriad exoplanets. 

Kanwisher, Langer, and Seager bring the number of all-time MIT faculty recipients of the Kavli Prize to eight. Prior winners include Rainer Weiss in astrophysics (2016), Alan Guth in astrophysics (2014), Mildred Dresselhaus in nanoscience (2012), Ann Graybiel in neuroscience (2012), and Jane Luu in astrophysics (2012).

Making climate models relevant for local decision-makers

A new downscaling method leverages machine learning to speed up climate model simulations at finer resolutions, making them usable on local levels.

Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to appropriately respond. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city. 

Now, authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method to leverage machine learning to utilize the benefits of current climate models, while reducing the computational costs needed to run them. 

“It turns the traditional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) who wrote the paper with EAPS postdoc Anamitra Saha. 

Traditional wisdom

In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: A global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at — for example, Boston. But because the original picture was low resolution, the new version is blurry; it doesn’t give enough detail to be particularly useful. 

“If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. “That addition of information can happen two ways: Either it can come from theory, or it can come from data.” 
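Plain interpolation shows why zooming in is not enough. The sketch below (illustrative pure-Python code, not the paper’s method) bilinearly upsamples a tiny 2×2 “global model” grid: the output is larger and smoother, but every new value is just a weighted average of the original four, so no information has been added.

```python
# Bilinear upsampling of a coarse grid. Each fine-grid value is a
# weighted average of its four nearest coarse-grid neighbors -- the
# enlarged image is smooth but blurry, because interpolation cannot
# invent detail that the coarse grid never contained.

def upsample(grid, factor):
    rows, cols = len(grid), len(grid[0])
    out_rows, out_cols = rows * factor, cols * factor
    out = []
    for r in range(out_rows):
        y = r * (rows - 1) / (out_rows - 1)     # position in coarse grid
        r0, fy = int(y), y - int(y)
        r1 = min(r0 + 1, rows - 1)
        row = []
        for c in range(out_cols):
            x = c * (cols - 1) / (out_cols - 1)
            c0, fx = int(x), x - int(x)
            c1 = min(c0 + 1, cols - 1)
            top = grid[r0][c0] * (1 - fx) + grid[r0][c1] * fx
            bot = grid[r1][c0] * (1 - fx) + grid[r1][c1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# A 2x2 "global model" cell, blown up 4x: 8x8, smooth, and blurry.
coarse = [[0.0, 1.0],
          [1.0, 2.0]]
fine = upsample(coarse, 4)
print(len(fine), len(fine[0]))   # 8 8
```

Downscaling methods earn their keep precisely in the gap this leaves: they supply the missing fine-scale detail from physics or from historical data.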

Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area), and supplementing it with statistical data taken from historical observations. But this method is computationally taxing: It takes a lot of time and computing power to run, while also being expensive. 

A little bit of both 

In their new paper, Saha and Ravela have figured out a way to add the data another way, employing a machine learning technique called adversarial learning. It uses two machines: one generates data to fill in the photo, while the other judges that data by comparing it to actual data. If the second machine deems the image fake, the first machine has to try again until it produces a convincing sample. The end goal of the process is to create super-resolution data.
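The two-machine loop can be sketched with a toy one-dimensional example. The code below is illustrative only (a generic adversarial setup, not the researchers’ model): a one-parameter “generator” tries to imitate samples from a target distribution, while a logistic “discriminator” learns to tell real samples from generated ones, and the two take turns improving.

```python
import math
import random

# Toy adversarial loop (illustrative, not the paper's model). The
# generator g(z) = g_mu + g_sd * z tries to imitate draws from a target
# Gaussian; the discriminator d(x) is a logistic classifier estimating
# the probability that a sample is real. They are trained in alternation.

random.seed(0)
mu_true, sd_true = 2.0, 0.5   # the "actual data" distribution
g_mu, g_sd = 0.0, 1.0         # generator starts far from the truth
w, b = 0.0, 0.0               # discriminator parameters
lr = 0.05

def d(x: float) -> float:
    """Discriminator: estimated probability that x is a real sample."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

for _ in range(3000):
    real = random.gauss(mu_true, sd_true)
    z = random.gauss(0.0, 1.0)
    fake = g_mu + g_sd * z

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    w += lr * ((1 - d(real)) * real - d(fake) * fake)
    b += lr * ((1 - d(real)) - d(fake))

    # Generator step: move fake samples toward where d says "real".
    grad = (1 - d(fake)) * w
    g_mu += lr * grad
    g_sd += lr * grad * z

print(round(g_mu, 1))   # should drift toward mu_true = 2.0
```

The same tug-of-war, scaled up to images, is what drives super-resolution generators: the discriminator keeps rejecting fine-scale detail that does not look like real data.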

Using machine learning techniques like adversarial learning is not a new idea in climate modeling; where these techniques have struggled is in handling large amounts of basic physics, like conservation laws. The researchers discovered that simplifying the physics going in and supplementing it with statistics from the historical data was enough to generate the results they needed.

“If you augment machine learning with some information from the statistics and simplified physics both, then suddenly, it’s magical,” says Ravela. He and Saha started with estimating extreme rainfall amounts by removing more complex physics equations and focusing on water vapor and land topography. They then generated general rainfall patterns for mountainous Denver and flat Chicago alike, applying historical accounts to correct the output. “It’s giving us extremes, like the physics does, at a much lower cost. And it’s giving us similar speeds to statistics, but at much higher resolution.” 

Another unexpected benefit of the results was how little training data was needed. “The fact that only a little bit of physics and a little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not obvious from the beginning,” says Saha. The model takes only a few hours to train and can produce results in minutes, an improvement over the months other models take to run.

Quantifying risk quickly

Being able to run the models quickly and often is a key requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: By seeing how extreme weather events will impact the country, decisions about what crops should be grown or where populations should migrate to can be made considering a very broad range of conditions and uncertainties as soon as possible.

“We can’t wait months or years to be able to quantify this risk,” he says. “You need to look out way into the future and at a large number of uncertainties to be able to say what might be a good decision.”

While the current model only looks at extreme precipitation, training it to examine other critical events, such as tropical storms, winds, and temperature, is the next step of the project. With a more robust model, Ravela is hoping to apply it to other places like Boston and Puerto Rico as part of a Climate Grand Challenges project.

“We’re very excited both by the methodology that we put together, as well as the potential applications that it could lead to,” he says. 

Catalyst Symposium helps lower “activation barriers” for rising biology researchers

Second annual assembly, sponsored by the Department of Biology and Picower Institute, invited postdocs from across the country to meet with faculty, present their work to the MIT community, and build relationships.

For science — and the scientists who practice it — to succeed, research must be shared. That’s why members of the MIT community recently gathered to learn about the research of eight postdocs from across the country for the second annual Catalyst Symposium, an event co-sponsored by the Department of Biology and The Picower Institute for Learning and Memory.

The eight Catalyst Fellows came to campus as part of an effort to increase engagement between MIT scholars and postdocs from backgrounds traditionally underrepresented in science who are excelling in their respective fields. The three-day symposium included panel discussions with faculty and postdocs, one-on-one meetings, social events, and research talks from the Catalyst Fellows.

“I love the name of this symposium because we’re all, of course, eager to catalyze advancements in our professional lives, in science, and to move forward faster by lowering activation barriers,” says MIT biology department head Amy Keating. “I feel we can’t afford to do science with only part of the talent pool, and I don’t think people can do their best work when they are worried about whether they belong.” 

The 2024 Catalyst Fellows include Chloé Baron from Boston Children’s Hospital; Maria Cecília Canesso from The Rockefeller University; Kiara Eldred from the University of Washington School of Medicine; Caitlin Kowalski from the University of Oregon; Fabián Morales-Polanco from Stanford University; Kali Pruss from the Washington University School of Medicine in St. Louis; Rodrigo Romero from Memorial Sloan Kettering Cancer Center; and Zuri Sullivan from Harvard University.

Romero, who received his PhD from MIT working in the Jacks Lab at the Koch Institute, said that it was “incredible to see so many familiar faces,” but he spent the symposium lunch chatting with new students in his old lab.

“Especially having been trained to think differently after MIT, I can now reach out to people that I didn’t as a graduate student, and make connections that I didn’t think about before,” Romero says.

He presented his work on lineage plasticity in the tumor microenvironment. Lineage plasticity is a hallmark of tumor progression but also occurs in normal processes such as development and wound healing.

As for the general mission of the symposium, Romero agrees with Keating.

“Trying to lower the boundary for other people to actually have a chance to do academic research in the future is important,” Romero says.

The Catalyst Symposium is aimed at early-career scientists who foresee a path in academia. Of the 2023 Catalyst Fellows, one has already secured a faculty position. Starting this September, Shan Maltzer will be an assistant professor at Vanderbilt University in the Department of Pharmacology and the Vanderbilt Brain Institute studying mechanisms of somatosensory circuit assembly, development, and function.

Another aim of the Catalyst Symposium is to facilitate collaborations and strengthen existing relationships. Sullivan, an immunologist and molecular neuroscientist who presented on the interactions between the immune system and the brain, is collaborating with Sebastian Lourido, an associate professor of biology and core member of the Whitehead Institute for Biomedical Research. Lourido’s studies include pathogens such as Toxoplasma gondii, which is known to alter the behavior of infected rodents. In the long term, Sullivan hopes to bridge research in immunology and neuroscience — for instance by investigating how infection affects behavior. She has observed that two rodents experiencing illness will huddle together in a cage, whereas an unafflicted rodent and an ill one will generally avoid each other when sharing the same space.

Pruss presented research on the interactions between the gut microbiome and the environment, and how they may affect physiology and fetal development. Kowalski discussed the relationship between fungi residing on our bodies and human health. Beyond the opportunity to deliver talks, both agreed that the small group settings of the three-day event were rewarding.

“The opportunity to meet with faculty throughout the symposium has been invaluable, both for finding familiar faces and for establishing friendly relationships,” Pruss says. “You don’t have to try to catch them when you’re running past them in the hallway.”

Eldred, who studies cell fate in the human retina, says she was excited about the faculty panels because they allowed her to ask faculty about fundamental aspects of recruiting for their labs, like bringing in graduate students.

Kowalski also says she enjoyed engaging with so many new ideas, since the range of scientific topics across the cohort of speakers extended well beyond those she usually encounters.

Mike Laub, professor of biology and Howard Hughes Medical Institute investigator, and Yadira Soto-Feliciano, assistant professor of biology and intramural faculty at the Koch Institute for Integrative Cancer Research, were on the symposium's planning committee, along with Diversity, Equity, and Inclusion Officer Hallie Dowling-Huppert. Laub hopes the symposium will continue to be offered annually; next year’s Catalyst Symposium is already scheduled to take place in early May.

“I thought this year’s Catalyst Symposium was another great success. The talks from the visiting fellows featured some amazing science from a wide range of fields,” Laub says. “I also think it’s fair to say that their interactions with the faculty, postdocs, and students here generated a lot of excitement and energy in our community, which is exactly what we hoped to accomplish with this symposium.”

Students research pathways for MIT to reach decarbonization goals

A class this semester challenged students to evaluate technologies to help MIT decarbonize — with implications for organizations across the globe.

A number of emerging technologies hold promise for helping organizations move away from fossil fuels and achieve deep decarbonization. The challenge is deciding which technologies to adopt, and when.

MIT, which has a goal of eliminating direct campus emissions by 2050, must make such decisions sooner than most to achieve its mission. That was the challenge at the heart of the recently concluded class 4.s42 (Building Technology — Carbon Reduction Pathways for the MIT Campus).

The class brought together undergraduate and graduate students from across the Institute to learn about different technologies and decide on the best path forward. It concluded with a final report as well as student presentations to members of MIT’s Climate Nucleus on May 9.

“The mission of the class is to put together a cohesive document outlining how MIT can reach its goal of decarbonization by 2050,” says Morgan Johnson Quamina, an undergraduate in the Department of Civil and Environmental Engineering. “We’re evaluating how MIT can reach these goals on time, what sorts of technologies can help, and how quickly and aggressively we’ll have to move. The final report details a ton of scenarios for partial and full implementation of different technologies, outlines timelines for everything, and features recommendations.”

The class was taught by professor of architecture Christoph Reinhart but included presentations by other faculty about low- and zero-carbon technology areas in their fields, including advanced nuclear reactors, deep geothermal energy, carbon capture, and more.

The students’ work served as an extension of MIT’s Campus Decarbonization Working Group, which Reinhart co-chairs with Director of Sustainability Julie Newman. The group is charged with developing a technology roadmap for the campus to reach its goal of decarbonizing its energy systems.

Reinhart says the class was a way to leverage the energy and creativity of students to accelerate his group’s work.

“It’s very much focused on establishing a vision for what could happen at MIT,” Reinhart says. “We are trying to bring these technologies together so that we see how this [decarbonization process] would actually look on our campus.”

A class with impact

Throughout the semester, every Thursday from 9 a.m. to 12 p.m., around 20 students gathered to explore different decarbonization technology pathways. They also discussed energy policies, methods for evaluating risk, and future electric grid supply changes in New England.

“I love that this work can have a real-world impact,” says Emile Germonpre, a master’s student in the Department of Nuclear Science and Engineering. “You can tell people aren’t thinking about grades or workload — I think people would’ve loved it even if the workload was doubled. Everyone is just intrinsically motivated to help solve this problem.”

The classes typically began with an introduction to one of 10 different technologies. The introductions covered technical maturity, ease of implementation, costs, and how to model the technology’s impact on campus emissions. Students were then split into teams to evaluate each technology’s feasibility.

“I’ve learned a lot about decarbonization and climate change,” says Johnson Quamina. “As an undergrad, I haven’t had many focused classes like this. But it was really beneficial to learn about some of these technologies I hadn’t even heard of before. It’s awesome to be contributing to the community like this.”

As part of the class, students also developed a model that visualizes each intervention’s effect on emissions, allowing users to select interventions or combinations of interventions to see how they shape emissions trajectories.

“We have a physics-based model that takes into account every building,” says Reinhart. “You can look at variants where we retrofit buildings, where we add rooftop photovoltaics, nuclear, carbon capture, and adopting different types of district underground heating systems. The point is you can start to see how fast we could do something like this and what the real game-changers are.”

The class also designed and conducted a preliminary survey, to be expanded in the fall, that captures the MIT community's attitudes towards the different technologies. Preliminary results were shared with the Climate Nucleus during students’ May 9 presentations.

“I think it’s this unique and wonderful intersection of the forward-looking and innovative nature of academia with real world impact and specificity that you’d typically only find in industry,” Germonpre says. “It lets you work on a tangible project, the MIT campus, while exploring technologies that companies today find too risky to be the first mover on.”

From MIT’s campus to the world

The students recommended MIT form a building energy team to audit and retrofit all campus buildings. They also suggested MIT order a comprehensive geological feasibility survey to support planning regarding shallow and deep borehole fields for harvesting underground heat. A third recommendation was to communicate with the MIT community as well as with regulators and policymakers in the area about the deployment of nuclear batteries and deep geothermal boreholes on campus.

The students’ modeling tool can also help members of the working group explore various decarbonization pathways. For instance, installing rooftop photovoltaics now would effectively reduce emissions, but installing them in a few decades, when the regional electricity grid is expected to have reduced its reliance on fossil fuels anyway, would have a much smaller impact.

“When you have students working together, the recommendations are a little less filtered, which I think is a good thing,” Reinhart says. “I think there’s a real sense of urgency in the class. For certain choices, we have to basically act now.”

Reinhart plans to do more activities related to the Working Group and the class’ recommendations in the fall, and he says he’s currently engaged with the Massachusetts Governor's Office to explore doing something similar for the state.

Students say they plan to keep working on the survey this summer and continue studying their technology areas. In the longer term, they believe the experience will help them in their careers.

“Decarbonization is really important, and understanding how we can implement new technologies on campuses or in buildings provides me with a more well-rounded vision for what I could design in my career,” says Johnson Quamina, who wants to work as a structural or environmental engineer but says the class has also inspired her to consider careers in energy.

The students’ findings also have implications beyond the MIT campus. In accordance with MIT’s 2015 climate plan, which committed to using the campus community as a “test bed for change,” the students’ recommendations also hold value for organizations around the world.

“The mission is definitely broader than just MIT,” Germonpre says. “We don’t just want to solve MIT’s problem. We’ve dismissed technologies that were too specific to MIT. The goal is for MIT to lead by example and help certain technologies mature so that we can accelerate their impact.”

Paying it forward

Professors Erik Lin-Greenberg and Tracy Slatyer are honored as “Committed to Caring.”

MIT professors Erik Lin-Greenberg and Tracy Slatyer truly understand the positive impact that advisors have in the life of a graduate student. Two of the most recent faculty members to be named “Committed to Caring,” they attribute their excellence in advising to the challenging experiences and life-changing mentorship they received during their own graduate school journeys.

Tracy Slatyer: Seeing the PhD as a journey

Tracy Slatyer is a professor in the Department of Physics who works on particle physics, cosmology, and astrophysics. Focused on unraveling the mysteries of dark matter, Slatyer investigates potential new physics through the analysis of astrophysical and cosmological data, exploring scenarios involving novel forces and theoretical predictions for photon signals.

One of Slatyer’s key approaches is to prioritize students’ development into independent researchers over academic accomplishments alone, while also acknowledging the prevalence of impostor syndrome.

Having struggled with impostor syndrome in graduate school themselves, Slatyer shares their own past challenges and encourages students to see the big picture: “I try to remind [students] that the PhD is a marathon, not a sprint, and that once you have your PhD, nobody will care if it took you one year or three to get through all the qualifying exams and required classes.” Many students also expressed gratitude for how Slatyer offered opportunities to connect outside of work, including invitations to tea-time.

Slatyer encourages students to seek advice and mentorship from a range of colleagues at different career stages, and to explore their interests even where those lie outside their advisor’s primary field of research, including building connections with other professors. They believe in supporting community amongst students and postdocs, and the value of a broad and robust network of mentors to guide students in achieving their individual goals.

Advisees noted Slatyer’s realistic portrayal of expectations within the field and open discussion of work-life balance. They maintain a document with clear advising guidelines, such as placing new students on projects with experienced researchers. Slatyer also schedules weekly meetings to discuss non-research topics, including career goals and upcoming talks.

In addition, Slatyer does not shy away from the fact that their field is competitive and demanding. They try to be candid about their experiences in academia (both negative and positive), noting that the support and advice they have received from a diverse range of mentors have been key to their own successful career.

Erik Lin-Greenberg: Empathy and enduring support

Erik Lin-Greenberg is an assistant professor in the history and culture of science and technology in the Department of Political Science. His research examines how emerging military technology affects conflict dynamics and the use of force.

Lin-Greenberg’s thoughtful supervision of his students reflects his commitment to cultivating the next generation of researchers. Students are grateful for his knack for identifying weak arguments, as well as his guidance through challenging publication processes: “For my dissertation, Erik has mastered the difficult art of giving feedback in a way that does not discourage.”

Lin-Greenberg's personalized approach is further evidence of his exceptional teaching. In the classroom, students praise his thorough preparation, ability to facilitate rich discussions, and flexibility during high-pressure periods. In addition, his unique ability to break down complex material makes topics accessible to the diverse array of backgrounds in the classroom.

His mentorship extends far beyond academics, encompassing a genuine concern for the well-being of his students through providing personal check-ins and unwavering support.

Much of this empathy comes from Erik’s own tumultuous beginnings in graduate school at Columbia University, where he struggled to keep up with coursework and seriously considered leaving the program. He points to the care and dedication of mentors, and advisor Tonya Putnam in particular, as having an enormous impact.

“She consistently reassured me that I was doing interesting work, gave amazing feedback on my research, and was always open and transparent,” he recounts. “When I'm advising today, I constantly try to live up to Tonya's example.”

In his own group, Erik chooses creative approaches to mentorship, including taking mentees out for refreshments to navigate difficult dissertation discussions. In his students’ moments of despair, he boosts their mood with photos of his cat, Major General Lansdale.

Ultimately, one nominator credited his ability to continue his PhD to Lin-Greenberg’s uplifting spirit and endless encouragement: “I cannot imagine anyone more deserving of recognition than Erik Lin-Greenberg.”

Exotic black holes could be a byproduct of dark matter

In the first quintillionth of a second, the universe may have sprouted microscopic black holes with enormous amounts of nuclear charge, MIT physicists propose.

For every kilogram of matter that we can see — from the computer on your desk to distant stars and galaxies — there are 5 kilograms of invisible matter that suffuse our surroundings. This “dark matter” is a mysterious entity that evades all forms of direct observation yet makes its presence felt through its invisible pull on visible objects.

Fifty years ago, physicist Stephen Hawking offered one idea for what dark matter might be: a population of black holes, which might have formed very soon after the Big Bang. Such “primordial” black holes would not have been the goliaths that we detect today, but rather microscopic regions of ultradense matter that would have formed in the first quintillionth of a second following the Big Bang and then collapsed and scattered across the cosmos, tugging on surrounding space-time in ways that could explain the dark matter that we know today.

Now, MIT physicists have found that this primordial process also would have produced some unexpected companions: even smaller black holes with unprecedented amounts of a nuclear-physics property known as “color charge.”

These smallest, “super-charged” black holes would have been an entirely new state of matter, which likely evaporated a fraction of a second after they spawned. Yet they could still have influenced a key cosmological transition: the time when the first atomic nuclei were forged. The physicists postulate that the color-charged black holes could have affected the balance of fusing nuclei, in a way that astronomers might someday detect with future measurements. Such an observation would point convincingly to primordial black holes as the root of all dark matter today.

“Even though these short-lived, exotic creatures are not around today, they could have affected cosmic history in ways that could show up in subtle signals today,” says David Kaiser, the Germeshausen Professor of the History of Science and professor of physics at MIT. “Within the idea that all dark matter could be accounted for by black holes, this gives us new things to look for.”

Kaiser and his co-author, MIT graduate student Elba Alonso-Monsalve, have published their study today in the journal Physical Review Letters.

A time before stars

The black holes that we know and detect today are the product of stellar collapse, when the center of a massive star caves in on itself to form a region so dense that it can bend space-time such that anything — even light — gets trapped within. Such “astrophysical” black holes can be anywhere from a few times as massive as the sun to many billions of times more massive.

“Primordial” black holes, in contrast, can be much smaller and are thought to have formed in a time before stars. Before the universe had even cooked up the basic elements, let alone stars, scientists believe that pockets of ultradense, primordial matter could have accumulated and collapsed to form microscopic black holes that could have been so dense as to squeeze the mass of an asteroid into a region as small as a single atom. The gravitational pull from these tiny, invisible objects scattered throughout the universe could explain all the dark matter that we can’t see today.

If that were the case, then what would these primordial black holes have been made from? That’s the question Kaiser and Alonso-Monsalve took on with their new study.

“People have studied what the distribution of black hole masses would be during this early-universe production but never tied it to what kinds of stuff would have fallen into those black holes at the time when they were forming,” Kaiser explains.

Super-charged rhinos

The MIT physicists looked first through existing theories for the likely distribution of black hole masses as they were first forming in the early universe.

“Our realization was, there’s a direct correlation between when a primordial black hole forms and what mass it forms with,” Alonso-Monsalve says. “And that window of time is absurdly early.”

She and Kaiser calculated that primordial black holes must have formed within the first quintillionth of a second following the Big Bang. This flash of time would have produced “typical” microscopic black holes that were as massive as an asteroid and as small as an atom. It would have also yielded a small fraction of exponentially smaller black holes, with the mass of a rhinoceros and a size much smaller than a single proton.

What would these primordial black holes have been made from? For that, they looked to studies exploring the composition of the early universe, and specifically, to the theory of quantum chromodynamics (QCD) — the study of how quarks and gluons interact.

Quarks and gluons are the fundamental building blocks of protons and neutrons — elementary particles that combined to forge the basic elements of the periodic table. Immediately following the Big Bang, physicists estimate, based on QCD, that the universe was an immensely hot plasma of quarks and gluons that then quickly cooled and combined to produce protons and neutrons.

The researchers found that, within the first quintillionth of a second, the universe would still have been a soup of free quarks and gluons that had yet to combine. Any black holes that formed in this time would have swallowed up the untethered particles, along with an exotic property known as “color charge” — a state of charge that only uncombined quarks and gluons carry.

“Once we figured out that these black holes form in a quark-gluon plasma, the most important thing we had to figure out was, how much color charge is contained in the blob of matter that will end up in a primordial black hole?” Alonso-Monsalve says.

Using QCD theory, they worked out the distribution of color charge that should have existed throughout the hot, early plasma. Then they compared that to the size of a region that would collapse to form a black hole in the first quintillionth of a second. It turns out there wouldn’t have been much color charge in most typical black holes at the time, as they would have formed by absorbing a huge number of regions that had a mix of charges, which would have ultimately added up to a “neutral” charge.

But the smallest black holes would have been packed with color charge. In fact, they would have contained the maximum amount of any type of charge allowed for a black hole, according to the fundamental laws of physics. Whereas such “extremal” black holes have been hypothesized for decades, until now no one had discovered a realistic process by which such oddities actually could have formed in our universe.

Professor Bernard Carr of Queen Mary University of London, an expert on the topic of primordial black holes who first worked on the topic with Stephen Hawking, describes the new work as “exciting.” Carr, who was not involved in the study, says the work “shows that there are circumstances in which a tiny fraction of the early universe can go into objects with an enormous amount of color charge (at least for a while), exponentially greater than what has been identified in previous studies of QCD.”

The super-charged black holes would have quickly evaporated, but possibly only after the time when the first atomic nuclei began to form. Scientists estimate that this process started around one second after the Big Bang, which would have given extremal black holes plenty of time to disrupt the equilibrium conditions that would have prevailed when the first nuclei began to form. Such disturbances could potentially affect how those earliest nuclei formed, in ways that might someday be observed.

“These objects might have left some exciting observational imprints,” Alonso-Monsalve muses. “They could have changed the balance of this versus that, and that’s the kind of thing that one can begin to wonder about.”

This research was supported, in part, by the U.S. Department of Energy. Alonso-Monsalve is also supported by a fellowship from the MIT Department of Physics. 

Nuh Gedik receives 2024 National Brown Investigator Award

Physics professor will use the award to develop a new kind of microscopy.

Nuh Gedik, MIT’s Donner Professor of Physics, has been named a 2024 Ross Brown Investigator by the Brown Institute for Basic Sciences at Caltech.

One of eight mid-career faculty members recognized for work on fundamental challenges in the physical sciences, Gedik will receive up to $2 million over five years.

Gedik will use the award to develop a new kind of microscopy that images electrons photo-emitted from a surface while also measuring their energy and momentum. This microscope will make femtosecond movies of electrons to study the fascinating properties of two-dimensional quantum materials.  

Another awardee, professor of physics Andrea Young at the University of California Santa Barbara, was a 2011-14 Pappalardo Fellow at MIT in experimental condensed matter physics. 

The Brown Institute for Basic Sciences at Caltech was established in 2023 through a $400-million gift from entrepreneur, philanthropist, and Caltech alumnus Ross M. Brown, to support fundamental research in chemistry and physics. Initially created as the Investigator Awards in 2020, the award supports the belief that "scientific discovery is a driving force in the improvement of the human condition," according to a news release from the Science Philanthropy Alliance.

A total of 13 investigators were recognized in the program's first three years. Now that the Brown Investigator Award has found a long-term home at Caltech, the intent is to recognize a minimum of eight investigators each year. 

Other previous awardees with MIT connections include MIT professor of chemistry Mircea Dincă as well as physics alumni Waseem S. Bakr '05, '06, MNG '06 of Princeton University; David Hsieh of Caltech, who is another former Pappalardo Fellow; Munira Khalil PhD '04 and Mark Rudner PhD '08 of the University of Washington; and Tanya Zelevinsky ’99 of Columbia University.

Reducing carbon emissions from long-haul trucks

MIT researchers show a promising plan for using clean-burning hydrogen in place of the diesel fuel now used in most freight-transport trucks.

People around the world rely on trucks to deliver the goods they need, and so-called long-haul trucks play a critical role in those supply chains. In the United States, long-haul trucks moved 71 percent of all freight in 2022. But those long-haul trucks are heavy polluters, especially of the carbon emissions that threaten the global climate. According to U.S. Environmental Protection Agency estimates, in 2022 more than 3 percent of all carbon dioxide (CO2) emissions came from long-haul trucks.

The problem is that long-haul trucks run almost exclusively on diesel fuel, and burning diesel releases high levels of CO2 and other carbon emissions. Global demand for freight transport is projected to as much as double by 2050, so it’s critical to find another source of energy that will meet the needs of long-haul trucks while also reducing their carbon emissions. And conversion to the new fuel must not be costly. “Trucks are an indispensable part of the modern supply chain, and any increase in the cost of trucking will be felt universally,” notes William H. Green, the Hoyt Hottel Professor in Chemical Engineering and director of the MIT Energy Initiative.

For the past year, Green and his research team have been seeking a low-cost, cleaner alternative to diesel. Finding a replacement is difficult because diesel meets the needs of the trucking industry so well. For one thing, diesel has a high energy density — that is, energy content per pound of fuel. There’s a legal limit on the total weight of a truck and its contents, so using an energy source with a lower weight allows the truck to carry more payload — an important consideration, given the low profit margin of the freight industry. In addition, diesel fuel is readily available at retail refueling stations across the country — a critical resource for drivers, who may travel 600 miles in a day and sleep in their truck rather than returning to their home depot. Finally, diesel fuel is a liquid, so it’s easy to distribute to refueling stations and then pump into trucks.

Past studies have examined numerous alternative technology options for powering long-haul trucks, but no clear winner has emerged. Now, Green and his team have evaluated the available options based on consistent and realistic assumptions about the technologies involved and the typical operation of a long-haul truck, and assuming no subsidies to tip the cost balance. Their in-depth analysis of converting long-haul trucks to battery electric — summarized below — found a high cost and negligible emissions gains in the near term. Studies of methanol and other liquid fuels from biomass are ongoing, but already a major concern is whether the world can plant and harvest enough biomass for biofuels without destroying the ecosystem. An analysis of hydrogen — also summarized below — highlights specific challenges with using that clean-burning fuel, which is a gas at normal temperatures.

Finally, the team identified an approach that could make hydrogen a promising, low-cost option for long-haul trucks. And, says Green, “it’s an option that most people are probably unaware of.” It involves a novel way of using materials that can pick up hydrogen, store it, and then release it when and where it’s needed to serve as a clean-burning fuel.

Defining the challenge: A realistic drive cycle, plus diesel values to beat

The MIT researchers believe that the lack of consensus on the best way to clean up long-haul trucking may have a simple explanation: Different analyses are based on different assumptions about the driving behavior of long-haul trucks. Indeed, some of them don’t accurately represent actual long-haul operations. So the first task for the MIT team was to define a representative — and realistic — “drive cycle” for actual long-haul truck operations in the United States. Then the MIT researchers — and researchers elsewhere — can assess potential replacement fuels and engines based on a consistent set of assumptions in modeling and simulation analyses.

To define the drive cycle for long-haul operations, the MIT team used a systematic approach to analyze many hours of real-world driving data covering 58,000 miles. They examined 10 features and identified three — daily range, vehicle speed, and road grade — that have the greatest impact on energy demand and thus on fuel consumption and carbon emissions. The representative drive cycle that emerged covers a distance of 600 miles, an average vehicle speed of 55 miles per hour, and a road grade ranging from negative 6 percent to positive 6 percent.
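A back-of-the-envelope calculation suggests why a drive cycle like this is so energy-hungry: over a flat 600-mile run at constant speed, the energy delivered at the wheels is set largely by rolling resistance and aerodynamic drag. The vehicle parameters below (mass, rolling-resistance coefficient, drag area) are illustrative assumptions, not figures from the MIT study:

```python
# Rough wheel-energy estimate for a flat, constant-speed version of the
# 600-mile drive cycle. All vehicle parameters are illustrative guesses.
RHO_AIR = 1.2            # air density, kg/m^3
G = 9.81                 # gravitational acceleration, m/s^2

mass_kg = 36_000         # fully loaded Class 8 truck (~80,000 lb), assumed
c_rr = 0.006             # tire rolling-resistance coefficient, assumed
cd_a = 6.0               # drag coefficient x frontal area (m^2), assumed
v = 55 * 0.44704         # 55 mph converted to m/s
dist_m = 600 * 1609.34   # 600 miles converted to meters

f_roll = c_rr * mass_kg * G              # rolling-resistance force, N
f_aero = 0.5 * RHO_AIR * cd_a * v**2     # aerodynamic drag force, N
wheel_energy_kwh = (f_roll + f_aero) * dist_m / 3.6e6
print(f"~{wheel_energy_kwh:,.0f} kWh delivered at the wheels")
```

This yields on the order of 1,100–1,200 kWh at the wheels; grades, stop-and-go segments, accessory loads, and drivetrain losses would push the fuel- or battery-side energy demand substantially higher.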

The next step was to generate key values for the performance of the conventional diesel “powertrain,” that is, all the components involved in creating power in the engine and delivering it to the wheels on the ground. Based on their defined drive cycle, the researchers simulated the performance of a conventional diesel truck, generating “benchmarks” for fuel consumption, CO2 emissions, cost, and other performance parameters.

Now they could perform parallel simulations — based on the same drive-cycle assumptions — of possible replacement fuels and powertrains to see how the cost, carbon emissions, and other performance parameters would compare to the diesel benchmarks.

The battery electric option

When considering how to decarbonize long-haul trucks, a natural first thought is battery power. After all, battery electric cars and pickup trucks are proving highly successful. Why not switch to battery electric long-haul trucks? “Again, the literature is very divided, with some studies saying that this is the best idea ever, and other studies saying that this makes no sense,” says Sayandeep Biswas, a graduate student in chemical engineering.

To assess the battery electric option, the MIT researchers used a physics-based vehicle model plus well-documented estimates for the efficiencies of key components such as the battery pack, generators, motor, and so on. Assuming the previously described drive cycle, they determined operating parameters, including how much power the battery-electric system needs. From there they could calculate the size and weight of the battery required to satisfy the power needs of the battery electric truck.

The outcome was disheartening. Providing enough energy to travel 600 miles without recharging would require a 2 megawatt-hour battery. “That’s a lot,” notes Kariana Moreno Sader, a graduate student in chemical engineering. “It’s the same as what two U.S. households consume per month on average.” And the weight of such a battery would significantly reduce the amount of payload that could be carried. An empty diesel truck typically weighs 20,000 pounds. With a legal limit of 80,000 pounds, there’s room for 60,000 pounds of payload. The 2 MWh battery would weigh roughly 27,000 pounds — significantly reducing the allowable capacity for carrying payload.

Accounting for that “payload penalty,” the researchers calculated that roughly four electric trucks would be required to replace every three of today’s diesel-powered trucks. Furthermore, each added truck would require an additional driver. The impact on operating expenses would be significant.
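The battery-weight arithmetic above can be checked back-of-the-envelope. The 2 MWh requirement and the 80,000/20,000-pound weights come from the article; the pack-level specific energy of 165 Wh/kg is an assumed round number.

```python
# Back-of-the-envelope check of the battery weight and payload penalty.
# The 2 MWh and truck weights are from the article; the pack specific
# energy is an assumed round number.
BATTERY_KWH = 2_000
PACK_WH_PER_KG = 165           # assumed pack-level specific energy
LB_PER_KG = 2.20462

battery_lb = BATTERY_KWH * 1000 / PACK_WH_PER_KG * LB_PER_KG
diesel_payload_lb = 80_000 - 20_000             # legal limit minus empty truck
ev_payload_lb = diesel_payload_lb - battery_lb  # ignoring diesel parts removed

print(f"Battery weight: {battery_lb:,.0f} lb")        # roughly 27,000 lb
print(f"Remaining payload: {ev_payload_lb:,.0f} lb")
```

This simple version ignores the weight of the diesel engine and fuel system that would be removed; the researchers' four-trucks-for-three figure reflects a fuller accounting.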

Analyzing the emissions reductions that might result from shifting to battery electric long-haul trucks also brought disappointing results. One might assume that using electricity would eliminate CO2 emissions. But when the researchers included the emissions associated with generating that electricity, the assumption didn’t hold.

“Battery electric trucks are only as clean as the electricity used to charge them,” notes Moreno Sader. Most of the time, drivers of long-haul trucks will be charging from national grids rather than dedicated renewable energy plants. According to U.S. Energy Information Administration statistics, fossil fuels make up more than 60 percent of the current U.S. power grid, so electric trucks would still be responsible for significant levels of carbon emissions. Manufacturing batteries for the trucks would generate additional CO2 emissions.

Building the charging infrastructure would require massive upfront capital investment, as would upgrading the existing grid to reliably meet additional energy demand from the long-haul sector. Accomplishing those changes would be costly and time-consuming, which raises further concern about electrification as a means of decarbonizing long-haul freight.

In short, switching today’s long-haul diesel trucks to battery electric power would bring major increases in costs for the freight industry and negligible carbon emissions benefits in the near term. Analyses assuming various types of batteries as well as other drive cycles produced comparable results.

However, the researchers are optimistic about where the grid is going in the future. “In the long term, say by around 2050, emissions from the grid are projected to be less than half what they are now,” says Moreno Sader. “When we do our calculations based on that prediction, we find that emissions from battery electric trucks would be around 40 percent lower than our calculated emissions based on today’s grid.”

For Moreno Sader, the goal of the MIT research is to help “guide the sector on what would be the best option.” With that goal in mind, she and her colleagues are now examining the battery electric option under different scenarios — for example, assuming battery swapping (a depleted battery isn’t recharged but replaced by a fully charged one), short-haul trucking, and other applications that might produce a more cost-competitive outcome, even for the near term.

A promising option: hydrogen

As the world looks to end its reliance on fossil fuels across all of their uses, much attention is focusing on hydrogen. Could hydrogen be a good alternative for today’s diesel-burning long-haul trucks?

To find out, the MIT team performed a detailed analysis of the hydrogen option. “We thought that hydrogen would solve a lot of the problems we had with battery electric,” says Biswas. It doesn’t have associated CO2 emissions. Its energy density is far higher, so it doesn’t create the weight problem posed by heavy batteries. In addition, existing compression technology can get enough hydrogen fuel into a regular-sized tank to cover the needed range. “You can actually give drivers the range they want,” he says. “There’s no issue with ‘range anxiety.’”

But while using hydrogen for long-haul trucking would reduce carbon emissions, it would cost far more than diesel. Based on their detailed analysis of hydrogen, the researchers concluded that transporting it is the main source of added cost. Hydrogen can be made in a chemical facility, but then it needs to be distributed to refueling stations across the country. Conventionally, there have been two main ways of transporting hydrogen: as a compressed gas and as a cryogenic liquid. As Biswas notes, the former is “super high pressure,” and the latter is “super cold.” The researchers’ calculations show that as much as 80 percent of the cost of delivered hydrogen is due to transportation and refueling, plus there’s the need to build dedicated refueling stations that can meet new environmental and safety standards for handling hydrogen as a compressed gas or a cryogenic liquid.

Having dismissed the conventional options for shipping hydrogen, they turned to a less-common approach: transporting hydrogen using “liquid organic hydrogen carriers” (LOHCs), special organic (carbon-containing) chemical compounds that can under certain conditions absorb hydrogen atoms and under other conditions release them.

LOHCs are in use today to deliver small amounts of hydrogen for commercial use. Here’s how the process works: In a chemical plant, the carrier compound is brought into contact with hydrogen in the presence of a catalyst under elevated temperature and pressure, and the compound picks up the hydrogen. The “hydrogen-loaded” compound — still a liquid — is then transported under atmospheric conditions. When the hydrogen is needed, the compound is again exposed to a temperature increase and a different catalyst, and the hydrogen is released.

LOHCs thus appear to be ideal hydrogen carriers for long-haul trucking. They’re liquid, so they can easily be delivered to existing refueling stations, where the hydrogen would be released; and they contain at least as much energy per gallon as hydrogen in a cryogenic liquid or compressed gas form. However, a detailed analysis of using hydrogen carriers showed that the approach would decrease emissions but at a considerable cost.

The problem begins with the “dehydrogenation” step at the retail station. Releasing the hydrogen from the chemical carrier requires heat, which is generated by burning some of the hydrogen being carried by the LOHC. The researchers calculate that getting the needed heat takes 36 percent of that hydrogen. (In theory, the process would take only 27 percent — but in reality, that efficiency won’t be achieved.) So out of every 100 units of starting hydrogen, 36 units are now gone.

But that’s not all. The hydrogen that comes out is at near-ambient pressure. So the facility dispensing the hydrogen will need to compress it — a process that the team calculates will use up 20-30 percent of the starting hydrogen.

Because of the needed heat and compression, there’s now less than half of the starting hydrogen left to be delivered to the truck — and as a result, the hydrogen fuel becomes twice as expensive. The bottom line is that the technology works, but “when it comes to really beating diesel, the economics don’t work. It’s quite a bit more expensive,” says Biswas. In addition, the refueling stations would require expensive compressors and auxiliary units such as cooling systems. The capital investment and the operating and maintenance costs together imply that the market penetration of hydrogen refueling stations will be slow.
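The loss chain described in the preceding paragraphs can be written as a simple accounting exercise. The 36 percent heat loss and the 20-30 percent compression loss are the article's figures; the midpoint of the compression range is used here for illustration.

```python
# The station-side losses described above, as a simple chain: start with
# 100 units of hydrogen, burn 36 for dehydrogenation heat, then spend
# 20-30 more on recompression (both figures from the article).
def delivered_fraction(heat_loss=0.36, compression_loss=0.25):
    """Fraction of starting hydrogen that actually reaches the truck."""
    return 1.0 - heat_loss - compression_loss

frac = delivered_fraction()
print(f"Hydrogen actually delivered: {frac:.0%}")   # well under half
print(f"Rough cost scaling vs. lossless delivery: ~{1 / frac:.1f}x")
```

The simple 1/fraction scaling is only a rough bound on fuel-cost inflation; the article's "twice as expensive" conclusion reflects a fuller cost analysis.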

A better strategy: onboard release of hydrogen from LOHCs

Given the potential benefits of using LOHCs, the researchers focused on how to deal with both the heat needed to release the hydrogen and the energy needed to compress it. “That’s when we had the idea,” says Biswas. “Instead of doing the dehydrogenation [hydrogen release] at the refueling station and then loading the truck with hydrogen, why don’t we just take the LOHC and load that onto the truck?” Like diesel, LOHC is a liquid, so it’s easily transported and pumped into trucks at existing refueling stations. “We’ll then make hydrogen as it’s needed based on the power demands of the truck — and we can capture waste heat from the engine exhaust and use it to power the dehydrogenation process,” says Biswas.

In their proposed plan, hydrogen-loaded LOHC is created at a chemical “hydrogenation” plant and then delivered to a retail refueling station, where it’s pumped into a long-haul truck. Onboard the truck, the loaded LOHC pours into the fuel-storage tank. From there it moves to the “dehydrogenation unit” — the reactor where heat and a catalyst together promote chemical reactions that separate the hydrogen from the LOHC. The hydrogen is sent to the powertrain, where it burns, producing energy that propels the truck forward.

Hot exhaust from the powertrain goes to a “heat-integration unit,” where its waste heat energy is captured and returned to the reactor to help encourage the reaction that releases hydrogen from the loaded LOHC. The unloaded LOHC is pumped back into the fuel-storage tank, where it’s kept in a separate compartment to keep it from mixing with the loaded LOHC. From there, it’s pumped back into the retail refueling station and then transported back to the hydrogenation plant to be loaded with more hydrogen.
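A quick energy balance suggests why exhaust waste heat can plausibly cover the dehydrogenation step. The enthalpy and waste-heat numbers below are typical literature values, assumed for illustration; note that the resulting heat demand matches the "27 percent" theoretical figure cited earlier in the article.

```python
# Why waste heat can cover onboard dehydrogenation. The enthalpy and
# exhaust-heat figures are typical assumed values, not the study's.
DEHYDROGENATION_KJ_PER_MOL_H2 = 65   # typical LOHC reaction enthalpy, assumed
H2_LHV_KJ_PER_MOL = 242              # lower heating value of hydrogen
EXHAUST_WASTE_HEAT_FRACTION = 0.30   # share of fuel energy in exhaust, assumed

heat_demand = DEHYDROGENATION_KJ_PER_MOL_H2 / H2_LHV_KJ_PER_MOL
print(f"Heat demand as a fraction of H2 energy: {heat_demand:.0%}")  # ~27%
print("Exhaust heat can cover it:",
      EXHAUST_WASTE_HEAT_FRACTION >= heat_demand)
```

Under these assumptions, the heat carried in the exhaust slightly exceeds what the dehydrogenation reactor needs, which is what makes the heat-integration unit the linchpin of the design.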

Switching to onboard dehydrogenation brings down costs by eliminating the need for extra hydrogen compression and by using waste heat in the engine exhaust to drive the hydrogen-release process. So how does their proposed strategy look compared to diesel? Based on a detailed analysis, the researchers determined that using their strategy would be 18 percent more expensive than using diesel, and emissions would drop by 71 percent.

But those results need some clarification. The 18 percent cost premium of using LOHC with onboard hydrogen release is based on the price of diesel fuel in 2020. In the spring of 2023, the price was about 30 percent higher. At the 2023 diesel price, the LOHC option is actually cheaper than using diesel.
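The price sensitivity is simple arithmetic if one treats fuel cost as the dominant driver of the per-mile difference (a simplification; the study's comparison covers total operating cost):

```python
# Sensitivity of the comparison to diesel prices, relative to a 2020
# diesel baseline of 1.0. Treating fuel as the dominant cost driver is
# a simplification of the study's fuller per-mile analysis.
lohc_cost = 1.18     # LOHC strategy: 18% above the 2020 diesel baseline
diesel_2023 = 1.30   # spring-2023 diesel price: ~30% above 2020

print("LOHC cheaper than diesel at 2023 prices:", lohc_cost < diesel_2023)
```

In other words, any sustained diesel price increase above 18 percent over the 2020 baseline flips the comparison in favor of the LOHC strategy.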

Both the cost and emissions outcomes are affected by another assumption: the use of “blue hydrogen,” which is hydrogen produced from natural gas with carbon capture and storage. Another option is to assume the use of “green hydrogen,” which is hydrogen produced using electricity generated from renewable sources, such as wind and solar. Green hydrogen is much more expensive than blue hydrogen, so using it would increase costs dramatically.

If in the future the price of green hydrogen drops, the researchers’ proposed plan would shift to green hydrogen — and then the decline in emissions would no longer be 71 percent but rather close to 100 percent. There would be almost no emissions associated with the researchers’ proposed plan for using LOHCs with onboard hydrogen release.

Comparing the options on cost and emissions

To compare the options, Moreno Sader prepared bar charts showing the per-mile cost of shipping by truck in the United States and the CO2 emissions that result using each of the fuels and approaches discussed above: diesel fuel, battery electric, hydrogen as a cryogenic liquid or compressed gas, and LOHC with onboard hydrogen release. The LOHC strategy with onboard dehydrogenation looked promising on both the cost and the emissions charts. In addition to such quantitative measures, the researchers believe that their strategy addresses two other, less-obvious challenges in finding a less-polluting fuel for long-haul trucks.

First, the introduction of the new fuel and trucks to use it must not disrupt the current freight-delivery setup. “You have to keep the old trucks running while you’re introducing the new ones,” notes Green. “You cannot have even a day when the trucks aren’t running because it’d be like the end of the economy. Your supermarket shelves would all be empty; your factories wouldn’t be able to run.” The researchers’ plan would be completely compatible with the existing diesel supply infrastructure and would require relatively minor retrofits to today’s long-haul trucks, so the current supply chains would continue to operate while the new fuel and retrofitted trucks are introduced.

Second, the strategy has the potential to be adopted globally. Long-haul trucking is important in other parts of the world, and Moreno Sader thinks that “making this approach a reality is going to have a lot of impact, not only in the United States but also in other countries,” including her own country of origin, Colombia. “This is something I think about all the time.” The approach is compatible with the current diesel infrastructure, so the only requirement for adoption is to build the chemical hydrogenation plant. “And I think the capital expenditure related to that will be less than the cost of building a new fuel-supply infrastructure throughout the country,” says Moreno Sader.

Testing in the lab

“We’ve done a lot of simulations and calculations to show that this is a great idea,” notes Biswas. “But there’s only so far that math can go to convince people.” The next step is to demonstrate their concept in the lab.

To that end, the researchers are now assembling all the core components of the onboard hydrogen-release reactor as well as the heat-integration unit that’s key to transferring heat from the engine exhaust to the hydrogen-release reactor. They estimate that this spring they’ll be ready to demonstrate their ability to release hydrogen and confirm the rate at which it’s formed. And — guided by their modeling work — they’ll be able to fine-tune critical components for maximum efficiency and best performance.

The next step will be to add an appropriate engine, specially equipped with sensors to provide the critical readings they need to optimize the performance of all their core components together. By the end of 2024, the researchers hope to achieve their goal: the first experimental demonstration of a power-dense, robust onboard hydrogen-release system with highly efficient heat integration.

In the meantime, they believe that results from their work to date should help spread the word, bringing their novel approach to the attention of other researchers and experts in the trucking industry who are now searching for ways to decarbonize long-haul trucking.

Financial support for development of the representative drive cycle and the diesel benchmarks as well as the analysis of the battery electric option was provided by the MIT Mobility Systems Center of the MIT Energy Initiative. Analysis of LOHC-powered trucks with onboard dehydrogenation was supported by the MIT Climate and Sustainability Consortium. Sayandeep Biswas is supported by a fellowship from the Martin Family Society of Fellows for Sustainability, and Kariana Moreno Sader received fellowship funding from MathWorks through the MIT School of Science.

Advocating for science funding on Capitol Hill

During the MIT Science Policy Initiative’s Congressional Visit Days, PhD students and postdocs met with legislators to share expertise and advocate for science agency funding.

This spring, 26 MIT students and postdocs traveled to Washington to meet with congressional staffers to advocate for increased science funding for fiscal year 2025. These conversations were especially timely given the recently announced budget cuts to several federal science agencies for FY24.

The participants met with 85 congressional offices representing 30 states over two days, April 8-9. Overall, the group advocated for $89.46 billion in science funding across 11 federal scientific agencies.

Every spring, the MIT Science Policy Initiative (SPI) organizes the Congressional Visit Days (CVD). The trip exposes participants to the process of U.S. federal policymaking and the many avenues researchers can use to advocate for scientific research. The participants also meet with Washington-based alumni and members of the MIT Washington Office and learn about policy careers.

This year, CVD was co-organized by Marie Floryan and Andrew Fishberg, two PhD students in the departments of Mechanical Engineering and Aeronautics and Astronautics, respectively. Before the trip, the participants attended two training sessions organized by SPI, the MIT Washington Office, and the MIT Policy Lab. The participants learned how funding is appropriated at the federal level, the role of elected congressional officials and their staffers in the legislative process, and how academic researchers can get involved in advocating for policies for science.

Julian Ufert, a doctoral student in chemical engineering, says, “CVD was a remarkable opportunity to share insights from my research with policymakers, learn about U.S. politics, and serve the greater scientific community. I thoroughly enjoyed the contacts I made both on Capitol Hill and with MIT students and postdocs who share an interest in science policy.”

In addition to advocating for increased science funding, the participants advocated for topics pertaining to their research projects. A wide variety of topics were discussed, including AI, cybersecurity, energy production and storage, and biotechnology. Naturally, the recent advent of groundbreaking AI technologies like ChatGPT brought AI to the forefront for many interested offices, several of which serve on the newly formed bipartisan AI Task Force.

These discussions were useful for both parties: The participants learned about the methods and challenges associated with enacting legislation, and the staffers directly heard from academic researchers about what is needed to promote scientific progress and innovation.

“It was fascinating to experience the interest and significant involvement of Congressional offices in policy matters related to science and technology. Most staffers were well aware of the general technological advancements and eager to learn more about how our research will impact society,” says Vipindev Vasudevan, a postdoc in electrical and computer engineering.

Dina Sharon, a PhD student in chemistry, adds, “The offices where we met with Congressional staffers were valuable classrooms! Our conversations provided insights into policymakers’ goals, how science can help reach these goals, and how scientists can help cultivate connections between the research and policy spheres.”

Participants also shared how science funding has directly impacted them, discussing how federal grants have supported their graduate education and the need for open-access research.

New technique reveals how gene transcription is coordinated in cells

By capturing short-lived RNA molecules, scientists can map relationships between genes and the regulatory elements that control them.

The human genome contains about 23,000 genes, but only a fraction of those genes are turned on inside a cell at any given time. The complex network of regulatory elements that controls gene expression includes regions of the genome called enhancers, which are often located far from the genes that they regulate.

This distance can make it difficult to map the complex interactions between genes and enhancers. To overcome that, MIT researchers have invented a new technique that allows them to observe the timing of gene and enhancer activation in a cell. When a gene is turned on around the same time as a particular enhancer, it strongly suggests the enhancer is controlling that gene.

Learning more about which enhancers control which genes, in different types of cells, could help researchers identify potential drug targets for genetic disorders. Genomic studies have identified mutations in many non-protein-coding regions that are linked to a variety of diseases. Could these be unknown enhancers?

“When people start using genetic technology to identify regions of chromosomes that have disease information, most of those sites don’t correspond to genes. We suspect they correspond to these enhancers, which can be quite distant from a promoter, so it’s very important to be able to identify these enhancers,” says Phillip Sharp, an MIT Institute Professor Emeritus and member of MIT’s Koch Institute for Integrative Cancer Research.

Sharp is the senior author of the new study, which appears today in Nature. MIT Research Assistant D.B. Jay Mahat is the lead author of the paper.

Hunting for eRNA

Less than 2 percent of the human genome consists of protein-coding genes. The rest of the genome includes many elements that control when and how those genes are expressed. Enhancers, which are thought to turn genes on by transiently forming a complex with gene promoter regions, were discovered about 45 years ago.

More recently, in 2010, researchers discovered that these enhancers are transcribed into RNA molecules, known as enhancer RNA or eRNA. Scientists suspect that this transcription occurs when the enhancers are actively interacting with their target genes. This raised the possibility that measuring eRNA transcription levels could help researchers determine when an enhancer is active, as well as which genes it’s targeting.

“That information is extraordinarily important in understanding how development occurs, and in understanding how cancers change their regulatory programs and activate processes that lead to de-differentiation and metastatic growth,” Mahat says.

However, this kind of mapping has proven difficult to perform because eRNA is produced in very small quantities and does not last long in the cell. Additionally, eRNA lacks a modification known as a poly-A tail, which is the “hook” that most techniques use to pull RNA out of a cell.

One way to capture eRNA is to add a nucleotide to cells that halts transcription when incorporated into RNA. These nucleotides also contain a tag called biotin that can be used to fish the RNA out of a cell. However, this current technique only works on large pools of cells and doesn’t give information about individual cells.

While brainstorming ideas for new ways to capture eRNA, Mahat and Sharp considered using click chemistry, a technique that can be used to join two molecules together if they are each tagged with “click handles” that can react together.

The researchers designed nucleotides labeled with one click handle, and once these nucleotides are incorporated into growing eRNA strands, the strands can be fished out with a tag containing the complementary handle. This allowed the researchers to capture eRNA and then purify, amplify, and sequence it. Some RNA is lost at each step, but Mahat estimates that they can successfully pull out about 10 percent of the eRNA from a given cell.
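The roughly 10 percent overall recovery is consistent with modest losses compounding across the workflow. The per-step yields below are hypothetical illustrative values, not measurements from the paper; they merely show how a few reasonable-looking step efficiencies multiply down to the reported figure.

```python
# Compounding losses across the capture workflow. The per-step yields
# are hypothetical, chosen only to illustrate how individual losses
# multiply down to the ~10 percent overall recovery Mahat estimates.
from math import prod

step_yields = {                       # hypothetical values
    "click capture": 0.50,
    "purification": 0.55,
    "amplification/sequencing": 0.40,
}
overall = prod(step_yields.values())
print(f"Overall eRNA recovery: {overall:.0%}")
```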

Using this technique, the researchers obtained a snapshot of the enhancers and genes that are being actively transcribed at a given time in a cell.

“You want to be able to determine, in every cell, the activation of transcription from regulatory elements and from their corresponding gene. And this has to be done in a single cell because that’s where you can detect synchrony or asynchrony between regulatory elements and genes,” Mahat says.

Timing of gene expression

Demonstrating their technique in mouse embryonic stem cells, the researchers found that they could calculate approximately when a particular region starts to be transcribed, based on the length of the RNA strand and the speed of the polymerase (the enzyme responsible for transcription) — that is, how far the polymerase transcribes per second. This allowed them to determine which genes and enhancers were being transcribed around the same time.
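The timing inference reduces to a distance-over-rate calculation: if a nascent RNA is L nucleotides long and the polymerase elongates at r nucleotides per second, transcription began roughly L/r seconds ago. The elongation rate below is an assumed typical value for RNA polymerase II, not a number from the paper.

```python
# The timing inference described above: length of the nascent RNA
# divided by the polymerase's elongation speed gives the time since
# transcription began. The rate is an assumed typical value.
POL_II_NT_PER_SEC = 40   # ~2.4 kb/min, assumed typical elongation rate

def seconds_since_initiation(rna_length_nt, rate=POL_II_NT_PER_SEC):
    return rna_length_nt / rate

# Two loci whose nascent RNA lengths imply near-simultaneous activation:
t_gene = seconds_since_initiation(12_000)      # hypothetical gene transcript
t_enhancer = seconds_since_initiation(11_600)  # hypothetical eRNA
print(f"Gene began ~{t_gene:.0f} s ago; enhancer ~{t_enhancer:.0f} s ago")
```

Near-matching start times like these are what flag an enhancer-gene pair as potentially linked.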

The researchers used this approach to determine the timing of the expression of cell cycle genes in more detail than has previously been possible. They were also able to confirm several sets of known gene-enhancer pairs and to generate a list of about 50,000 possible enhancer-gene pairs that they can now try to verify.

Learning which enhancers control which genes would prove valuable in developing new treatments for diseases with a genetic basis. Last year, the U.S. Food and Drug Administration approved the first gene therapy treatment for sickle cell anemia, which works by interfering with an enhancer that results in activation of a fetal globin gene, reducing the production of sickled blood cells.

The MIT team is now applying this approach to other types of cells, with a focus on autoimmune diseases. Working with researchers at Boston Children’s Hospital, they are exploring immune cell mutations that have been linked to lupus, many of which are found in non-coding regions of the genome.

“It’s not clear which genes are affected by these mutations, so we are beginning to tease apart the genes these putative enhancers might be regulating, and in what cell types these enhancers are active,” Mahat says. “This is a tool for creating gene-to-enhancer maps, which are fundamental in understanding the biology, and also a foundation for understanding disease.”

The findings of this study also offer evidence for a theory that Sharp has recently developed, along with MIT professors Richard Young and Arup Chakraborty, that gene transcription is controlled by membraneless droplets known as condensates. These condensates are made of large clusters of enzymes and RNA, which Sharp suggests may include eRNA produced at enhancer sites.

“We picture that the communication between an enhancer and a promoter is a condensate-type, transient structure, and RNA is part of that. This is an important piece of work in building the understanding of how RNAs from enhancers could be active,” he says.

The research was funded by the National Cancer Institute, the National Institutes of Health, and the Emerald Foundation Postdoctoral Transition Award. 

QS ranks MIT the world’s No. 1 university for 2024-25

Ranking at the top for the 13th year in a row, the Institute also places first in 11 subject areas.

MIT has again been named the world’s top university by the QS World University Rankings, which were announced today. This is the 13th year in a row MIT has received this distinction.

The full 2025 edition of the rankings — published by Quacquarelli Symonds, an organization specializing in education and study abroad — can be found on the QS website. The rankings are based on factors including academic reputation, employer reputation, citations per faculty, student-to-faculty ratio, proportion of international faculty, and proportion of international students.

MIT was also ranked the world’s top university in 11 of the subject areas ranked by QS, as announced in April of this year.

The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; Physics and Astronomy; and Statistics and Operational Research.

MIT also placed second in five subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Chemistry; and Economics and Econometrics.

QS has also released a ranking of specialized master’s programs in business. MIT ranked first for its program in supply chain management and second for its program in business analytics.

Study models how ketamine’s molecular action leads to its effects on the brain

New research addresses a gap in understanding how ketamine’s impact on individual neurons leads to pervasive and profound changes in brain network function.

Ketamine, a World Health Organization Essential Medicine, is widely used at varying doses for sedation, pain control, general anesthesia, and as a therapy for treatment-resistant depression. While scientists know its target in brain cells and have observed how it affects brain-wide activity, they haven’t known entirely how the two are connected. A new study by a research team spanning four Boston-area institutions uses computational modeling of previously unappreciated physiological details to fill that gap and offer new insights into how ketamine works.

“This modeling work has helped decipher likely mechanisms through which ketamine produces altered arousal states as well as its therapeutic benefits for treating depression,” says co-senior author Emery N. Brown, the Edward Hood Taplin Professor of Computational Neuroscience and Medical Engineering at The Picower Institute for Learning and Memory at MIT, as well as an anesthesiologist at Massachusetts General Hospital and a professor at Harvard Medical School.

The researchers from MIT, Boston University (BU), MGH, and Harvard University say the predictions of their model, published May 20 in Proceedings of the National Academy of Sciences, could help physicians make better use of the drug.

“When physicians understand what's mechanistically happening when they administer a drug, they can possibly leverage that mechanism and manipulate it,” says study lead author Elie Adam, a research scientist at MIT who will soon join the Harvard Medical School faculty and launch a lab at MGH. “They gain a sense of how to enhance the good effects of the drug and how to mitigate the bad ones.”

Blocking the door

The core advance of the study involved biophysically modeling what happens when ketamine blocks the “NMDA” receptors in the brain’s cortex — the outer layer where key functions such as sensory processing and cognition take place. Blocking the NMDA receptors modulates the release of the excitatory neurotransmitter glutamate.

When the neuronal channels (or doorways) regulated by the NMDA receptors open, they typically close slowly (like a doorway with a hydraulic closer that keeps it from slamming), allowing ions to go in and out of neurons, thereby regulating their electrical properties, Adam says. But the receptor’s channels can be blocked by a molecule. Blocking by magnesium helps to naturally regulate ion flow. Ketamine, however, is an especially effective blocker.

Blocking slows the voltage build-up across the neuron’s membrane that eventually leads a neuron to “spike,” or send an electrochemical message to other neurons. The NMDA doorway becomes unblocked when the voltage gets high. This interdependence between voltage, spiking, and blocking can equip NMDA receptors with faster activity than their slow closing speed might suggest. The team’s model goes further than earlier ones by representing how ketamine’s blocking and unblocking affect neural activity.
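The voltage dependence of channel block can be illustrated with the classic Jahr-Stevens expression for magnesium block; this is a standard textbook form, not the study's model, which additionally represents ketamine's much slower blocking and unblocking kinetics.

```python
# A minimal sketch of voltage-dependent NMDA-channel block, using the
# classic Jahr-Stevens magnesium-block expression. This is a textbook
# form, not the study's model, which layers ketamine's slower
# blocking/unblocking kinetics on top.
import math

def unblocked_fraction(v_mv, mg_mm=1.0):
    """Fraction of NMDA channels not blocked by Mg2+ at membrane voltage v_mv."""
    return 1.0 / (1.0 + (mg_mm / 3.57) * math.exp(-0.062 * v_mv))

print(f"At rest (-70 mV): {unblocked_fraction(-70):.2f}")   # mostly blocked
print(f"Depolarized (0 mV): {unblocked_fraction(0):.2f}")   # mostly open
```

The steep relief of block with depolarization is what couples voltage, spiking, and blocking in the way the paragraph above describes.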

“Physiological details that are usually ignored can sometimes be central to understanding cognitive phenomena,” says co-corresponding author Nancy Kopell, a professor of mathematics at BU. “The dynamics of NMDA receptors have more impact on network dynamics than has previously been appreciated.”

With their model, the scientists simulated how different doses of ketamine affecting NMDA receptors would alter the activity of a model brain network. The simulated network included key neuron types found in the cortex: one excitatory type and two inhibitory types. It distinguishes between “tonic” interneurons that tamp down network activity and “phasic” interneurons that react more to excitatory neurons.

The team’s simulations successfully recapitulated the real brain waves that have been measured via EEG electrodes on the scalp of a human volunteer who received various ketamine doses and the neural spiking that has been measured in similarly treated animals that had implanted electrode arrays. At low doses, ketamine increased brain wave power in the fast gamma frequency range (30-40 Hz). At the higher doses that cause unconsciousness, those gamma waves became periodically interrupted by “down” states where only very slow frequency delta waves occur. This repeated disruption of the higher frequency waves is what can disrupt communication across the cortex enough to disrupt consciousness.
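The "gamma interrupted by down states" signature can be illustrated with a toy signal (a schematic, not the study's biophysical network model): a gamma-band oscillation gated on and off by a slow alternation still shows a dominant fast peak in its spectrum.

```python
# Schematic illustration only (not the study's network model): a 35 Hz
# gamma oscillation periodically silenced by slow "down" states, as
# described for anesthetic ketamine doses.
import numpy as np

fs = 250                                     # sampling rate, Hz
t = np.arange(0, 8, 1 / fs)
gamma = np.sin(2 * np.pi * 35 * t)           # 35 Hz, in the 30-40 Hz band
up_state = np.sin(2 * np.pi * 1 * t) > 0     # 1 Hz up/down alternation
eeg_like = gamma * up_state                  # gamma interrupted by down states

# The spectrum retains a clear gamma peak alongside slow-wave structure
spectrum = np.abs(np.fft.rfft(eeg_like))
freqs = np.fft.rfftfreq(len(eeg_like), 1 / fs)
fast = freqs > 10
peak = freqs[np.argmax(spectrum[fast]) + np.sum(~fast)]
print(f"Dominant fast component: {peak:.1f} Hz")
```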

But how? Key findings

Importantly, through simulations, they explained several key mechanisms in the network that would produce exactly these dynamics.

The first prediction is that ketamine can disinhibit network activity by shutting down certain inhibitory interneurons. The modeling shows that the natural blocking and unblocking kinetics of NMDA receptors can let in a small current when neurons are not spiking. Many neurons in the network that sit at the right level of excitation rely on this current to spontaneously spike. But when ketamine impairs the kinetics of the NMDA receptors, it quenches that current, leaving these neurons suppressed. In the model, while ketamine impairs all neurons equally, it is the tonic inhibitory neurons that get shut down, because they happen to sit at that level of excitation. This releases other neurons, excitatory or inhibitory, from their inhibition, allowing them to spike vigorously and producing ketamine’s excited brain state. The network’s increased excitation can then enable quick unblocking (and reblocking) of the neurons’ NMDA receptors, causing bursts of spiking.

Another prediction is that these bursts become synchronized into the gamma frequency waves seen with ketamine. How? The team found that the phasic inhibitory interneurons become stimulated by lots of glutamate input from the excitatory neurons and vigorously spike, or fire. When they do, they send an inhibitory signal of the neurotransmitter GABA to the excitatory neurons that squelches the excitatory firing, almost like a kindergarten teacher calming down a whole classroom of excited children. That stop signal, which reaches all the excitatory neurons simultaneously and lasts only so long, ends up synchronizing their activity, producing a coordinated gamma brain wave.

“The finding that an individual synaptic receptor (NMDA) can produce gamma oscillations and that these gamma oscillations can influence network-level gamma was unexpected,” says co-corresponding author Michelle McCarthy, a research assistant professor of math at BU. “This was found only by using a detailed physiological model of the NMDA receptor. This level of physiological detail revealed a gamma time scale not usually associated with an NMDA receptor.”

So what about the periodic down states that emerge at higher, unconsciousness-inducing ketamine doses? In the simulation, the gamma-frequency activity of the excitatory neurons can’t be sustained for too long by the impaired NMDA-receptor kinetics. The excitatory neurons essentially become exhausted under GABA inhibition from the phasic interneurons. That produces the down state. But then, after they have stopped sending glutamate to the phasic interneurons, those cells stop producing their inhibitory GABA signals. That enables the excitatory neurons to recover, starting a cycle anew.

Antidepressant connection?

The model makes another prediction that might help explain how ketamine exerts its antidepressant effects. It suggests that the increased gamma activity of ketamine could entrain gamma activity among neurons expressing a peptide called VIP. This peptide has been found to have health-promoting effects, such as reducing inflammation, that last much longer than ketamine’s effects on NMDA receptors. The research team proposes that the entrainment of these neurons under ketamine could increase the release of the beneficial peptide, as observed when these cells are stimulated in experiments. This also hints at therapeutic features of ketamine that may go beyond antidepressant effects. The research team acknowledges, however, that this connection is speculative and awaits specific experimental validation.

“The understanding that the subcellular details of the NMDA receptor can lead to increased gamma oscillations was the basis for a new theory about how ketamine may work for treating depression,” Kopell says.

Additional co-authors of the study are Marek Kowalski, Oluwaseun Akeju, and Earl K. Miller.

The work was supported by the JPB Foundation; The Picower Institute for Learning and Memory; The Simons Center for The Social Brain; the National Institutes of Health; George J. Elbaum ’59, SM ’63, PhD ’67; Mimi Jensen; Diane B. Greene SM ’78; Mendel Rosenblum; Bill Swanson; and annual donors to the Anesthesia Initiative Fund.

Ten with MIT connections win 2024 Hertz Foundation Fellowships

The fellowships provide five years of funding to doctoral students in applied science, engineering, and mathematics who have “the extraordinary creativity and principled leadership necessary to tackle problems others can’t solve.”

The Fannie and John Hertz Foundation announced that it has awarded fellowships to 10 PhD students with ties to MIT. The prestigious award provides each recipient with five years of doctoral-level research funding (up to a total of $250,000), which allows them the flexibility and autonomy to pursue their own innovative ideas.

Fellows also receive lifelong access to Hertz Foundation programs, such as events, mentoring, and networking. They join the ranks of over 1,300 former Hertz Fellows who are leaders and scholars in a range of fields in science, engineering, and technology. Connections among fellows over the years have sparked collaborations in startups, research, and technology commercialization.

The 10 MIT recipients are among a total of 18 Hertz Foundation Fellows selected this year from across the country. Five of them received their undergraduate degrees at the Institute and will pursue their PhDs at other schools. Two are current MIT graduate students, and four will begin their studies here in the fall.

“For more than 60 years, Hertz Fellows have led scientific and technical innovation in national security, applied biological sciences, materials research, artificial intelligence, space exploration, and more. Their contributions have been essential in advancing U.S. competitiveness,” says Stephen Fantone, chair of the Hertz Foundation board of directors and founder and president of Optikos Corp. “I’m excited to watch our newest Hertz Fellows as they pursue challenging research and continue the strong tradition of applying their work for the greater good.”

This year’s MIT-affiliated awardees are:

Owen Dugan ’24 graduated from MIT in just two-and-a-half years with a degree in physics, and he plans to pursue a PhD in computer science at Stanford University. His research interests lie at the intersection of AI and physics. As an undergraduate, he conducted research in a broad range of areas, including using physics concepts to enhance the speed of large language models and developing machine learning algorithms that automatically discover scientific theories. He was recognized with MIT’s Outstanding Undergraduate Research Award and is a U.S. Presidential Scholar, a Neo Scholar, and a Knight-Hennessy Scholar. Dugan holds multiple patents, co-developed an app to reduce food waste, and co-founded a startup that builds tools to verify the authenticity of digital images.

Kaylie Hausknecht will begin her physics doctorate at MIT in the fall, having completed her undergraduate degree in physics and astrophysics at Harvard University. While there, her undergraduate research focused on developing new machine learning techniques to solve problems in a range of fields, such as fluid dynamics, astrophysics, and condensed matter physics. She received the Hoopes Prize for her senior thesis, was inducted into Phi Beta Kappa as a junior, and won two major writing awards. In addition, she completed five NASA internships. As an intern, she helped identify 301 new exoplanets using archival data from the Kepler Space Telescope. Hausknecht served as the co-president of Harvard’s chapter of Science Club for Girls, which works to encourage girls from underrepresented backgrounds to pursue STEM.

Elijah Lew-Smith majored in physics at Brown University and plans to pursue a doctoral degree in physics at MIT. He is a theoretical physicist with broad intellectual interests in effective field theory (EFT), which is the study of systems with many interacting degrees of freedom. EFT reveals how to extract the relevant, long-distance behavior from complicated microscopic rules. In 2023, he received a national award to work on applying EFT systematically to non-equilibrium and active systems such as fluctuating hydrodynamics or flocking birds. In addition, Lew-Smith received a scholarship from the U.S. State Department to live for a year in Dakar, Senegal, and later studied at École Polytechnique in Paris, France.

Rupert Li ’24 earned his bachelor’s and master’s degrees at MIT in mathematics as well as computer science, data science, and economics, with a minor in business analytics. He was named a 2024 Marshall Scholar and will study abroad for a year at Cambridge University before matriculating at Stanford University for a mathematics doctorate. As an undergraduate, Li authored 12 math research articles, primarily in combinatorics, but also including discrete geometry, probability, and harmonic analysis. He was recognized for his work with a Barry Goldwater Scholarship and an honorable mention for the Morgan Prize, one of the highest undergraduate honors in mathematics.

Amani Maina-Kilaas is a first-year doctoral student at MIT in the Department of Brain and Cognitive Sciences, where he studies computational psycholinguistics. In particular, he is interested in using artificial intelligence as a scientific tool to study how the mind works, and using what we know about the mind to develop more cognitively realistic models. Maina-Kilaas earned his bachelor’s degree in computer science and mathematics from Harvey Mudd College. There, he conducted research regarding intention perception and theoretical machine learning, earning the Astronaut Scholarship and Computing Research Association’s Outstanding Undergraduate Researcher Award.

Zoë Marschner ’23 is a doctoral student at Carnegie Mellon University working on geometry processing, a subfield of computer graphics focused on how to represent and work with geometric data digitally; in her research, she aims to make these representations capable of enabling fundamentally better algorithms for solving geometric problems across science and engineering. As an undergraduate at MIT, she earned a bachelor’s degree in computer science and math and pursued research in geometry processing, including repairing hexahedral meshes and detecting intersections between high-order surfaces. She also interned at Walt Disney Animation Studios, where she worked on collision detection algorithms for simulation. Marschner is a recipient of the National Science Foundation’s Graduate Research Fellowship and the Goldwater Scholarship.

Zijian (William) Niu will start a doctoral program in computational and systems biology at MIT in the fall. He has a particular interest in developing new methods for imaging proteins and other biomolecules in their native cellular environments and using those data to build computational models for predicting their dynamics and molecular interactions. Niu received his bachelor’s degree in biochemistry, biophysics, and physics from the University of Pennsylvania. His undergraduate research involved developing novel computational methods for biological image analysis. He was awarded the Barry M. Goldwater Scholarship for creating a deep-learning algorithm for accurately detecting tiny diffraction-limited spots in fluorescence microscopy images that outperformed existing methods in quantifying spatial transcriptomics data.

James Roney received his bachelor’s and master’s degrees from Harvard University in computer science and statistics, respectively. He is currently working as a machine learning research engineer at D.E. Shaw Research. His past research has focused on interpreting the internal workings of AlphaFold and modeling cancer evolution. Roney plans to pursue a PhD in computational biology at MIT, with a specific interest in developing computational models of protein structure, function, and evolution and using those models to engineer novel proteins for applications in biotechnology.

Anna Sappington ’19 is a student in the Harvard University-MIT MD-PhD Program, currently in the first year of her doctoral program at MIT in electrical engineering and computer science. She is interested in building methods to predict evolutionary events, drawing on connections among machine learning, biology, and chemistry to develop reinforcement learning models inspired by evolutionary biology. Sappington graduated from MIT with a bachelor’s degree in computer science and molecular biology. As an undergraduate, she was awarded a 2018 Barry M. Goldwater Scholarship and selected as a Burchard Scholar and an Amgen Scholar. After graduating, she earned a master’s degree in genomic medicine from the University of Cambridge, where she studied as a Marshall Scholar, as well as a master’s degree in machine learning from University College London.

Jason Yang ’22 received his bachelor’s degree in biology with a minor in computer science from MIT and is currently a doctoral student in genetics at Stanford University. He is interested in understanding the biological processes that underlie human health and disease. At MIT, and subsequently at Massachusetts General Hospital, Yang worked on the mechanisms involved in neurodegeneration in repeat expansion diseases, uncovering a novel molecular consequence of repeat protein aggregation.

Microscopic defects in ice influence how massive glaciers flow, study shows

The findings should help scientists refine predictions of future sea-level rise.

As they seep and calve into the sea, melting glaciers and ice sheets are raising global water levels at unprecedented rates. To predict and prepare for future sea-level rise, scientists need a better understanding of how fast glaciers melt and what influences their flow.

Now, a study by MIT scientists offers a new picture of glacier flow, based on microscopic deformation in the ice. The results show that a glacier’s flow depends strongly on how microscopic defects move through the ice.

The researchers found they could estimate a glacier’s flow based on whether the ice is prone to microscopic defects of one kind versus another. They used this relationship between micro- and macro-scale deformation to develop a new model for how glaciers flow. With the new model, they mapped the flow of ice in locations across the Antarctic Ice Sheet.

Contrary to conventional wisdom, they found, the ice sheet is not a monolith but instead is more varied in where and how it flows in response to warming-driven stresses. The study “dramatically alters the climate conditions under which marine ice sheets may become unstable and drive rapid rates of sea-level rise,” the researchers write in their paper.

“This study really shows the effect of microscale processes on macroscale behavior,” says Meghana Ranganathan PhD ’22, who led the study as a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) and is now a postdoc at Georgia Tech. “These mechanisms happen at the scale of water molecules and ultimately can affect the stability of the West Antarctic Ice Sheet.”

“Broadly speaking, glaciers are accelerating, and there are a lot of variants around that,” adds co-author and EAPS Associate Professor Brent Minchew. “This is the first study that takes a step from the laboratory to the ice sheets and starts evaluating what the stability of ice is in the natural environment. That will ultimately feed into our understanding of the probability of catastrophic sea-level rise.”

Ranganathan and Minchew’s study appears this week in the Proceedings of the National Academy of Sciences.

Micro flow

Glacier flow describes the movement of ice from the peak of a glacier, or the center of an ice sheet, down to the edges, where the ice then breaks off and melts into the ocean — a normally slow process that contributes over time to raising the world’s average sea level.

In recent years, the oceans have risen at unprecedented rates, driven by global warming and the accelerated melting of glaciers and ice sheets. While the loss of polar ice is known to be a major contributor to sea-level rise, it is also the biggest uncertainty when it comes to making predictions.

“Part of it’s a scaling problem,” Ranganathan explains. “A lot of the fundamental mechanisms that cause ice to flow happen at a really small scale that we can’t see. We wanted to pin down exactly what these microphysical processes are that govern ice flow, which hasn’t been represented in models of sea-level change.”

The team’s new study builds on previous experiments from the early 2000s by geologists at the University of Minnesota, who studied how small chips of ice deform when physically stressed and compressed. Their work revealed two microscopic mechanisms by which ice can flow: “dislocation creep,” where molecule-sized cracks migrate through the ice, and “grain boundary sliding,” where individual ice crystals slide against each other, causing the boundary between them to move through the ice.

The geologists found that ice’s sensitivity to stress, or how likely it is to flow, depends on which of the two mechanisms is dominant. Specifically, ice is more sensitive to stress when microscopic defects occur via dislocation creep rather than grain boundary sliding.

Ranganathan and Minchew realized that those findings at the microscopic level could redefine how ice flows at much larger, glacial scales.

“Current models for sea-level rise assume a single value for the sensitivity of ice to stress and hold this value constant across an entire ice sheet,” Ranganathan explains. “What these experiments showed was that actually, there’s quite a bit of variability in ice sensitivity, due to which of these mechanisms is at play.”

A mapping match

For their new study, the MIT team took insights from the previous experiments and developed a model to estimate an icy region’s sensitivity to stress, which directly relates to how likely that ice is to flow. The model takes in information such as the ambient temperature, the average size of ice crystals, and the estimated mass of ice in the region, and calculates how much the ice is deforming by dislocation creep versus grain boundary sliding. Depending on which of the two mechanisms is dominant, the model then estimates the region’s sensitivity to stress.
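The competition between the two mechanisms can be illustrated with a toy composite flow law (the rate prefactors and Arrhenius factor below are made-up placeholders, not the study’s calibrated values; the stress exponents of roughly 4 for dislocation creep and about 1.8 for grain boundary sliding follow published laboratory flow laws for ice):

```python
import math

def strain_rates(stress_mpa, grain_size_mm, temp_c):
    """Toy strain rates (arbitrary scale) for the two deformation
    mechanisms. Prefactors and the Arrhenius factor are illustrative
    placeholders, not calibrated glaciological constants."""
    arrhenius = math.exp(-6.0e3 / (temp_c + 273.15))  # warmer ice deforms faster
    dislocation = 1.0e8 * arrhenius * stress_mpa ** 4.0
    # Grain boundary sliding weakens as the ice crystals get larger.
    sliding = 5.0e7 * arrhenius * stress_mpa ** 1.8 / grain_size_mm ** 1.4
    return dislocation, sliding

def stress_sensitivity(stress_mpa, grain_size_mm, temp_c):
    """Effective stress exponent: flow responds sharply to stress (n = 4)
    when dislocation creep dominates, weakly (n = 1.8) when sliding does."""
    dislocation, sliding = strain_rates(stress_mpa, grain_size_mm, temp_c)
    return 4.0 if dislocation > sliding else 1.8
```

In this toy version, fine-grained ice at low stress deforms mainly by sliding and so has low stress sensitivity, while coarse-grained or highly stressed ice shifts toward dislocation creep and high sensitivity. That mechanism-dependent variability is the kind of spatial pattern the study maps across Antarctica.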

The scientists fed into the model actual observations from various locations across the Antarctic Ice Sheet, where others had previously recorded data such as the local height of ice, the size of ice crystals, and the ambient temperature. Based on the model’s estimates, the team generated a map of ice sensitivity to stress across the Antarctic Ice Sheet. When they compared this map to satellite and field measurements taken of the ice sheet over time, they observed a close match, suggesting that the model could be used to accurately predict how glaciers and ice sheets will flow in the future.

“As climate change starts to thin glaciers, that could affect the sensitivity of ice to stress,” Ranganathan says. “The instabilities that we expect in Antarctica could be very different, and we can now capture those differences, using this model.” 

Using art and science to depict the MIT family from 1861 to the present

MIT.nano inscribes 340,000 names on a single silicon wafer in latest version of One.MIT.

In MIT.nano’s laboratories, researchers use silicon wafers as the platform to shape transformative technologies such as quantum circuitry, microfluidic devices, or energy-harvesting structures. But these substrates can also serve as a canvas for an artist, as MIT Professor W. Craig Carter demonstrates in the latest One.MIT mosaic.

The One.MIT project celebrates the people of MIT by using the tools of MIT.nano to etch their collective names, arranged as a mosaic by Carter, into a silicon wafer just 8 inches in diameter. The latest edition of One.MIT — including 339,537 names of students, faculty, staff, and alumni associated with MIT from 1861 to September 2023 — is now on display in the ground-floor galleries at MIT.nano in the Lisa T. Su Building (Building 12).

“A spirit of innovation and a relentless drive to solve big problems have permeated the campus in every decade of our history. This passion for discovery, learning, and invention is the thread connecting MIT’s 21st-century family to our 19th-century beginnings and all the years in between,” says Vladimir Bulović, director of MIT.nano and the Fariborz Maseeh Chair in Emerging Technology. “One.MIT celebrates the MIT ethos and reminds us that no matter when we came to MIT, whatever our roles, we all leave a mark on this remarkable community.”

A team of students, faculty, staff, and alumni inscribed the design on the wafer inside the MIT.nano cleanrooms. Because the names are too small to be seen with the naked eye — they measure only microns high on the wafer — the One.MIT website allows anyone to look up a name and find its location in the mosaic.

Finding inspiration in the archives

The first two One.MIT art pieces, created in 2018 and 2020, were inscribed in silicon wafers 6 inches in diameter, slightly smaller than the latest art piece, which benefited from the newest MIT.nano tools that can fabricate 8-inch wafers. The first designs form well-known, historic MIT images: the Great Dome (2018) and the MIT seal (2020).

Carter, who is the Toyota Professor of Materials Processing and professor of materials science and engineering, created the designs and algorithms for each version of One.MIT. He started a search last summer for inspiration for the 2024 design. “The image needed to be iconic of MIT,” says Carter, “and also work within the constraints of a large-scale mosaic.”

Carter ultimately found the solution within the Institute Archives, in the form of a lithograph used on the cover of a program for the 1916 MIT rededication ceremony that celebrated the Institute’s move from Boston to Cambridge on its 50th anniversary.

Incorporating MIT nerdiness

Carter began by creating a black-and-white image, redrawing the lithograph’s architectural features and character elements. He recreated the kerns (spaces) and the fonts of the letters as algorithmic geometric objects.

The color gradient of the sky behind the dome presented a challenge because only two shades were available. To tackle this issue and impart texture, Carter created a Hilbert curve — a hierarchical, continuous curve made by replacing an element with a combination of four elements. Each of these four elements is replaced by another four, and so on. The resulting object is like a fractal — the curve changes shape as it goes from top to bottom, with 90-degree turns throughout.
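The replace-each-element-with-four recursion can be sketched directly. This is one standard construction of an order-n Hilbert curve on a 2^n by 2^n grid (a generic illustration, not necessarily the orientation Carter used):

```python
def hilbert_points(order):
    """Vertices of a Hilbert curve of the given order, visiting every
    cell of a 2**order x 2**order grid exactly once, built by the
    classic replace-each-cell-with-four-cells recursion."""
    if order == 0:
        return [(0, 0)]
    prev = hilbert_points(order - 1)
    half = 2 ** (order - 1)
    pts = []
    pts += [(y, x) for x, y in prev]                        # lower-left: transposed copy
    pts += [(x, y + half) for x, y in prev]                 # upper-left: shifted copy
    pts += [(x + half, y + half) for x, y in prev]          # upper-right: shifted copy
    pts += [(2 * half - 1 - y, half - 1 - x) for x, y in prev]  # lower-right: anti-transposed
    return pts
```

Each recursion level quadruples the number of vertices while keeping every step a single 90-degree-cornered unit move, which is what lets the curve fill space while varying in local density.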

“This was an opportunity to add a fun and ‘nerdy’ element — fitting for MIT,” says Carter.

To achieve both the gradient and the round wafer shape, Carter morphed the square Hilbert curve (consisting of 90-degree angles) into a disk shape using Schwarz-Christoffel mapping, a type of conformal mapping that can be used to solve problems in many different domains.

“Conformal maps are lovely convergences of physics and engineering with mathematics and geometry,” says Carter.

Because the conformal mapping is smooth and also preserves the angles, the square’s corners produce four singular points on the circle where the Hilbert curve’s line segments shrink to a point. The location of the four points in the upper part of the circle “squeezes” the curve and creates the gradient (and the texture of the illustration) — dense-to-sparse from top-to-bottom.

The final mosaic is made up of 6,476,403 characters, and Carter needed to use font and kern types that would fill as much of the wafer’s surface as possible without having names break up and wrap around to the next line. Carter’s algorithm alleviated this problem, at least somewhat, by searching for names that slotted into remaining spaces at the end of each row. The algorithm also performed an optimization over many different choices for the random order of the names. 
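A minimal version of that end-of-row search might look like the following greedy sketch. It illustrates the idea only: when the next name in line will not fit, scan the remaining pool for one that does rather than wrapping a name across rows (Carter’s actual algorithm additionally optimizes over many random orderings of the names):

```python
def pack_rows(names, row_width):
    """Greedy row packing: fill each row left to right, and when the
    next name is too long for the leftover space, take the first name
    in the pool that fits instead of breaking a name across rows."""
    pool = list(names)
    rows, row = [], ""
    while pool:
        space = row_width - len(row)
        # A name needs one extra character for the separating space,
        # unless it starts the row.
        idx = next((i for i, n in enumerate(pool)
                    if len(n) + (1 if row else 0) <= space), None)
        if idx is None:
            if not row:
                raise ValueError("a name is longer than the row width")
            rows.append(row)   # no remaining name fits; start a new row
            row = ""
            continue
        name = pool.pop(idx)
        row = f"{row} {name}" if row else name
    if row:
        rows.append(row)
    return rows
```

Even this simple version keeps every name intact on a single row while wasting little trailing space, which is the constraint the mosaic had to satisfy 339,537 times over.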

Finding — and wrangling — hundreds of thousands of names

In addition to the art and algorithms, the foundation of One.MIT is the extensive collection of names spanning more than 160 years of MIT. The names reflect students, alumni, faculty, and staff — the wide variety of individuals who have always formed the MIT community.

Annie Wang, research scientist and special projects coordinator for MIT.nano, again played an instrumental role in collecting the names for the project, just as she had for the 2018 and 2020 versions. Despite her experience, collating the names to construct the newest edition still presented several challenges, given the variety of input sources to the dataset and the need to format names in a consistent manner.

“Both databases and OCR-scanned text can be messy,” says Wang, referring to the electronic databases and old paper directories from which names were sourced. “And cleaning them up is a lot of work.”

Many names were listed in multiple places, sometimes spelled or formatted differently across sources. There were very short first and last names, very long first and last names, and a number of cases in which more than one person had nearly identical names. And some groups are simply hard to find in the records. “One thing I wish we had,” comments Wang, “is a list of long-term volunteers at MIT who contribute so much but aren’t reflected in the main directories.”

Once the design was completed, Carter and Wang handed off a CAD file to Jorg Scholvin, associate director of fabrication at MIT.nano. Scholvin assembled a team that reflected One.MIT — students, faculty, staff, and alumni — and worked with them to fabricate the wafer inside MIT.nano’s cleanroom. The fab team included Carter; undergraduate students Akorfa Dagadu, Sean Luk, Emilia K. Szczepaniak, Amber Velez, and twin brothers Juan Antonio Luera and Juan Angel Luera; MIT Sloan School of Management EMBA student Patricia LaBorda; staff member Kevin Verrier of MIT Facilities; and alumnae Madeline Hickman '11 and Eboney Hearn '01, who is also the executive director of MIT Introduction to Technology, Engineering and Science (MITES).

Understanding why autism symptoms sometimes improve amid fever

With support from The Marcus Foundation, an MIT neuroscientist and a Harvard Medical School immunologist will study the “fever effect” in an effort to devise therapies that mimic its beneficial effects.

Scientists are catching up to what parents and other caregivers have been reporting for many years: When some people with autism spectrum disorders experience an infection that sparks a fever, their autism-related symptoms seem to improve.

With a pair of new grants from The Marcus Foundation, scientists at MIT and Harvard Medical School hope to explain how this happens in an effort to eventually develop therapies that mimic the “fever effect” to similarly improve symptoms.

“Although it isn’t actually triggered by the fever, per se, the ‘fever effect’ is real, and it provides us with an opportunity to develop therapies to mitigate symptoms of autism spectrum disorders,” says neuroscientist Gloria Choi, associate professor in the MIT Department of Brain and Cognitive Sciences and affiliate of The Picower Institute for Learning and Memory.

Choi will collaborate on the project with Jun Huh, associate professor of immunology at Harvard Medical School. Together the grants to the two institutions provide $2.1 million over three years.

“To the best of my knowledge, the ‘fever effect’ is perhaps the only natural phenomenon in which developmentally determined autism symptoms improve significantly, albeit temporarily,” Huh says. “Our goal is to learn how and why this happens at the levels of cells and molecules, to identify immunological drivers, and produce persistent effects that benefit a broad group of individuals with autism.”

The Marcus Foundation has been involved in autism work for over 30 years, helping to develop the field and addressing everything from awareness to treatment to new diagnostic devices.

“I have long been interested in novel approaches to treating and lessening autism symptoms, and doctors Choi and Huh have homed in on a bold theory,” says Bernie Marcus, founder and chair of The Marcus Foundation. “It is my hope that this Marcus Foundation Medical Research Award helps their theory come to fruition and ultimately helps improve the lives of children with autism and their families.”

Brain-immune interplay

For a decade, Huh and Choi have been investigating the connection between infection and autism. Their studies suggest that the beneficial effects associated with fever may arise from molecular changes in the immune system during infection, rather than from the elevation of body temperature per se.

Their work in mice has shown that maternal infection during pregnancy, modulated by the composition of the mother’s microbiome, can lead to neurodevelopmental abnormalities in the offspring that result in autism-like symptoms, such as impaired sociability. Huh’s and Choi’s labs have traced the effect to elevated maternal levels of a type of immune-signaling molecule called IL-17a, which acts on receptors in brain cells of the developing fetus, leading to hyperactivity in a region of the brain’s cortex called S1DZ. In another study, they’ve shown how maternal infection appears to prime offspring to produce more IL-17a during infection later in life.

Building on these studies, a 2020 paper clarified the fever effect in the setting of autism. This research showed that mice that developed autism symptoms as a result of maternal infection while in utero would exhibit improvements in their sociability when they had infections — a finding that mirrored observations in people. The scientists discovered that this effect depended on over-expression of IL-17a, which in this context appeared to calm affected brain circuits. When the scientists administered IL-17a directly to the brains of mice with autism-like symptoms whose mothers had not been infected during pregnancy, the treatment still produced improvements in symptoms.

New studies and samples

This work suggested that mimicking the “fever effect” by giving extra IL-17a could produce similar therapeutic effects for multiple autism-spectrum disorders, with different underlying causes. But the research also left wide-open questions that must be answered before any clinically viable therapy could be developed. How exactly does IL-17a lead to symptom relief and behavior change in the mice? Does the fever effect work in the same way in people?

In the new project, Choi and Huh hope to answer those questions in detail.

“By learning the science behind the fever effect and knowing the mechanism behind the improvement in symptoms, we can have enough knowledge to be able to mimic it, even in individuals who don’t naturally experience the fever effect,” Choi says.

Choi and Huh will continue their work in mice seeking to uncover the sequence of molecular, cellular and neural circuit effects triggered by IL-17a and similar molecules that lead to improved sociability and reduction in repetitive behaviors. They will also dig deeper into why immune cells in mice exposed to maternal infection become primed to produce IL-17a.

To study the fever effect in people, Choi and Huh plan to establish a “biobank” of samples from volunteers with autism who do or don’t experience symptoms associated with fever, as well as comparable volunteers without autism. The scientists will measure, catalog, and compare these immune system molecules and cellular responses in blood plasma and stool to determine the biological and clinical markers of the fever effect.

If the research reveals distinct cellular and molecular features of the immune response among people who experience improvements with fever, the researchers may be able to translate these insights into a therapy that mimics the benefits of fever without inducing actual fever. Detailing how the immune response acts in the brain would inform how such a therapy should be crafted to produce similar effects.

“We are enormously grateful and excited to have this opportunity,” Huh says. “We hope our work will ‘kick up some dust’ and make the first step toward discovering the underlying causes of fever responses. Perhaps, one day in the future, novel therapies inspired by our work will help transform the lives of many families and their children with ASD [autism spectrum disorder].”

Study explains why the brain can robustly recognize images, even without color

The findings also reveal why identifying objects in black-and-white images is more difficult for individuals who were born blind and had their sight restored.

Even though the human visual system has sophisticated machinery for processing color, the brain has no problem recognizing objects in black-and-white images. A new study from MIT offers a possible explanation for how the brain comes to be so adept at identifying both color and color-degraded images.

Using experimental data and computational modeling, the researchers found evidence suggesting the roots of this ability may lie in development. Early in life, when newborns receive severely limited color information, the brain is forced to learn to distinguish objects based on their luminance, or the intensity of light they emit, rather than their color. Later in life, when the retina and cortex are better equipped to process color, the brain incorporates color information as well, but it also maintains its previously acquired ability to recognize images without relying critically on color cues.

The findings are consistent with previous work showing that initially degraded visual and auditory input can actually be beneficial to the early development of perceptual systems.

“This general idea, that there is something important about the initial limitations that we have in our perceptual system, transcends color vision and visual acuity. Some of the work that our lab has done in the context of audition also suggests that there’s something important about placing limits on the richness of information that the neonatal system is initially exposed to,” says Pawan Sinha, a professor of brain and cognitive sciences at MIT and the senior author of the study.

The findings also help to explain why children who are born blind but have their vision restored later in life, through the removal of congenital cataracts, have much more difficulty identifying objects presented in black and white. Those children, who receive rich color input as soon as their sight is restored, may develop an overreliance on color that makes them much less resilient to changes or removal of color information.

MIT postdocs Marin Vogelsang and Lukas Vogelsang, and Project Prakash research scientist Priti Gupta, are the lead authors of the study, which appears today in Science. Sidney Diamond, a retired neurologist who is now an MIT research affiliate, and additional members of the Project Prakash team are also authors of the paper.

Seeing in black and white

The researchers’ exploration of how early experience with color affects later object recognition grew out of a simple observation from a study of children who had their sight restored after being born with congenital cataracts. In 2005, Sinha launched Project Prakash (the Sanskrit word for “light”), an effort in India to identify and treat children with reversible forms of vision loss.

Many of those children suffer from blindness due to dense bilateral cataracts. This condition often goes untreated in India, which has the world’s largest population of blind children, estimated at between 200,000 and 700,000.

Children who receive treatment through Project Prakash may also participate in studies of their visual development, many of which have helped scientists learn more about how the brain's organization changes following restoration of sight, how the brain estimates brightness, and other phenomena related to vision.

In this study, Sinha and his colleagues gave children a simple test of object recognition, presenting both color and black-and-white images. For children born with normal sight, converting color images to grayscale had no effect at all on their ability to recognize the depicted object. However, when children who underwent cataract removal were presented with black-and-white images, their performance dropped significantly.

This led the researchers to hypothesize that the nature of visual inputs children are exposed to early in life may play a crucial role in shaping resilience to color changes and the ability to identify objects presented in black-and-white images. In normally sighted newborns, retinal cone cells are not well-developed at birth, resulting in babies having poor visual acuity and poor color vision. Over the first years of life, their vision improves markedly as the cone system develops.

Because the immature visual system receives significantly reduced color information, the researchers hypothesized that during this time, the baby brain is forced to gain proficiency at recognizing images with reduced color cues. Additionally, they proposed, children who are born with cataracts and have them removed later may learn to rely too much on color cues when identifying objects because, as the researchers demonstrated experimentally in the paper, their retinas are already mature, so they begin their post-operative visual experience with good color vision.

To rigorously test that hypothesis, the researchers used a standard convolutional neural network, AlexNet, as a computational model of vision. They trained the network to recognize objects, giving it different types of input during training. As part of one training regimen, they initially showed the model grayscale images only, then introduced color images later on. This roughly mimics the developmental progression of chromatic enrichment as babies’ eyesight matures over the first years of life.
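The grayscale-only phase of that training curriculum amounts to discarding hue while preserving brightness. A minimal sketch of the conversion step, assuming the common Rec. 601 luma weighting (this is an illustration, not code from the study):

```python
# Illustrative only: the grayscale-first regimen needs color images
# collapsed to luminance. A common convention is the Rec. 601 weighting,
# which reflects the eye's differing sensitivity to red, green, and blue.

def rgb_to_luminance(pixel):
    """Collapse an (R, G, B) triple to a single luminance value."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def to_grayscale(image):
    """Convert an image (nested lists of RGB triples) to luminance values,
    discarding all hue information."""
    return [[rgb_to_luminance(px) for px in row] for row in image]

# A pure-red and a pure-green pixel of equal intensity differ only in hue;
# after conversion, a model can separate them only by luminance.
image = [[(255, 0, 0), (0, 255, 0)]]
gray = to_grayscale(image)  # red maps to about 76.2, green to about 149.7
```

In an actual training pipeline this kind of transform would be applied to the dataset during the initial phase, before color images are introduced.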

Another training regimen comprised only color images. This approximates the experience of the Project Prakash children, because they can process full color information as soon as their cataracts are removed.

The researchers found that the developmentally inspired model could accurately recognize objects in either type of image and was also resilient to other color manipulations. However, the Prakash-proxy model trained only on color images did not show good generalization to grayscale or hue-manipulated images.

“What happens is that this Prakash-like model is very good with colored images, but it’s very poor with anything else. When not starting out with initially color-degraded training, these models just don’t generalize, perhaps because of their over-reliance on specific color cues,” Lukas Vogelsang says.

The robust generalization of the developmentally inspired model is not merely a consequence of it having been trained on both color and grayscale images; the temporal ordering of these images makes a big difference. Another object-recognition model that was trained on color images first, followed by grayscale images, did not do as well at identifying black-and-white objects.

“It’s not just the steps of the developmental choreography that are important, but also the order in which they are played out,” Sinha says.

The advantages of limited sensory input

By analyzing the internal organization of the models, the researchers found that those that begin with grayscale inputs learn to rely on luminance to identify objects. Once they begin receiving color input, they don’t change their approach very much, since they’ve already learned a strategy that works well. Models that began with color images did shift their approach once grayscale images were introduced, but could not shift enough to make them as accurate as the models that were given grayscale images first.

A similar phenomenon may occur in the human brain, which has more plasticity early in life, and can easily learn to identify objects based on their luminance alone. Early in life, the paucity of color information may in fact be beneficial to the developing brain, as it learns to identify objects based on sparse information.

“As a newborn, the normally sighted child is deprived, in a certain sense, of color vision. And that turns out to be an advantage,” Diamond says.

Researchers in Sinha’s lab have observed that limitations in early sensory input can also benefit other aspects of vision, as well as the auditory system. In 2022, they used computational models to show that early exposure to only low-frequency sounds, similar to those that babies hear in the womb, improves performance on auditory tasks that require analyzing sounds over a longer period of time, such as recognizing emotions. They now plan to explore whether this phenomenon extends to other aspects of development, such as language acquisition.

The research was funded by the National Eye Institute of NIH and the Intelligence Advanced Research Projects Activity.

The origin of the sun’s magnetic field could lie close to its surface

Sunspots and flares could be a product of a shallow magnetic field, according to surprising new findings that may help scientists predict space weather.

The sun’s surface is a brilliant display of sunspots and flares driven by the solar magnetic field, which is internally generated through a process called dynamo action. Astrophysicists have assumed that the sun’s field is generated deep within the star. But an MIT study finds that the sun’s activity may be shaped by a much shallower process.

In a paper appearing today in Nature, researchers at MIT, the University of Edinburgh, and elsewhere find that the sun’s magnetic field could arise from instabilities within the sun’s outermost layers.

The team generated a precise model of the sun’s surface and found that when they simulated certain perturbations, or changes in the flow of plasma (ionized gas) within the top 5 to 10 percent of the sun, these surface changes were enough to generate realistic magnetic field patterns, with similar characteristics to what astronomers have observed on the sun. In contrast, their simulations in deeper layers produced less realistic solar activity.

The findings suggest that sunspots and flares could be a product of a shallow magnetic field, rather than a field that originates deeper in the sun, as scientists had largely assumed.

“The features we see when looking at the sun, like the corona that many people saw during the recent solar eclipse, sunspots, and solar flares, are all associated with the sun’s magnetic field,” says study author Keaton Burns, a research scientist in MIT’s Department of Mathematics. “We show that isolated perturbations near the sun’s surface, far from the deeper layers, can grow over time to potentially produce the magnetic structures we see.”

If the sun’s magnetic field does in fact arise from its outermost layers, this might give scientists a better chance at forecasting flares and geomagnetic storms that have the potential to damage satellites and telecommunications systems.

“We know the dynamo acts like a giant clock with many complex interacting parts,” says co-author Geoffrey Vasil, a researcher at the University of Edinburgh. “But we don’t know many of the pieces or how they fit together. This new idea of how the solar dynamo starts is essential to understanding and predicting it.”

The study’s co-authors also include Daniel Lecoanet and Kyle Augustson of Northwestern University, Jeffrey Oishi of Bates College, Benjamin Brown and Keith Julien of the University of Colorado at Boulder, and Nicholas Brummell of the University of California at Santa Cruz.

Flow zone

The sun is a white-hot ball of plasma that’s boiling on its surface. This boiling region is called the “convection zone,” where layers and plumes of plasma roil and flow. The convection zone comprises the top one-third of the sun’s radius and stretches about 200,000 kilometers below the surface.

“One of the basic ideas for how to start a dynamo is that you need a region where there’s a lot of plasma moving past other plasma, and that shearing motion converts kinetic energy into magnetic energy,” Burns explains. “People had thought that the sun’s magnetic field is created by the motions at the very bottom of the convection zone.”

To pin down exactly where the sun’s magnetic field originates, other scientists have used large three-dimensional simulations to try to solve for the flow of plasma throughout the many layers of the sun’s interior. “Those simulations require millions of hours on national supercomputing facilities, but what they produce is still nowhere near as turbulent as the actual sun,” Burns says.

Rather than simulating the complex flow of plasma throughout the entire body of the sun, Burns and his colleagues wondered whether studying the stability of plasma flow near the surface might be enough to explain the origins of the dynamo process.

To explore this idea, the team first used data from the field of “helioseismology,” where scientists use observed vibrations on the sun’s surface to determine the average structure and flow of plasma beneath the surface.

“If you take a video of a drum and watch how it vibrates in slow motion, you can work out the drumhead’s shape and stiffness from the vibrational modes,” Burns says. “Similarly, we can use vibrations that we see on the solar surface to infer the average structure on the inside.”
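The drum analogy can be made concrete with an even simpler system. The sketch below (an illustrative toy model, not the study's method) uses an ideal vibrating string: its mode frequencies depend on its tension and mass density, so measuring the fundamental frequency lets you invert for the tension, just as solar oscillation frequencies constrain the structure beneath the surface.

```python
import math

# Toy analogy for helioseismic inversion: for an ideal string of length L
# and linear mass density mu under tension T, the n-th standing-wave mode
# has frequency f_n = (n / (2L)) * sqrt(T / mu). Observing a mode frequency
# therefore constrains a property of the system you cannot see directly.

def mode_frequency(n, length, tension, density):
    """Frequency (Hz) of the n-th standing-wave mode of an ideal string."""
    return (n / (2 * length)) * math.sqrt(tension / density)

def infer_tension(f1, length, density):
    """Invert the fundamental frequency to recover the string's tension."""
    return density * (2 * length * f1) ** 2

# Round trip: pick a tension, compute the fundamental, invert it back.
T_true = 80.0        # newtons (assumed value for illustration)
f1 = mode_frequency(1, length=0.65, tension=T_true, density=0.001)
T_recovered = infer_tension(f1, length=0.65, density=0.001)
```

Helioseismology plays the same game in far higher dimension, using many observed oscillation modes to constrain the sun's interior structure and flows.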

Solar onion

For their new study, the researchers collected models of the sun’s structure from helioseismic observations. “These average flows look sort of like an onion, with different layers of plasma rotating past each other,” Burns explains. “Then we ask: Are there perturbations, or tiny changes in the flow of plasma, that we could superimpose on top of this average structure, that might grow to cause the sun’s magnetic field?”

To look for such patterns, the team turned to the Dedalus Project — a numerical framework that Burns developed that can simulate many types of fluid flows with high precision. The code has been applied to a wide range of problems, from the dynamics inside individual cells to ocean and atmospheric circulations.

“My collaborators have been thinking about the solar magnetism problem for years, and the capabilities of Dedalus have now reached the point where we could address it,” Burns says.

The team developed algorithms that they incorporated into Dedalus to find self-reinforcing changes in the sun’s average surface flows. The algorithm discovered new patterns that could grow and result in realistic solar activity. In particular, the team found patterns that match the locations and timescales of sunspots that have been observed by astronomers since Galileo in 1612.

Sunspots are transient features on the surface of the sun that are thought to be shaped by the sun’s magnetic field. These relatively cooler regions appear as dark spots in relation to the rest of the sun’s white-hot surface. Astronomers have long observed that sunspots occur in a cyclical pattern, growing and receding every 11 years, and generally gravitating around the equator, rather than near the poles.

In the team’s simulations, they found that certain changes in the flow of plasma, within just the top 5 to 10 percent of the sun’s surface layers, were enough to generate magnetic structures in the same regions. In contrast, changes in deeper layers produced less realistic solar fields that were concentrated near the poles, rather than near the equator.

The team was motivated to take a closer look at flow patterns near the surface as conditions there resembled the unstable plasma flows in entirely different systems: the accretion disks around black holes. Accretion disks are massive disks of gas and stellar dust that rotate in towards a black hole, driven by the “magnetorotational instability,” which generates turbulence in the flow and causes it to fall inward.

Burns and his colleagues suspected that a similar phenomenon is at play in the sun, and that the magnetorotational instability in the sun’s outermost layers could be the first step in generating the sun’s magnetic field.

“I think this result may be controversial,” he ventures. “Most of the community has been focused on finding dynamo action deep in the sun. Now we’re showing there’s a different mechanism that seems to be a better match to observations.” Burns says that the team is continuing to study if the new surface field patterns can generate individual sunspots and the full 11-year solar cycle.

“This is far from the final word on the problem,” says Steven Balbus, a professor of astronomy at Oxford University, who was not involved with the study. “However, it is a fresh and very promising avenue for further study. The current findings are very suggestive and the approach is innovative, and not in line with the current received wisdom. When the received wisdom has not been very fruitful for an extended period, something more creative is indicated, and that is what this work offers.”

This research was supported, in part, by NASA.

Using wobbling stellar material, astronomers measure the spin of a supermassive black hole for the first time

The results offer a new way to probe supermassive black holes and their evolution across the universe.

Astronomers at MIT, NASA, and elsewhere have a new way to measure how fast a black hole spins, by using the wobbly aftermath from its stellar feasting.

The method takes advantage of a black hole tidal disruption event — a blazingly bright moment when a black hole exerts tides on a passing star and rips it to shreds. As the star is disrupted by the black hole’s immense tidal forces, half of the star is blown away, while the other half is flung around the black hole, generating an intensely hot accretion disk of rotating stellar material.

The MIT-led team has shown that the wobble of the newly created accretion disk is key to working out the central black hole’s inherent spin.

In a study appearing today in Nature, the astronomers report that they have measured the spin of a nearby supermassive black hole by tracking the pattern of X-ray flashes that the black hole produced immediately following a tidal disruption event. The team followed the flashes over several months and determined that they were likely a signal of a bright-hot accretion disk that wobbled back and forth as it was pushed and pulled by the black hole’s own spin.

By tracking how the disk’s wobble changed over time, the scientists could work out how much the disk was being affected by the black hole’s spin, and in turn, how fast the black hole itself was spinning. Their analysis showed that the black hole was spinning at less than 25 percent the speed of light — relatively slow, as black holes go.

The study’s lead author, MIT Research Scientist Dheeraj “DJ” Pasham, says the new method could be used to gauge the spins of hundreds of black holes in the local universe in the coming years. If scientists can survey the spins of many nearby black holes, they can start to understand how the gravitational giants evolved over the history of the universe.

“By studying several systems in the coming years with this method, astronomers can estimate the overall distribution of black hole spins and understand the longstanding question of how they evolve over time,” says Pasham, who is a member of MIT’s Kavli Institute for Astrophysics and Space Research.

The study’s co-authors include collaborators from a number of institutions, including NASA, Masaryk University in the Czech Republic, the University of Leeds, the University of Syracuse, Tel Aviv University, the Polish Academy of Sciences, and elsewhere.

Shredded heat

Every black hole has an inherent spin that has been shaped by its cosmic encounters over time. If, for instance, a black hole has grown mostly through accretion — brief episodes in which material from a surrounding disk falls onto it — that infalling material causes the black hole to spin up to quite high speeds. In contrast, if a black hole grows mostly by merging with other black holes, each merger could slow things down, as one black hole’s spin meets up against the spin of the other.

As a black hole spins, it drags the surrounding space-time around with it. This frame-dragging gives rise to Lense-Thirring precession, a longstanding prediction of general relativity describing how extremely strong gravitational fields, such as those generated by a black hole, can pull on the surrounding space and time. Normally, this effect would not be obvious around black holes, as the massive objects emit no light.

But in recent years, physicists have proposed that, in instances such as during a tidal disruption event, or TDE, scientists might have a chance to track the light from stellar debris as it is dragged around. Then, they might hope to measure the black hole’s spin.

In particular, during a TDE, scientists predict that a star may fall onto a black hole from any direction, generating a disk of white-hot, shredded material that could be tilted, or misaligned, with respect to the black hole’s spin. (Imagine the accretion disk as a tilted donut that is spinning around a donut hole that has its own, separate spin.) As the disk encounters the black hole’s spin, it wobbles as the black hole pulls it into alignment. Eventually, the wobbling subsides as the disk settles into the black hole’s spin. Scientists predicted that a TDE’s wobbling disk should therefore be a measurable signature of the black hole’s spin.

“But the key was to have the right observations,” Pasham says. “The only way you can do this is, as soon as a tidal disruption event goes off, you need to get a telescope to look at this object continuously, for a very long time, so you can probe all kinds of timescales, from minutes to months.”

A high-cadence catch

For the past five years, Pasham has looked for tidal disruption events that are bright enough, and near enough, to quickly follow up and track for signs of Lense-Thirring precession. In February 2020, he and his colleagues got lucky with the detection of AT2020ocn, a bright flash emanating from a galaxy about a billion light years away, which was initially spotted in the optical band by the Zwicky Transient Facility.

From the optical data, the flash appeared to be the first moments following a TDE. Being both bright and relatively close by, Pasham suspected the TDE might be the ideal candidate to look for signs of disk wobbling, and possibly measure the spin of the black hole at the host galaxy’s center. But for that, he would need much more data.

“We needed quick and high-cadence data,” Pasham says. “The key was to catch this early on because this precession, or wobble, should only be present early on. Any later, and the disk would not wobble anymore.”

The team discovered that NASA’s NICER telescope was able to catch the TDE and continuously keep an eye on it over months at a time. NICER — an abbreviation for Neutron star Interior Composition ExploreR — is an X-ray telescope on the International Space Station that measures X-ray radiation around black holes and other extreme gravitational objects.

Pasham and his colleagues looked through NICER’s observations of AT2020ocn over 200 days following the initial detection of the tidal disruption event. They discovered that the event emitted X-rays that appeared to peak every 15 days, for several cycles, before eventually petering out. They interpreted the peaks as times when the TDE’s accretion disk wobbled face-on, emitting X-rays directly toward NICER’s telescope, before wobbling away as it continued to emit X-rays (similar to waving a flashlight toward and away from someone every 15 days).

The researchers took this pattern of wobbling and worked it into the original theory for Lense-Thirring precession. Based on estimates of the black hole’s mass, and that of the disrupted star, they were able to come up with an estimate for the black hole’s spin — less than 25 percent the speed of light.
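The scale of this kind of inference can be illustrated with the weak-field Lense-Thirring precession rate, Ω = 2GJ/(c²r³). The sketch below is a rough illustration with assumed example numbers, not the team's analysis, which rests on full relativistic disk models and the measured masses of the black hole and star.

```python
import math

# Illustrative sketch (not the study's analysis): in the weak-field limit,
# a tilted ring of material at radius r around a black hole of mass M and
# angular momentum J precesses at Omega_LT = 2 * G * J / (c**2 * r**3),
# where J = a * G * M**2 / c for dimensionless spin a (0 <= a < 1).

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def lense_thirring_period(mass_kg, spin, radius_m):
    """Precession period (seconds) of a tilted ring at radius_m."""
    J = spin * G * mass_kg**2 / C               # black hole angular momentum
    omega = 2 * G * J / (C**2 * radius_m**3)    # precession rate, rad/s
    return 2 * math.pi / omega

# Example numbers (assumed, for illustration only): a million-solar-mass
# black hole with spin a = 0.2, and disk material at 6 gravitational radii.
M = 1e6 * M_SUN
r_g = G * M / C**2                              # gravitational radius, m
period_days = lense_thirring_period(M, 0.2, 6 * r_g) / 86400
```

Because the period scales as r³/a, a measured wobble period plus a mass estimate constrains the spin; the faster the spin, the faster a disk at a given radius precesses.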

Their results mark the first time that scientists have used observations of a wobbling disk following a tidal disruption event to estimate the spin of a black hole.

“Black holes are fascinating objects and the flows of material that we see falling onto them can generate some of the most luminous events in the universe,” says study co-author Chris Nixon, associate professor of theoretical physics at the University of Leeds. “While there is a lot we still don’t understand, there are amazing observational facilities that keep surprising us and generating new avenues to explore. This event is one of those surprises.”

As new telescopes such as the Rubin Observatory come online in the coming years, Pasham foresees more opportunities to pin down black hole spins.

“The spin of a supermassive black hole tells you about the history of that black hole,” Pasham says. “Even if a small fraction of those that Rubin captures have this kind of signal, we now have a way to measure the spins of hundreds of TDEs. Then we could make a big statement about how black holes evolve over the age of the universe.”

This research was funded, in part, by NASA and the European Space Agency.

An expansive approach to making new compounds

To create molecules with unique properties, Associate Professor Robert Gilliard and his team deploy strategies from both organic and inorganic chemistry.

While most chemistry labs focus on either organic (carbon-containing) or inorganic (metal-containing) molecules, Robert Gilliard’s lab takes a more expansive approach.

On any given day in his lab, researchers may be synthesizing new materials that can light up or change color in response to temperature changes, designing new molecules that activate chemical bonds, or finding new ways to make useful compounds out of carbon dioxide. Mixing different approaches and drawing from a variety of areas of expertise is the defining feature of his lab’s style of chemistry.

“At the core of our program, we are a chemical synthesis lab. We make molecules,” Gilliard says. “I have students that are in the organic division and students that are in the inorganic division, and we combine concepts from both worlds. We really can’t do our chemistry without both.”

Some of the molecules his lab creates require such specialized laboratory skills that very few other labs even try to make them. These compounds have a variety of unique optical and electrical properties that have drawn interest from companies that make LEDs and other optoelectronic devices.

Previously a professor at the University of Virginia, Gilliard joined the MIT faculty in 2023 as the Novartis Associate Professor of Chemistry, in part because of the opportunities to work with engineers to investigate device applications for those molecules, and to connect with companies interested in their light-generating properties.

“By bringing in components from different subareas of chemistry, we have generated some interesting optical and electronic properties in these compounds,” he says.

A winding path

After joining the faculty at UVA in 2017, Gilliard had no inkling that he would soon end up at MIT. His path to the Institute began soon after beginning his appointment, when he invited Christopher “Kit” Cummins, the Dreyfus Professor of Chemistry at MIT, to give a seminar at UVA. Cummins was very interested in the compounds Gilliard was working on and suggested that Gilliard come to MIT for six months as part of the MLK Visiting Professors and Scholars Program.

At the time, Gilliard was still getting settled as a new faculty member and didn’t want to leave his lab, but a few years later, when things were up and running, he joined the MLK program for the 2021-2022 school year. He worked closely with Cummins and others in MIT’s Department of Chemistry, and at the end of the year, department head Troy Van Voorhis broached the idea of bringing him to MIT as a permanent faculty member.

Gilliard, taken by surprise, had no intention of leaving his position at UVA, but he was intrigued by the opportunities for collaboration at MIT and in the Boston area in general.

“The MLK program was a great experience, a well-organized program that really exposed me to the whole MIT institution. I can say this, and I mean it: There’s no way I would’ve come here as a faculty member had I not done that MLK fellowship,” Gilliard says. “I was really enjoying my appointment at the University of Virginia and students that I had, and colleagues there. It would have been nearly impossible to get me to move if I hadn’t already spent that time at MIT and enjoyed the atmosphere and the people.”

Gilliard first became interested in chemistry as a high school student in Hartsville, South Carolina, thanks to an inspiring teacher, Charlotte Godwin, who taught his chemistry, physics, and physical science honors classes. He went to Clemson University planning to study premed, but he wasn’t enthusiastic about that choice.

“Before I arrived, I think I already knew I wasn’t going to do that because I don’t really like hospitals that much,” he recalls. “And so I changed my major before I even arrived, and I changed it to engineering.”

Clemson has a well-known engineering program, but after a couple of classes, Gilliard realized that wasn’t the best choice for him, either. He was, however, enjoying his chemistry classes, so he switched his major to chemistry and signed up to do undergraduate research.

He ended up working with a professor named Rhett Smith, who had just joined the Clemson faculty after doing a postdoc at MIT with Professor Stephen Lippard. In Smith’s lab, Gilliard worked on synthesizing catalysts as well as molecules that could be used as sensors, including sensors for cyanide and TNT, an explosive.

“That was just an amazing experience,” he says. “That’s when I knew that research was something that I enjoyed and that I would likely go on to graduate school.”

When he wasn’t working in Smith’s lab, Gilliard was still immersed in chemistry, working in the organic chemistry teaching labs. “I was doing so much chemistry, but I was having fun with it, so it didn’t really feel like work. It felt like something exciting to explore,” he says.

Novel compounds

As a graduate student at the University of Georgia, Gilliard focused on inorganic main-group chemistry but also took organic chemistry courses and was a teaching assistant for two organic chemistry classes. “I knew that I wanted to learn as much organic chemistry as possible because it would be beneficial for my career,” he says.

For his PhD research, he studied chemical bonds that can form between main-group elements — elements found at the edges of the periodic table, in columns 1-2 and 13-18. These types of bonds can be very difficult to achieve, but once made, they expand the possible bonding scenarios for non-transition metal elements, which makes them useful in a range of chemical reactions.

While doing a postdoctoral fellowship, which he divided between the Swiss Federal Institute of Technology (ETH Zürich) and Case Western Reserve University, Gilliard worked on combining small phosphorus-containing reagents into phosphorus heterocycles, which consist of multiple varied rings fused together.

At the University of Virginia, and now in his lab at MIT, Gilliard continued to study heterocycles, now focusing mainly on boron heterocycles. These molecules hold potential in numerous optical and electronic applications, in part because of their ability to efficiently donate or accept electrons from other molecules. Recently, in the Journal of the American Chemical Society, Gilliard’s lab published the first examples of boraphenalenyl radicals and diborepin biradicals that exhibit this important redox behavior. Such materials can also be used to make stimuli-responsive materials and chemical sensors, or to advance various light-emitting or absorbing technologies.

His lab also works on compounds containing bismuth and antimony that can be used to activate carbon-hydrogen bonds. Another area of focus is capturing carbon dioxide and converting it into useful chemicals.

The success of all of these projects, Gilliard says, depends on the “great team” working in his lab, including several students, postdocs, and research scientists who came with him from the University of Virginia.

“A lot of the compounds that we make are very, very difficult. They require specialized techniques and skills, so I’m grateful to have talented folks working in my lab,” he says.