MIT News - School of Science
Janabel Xia: Algorithms, dance rhythms, and the drive to succeed

When she isn’t using mathematical and computational methods to improve driverless vehicles and make voting fairer, the senior performs with several of MIT’s dance groups to keep herself on track.



Senior math major Janabel Xia is a study in constant motion.

When she isn’t sorting algorithms and improving traffic control systems for driverless vehicles, she’s dancing as a member of at least four dance clubs. She’s joined several social justice organizations, worked on cryptography and web authentication technology, and created a polling app that allows users to vote anonymously.

In her final semester, she’s putting the pedal to the metal, with a green light to lessen the carbon footprint of urban transportation by using sensors at traffic light intersections.

First steps

Growing up in Lexington, Massachusetts, Xia competed on math teams starting in elementary school. On her math team, which met early in the morning before school, she discovered a love of problem-solving that challenged her more than classroom “plug-and-chug exercises.”

At Lexington High School, she was math team captain, a two-time Math Olympiad attendee, and a silver medalist for Team USA at the European Girls' Mathematical Olympiad.

As a math major, she studies combinatorics and theoretical computer science, including theoretical and applied cryptography. In her sophomore year, she was a researcher in the Cryptography and Information Security Group at the MIT Computer Science and Artificial Intelligence Laboratory, where she conducted cryptanalysis research under Professor Vinod Vaikuntanathan.

Part of her interest in cryptography stems from the beauty of the underlying mathematics itself; the field feels like clever engineering with mathematical tools. Another part stems from its political dimensions, including its potential to fundamentally change existing power structures and governance. Xia and students at the University of California at Berkeley and Stanford University created zkPoll, a private polling app written in the Circom programming language that allows users to create polls for specific sets of people, while generating a zero-knowledge proof that keeps personal information hidden and reduces the influence of public perception on voting.

Her participation in the PKG Center’s Active Community Engagement Freshman Pre-Orientation Program introduced her to local community organizations focusing on food security, housing for formerly incarcerated individuals, and access to health care. She is also part of Reading for Revolution, a student book club that discusses race, class, and working-class movements within MIT and the Greater Boston area.

Xia’s educational journey led to her ongoing pursuit of combining mathematical and computational methods with areas adjacent to urban planning. “When I realized that planning was as much concerned with social justice as it was with design, I became more attracted to the field,” she says.

Going on autopilot

She took classes with the Department of Urban Studies and Planning and is currently working on an Undergraduate Research Opportunities Program (UROP) project with Professor Cathy Wu in the Institute for Data, Systems, and Society.

Recent work on eco-driving by Wu and doctoral student Vindula Jayawardana investigated semi-autonomous vehicles that communicate with sensors localized at traffic intersections, which in theory could reduce carbon emissions by up to 21 percent.

Xia aims to optimize how these sensors are rolled out at traffic intersections, considering a graded scheme in which perhaps only 20 percent of the sensors are installed initially and more are added in waves. She wants to maximize the emission reduction at each step of the process, while ensuring that no sensors are needlessly installed and then removed.
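One way to picture such a graded rollout is a greedy staged selection in which each wave only ever adds sensors, so nothing is installed and later torn out. The sketch below is purely illustrative: the intersections, the per-intersection benefit values, and the `estimate_reduction` function are made-up stand-ins, not part of Xia’s actual model.

```python
# Hypothetical sketch of a staged ("graded") sensor rollout. We assume some
# way to estimate the emission reduction achieved by any set of equipped
# intersections; here it is a toy additive model.

def greedy_rollout(intersections, estimate_reduction, wave_fractions):
    """Pick sensor waves greedily: each wave adds the intersections that most
    increase the estimated reduction, keeping all earlier waves in place."""
    installed = set()
    waves = []
    for frac in wave_fractions:
        target = int(round(frac * len(intersections)))
        while len(installed) < target:
            # add the intersection with the largest marginal benefit
            best = max(
                (i for i in intersections if i not in installed),
                key=lambda i: estimate_reduction(installed | {i}),
            )
            installed.add(best)
        waves.append((frac, set(installed)))
    return waves

# Toy benefit model: each intersection contributes a fixed reduction value.
values = {"A": 5.0, "B": 3.0, "C": 8.0, "D": 1.0, "E": 4.0}
reduction = lambda s: sum(values[i] for i in s)

plan = greedy_rollout(list(values), reduction, wave_fractions=[0.2, 0.6, 1.0])
for frac, installed in plan:
    print(f"{frac:.0%} wave -> {sorted(installed)}")
```

Because each wave is a superset of the previous one, the "no unnecessary installation and de-installation" constraint holds by construction; the open question in practice is whether greedy choices stay near-optimal for a realistic, non-additive benefit function.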

Dance numbers

Meanwhile, Xia has danced with MIT’s Fixation, Ridonkulous, and MissBehavior groups, and has served as a traditional Chinese dance choreographer for the MIT Asian Dance Team.

A dancer since age 3, Xia started with Chinese traditional dance and later added ballet and jazz. Because she is as much a dancer as a researcher, she has figured out how to make her schedule work.

“Production weeks are always madness, with dancers running straight from class to dress rehearsals and shows all evening and coming back early next morning to take down lights and roll up marley [material that covers the stage floor],” she says. “As busy as it keeps me, I couldn’t have survived MIT without dance. I love the discipline, creativity, and most importantly the teamwork that dance demands of us. I really love the dance community here with my whole heart. These friends have inspired me and given me the love to power me through MIT.”

Xia lives with her fellow Dance Team members at the off-campus Women's Independent Living Group (WILG).  “I really value WILG's culture of independence, both in lifestyle — cooking, cleaning up after yourself, managing house facilities, etc. — and thought — questioning norms, staying away from status games, finding new passions.”

In addition to her UROP, she’s wrapping up some graduation requirements, finishing up a research paper on sorting algorithms from her summer at the University of Minnesota Duluth Research Experience for Undergraduates in combinatorics, and deciding between PhD programs in math and computer science.  

“My biggest goal right now is to figure out how to combine my interests in mathematics and urban studies, and more broadly connect technical perspectives with human-centered work in a way that feels right to me,” she says.

“Overall, MIT has given me so many avenues to explore that I would have never thought about before coming here, for which I’m infinitely grateful. Every time I find something new, it’s hard for me not to find it cool. There’s just so much out there to learn about. While it can feel overwhelming at times, I hope to continue that learning and exploration for the rest of my life.”


Researchers develop a detector for continuously monitoring toxic gases

The material could be made as a thin coating to analyze air quality in industrial or home settings over time.


Most systems used to detect toxic gases in industrial or domestic settings can be used only once, or at best a few times. Now, researchers at MIT have developed a detector that could provide continuous monitoring for the presence of these gases, at low cost.

The new system combines two existing technologies, bringing them together in a way that preserves the advantages of each while avoiding their limitations. The team used a material called a metal-organic framework, or MOF, which is highly sensitive to tiny traces of gas but whose performance quickly degrades, and combined it with a polymer material that is highly durable and easier to process, but much less sensitive.

The results are reported today in the journal Advanced Materials, in a paper by MIT professors Aristide Gumyusenge, Mircea Dinca, Heather Kulik, and Jesus del Alamo, graduate student Heejung Roh, and postdocs Dong-Ha Kim, Yeongsu Cho, and Young-Moo Jo.

Highly porous and with large surface areas, MOFs come in a variety of compositions. Some can be insulators, but the ones used for this work are highly electrically conductive. With their sponge-like form, they are effective at capturing molecules of various gases, and the sizes of their pores can be tailored to make them selective for particular kinds of gases. “If you are using them as a sensor, you can recognize if the gas is there if it has an effect on the resistivity of the MOF,” says Gumyusenge, the paper’s senior author and the Merton C. Flemings Career Development Assistant Professor of Materials Science and Engineering.

The drawback for these materials’ use as detectors for gases is that they readily become saturated, and then can no longer detect and quantify new inputs. “That’s not what you want. You want to be able to detect and reuse,” Gumyusenge says. “So, we decided to use a polymer composite to achieve this reversibility.”

The team used a class of conductive polymers that Gumyusenge and his co-workers had previously shown can respond to gases without permanently binding to them. “The polymer, even though it doesn’t have the high surface area that the MOFs do, will at least provide this recognize-and-release type of phenomenon,” he says.

The team combined the polymers in a liquid solution with the MOF material in powdered form and deposited the mixture on a substrate, where it dries into a uniform, thin coating. By combining the polymer, with its quick detection capability, and the more sensitive MOFs in a one-to-one ratio, he says, “suddenly we get a sensor that has both the high sensitivity we get from the MOF and the reversibility that is enabled by the presence of the polymer.”

The material changes its electrical resistance when gas molecules are temporarily trapped in it; these changes can be monitored continuously by simply attaching an ohmmeter to track the resistance over time. Gumyusenge and his students demonstrated the composite material’s ability to detect nitrogen dioxide, a toxic gas produced by many kinds of combustion, in a small lab-scale device. After 100 detection cycles, the material still maintained its baseline performance within a margin of about 5 to 10 percent, demonstrating its potential for long-term use.

In addition, this material has far greater sensitivity than most detectors now in use for nitrogen dioxide, the team reports. The gas is commonly emitted by gas stoves and ovens, and with it recently linked to many asthma cases in the U.S., reliable detection at low concentrations is important. The team demonstrated that the new composite can reversibly detect the gas at concentrations as low as 2 parts per million.

While their demonstration was specifically aimed at nitrogen dioxide, Gumyusenge says, “we can definitely tailor the chemistry to target other volatile molecules,” as long as they are small polar analytes, “which tend to be most of the toxic gases.”

Besides being compatible with a simple handheld detector or a smoke-alarm-style device, the material has the advantage that the polymer allows it to be deposited as an extremely thin, uniform film, unlike regular MOFs, which generally come in an inefficient powder form. Because the films are so thin, little material is needed and production costs could be low; the processing methods could be typical of those used for industrial coating processes. “So, maybe the limiting factor will be scaling up the synthesis of the polymers, which we’ve been synthesizing in small amounts,” Gumyusenge says.

“The next steps will be to evaluate these in real-life settings,” he says. For example, the material could be applied as a coating on chimneys or exhaust pipes to continuously monitor gases through readings from an attached resistance monitoring device. In such settings, he says, “we need tests to check if we truly differentiate it from other potential contaminants that we might have overlooked in the lab setting. Let’s put the sensors out in real-world scenarios and see how they do.”

The work was supported by the MIT Climate and Sustainability Consortium (MCSC), the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT, and the U.S. Department of Energy.


The beauty of biology

Senior Hanjun Lee planned to pursue chemistry at MIT. A course in genetics changed that.



When Hanjun Lee arrived at MIT, he was set on becoming a Course 5 chemistry student. Based on his experience in high school, he assumed biology was all about rote memorization.

That changed when he took course 7.03 (Genetics), taught by then-professor Aviv Regev, now head and executive vice president of research and early development at Genentech, and Peter Reddien, professor of biology and core member and associate director of the Whitehead Institute for Biomedical Research.

He notes that friends from other schools don’t cite a single course that changed their major, but he’s not alone in choosing Course 7 because of 7.03.

“Genetics has this interesting force, especially in MIT biology. The department’s historical — and active — role in genetics research ties directly into the way the course is taught,” Lee says. “Biology is about logic, scientific reasoning, and posing the right questions.”

A few years later, as a teaching assistant for class 7.002 (Fundamentals of Experimental Molecular Biology), he came to value how much care MIT biology professors take in presenting the material for all offered courses.

“I really appreciate how much effort MIT professors put into their teaching,” Lee says. “As a TA, you realize the beauty of how the professors organize these things — because they’re teaching you in a specific way, and you can grasp the beauty of it — there’s a beauty in studying and finding the patterns in nature.”

An undertaking to apply

Attending MIT hadn’t exactly been a lifelong dream. In fact, it didn’t occur to Lee that he could or should apply until he represented South Korea at the 49th International Chemistry Olympiad, where he won a gold medal in 2017. There, he had the chance to speak with MIT alumni, as well as current and aspiring students. More than half of those aspiring students eventually enrolled, Lee among them.

“Before that, MIT was this nearly mythical institution, so that experience really changed my life,” Lee recalls. “I heard so many different stories from people with so many different backgrounds — all converging towards the same enthusiasm towards science.” 

At the time, Lee was already attending medical school — a six-year undergraduate program in Korea — that would lead to a stable career in medicine. Attending MIT would involve both changing his career plans and uprooting his life, leaving all his friends and family behind.

His parents weren’t especially enthusiastic about his desire to study at MIT, so it was up to Lee to meet the application requirements. He woke up at 3 a.m. to find his own way to the only SAT testing site in South Korea — an undertaking he now recalls with a laugh. In just three months, he had gathered everything he needed; MIT was the only institution in the United States Lee applied to.

He arrived in Cambridge, Massachusetts, in 2018 but attended MIT only for a semester before returning to Korea for his two years of mandatory military service.

“During military service, my goal was to read as many papers as possible, because I wondered what topic of science I’m drawn to — and many of the papers I was reading were authored by people I recognized, people who taught biology at MIT,” Lee says. “I became really interested in cancer biology.”

Return to MIT

When he returned to campus, Lee pledged to do everything he could to meet with faculty and discuss their work. To that end, he joined the MIT Undergraduate Research Journal, allowing him to interview professors. He notes that most MIT faculty are enthusiastic about being contacted by undergraduate students.

Stateside, Lee also reached out to Michael Lawrence, an assistant professor of pathology at Harvard Medical School and assistant geneticist at Mass General Cancer Center, about a preprint concerning APOBEC, an enzyme Lee had studied at Seoul National University. Lawrence’s lab was looking into APOBEC and cancer evolution — and the idea that the enzyme might drive drug resistance to cancer treatment.

“Since he joined my lab, I’ve been absolutely amazed by his scientific talents,” Lawrence says. “Hanjun’s scientific maturity and achievements are extremely rare, especially in an undergraduate student.”

Lee has made new discoveries from genomic data and was involved in publishing a paper in Molecular Cell and a paper in Nature Genetics. In the latter, the lab identified the source of background noise in chromosome conformation capture experiments, a technique for analyzing chromatin in cells.

Lawrence thinks Lee “is destined for great leadership in science.” In the meantime, Lee has gained valuable insights into how much work these types of achievements require.

“Doing research has been rewarding, but it also taught me to appreciate that science is almost 100 percent about failures,” Lee says. “It is those failures that end up leading you to the path of success.”

Widening the scope

Lee’s personal motto is that to excel in a specific field, one must have a broad sense of what the entire field looks like; he suggests that other budding scientists enroll in courses distant from their research area. He also says it was key to see his peers as collaborators rather than competitors, and that each student will excel in their own unique way.

“Your MIT experience is defined by interactions with others,” Lee says. “They will help identify and shape your path.”

For his accomplishments, Lee was recently named an American Association for Cancer Research Undergraduate Scholar. Last year, he also spoke at the Gordon Research Conference on Cell Growth and Proliferation about his work on the retinoblastoma gene product RB. Lee was also among the 2024 Biology Undergraduate Award Winners, recognized with the Salvador E. Luria Prize for outstanding scholarship and research of publication quality.

Encouraged by positive course evaluations during his time as a TA, Lee hopes to inspire other students in the future through teaching. Lee has recently decided to pursue a PhD in cancer biology at Harvard Medical School, although his interests remain broad.

“I want to explore other fields of biology as well,” he says. “I have so many questions that I want to answer.”

Although initially resistant, Lee’s mother and father are now “immensely proud to be MIT parents” and will be coming to Cambridge in May to celebrate Lee’s graduation.

“Throughout my years here, they’ve been able to see how I’ve changed,” he says. “I don’t think I’m a great scientist, yet, but I now have some sense of how to become one.” 


Jeong Min Park earns 2024 Schmidt Science Fellowship

The doctoral student will use the prize to find novel phases of matter and particles.


Physics graduate student Jeong Min (Jane) Park is among the 32 exceptional early-career scientists worldwide chosen to receive the prestigious 2024 Schmidt Science Fellows award.  

As a 2024 Schmidt Science Fellow, Park’s postdoctoral work will seek to directly detect phases that could host new particles by employing an instrument that can visualize subatomic-scale phenomena.  

Working with her advisor, Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics, Park focuses her MIT research on discovering novel quantum phases of matter.

“When there are many electrons in a material, their interactions can lead to collective behaviors that are not expected from individual particles, known as emergent phenomena,” explains Park. “One example is superconductivity, where interacting electrons combine together as a pair at low temperatures to conduct electricity without energy loss.”

During her PhD studies, she has investigated novel types of superconductivity by designing new materials with targeted interactions and topology. In particular, she used graphene (atomically thin, two-dimensional layers of graphite, the same material found in pencil lead) and turned it into a “magic” material. This so-called magic-angle twisted trilayer graphene provided an extraordinarily strong form of superconductivity that is robust under high magnetic fields. Later, she found a whole “magic family” of these materials, elucidating the key mechanisms behind superconductivity and interaction-driven phenomena. These results have provided a new platform to study emergent phenomena in two dimensions, which can lead to innovations in electronics and quantum technology.

Park says she is looking forward to her postdoctoral studies with Princeton University physics professor Ali Yazdani's lab.

“I’m excited about the idea of discovering and studying new quantum phenomena that could further the understanding of fundamental physics,” says Park. “Having explored interaction-driven phenomena through the design of new materials, I’m now aiming to broaden my perspective and expertise to address a different kind of question, by combining my background in material design with the sophisticated local-scale measurements that I will adopt during my postdoc.”

She explains that elementary particles are classified as either bosons or fermions, with contrasting behaviors upon interchanging two identical particles, referred to as exchange statistics; bosons remain unchanged, while fermions acquire a minus sign in their quantum wavefunction.

Theories predict the existence of fundamentally different particles known as non-abelian anyons, whose wavefunctions braid upon particle exchange. Such a braiding process can be used to encode and store information, potentially opening the door to fault-tolerant quantum computing in the future.
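The exchange rules described above can be written compactly. This is the standard textbook formulation, not notation drawn from Park’s own work:

```latex
% Exchanging two identical particles multiplies the joint wavefunction
% by a phase:
\psi(x_2, x_1) = e^{i\theta}\,\psi(x_1, x_2),
\qquad
\begin{cases}
\theta = 0 & \text{bosons (wavefunction unchanged)} \\
\theta = \pi & \text{fermions (minus sign)} \\
\theta \neq 0, \pi & \text{abelian anyons, possible only in two dimensions}
\end{cases}
```

For non-abelian anyons, the exchange instead acts as a unitary matrix on a degenerate set of states, $\psi_a \to \sum_b U_{ab}\,\psi_b$; because such matrices need not commute, the outcome depends on the order of the exchanges, which is what allows braids to encode information.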

Since 2018, this prestigious postdoctoral program has sought to break down silos among scientific fields to solve the world’s biggest challenges and support future leaders in STEM.

Schmidt Science Fellows, an initiative of Schmidt Sciences, delivered in partnership with the Rhodes Trust, identifies, develops, and amplifies the next generation of science leaders, by building a community of scientists and supporters of interdisciplinary science and leveraging this network to drive sector-wide change. The 2024 fellows consist of 17 nationalities across North America, Europe, and Asia.   

Nominated candidates undergo a rigorous selection process that includes a paper-based academic review with panels of experts in their home disciplines and final interviews with panels, including senior representatives from across many scientific disciplines and different business sectors.  


Scientists use generative AI to answer complex questions in physics

A new technique that can automatically classify phases of physical systems could help scientists investigate novel materials.


When water freezes, it transitions from a liquid phase to a solid phase, resulting in a drastic change in properties like density and volume. Phase transitions in water are so common most of us probably don’t even think about them, but phase transitions in novel materials or complex physical systems are an important area of study.

To fully understand these systems, scientists must be able to recognize phases and detect the transitions between them. But how to quantify phase changes in an unknown system is often unclear, especially when data are scarce.

Researchers from MIT and the University of Basel in Switzerland applied generative artificial intelligence models to this problem, developing a new machine-learning framework that can automatically map out phase diagrams for novel physical systems.

Their physics-informed machine-learning approach is more efficient than laborious manual techniques, which rely on theoretical expertise. Importantly, because the approach leverages generative models, it does not require the huge labeled training datasets used in other machine-learning techniques.

Such a framework could help scientists investigate the thermodynamic properties of novel materials or detect entanglement in quantum systems, for instance. Ultimately, this technique could make it possible for scientists to discover unknown phases of matter autonomously.

“If you have a new system with fully unknown properties, how would you choose which observable quantity to study? The hope, at least with data-driven tools, is that you could scan large new systems in an automated way, and it will point you to important changes in the system. This might be a tool in the pipeline of automated scientific discovery of new, exotic properties of phases,” says Frank Schäfer, a postdoc in the Julia Lab in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-author of a paper on this approach.

Joining Schäfer on the paper are first author Julian Arnold, a graduate student at the University of Basel; Alan Edelman, applied mathematics professor in the Department of Mathematics and leader of the Julia Lab; and senior author Christoph Bruder, professor in the Department of Physics at the University of Basel. The research is published today in Physical Review Letters.

Detecting phase transitions using AI

While water transitioning to ice might be among the most obvious examples of a phase change, more exotic phase changes, like when a material transitions from being a normal conductor to a superconductor, are of keen interest to scientists.

These transitions can be detected by identifying an “order parameter,” a quantity that captures the degree of order in a system and changes sharply at the transition. For instance, water freezes and transitions to a solid phase (ice) when its temperature drops below 0 degrees Celsius. In this case, an appropriate order parameter could be defined in terms of the proportion of water molecules that are part of the crystalline lattice versus those that remain in a disordered state.

In the past, researchers have relied on physics expertise to build phase diagrams manually, drawing on theoretical understanding to know which order parameters are important. Not only is this tedious for complex systems, and perhaps impossible for unknown systems with new behaviors, but it also introduces human bias into the solution.

More recently, researchers have begun using machine learning to build discriminative classifiers that can solve this task by learning to classify a measurement statistic as coming from a particular phase of the physical system, the same way such models classify an image as a cat or dog.

The MIT researchers demonstrated how generative models can be used to solve this classification task much more efficiently, and in a physics-informed manner.

The Julia Programming Language, a popular language for scientific computing that is also used in MIT’s introductory linear algebra classes, offers many tools that make it invaluable for constructing such generative models, Schäfer adds.

Generative models, like those that underlie ChatGPT and Dall-E, typically work by estimating the probability distribution of some data, which they use to generate new data points that fit the distribution (such as new cat images that are similar to existing cat images).

However, when simulations of a physical system using tried-and-true scientific techniques are available, researchers get a model of its probability distribution for free. This distribution describes the measurement statistics of the physical system.

A more knowledgeable model

The MIT team’s insight is that this probability distribution also defines a generative model upon which a classifier can be constructed. They plug the generative model into standard statistical formulas to directly construct a classifier instead of learning it from samples, as was done with discriminative approaches.
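The idea can be sketched in a few lines: if physics hands you the measurement distribution p(x | phase), Bayes’ rule turns it directly into a classifier with no training step. In this minimal sketch, two hypothetical Gaussians stand in for the simulated measurement statistics; none of the numbers come from the paper, and the article’s actual implementation uses Julia rather than Python.

```python
import math

def gaussian_pdf(x, mean, std):
    """Probability density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

# Assumed measurement statistics for each phase (illustrative only).
likelihoods = {
    "phase I":  lambda x: gaussian_pdf(x, mean=0.0, std=1.0),
    "phase II": lambda x: gaussian_pdf(x, mean=3.0, std=1.0),
}

def posterior(x, prior=None):
    """p(phase | x) via Bayes' rule, with a uniform prior by default."""
    prior = prior or {p: 1 / len(likelihoods) for p in likelihoods}
    joint = {p: likelihoods[p](x) * prior[p] for p in likelihoods}
    z = sum(joint.values())  # normalizing constant
    return {p: v / z for p, v in joint.items()}

print(posterior(0.2))   # strongly favors phase I
print(posterior(2.9))   # strongly favors phase II
```

Sweeping the posterior over a control parameter such as temperature, and watching where the predicted phase flips, is the basic mechanism by which a generative classifier can trace out a phase boundary.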

“This is a really nice way of incorporating something you know about your physical system deep inside your machine-learning scheme. It goes far beyond just performing feature engineering on your data samples or simple inductive biases,” Schäfer says.

This generative classifier can determine what phase the system is in given some parameter, like temperature or pressure. And because the researchers directly approximate the probability distributions underlying measurements from the physical system, the classifier has system knowledge.

This enables their method to perform better than other machine-learning techniques. And because it can work automatically without the need for extensive training, their approach significantly enhances the computational efficiency of identifying phase transitions.

At the end of the day, similar to how one might ask ChatGPT to solve a math problem, the researchers can ask the generative classifier questions like “does this sample belong to phase I or phase II?” or “was this sample generated at high temperature or low temperature?”

Scientists could also use this approach to solve different binary classification tasks in physical systems, possibly to detect entanglement in quantum systems (Is the state entangled or not?) or determine whether theory A or B is best suited to solve a particular problem. They could also use this approach to better understand and improve large language models like ChatGPT by identifying how certain parameters should be tuned so the chatbot gives the best outputs.

In the future, the researchers want to establish theoretical guarantees on how many measurements are needed to reliably detect phase transitions, and to estimate the amount of computation that detection would require.

This work was funded, in part, by the Swiss National Science Foundation, the MIT-Switzerland Lockheed Martin Seed Fund, and MIT International Science and Technology Initiatives.


Elaine Liu: Charging ahead

The MIT senior calculates how renewables and EVs impact the grid.


MIT senior Elaine Siyu Liu doesn’t own an electric car, or any car. But she sees the impact of electric vehicles (EVs) and renewables on the grid as two pieces of an energy puzzle she wants to solve.

The U.S. Department of Energy reports that the number of public and private EV charging ports nearly doubled in the past three years, and many more are in the works. Users expect to plug in at their convenience, charge up, and drive away. But what if the grid can’t handle it?

Electricity demand, long stagnant in the United States, has spiked due to EVs, data centers that drive artificial intelligence, and industry. Grid planners forecast an increase of 2.6 percent to 4.7 percent in electricity demand over the next five years, according to data reported to federal regulators. Everyone from EV charging-station operators to utility-system operators needs help navigating a system in flux.

That’s where Liu’s work comes in.

Liu, who is studying mathematics and electrical engineering and computer science (EECS), is interested in distribution — how to get electricity from a centralized location to consumers. “I see power systems as a good venue for theoretical research as an application tool,” she says. “I'm interested in it because I'm familiar with the optimization and probability techniques used to map this level of problem.”

Liu grew up in Beijing, then after middle school moved with her parents to Canada and enrolled in a prep school in Oakville, Ontario, 30 miles outside Toronto.

Liu stumbled upon an opportunity to take part in a regional math competition and eventually started a math club, but at the time, the school’s culture surrounding math surprised her. Being exposed to what seemed to be some students’ aversion to math, she says, “I don’t think my feelings about math changed. I think my feelings about how people feel about math changed.”

Liu brought her passion for math to MIT. The summer after her sophomore year, she took on the first of the two Undergraduate Research Opportunity Program projects she completed with electric power system expert Marija Ilić, a joint adjunct professor in EECS and a senior research scientist at the MIT Laboratory for Information and Decision Systems.

Predicting the grid

Since 2022, with the help of funding from the MIT Energy Initiative (MITEI), Liu has been working with Ilić on identifying ways in which the grid is challenged.

One factor is the addition of renewables to the energy pipeline. A gap in wind or sun might cause a lag in power generation. If this lag occurs during peak demand, it could mean trouble for a grid already taxed by extreme weather and other unforeseen events.

If you think of the grid as a network of dozens of interconnected parts, once an element in the network fails — say, a tree downs a transmission line — the electricity that used to go through that line needs to be rerouted. This may overload other lines, creating what’s known as a cascade failure.

“This all happens really quickly and has very large downstream effects,” Liu says. “Millions of people will have instant blackouts.”

Even if the system can handle a single downed line, Liu notes that “the nuance is that there are now a lot of renewables, and renewables are less predictable. You can't predict a gap in wind or sun. When such things happen, there’s suddenly not enough generation and too much demand. So the same kind of failure would happen, but on a larger and more uncontrollable scale.”
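The cascade mechanism Liu describes can be sketched with a toy model (hypothetical line names, flows, and capacities, not her actual model): when a line goes down, its flow is redistributed among the surviving lines, any of which may then exceed its own limit and fail in turn.

```python
# Toy cascade-failure sketch: each line carries a flow and has a capacity.
# When a line fails, its flow is split evenly among the surviving lines;
# any line pushed past its capacity fails too, and the process repeats.

def cascade(lines, failed):
    """lines: {name: (flow, capacity)}; failed: the initially downed line.
    Returns the lines still in service after the cascade settles."""
    lines = dict(lines)
    queue = [failed]
    while queue:
        down = queue.pop()
        flow, _capacity = lines.pop(down)
        survivors = list(lines)
        if not survivors:
            break  # total blackout: nothing left to carry the load
        extra = flow / len(survivors)  # naive even redistribution
        for name in survivors:
            f, cap = lines[name]
            lines[name] = (f + extra, cap)
            if f + extra > cap and name not in queue:
                queue.append(name)  # this line is now overloaded
    return lines

# A grid with headroom absorbs one failure; a tightly loaded one collapses.
resilient = cascade({"A": (80, 100), "B": (60, 100), "C": (50, 100)}, "A")
fragile = cascade({"A": (90, 100), "B": (70, 80), "C": (50, 100)}, "A")
```

In the second case, line B inherits enough flow to exceed its capacity, its failure overloads C, and the whole toy grid goes dark, which is the "larger and more uncontrollable" dynamic Liu warns about when renewable generation gaps coincide with line failures.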

Renewables’ varying output has the added complication of causing voltage fluctuations. “We plug in our devices expecting a voltage of 110, but because of oscillations, you will never get exactly 110,” Liu says. “So even when you can deliver enough electricity, if you can't deliver it at the specific voltage level that is required, that’s a problem.”

Liu and Ilić are building a model to predict how and when the grid might fail. Lacking access to privatized data, Liu runs her models with European industry data and test cases made available to universities. “I have a fake power grid that I run my experiments on,” she says. “You can take the same tool and run it on the real power grid.”

Liu’s model predicts cascade failures as they evolve. Supply from a wind generator, for example, might drop precipitously over the course of an hour. The model analyzes which substations and which households will be affected. “After we know we need to do something, this prediction tool can enable system operators to strategically intervene ahead of time,” Liu says.

Dictating price and power

Last year, Liu turned her attention to EVs, which provide a different kind of challenge than renewables.

In 2022, S&P Global reported that lawmakers argued that the U.S. Federal Energy Regulatory Commission’s (FERC) wholesale power rate structure was unfair for EV charging station operators.

In addition to paying by the kilowatt-hour, some operators also pay more for electricity during peak demand hours. Just a few EVs charging during those hours could result in higher costs for the operator, even if its overall energy use is low.
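The cost structure can be illustrated with round, hypothetical rates (not FERC's actual tariff): a per-kWh energy charge plus a demand charge tied to the highest power draw of the billing period.

```python
# Hypothetical tariff: an energy charge per kWh plus a demand charge
# billed on the single highest power draw of the month (in kW).
ENERGY_RATE = 0.12   # $/kWh (illustrative)
DEMAND_RATE = 15.00  # $/kW of peak draw (illustrative)

def monthly_bill(total_kwh, peak_kw):
    return total_kwh * ENERGY_RATE + peak_kw * DEMAND_RATE

# A lightly used station: only 1,000 kWh all month, but four 50 kW
# chargers running at once one afternoon set a 200 kW peak.
quiet_station = monthly_bill(1000, 200)    # $120 energy + $3,000 demand

# A busy station with the same peak spreads that demand charge
# over ten times the energy sold.
busy_station = monthly_bill(10000, 200)    # $1,200 energy + $3,000 demand
```

Under these illustrative numbers, the quiet station pays over $3 per kWh delivered while the busy one pays about $0.42, which is the imbalance lawmakers flagged for low-utilization charging stations.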

Anticipating how much power EVs will need is more complex than predicting energy needed for, say, heating and cooling. Unlike buildings, EVs move around, making it difficult to predict energy consumption at any given time. “If users don't like the price at one charging station or how long the line is, they'll go somewhere else,” Liu says. “Where to allocate EV chargers is a problem that a lot of people are dealing with right now.”

One approach would be for FERC to dictate to EV users when and where to charge and what price they'll pay. To Liu, this isn’t an attractive option. “No one likes to be told what to do,” she says.

Liu is looking at optimizing a market-based solution that would be acceptable to top-level energy producers — wind and solar farms and nuclear plants — all the way down to the municipal aggregators that secure electricity at competitive rates and oversee distribution to the consumer.

Analyzing the location, movement, and behavior patterns of all the EVs driven daily in Boston and other major energy hubs, she notes, could help demand aggregators determine where to place EV chargers and how much to charge consumers, akin to Walmart deciding how much to mark up wholesale eggs in different markets.

Last year, Liu presented the work at MITEI’s annual research conference. This spring, Liu and Ilić are submitting a paper on the market optimization analysis to a journal of the Institute of Electrical and Electronics Engineers.

Liu has come to terms with her early introduction to attitudes toward STEM that struck her as markedly different from those in China. She says, “I think the (prep) school had a very strong ‘math is for nerds’ vibe, especially for girls. There was a ‘why are you giving yourself more work?’ kind of mentality. But over time, I just learned to disregard that.”

After graduation, Liu, the only undergraduate researcher in Ilić’s MIT Electric Energy Systems Group, plans to apply to fellowships and graduate programs in EECS, applied math, and operations research.

Based on her analysis, Liu says that the market could effectively determine the price and availability of charging stations. Offering incentives for EV owners to charge during the day instead of at night when demand is high could help avoid grid overload and prevent extra costs to operators. “People would still retain the ability to go to a different charging station if they chose to,” she says. “I'm arguing that this works.”


John Joannopoulos receives 2024-2025 Killian Award

The MIT physicist is honored for pioneering work in photonics that helped to advance tools for telecommunications and biomedicine.


John Joannopoulos, an innovator and mentor in the fields of theoretical condensed matter physics and nanophotonics, has been named the recipient of the 2024-2025 James R. Killian Jr. Faculty Achievement Award.

Joannopoulos is the Francis Wright Davis Professor of Physics and director of MIT’s Institute for Soldier Nanotechnologies. He has been a member of the MIT faculty for 50 years.

“Professor Joannopoulos’s profound and lasting impact on the field of theoretical condensed matter physics finds its roots in his pioneering work in harnessing ab initio physics to elucidate the behavior of materials at the atomic level,” states the award citation, which was announced at today’s faculty meeting by Roger White, chair of the Killian Award Selection Committee and professor of philosophy at MIT. “His seminal research in the development of photonic crystals has revolutionized understanding of light-matter interactions, laying the groundwork for transformative advancements in diverse fields ranging from telecommunications to biomedical engineering.”

The award also honors Joannopoulos’s service as a “legendary mentor to generations of students, inspiring them to achieve excellence in science while at the same time facilitating the practical benefit to society through entrepreneurship.”

The Killian Award was established in 1971 to recognize outstanding professional contributions by MIT faculty members. It is the highest honor that the faculty can give to one of its members.

“I have to tell you, it was a complete and utter surprise,” Joannopoulos told MIT News shortly after he received word of the award. “I didn’t expect it at all, and was extremely flattered, honored, and moved by it, frankly.”

Joannopoulos has spent his entire professional career at MIT. He came to the Institute in 1974, directly after receiving his PhD in physics at the University of California at Berkeley, where he also earned his bachelor’s degree. Starting out as an assistant professor in MIT’s Department of Physics, he quickly set up a research program focused on theoretical condensed matter physics.

Over the first half of his MIT career, Joannopoulos worked to elucidate the fundamental nature of the electronic, vibrational, and optical structure of crystalline and amorphous bulk solids, their surfaces, interfaces, and defects. He and his students developed numerous theoretical methods to enable tractable and accurate calculations of these complex systems.

In the 1990s, his work with microscopic material systems expanded to a new class of materials, called photonic crystals — materials that could be engineered at the micro- and nanoscale to manipulate light in ways that impart surprising and exotic optical qualities to the material as a whole.

“I saw that you could create photonic crystals with defects that can affect the properties of photons, in much the same way that defects in a semiconductor affect the properties of electrons,” Joannopoulos says. “So I started working in this area to try and explore what anomalous light phenomena can we discover using this approach?”

Among his various breakthroughs in the field was the realization of a “perfect dielectric mirror” — a multilayered optical device that reflects light from all angles as normal metallic mirrors do, and that can also be tuned to reflect and trap light at specific frequencies. He and his colleagues saw potential for the mirror to be made into a hollow fiber that could serve as a highly effective optical conduit, for use in a wide range of applications. To further advance the technology, he and his colleagues launched a startup, which has since developed the technology into a flexible, fiber-optic “surgical scalpel.”

Throughout his career, Joannopoulos has helped to launch numerous startups and photonics-based technologies.

“His ability to bridge the gap between academia and industry has not only advanced scientific knowledge but also led to the creation of dozens of new companies, thousands of jobs, and groundbreaking products that continue to benefit society to this day,” the award citation states.

In 2006, Joannopoulos accepted the position as director of MIT’s Institute for Soldier Nanotechnologies (ISN), a collaboration between MIT researchers, industry partners, and military defense experts, who seek innovations to protect and enhance soldiers’ survivability in the field. In his role as ISN head, Joannopoulos has worked across MIT, making connections and supporting new projects with researchers specializing in fields far from his own.

“I get a chance to explore and learn fascinating new things,” says Joannopoulos, who is currently overseeing projects related to hyperspectral imaging, smart and responsive fabrics, and nanodrug delivery. “I love that aspect of really getting to understand what people in other fields are doing. And they’re doing great work across many, many different fields.”

Throughout his career at MIT, Joannopoulos has been especially inspired and motivated by his students, many of whom have gone on to found companies, lead top academic and research institutions, and make significant contributions to their respective fields, including one student who was awarded the Nobel Prize in Physics in 1998.

“One’s proudest moments are the successes of one’s students, and in that regard, I’ve been extremely lucky to have had truly exceptional students over the years,” Joannopoulos says.

His many contributions to academia and industry have earned Joannopoulos numerous honors and awards, including his election to both the National Academy of Sciences and the American Academy of Arts and Sciences. He is also a fellow of both the American Physical Society and the American Association for the Advancement of Science.

“The Selection Committee is delighted to have this opportunity to honor Professor John Joannopoulos: a visionary scientist, a beloved mentor, a great believer in the goodness of people, and a leader whose contributions to MIT and the broader scientific community are immeasurable,” the award citation concludes.


Newly discovered Earth-sized planet may lack an atmosphere

Circling a cold, Jupiter-sized star, the new world could offer an unobstructed view of its surface composition and history.


Astronomers at MIT, the University of Liège, and elsewhere have discovered a new planet orbiting a small cold star a mere 55 light years away. The nearby planet is similar to Earth in its size and rocky composition, though that’s where the similarities end, because this new world is likely missing an atmosphere.

In a paper appearing today in Nature Astronomy, the researchers confirm the detection of SPECULOOS-3b, an Earth-sized, likely airless planet that the team discovered using a network of telescopes as part of the SPECULOOS (Search for Planets EClipsing ULtra-cOOl Stars) project.

The new planet orbits a nearby ultracool dwarf — a type of star that is smaller and colder than the sun. Ultracool dwarf stars are thought to be the most common type of star in our galaxy, though they are also the faintest, making them difficult to spot in the night sky.

The ultracool dwarf that hosts the new planet is about one-tenth the size of, and 1,000 times dimmer than, the sun. The star is more similar in size to Jupiter and is about half as hot as the sun. Nevertheless, the dwarf star radiates an enormous amount of energy onto the planet’s surface due to the planet’s extremely close proximity: SPECULOOS-3b circles its star in just 17 hours. One year on the new planet, then, is shorter than one day on Earth.

Because it is so close to its star, the planet is blasted with 16 times more radiation per second than Earth receives from the sun. The team believes that such intense and relentless exposure has likely vaporized any atmosphere that the planet once held, leaving it an airless, exposed, blistering ball of rock.
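Those figures can be loosely cross-checked with Kepler's third law. The sketch below takes the round values quoted in the article (a luminosity about 1/1000 of the sun's, a 17-hour orbit) plus one labeled assumption: a stellar mass of roughly 0.1 solar masses, typical of ultracool dwarfs near the hydrogen-burning limit, since the article gives only the star's size. The result lands in the same range as the quoted 16 times.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
AU = 1.496e11        # m

M_star = 0.1 * M_SUN  # ASSUMED mass of an ultracool dwarf
P = 17 * 3600         # orbital period from the article, s

# Kepler's third law: a^3 = G * M * P^2 / (4 * pi^2)
a = (G * M_star * P**2 / (4 * math.pi**2)) ** (1 / 3)

# Flux relative to Earth's: (L_star / L_sun) / (a / AU)^2
flux_ratio = 1e-3 / (a / AU) ** 2  # order 15-20x Earth's irradiation
```

The orbit comes out to well under a hundredth of an astronomical unit, so even a star 1,000 times dimmer than the sun roasts the planet's surface.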

If the planet lacks an atmosphere, scientists might soon be able to zero in on exactly what type of rocks are on its surface and even what sort of geological processes shaped its landscape, such as whether the planet’s crust experienced magma oceans, volcanic activity, and plate tectonics in its past.

“SPECULOOS-3b is the first planet for which we can consider moving toward constraining surface properties of planets beyond the solar system,” says study co-author Julien de Wit, associate professor of planetary sciences at MIT. “With this world, we could basically start doing exoplanetary geology. How cool is that?”

The study’s MIT co-authors include research scientists Benjamin Rackham and Artem Burdanov, along with lead author Michel Gillon of the University of Liège and colleagues from collaborating institutions and observatories around the world.

Lining up

Astronomers observed the first inklings of the new planet in 2021, with observations taken by SPECULOOS — a network of six robotic, 1-meter telescopes (four in the Southern Hemisphere, and two in the Northern Hemisphere) that continuously observe the sky for signs of planets orbiting around ultracool dwarf stars. SPECULOOS is the parent project of the TRAPPIST (TRAnsiting Planets and PlanetesImals Small Telescope-South) survey, which discovered seven terrestrial planets — several potentially habitable — around a small cold star named TRAPPIST-1.

SPECULOOS aims to observe about 1,600 nearby ultracool dwarf stars. As these stars are small, any planets that orbit and cross in front of them should momentarily block their light, by a more noticeable amount compared to planets that orbit around larger, brighter stars. Ultracool dwarf stars, then, could give astronomers a better view of any planets that they host.
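The geometry behind that advantage is simple: a transit's depth scales as the square of the planet-to-star radius ratio, so the same planet blocks a far larger fraction of a small star's disk. A rough comparison with round radii (purely illustrative):

```python
# A transit's depth is roughly (planet radius / star radius)^2.
R_EARTH = 6_371    # km
R_SUN = 696_340    # km
R_DWARF = 69_911   # km: Jupiter's radius, roughly an ultracool dwarf's size

depth_sun = (R_EARTH / R_SUN) ** 2      # Earth-sized planet, sun-like star: ~0.008% dip
depth_dwarf = (R_EARTH / R_DWARF) ** 2  # same planet, ultracool dwarf: ~0.8% dip
advantage = depth_dwarf / depth_sun     # close to a 100-fold deeper transit
```

An Earth-sized planet that would be nearly invisible crossing a sun-like star produces a dip roughly a hundred times deeper in front of a Jupiter-sized dwarf, which is why SPECULOOS can hunt such planets with modest 1-meter telescopes.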

In 2021, a telescope in SPECULOOS’ network picked up some inconclusive signs of a transit in front of one ultracool dwarf star about 55 light years away. Then in 2022, close monitoring with MIT’s Artemis telescope changed the game.

“While there were structures in the 2021 data that didn’t look convincing, the 2022 Artemis data really got our attention,” recalls MIT’s Artem Burdanov, who manages the SPECULOOS Northern Observatory. “We started to analyze one clear transit-like signal in the Artemis data, quickly decided to launch a campaign around this star, and then things just started lining up.”

Dark like the moon

The team zeroed in on the star with MIT’s Artemis telescope, the rest of the SPECULOOS network, and several other observatories. The multipronged observations confirmed that the star did indeed host a planet, which appeared to orbit every 17 hours. Judging from the amount of light it blocked with each crossing, the scientists estimate that the planet is about the size of the Earth.

They were then able to estimate certain properties of the star and the planet based on analyses of the star’s light taken by MIT’s Benjamin Rackham, who has led a campaign using the Magellan telescopes in Chile and the NASA Infrared Telescope Facility (IRTF) in Hawaii to analyze the light from nearby ultracool dwarf stars.

“We can say from our spectra and other observations that the star has a temperature of about 2,800 kelvins, it is about 7 billion years old — not too young, and not too old — and it is moderately active, meaning that it flares quite a lot,” Rackham says. “We think the planet must not have an atmosphere anymore because it would easily have been eroded away by the activity of the host star that’s basically constantly flaring.”

Without an atmosphere, then, what might one see if they were to look up from the planet’s surface?

“If there’s no atmosphere, there would be no blue sky or clouds — it would just be dark, like on the surface of the moon,” Rackham offers. “And the ‘sun’ would be a big, purplish-red, spotted, and flaring star that would look about 18 times as big as the sun looks to us in the sky.”

Because the planet lacks an atmosphere and is relatively close by, the team says that SPECULOOS-3b is an ideal candidate for follow-up studies by NASA’s James Webb Space Telescope (JWST), which is powerful enough to parse the star’s light and discern more details of both the star and the planet. With JWST’s observations, the team hopes to be able to identify details of the planet’s surface, which would be a first in the field of exoplanetary studies.

“We think that the planet is nearly as hot as Venus, so not habitable,” Rackham says. “It’s not hot enough to have a lava surface. It should be solid rock. But depending on how bright that rock is, it could be recently resurfaced due to plate tectonics or volcanic activity, or it could be a planet that’s been eroded by space weathering and has a much darker surface. Going forward, we should be able to distinguish between some interesting scenarios for the surface of the planet.”

This research was supported, in part, by the European Research Council, the Simons Foundation, and the Heising-Simons Foundation.


Five MIT faculty elected to the National Academy of Sciences for 2024

Guoping Feng, Piotr Indyk, Daniel Kleitman, Daniela Rus, Senthil Todadri, and nine alumni are recognized by their peers for their outstanding contributions to research.


The National Academy of Sciences has elected 120 members and 24 international members, including five faculty members from MIT. Guoping Feng, Piotr Indyk, Daniel J. Kleitman, Daniela Rus, and Senthil Todadri were elected in recognition of their “distinguished and continuing achievements in original research.” Membership to the National Academy of Sciences is one of the highest honors a scientist can receive in their career.

Among the new members added this year are also nine MIT alumni, including Zvi Bern ’82; Harold Hwang ’93, SM ’93; Leonard Kleinrock SM ’59, PhD ’63; Jeffrey C. Lagarias ’71, SM ’72, PhD ’74; Ann Pearson PhD ’00; Robin Pemantle PhD ’88; Jonas C. Peters PhD ’98; Lynn Talley PhD ’82; and Peter T. Wolczanski ’76. Those elected this year bring the total number of active members to 2,617, with 537 international members.

The National Academy of Sciences is a private, nonprofit institution that was established under a congressional charter signed by President Abraham Lincoln in 1863. It recognizes achievement in science by election to membership, and — with the National Academy of Engineering and the National Academy of Medicine — provides science, engineering, and health policy advice to the federal government and other organizations.

Guoping Feng

Guoping Feng is the James W. (1963) and Patricia T. Poitras Professor in the Department of Brain and Cognitive Sciences. He is also associate director and investigator in the McGovern Institute for Brain Research, a member of the Broad Institute of MIT and Harvard, and director of the Hock E. Tan and K. Lisa Yang Center for Autism Research.

His research focuses on understanding the molecular mechanisms that regulate the development and function of synapses, the places in the brain where neurons connect and communicate. He’s interested in how defects in the synapses can contribute to psychiatric and neurodevelopmental disorders. By understanding the fundamental mechanisms behind these disorders, he’s producing foundational knowledge that may guide the development of new treatments for conditions like obsessive-compulsive disorder and schizophrenia.

Feng received his medical training at Zhejiang University Medical School in Hangzhou, China, and his PhD in molecular genetics from the State University of New York at Buffalo. He did his postdoctoral training at Washington University in St. Louis and was on the faculty at Duke University School of Medicine before coming to MIT in 2010. He is a member of the American Academy of Arts and Sciences, a fellow of the American Association for the Advancement of Science, and was elected to the National Academy of Medicine in 2023.

Piotr Indyk

Piotr Indyk is the Thomas D. and Virginia W. Cabot Professor of Electrical Engineering and Computer Science. He received his magister degree from the University of Warsaw and his PhD from Stanford University before coming to MIT in 2000.

Indyk’s research focuses on building efficient, sublinear, and streaming algorithms. He’s developed, for example, algorithms that can use limited time and space to navigate massive data streams, that can separate signals into individual frequencies faster than other methods, and that can address the “nearest neighbor” problem by finding highly similar data points without needing to scan an entire database. His work has applications in everything from machine learning to data mining.

He has been named a Simons Investigator and a fellow of the Association for Computing Machinery. In 2023, he was elected to the American Academy of Arts and Sciences.

Daniel J. Kleitman

Daniel Kleitman, a professor emeritus of applied mathematics, has been at MIT since 1966. He received his undergraduate degree from Cornell University and his master's and PhD in physics from Harvard University before doing postdoctoral work at Harvard and the Niels Bohr Institute in Copenhagen, Denmark.

Kleitman’s research interests include operations research, genomics, graph theory, and combinatorics, the area of math concerned with counting. He was actually a professor of physics at Brandeis University before changing his field to math, encouraged by the prolific mathematician Paul Erdős. In fact, Kleitman has the rare distinction of having an Erdős number of just one. The number is a measure of the “collaborative distance” between a mathematician and Erdős in terms of authorship of papers, and studies have shown that leading mathematicians have particularly low numbers.

He’s a member of the American Academy of Arts and Sciences and has made important contributions to the MIT community throughout his career. He was head of the Department of Mathematics and served on a number of committees, including the Applied Mathematics Committee. He also helped create web-based technology and an online textbook for several of the department’s core undergraduate courses. He was even a math advisor for the MIT-based film “Good Will Hunting.”

Daniela Rus

Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science, is the director of the Computer Science and Artificial Intelligence Laboratory (CSAIL). She also serves as director of the Toyota-CSAIL Joint Research Center.

Her research on robotics, artificial intelligence, and data science is geared toward understanding the science and engineering of autonomy. Her ultimate goal is to create a future where machines are seamlessly integrated into daily life to support people with cognitive and physical tasks, and deployed in a way that ensures they benefit humanity. She’s working to increase the ability of machines to reason, learn, and adapt to complex tasks in human-centered environments with applications for agriculture, manufacturing, medicine, construction, and other industries. She’s also interested in creating new tools for designing and fabricating robots and in improving the interfaces between robots and people, and she’s done collaborative projects at the intersection of technology and artistic performance.

Rus received her undergraduate degree from the University of Iowa and her PhD in computer science from Cornell University. She was a professor of computer science at Dartmouth College before coming to MIT in 2004. She is part of the Class of 2002 MacArthur Fellows; was elected to the National Academy of Engineering and the American Academy of Arts and Sciences; and is a fellow of the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, and the Association for the Advancement of Artificial Intelligence.

Senthil Todadri

Senthil Todadri, a professor of physics, came to MIT in 2001. He received his undergraduate degree from the Indian Institute of Technology in Kanpur and his PhD from Yale University before working as a postdoc at the Kavli Institute for Theoretical Physics in Santa Barbara, California.

Todadri’s research focuses on condensed matter theory. He’s interested in novel phases and phase transitions of quantum matter that expand beyond existing paradigms. Combining modeling experiments and abstract methods, he’s working to develop a theoretical framework for describing the physics of these systems. Much of that work involves understanding the phenomena that arise because of impurities or strong interactions between electrons in solids that don’t conform with conventional physical theories. He also pioneered the theory of deconfined quantum criticality, which describes a class of phase transitions, and he discovered the dualities of quantum field theories in two-dimensional superconducting states, which have important applications to many problems in the field.

Todadri has been named a Simons Investigator, a Sloan Research Fellow, and a fellow of the American Physical Society. In 2023, he was elected to the American Academy of Arts and Sciences.


Astronomers spot a giant planet that is as light as cotton candy

The new world is the second-lightest planet discovered to date.


Astronomers at MIT, the University of Liège in Belgium, and elsewhere have discovered a huge, fluffy oddball of a planet orbiting a distant star in our Milky Way galaxy. The discovery, reported today in the journal Nature Astronomy, is a promising key to the mystery of how such giant, super-light planets form.

The new planet, named WASP-193b, appears to dwarf Jupiter in size, yet it has only a fraction of Jupiter’s density. The scientists found that the gas giant is 50 percent bigger than Jupiter, and about a tenth as dense — an extremely low density, comparable to that of cotton candy.

WASP-193b is the second-lightest planet discovered to date, after the smaller, Neptune-like world Kepler-51d. The new planet’s much larger size, combined with its super-light density, makes WASP-193b something of an oddity among the more than 5,400 planets discovered to date.

“To find these giant objects with such a small density is really, really rare,” says lead study author and MIT postdoc Khalid Barkaoui. “There’s a class of planets called puffy Jupiters, and it’s been a mystery for 15 years now as to what they are. And this is an extreme case of that class.”

“We don’t know where to put this planet in all the formation theories we have right now, because it’s an outlier of all of them,” adds co-lead author Francisco Pozuelos, a senior researcher at the Institute of Astrophysics of Andalucia, in Spain. “We cannot explain how this planet was formed, based on classical evolution models. Looking more closely at its atmosphere will allow us to obtain an evolutionary path of this planet.”

The study’s MIT co-authors include Julien de Wit, an assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences, and MIT postdoc Artem Burdanov, along with collaborators from multiple institutions across Europe.

“An interesting twist”

The new planet was initially spotted by the Wide Angle Search for Planets, or WASP — an international collaboration of academic institutions that together operate two robotic observatories, one in the northern hemisphere and the other in the south. Each observatory uses an array of wide-angle cameras to measure the brightness of thousands of individual stars across the entire sky.

In surveys taken between 2006 and 2008, and again from 2011 to 2012, the WASP-South observatory detected periodic transits, or dips in light, from WASP-193 — a bright, nearby, sun-like star located 1,232 light years from Earth. Astronomers determined that the star’s periodic dips in brightness were consistent with a planet circling the star and blocking its light every 6.25 days. The scientists measured the total amount of light the planet blocked with each transit, which gave them an estimate of the planet’s giant, super-Jupiter size.

The astronomers then looked to pin down the planet’s mass — a measure that would then reveal its density and potentially also clues to its composition. To get a mass estimate, astronomers typically employ radial velocity, a technique in which scientists analyze a star’s spectrum, or various wavelengths of light, as a planet circles the star. A star’s spectrum can be shifted in specific ways depending on whatever is pulling on the star, such as an orbiting planet. The more massive a planet is, and the closer it is to its star, the more its spectrum can shift — a distortion that can give scientists an idea of a planet’s mass.

For WASP-193b, astronomers obtained additional high-resolution spectra of the star taken by various ground-based telescopes, and attempted to employ radial velocity to calculate the planet’s mass. But they kept coming up empty — precisely because, as it turned out, the planet was far too light to have any detectable pull on its star.

“Typically, big planets are pretty easy to detect because they are usually massive, and lead to a big pull on their star,” de Wit explains. “But what was tricky about this planet was, even though it’s big — huge — its mass and density are so low that it was actually very difficult to detect with just the radial velocity technique. It was an interesting twist.”

“[WASP-193b] is so very light that it took four years to gather data and show that there is a mass signal, but it’s really, really tiny,” Barkaoui says.

“We were initially getting extremely low densities, which were very difficult to believe in the beginning,” Pozuelos adds. “We repeated the process of all the data analysis several times to make sure this was the real density of the planet because this was super rare.”

An inflated world

In the end, the team confirmed that the planet was indeed extremely light. Its mass, they calculated, was about 0.14 that of Jupiter. And its density, derived from its mass, came out to about 0.059 grams per cubic centimeter. Jupiter, in contrast, is about 1.33 grams per cubic centimeter; and Earth is a more substantial 5.51 grams per cubic centimeter. Perhaps the material closest in density to the new, puffy planet is cotton candy, which has a density of about 0.05 grams per cubic centimeter.
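The density figure follows directly from the reported mass and the size quoted earlier in the article (50 percent bigger than Jupiter). A quick check with round Jupiter values lands close to the reported 0.059 grams per cubic centimeter; the small gap comes from rounding the mass and size ratios.

```python
import math

M_JUPITER = 1.898e27   # kg
R_JUPITER = 6.9911e7   # m

mass = 0.14 * M_JUPITER   # reported mass: 0.14 Jupiter masses
radius = 1.5 * R_JUPITER  # reported size: 50 percent bigger than Jupiter

volume = 4 / 3 * math.pi * radius**3
density = mass / volume       # kg/m^3
density_cgs = density / 1000  # g/cm^3: roughly 0.05-0.06, cotton-candy territory
```

For comparison, dividing Jupiter's mass by its own volume in the same way recovers its familiar ~1.3 grams per cubic centimeter, which makes clear just how inflated WASP-193b is.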

“The planet is so light that it’s difficult to think of an analogous, solid-state material,” Barkaoui says. “The reason why it’s close to cotton candy is because both are mostly made of light gases rather than solids. The planet is basically super fluffy.”

The researchers suspect that the new planet is made mostly from hydrogen and helium, like most other gas giants in the galaxy. For WASP-193b, these gases likely form a hugely inflated atmosphere that extends tens of thousands of kilometers farther than Jupiter’s own atmosphere. Exactly how a planet can inflate so far while maintaining a super-light density is a question that no existing theory of planetary formation can yet answer.

To get a better picture of the new fluffy world, the team plans to use a technique de Wit previously developed, to first derive certain properties of the planet’s atmosphere, such as its temperature, composition, and pressure at various depths. These characteristics can then be used to precisely work out the planet’s mass. For now, the team sees WASP-193b as an ideal candidate for follow-up study by observatories such as the James Webb Space Telescope.

“The bigger a planet’s atmosphere, the more light can go through,” de Wit says. “So it’s clear that this planet is one of the best targets we have for studying atmospheric effects. It will be a Rosetta Stone to try and resolve the mystery of puffy Jupiters.”

This research was funded, in part, by consortium universities and the UK’s Science and Technology Facilities Council for WASP; the European Research Council; the Wallonia-Brussels Federation; and the Heising-Simons Foundation, Colin and Leslie Masson, and Peter A. Gilman, supporting Artemis and the other SPECULOOS Telescopes.


MIT researchers discover the universe’s oldest stars in our own galactic backyard

Three stars circling the Milky Way’s halo formed 12 to 13 billion years ago.


MIT researchers, including several undergraduate students, have discovered three of the oldest stars in the universe, and they happen to live in our own galactic neighborhood.

The team spotted the stars in the Milky Way’s “halo” — the cloud of stars that envelops the entire main galactic disk. Based on the team’s analysis, the three stars formed between 12 and 13 billion years ago, the time when the very first galaxies were taking shape.

The researchers have dubbed the stars “SASS,” for Small Accreted Stellar System stars, as they believe each star once belonged to its own small, primitive galaxy that was later absorbed by the larger but still growing Milky Way. Today, the three stars are all that are left of their respective galaxies. They circle the outskirts of the Milky Way, where the team suspects there may be more such ancient stellar survivors.

“These oldest stars should definitely be there, given what we know of galaxy formation,” says MIT professor of physics Anna Frebel. “They are part of our cosmic family tree. And we now have a new way to find them.”

As they uncover similar SASS stars, the researchers hope to use them as analogs of ultrafaint dwarf galaxies, which are thought to be some of the universe’s surviving first galaxies. Such galaxies are still intact today but are too distant and faint for astronomers to study in depth. Because SASS stars may once have belonged to similarly primitive dwarf galaxies yet now reside in the much closer Milky Way, they could be an accessible key to understanding the evolution of ultrafaint dwarf galaxies.

“Now we can look for more analogs in the Milky Way, that are much brighter, and study their chemical evolution without having to chase these extremely faint stars,” Frebel says.

She and her colleagues have published their findings today in the Monthly Notices of the Royal Astronomical Society (MNRAS). The study’s co-authors are Mohammad Mardini, at Zarqa University, in Jordan; Hillary Andales ’23; and current MIT undergraduates Ananda Santos and Casey Fienberg.

Stellar frontier

The team’s discoveries grew out of a classroom concept. During the 2022 fall semester, Frebel launched a new course, 8.S30 (Observational Stellar Archaeology), in which students learned techniques for analyzing ancient stars and then applied those tools to stars that had never been studied before, to determine their origins.

“While most of our classes are taught from the ground up, this class immediately put us at the frontier of research in astrophysics,” Andales says.

The students worked from star data collected by Frebel over the years from the 6.5-meter Magellan-Clay telescope at the Las Campanas Observatory. She keeps hard copies of the data in a large binder in her office, which the students combed through to look for stars of interest.

In particular, they were searching for ancient stars that formed soon after the Big Bang, which occurred 13.8 billion years ago. At that time, the universe was made mostly of hydrogen and helium, with very low abundances of other chemical elements, such as strontium and barium. So, the students looked through Frebel’s binder for stars with spectra, or measurements of starlight, that indicated low abundances of strontium and barium.

Their search homed in on three stars that were originally observed by the Magellan telescope between 2013 and 2014. Astronomers had never followed up on these particular stars to interpret their spectra and deduce their origins. They were, then, perfect candidates for the students in Frebel’s class.

To prepare for analyzing the spectra, the students first learned how to characterize a star, then used various stellar models to determine the chemical composition of each of the three stars. The intensity of a given feature in a stellar spectrum, at a specific wavelength of light, corresponds to the abundance of a specific element.

After finalizing their analysis, the students were able to confidently conclude that the three stars did hold very low abundances of strontium, barium, and other elements such as iron, compared to their reference star — our own sun. In fact, one star contained less than 1/10,000 the amount of iron relative to hydrogen, compared to the sun today.
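Astronomers typically quote such ratios in logarithmic "bracket" notation, where [Fe/H] is the base-10 logarithm of a star's iron abundance relative to the sun's. A 1/10,000 ratio corresponds to [Fe/H] of -4, as a quick Python check shows:

```python
import math

# [Fe/H] is the log10 of a star's iron abundance relative to the sun's.
ratio_relative_to_sun = 1.0 / 10_000
fe_h = math.log10(ratio_relative_to_sun)  # about -4.0
```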

“It took a lot of hours staring at a computer, and a lot of debugging, frantically texting and emailing each other to figure this out,” Santos recalls. “It was a big learning curve, and a special experience.”

“On the run”

The stars’ low chemical abundance did hint that they originally formed 12 to 13 billion years ago. In fact, their low chemical signatures were similar to what astronomers had previously measured for some ancient, ultrafaint dwarf galaxies. Did the team’s stars originate in similar galaxies? And how did they come to be in the Milky Way?

On a hunch, the scientists checked out the stars’ orbital patterns and how they move across the sky. The three stars are in different locations throughout the Milky Way’s halo and are estimated to be about 30,000 light years from Earth. (For reference, the disk of the Milky Way spans 100,000 light years across.)

As they retraced each star’s motion about the galactic center using observations from the Gaia astrometric satellite, the team noticed a curious thing: Relative to most of the stars in the main disk, which move like cars on a racetrack, all three stars seemed to be going the wrong way. In astronomy, this is known as “retrograde motion” and is a tipoff that an object was once “accreted,” or drawn in from elsewhere.

“The only way you can have stars going the wrong way from the rest of the gang is if you threw them in the wrong way,” Frebel says.

The fact that these three stars orbit in completely different ways from the rest of the galactic disk, and even the halo, combined with their low chemical abundances, made a strong case that the stars are indeed ancient and once belonged to older, smaller dwarf galaxies that fell into the Milky Way at random angles and continued their stubborn trajectories billions of years later.

Frebel, curious whether retrograde motion was a feature of other ancient halo stars that astronomers had previously analyzed, looked through the scientific literature and found 65 other stars, also with low strontium and barium abundances, that likewise appeared to be going against the galactic flow.

“Interestingly they’re all quite fast — hundreds of kilometers per second, going the wrong way,” Frebel says. “They’re on the run! We don’t know why that’s the case, but it was the piece to the puzzle that we needed, and that I didn’t quite anticipate when we started.”

The team is eager to search out other ancient SASS stars, and they now have a relatively simple recipe to do so: First, look for stars with low chemical abundances, and then track their orbital patterns for signs of retrograde motion. Of the more than 400 billion stars in the Milky Way, they anticipate that the method will turn up a small but significant number of the universe’s oldest stars.
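That two-step recipe amounts to a simple filter over a star catalog. The sketch below illustrates the logic in Python; the catalog rows, column names, and cutoff values are hypothetical, not taken from the study:

```python
# Hypothetical catalog: logarithmic strontium and barium abundances relative
# to the sun, plus galactocentric azimuthal velocity v_phi in km/s.
# Disk stars co-rotate with the galaxy (v_phi > 0); retrograde stars have v_phi < 0.
stars = [
    {"name": "star_a", "sr_h": -4.8, "ba_h": -4.5, "v_phi": -150.0},
    {"name": "star_b", "sr_h": -0.2, "ba_h": -0.1, "v_phi": 215.0},
    {"name": "star_c", "sr_h": -5.1, "ba_h": -4.9, "v_phi": 210.0},
]

def sass_candidates(catalog, abundance_cut=-4.0):
    """Step 1: keep stars with low Sr and Ba abundances.
    Step 2: of those, keep stars on retrograde orbits."""
    return [
        star["name"]
        for star in catalog
        if star["sr_h"] < abundance_cut
        and star["ba_h"] < abundance_cut
        and star["v_phi"] < 0
    ]
```

In this toy catalog only star_a survives both cuts: star_b is chemically enriched, and star_c is chemically primitive but co-rotating with the disk.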

Frebel plans to relaunch the class this fall, and looks back at that first course, and the three students who took their results through to publication, with admiration and gratitude.

“It’s been awesome to work with three women undergrads. That’s a first for me,” she says. “It’s really an example of the MIT way. We do. And whoever says, ‘I want to participate,’ they can do that, and good things happen.”

This research was supported, in part, by the National Science Foundation.


Four from MIT named 2024 Knight-Hennessy Scholars

The fellowship funds graduate studies at Stanford University.


MIT senior Owen Dugan, graduate student Vittorio Colicci ’22, predoctoral research fellow Carine You ’22, and recent alumna Carina Letong Hong ’22 are recipients of this year’s Knight-Hennessy Scholarships. The competitive fellowship, now in its seventh year, funds up to three years of graduate studies in any field at Stanford University. To date, 22 MIT students and alumni have been awarded Knight-Hennessy Scholarships.

“We are excited for these students to continue their education at Stanford with the generous support of the Knight Hennessy Scholarship,” says Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development. “They have all demonstrated extraordinary dedication, intellect, and leadership, and this opportunity will allow them to further hone their skills to make real-world change.”

Vittorio Colicci ’22

Vittorio Colicci, from Trumbull, Connecticut, graduated from MIT in May 2022 with a BS in aerospace engineering and physics. He will receive his master’s degree in planetary sciences this spring. At Stanford, Colicci will pursue a PhD in earth and planetary sciences at the Stanford Doerr School of Sustainability. He hopes to investigate how surface processes on Earth and Mars have evolved through time alongside changes in habitability. Colicci has worked largely on spacecraft engineering projects, developing a monodisperse silica ceramic for electrospray thrusters and fabricating high-energy diffraction gratings for space telescopes. As a Presidential Graduate Fellow at MIT, he examined the influence of root geometry on soil cohesion for early terrestrial plants using 3D-printed reconstructions. Outside of research, Colicci served as co-director of TEDxMIT and propulsion lead for the MIT Rocket Team. He is also passionate about STEM engagement and outreach, having taught educational workshops in Zambia and India.

Owen Dugan

Owen Dugan, from Sleepy Hollow, New York, is a senior majoring in physics. As a Knight-Hennessy Scholar, he will pursue a PhD in computer science at the Stanford School of Engineering. Dugan aspires to combine artificial intelligence and physics, developing AI that enables breakthroughs in physics and using physics techniques to design more capable and safe AI systems. He has collaborated with researchers from Harvard University, the University of Chicago, and DeepMind, and has presented his first-author research at venues including the International Conference on Machine Learning, the MIT Mechanistic Interpretability Conference, and the American Physical Society March Meeting. Among other awards, Dugan is a Hertz Finalist, a U.S. Presidential Scholar, an MIT Outstanding Undergraduate Research Awardee, a Research Science Institute Scholar, and a Neo Scholar. He is also a co-founder of VeriLens, a funded startup enabling trust on the internet by cryptographically verifying digital media.

Carina Letong Hong ’22

Carina Letong Hong, from Canton, China, is currently pursuing a JD/PhD in mathematics at Stanford. A first-generation college student, Hong graduated from MIT in May 2022 with a double major in mathematics and physics and was inducted into Sigma Pi Sigma, the physics honor society. She then earned a neuroscience master’s degree with dissertation distinctions from the University of Oxford, where she conducted artificial intelligence and machine learning research at Sainsbury Wellcome Center’s Gatsby Unit. At Stanford Law School, Hong provides legal aid to low-income workers and uses economic analysis to push for law enforcement reform. She has published numerous papers in peer-reviewed journals, served as an expert referee for journals and conferences, and spoken at summits in the United States, Germany, France, the U.K., and China. She was the recipient of the AMS-MAA-SIAM Morgan Prize for Outstanding Research, the highest honor for an undergraduate in mathematics in North America; the AWM Alice T. Schafer Prize for Mathematical Excellence, given annually to an undergraduate woman in the United States; the Maryam Mirzakhani Fellowship; and a Rhodes Scholarship.

Carine You ’22

Carine You, from San Diego, California, graduated from MIT in May 2022 with bachelor’s degrees in electrical engineering and computer science and in mathematics. Since graduating, You has worked as a predoctoral research assistant with Professor Amy Finkelstein in the MIT Department of Economics, where she has studied the quality of Medicare nursing home care and the targeting of medical screening technologies. This fall, You will embark on a PhD in economic analysis and policy at the Stanford Graduate School of Business. She wishes to address pressing issues in environmental and health-care markets, with a particular focus on economic efficiency and equity. You previously developed audio signal processing algorithms at Bose, refined mechanistic models to inform respiratory monitoring at the MIT Research Laboratory of Electronics, and analyzed corruption in developmental projects in India at the World Bank. Through Middle East Entrepreneurs of Tomorrow, she taught computer science to Israeli and Palestinian students in Jerusalem and spearheaded an online pilot expansion for the organization. At MIT, she was named a Burchard Scholar.


Taking RNAi from interesting science to impactful new treatments

Alnylam Pharmaceuticals, founded by MIT professors and former postdocs, has turned the promise of RNAi research into a new class of powerful therapies.


There are many hurdles to clear before a research discovery becomes a life-changing treatment for patients. That’s especially true when the treatments being developed represent an entirely new class of medicines. But overcoming those obstacles can revolutionize our ability to treat diseases.

Few companies exemplify that process better than Alnylam Pharmaceuticals. Alnylam was founded by a group of MIT-affiliated researchers who believed in the promise of a technology — RNA interference, or RNAi.

The researchers had done foundational work to understand how RNAi, which is a naturally occurring process, works to silence genes through the degradation of messenger RNA. But it was their decision to found Alnylam in 2002 that attracted the funding and expertise necessary to turn their discoveries into a new class of medicines. Since that decision, Alnylam has made remarkable progress taking RNAi from an interesting scientific discovery to an impactful new treatment pathway.

Today Alnylam has five medicines approved by the U.S. Food and Drug Administration (one Alnylam-discovered RNAi therapeutic is licensed to Novartis) and a rapidly expanding clinical pipeline. The company’s approved medicines are for debilitating, sometimes fatal conditions that many patients have grappled with for decades with few other options.

The company estimates its treatments helped more than 5,000 patients in 2023 alone. Behind that number are patient stories that illustrate how Alnylam has changed lives. A mother of three says Alnylam’s treatments helped her take back control of her life after being bedridden with attacks associated with the rare genetic disease acute intermittent porphyria (AIP). Another patient reported that one of the company’s treatments helped her attend her daughter’s wedding. A third patient, who had left college due to frequent AIP attacks, was able to return to school.

These days Alnylam is not the only company developing RNAi-based medicines. But it is still a pioneer in the field, and the company’s founders — MIT Institute Professor Phil Sharp, Professor David Bartel, Professor Emeritus Paul Schimmel, and former MIT postdocs Thomas Tuschl and Phillip Zamore — see Alnylam as a champion for the field more broadly.

“Alnylam has published more than 250 scientific papers over 20 years,” says Sharp, who currently serves as chair of Alnylam’s scientific advisory board. “Not only did we do the science, not only did we translate it to benefit patients, but we also described every step. We established this as a modality to treat patients, and I’m very proud of that record.”

Pioneering RNAi development

MIT’s involvement in RNAi dates back to its discovery. Before Andrew Fire PhD ’83 shared the 2006 Nobel Prize for the discovery of RNAi, reported in 1998, he worked on understanding how DNA is transcribed into RNA as a graduate student in Sharp’s lab.

After leaving MIT, Fire and collaborators showed that double-stranded RNA could be used to silence specific genes in worms. But the biochemical mechanisms that allowed double-stranded RNA to work were unknown until MIT professors Sharp, Bartel, and Ruth Lehmann, along with Zamore and Tuschl, published foundational papers explaining the process. The researchers developed a system for studying RNAi and showed how RNAi can be controlled using different genetic sequences. Soon after Tuschl left MIT, he showed that a similar process could also be used to silence specific genes in human cells, opening up a new frontier in studying genes and ultimately treating diseases.

“Tom showed you could synthesize these small RNAs, transfect them into cells, and get a very specific knockdown of the gene that corresponded to the small RNAs,” Bartel explains. “That discovery transformed biological research. The ability to specifically knock down a mammalian gene was huge. You could suddenly study the function of any gene you were interested in by knocking it down and seeing what happens. … The research community immediately started using that approach to study the function of their favorite genes in mammalian cells.”

Beyond illuminating gene function, another application came to mind.

“Because almost all diseases are related to genes, could we take these small RNAs and silence genes to treat patients?” Sharp remembers wondering.

To answer the question, the researchers founded Alnylam in 2002. (They recruited Schimmel, a biotech veteran, around the same time.) But there was a lot of work to be done before the technology could be tried in patients. The main challenge was getting RNAi into the cytoplasm of the patients’ cells.

“Through work in Dave Bartel and Phil Sharp's lab, among others, it became evident that to make RNAi into therapies, there were three problems to solve: delivery, delivery, and delivery,” says Alnylam Chief Scientific Officer Kevin Fitzgerald, who has been with the company since 2005.

Early on, Alnylam collaborated with MIT drug delivery expert and Institute Professor Bob Langer. Eventually, Alnylam developed the first lipid nanoparticles (LNPs) that could be used to encase RNA and deliver it into patient cells. LNPs were later used in the mRNA vaccines for Covid-19.

“Alnylam has invested over 20 years and more than $4 billion in RNAi to develop these new therapeutics,” Sharp says. “That is the means by which innovations can be translated to the benefit of society.”

From scientific breakthrough to patient bedside

Alnylam received its first FDA approval in 2018 for treatment of the polyneuropathy of hereditary transthyretin-mediated amyloidosis, a rare and fatal disease. It doubled as the first RNAi therapeutic to reach the market and the first drug approved to treat that condition in the United States.

“What I keep in mind is, at the end of the day for certain patients, two months is everything,” Fitzgerald says. “The diseases that we’re trying to treat progress month by month, day by day, and patients can get to a point where nothing is helping them. If you can move their disease by a stage, that’s huge.”

Since that first treatment, Alnylam has updated its RNAi delivery system — including by conjugating small interfering RNAs to molecules that help them gain entry to cells — and earned approvals to treat other rare genetic diseases along with high cholesterol (the treatment licensed to Novartis). All of those treatments primarily work by silencing genes that encode for the production of proteins in the liver, which has proven to be the easiest place to deliver RNAi molecules. But Alnylam’s team is confident they can deliver RNAi to other areas of the body, which would unlock a new world of treatment possibilities. The company has reported promising early results in the central nervous system and says a phase one study last year marked the first time an RNAi therapeutic demonstrated gene silencing in the human brain.

“There’s a lot of work being done at Alnylam and other companies to deliver these RNAis to other tissues: muscles, immune cells, lung cells, etc.,” Sharp says. “But to me the most interesting application is delivery to the brain. We think we have a therapeutic modality that can very specifically control the activity of certain genes in the nervous system. I think that’s extraordinarily important, for diseases from Alzheimer’s to schizophrenia and depression.”

The central nervous system work is particularly significant for Fitzgerald, who watched his father struggle with Parkinson’s.

“Our goal is to be in every organ in the human body, and then combinations of organs, and then combinations of targets within individual organs, and then combinations of targets within multi-organs,” Fitzgerald says. “We’re really at the very beginning of what this technology is going to do for human health.”

It’s an exciting time for the RNAi scientific community, including many who continue to study it at MIT. Still, Alnylam will need to continue executing in its drug development efforts to deliver on that promise and help an expanding pool of patients.

“I think this is a real frontier,” Sharp says. “There’s major therapeutic need, and I think this technology could have a huge impact. But we have to prove it. That’s why Alnylam exists: to pursue new science that unlocks new possibilities and discover if they can be made to work. That, of course, is also why MIT is here: to improve lives.”


Using MRI, engineers have found a way to detect light deep in the brain

The new technique could enable detailed studies of how brain cells develop and communicate with each other.


Scientists often label cells with proteins that glow, allowing them to track the growth of a tumor, or measure changes in gene expression that occur as cells differentiate.

While this technique works well in cells and some tissues of the body, it has been difficult to apply to structures deep within the brain, because the light scatters too much before it can be detected.

MIT engineers have now come up with a novel way to detect this type of light, known as bioluminescence, in the brain: They engineered blood vessels of the brain to express a protein that causes them to dilate in the presence of light. That dilation can then be observed with magnetic resonance imaging (MRI), allowing researchers to pinpoint the source of light.

“A well-known problem that we face in neuroscience, as well as other fields, is that it’s very difficult to use optical tools in deep tissue. One of the core objectives of our study was to come up with a way to image bioluminescent molecules in deep tissue with reasonably high resolution,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering.

The new technique developed by Jasanoff and his colleagues could enable researchers to explore the inner workings of the brain in more detail than has previously been possible.

Jasanoff, who is also an associate investigator at MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Biomedical Engineering. Former MIT postdocs Robert Ohlendorf and Nan Li are the lead authors of the paper.

Detecting light

Bioluminescent proteins are found in many organisms, including jellyfish and fireflies. Scientists use these proteins to label specific proteins or cells, whose glow can be detected by a luminometer. One of the proteins often used for this purpose is luciferase, which comes in a variety of forms that glow in different colors.

Jasanoff’s lab, which specializes in developing new ways to image the brain using MRI, wanted to find a way to detect luciferase deep within the brain. To achieve that, they came up with a method for transforming the blood vessels of the brain into light detectors. A popular form of MRI works by imaging changes in blood flow in the brain, so the researchers engineered the blood vessels themselves to respond to light by dilating.

“Blood vessels are a dominant source of imaging contrast in functional MRI and other non-invasive imaging techniques, so we thought we could convert the intrinsic ability of these techniques to image blood vessels into a means for imaging light, by photosensitizing the blood vessels themselves,” Jasanoff says.

To make the blood vessels sensitive to light, the researchers engineered them to express a bacterial protein called Beggiatoa photoactivated adenylate cyclase (bPAC). When exposed to light, this enzyme produces a molecule called cAMP, which causes blood vessels to dilate. That dilation alters the balance of oxygenated and deoxygenated hemoglobin, which have different magnetic properties, and this shift in magnetic properties can be detected by MRI.

bPAC responds specifically to blue light, which has a short wavelength, so it detects only light generated at close range. The researchers used a viral vector to deliver the gene for bPAC specifically to the smooth muscle cells that make up blood vessels. When this vector was injected in rats, blood vessels throughout a large area of the brain became light-sensitive.

“Blood vessels form a network in the brain that is extremely dense. Every cell in the brain is within a couple dozen microns of a blood vessel,” Jasanoff says. “The way I like to describe our approach is that we essentially turn the vasculature of the brain into a three-dimensional camera.”

Once the blood vessels were sensitized to light, the researchers implanted cells that had been engineered to express luciferase, which glows when a substrate called CZT is present. In the rats, the researchers were able to detect luciferase by imaging the brain with MRI, which revealed dilated blood vessels.

Tracking changes in the brain

The researchers then tested whether their technique could detect light produced by the brain’s own cells, if those cells were engineered to express luciferase. They delivered the gene for a type of luciferase called GLuc to cells in a deep brain region known as the striatum. When the CZT substrate was injected into the animals, MRI revealed the sites where light had been emitted.

This technique, which the researchers dubbed bioluminescence imaging using hemodynamics, or BLUsH, could be used in a variety of ways to help scientists learn more about the brain, Jasanoff says.

For one, it could be used to map changes in gene expression, by linking the expression of luciferase to a specific gene. This could help researchers observe how gene expression changes during embryonic development and cell differentiation, or when new memories form. Luciferase could also be used to map anatomical connections between cells or to reveal how cells communicate with each other.

The researchers now plan to explore some of those applications, as well as to adapt the technique for use in mice and other animal models.

The research was funded by the U.S. National Institutes of Health, the G. Harold and Leila Y. Mathers Foundation, Lore Harp McGovern, Gardner Hendrie, a fellowship from the German Research Foundation, a Marie Sklodowska-Curie Fellowship from the European Union, and a Y. Eva Tan Fellowship and a J. Douglas Tan Fellowship, both from the McGovern Institute for Brain Research.


Study: Heavy snowfall and rain may contribute to some earthquakes

The results suggest that climate may influence seismic activity.


When scientists look for an earthquake’s cause, their search often starts underground. As centuries of seismic studies have made clear, it’s the collision of tectonic plates and the movement of subsurface faults and fissures that primarily trigger a temblor.

But MIT scientists have now found that certain weather events may also play a role in setting off some quakes.

In a study appearing today in Science Advances, the researchers report that episodes of heavy snowfall and rain likely contributed to a swarm of earthquakes over the past several years in northern Japan. The study is the first to show that climate conditions could initiate some quakes.

“We see that snowfall and other environmental loading at the surface impacts the stress state underground, and the timing of intense precipitation events is well-correlated with the start of this earthquake swarm,” says study author William Frank, an assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So, climate obviously has an impact on the response of the solid earth, and part of that response is earthquakes.”

The new study focuses on a series of ongoing earthquakes in Japan’s Noto Peninsula. The team discovered that seismic activity in the region is surprisingly synchronized with certain changes in underground pressure, and that those changes are influenced by seasonal patterns of snowfall and precipitation. The scientists suspect that this new connection between quakes and climate may not be unique to Japan and could play a role in shaking up other parts of the world.

Looking to the future, they predict that the climate’s influence on earthquakes could be more pronounced with global warming.

“If we’re going into a climate that’s changing, with more extreme precipitation events, and we expect a redistribution of water in the atmosphere, oceans, and continents, that will change how the Earth’s crust is loaded,” Frank adds. “That will have an impact for sure, and it’s a link we could further explore.”

The study’s lead author is former MIT research associate Qing-Yu Wang (now at Grenoble Alpes University), and also includes EAPS postdoc Xin Cui, Yang Lu of the University of Vienna, Takashi Hirose of Tohoku University, and Kazushige Obara of the University of Tokyo.

Seismic speed

Since late 2020, hundreds of small earthquakes have shaken up Japan’s Noto Peninsula — a finger of land that curves north from the country’s main island into the Sea of Japan. Unlike a typical earthquake sequence, which begins as a main shock that gives way to a series of aftershocks before dying out, Noto’s seismic activity is an “earthquake swarm” — a pattern of multiple, ongoing quakes with no obvious main shock, or seismic trigger.

The MIT team, along with their colleagues in Japan, aimed to spot any patterns in the swarm that would explain the persistent quakes. They started by looking through the Japanese Meteorological Agency’s catalog of earthquakes that provides data on seismic activity throughout the country over time. They focused on quakes in the Noto Peninsula over the last 11 years, during which the region has experienced episodic earthquake activity, including the most recent swarm.

With seismic data from the catalog, the team counted the number of seismic events that occurred in the region over time. They found that prior to 2020, quakes appeared sporadic and unrelated; from late 2020 on, earthquakes grew more intense, clustered in time, and correlated with one another, signaling the start of the swarm.

The scientists then looked to a second dataset of seismic measurements taken by monitoring stations over the same 11-year period. Each station continuously records any displacement, or local shaking that occurs. The shaking from one station to another can give scientists an idea of how fast a seismic wave travels between stations. This “seismic velocity” is related to the structure of the Earth through which the seismic wave is traveling. Wang used the station measurements to calculate the seismic velocity between every station in and around Noto over the last 11 years.
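The core idea, measuring how fast a seismic wave travels between two stations, can be sketched with a toy cross-correlation on synthetic recordings (the sampling rate, station spacing, and delay below are illustrative values, not the study's actual processing):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                  # sampling rate, Hz
n = 4000
true_delay = 0.50           # seconds for the wave to travel A -> B
shift = int(true_delay * fs)

# Synthetic ground motion: station B records a delayed copy of station A's
# signal, plus measurement noise
a = rng.normal(size=n)
b = np.zeros(n)
b[shift:] = a[: n - shift]
b += 0.1 * rng.normal(size=n)

# Cross-correlate the two records; the lag of maximum similarity is the
# travel time between the stations
lags = np.arange(-n + 1, n)
xc = np.correlate(b, a, mode="full")
est_delay = lags[np.argmax(xc)] / fs

station_distance_km = 2.0   # hypothetical station spacing
velocity = station_distance_km / est_delay
print(f"estimated delay: {est_delay:.2f} s -> velocity: {velocity:.1f} km/s")
```

Because the velocity depends on the structure the wave passes through, repeating this measurement over time reveals changes in the subsurface.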

The researchers generated an evolving picture of seismic velocity beneath the Noto Peninsula and observed a surprising pattern: In 2020, around when the earthquake swarm is thought to have begun, changes in seismic velocity appeared to be synchronized with the seasons.

“We then had to explain why we were observing this seasonal variation,” Frank says.

Snow pressure

The team wondered whether environmental changes from season to season could influence the underlying structure of the Earth in a way that would set off an earthquake swarm. Specifically, they looked at how seasonal precipitation would affect the underground “pore fluid pressure” — the amount of pressure that fluids in the Earth’s cracks and fissures exert within the bedrock.

“When it rains or snows, that adds weight, which increases pore pressure, which allows seismic waves to travel through slower,” Frank explains. “When all that weight is removed, through evaporation or runoff, all of a sudden, that pore pressure decreases and seismic waves are faster.”

Wang and Cui developed a hydromechanical model of the Noto Peninsula to simulate the underlying pore pressure over the last 11 years in response to seasonal changes in precipitation. They fed into the model meteorological data from this same period, including measurements of daily snow, rainfall, and sea-level changes. From their model, they were able to track changes in excess pore pressure beneath the Noto Peninsula, before and during the earthquake swarm. They then compared this timeline of evolving pore pressure with their evolving picture of seismic velocity.
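As a rough illustration of how such a model behaves, the sketch below treats excess pore pressure as relaxing toward a seasonal surface load with a single diffusion timescale; the load shape and the 30-day timescale are hypothetical stand-ins for the study's full hydromechanical model:

```python
import numpy as np

# Toy seasonal loading: precipitation weight on the crust (arbitrary units),
# run for two years so the second year is past the start-up transient
days = np.arange(365 * 2)
load = 1.0 + 0.5 * np.sin(2 * np.pi * days / 365.0)

# Excess pore pressure relaxes toward the surface load with a single
# characteristic timescale tau (a hypothetical 30 days, for illustration)
tau = 30.0
p = np.zeros_like(load)
for t in range(1, len(days)):
    p[t] = p[t - 1] + (load[t] - p[t - 1]) / tau

# In the second year, the pore-pressure peak lags the loading peak:
# pressure does not track precipitation instantaneously
yr2 = slice(365, 730)
lag = int(np.argmax(p[yr2])) - int(np.argmax(load[yr2]))
print(f"pressure peaks {lag} days after peak loading")
```

With this relaxation time, the pressure peak trails the loading peak by roughly a month, which is why a pressure-driven seismic signal need not line up exactly with the precipitation itself.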

“We had seismic velocity observations, and we had the model of excess pore pressure, and when we overlapped them, we saw they just fit extremely well,” Frank says.

In particular, they found that including snowfall data, and especially extreme snowfall events, made the fit between the model and observations stronger than when they considered rainfall and other events alone. In other words, the ongoing earthquake swarm that Noto residents have been experiencing can be explained in part by seasonal precipitation, and particularly by heavy snowfall events.

“We can see that the timing of these earthquakes lines up extremely well with multiple times where we see intense snowfall,” Frank says. “It’s well-correlated with earthquake activity. And we think there’s a physical link between the two.”

The researchers suspect that heavy snowfall and similar extreme precipitation could play a role in earthquakes elsewhere, though they emphasize that the primary trigger will always originate underground.

“When we first want to understand how earthquakes work, we look to plate tectonics, because that is and will always be the number one reason why an earthquake happens,” Frank says. “But, what are the other things that could affect when and how an earthquake happens? That’s when you start to go to second-order controlling factors, and the climate is obviously one of those.”

This research was supported, in part, by the National Science Foundation.


This sound-suppressing silk can create quiet spaces

Researchers engineered a hair-thin fabric to create a lightweight, compact, and efficient mechanism to reduce noise transmission in a large room.


We are living in a very noisy world. From the hum of traffic outside your window to the next-door neighbor’s blaring TV to sounds from a co-worker’s cubicle, unwanted noise remains a resounding problem.

To cut through the din, an interdisciplinary collaboration of researchers from MIT and elsewhere developed a sound-suppressing silk fabric that could be used to create quiet spaces.

The fabric, which is barely thicker than a human hair, contains a special fiber that vibrates when a voltage is applied to it. The researchers leveraged those vibrations to suppress sound in two different ways.

In one, the vibrating fabric generates sound waves that interfere with unwanted noise and cancel it out, much like noise-canceling headphones. That approach works well in a small space, like your ears, but not in large enclosures such as rooms or planes.

In the other, more surprising technique, the fabric is held still to suppress vibrations that are key to the transmission of sound. This prevents noise from being transmitted through the fabric and quiets the volume beyond. This second approach allows for noise reduction in much larger spaces like rooms or cars.

By using common materials like silk, canvas, and muslin, the researchers created noise-suppressing fabrics which would be practical to implement in real-world spaces. For instance, one could use such a fabric to make dividers in open workspaces or thin fabric walls that prevent sound from getting through.

“Noise is a lot easier to create than quiet. In fact, to keep noise out we dedicate a lot of space to thick walls. [First author] Grace’s work provides a new mechanism for creating quiet spaces with a thin sheet of fabric,” says Yoel Fink, a professor in the departments of Materials Science and Engineering and Electrical Engineering and Computer Science, a Research Laboratory of Electronics principal investigator, and senior author of a paper on the fabric.

The study’s lead author is Grace (Noel) Yang SM ’21, PhD ’24. Co-authors include MIT graduate students Taigyu Joo, Hyunhee Lee, Henry Cheung, and Yongyi Zhao; Zachary Smith, the Robert N. Noyce Career Development Professor of Chemical Engineering at MIT; graduate student Guanchun Rui and professor Lei Zhu of Case Western Reserve University; graduate student Jinuan Lin and Assistant Professor Chu Ma of the University of Wisconsin at Madison; and Latika Balachander, a graduate student at the Rhode Island School of Design. An open-access paper about the research appeared recently in Advanced Materials.

Silky silence

The sound-suppressing silk builds off the group’s prior work to create fabric microphones.

In that research, they sewed a single strand of piezoelectric fiber into fabric. Piezoelectric materials produce an electrical signal when squeezed or bent. When a nearby noise causes the fabric to vibrate, the piezoelectric fiber converts those vibrations into an electrical signal, which can capture the sound.

In the new work, the researchers flipped that idea to create a fabric loudspeaker that can be used to cancel out soundwaves.

“While we can use fabric to create sound, there is already so much noise in our world. We thought creating silence could be even more valuable,” Yang says.

Applying an electrical signal to the piezoelectric fiber causes it to vibrate, which generates sound. The researchers demonstrated this by playing Bach’s “Air” using a 130-micrometer sheet of silk mounted on a circular frame.

To enable direct sound suppression, the researchers use a silk fabric loudspeaker to emit sound waves that destructively interfere with unwanted sound waves. They control the vibrations of the piezoelectric fiber so that sound waves emitted by the fabric are opposite of unwanted sound waves that strike the fabric, which can cancel out the noise.
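Destructive interference of this kind is easy to sketch numerically; the example below superimposes a tone with its phase-inverted copy (the 440 Hz tone and the 5-degree phase error are illustrative):

```python
import numpy as np

fs = 48_000                     # sample rate, Hz
t = np.arange(fs) / fs          # one second of samples
noise = np.sin(2 * np.pi * 440 * t)   # unwanted 440 Hz tone

# Drive the fabric loudspeaker to emit the same waveform, phase-inverted
anti = -noise
residual = noise + anti
print("peak residual with perfect inversion:", float(np.max(np.abs(residual))))

# A small phase error (here 5 degrees, hypothetical) leaves an audible
# residual instead of silence
anti_err = -np.sin(2 * np.pi * 440 * t + np.deg2rad(5))
residual_err = noise + anti_err
print("peak residual with 5-degree error:", round(float(np.max(np.abs(residual_err))), 3))
```

Even a few degrees of misalignment leaves a residual, which hints at why this mode only suppresses noise where the emitted and incoming waves stay precisely anti-phased.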

However, this technique is only effective over a small area. So, the researchers built off this idea to develop a technique that uses fabric vibrations to suppress sound in much larger areas, like a bedroom.

Let’s say your next-door neighbors are playing foosball in the middle of the night. You hear noise in your bedroom because the sound in their apartment causes your shared wall to vibrate, which forms sound waves on your side.

To suppress that sound, the researchers could place the silk fabric onto your side of the shared wall, controlling the vibrations in the fiber to force the fabric to remain still. This vibration-mediated suppression prevents sound from being transmitted through the fabric.

“If we can control those vibrations and stop them from happening, we can stop the noise that is generated, as well,” Yang says.

A mirror for sound

Surprisingly, the researchers found that holding the fabric still causes sound to be reflected by the fabric, resulting in a thin piece of silk that reflects sound like a mirror does with light.

Their experiments also revealed that both the mechanical properties of a fabric and the size of its pores affect the efficiency of sound generation. While silk and muslin have similar mechanical properties, the smaller pore sizes of silk make it a better fabric loudspeaker.

But the effective pore size also depends on the frequency of sound waves. If the frequency is low enough, even a fabric with relatively large pores could function effectively, Yang says.

When they tested the silk fabric in direct suppression mode, the researchers found that it could significantly reduce the volume of sounds up to 65 decibels (about as loud as enthusiastic human conversation). In vibration-mediated suppression mode, the fabric could reduce sound transmission up to 75 percent.
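For context, the 75 percent figure can be put in decibel terms, assuming it refers to transmitted sound power (the paper's exact metric may differ):

```python
import math

# Fraction of sound power still transmitted after suppression
transmitted = 1.0 - 0.75

# Decibel change for a power ratio is 10 * log10(ratio)
reduction_db = -10 * math.log10(transmitted)
print(f"a 75% power reduction is about {reduction_db:.1f} dB of attenuation")
```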

These results were only possible due to a robust group of collaborators, Fink says. Graduate students at the Rhode Island School of Design helped the researchers understand the details of constructing fabrics; scientists at the University of Wisconsin at Madison conducted simulations; researchers at Case Western Reserve University characterized materials; and chemical engineers in the Smith Group at MIT used their expertise in gas membrane separation to measure airflow through the fabric.

Moving forward, the researchers want to explore the use of their fabric to block sound of multiple frequencies. This would likely require complex signal processing and additional electronics.

In addition, they want to further study the architecture of the fabric to see how changing things like the number of piezoelectric fibers, the direction in which they are sewn, or the applied voltages could improve performance.

“There are a lot of knobs we can turn to make this sound-suppressing fabric really effective. We want to get people thinking about controlling structural vibrations to suppress sound. This is just the beginning,” says Yang.

This work is funded, in part, by the National Science Foundation (NSF), the Army Research Office (ARO), the Defense Threat Reduction Agency (DTRA), and the Wisconsin Alumni Research Foundation.


President Sally Kornbluth and OpenAI CEO Sam Altman discuss the future of AI

The conversation in Kresge Auditorium touched on the promise and perils of the rapidly evolving technology.


How is the field of artificial intelligence evolving and what does it mean for the future of work, education, and humanity? MIT President Sally Kornbluth and OpenAI CEO Sam Altman covered all that and more in a wide-ranging discussion on MIT’s campus May 2.

The success of OpenAI’s ChatGPT and the large language models behind it has helped spur a wave of investment and innovation in the field of artificial intelligence. ChatGPT became the fastest-growing consumer software application in history after its release at the end of 2022, with hundreds of millions of people using the tool. Since then, OpenAI has also demonstrated AI-driven image-, audio-, and video-generation products and partnered with Microsoft.

The event, which took place in a packed Kresge Auditorium, captured the excitement of the moment around AI, with an eye toward what’s next.

“I think most of us remember the first time we saw ChatGPT and were like, ‘Oh my god, that is so cool!’” Kornbluth said. “Now we’re trying to figure out what the next generation of all this is going to be.”

For his part, Altman welcomes the high expectations around his company and the field of artificial intelligence more broadly.

“I think it’s awesome that for two weeks, everybody was freaking out about ChatGPT-4, and then by the third week, everyone was like, ‘Come on, where’s GPT-5?’” Altman said. “I think that says something legitimately great about human expectation and striving and why we all have to [be working to] make things better.”

The problems with AI

Early on in their discussion, Kornbluth and Altman discussed the many ethical dilemmas posed by AI.

“I think we’ve made surprisingly good progress around how to align a system around a set of values,” Altman said. “As much as people like to say ‘You can’t use these things because they’re spewing toxic waste all the time,’ GPT-4 behaves kind of the way you want it to, and we’re able to get it to follow a given set of values, not perfectly well, but better than I expected by this point.”

Altman also pointed out that people don’t agree on exactly how an AI system should behave in many situations, complicating efforts to create a universal code of conduct.

“How do we decide what values a system should have?” Altman asked. “How do we decide what a system should do? How much does society define boundaries versus trusting the user with these tools? Not everyone will use them the way we like, but that’s just kind of the case with tools. I think it’s important to give people a lot of control … but there are some things a system just shouldn’t do, and we’ll have to collectively negotiate what those are.”

Kornbluth agreed doing things like eradicating bias in AI systems will be difficult.

“It’s interesting to think about whether or not we can make models less biased than we are as human beings,” she said.

Kornbluth also brought up privacy concerns associated with the vast amounts of data needed to train today’s large language models. Altman said society has been grappling with those concerns since the dawn of the internet, but AI is making such considerations more complex and higher-stakes. He also sees entirely new questions raised by the prospect of powerful AI systems.

“How are we going to navigate the privacy versus utility versus safety tradeoffs?” Altman asked. “Where we all individually decide to set those tradeoffs, and the advantages that will be possible if someone lets the system be trained on their entire life, is a new thing for society to navigate. I don’t know what the answers will be.”

For both privacy and energy consumption concerns surrounding AI, Altman said he believes progress in future versions of AI models will help.

"What we want out of GPT-5 or 6 or whatever is for it to be the best reasoning engine possible,” Altman said. “It is true that right now, the only way we’re able to do that is by training it on tons and tons of data. In that process, it’s learning something about how to do very, very limited reasoning or cognition or whatever you want to call it. But the fact that it can memorize data, or the fact that it’s storing data at all in its parameter space, I think we'll look back and say, ‘That was kind of a weird waste of resources.’ I assume at some point, we’ll figure out how to separate the reasoning engine from the need for tons of data or storing the data in [the model], and be able to treat them as separate things.”

Kornbluth also asked about how AI might lead to job displacement.

“One of the things that annoys me most about people who work on AI is when they stand up with a straight face and say, ‘This will never cause any job elimination. This is just an additive thing. This is just all going to be great,’” Altman said. “This is going to eliminate a lot of current jobs, and this is going to change the way that a lot of current jobs function, and this is going to create entirely new jobs. That always happens with technology."

The promise of AI

Altman believes progress in AI will make grappling with all of the field’s current problems worth it.

“If we spent 1 percent of the world’s electricity training a powerful AI, and that AI helped us figure out how to get to non-carbon-based energy or make deep carbon capture better, that would be a massive win,” Altman said.

He also said the application of AI he’s most interested in is scientific discovery.

“I believe [scientific discovery] is the core engine of human progress and that it is the only way we drive sustainable economic growth,” Altman said. “People aren’t content with GPT-4. They want things to get better. Everyone wants life more and better and faster, and science is how we get there.”

Kornbluth also asked Altman for his advice for students thinking about their careers. He urged students not to limit themselves.

“The most important lesson to learn early on in your career is that you can kind of figure anything out, and no one has all of the answers when they start out,” Altman said. “You just sort of stumble your way through, have a fast iteration speed, and try to drift toward the most interesting problems to you, and be around the most impressive people and have this trust that you’ll successfully iterate to the right thing. ... You can do more than you think, faster than you think.”

The advice was part of a broader message Altman had about staying optimistic and working to create a better future.

“The way we are teaching our young people that the world is totally screwed and that it’s hopeless to try to solve problems, that all we can do is sit in our bedrooms in the dark and think about how awful we are, is a really deeply unproductive streak,” Altman said. “I hope MIT is different than a lot of other college campuses. I assume it is. But you all need to make it part of your life mission to fight against this. Prosperity, abundance, a better life next year, a better life for our children. That is the only path forward. That is the only way to have a functioning society ... and the anti-progress streak, the anti ‘people deserve a great life’ streak, is something I hope you all fight against.”


MIT astronomers observe elusive stellar light surrounding ancient quasars

The observations suggest some of earliest “monster” black holes grew from massive cosmic seeds.


MIT astronomers have observed the elusive starlight surrounding some of the earliest quasars in the universe. The distant signals, which trace back more than 13 billion years to the universe’s infancy, are revealing clues to how the very first black holes and galaxies evolved.

Quasars are the blazing centers of active galaxies, which host an insatiable supermassive black hole at their core. Most galaxies host a central black hole that may occasionally feast on gas and stellar debris, generating a brief burst of light in the form of a glowing ring as material swirls in toward the black hole.

Quasars, by contrast, can consume enormous amounts of matter over much longer stretches of time, generating an extremely bright and long-lasting ring — so bright, in fact, that quasars are among the most luminous objects in the universe.

Because they are so bright, quasars outshine the rest of the galaxy in which they reside. But the MIT team was able for the first time to observe the much fainter light from stars in the host galaxies of three ancient quasars.

Based on this elusive stellar light, the researchers estimated the mass of each host galaxy, compared to the mass of its central supermassive black hole. They found that for these quasars, the central black holes were much more massive relative to their host galaxies, compared to their modern counterparts.

The findings, published today in the Astrophysical Journal, may shed light on how the earliest supermassive black holes became so massive despite having a relatively short amount of cosmic time in which to grow. In particular, those earliest monster black holes may have sprouted from more massive “seeds” than more modern black holes did.

“After the universe came into existence, there were seed black holes that then consumed material and grew in a very short time,” says study author Minghao Yue, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research. “One of the big questions is to understand how those monster black holes could grow so big, so fast.”

“These black holes are billions of times more massive than the sun, at a time when the universe is still in its infancy,” says study author Anna-Christina Eilers, assistant professor of physics at MIT. “Our results imply that in the early universe, supermassive black holes might have gained their mass before their host galaxies did, and the initial black hole seeds could have been more massive than today.”

Eilers’ and Yue’s co-authors include MIT Kavli Director Robert Simcoe, MIT Hubble Fellow and postdoc Rohan Naidu, and collaborators in Switzerland, Austria, Japan, and at North Carolina State University.

Dazzling cores

A quasar’s extreme luminosity has been obvious since astronomers first discovered the objects in the 1960s. They assumed then that the quasar’s light stemmed from a single, star-like “point source,” and named the objects “quasars,” short for “quasi-stellar” objects. Since those first observations, scientists have realized that quasars are not stellar in origin. Instead, their light emanates from accretion onto intensely powerful and persistent supermassive black holes sitting at the centers of galaxies that also host stars, which are much fainter in comparison to their dazzling cores.

It’s been extremely challenging to separate the light from a quasar’s central black hole from the light of the host galaxy’s stars. The task is a bit like discerning a field of fireflies around a central, massive searchlight. But in recent years, astronomers have had a much better chance of doing so with the launch of NASA’s James Webb Space Telescope (JWST), which has been able to peer farther back in time, and with much higher sensitivity and resolution, than any existing observatory.

In their new study, Yue and Eilers used dedicated time on JWST to observe six known, ancient quasars, intermittently from the fall of 2022 through the following spring. In total, the team collected more than 120 hours of observations of the six distant objects.

“The quasar outshines its host galaxy by orders of magnitude. And previous images were not sharp enough to distinguish what the host galaxy with all its stars looks like,” Yue says. “Now for the first time, we are able to reveal the light from these stars by very carefully modeling JWST’s much sharper images of those quasars.”

A light balance

The team took stock of the imaging data collected by JWST of each of the six distant quasars, which they estimated to be about 13 billion years old. That data included measurements of each quasar’s light in different wavelengths. The researchers fed that data into a model of how much of that light likely comes from a compact “point source,” such as a central black hole’s accretion disk, versus a more diffuse source, such as light from the host galaxy’s surrounding, scattered stars.

Through this modeling, the team teased apart each quasar’s light into two components: light from the central black hole’s luminous disk and light from the host galaxy’s more diffuse stars. The amount of light from both sources is a reflection of their total mass. The researchers estimate that for these quasars, the ratio between the mass of the central black hole and the mass of the host galaxy was about 1:10. This, they realized, was in stark contrast to today’s mass balance of 1:1,000, in which more recently formed black holes are much less massive compared to their host galaxies.
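The decomposition can be illustrated with a minimal one-dimensional toy: a narrow point-source profile plus a broad host-galaxy profile, with the two amplitudes recovered by linear least squares (the Gaussian shapes, widths, and brightness ratio below are invented for illustration and are far simpler than the study's actual modeling):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-5, 5, 201)

def gaussian(x, sigma):
    return np.exp(-0.5 * (x / sigma) ** 2)

# Templates: a narrow "point source" (quasar light, smeared by the telescope
# PSF) and a broad, diffuse "host galaxy" profile (hypothetical widths)
psf = gaussian(x, 0.3)
host = gaussian(x, 2.0)

# Synthetic observation: quasar 20x brighter than the host, plus noise
true_quasar, true_host = 20.0, 1.0
image = true_quasar * psf + true_host * host + 0.05 * rng.normal(size=x.size)

# Solve for both amplitudes simultaneously by linear least squares
A = np.column_stack([psf, host])
(amp_quasar, amp_host), *_ = np.linalg.lstsq(A, image, rcond=None)
print(f"recovered quasar amplitude: {amp_quasar:.1f}, host: {amp_host:.2f}")
```

Fitting both components at once is what lets the faint, extended host signal be recovered even when the point source dominates the total light.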

“This tells us something about what grows first: Is it the black hole that grows first, and then the galaxy catches up? Or is the galaxy and its stars that first grow, and they dominate and regulate the black hole’s growth?” Eilers explains. “We see that black holes in the early universe seem to be growing faster than their host galaxies. That is tentative evidence that the initial black hole seeds could have been more massive back then.”

“There must have been some mechanism to make a black hole gain its mass earlier than its host galaxy in those first billion years,” Yue adds. “It’s kind of the first evidence we see for this, which is exciting.”


Three from MIT named 2024-25 Goldwater Scholars

Undergraduates Ben Lou, Srinath Mahankali, and Kenta Suzuki, whose research explores math and physics, are honored for their academic excellence.


MIT students Ben Lou, Srinath Mahankali, and Kenta Suzuki have been selected to receive Barry Goldwater Scholarships for the 2024-25 academic year. They are among just 438 recipients from across the country selected based on academic merit from an estimated pool of more than 5,000 college sophomores and juniors, approximately 1,350 of whom were nominated by their academic institution to compete for the scholarship.

Since 1989, the Barry Goldwater Scholarship and Excellence in Education Foundation has awarded nearly 11,000 Goldwater scholarships to support undergraduates who intend to pursue research careers in the natural sciences, mathematics, and engineering and have the potential to become leaders in their respective fields. Past scholars have gone on to win an impressive array of prestigious postgraduate fellowships. Almost all, including the three MIT recipients, intend to obtain doctorates in their area of research.

Ben Lou

Ben Lou is a third-year student originally from San Diego, California, majoring in physics and math with a minor in philosophy.

“My research interests are scattered across different disciplines,” says Lou. “I want to draw from a wide range of topics in math and physics, finding novel connections between them, to push forward the frontier of knowledge.”

Since January 2022, he has worked with Nergis Mavalvala, dean of the School of Science, and Hudson Loughlin, a graduate student in the LIGO group, which studies the detection of gravitational waves. Lou is working with them to advance the field of quantum measurement and better understand quantum gravity.

“Ben has enormous intellectual horsepower and works with remarkable independence,” writes Mavalvala in her recommendation letter. “I have no doubt he has an outstanding career in physics ahead of him.”

Lou, for his part, is grateful to Mavalvala and Loughlin, as well as all of the scientific mentors who have supported him along his research path. That includes MIT professors Alan Guth and Barton Zwiebach, who introduced him to quantum physics, as well as his first-year advisor, Richard Price; current advisor, Janet Conrad; Elijah Bodish and Roman Bezrukavnikov in the Department of Mathematics; and David W. Brown of the San Diego Math Circle.

In terms of his future career goals, Lou wants to be a professor of theoretical physics and study, as he says, the “fundamental aspects of reality” while also inspiring students to love math and physics.

In addition to his research, Lou is currently the vice president of the Assistive Technology Club at MIT and actively engaged in raising money for Spinal Muscular Atrophy research. In the future, he’d like to continue his philanthropy work and use his personal experience to advise an assistive technology company.

Srinath Mahankali

Srinath Mahankali is a third-year student from New York City majoring in computer science.

Since June 2022, Mahankali has been an undergraduate researcher in the MIT Computer Science and Artificial Intelligence Laboratory. Working with Pulkit Agrawal, assistant professor of electrical engineering and computer science and head of the Improbable AI Lab, Mahankali researches how to train robots. Currently, his focus is on training quadruped robots to move in an energy-efficient manner and training agents to interact in environments with minimal feedback. In the future, he’d like to develop robots that can complete athletic tasks like gymnastics.

“The experience of discussing research with Srinath is similar to discussions with the best PhD students in my group,” writes Agrawal in his recommendation letter. “He is fearless, willing to take risks, persistent, creative, and gets things done.”

Before coming to MIT, Mahankali was a 2021 Regeneron STS scholar, which is one of the oldest and most prestigious awards for math and science students. In 2020, he was also a participant in the MIT PRIMES program, studying objective functions in optimization problems with Yunan Yang, an assistant professor of math at Cornell University.

“I’m deeply grateful to all my research advisors for their invaluable mentorship and guidance,” says Mahankali, extending his thanks to PhD students Zhang-Wei Hong and Gabe Margolis, as well as Promit Ghosal, an assistant professor of math at Brandeis University, and all of the organizers of the PRIMES program. “I’m also very grateful to all the members of the Improbable AI Lab for their support, encouragement, and willingness to help and discuss any questions I have.”

In the future, Mahankali wants to obtain a PhD and one day lead his own lab in robotics and artificial intelligence.

Kenta Suzuki

Kenta Suzuki is a third-year student majoring in mathematics from Bloomfield Hills, Michigan, and Tokyo, Japan.

Currently, Suzuki works with professor of mathematics Roman Bezrukavnikov on research at the intersection of number theory and representation theory, using geometric methods to study representations of p-adic groups. Suzuki has also previously worked with math professors Wei Zhang and Zhiwei Yun, crediting the latter with inspiring him to pursue research in representation theory.

In his recommendation letter, Yun writes, “Kenta is the best undergraduate student that I have worked with in terms of the combination of raw talent, mathematical maturity, and research abilities.”

Before coming to MIT, Suzuki was a Yau Science Award USA finalist in 2020, receiving a gold in math, and he received honorable mention from the Davidson Institute Fellows program in 2021. He also participated in the MIT PRIMES program in 2020. Suzuki credits his PRIMES mentor, Michael Zieve at the University of Michigan, with giving him his first taste of mathematical research. In addition, he extended his thanks to all of his math mentors, including the organizers of MIT Summer Program in Undergraduate Research.

After MIT, Suzuki intends to obtain a PhD in pure math, continuing his research in representation theory and number theory and, one day, teaching at a research-oriented institution.

The Barry Goldwater Scholarship and Excellence in Education Program was established by the U.S. Congress in 1986 to honor Senator Barry Goldwater, a soldier and national leader who served the country for 56 years. Awardees receive scholarships of up to $7,500 a year to cover costs related to tuition, room and board, fees, and books.


Physicists arrange atoms in extremely close proximity

The technique opens possibilities for exploring exotic states of matter and building new quantum materials.


Proximity is key for many quantum phenomena, as interactions between atoms are stronger when the particles are close. In many quantum simulators, scientists arrange atoms as close together as possible to explore exotic states of matter and build new quantum materials.

They typically do this by cooling the atoms to a standstill, then using laser light to position the particles as close as 500 nanometers apart — a limit that is set by the wavelength of light. Now, MIT physicists have developed a technique that allows them to arrange atoms in much closer proximity, down to a mere 50 nanometers. For context, a red blood cell is about 1,000 nanometers wide.

The physicists demonstrated the new approach in experiments with dysprosium, which is the most magnetic atom in nature. They used the new approach to manipulate two layers of dysprosium atoms, and positioned the layers precisely 50 nanometers apart. At this extreme proximity, the magnetic interactions were 1,000 times stronger than if the layers were separated by 500 nanometers.
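The thousandfold enhancement follows directly from the 1/r³ scaling of magnetic dipole-dipole interactions: bringing the layers ten times closer strengthens the interaction by a factor of 10³. A minimal sketch (the function name is illustrative):

```python
# Magnetic dipole-dipole interaction energy falls off as 1/r^3, so moving
# two layers from 500 nm apart to 50 nm apart boosts the interaction by
# (500/50)^3 = 1000x, matching the reported figure.
def dipole_scaling(r_far_nm: float, r_near_nm: float) -> float:
    """Ratio of interaction strengths at two separations (1/r^3 law)."""
    return (r_far_nm / r_near_nm) ** 3

print(dipole_scaling(500, 50))  # -> 1000.0
```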

What’s more, the scientists were able to measure two new effects caused by the atoms’ proximity. Their enhanced magnetic forces caused “thermalization,” or the transfer of heat from one layer to another, as well as synchronized oscillations between layers. These effects petered out as the layers were spaced farther apart.

“We have gone from positioning atoms from 500 nanometers to 50 nanometers apart, and there is a lot you can do with this,” says Wolfgang Ketterle, the John D. MacArthur Professor of Physics at MIT. “At 50 nanometers, the behavior of atoms is so much different that we’re really entering a new regime here.”

Ketterle and his colleagues say the new approach can be applied to many other atoms to study quantum phenomena. For their part, the group plans to use the technique to manipulate atoms into configurations that could generate the first purely magnetic quantum gate — a key building block for a new type of quantum computer.

The team has published their results today in the journal Science. The study’s co-authors include lead author and physics graduate student Li Du, along with Pierre Barral, Michael Cantara, Julius de Hond, and Yu-Kun Lu — all members of the MIT-Harvard Center for Ultracold Atoms, the Department of Physics, and the Research Laboratory of Electronics at MIT.

Peaks and valleys

To manipulate and arrange atoms, physicists typically first cool a cloud of atoms to temperatures approaching absolute zero, then use a system of laser beams to corral the atoms into an optical trap.

Laser light is an electromagnetic wave with a specific wavelength (the distance between maxima of the electric field) and frequency. That wavelength limits the smallest pattern into which light can be shaped: typically about 500 nanometers, the so-called optical resolution limit. Since atoms are attracted to laser light of certain frequencies, they settle at the points of peak laser intensity. For this reason, existing techniques have been limited in how closely they can position atomic particles, and could not be used to explore phenomena that occur at much shorter distances.

“Conventional techniques stop at 500 nanometers, limited not by the atoms but by the wavelength of light,” Ketterle explains. “We have found now a new trick with light where we can break through that limit.”

The team’s new approach, like current techniques, starts by cooling a cloud of atoms — in this case, to about 1 microkelvin, just a hair above absolute zero — at which point, the atoms come to a near-standstill. Physicists can then use lasers to move the frozen particles into desired configurations.

Then, Du and his collaborators worked with two laser beams, each with a different frequency, or color, and circular polarization, or direction of the laser’s electric field. When the two beams travel through a super-cooled cloud of atoms, the atoms can orient their spin in opposite directions, following either of the two lasers’ polarization. The result is that the beams produce two groups of the same atoms, only with opposite spins.

Each laser beam formed a standing wave, a periodic pattern of electric field intensity with a spatial period of 500 nanometers. Due to their different polarizations, each standing wave attracted and corralled one of two groups of atoms, depending on their spin. The lasers could be overlaid and tuned such that the distance between their respective peaks is as small as 50 nanometers, meaning that the atoms gravitating to each respective laser’s peaks would be separated by the same 50 nanometers.
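The geometry described above can be sketched numerically: two standing waves with the same 500-nanometer period, shifted by a tunable offset, trap their respective spin groups at peaks only 50 nanometers apart. This is an illustrative model of the intensity patterns, not the team's actual experimental code:

```python
import numpy as np

period = 500.0   # nm: standing-wave period (half the laser wavelength)
offset = 50.0    # nm: tunable shift between the two standing waves

x = np.linspace(0.0, 500.0, 5001)  # positions across one period, in nm

# Standing-wave intensity ~ cos^2(pi * x / period); the second wave is shifted.
I_up = np.cos(np.pi * x / period) ** 2               # traps the spin-up layer
I_down = np.cos(np.pi * (x - offset) / period) ** 2  # traps the spin-down layer

# Each spin group collects at its own intensity peak; within one period the
# peaks sit 50 nm apart even though each pattern still repeats every 500 nm.
peak_up = x[np.argmax(I_up)]
peak_down = x[np.argmax(I_down)]
separation = peak_down - peak_up
print(separation)
```

Tuning `offset` moves the two atomic layers relative to each other without changing either standing wave's 500-nanometer period, which is why the scheme sidesteps the optical resolution limit.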

But in order for this to happen, the lasers would have to be extremely stable and immune to all external noise, such as from shaking or even breathing on the experiment. The team realized they could stabilize both lasers by directing them through an optical fiber, which served to lock the light beams in place in relation to each other.

“The idea of sending both beams through the optical fiber meant the whole machine could shake violently, but the two laser beams stayed absolutely stable with respect to each other,” Du says.

Magnetic forces at close range

As a first test of their new technique, the team used atoms of dysprosium — a rare-earth metal that is one of the strongest magnetic elements in the periodic table, particularly at ultracold temperatures. However, at the scale of atoms, the element’s magnetic interactions are relatively weak at distances of even 500 nanometers. As with common refrigerator magnets, the magnetic attraction between atoms increases with proximity, and the scientists suspected that if their new technique could space dysprosium atoms as close as 50 nanometers apart, they might observe the emergence of otherwise weak interactions between the magnetic atoms.

“We could suddenly have magnetic interactions, which used to be almost negligible but now are really strong,” Ketterle says.

The team applied their technique to dysprosium, first super-cooling the atoms, then passing two lasers through to split the atoms into two spin groups, or layers. They then directed the lasers through an optical fiber to stabilize them, and found that indeed, the two layers of dysprosium atoms gravitated to their respective laser peaks, which in effect separated the layers of atoms by 50 nanometers — the closest distance that any ultracold atom experiment has been able to achieve.

At this extremely close proximity, the atoms’ natural magnetic interactions were significantly enhanced, and were 1,000 times stronger than if they were positioned 500 nanometers apart. The team observed that these interactions resulted in two novel quantum phenomena: collective oscillation, in which one layer’s vibrations caused the other layer to vibrate in sync; and thermalization, in which one layer transferred heat to the other, purely through magnetic fluctuations in the atoms.

“Until now, heat between atoms could only be exchanged when they were in the same physical space and could collide,” Du notes. “Now we have seen atomic layers, separated by vacuum, and they exchange heat via fluctuating magnetic fields.”

The team’s results introduce a new technique for positioning many types of atoms in close proximity. They also show that atoms placed close enough together can exhibit interesting quantum phenomena that could be harnessed to build new quantum materials and, potentially, magnetically driven atomic systems for quantum computers.

“We are really bringing super-resolution methods to the field, and it will become a general tool for doing quantum simulations,” Ketterle says. “There are many variants possible, which we are working on.”

This research was funded, in part, by the National Science Foundation and the Department of Defense.


Natural language boosts LLM performance in coding, planning, and robotics

Three neurosymbolic methods help language models find better abstractions within natural language, then use those representations to execute complex tasks.


Large language models (LLMs) are becoming increasingly useful for programming and robotics tasks, but for more complicated reasoning problems, the gap between these systems and humans looms large. Without the ability to learn new concepts like humans do, these systems fail to form good abstractions — essentially, high-level representations of complex concepts that skip less-important details — and thus sputter when asked to do more sophisticated tasks.

Luckily, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have found a treasure trove of abstractions within natural language. In three papers to be presented at the International Conference on Learning Representations this month, the group shows how our everyday words are a rich source of context for language models, helping them build better overarching representations for code synthesis, AI planning, and robotic navigation and manipulation.

The three separate frameworks build libraries of abstractions for their given task: LILO (library induction from language observations) can synthesize, compress, and document code; Ada (action domain acquisition) explores sequential decision-making for artificial intelligence agents; and LGA (language-guided abstraction) helps robots better understand their environments to develop more feasible plans. Each system is a neurosymbolic method, a type of AI that blends human-like neural networks and program-like logical components.

LILO: A neurosymbolic framework that codes

Large language models can be used to quickly write solutions to small-scale coding tasks, but cannot yet architect entire software libraries like the ones written by human software engineers. To take their software development capabilities further, AI models need to refactor (cut down and combine) code into libraries of succinct, readable, and reusable programs.

Refactoring tools like the previously developed MIT-led Stitch algorithm can automatically identify abstractions, so, in a nod to the Disney movie “Lilo & Stitch,” CSAIL researchers combined these algorithmic refactoring approaches with LLMs. Their neurosymbolic method LILO uses a standard LLM to write code, then pairs it with Stitch to find abstractions that are comprehensively documented in a library.

LILO’s unique emphasis on natural language allows the system to do tasks that require human-like commonsense knowledge, such as identifying and removing all vowels from a string of code and drawing a snowflake. In both cases, the CSAIL system outperformed standalone LLMs, as well as a previous library learning algorithm from MIT called DreamCoder, indicating its ability to build a deeper understanding of the words within prompts. These encouraging results point to how LILO could assist with things like writing programs to manipulate documents like Excel spreadsheets, helping AI answer questions about visuals, and drawing 2D graphics.

“Language models prefer to work with functions that are named in natural language,” says Gabe Grand SM '23, an MIT PhD student in electrical engineering and computer science, CSAIL affiliate, and lead author on the research. “Our work creates more straightforward abstractions for language models and assigns natural language names and documentation to each one, leading to more interpretable code for programmers and improved system performance.”

When prompted on a programming task, LILO first uses an LLM to quickly propose solutions based on data it was trained on, and then the system slowly searches more exhaustively for outside solutions. Next, Stitch efficiently identifies common structures within the code and pulls out useful abstractions. These are then automatically named and documented by LILO, resulting in simplified programs that can be used by the system to solve more complex tasks.

The MIT framework writes programs in domain-specific programming languages, like Logo, a language developed at MIT in the 1970s to teach children about programming. Scaling up automated refactoring algorithms to handle more general programming languages like Python will be a focus for future research. Still, their work represents a step forward for how language models can facilitate increasingly elaborate coding activities.

Ada: Natural language guides AI task planning

Just like in programming, AI models that automate multi-step tasks in households and command-based video games lack abstractions. Imagine you’re cooking breakfast and ask your roommate to bring a hot egg to the table — they’ll intuitively abstract their background knowledge about cooking in your kitchen into a sequence of actions. In contrast, an LLM trained on similar information will still struggle to reason about what it needs to build a flexible plan.

Named after the famed mathematician Ada Lovelace, who many consider the world’s first programmer, the CSAIL-led “Ada” framework makes headway on this issue by developing libraries of useful plans for virtual kitchen chores and gaming. The method trains on potential tasks and their natural language descriptions, then a language model proposes action abstractions from this dataset. A human operator scores and filters the best plans into a library, so that the best possible actions can be implemented into hierarchical plans for different tasks.
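The propose-score-filter loop can be sketched as follows. All names here are illustrative, not Ada's actual interface: a language model proposes action abstractions for each described task, a human operator scores them, and only high-scoring abstractions enter the library used for hierarchical planning.

```python
# Hypothetical sketch of an Ada-style library-building loop (illustrative
# names and hard-coded proposals, not the paper's API).
def propose_abstractions(task_description):
    """Stand-in for the LLM: candidate action abstractions per task."""
    return {
        "bring a hot egg to the table": ["boil_egg", "plate_item", "carry_to"],
        "chill the wine": ["open_cabinet", "place_in", "hum_a_tune"],
    }.get(task_description, [])

def build_library(tasks, human_scores, threshold=0.5):
    """Keep only abstractions the human operator scores above threshold."""
    library = []
    for task in tasks:
        for action in propose_abstractions(task):
            if human_scores.get(action, 0.0) > threshold:
                library.append(action)
    return library

scores = {"boil_egg": 0.9, "plate_item": 0.8, "carry_to": 0.95,
          "open_cabinet": 0.7, "place_in": 0.85, "hum_a_tune": 0.1}
lib = build_library(["bring a hot egg to the table", "chill the wine"], scores)
print(lib)  # low-scoring proposals like "hum_a_tune" are filtered out
```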

“Traditionally, large language models have struggled with more complex tasks because of problems like reasoning about abstractions,” says Ada lead researcher Lio Wong, an MIT graduate student in brain and cognitive sciences, CSAIL affiliate, and LILO coauthor. “But we can combine the tools that software engineers and roboticists use with LLMs to solve hard problems, such as decision-making in virtual environments.”

When the researchers incorporated the widely used large language model GPT-4 into Ada, the system completed more tasks in a kitchen simulator and Mini Minecraft than the AI decision-making baseline “Code as Policies.” Ada used the background information hidden within natural language to understand how to place chilled wine in a cabinet and craft a bed. The results indicated task accuracy improvements of 59 percent and 89 percent, respectively.

With this success, the researchers hope to generalize their work to real-world homes, with the hopes that Ada could assist with other household tasks and aid multiple robots in a kitchen. For now, its key limitation is that it uses a generic LLM, so the CSAIL team wants to apply a more powerful, fine-tuned language model that could assist with more extensive planning. Wong and her colleagues are also considering combining Ada with a robotic manipulation framework fresh out of CSAIL: LGA (language-guided abstraction).

Language-guided abstraction: Representations for robotic tasks

Andi Peng SM ’23, an MIT graduate student in electrical engineering and computer science and CSAIL affiliate, and her coauthors designed a method to help machines interpret their surroundings more like humans, cutting out unnecessary details in a complex environment like a factory or kitchen. Just like LILO and Ada, LGA has a novel focus on how natural language leads us to those better abstractions.

In these more unstructured environments, a robot will need some common sense about what it’s tasked with, even with basic training beforehand. Ask a robot to hand you a bowl, for instance, and the machine will need a general understanding of which features are important within its surroundings. From there, it can reason about how to give you the item you want. 

In LGA’s case, humans first provide a pre-trained language model with a general task description using natural language, like “bring me my hat.” Then, the model translates this information into abstractions about the essential elements needed to perform this task. Finally, an imitation policy trained on a few demonstrations can implement these abstractions to guide a robot to grab the desired item.
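The abstraction step can be caricatured as a filter over scene features. This is a toy sketch with invented names, not LGA's actual pipeline: where LGA queries a pre-trained language model to decide which features matter for the instruction, the stand-in below uses a crude keyword match, and the resulting reduced state is what an imitation policy would be trained on.

```python
# Hypothetical sketch of an LGA-style abstraction step (illustrative names,
# not the paper's API): keep only the scene features relevant to the task.
def relevant_features(instruction, scene_features):
    """Stand-in for the language-model query: keep features mentioned in
    the instruction (a crude keyword match for illustration)."""
    words = set(instruction.lower().split())
    return {f: v for f, v in scene_features.items() if f in words}

# Scene features mapped to (x, y) positions the robot has perceived.
scene = {"hat": (0.2, 1.1), "mug": (0.5, 0.3), "lamp": (1.4, 0.9)}
state = relevant_features("bring me my hat", scene)
print(state)  # the imitation policy sees only the hat, not the mug or lamp
```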

Previous work required a person to take extensive notes on different manipulation tasks to pre-train a robot, which can be expensive. Remarkably, LGA guides language models to produce abstractions similar to those of a human annotator, but in less time. To illustrate this, LGA developed robotic policies to help Boston Dynamics’ Spot quadruped pick up fruits and throw drinks in a recycling bin. These experiments show how the MIT-developed method can scan the world and develop effective plans in unstructured environments, potentially guiding autonomous vehicles on the road and robots working in factories and kitchens.

“In robotics, a truth we often disregard is how much we need to refine our data to make a robot useful in the real world,” says Peng. “Beyond simply memorizing what’s in an image for training robots to perform tasks, we wanted to leverage computer vision and captioning models in conjunction with language. By producing text captions from what a robot sees, we show that language models can essentially build important world knowledge for a robot.”

The challenge for LGA is that some behaviors can’t be explained in language, making certain tasks underspecified. To expand how they represent features in an environment, Peng and her colleagues are considering incorporating multimodal visualization interfaces into their work. In the meantime, LGA provides a way for robots to gain a better feel for their surroundings when giving humans a helping hand. 

An “exciting frontier” in AI

“Library learning represents one of the most exciting frontiers in artificial intelligence, offering a path towards discovering and reasoning over compositional abstractions,” says Robert Hawkins, an assistant professor at the University of Wisconsin-Madison who was not involved with the papers. Hawkins notes that previous techniques exploring this subject have been “too computationally expensive to use at scale” and have an issue with the lambdas, or keywords used to describe new functions in many languages, that they generate. “They tend to produce opaque 'lambda salads,' big piles of hard-to-interpret functions. These recent papers demonstrate a compelling way forward by placing large language models in an interactive loop with symbolic search, compression, and planning algorithms. This work enables the rapid acquisition of more interpretable and adaptive libraries for the task at hand.”

By building libraries of high-quality code abstractions using natural language, the three neurosymbolic methods make it easier for language models to tackle more elaborate problems and environments in the future. This deeper understanding of the precise keywords within a prompt presents a path forward in developing more human-like AI models.

MIT CSAIL members are senior authors for each paper: Joshua Tenenbaum, a professor of brain and cognitive sciences, for both LILO and Ada; Julie Shah, head of the Department of Aeronautics and Astronautics, for LGA; and Jacob Andreas, associate professor of electrical engineering and computer science, for all three. The additional MIT authors are all PhD students: Maddy Bowers and Theo X. Olausson for LILO, Jiayuan Mao and Pratyusha Sharma for Ada, and Belinda Z. Li for LGA. Muxin Liu of Harvey Mudd College was a coauthor on LILO; Zachary Siegel of Princeton University, Jaihai Feng of the University of California at Berkeley, and Noa Korneev of Microsoft were coauthors on Ada; and Ilia Sucholutsky, Theodore R. Sumers, and Thomas L. Griffiths of Princeton were coauthors on LGA. 

LILO and Ada were supported, in part, by ​​MIT Quest for Intelligence, the MIT-IBM Watson AI Lab, Intel, U.S. Air Force Office of Scientific Research, the U.S. Defense Advanced Research Projects Agency, and the U.S. Office of Naval Research, with the latter project also receiving funding from the Center for Brains, Minds and Machines. LGA received funding from the U.S. National Science Foundation, Open Philanthropy, the Natural Sciences and Engineering Research Council of Canada, and the U.S. Department of Defense.


Nuno Loureiro named director of MIT’s Plasma Science and Fusion Center

A lauded professor, theoretical physicist, and fusion scientist, Loureiro is keenly positioned to advance the center’s research and education goals.


Nuno Loureiro, professor of nuclear science and engineering and of physics, has been appointed the new director of the MIT Plasma Science and Fusion Center, effective May 1.

Loureiro is taking the helm of one of MIT’s largest labs: more than 250 full-time researchers, staff members, and students work and study in seven buildings with 250,000 square feet of lab space. A theoretical physicist and fusion scientist, Loureiro joined MIT as a faculty member in 2016, and was appointed deputy director of the Plasma Science and Fusion Center (PSFC) in 2022. Loureiro succeeds Dennis Whyte, who stepped down at the end of 2023 to return to teaching and research.

Stepping into his new role as director, Loureiro says, “The PSFC has an impressive tradition of discovery and leadership in plasma and fusion science and engineering. Becoming director of the PSFC is an incredible opportunity to shape the future of these fields. We have a world-class team, and it’s an honor to be chosen as its leader.”

Loureiro’s own research ranges widely. He is recognized for advancing the understanding of multiple aspects of plasma behavior, particularly turbulence and the physics underpinning solar flares and other astronomical phenomena. In the fusion domain, his work enables the design of fusion devices that can more efficiently control and harness the energy of fusing plasmas, bringing the dream of clean, near-limitless fusion power that much closer. 

Plasma physics is foundational to advancing fusion science, a fact Loureiro has embraced and that is relevant as he considers the direction of the PSFC’s multidisciplinary research. “But plasma physics is only one aspect of our focus. Building a scientific agenda that continues and expands on the PSFC’s history of innovation in all aspects of fusion science and engineering is vital, and a key facet of that work is facilitating our researchers’ efforts to produce the breakthroughs that are necessary for the realization of fusion energy.”

As the climate crisis accelerates, fusion power continues to grow in appeal: It produces no carbon emissions, its fuel is plentiful, and dangerous “meltdowns” are impossible. The sooner that fusion power is commercially available, the greater impact it can have on reducing greenhouse gas emissions and meeting global climate goals. While technical challenges remain, “the PSFC is well poised to meet them, and continue to show leadership. We are a mission-driven lab, and our students and staff are incredibly motivated,” Loureiro comments.

“As MIT continues to lead the way toward the delivery of clean fusion power onto the grid, I have no doubt that Nuno is the right person to step into this key position at this critical time,” says Maria T. Zuber, MIT’s presidential advisor for science and technology policy. “I look forward to the steady advance of plasma physics and fusion science at MIT under Nuno’s leadership.”

Over the last decade, there have been massive leaps forward in the field of fusion energy, driven in part by innovations like high-temperature superconducting magnets developed at the PSFC. Further progress is expected: Loureiro believes that “The next few years are certain to be an exciting time for us, and for fusion as a whole. It’s the dawn of a new era with burning plasma experiments” — a reference to the collaboration between the PSFC and Commonwealth Fusion Systems, a startup company spun out of the PSFC, to build SPARC, a fusion device that is slated to turn on in 2026 and produce a burning plasma that yields more energy than it consumes. “It’s going to be a watershed moment,” says Loureiro.

He continues, “In addition, we have strong connections to inertial confinement fusion experiments, including those at Lawrence Livermore National Lab, and we’re looking forward to expanding our research into stellarators, which are another kind of magnetic fusion device.” Over recent years, the PSFC has significantly increased its collaboration with industrial partners such as Eni, IBM, and others. Loureiro sees great value in this: “These collaborations are mutually beneficial: they allow us to grow our research portfolio while advancing companies’ R&D efforts. It’s very dynamic and exciting.”

Loureiro’s directorship begins as the PSFC is launching key tech development projects like LIBRA, a “blanket” of molten salt that can be wrapped around fusion vessels and perform double duty as a neutron energy absorber and a breeder for tritium (the fuel for fusion). Researchers at the PSFC have also developed a way to rapidly test the durability of materials being considered for use in a fusion power plant environment, and are now creating an experiment that will utilize a powerful microwave source called a gyrotron to irradiate candidate materials.

Interest in fusion is at an all-time high; the demand for researchers and engineers, particularly in the nascent commercial fusion industry, is reflected by the record number of graduate students that are studying at the PSFC — more than 90 across seven affiliated MIT departments. The PSFC’s classrooms are full, and Loureiro notes a palpable sense of excitement. “Students are our greatest strength,” says Loureiro. “They come here to do world-class research but also to grow as individuals, and I want to give them a great place to do that. Supporting those experiences, making sure they can be as successful as possible is one of my top priorities.” Loureiro plans to continue teaching and advising students after his appointment begins.

MIT President Sally Kornbluth’s recently announced Climate Project is a clarion call for Loureiro: “It’s not hyperbole to say MIT is where you go to find solutions to humanity’s biggest problems,” he says. “Fusion is a hard problem, but it can be solved with resolve and ingenuity — characteristics that define MIT. Fusion energy will change the course of human history. It’s both humbling and exciting to be leading a research center that will play a key role in enabling that change.” 


To understand cognition — and its dysfunction — neuroscientists must learn its rhythms

A new framework describes how thought arises from the coordination of neural activity driven by oscillating electric fields — a.k.a. brain “waves” or “rhythms.”


It could be very informative to observe the pixels on your phone under a microscope, but not if your goal is to understand what a whole video on the screen shows. Cognition is much the same kind of emergent property in the brain: it can only be understood by observing how millions of cells act in coordination, argues a trio of MIT neuroscientists. In a new article, they lay out a framework for understanding how thought arises from the coordination of neural activity driven by oscillating electric fields — also known as brain “waves” or “rhythms.”

Historically dismissed solely as byproducts of neural activity, brain rhythms are actually critical for organizing it, write Picower Professor Earl Miller and research scientists Scott Brincat and Jefferson Roy in Current Opinion in Behavioral Science. And while neuroscientists have gained tremendous knowledge from studying how individual brain cells connect and how and when they emit “spikes” to send impulses through specific circuits, there is also a need to appreciate and apply new concepts at the brain rhythm scale, which can span individual, or even multiple, brain regions.

“Spiking and anatomy are important, but there is more going on in the brain above and beyond that,” says senior author Miller, a faculty member in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. “There’s a whole lot of functionality taking place at a higher level, especially cognition.”

The stakes of studying the brain at that scale, the authors write, might not only include understanding healthy higher-level function but also how those functions become disrupted in disease.

“Many neurological and psychiatric disorders, such as schizophrenia, epilepsy, and Parkinson’s, involve disruption of emergent properties like neural synchrony,” they write. “We anticipate that understanding how to interpret and interface with these emergent properties will be critical for developing effective treatments as well as understanding cognition.”

The emergence of thoughts

The bridge between the scale of individual neurons and the broader-scale coordination of many cells is founded on electric fields, the researchers write. Via a phenomenon called “ephaptic coupling,” the electrical field generated by the activity of a neuron can influence the voltage of neighboring neurons, creating an alignment among them. In this way, electric fields both reflect neural activity and also influence it. In a paper in 2022, Miller and colleagues showed via experiments and computational modeling that the information encoded in the electric fields generated by ensembles of neurons can be read out more reliably than the information encoded by the spikes of individual cells. In 2023 Miller’s lab provided evidence that rhythmic electrical fields may coordinate memories between regions.

At this larger scale, in which rhythmic electric fields carry information between brain regions, Miller’s lab has published numerous studies showing that lower-frequency rhythms in the so-called “beta” band originate in deeper layers of the brain’s cortex and appear to regulate the power of faster-frequency “gamma” rhythms in more superficial layers. By recording neural activity in the brains of animals engaged in working memory games, the lab has shown that beta rhythms carry “top-down” signals to control when and where gamma rhythms can encode sensory information, such as the images that the animals need to remember in the game.

Some of the lab’s latest evidence suggests that beta rhythms apply this control of cognitive processes to physical patches of the cortex, essentially acting like stencils that pattern where and when gamma can encode sensory information into memory, or retrieve it. According to this theory, which Miller calls “Spatial Computing,” beta can thereby establish the general rules of a task (for instance, the back-and-forth turns required to open a combination lock), even as the specific information content may change (for instance, new numbers when the combination changes). More generally, this structure also enables neurons to flexibly encode more than one kind of information at a time, the authors write, a widely observed neural property called “mixed selectivity.” For instance, a neuron encoding a number of the lock combination can also be assigned, based on which beta-stenciled patch it is in, the particular step of the unlocking process that the number matters for.

In the new study, Miller, Brincat, and Roy suggest another advantage consistent with cognitive control being based on an interplay of large-scale coordinated rhythmic activity: “subspace coding.” This idea postulates that brain rhythms organize the otherwise massive number of possible outcomes that could result from, say, 1,000 neurons engaging in independent spiking activity. Instead of all the many combinatorial possibilities, many fewer “subspaces” of activity actually arise, because neurons are coordinated, rather than independent. It is as if the spiking of neurons is like a flock of birds coordinating their movements. Different phases and frequencies of brain rhythms provide this coordination, aligned to amplify each other, or offset to prevent interference. For instance, if a piece of sensory information needs to be remembered, neural activity representing it can be protected from interference when new sensory information is perceived.

“Thus the organization of neural responses into subspaces can both segregate and integrate information,” the authors write.

The power of brain rhythms to coordinate and organize information processing in the brain is what enables functional cognition to emerge at that scale, the authors write. Understanding cognition in the brain, therefore, requires studying rhythms.

“Studying individual neural components in isolation — individual neurons and synapses — has made enormous contributions to our understanding of the brain and remains important,” the authors conclude. “However, it’s becoming increasingly clear that, to fully capture the brain’s complexity, those components must be analyzed in concert to identify, study, and relate their emergent properties.”


MIT faculty, instructors, students experiment with generative AI in teaching and learning

At MIT’s Festival of Learning 2024, panelists stressed the importance of developing critical thinking skills while leveraging technologies like generative AI.


How can MIT’s community leverage generative AI to support learning and work on campus and beyond?

At MIT’s Festival of Learning 2024, faculty and instructors, students, staff, and alumni exchanged perspectives about the digital tools and innovations they’re experimenting with in the classroom. Panelists agreed that generative AI should be used to scaffold — not replace — learning experiences.

This annual event, co-sponsored by MIT Open Learning and the Office of the Vice Chancellor, celebrates teaching and learning innovations. When introducing new teaching and learning technologies, panelists stressed the importance of iteration and teaching students how to develop critical thinking skills while leveraging technologies like generative AI.

“The Festival of Learning brings the MIT community together to explore and celebrate what we do every day in the classroom,” said Christopher Capozzola, senior associate dean for open learning. “This year's deep dive into generative AI was reflective and practical — yet another remarkable instance of ‘mind and hand’ here at the Institute.”  

Incorporating generative AI into learning experiences 

MIT faculty and instructors aren’t just willing to experiment with generative AI — some believe it’s a necessary tool to prepare students to be competitive in the workforce. “In a future state, we will know how to teach skills with generative AI, but we need to be making iterative steps to get there instead of waiting around,” said Melissa Webster, lecturer in managerial communication at MIT Sloan School of Management. 

Some educators are revisiting their courses’ learning goals and redesigning assignments so students can achieve the desired outcomes in a world with AI. Webster, for example, previously paired written and oral assignments so students would develop ways of thinking. But, she saw an opportunity for teaching experimentation with generative AI. If students are using tools such as ChatGPT to help produce writing, Webster asked, “how do we still get the thinking part in there?”

One of the new assignments Webster developed asked students to generate cover letters through ChatGPT and critique the results from the perspective of future hiring managers. Beyond learning how to refine generative AI prompts to produce better outputs, Webster shared that “students are thinking more about their thinking.” Reviewing their ChatGPT-generated cover letter helped students determine what to say and how to say it, supporting their development of higher-level strategic skills like persuasion and understanding audiences.

Takako Aikawa, senior lecturer at the MIT Global Studies and Languages Section, redesigned a vocabulary exercise to ensure students developed a deeper understanding of the Japanese language, rather than simply producing right or wrong answers. Students compared short sentences they wrote themselves with versions written by ChatGPT, developing vocabulary and grammar patterns beyond the textbook. “This type of activity enhances not only their linguistic skills but stimulates their metacognitive or analytical thinking,” said Aikawa. “They have to think in Japanese for these exercises.”

While these panelists and other Institute faculty and instructors are redesigning their assignments, many MIT undergraduate and graduate students across different academic departments are leveraging generative AI for efficiency: creating presentations, summarizing notes, and quickly retrieving specific ideas from long documents. But this technology can also creatively personalize learning experiences. Its ability to communicate information in different ways allows students with different backgrounds and abilities to adapt course material in a way that’s specific to their particular context. 

Generative AI, for example, can help with student-centered learning at the K-12 level. Joe Diaz, program manager and STEAM educator for MIT pK-12 at Open Learning, encouraged educators to foster learning experiences where the student can take ownership. “Take something that kids care about and they’re passionate about, and they can discern where [generative AI] might not be correct or trustworthy,” said Diaz.

Panelists encouraged educators to think about generative AI in ways that move beyond a course policy statement. When incorporating generative AI into assignments, the key is to be clear about learning goals and open to sharing examples of how generative AI could be used in ways that align with those goals. 

The importance of critical thinking

Although generative AI can have positive impacts on educational experiences, users need to understand why large language models might produce incorrect or biased results. Faculty, instructors, and student panelists emphasized that it’s critical to contextualize how generative AI works. “[Instructors] try to explain what goes on in the back end and that really does help my understanding when reading the answers that I’m getting from ChatGPT or Copilot,” said Joyce Yuan, a senior in computer science. 

Jesse Thaler, professor of physics and director of the National Science Foundation Institute for Artificial Intelligence and Fundamental Interactions, warned about trusting a probabilistic tool to give definitive answers without uncertainty bands. “The interface and the output needs to be of a form that there are these pieces that you can verify or things that you can cross-check,” Thaler said.

When introducing tools like calculators or generative AI, the faculty and instructors on the panel said it’s essential for students to develop critical thinking skills in those particular academic and professional contexts. Computer science courses, for example, could permit students to use ChatGPT for help with their homework if the problem sets are broad enough that generative AI tools wouldn’t capture the full answer. However, introductory students who haven’t yet developed an understanding of programming concepts need to be able to discern whether the information ChatGPT generates is accurate.

Ana Bell, senior lecturer in the Department of Electrical Engineering and Computer Science and MITx digital learning scientist, dedicated one class toward the end of the semester in Course 6.100L (Introduction to Computer Science and Programming Using Python) to teaching students how to use ChatGPT for programming questions. She wanted students to understand why giving generative AI tools the context for a programming problem, with as many details as possible, helps achieve the best possible results. “Even after it gives you a response back, you have to be critical about that response,” said Bell. By waiting until this stage to introduce ChatGPT, students were able to evaluate generative AI’s answers critically, because they had spent the semester developing the skills to identify whether solutions to problem sets were incorrect or might not work for every case.

A scaffold for learning experiences

The bottom line from the panelists during the Festival of Learning was that generative AI should provide scaffolding for engaging learning experiences where students can still achieve desired learning goals. The MIT undergraduate and graduate student panelists found it invaluable when educators set expectations for the course about when and how it’s appropriate to use AI tools. Informing students of the learning goals allows them to understand whether generative AI will help or hinder their learning. Student panelists asked for trust that they would use generative AI as a starting point, or treat it like a brainstorming session with a friend for a group project. Faculty and instructor panelists said they will continue iterating their lesson plans to best support student learning and critical thinking. 

Panelists from both sides of the classroom discussed the importance of generative AI users being responsible for the content they produce and avoiding automation bias — trusting the technology’s response implicitly without thinking critically about why it produced that answer and whether it’s accurate. But since generative AI is built by people making design decisions, Thaler told students, “You have power to change the behavior of those tools.”


Three from MIT awarded 2024 Guggenheim Fellowships

MIT professors Roger Levy, Tracy Slatyer, and Martin Wainwright appointed to the 2024 class of “trail-blazing fellows.”


MIT faculty members Roger Levy, Tracy Slatyer, and Martin Wainwright are among 188 scientists, artists, and scholars awarded 2024 fellowships from the John Simon Guggenheim Memorial Foundation. Working across 52 disciplines, the fellows were selected from almost 3,000 applicants for “prior career achievement and exceptional promise.”

Each fellow receives a monetary stipend to pursue independent work at the highest level. Since its founding in 1925, the Guggenheim Foundation has awarded over $400 million in fellowships to more than 19,000 fellows. This year, MIT professors were recognized in the categories of neuroscience, physics, and data science.

Roger Levy is a professor in the Department of Brain and Cognitive Sciences. Combining computational modeling of large datasets with psycholinguistic experimentation, his work furthers our understanding of the cognitive underpinning of language processing, and helps to design models and algorithms that will allow machines to process human language. He is a recipient of the Alfred P. Sloan Research Fellowship, the NSF Faculty Early Career Development (CAREER) Award, and a fellowship at the Center for Advanced Study in the Behavioral Sciences.

Tracy Slatyer is a professor in the Department of Physics as well as the Center for Theoretical Physics in the MIT Laboratory for Nuclear Science and the MIT Kavli Institute for Astrophysics and Space Research. Her research focuses on dark matter — novel theoretical models, predicting observable signals, and analysis of astrophysical and cosmological datasets. She was a co-discoverer of the giant gamma-ray structures known as the “Fermi Bubbles” erupting from the center of the Milky Way, for which she received the New Horizons in Physics Prize in 2021. She is also a recipient of a Simons Investigator Award and Presidential Early Career Awards for Scientists and Engineers.

Martin Wainwright is the Cecil H. Green Professor in Electrical Engineering and Computer Science and Mathematics, and is affiliated with the Laboratory for Information and Decision Systems and the Statistics and Data Science Center. He is interested in statistics, machine learning, information theory, and optimization. Wainwright has been recognized with an Alfred P. Sloan Foundation Fellowship, the Medallion Lectureship and Award from the Institute of Mathematical Statistics, and the COPSS Presidents’ Award from the Joint Statistical Societies. Wainwright has also co-authored books on graphical and statistical modeling, and solo-authored a book on high-dimensional statistics.

“Humanity faces some profound existential challenges,” says Edward Hirsch, president of the foundation. “The Guggenheim Fellowship is a life-changing recognition. It’s a celebrated investment into the lives and careers of distinguished artists, scholars, scientists, writers and other cultural visionaries who are meeting these challenges head-on and generating new possibilities and pathways across the broader culture as they do so.”


Two from MIT awarded 2024 Paul and Daisy Soros Fellowships for New Americans

Fellowship funds graduate studies for outstanding immigrants and children of immigrants.


MIT graduate student Riyam Al Msari and alumna Francisca Vasconcelos ’20 are among the 30 recipients of this year’s Paul and Daisy Soros Fellowships for New Americans. In addition, two Soros winners will begin PhD studies at MIT in the fall: Zijian (William) Niu in computational and systems biology and Russell Legate-Yang in economics.

The P.D. Soros Fellowships for New Americans program recognizes the potential of immigrants to make significant contributions to U.S. society, culture, and academia by providing $90,000 in graduate school financial support over two years.

Riyam Al Msari

Riyam Al Msari, born in Baghdad, Iraq, faced a turbulent childhood shaped by the 2003 war. At age 8, her life took a traumatic turn when her home was bombed in 2006, leading to her family's displacement to Iraqi Kurdistan. Despite facing educational barriers and ethnic discrimination, Al Msari remained undeterred, wholeheartedly embracing her education.

Soon after her father immigrated to the United States to seek political asylum in 2016, Al Msari’s mother was diagnosed with head and neck cancer, leaving Al Msari, at just 18, as her mother’s primary caregiver. Despite her mother’s survival, Al Msari witnessed the limitations and collateral damage caused by standardized cancer therapies, which left her mother in a compromised state. This realization invigorated her determination to pioneer translational cancer-targeted therapies.

In 2018, when Al Msari was 20, she came to the United States and reunited with her father and the rest of her family, who arrived later with significant help from then-senator Kamala Harris’s office. Despite her Iraqi university credits not transferring, Al Msari persevered and continued her education at Houston Community College as a Louis Stokes Alliances for Minority Participation (LSAMP) scholar, and then graduated magna cum laude as a Regents Scholar from the University of California at San Diego’s bioengineering program, where she focused on lymphatic-preserving neoadjuvant immunotherapies for head and neck cancers.

As a PhD student in the MIT Department of Biological Engineering, Al Msari conducts research in the Irvine and Wittrup labs to employ engineering strategies for localized immune targeting of cancers. She aspires to establish a startup that bridges preclinical and clinical oncology research, specializing in the development of innovative protein- and biomaterial-based translational cancer immunotherapies.

Francisca Vasconcelos ’20

In the early 1990s, Francisca Vasconcelos’s parents emigrated from Portugal to the United States in pursuit of world-class scientific research opportunities. Vasconcelos was born in Boston while her parents were PhD students at MIT and Harvard University. When she was 5, her family relocated to San Diego, where her parents began working at the University of California at San Diego.

Vasconcelos graduated from MIT in 2020 with a BS in electrical engineering, computer science, and physics. As an undergraduate, she performed substantial research involving machine learning and data analysis for quantum computers in the MIT Engineering Quantum Systems Group, under the guidance of Professor William Oliver. Drawing upon her teaching and research experience at MIT, Vasconcelos became the founding academic director of The Coding School nonprofit’s Qubit x Qubit initiative, where she taught thousands of students from different backgrounds about the fundamentals of quantum computation.

In 2020, Vasconcelos was awarded a Rhodes Scholarship to the University of Oxford, where she pursued an MSc in statistical sciences and an MSt in philosophy of physics. At Oxford, she performed substantial research on uncertainty quantification of machine learning models for medical imaging in the OxCSML group. She also played for Oxford’s Women’s Blues Football team. 

Now a computer science PhD student and NSF Graduate Research Fellow at the University of California at Berkeley, Vasconcelos is a member of both the Berkeley Artificial Intelligence Research Lab and CS Theory Group. Her research interests lie at the intersection of quantum computation and machine learning. She is especially interested in developing efficient classical algorithms to learn about quantum systems, as well as quantum algorithms to improve simulations of quantum processes. In doing so, she hopes to find meaningful ways in which quantum computers can outperform classical computers.

The P.D. Soros Fellowship attracts more than 1,800 applicants annually. MIT students interested in applying may contact Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development.


Seven from MIT elected to American Academy of Arts and Sciences for 2024

The prestigious honor society announces more than 250 new members.


Seven MIT faculty members are among the 250 leaders from academia, the arts, industry, public policy, and research elected to the American Academy of Arts and Sciences, the academy announced April 24.

One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.

Those elected from MIT in 2024 are:

“We honor these artists, scholars, scientists, and leaders in the public, non-profit, and private sectors for their accomplishments and for the curiosity, creativity, and courage required to reach new heights,” says David Oxtoby, president of the academy. “We invite these exceptional individuals to join in the academy’s work to address serious challenges and advance the common good.”

Since its founding in 1780, the academy has elected leading thinkers from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 250 Nobel and Pulitzer Prize winners.


MIT announces 2024 Bose Grants

The grants fund studies of clean hydrogen production, fetal health-sensing fabric, basalt architecture, and shark-based ocean monitoring.


MIT Provost Cynthia Barnhart announced four Professor Amar G. Bose Research Grants to support bold research projects across diverse areas of study: generating clean hydrogen from deep in the Earth, building an environmentally friendly house of basalt, designing maternity clothing that monitors fetal health, and recruiting sharks as ocean oxygen monitors.

This year's recipients are Iwnetim Abate, assistant professor of materials science and engineering; Andrew Babbin, the Cecil and Ida Green Associate Professor in Earth, Atmospheric and Planetary Sciences; Yoel Fink, professor of materials science and engineering and of electrical engineering and computer science; and Skylar Tibbits, associate professor of design research in the Department of Architecture.

The program was named for the visionary founder of the Bose Corporation and MIT alumnus Amar G. Bose ’51, SM ’52, ScD ’56. After gaining admission to MIT, Bose became a top math student and a Fulbright Scholarship recipient. He spent 46 years as a professor at MIT, led innovations in sound design, and founded the Bose Corp. in 1964. MIT launched the Bose grant program 11 years ago to provide funding over a three-year period to MIT faculty who propose original, cross-disciplinary, and often risky research projects that would likely not be funded by conventional sources.

“The promise of the Bose Fellowship is to help bold, daring ideas become realities, an approach that honors Amar Bose’s legacy,” says Barnhart. “Thanks to support from this program, these talented faculty members have the freedom to explore their bold and innovative ideas.”

Deep and clean hydrogen futures

A green energy future will depend on harnessing hydrogen as a clean energy source, sequestering polluting carbon dioxide, and mining the minerals essential to building clean energy technologies such as advanced batteries. Iwnetim Abate thinks he has a solution for all three challenges: an innovative hydrogen reactor.

He plans to build a reactor that will create natural hydrogen from ultramafic rocks in the Earth’s crust. “The Earth is literally a giant hydrogen factory waiting to be tapped,” Abate explains. “A back-of-the-envelope calculation for the first seven kilometers of the Earth’s crust estimates that there is enough ultramafic rock to produce hydrogen for 250,000 years.”

The reactor envisioned by Abate injects water to create a reaction that releases hydrogen, while also supporting the injection of climate-altering carbon dioxide into the rock, providing a global carbon capacity of 100 trillion tons. At the same time, the reactor process could provide essential elements such as lithium, nickel, and cobalt — some of the most important raw materials used in advanced batteries and electronics.

“Ultimately, our goal is to design and develop a scalable reactor for simultaneously tapping into the trifecta from the Earth's subsurface,” Abate says.

Sharks as oceanographers

If we want to understand more about how oxygen levels in the world’s seas are disturbed by human activities and climate change, we should turn to a sensing platform “that has been honed by 400 million years of evolution to perfectly sample the ocean: sharks,” says Andrew Babbin.

As the planet warms, oceans are projected to contain less dissolved oxygen, with impacts on the productivity of global fisheries, natural carbon sequestration, and the flux of climate-altering greenhouse gases from the ocean to the air. While scientists know dissolved oxygen is important, it has proved difficult to track over seasons, decades, and underexplored regions both shallow and deep.

Babbin’s goal is to develop a low-cost sensor for dissolved oxygen that can be integrated with preexisting electronic shark tags used by marine biologists. “This fleet of sharks … will finally enable us to measure the extent of the low-oxygen zones of the ocean, how they change seasonally and with El Niño/La Niña oscillation, and how they expand or contract into the future.”

The partnership with sharks will also spotlight the importance of these often-maligned animals for global marine and fisheries health, Babbin says. “We hope in pursuing this work marrying microscopic and macroscopic life we will inspire future oceanographers and conservationists, and lead to a better appreciation for the chemistry that underlies global habitability.”

Maternity wear that monitors fetal health

There are 2 million stillbirths around the world each year, and in the United States alone, 21,000 families suffer this terrible loss. In many cases, mothers and their doctors had no warning of any abnormalities or changes in fetal health leading up to these deaths. Yoel Fink and colleagues are looking for a better way to monitor fetal health and provide proactive treatment.

Fink is building on years of research on acoustic fabrics to design an affordable shirt for mothers that would monitor and communicate important details of fetal health. His team’s original research drew inspiration from the function of the eardrum, designing a fiber that could be woven into other fabrics to create a kind of fabric microphone.

“Given the sensitivity of the acoustic fabrics in sensing these nanometer-scale vibrations, could a mother's clothing transcend its conventional role and become a health monitor, picking up on the acoustic signals and subsequent vibrations that arise from her unborn baby's heartbeat and motion?” Fink says. “Could a simple and affordable worn fabric allow an expecting mom to sleep better, knowing that her fetus is being listened to continuously?”

The proposed maternity shirt could measure fetal heart and breathing rate, and might be able to give an indication of the fetal body position, he says. In the final stages of development, he and his colleagues hope to develop machine learning approaches that would identify abnormal fetal heart rate and motion and deliver real-time alerts.

A basalt house in Iceland

In the land of volcanoes, Skylar Tibbits wants to build a case-study home almost entirely from the basalt rock that makes up the Icelandic landscape.

Architects are increasingly interested in building using one natural material — creating a monomaterial structure — that can be easily recycled. At the moment, the building industry represents 40 percent of carbon emissions worldwide, and consists of many materials and structures, from metal to plastics to concrete, that can’t be easily disassembled or reused.

The proposed basalt house in Iceland, a project co-led by J. Jih, associate professor of the practice in the Department of Architecture, is “an architecture that would be fully composed of the surrounding earth, that melts back into that surrounding earth at the end of its lifespan, and that can be recycled infinitely,” Tibbits explains.

Basalt, the most common rock form in the Earth’s crust, can be spun into fibers for insulation and rebar. Basalt fiber performs as well as glass and carbon fibers at a lower cost in some applications, although it is not widely used in architecture. In cast form, it can make corrosion- and heat-resistant plumbing, cladding and flooring.

“A monomaterial architecture is both a simple and radical proposal that unfortunately falls outside of traditional funding avenues,” says Tibbits. “The Bose grant is the perfect and perhaps the only option for our research, which we see as a uniquely achievable moonshot with transformative potential for the entire built environment.”


MIT scientists tune the entanglement structure in an array of qubits

The advance offers a way to characterize a fundamental resource needed for quantum computing.


Entanglement is a form of correlation between quantum objects, such as particles at the atomic scale. This uniquely quantum phenomenon cannot be explained by the laws of classical physics, yet it is one of the properties that explains the macroscopic behavior of quantum systems.

Because entanglement is central to the way quantum systems work, understanding it better could give scientists a deeper sense of how information is stored and processed efficiently in such systems.

Qubits, or quantum bits, are the building blocks of a quantum computer. However, it is extremely difficult to make specific entangled states in many-qubit systems, let alone investigate them. There are also a variety of entangled states, and telling them apart can be challenging.

Now, MIT researchers have demonstrated a technique to efficiently generate entanglement among an array of superconducting qubits that exhibit a specific type of behavior.

In recent years, researchers in the Engineering Quantum Systems (EQuS) group have developed techniques using microwave technology to precisely control a quantum processor composed of superconducting circuits. In addition to these control techniques, the methods introduced in this work enable the processor to efficiently generate highly entangled states and shift those states from one type of entanglement to another — including between types that are more likely to support quantum speed-up and those that are not.

“Here, we are demonstrating that we can utilize the emerging quantum processors as a tool to further our understanding of physics. While everything we did in this experiment was on a scale which can still be simulated on a classical computer, we have a good roadmap for scaling this technology and methodology beyond the reach of classical computing,” says Amir H. Karamlou ’18, MEng ’18, PhD ’23, the lead author of the paper.

The senior author is William D. Oliver, the Henry Ellis Warren professor of electrical engineering and computer science and of physics, director of the Center for Quantum Engineering, leader of the EQuS group, and associate director of the Research Laboratory of Electronics. Karamlou and Oliver are joined by Research Scientist Jeff Grover, postdoc Ilan Rosen, and others in the departments of Electrical Engineering and Computer Science and of Physics at MIT, at MIT Lincoln Laboratory, and at Wellesley College and the University of Maryland. The research appears today in Nature.

Assessing entanglement

In a large quantum system comprising many interconnected qubits, one can think about entanglement as the amount of quantum information shared between a given subsystem of qubits and the rest of the larger system.

The entanglement within a quantum system can be categorized as area-law or volume-law, based on how this shared information scales with the geometry of subsystems. In volume-law entanglement, the amount of entanglement between a subsystem of qubits and the rest of the system grows proportionally with the total size of the subsystem.

On the other hand, area-law entanglement depends on how many shared connections exist between a subsystem of qubits and the larger system. As the subsystem expands, the amount of entanglement only grows along the boundary between the subsystem and the larger system.
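The two scaling regimes above can be made concrete with a toy numeric sketch. This is only an illustration of the definitions, not the paper's measurement protocol: for a square patch of side ℓ inside a larger 2D qubit grid, volume-law entropy grows with the ℓ² qubits inside the patch, while area-law entropy grows only with its roughly 4ℓ boundary sites. The unit of entropy per qubit or per boundary site is an arbitrary assumption here.

```python
# Toy comparison of entanglement-entropy scaling for a square subsystem
# of side `side` embedded in a larger 2D qubit grid. Assumes one unit of
# entropy per qubit (volume-law) or per boundary site (area-law).

def volume_law_entropy(side, s0=1.0):
    """Volume-law: entropy proportional to the qubit count, side**2."""
    return s0 * side * side

def area_law_entropy(side, s0=1.0):
    """Area-law: entropy proportional to the perimeter, 4 * side."""
    return s0 * 4 * side

for side in (2, 4, 8, 16):
    print(f"side={side:2d}  volume-law={volume_law_entropy(side):6.1f}  "
          f"area-law={area_law_entropy(side):5.1f}")
```

As the patch grows, the volume-law value pulls away from the area-law value, which is why volume-law entanglement carries far more shared information at scale.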

In theory, the formation of volume-law entanglement is related to what makes quantum computing so powerful.

“While we have not yet fully abstracted the role that entanglement plays in quantum algorithms, we do know that generating volume-law entanglement is a key ingredient to realizing a quantum advantage,” says Oliver.

However, volume-law entanglement is also more complex than area-law entanglement, and simulating it at scale on a classical computer is practically prohibitive.

“As you increase the complexity of your quantum system, it becomes increasingly difficult to simulate it with conventional computers. If I am trying to fully keep track of a system with 80 qubits, for instance, then I would need to store more information than what we have stored throughout the history of humanity,” Karamlou says.
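Karamlou's 80-qubit point follows from simple arithmetic: a full n-qubit state vector holds 2^n complex amplitudes. The sketch below assumes 16 bytes per amplitude (two 64-bit floats) — an assumption about numerical precision, not a figure from the article:

```python
# Back-of-the-envelope memory cost of storing a full n-qubit state vector
# on a classical computer: 2**n complex amplitudes, assumed 16 bytes each.

def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

gb = statevector_bytes(16) / 1e9   # the 16-qubit device in this work
yb = statevector_bytes(80) / 1e24  # an 80-qubit system, in yottabytes
print(f"16 qubits: {gb:.6f} GB")
print(f"80 qubits: {yb:.1f} YB")
```

A 16-qubit state fits in about a megabyte, while 80 qubits would demand tens of yottabytes — far beyond all storage ever manufactured, consistent with Karamlou's comparison.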

The researchers created a quantum processor and control protocol that enable them to efficiently generate and probe both types of entanglement.

Their processor comprises superconducting circuits, which are used to engineer artificial atoms. The artificial atoms are utilized as qubits, which can be controlled and read out with high accuracy using microwave signals.

The device used for this experiment contained 16 qubits, arranged in a two-dimensional grid. The researchers carefully tuned the processor so all 16 qubits have the same transition frequency. Then, they applied an additional microwave drive to all of the qubits simultaneously.

If this microwave drive has the same frequency as the qubits, it generates quantum states that exhibit volume-law entanglement. However, as the microwave frequency increases or decreases, the qubits exhibit less volume-law entanglement, eventually crossing over to entangled states that increasingly follow an area-law scaling.

Careful control

“Our experiment is a tour de force of the capabilities of superconducting quantum processors. In one experiment, we operated the processor both as an analog simulation device, enabling us to efficiently prepare states with different entanglement structures, and as a digital computing device, needed to measure the ensuing entanglement scaling,” says Rosen.

To enable that control, the team put years of work into carefully building up the infrastructure around the quantum processor.

By demonstrating the crossover from volume-law to area-law entanglement, the researchers experimentally confirmed what theoretical studies had predicted. More importantly, this method can be used to determine whether the entanglement in a generic quantum processor is area-law or volume-law.

“The MIT experiment underscores the distinction between area-law and volume-law entanglement in two-dimensional quantum simulations using superconducting qubits. This beautifully complements our work on entanglement Hamiltonian tomography with trapped ions in a parallel study published in Nature in 2023,” says Peter Zoller, a professor of theoretical physics at the University of Innsbruck, who was not involved with this work.

“Quantifying entanglement in large quantum systems is a challenging task for classical computers but a good example of where quantum simulation could help,” says Pedram Roushan of Google, who also was not involved in the study. “Using a 2D array of superconducting qubits, Karamlou and colleagues were able to measure entanglement entropy of various subsystems of various sizes. They measure the volume-law and area-law contributions to entropy, revealing crossover behavior as the system’s quantum state energy is tuned. It powerfully demonstrates the unique insights quantum simulators can offer.”

In the future, scientists could utilize this technique to study the thermodynamic behavior of complex quantum systems, which is too difficult to study using current analytical methods and practically prohibitive to simulate on even the world’s most powerful supercomputers.

“The experiments we did in this work can be used to characterize or benchmark larger-scale quantum systems, and we may also learn something more about the nature of entanglement in these many-body systems,” says Karamlou.

Additional co-authors of the study are Sarah E. Muschinske, Cora N. Barrett, Agustin Di Paolo, Leon Ding, Patrick M. Harrington, Max Hays, Rabindra Das, David K. Kim, Bethany M. Niedzielski, Meghan Schuldt, Kyle Serniak, Mollie E. Schwartz, Jonilyn L. Yoder, Simon Gustavsson, and Yariv Yanay.

This research is funded, in part, by the U.S. Department of Energy, the U.S. Defense Advanced Research Projects Agency, the U.S. Army Research Office, the National Science Foundation, the STC Center for Integrated Quantum Materials, the Wellesley College Samuel and Hilda Levitt Fellowship, NASA, and the Oak Ridge Institute for Science and Education.


Geologists discover rocks with the oldest evidence yet of Earth’s magnetic field

The 3.7 billion-year-old rocks may extend the magnetic field’s age by 200 million years.


Geologists at MIT and Oxford University have uncovered ancient rocks in Greenland that bear the oldest remnants of Earth’s early magnetic field.

The rocks appear to be exceptionally pristine, having preserved their properties for billions of years. The researchers determined that the rocks are about 3.7 billion years old and retain signatures of a magnetic field with a strength of at least 15 microtesla. The ancient field is similar in magnitude to the Earth’s magnetic field today.

The open-access findings, appearing today in the Journal of Geophysical Research, represent some of the earliest evidence of a magnetic field surrounding the Earth. The results potentially extend the age of the Earth’s magnetic field by hundreds of millions of years, and may shed light on the planet’s early conditions that helped life take hold.


“The magnetic field is, in theory, one of the reasons we think Earth is really unique as a habitable planet,” says Claire Nichols, a former MIT postdoc who is now an associate professor of the geology of planetary processes at Oxford University. “It’s thought our magnetic field protects us from harmful radiation from space, and also helps us to have oceans and atmospheres that can be stable for long periods of time.”

Previous studies have shown evidence for a magnetic field on Earth that is at least 3.5 billion years old. The new study extends the magnetic field’s record by another 200 million years.

“That’s important because that’s the time when we think life was emerging,” says Benjamin Weiss, the Robert R. Shrock Professor of Planetary Sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “If the Earth’s magnetic field was around a few hundred million years earlier, it could have played a critical role in making the planet habitable.”

Nichols and Weiss are co-authors of the new study, which also includes Craig Martin and Athena Eyster at MIT, Adam Maloof at Princeton University, and additional colleagues from institutions including Tufts University and the University of Colorado at Boulder.

A slow churn

Today, the Earth’s magnetic field is powered by its molten iron core, which slowly churns up electric currents in a self-generating “dynamo.” The resulting magnetic field extends out and around the planet like a protective bubble. Scientists suspect that, early in its evolution, the Earth was able to foster life, in part due to an early magnetic field that was strong enough to retain a life-sustaining atmosphere and simultaneously shield the planet from damaging solar radiation.

Exactly how early and robust this magnetic shield was is up for debate, though there has been evidence dating its existence to about 3.5 billion years ago.

“We wanted to see if we could extend this record back beyond 3.5 billion years and nail down how strong that early field was,” Nichols says.

In 2018, Nichols, then a postdoc in Weiss’ lab, set off with her team on an expedition to the Isua Supracrustal Belt, a 20-mile stretch of exposed rock formations surrounded by towering ice sheets in southwest Greenland. There, scientists have discovered the oldest preserved rocks on Earth, which have been extensively studied in hopes of answering a slew of scientific questions about Earth’s ancient conditions.

For Nichols and Weiss, the objective was to find rocks that still held signatures of the Earth’s magnetic field from when the rocks first formed. Rocks form over many millions of years, as grains of sediment and minerals accumulate and are progressively packed and buried under subsequent deposition. Any magnetic minerals in the deposits, such as iron oxides, follow the pull of the Earth’s magnetic field as they form. This collective orientation, and the imprint of the magnetic field, are preserved in the rocks.

However, this preserved magnetic field can be scrambled and completely erased if the rocks subsequently undergo extreme thermal or aqueous events such as hydrothermal activity or plate tectonics that can pressurize and crush up these deposits. Determining the age of a magnetic field in ancient rocks has therefore been a highly contested area of study.

To find rocks that had ideally remained unaltered since their original deposition, the team sampled rock formations in the Isua Supracrustal Belt, a remote location accessible only by helicopter.

“It’s about 150 kilometers away from the capital city, and you get helicoptered in, right up against the ice sheet,” Nichols says. “Here, you have the world’s oldest rocks essentially, surrounded by this dramatic expression of the ice age. It’s a really spectacular place.”

Dynamic history

The team returned to MIT with whole rock samples of banded iron formations — a rock type that appears as stripes of iron-rich and silica-rich rock. The iron-oxide minerals found in these rocks can act as tiny magnets that orient with any external magnetic field. Given their composition, the researchers suspect the rocks were originally formed in primordial oceans prior to the rise in atmospheric oxygen around 2.5 billion years ago.

“Back when there wasn’t oxygen in the atmosphere, iron didn’t oxidize so easily, so it was in solution in the oceans until it reached a critical concentration, when it precipitated out,” Nichols explains. “So, it’s basically a result of iron raining out of the oceans and depositing on the seafloor.”

“They’re very beautiful, weird rocks that don’t look like anything that forms on Earth today,” Weiss adds.

Previous studies had used uranium-lead dating to determine the age of the iron oxides in these rock samples. The ratio of uranium to lead (U-Pb) gives scientists an estimate of a rock’s age. This analysis found that some of the magnetized minerals were likely about 3.7 billion years old. The MIT team, in collaboration with researchers from Rensselaer Polytechnic Institute, showed in a paper published last year that the U-Pb age also dates the age of the magnetic record in these minerals.
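The U-Pb clock rests on the standard radioactive decay law: the daughter-to-parent ratio D/P accumulated in a closed system after time t satisfies D/P = e^(λt) − 1, so t = ln(1 + D/P)/λ. The sketch below illustrates the arithmetic for the ²³⁸U → ²⁰⁶Pb system (a textbook calculation with the standard decay constant, not the paper's actual analysis):

```python
import math

LAMBDA_238U = 1.55125e-10  # decay constant of 238U, per year (standard value)

def u_pb_age(pb206_u238_ratio):
    """Age in years from the radiogenic 206Pb/238U ratio: t = ln(1 + D/P) / lambda."""
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_238U

def ratio_at_age(t_years):
    """Inverse: the 206Pb/238U ratio a closed system accumulates after t years."""
    return math.exp(LAMBDA_238U * t_years) - 1.0

# A mineral ~3.7 billion years old should show a ratio of roughly 0.78.
r = ratio_at_age(3.7e9)
print(round(r, 3), round(u_pb_age(r) / 1e9, 2))  # ratio, then age recovered in Gyr
```

Measuring both the ²⁰⁶Pb/²³⁸U and ²⁰⁷Pb/²³⁵U systems in practice lets geochronologists cross-check that the mineral stayed a closed system.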

The researchers then set out to determine whether the ancient rocks preserved a record of the magnetic field from that far back, and how strong that field might have been.

“The samples we think are best and have that very old signature, we then demagnetize in the lab, in steps. We apply a laboratory field that we know the strength of, and we remagnetize the rocks in steps, so you can compare the gradient of the demagnetization to the gradient of the lab magnetization. That gradient tells you how strong the ancient field was,” Nichols explains.
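The gradient comparison Nichols describes is the core of Thellier-style paleointensity work: plot natural remanence lost against lab remanence gained, and the slope of that line, scaled by the known lab field, gives the ancient field strength. A minimal sketch with made-up, ideal step data (the field values echo the numbers in this article; the step data are invented for illustration):

```python
import numpy as np

def paleointensity(nrm_remaining, trm_gained, b_lab_ut):
    """Thellier-style estimate: ancient field = lab field * |slope| of
    natural-remanence loss versus lab-remanence gain (the Arai plot)."""
    slope = np.polyfit(trm_gained, nrm_remaining, 1)[0]
    return abs(slope) * b_lab_ut

# Synthetic ideal sample: remanence acquired in a 15 uT field, remagnetized
# in a 30 uT lab field, so NRM is lost at half the rate lab TRM is gained.
trm = np.linspace(0.0, 1.0, 6)   # fraction of lab TRM gained at each step
nrm = 1.0 - 0.5 * trm            # fraction of natural remanence remaining
print(round(paleointensity(nrm, trm, b_lab_ut=30.0), 2))  # -> 15.0
```

Real samples scatter off a perfect line, which is why demagnetizing and remagnetizing in many small steps matters: the quality of the linear fit is itself a check on whether the record survived intact.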

Through this careful process of remagnetization, the team concluded that the rocks likely harbored an ancient, 3.7-billion-year-old magnetic field, with a magnitude of at least 15 microtesla. Today, Earth’s magnetic field measures around 30 microtesla.

“It’s half the strength, but the same order of magnitude,” Nichols says. “The fact that it’s similar in strength as today’s field implies whatever is driving Earth’s magnetic field has not changed massively in power over billions of years.”

The team’s experiments also showed that the rocks retained the ancient field, despite having undergone two subsequent thermal events. Any extreme thermal event, such as a tectonic shake-up of the subsurface or hydrothermal eruptions, could potentially heat up and erase a rock’s magnetic field. But the team found that the iron in their samples likely oriented, then crystallized, 3.7 billion years ago, in some initial, extreme thermal event. Around 2.8 billion years ago, and then again at 1.5 billion years ago, the rocks may have been reheated, but not to the extreme temperatures that would have scrambled their magnetization.

“The rocks that the team has studied have experienced quite a bit during their long geological journey on our planet,” says Annique van der Boon, a planetary science researcher at the University of Oslo who was not involved in the study. “The authors have done a lot of work on constraining which geological events have affected the rocks at different times.” 

“The team have taken their time to deliver a very thorough study of these complex rocks, which do not give up their secrets easily,” says Andy Biggin, professor of geomagnetism at the University of Liverpool, who did not contribute to the study. “These new results tell us that the Earth’s magnetic field was alive and well 3.7 billion years ago. Knowing it was there and strong contributes a significant boundary constraint on the early Earth’s environment.”

The results also raise questions about how the ancient Earth could have powered such a robust magnetic field. While today’s field is powered by crystallization of the solid iron inner core, it’s thought that the inner core had not yet formed so early in the planet’s evolution.

“It seems like evidence for whatever was generating a magnetic field back then was a different power source from what we have today,” Weiss says. “And we care about Earth because there’s life here, but it’s also a touchstone for understanding other terrestrial planets. It suggests planets throughout the galaxy probably have lots of ways of powering a magnetic field, which is important for the question of habitability elsewhere.”

This research was supported, in part, by the Simons Foundation.


Professor Emeritus Bernhardt Wuensch, crystallographer and esteemed educator, dies at 90

A pioneer in solid-state ionics and materials science education, Wuensch is remembered for his thoughtful scholarship and grace in teaching and mentoring.


MIT Professor Emeritus Bernhardt Wuensch ’55, SM ’57, PhD ’63, a crystallographer and beloved teacher whose warmth and dedication to ensuring his students mastered the complexities of a precise science matched the analytical rigor he applied to the study of crystals, died this month in Concord, Massachusetts. He was 90.

Remembered fondly for his fastidious attention to detail and his office stuffed with potted orchids and towers of papers, Wuensch was an expert in X-ray crystallography, which involves shooting X-ray beams at crystalline materials to determine their underlying structure. He did pioneering work in solid-state ionics, investigating the movement of charged particles in solids that underpins technologies critical for batteries, fuel cells, and sensors. In education, he carried out a major overhaul of the curriculum in what is today MIT’s Department of Materials Science and Engineering (DMSE).

Despite his wide-ranging research and teaching interests, colleagues and students said, he was a perfectionist who favored quality over quantity.

“All the work he did, he wasn’t in a hurry to get a lot of stuff done,” says DMSE’s Professor Harry Tuller. “But what he did, he wanted to ensure was correct and proper, and that was characteristic of his research.”

Born in Paterson, New Jersey, in 1933, Wuensch first arrived at MIT as a first-year undergraduate in the 1950s. He earned bachelor’s and master’s degrees in physics before switching to crystallography and earning a PhD from what was then the Department of Geology (now Earth, Atmospheric and Planetary Sciences). He joined the faculty of the Department of Metallurgy in 1964 and saw its name change twice over his 46 years, retiring from DMSE in 2011.

As a professor of ceramics, Wuensch was a part of the 20th-century shift from a traditional focus on metals and mining to a broader class of materials that included polymers, ceramics, semiconductors, and biomaterials. In a 1973 letter supporting his promotion to full professor, then-department head Walter Owen credits Wuensch for contributing to “a completely new approach to the teaching of the structure of materials.”

His research led to major advancements in understanding how atomic-level structures affect magnetic and electrical properties of materials. For example, Tuller says, he was one of the first to detail how the arrangement of atoms in fast-ion conductors — materials used in batteries, fuel cells, and other devices — influences their ability to swiftly conduct ions.

Wuensch was a leading light in other areas, including diffusion, the movement of atoms and ions through a material, and neutron diffraction, aiming neutrons at materials to collect information about their atomic and magnetic structure.

Tuller, a DMSE faculty member for 49 years, tapped Wuensch’s expertise to study zinc oxide, a material used to make varistors, semiconducting components that protect circuits from high-voltage surges of electricity. Together, Tuller and Wuensch found that in such materials ions move much more rapidly along the grain boundaries — the interfaces between the crystallites that make up these polycrystalline ceramic materials.

“It’s what happens at those grain boundaries that actually limits the power that would go through your computer during a voltage surge by instead short-circuiting the current through these devices,” Tuller says. He credited the partnership with Wuensch for the knowledge. “He was instrumental in helping us confirm that we could engineer those grain boundaries by taking advantage of the very rapid diffusivity of impurity elements along those boundaries.”

In recognition of his accomplishments, Wuensch was elected a fellow of the American Ceramics Society and the Mineralogical Society of America and belonged to other professional associations, including The Electrochemical Society and Materials Research Society. In 2003 he was awarded an honorary doctorate from South Korea’s Hanyang University for his work in crystallography and diffusion-related phenomena in ceramic materials.

“A great, great teacher”

Known as “Bernie” to friends and colleagues, Wuensch was equally at home in the laboratory and the classroom. “He instilled in several generations of young scientists this ability to think deeply, be very careful about their research, and be able to stand behind it,” Tuller says.

One of those scientists is Sossina Haile ’86, PhD ’92, the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern University, a researcher of solid-state ionic materials who develops new types of fuel cells, devices that convert fuel into electricity.

Her introduction to Wuensch, in the 1980s, was his class 3.13 (Symmetry Theory). Haile was at first puzzled by the subject, the study of the symmetrical properties of crystals and their effects on material properties. The arrangement of atoms and molecules in a material is crucial for predicting how the material will behave in different situations — whether it will be strong enough for certain uses, for example, or can conduct electricity — but to an undergraduate it was “a little esoteric.”

“I certainly remember thinking to myself, ‘What is this good for?’” Haile says with a laugh. She would later return to MIT as a PhD student working alongside Wuensch in his laboratory with a renewed perspective.

“He just made seemingly esoteric topics really interesting and was very astute in knowing whether or not a student understood.” Haile describes Wuensch’s articulate speech, “immaculate” handwriting, and detailed chalkboard drawings of three-dimensional objects. His sketches were so skillful, she notes, that students were disappointed when they later compared them with the copies in their own notebooks.

“They couldn’t tell what it was,” Haile says. “It felt really clear during lecture, and it wasn’t clear afterwards because no one had a drawing as good as his.”

Carl Thompson, the Stavros V. Salapatas Professor in Materials Science and Engineering at DMSE, was another student of Wuensch’s who came away with a broadened outlook. In 3.13, Thompson recalls Wuensch asking students to look for symmetry outside of class, patterns in a brick wall or in subway station tiles. “He said, ‘This course will change the way you see the world,’ and it did. He was a great, great teacher.”

In a 2005 videorecorded session of 3.60 (Symmetry, Structure, and Tensor Properties of Materials), a graduate class that he taught for three decades, Wuensch writes his name on the board along with his telephone extension number, 6889, pointing out its rotational symmetry.

“You can pick it up, turn it head-over-heels by 180 degrees, and it’s mapped into coincidence with itself,” Wuensch said. “You might think I would have had to have fought for years to get it, an extension number like that, but no. It just happened to come my way.”

(The class can be watched in its entirety on MIT OpenCourseWare.)
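The symmetry Wuensch pointed out in 6889 can be checked mechanically: a digit string maps into itself under a 180-degree rotation exactly when reversing it and swapping 6↔9 (with 0, 1, and 8 mapping to themselves) reproduces the original. A small sketch of that check, offered purely as an aside:

```python
# Digits that map to a valid digit when the page is turned 180 degrees.
ROTATED = {"0": "0", "1": "1", "6": "9", "8": "8", "9": "6"}

def reads_same_upside_down(number):
    """True if the digit string maps into itself under a 180-degree rotation."""
    s = str(number)
    if any(d not in ROTATED for d in s):
        return False  # 2, 3, 4, 5, and 7 have no rotated counterpart
    return "".join(ROTATED[d] for d in reversed(s)) == s

print(reads_same_upside_down(6889))  # -> True
print(reads_same_upside_down(6880))  # -> False
```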

Wuensch also had a whimsical sense of humor, which he often exercised in the margins of his students’ papers, Haile says. In a LinkedIn tribute to him, she recalled a time she sent him a research manuscript with figures that was missing Figure 5 but referred to it in the text, writing that it plotted conductivity versus temperature.

“Bernie noted that figures don’t plot; people do, and evidently Figure 5 was missing because ‘it was off plotting somewhere,’” Haile wrote.

Reflecting on Wuensch’s legacy in materials science and engineering, Haile says his knowledge of crystallography and the manual analysis and interpretation he did in his time was critical. Today, materials science students use crystallographic software that automates the algorithms and calculations.

“The current students don’t know that analysis but benefit from it because people like Bernie made sure it got into the common vernacular at the time when code was being put together,” Haile said.

A multifaceted tenure

Wuensch served DMSE and MIT in innumerable other ways, sitting on departmental committees on curriculum development, graduate students, and policy, and on School of Engineering and Institute-level committees on education and foreign scholarships, among others. “He was always involved in any committee work he was asked to do,” Thompson says.

He was acting department head for six months starting in 1980, and from 1988 to 1993 he was the director of the Center for Materials Science and Engineering, an earlier iteration of today’s Materials Research Center.

For all his contributions, there are few things Wuensch was better known for at MIT than his office in Building 13, which had shelves lined with multicolored crystal lattice models, representing the arrangements of atoms in materials, and orchids he took meticulous care of. And then there was the cityscape of papers, piled in heaps on the floor, on his desk, on pullout extensions. Thompson says walking into his office was like navigating a canyon.

“He had so many stacks of paper that he had no place to actually work at his desk, so he would put things on his lap — he would start writing on his lap,” Haile says. “I remember calling him at one point in time and talking to him, and I said, ‘Bernie, you’re writing this down on your lap, aren’t you?’ And he said, ‘In fact, yes, I am.’”

Wuensch was also known for his kindness and decency. Angelita Mireles, graduate academic administrator at DMSE, says he was a popular pick for graduate students assembling committees for their thesis area examinations, which test how prepared students are to conduct doctoral research, “because he was so nice.”

That said, he had exacting standards. “He expected near perfection from his students, and that made them a lot deeper,” Tuller says.

Outside of MIT, Wuensch enjoyed tending his garden; collecting minerals, gemstones, and rare coins; and reading spy novels. Other pastimes included fishing and clamming in Maine, splitting his own firewood, and traveling with his wife, Mary Jane.

Wuensch is survived by his wife; son Stefan Wuensch and wife Wendy Joseph; daughter Katrina Wuensch and partner Jason Staly; and grandchildren Noemi and Jack.

Friends and family are invited to a memorial service Sunday, April 28, at 1:30 p.m. at Duvall Chapel at 80 Deaconess Road in Concord, Massachusetts. Memories or condolences can be posted at obits.concordfuneral.com/bernhardt-wuensch.


Researchers detect a new molecule in space

Such discoveries help researchers better understand the development of molecular complexity in space during star formation.


New research from the group of MIT Professor Brett McGuire has revealed the presence of a previously unknown molecule in space. The team’s open-access paper, “Rotational Spectrum and First Interstellar Detection of 2-Methoxyethanol Using ALMA Observations of NGC 6334I,” appears in the April 12 issue of The Astrophysical Journal Letters.

Zachary T.P. Fried, a graduate student in the McGuire group and the lead author of the publication, worked to assemble a puzzle composed of pieces collected from across the globe, extending beyond MIT to France, Florida, Virginia, and Copenhagen, to achieve this exciting discovery.

“Our group tries to understand what molecules are present in regions of space where stars and solar systems will eventually take shape,” explains Fried. “This allows us to piece together how chemistry evolves alongside the process of star and planet formation. We do this by looking at the rotational spectra of molecules, the unique patterns of light they give off as they tumble end-over-end in space. These patterns are fingerprints (barcodes) for molecules. To detect new molecules in space, we first must have an idea of what molecule we want to look for, then we can record its spectrum in the lab here on Earth, and then finally we look for that spectrum in space using telescopes.”

Searching for molecules in space

The McGuire Group has recently begun to utilize machine learning to suggest good target molecules to search for. In 2023, one of these machine learning models suggested the researchers target a molecule known as 2-methoxyethanol. 

“There are a number of 'methoxy' molecules in space, like dimethyl ether, methoxymethanol, ethyl methyl ether, and methyl formate, but 2-methoxyethanol would be the largest and most complex ever seen,” says Fried. To detect this molecule using radiotelescope observations, the group first needed to measure and analyze its rotational spectrum on Earth. The researchers combined experiments from the University of Lille (Lille, France), the New College of Florida (Sarasota, Florida), and the McGuire lab at MIT to measure this spectrum over a broadband region of frequencies ranging from the microwave to sub-millimeter wave regimes (approximately 8 to 500 gigahertz). 

The data gleaned from these measurements permitted a search for the molecule using Atacama Large Millimeter/submillimeter Array (ALMA) observations toward two separate star-forming regions: NGC 6334I and IRAS 16293-2422B. Members of the McGuire group analyzed these telescope observations alongside researchers at the National Radio Astronomy Observatory (Charlottesville, Virginia) and the University of Copenhagen, Denmark. 

“Ultimately, we observed 25 rotational lines of 2-methoxyethanol that lined up with the molecular signal observed toward NGC 6334I (the barcode matched!), thus resulting in a secure detection of 2-methoxyethanol in this source,” says Fried. “This allowed us to then derive physical parameters of the molecule toward NGC 6334I, such as its abundance and excitation temperature. It also enabled an investigation of the possible chemical formation pathways from known interstellar precursors.”
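In spirit, the “barcode” comparison is a frequency match: each observed spectral line is checked against the lab-measured rest frequencies within the instrument’s tolerance. A toy sketch of that matching step, using invented numbers (these are not real 2-methoxyethanol lines, and the real analysis also models line intensities and excitation conditions):

```python
def match_lines(observed_ghz, lab_catalog_ghz, tol_ghz=0.001):
    """Return observed frequencies that coincide with a lab-measured
    rest frequency to within a tolerance -- the 'barcode' comparison."""
    return [f for f in observed_ghz
            if any(abs(f - ref) <= tol_ghz for ref in lab_catalog_ghz)]

# Hypothetical frequencies purely for illustration.
lab = [96.112, 130.404, 217.883, 290.511]          # lab catalog (GHz)
obs = [96.1121, 150.0, 217.8832, 290.5108]          # telescope detections (GHz)
print(match_lines(obs, lab))  # three of the four observed lines match
```

A secure detection, as in the 25-line match toward NGC 6334I, requires many independent lines to coincide, since any single coincidence could be an interloping transition from another molecule.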

Looking forward

Molecular discoveries like this one help the researchers to better understand the development of molecular complexity in space during the star formation process. 2-methoxyethanol, which contains 13 atoms, is quite large by interstellar standards — as of 2021, only six species larger than 13 atoms had been detected outside the solar system, many by McGuire’s group, and all of them ringed structures.

“Continued observations of large molecules and subsequent derivations of their abundances allows us to advance our knowledge of how efficiently large molecules can form and by which specific reactions they may be produced,” says Fried. “Additionally, since we detected this molecule in NGC 6334I but not in IRAS 16293-2422B, we were presented with a unique opportunity to look into how the differing physical conditions of these two sources may be affecting the chemistry that can occur.”


Twenty-three MIT faculty honored as “Committed to Caring” for 2023-25

The honor recognizes professors for their outstanding mentorship of graduate students.


In the halls of MIT, a distinctive thread of compassion weaves through the fabric of education. As students adjust to a postpandemic normal, many professors have played a pivotal role by helping them navigate the realities of hybrid learning and a rapidly changing postgraduation landscape. 

The Committed to Caring (C2C) program at MIT is a student-driven initiative that celebrates faculty members who have served as exceptional mentors to graduate students. Twenty-three MIT professors have been selected as recipients of the C2C award for 2023-25, marking the most extensive cohort of honorees to date. These individuals join the ranks of 75 previous C2C honorees. 

The actions of these MIT faculty members over the past two years underscore their profound commitment to the well-being, growth, and success of their students. These educators go above and beyond their roles, demonstrating an unwavering dedication to mentorship, inclusion, and a holistic approach to student development. They aim to create a nurturing environment where students not only thrive academically, but also flourish personally. 

The following faculty members are the 2023-25 Committed to Caring honorees:

Since the founding of the C2C program in 2014 by the Office of Graduate Education, the nomination process for honorees has centered on student involvement. Graduate students from all departments are invited to submit nomination letters detailing professors’ outstanding mentorship practices. A committee of graduate students and staff members then selects individuals who have shown genuine contributions to MIT’s vibrant academic community through student mentorship.

The selection committee this year included: Maria Carreira (Biology), Rima Das (Mechanical Engineering), Ahmet Gulek (Economics), Bishal Thapa (Biological Engineering), Katie Rotman (Architecture), Dóra Takács (Linguistics), Dan Korsun (Nuclear Science and Engineering), Leslie Langston (Student Mental Health and Counseling), Patricia Nesti (MIT-Woods Hole Oceanographic Institution), Beth Marois (Office of Graduate Education [OGE]), Sara Lazo (OGE), and Chair Suraiya Baluch (OGE).  

This year’s nomination letters highlighted unique stories of how students felt supported by professors. Students noted their mentors’ commitment to frequent meetings despite their own busy personal lives, as well as their dedication to ensuring equal access to opportunities for underrepresented and underserved students.

Some wrote about their advisors’ careful consideration of students’ needs alongside their own when faced with professional advancement opportunities; others appreciated their active support for students in the LGBTQ+ community. Lastly, students reflected on their advisors’ encouragement for open and constructive discourse around the graduate unionization vote, showing a genuine desire to hear about graduate issues.

Baluch shared, “Working with the amazing selection committee was the highlight of my work year. I was so impressed by the thoughtful consideration each nomination received. Selecting the next round of C2C nominees is always a heartwarming experience.” 

“As someone who aspires to be a faculty member someday,” noted Das, “being on the selection committee … was a phenomenal opportunity in understanding the breadth and depth of possibility in how to be a caring mentor in academia.”

She continued, “It was so heartening to hear the different ways that these faculty members are going above and beyond their explicit research and teaching duties and the amazing impact that has made on so many students’ well-being and ability to be successful in graduate school.” 

The Committed to Caring program continues to reinforce MIT’s culture of mentorship, inclusion, and collaboration by recognizing the contributions of outstanding professors. In the coming months, news articles will feature pairs of honorees, and a reception will be held in May.


The many-body dynamics of cold atoms and cross-country running

Senior Olivia Rosenstein balances cross-country competitions with research in quantum gasses and early-universe radio wave signals.


Newton's third law of motion states that for every action, there is an equal and opposite reaction. The basic physics of running involves someone applying a force to the ground in the opposite direction of their sprint. 

For senior Olivia Rosenstein, cross-country running provides momentum for her studies as an experimental physicist working with 2D materials, optics, and computational cosmology.

An undergraduate researcher with Professor Richard Fletcher in his Emergent Quantum Matter Group, she is helping to build an erbium-lithium trap for studies of many-body physics and quantum simulation. The group’s focus during this past fall was increasing the trap’s number of erbium atoms and decreasing the atoms’ temperature while preparing the experiment’s next steps.

To this end, Rosenstein helped analyze the behavior of the apparatus’s magnetic fields, perform imaging of the atoms, and develop infrared (IR) optics for future stages of laser cooling, which the group is working on now.  

As she wraps up her time at MIT, she also credits her participation on MIT’s Cross Country team as the key to keeping up with her academic and research workload.

“Running is an integral part of my life,” she says. “It brings me joy and peace, and I am far less functional without it.”

First steps

Rosenstein’s parents — a special education professor and a university director of global education programs — encouraged her to explore a wide range of subjects, including math and science. Her early exposure to STEM included the University of Illinois Urbana-Champaign’s Engineering Outreach Society, a program in which engineering students visit local elementary schools.

At Urbana High School, she was a cross-country runner — three-year captain of varsity cross country and track, and a five-time Illinois All-State athlete — whose coach taught advanced placement biology. “He did a lot to introduce me to the physiological processes that drive aerobic adaptation and how runners train,” she recalls.

So, she was leaning toward studying biology and physiology when she was applying to colleges. At first, she wasn’t sure she was “smart enough” for MIT.

“I figured everyone at MIT was probably way too stressed, ultracompetitive, and drowning in psets [problem sets], proposals, and research projects,” she says. But once she had a chance to talk to MIT students, she changed her mind.

“MIT kids work hard not because we’re pressured to, but because we’re excited about solving that nagging pset problem, or we get so engrossed in the lab that we don’t notice an extra hour has passed. I learned that people put a lot of time into their living groups, dance teams, music ensembles, sports, activism, and every pursuit in between. As a prospective student, I got to talk to some future cross-country teammates too, and it was clear that people here truly enjoy spending time together.”

Drawn to physics

As a first year, she was intent on Course 20, but then she found herself especially engaged with class 8.022 (Physics II: Electricity and Magnetism), taught by Professor Daniel Harlow.

“I remember there was one time he guided us to a conclusion with completely logical steps, then proceeded to point out all of the inconsistencies in the theory, and told us that unfortunately we would need relativity and more advanced physics to explain it, so we would all need to take those courses and maybe a couple grad classes and then we could come back satisfied.

“I thought, ‘Well shoot, I guess I have to go to physics grad school now.’ It was mostly a joke at the time, but he successfully piqued my interest.”

She compared the course requirements for bioengineering with those for physics and found she was more drawn to the physics classes. Her time with remote learning also pushed her toward more hands-on activities.

“I realized I’m happiest when at least some of my work involves having something in front of me.”

The summer before her sophomore year, she worked in Professor Brian DeMarco’s lab at the University of Illinois in her hometown of Urbana.

“The group was constructing a trapped ion quantum computing apparatus, and I got to see how physics concepts could be used in practice,” she recalls. “I liked that experimentalists got to combine time studying theory with time building in the lab.”

She followed up with stints in Fletcher’s group, a MISTI internship in France with researcher Rebeca Ribeiro-Palau’s condensed matter lab, and an Undergraduate Research Opportunity Program project working on computational cosmology projects with Professor Mark Vogelsberger's group at the Kavli Institute for Astrophysics and Space Research, reviewing the evolution of galaxies and dark matter halos in self-interacting dark-matter simulations.

By the spring of her junior year, she was especially drawn to doing atomic, molecular, and optical (AMO) experiments in class 8.14 (Experimental Physics II), the second semester of Junior Lab.

“Experimental AMO is a lot of fun because you get to study very interesting physics — things like quantum superposition, using light to slow down atoms, and unexplored theoretical effects — while also building real-world, tangible systems,” she says. “Achieving a MOT [magneto-optical trap] is always an exciting phase in an experiment because you get to see quantum mechanics at work with your own eyes, and it’s the first step towards more complex manipulations of the atoms. Current AMO research will let us test concepts that have never been observed before, adding to what we know about how atoms interact at a fundamental level.” 

For the exploratory project, Rosenstein and her lab partner, Nicolas Tanaka, chose to build a MOT for rubidium using JLab’s ColdQuanta MiniMOT kit and laser locking through modulation transfer spectroscopy. The two presented at the class’s poster session to the department and won the annual Edward C. Pickering Award for Outstanding Original Project.

“We wanted the experience working with optics and electronics, as well as to create an experimental setup for future student use,” she says. “We got a little obsessed — at least one of us was in the lab almost every hour it was open for the final two weeks of class. Seeing a cloud of rubidium finally appear on our IR TV screen filled us with excitement, pride, and relief. I got really invested in building the MOT, and felt I could see myself working on projects like this for a long time in the future.”

She added, “I enjoyed the big questions being asked in cosmology, but couldn’t get over how much fun I had in the lab, getting to use my hands. I know some people can’t stand assembling optics, but it’s kind of like Legos for me, and I’m happy to spend an afternoon working on getting the mirror alignment just right and ignoring the outside world.”

As a senior, Rosenstein’s goal is to collect experience in experimental optics and cold atoms in preparation for PhD work. “I’d like to combine my passion for big physics questions and AMO experiments, perhaps working on fundamental physics tests using precision measurement, or tests of many-body physics.”

Simultaneously, she’s wrapping up her cosmology research, finishing a project in partnership with Katelin Schutz at McGill University, where they are testing a model to interpret 21-centimeter radio wave signals from the earliest stages of the universe and inform future telescope measurements. Her goal is to see how well an effective field theory (EFT) model can predict 21-centimeter fields with a limited amount of information.

“The EFT we’re using was originally applied to very large-scale simulations, and we had hoped it would still be effective for a set of smaller simulations, but we found that this is not the case. What we want to know now, then, is how much data the simulation would have to have for the model to work. The research requires a lot of data analysis, finding ways to extract and interpret meaningful trends,” Rosenstein says. “It’s even more exciting knowing that the effects we’re seeing are related to the story of our universe, and the tools we’re developing could be used by astronomers to learn even more.”

After graduation, she will spend her summer as an intern at a quantum computing company. She will then use her Fulbright award to spend a year at ENS Paris-Saclay before heading to Caltech for her PhD.

Running past a crisis 

Rosenstein credits her participation in cross country for getting her through the pandemic, which delayed her setting foot on MIT’s campus until spring 2021.

“The team did provide my main form of social interaction,” she says. “We were sad we didn’t get to compete, but I ran a time trial that was my fastest mile up to that point, which was a small win.”

In her sophomore year, her 38th-place finish at nationals secured her a spot as a National Collegiate Athletic Association All-American in her first collegiate cross-country season. A stress fracture curtailed her running for a time before she returned to place 12th as an NCAA DIII All-American. (The women’s team placed seventh overall, and the men’s team won MIT’s first NCAA national title.) When another injury sidelined her, she mentored first-year students as team captain and stayed engaged however she could, biking and swimming to maintain training. She hopes to keep running in her life.

“Both running and physics deal a lot with delayed gratification: You’re not going to run a personal record every day, and you’re not going to publish a groundbreaking discovery every day. Sometimes you might go months or even years without feeling like you’ve made a big jump in your progress. If you can’t take that, you won’t make it as a runner or as a physicist.

“Maybe that makes it sound like runners and physicists are just grinding away, enduring constant suffering in pursuit of some grand goal. But there’s a secret: It isn’t suffering. Running every day is a privilege and a chance to spend time with friends, getting away from other work. Aligning optics, debugging code, and thinking through complex problems isn’t a day in the life of a masochist, just a satisfying Wednesday afternoon.”

She adds, “Cross country and physics both require a combination of naive optimism and rigorous skepticism. On the one hand, you have to believe you’re fully capable of winning that race or getting those new results, otherwise, you might not try at all. On the other hand, you have to be brutally honest about what it’s going to take because those outcomes won’t happen if you aren’t diligent with your training or if you just assume your experimental setup will work exactly as planned. In all, running and physics both consist of minute daily progress that integrates to a big result, and every infinitesimal segment is worth appreciating.”


Researching extreme environments

PhD candidate Emma Bullock studies the local and global impacts of changing mineral levels in Arctic groundwater.


A quick scan of Emma Bullock’s CV reads like those of many other MIT graduate students: She has served as a teaching assistant, written several papers, garnered grants from prestigious organizations, and acquired extensive lab and programming skills. But one skill sets her apart: “fieldwork experience and survival training for Arctic research.”

That’s because Bullock, a doctoral student in chemical oceanography at the Woods Hole Oceanographic Institution (WHOI), spends significant time collecting samples in the Arctic Circle for her research. Working in such an extreme environment requires comprehensive training in everything from Arctic gear usage and driving on unpaved roads to handling wildlife encounters — like the curious polar bear that got into her team’s research equipment.

To date, she has ventured to Prudhoe Bay, Alaska, five times, where she typically spends long days — from 5 a.m. to 11 p.m. — collecting and processing samples from Simpson Lagoon. Her work focuses on Arctic environmental changes, particularly the effects of permafrost thaw on mercury levels in groundwater.

“Even though I am doing foundational science, I can link it directly to communities in that region that are going to be impacted by the changes that we are seeing,” she says. “As the mercury escapes from the permafrost, it has the potential to impact not just Arctic communities but also anyone who eats fish in the entire world.”

Weathering a storm of setbacks

Growing up in rural Vermont, Bullock spent a lot of time outside, and she attributes her strong interest in environmental studies to her love of nature as a child. Despite her conviction about a career path involving the environment, her path to the Institute has not been easy. In fact, Bullock weathered several challenges and setbacks on the road to MIT.

As an undergraduate at Haverford College, Bullock quickly recognized that she did not have the same advantages as other students. She realized that her biggest challenge in pursuing an academic career was her socioeconomic background. She says, “In Vermont, the cost of living is a bit lower than a lot of other areas. So, I didn’t quite realize until I got to undergrad that I was not as middle-class as I thought.” Bullock had learned financial prudence from her parents, which informed many of the decisions she made as a student. She says, “I didn’t have a phone in undergrad because it was a choice between getting a good laptop that I could do research on or a phone. And so I went with the laptop.”

Bullock majored in chemistry because Haverford did not offer an environmental science major. To gain experience in environmental research, she joined the lab of Helen White, focusing on the use of silicone bands as passive samplers of volatile organic compounds in honeybee hives. A pivotal moment occurred when Bullock identified errors in a collaborative project. She says, “[Dr. White and I] brought the information about flawed statistical tests to the collaborators, who were all men. They were not happy with that. They made comments that they did not like being told how to do chemistry by women.”

White sat Bullock down and explained the pervasiveness of sexism in this field. “She said, ‘You have to remember that it is not you. You are a good scientist. You are capable,’” Bullock recalls. That experience strengthened her resolve to become an environmental scientist. “The way that Dr. Helen White approached dealing with this problem made me want to stick in the STEM field, and in the environmental and geochemistry fields specifically. It made me realize that we need more women in these fields,” she says.

As she reached the end of college, Bullock knew that she wanted to continue her educational journey in environmental science. “Environmental science impacts the world around us in such visible ways, especially now with climate change,” she says. She submitted applications to many graduate programs, including to MIT, which was White’s alma mater, but was rejected by all of them.

Undeterred, Bullock decided to get more research experience. She took a position as a lab technician at the Max Planck Institute of Marine Microbiology in Bremen, Germany, where she studied methane emissions from seagrass beds — her first foray into chemical oceanography. A year later, she applied to graduate schools again and was accepted by nearly all of the programs, including MIT. She hopes her experience can serve as a lesson for future applicants. “Just because you get rejected the first time does not mean that you’re not a good candidate. It just means that you may not have the right experience or that you didn’t understand the application process correctly,” she says.

Understanding the ocean through the lens of chemistry

Ultimately, Bullock chose MIT because she was most interested in the specific scientific projects within the program and liked the sense of community. “It is a very unique program because we have the opportunity to take classes at MIT and access to the resources that MIT has, but we also perform research at Woods Hole,” she says. Some people warned her about the cutthroat nature of the Institute, but Bullock has found the exact opposite to be true. “A lot of people think of MIT, and they think it is one of those top-tier schools, so it must be competitive. My experience in this program is that it is very collaborative because our research is so individual and unique that you really can’t be competitive. What you are doing is so different from any other student,” she says.

Bullock joined the group of Matthew Charette, senior scientist and director of the WHOI Sea Grant Program, which investigates the ocean through a chemical lens by characterizing the Arctic groundwater sampled during field campaigns in Prudhoe Bay, Alaska. Bullock analyzes mercury and biotoxic methylmercury levels impacted by permafrost thaw, which is already affecting the health of Arctic communities. For comparison, Bullock points to mercury-based dental fillings, which have been the subject of scientific scrutiny for health impacts. She says, “You get more mercury by eating sushi and tuna and salmon than you would by having a mercury-based dental filling.”

Promoting environmental advocacy

Bullock has been recognized as an Arctic PASSION Ambassador for her work in the historically underresearched Arctic region. As part of this program, she was invited to participate in a “sharing circle,” which connected early-career scientists with Indigenous community members, and then empowered them to pass what they learned about the importance of Arctic research on to their communities. This experience has been the highlight of her PhD journey so far. She says, “It was small enough, and the people there were invested enough in the issues that we got to have very interesting, dynamic conversations, which doesn’t always happen at typical conferences.”

Bullock has also spearheaded her own form of environmental activism via a project called en-justice, which she launched in September 2023. Through a website and a traveling art exhibit, the project showcases portraits and interviews of lesser-known environmental advocates who “have arguably done more for the environment but are not as famous” as household names like Greta Thunberg and Leonardo DiCaprio.

“They are doing things like going to town halls, arguing with politicians, getting petitions signed … the very nitty-gritty type work. I wanted to create a platform that highlighted some of these people from around the country but also inspired people in their own communities to try and make a change,” she says. Bullock has also written an op-ed for the WHOI magazine, Oceanus, and has served as a staff writer for the MIT-WHOI Joint Program newsletter, “Through the Porthole.”

After she graduates this year, Bullock plans to continue her focus on the Arctic. She says, “I find Arctic research very interesting, and there are so many unanswered research questions.” She also aspires to foster further interactions like the sharing circle.

“Trying to find a way where I can help facilitate Arctic communities and researchers in terms of finding each other and finding common interests would be a dream role. But I don’t know if that job exists,” Bullock says. Given her track record of overcoming obstacles, odds are, she will turn these aspirations into reality.


New major crosses disciplines to address climate change

Combining engineering, earth system science, and the social sciences, Course 1-12 prepares students to develop climate solutions.


Lauren Aguilar knew she wanted to study energy systems at MIT, but before Course 1-12 (Climate System Science and Engineering) became a new undergraduate major, she didn't see an obvious path to study the systems aspects of energy, policy, and climate associated with the energy transition.

Aguilar was drawn to the new major, which was jointly launched by the departments of Civil and Environmental Engineering (CEE) and Earth, Atmospheric and Planetary Sciences (EAPS) in 2023. There, she could take engineering systems classes while building her knowledge of climate science.

“Having climate knowledge enriches my understanding of how to build reliable and resilient energy systems for climate change mitigation. Understanding upon what scale we can forecast and predict climate change is crucial to build the appropriate level of energy infrastructure,” says Aguilar.

The interdisciplinary structure of the 1-12 major has students engaging with and learning from professors in different disciplines across the Institute. The blended major was designed to provide a foundational understanding of the Earth system and engineering principles — as well as an understanding of human and institutional behavior as it relates to the climate challenge. Students learn the fundamental sciences through subjects like an atmospheric chemistry class focused on the global carbon cycle or a physics class on low-carbon energy systems. The major also covers topics in data science and machine learning as they relate to forecasting climate risks and building resilience, in addition to policy, economics, and environmental justice studies.

Junior Ananda Figueiredo was one of the first students to declare the 1-12 major. Her decision to change majors stemmed from a motivation to improve people’s lives, especially when it comes to equality. “I like to look at things from a systems perspective, and climate change is such a complicated issue connected to many different pieces of our society,” says Figueiredo.

A multifaceted field of study

The 1-12 major prepares students with the necessary foundational expertise across disciplines to confront climate change. Andrew Babbin, an academic advisor in the new degree program and the Cecil and Ida Green Career Development Associate Professor in EAPS, says the new major harnesses rigorous training encompassing science, engineering, and policy to design and execute a way forward for society.

Within its first year, Course 1-12 has attracted students with a diverse set of interests, ranging from machine learning for sustainability to nature-based solutions for carbon management to developing the next renewable energy technology and integrating it into the power system.

Academic advisor Michael Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering, says the best part of this degree is the students, and the enthusiasm and optimism they bring to the climate challenge.

“We have students seeking to impact policy and students double-majoring in computer science. For this generation, climate change is a challenge for today, not for the future. Their actions inside and outside the classroom speak to the urgency of the challenge and the promise that we can solve it,” Howland says.

The degree program also leaves plenty of space for students to develop and follow their interests. Sophomore Katherine Kempff began this spring semester as a 1-12 major interested in sustainability and renewable energy. Kempff was worried she wouldn’t be able to finish 1-12 once she made the switch to a different set of classes, but Howland assured her there would be no problems, based on the structure of 1-12.

“I really like how flexible 1-12 is. There's a lot of classes that satisfy the requirements, and you are not pigeonholed. I feel like I'm going to be able to do what I'm interested in, rather than just following a set path of a major,” says Kempff.

Kempff is leveraging the skills she developed this semester and exploring different career interests. She is interviewing for sustainability and energy-sector internships in Boston and at MIT this summer, and is particularly interested in assisting MIT in meeting its new sustainability goals.

Engineering a sustainable future

The new major dovetails with MIT’s commitment to addressing climate change and its steps to prioritize and enhance climate education. As the Institute continues making strides to accelerate solutions, students can play a leading role in changing the future.

“Climate awareness is critical to all MIT students, most of whom will face the consequences of the projection models for the end of the century,” says Babbin. “One-12 will be a focal point of the climate education mission to train the brightest and most creative students to engineer a better world and understand the complex science necessary to design and verify any solutions they invent."

Justin Cole, who transferred to MIT in January from the University of Colorado, served in the U.S. Air Force for nine years. Over the course of his service, he had a front row seat to the changing climate. From helping with the wildfire cleanup in Black Forest, Colorado — after the state's most destructive fire at the time — to witnessing two category 5 typhoons in Japan in 2018, Cole's experiences of these natural disasters impressed upon him that climate security was a prerequisite to international security. 

Cole was recently accepted into the MIT Energy and Climate Club Launchpad initiative where he will work to solve real-world climate and energy problems with professionals in industry.

“All of the dots are connecting so far in my classes, and all the hopes that I have for studying the climate crisis and the solutions to it at MIT are coming true,” says Cole.

As the field grows, there is rising demand for scientists and engineers who have both deep knowledge of environmental and climate systems and expertise in methods for climate change mitigation.

“Climate science must be coupled with climate solutions. As we experience worsening climate change, the environmental system will increasingly behave in new ways that we haven’t seen in the past,” says Howland. “Solutions to climate change must go beyond good engineering of small-scale components. We need to ensure that our system-scale solutions are maximally effective in reducing climate change, but are also resilient to climate change. And there is no time to waste,” he says.


Four MIT faculty named 2023 AAAS Fellows

Engelward, Oliver, Rothman, and Vuletić are recognized for their efforts to advance science.


Four MIT faculty members have been elected as fellows of the American Association for the Advancement of Science (AAAS).

The 2023 class of AAAS Fellows includes 502 scientists, engineers, and innovators across 24 scientific disciplines, who are being recognized for their scientifically and socially distinguished achievements.  

Bevin Engelward initiated her scientific journey at Yale University under the mentorship of Thomas Steitz; following this, she pursued her doctoral studies at the Harvard School of Public Health under Leona Samson. In 1997, she became a faculty member at MIT, contributing to the establishment of the Department of Biological Engineering. Engelward’s research focuses on understanding DNA sequence rearrangements and developing innovative technologies for detecting genomic damage, all aimed at enhancing global public health initiatives.

William Oliver is the Henry Ellis Warren Professor of Electrical Engineering and Computer Science with a joint appointment in the Department of Physics, and was recently a Lincoln Laboratory Fellow. He serves as director of the Center for Quantum Engineering and associate director of the Research Laboratory of Electronics, and is a member of the National Quantum Initiative Advisory Committee. His research spans the materials growth, fabrication, 3D integration, design, control, and measurement of superconducting qubits and their use in small-scale quantum processors. He also develops cryogenic packaging and control electronics involving cryogenic complementary metal-oxide-semiconductors and single-flux quantum digital logic.

Daniel Rothman is a professor of geophysics in the Department of Earth, Atmospheric, and Planetary Sciences and co-director of the MIT Lorenz Center, a privately funded interdisciplinary research center devoted to learning how climate works. As a theoretical scientist, Rothman studies how the organization of the natural world emerges from the interactions of life and the physical environment. Using mathematics and statistical and nonlinear physics, he builds models that predict or explain observational data, contributing to our understanding of the dynamics of the carbon cycle and climate, instabilities and tipping points in the Earth system, and the dynamical organization of the microbial biosphere.

Vladan Vuletić is the Lester Wolfe Professor of Physics. His research areas include ultracold atoms, laser cooling, large-scale quantum entanglement, quantum optics, precision tests of physics beyond the Standard Model, and quantum simulation and computing with trapped neutral atoms. His Experimental Atomic Physics Group is also affiliated with the MIT-Harvard Center for Ultracold Atoms and the Research Laboratory of Electronics. In 2020, his group showed that the precision of current atomic clocks could be improved by entangling the atoms — a quantum phenomenon by which particles are coerced to behave in a collective, highly correlated state. 


Erin Kara named Edgerton Award winner

The award recognizes exceptional distinction in teaching, research, and service at MIT.


Class of 1958 Career Development Assistant Professor Erin Kara of the Department of Physics has been named the recipient of the 2023-24 Harold E. Edgerton Faculty Achievement Award.
 
Established in 1982, the award is a tribute to the late Institute Professor Emeritus Harold E. Edgerton for his support for younger faculty members. This award recognizes exceptional distinction in teaching, research, and service.

Professor Kara is an observational astrophysicist who is a faculty member in the Department of Physics and a member of the MIT Kavli Institute for Astrophysics and Space Research (MKI). She uses high-energy transients and time-variable phenomena to understand the physics behind how black holes grow and how they affect their environments.

Kara has advanced a new technique called X-ray reverberation mapping, which allows astronomers to map the gas falling onto black holes and measure the effects of strongly curved spacetime close to the event horizon. She also works on a variety of transient phenomena, such as tidal disruption events and galactic black hole outbursts.

She is a NASA Participating Scientist for the XRISM Observatory, a joint JAXA/NASA X-ray spectroscopy mission that just launched this past September, and is a NASA Participating Scientist for the ULTRASAT Mission, an ultraviolet all-sky time domain mission, set to launch in 2027. She is also working to develop and launch the next generation of NASA missions, as deputy principal investigator of the AXIS Probe Mission.

“I am delighted for Erin,” says Claude Canizares, the Bruno Rossi Professor of Physics. “She is an exemplary Edgerton awardee. As one of the leading observational astrophysicists of her generation, she has made major advances in our understanding of black holes and their environments. She also plays a leadership role in the design of new space missions, is a passionate and effective teacher, and a thoughtful mentor of graduate students and postdocs.”

Adds Kavli Director Rob Simcoe, “Erin is one of a very rare breed of experimental astrophysicists who have the interest and stamina not only to use observatories built by colleagues before her, but also to dive into a leadership role planning and executing new spaceflight missions that will shape the future of her field.”

The committee also recognized Kara’s work to create “a stimulating and productive multigenerational research group. Her mentorship is thoughtful and intentional, guiding and supporting each student or postdoc while giving them the freedom to grow and become self-reliant.”

During the nomination process, students praised Kara’s teaching skills, enthusiasm, organization, friendly demeanor, and knowledge of the material.

“Erin is the best faculty mentor I have ever had,” says one of her students. “She is supportive, engaged, and able to provide detailed input on projects when needed, but also gives the right amount of freedom to her students/postdocs to aid in their development. Working with Erin has been one of the best parts of my time at MIT.”

Kara received a BA in physics from Barnard College, and an MPhil in physics and a PhD in astronomy from the Institute of Astronomy at Cambridge University. She subsequently served as Hubble Postdoctoral Fellow and then Neil Gehrels Prize Postdoctoral Fellow at the University of Maryland and NASA’s Goddard Space Flight Center. She joined the MIT faculty in 2019.

Her recognitions include the American Astronomical Society’s Newton Lacy Pierce Prize, for “outstanding achievement, over the past five years, in observational astronomical research,” and the Rossi Prize from the High-Energy Astrophysics Division of the AAS (shared).

The award committee lauded Kara’s service in the field and at MIT, including her participation with the Physics Graduate Admissions Committee, the Pappalardo Postdoctoral Fellowship Committee, and the MKI Anti-Racism Task Force. Professor Kara also participates in dinners and meet-and-greets hosted by student groups, such as Undergraduate Women in Physics, Graduate Women in Physics, and the Society of Physics Students.

Her participation in public outreach programs includes her talks “Black Hole Echoes and the Music of the Cosmos” at both the Concord Conservatory of Music and an event with MIT School of Science alumni, and “What’s for dinner? How black holes eat nearby stars” for the MIT Summer Research Program.

“There is nothing more gratifying than being recognized by your peers, and I am so appreciative and touched that my colleagues in physics even thought to nominate me for this award,” says Kara. “I also want to express my gratitude to my awesome research group. They are what makes this job so fun and so rewarding, and I know I wouldn’t be in this position without their hard work, great attitudes, and unwavering curiosity.” 


Women in STEM — A celebration of excellence and curiosity

An MIT Values event showcased three women's career journeys and how they are paving the way for the next generation.


What better way to commemorate Women's History Month and International Women's Day than to give three of the world’s most accomplished scientists an opportunity to talk about their careers? On March 7, MindHandHeart invited professors Paula Hammond, Ann Graybiel, and Sangeeta Bhatia to share their career journeys, from the progress they have witnessed to the challenges they have faced as women in STEM. Their conversation was moderated by Mary Fuller, chair of the faculty and professor of literature.

Hammond, an Institute professor with appointments in the Department of Chemical Engineering and the Koch Institute for Integrative Cancer Research, reflected on the strides made by women faculty at MIT, while acknowledging ongoing challenges. “I think that we have advanced a great deal in the last few decades in terms of the numbers of women who are present, although we still have a long way to go,” Hammond noted in her opening. “We’ve seen a remarkable increase over the past couple of decades in our undergraduate population here at MIT, and now we’re beginning to see it in the graduate population, which is really exciting.” Hammond was recently appointed to the role of vice provost for faculty.

Ann Graybiel, also an Institute professor, who has appointments in the Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research, described growing up in the Deep South. “Girls can’t do science,” she remembers being told in school, and they “can’t do research.” Yet her father, a physician scientist, often took her with him to work and had her assist from a young age, eventually encouraging her directly to pursue a career in science. Graybiel, who first came to MIT in 1973, noted that she continued to face barriers and rejection throughout her career long after leaving the South, but that individual gestures of inspiration, generosity, or simple statements of “You can do it” from her peers helped her power through and continue in her scientific pursuits. 

Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science, director of the Marble Center for Cancer Nanomedicine at the Koch Institute for Integrative Cancer Research, and a member of the Institute for Medical Engineering and Science, is also the mother of two teenage girls. She shared her perspective on balancing career and family life: “I wanted to pick up my kids from school and I wanted to know their friends. … I had a vision for the life that I wanted.” Setting boundaries at work, she noted, empowered her to achieve both personal and professional goals. Bhatia also described her collaboration with President Emerita Susan Hockfield and MIT Amgen Professor of Biology Emerita Nancy Hopkins to spearhead the Future Founders Initiative, which aims to boost the representation of female faculty members pursuing biotechnology ventures.

A video of the full panel discussion is available on the MindHandHeart YouTube channel.


From neurons to learning and memory

Mark Harnett investigates how electrical activity in mammalian cortical cells helps to produce neural computations that give rise to behavior.


Mark Harnett, an associate professor at MIT, still remembers the first time he saw electrical activity spiking from a living neuron.

He was a senior at Reed College and had spent weeks building a patch clamp rig — an experimental setup with an electrode that can be used to gently probe a neuron and measure its electrical activity.

“The first time I stuck one of these electrodes onto one of these cells and could see the electrical activity happening in real time on the oscilloscope, I thought, ‘Oh my God, this is what I’m going to do for the rest of my life. This is the coolest thing I’ve ever seen!’” Harnett says.

Harnett, who recently earned tenure in MIT’s Department of Brain and Cognitive Sciences, now studies the electrical properties of neurons and how these properties enable neural circuits to perform the computations that give rise to brain functions such as learning, memory, and sensory perception.

“My lab’s ultimate goal is to understand how the cortex works,” Harnett says. “What are the computations? How do the cells and the circuits and the synapses support those computations? What are the molecular and structural substrates of learning and memory? How do those things interact with circuit dynamics to produce flexible, context-dependent computation?”

“We go after that by looking at molecules, like synaptic receptors and ion channels, all the way up to animal behavior, and building theoretical models of neural circuits,” he adds.

Influence on the mind

Harnett’s interest in science was sparked in middle school, when he had a teacher who made the subject come to life. “It was middle school science, which was a lot of just mixing random things together. It wasn’t anything particularly advanced, but it was really fun,” he says. “Our teacher was just super encouraging and inspirational, and she really sparked what became my lifelong interest in science.”

When Harnett was 11, his father got a new job at a technology company in Minneapolis and the family moved from New Jersey to Minnesota, which proved to be a difficult adjustment. When choosing a college, Harnett decided to go far away, and ended up choosing Reed College, a school in Portland, Oregon, that encourages a great deal of independence in both academics and personal development.

“Reed was really free,” he recalls. “It let you grow into who you wanted to be, and try things, both for what you wanted to do academically or artistically, but also the kind of person you wanted to be.”

While in college, Harnett enjoyed both biology and English, especially Shakespeare. His English professors encouraged him to go into science, believing that the field needed scientists who could write and think creatively. He was interested in neuroscience, but Reed didn’t have a neuroscience department, so he took the closest subject he could find — a course in neuropharmacology.

“That class totally blew my mind. It was just fascinating to think about all these pharmacological agents, be they from plants or synthetic or whatever, influencing how your mind worked,” Harnett says. “That class really changed my whole way of thinking about what I wanted to do, and that’s when I decided I wanted to become a neuroscientist.”

For his senior research thesis, Harnett joined an electrophysiology lab at Oregon Health Sciences University (OHSU), working with Professor Larry Trussell, who studies synaptic transmission in the auditory system. That lab was where he first built and used a patch clamp rig to measure neuron activity.

After graduating from college, he spent a year as a research technician in a lab at the University of Minnesota, then returned to OHSU to work in a different research lab studying ion channels and synaptic physiology. Eventually he decided to go to graduate school, ending up at the University of Texas at Austin, where his future wife was studying public policy.

For his PhD research, he studied the neurons that release the neuromodulator dopamine and how they are affected by drugs of abuse and addiction. However, once he finished his degree, he decided to return to studying the biophysics of computation, which he pursued during a postdoc at the Howard Hughes Medical Institute Janelia Research Campus with Jeff Magee.

A broad approach

When he started his lab at MIT’s McGovern Institute in 2015, Harnett set out to expand his focus. While the physiology of ion channels and synapses forms the basis of much of his lab’s work, they connect these processes to neuronal computation, cortical circuit operation, and higher-level cognitive functions.

Electrical impulses that flow between neurons, allowing them to communicate with each other, are produced by ion channels that control the flow of ions such as potassium and sodium. In a 2021 study, Harnett and his students discovered that human neurons have a much smaller number of these channels than expected, compared to the neurons of other mammals.

This reduction in density may have evolved to help the brain operate more efficiently, allowing it to divert resources to other energy-intensive processes that are required to perform complex cognitive tasks. Harnett’s lab has also found that in human neurons, electrical signals weaken as they flow along dendrites, meaning that small sections of dendrites can form units that perform individual computations within a neuron.

Harnett’s lab also recently discovered, to their surprise, that the adult brain contains millions of “silent synapses” — immature connections that remain inactive until they’re recruited to help form new memories. The existence of these synapses offers a clue to how the adult brain is able to continually form new memories and learn new things without having to modify mature synapses.

Many of these projects fall into areas that Harnett didn’t necessarily envision himself working on when he began his faculty career, but they naturally grew out of the broad approach he wanted to take to studying the cortex. To that end, he sought to bring people to the lab who wanted to work at different levels — from molecular physiology up to behavior and computational modeling.

As a postdoc studying electrophysiology, Harnett spent most of his time working alone with his patch clamp device and two-photon microscope. While that type of work still goes on in his lab, the overall atmosphere is much more collaborative and convivial, and as a mentor, he likes to give his students broad leeway to come up with their own projects that fit in with the lab’s overall mission.

“I have this incredible, dynamic group that has been really great to work with. We take a broad approach to studying the cortex, and I think that’s what makes it fun,” he says. “Working with the folks that I’ve been able to recruit — grad students, techs, undergrads, and postdocs — is probably the thing that really matters the most to me.”


MIT tops among single-campus universities in US patents granted

For the 10th consecutive year, the Institute ranks No. 2 among all colleges and No. 1 among colleges with one main campus, underscoring the impact of innovation and the critical role of technology transfer.


In an era defined by unprecedented challenges and opportunities, MIT remains at the forefront of pioneering research and innovation.

The Institute's relentless pursuit of knowledge has once again been recognized, with MIT securing 365 utility patents issued by the United States Patent and Trademark Office in 2023. This marks the 10th consecutive year that the National Academy of Inventors has both ranked worldwide colleges for number of U.S. patents issued and recognized MIT as the top single-campus university for patents granted. (The University of California system, which comprises 10 campuses and six academic health centers across the state, is No. 1 overall.)

Technology transfer is at the core of MIT’s mission to advance knowledge for the benefit of the world, and the Technology Licensing Office (TLO) plays a transformative role in bridging the gap between groundbreaking research and societal impact. Impact is recognized in many ways through startups, small- to medium-sized companies, and large corporations. The TLO's efforts in patenting and licensing are vital for transforming academic discoveries into practical solutions that address societal needs, drive economic growth, and create new opportunities. 

Each year, the TLO receives over 600 invention disclosures, resulting in a high volume of issued patents. The TLO's ongoing strategic licensing efforts bolster MIT’s endeavors across six clear impact areas: healthy living, sustainable futures, connected worlds, advanced materials, climate stabilization, and the exploration of uncharted frontiers. These areas, intentionally curated to reflect the interests and priorities of MIT’s faculty and research staff, drive meaningful change through translation and tech transfer. 

Lesley Millar-Nicholson, the executive director of the TLO, further underscores the importance of aligning efforts with President Sally Kornbluth’s vision for MIT. “Our collaborative efforts ensure that the innovations born here at MIT make a difference across the globe, addressing some of the most pressing challenges of our time,” Millar-Nicholson states. “This reflects a shared commitment to what Kornbluth described in her inaugural address about climate change, ‘... [this is] the kind of grand creative enterprise in which the energy you release together is greater than what you each put in. A nuclear fusion of problem-solving and possibility!’” 

Verdox and Cognito Therapeutics are two of the many startups that epitomize a grand creative enterprise. Verdox, a startup from the lab of T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering Practice and director of the David H. Koch School of Chemical Engineering Practice, is on a mission to combat climate change by capturing carbon dioxide with unrivaled efficiency using electricity. Cognito, which sprang from the labs of Li-Huei Tsai, professor of neuroscience and director of the Picower Institute for Learning and Memory, and Edward Boyden, the Y. Eva Tan Professor in Neurotechnology and member of the McGovern Institute for Brain Research, pioneers treatments for neurodegenerative diseases, including dementias, offering Alzheimer's patients a beacon of noninvasive hope with their neuro-stimulatory therapy. These enterprises, just two of many that have licensed and are developing MIT’s intellectual property, embody the spirit of MIT — they are not merely companies; they are catalysts for a more sustainable, healthier world. 

Technology Licensing Officer Nestor Franco highlights the daily journey of MIT’s research from concept to commercialization: “Our commitment to out-license these innovations not only amplifies MIT's contribution to global progress but also reinforces our dedication to societal betterment,” he says.

As MIT continues to push the boundaries of what is possible, from deep space to quantum computing, the TLO remains a cornerstone of the Institute's strategy for impact.  

To explore the cutting-edge technologies emerging from MIT, visit patents.mit.edu. Here, you can discover the innovations available for licensing that are set to shape the future. To delve deeper into the work and initiatives of the TLO, and to understand how MIT's inventions are transformed into societal solutions, visit tlo.mit.edu.


A crossroads for computing at MIT

The MIT Schwarzman College of Computing building will form a new cluster of connectivity across a spectrum of disciplines in computing and artificial intelligence.


On Vassar Street, in the heart of MIT’s campus, the MIT Stephen A. Schwarzman College of Computing recently opened the doors to its new headquarters in Building 45. The building’s central location and welcoming design will help form a new cluster of connectivity at MIT and enable the space to have a multifaceted role. 

“The college has a broad mandate for computing across MIT,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “The building is designed to be the computing crossroads of the campus. It’s a place to bring a mix of people together to connect, engage, and catalyze collaborations in computing, and a home to a related set of computing research groups from multiple departments and labs.”

“Computing is the defining technology of our time and it will continue to be, well into the future,” says MIT President Sally Kornbluth. “As the people of MIT make progress in high-impact fields from AI to climate, this fantastic new building will enable collaboration across computing, engineering, biological science, economics, and countless other fields, encouraging the cross-pollination of ideas that inspires us to generate fresh solutions. The college has opened its doors at just the right time.”

A physical embodiment

An approximately 178,000-square-foot, eight-floor structure, the building is designed to be a physical embodiment of the MIT Schwarzman College of Computing’s three-fold mission: strengthen core computer science and artificial intelligence; infuse the forefront of computing with disciplines across MIT; and advance social, ethical, and policy dimensions of computing.

Oriented for the campus community and the public to come in and engage with the college, the first two floors of the building encompass multiple convening areas, including a 60-seat classroom, a 250-seat lecture hall, and an assortment of spaces for studying and social interactions.

Academic activity has commenced in both the lecture hall and classroom this semester with 13 classes for undergraduate and graduate students. Subjects include 6.C35/6.C85 (Interactive Data Visualization and Society), a class taught by faculty from the departments of Electrical Engineering and Computer Science (EECS) and Urban Studies and Planning. The class was created as part of the Common Ground for Computing Education, a cross-cutting initiative of the college that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.

“The new college building is catering not only to educational and research needs, but also fostering extensive community connections. It has been particularly exciting to see faculty teaching classes in the building and the lobby bustling with students on any given day, engrossed in their studies or just enjoying the space while taking a break,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing and head of EECS.

The building will also accommodate 50 computing research groups, which correspond to the number of new faculty the college is hiring — 25 in core computing positions and 25 in shared positions with departments at MIT. These groups bring together a mix of new and existing teams in related research areas spanning floors four through seven of the building.

In mid-January, the initial two dozen research groups moved into the building, including faculty from the departments of EECS; Aeronautics and Astronautics; Brain and Cognitive Sciences; Mechanical Engineering; and Economics who are affiliated with the Computer Science and Artificial Intelligence Laboratory and the Laboratory for Information and Decision Systems. The research groups form a coherent overall cluster in deep learning and generative AI, natural language processing, computer vision, robotics, reinforcement learning, game theoretic methods, and societal impact of AI.

More will follow suit, including some of the 10 faculty who have been hired into shared positions by the college with the departments of Brain and Cognitive Sciences; Chemical Engineering; Comparative Media Studies and Writing; Earth, Atmospheric and Planetary Sciences; Music and Theater Arts; Mechanical Engineering; Nuclear Science and Engineering; Political Science; and the MIT Sloan School of Management.

“I eagerly anticipate the building's expansion of opportunities, facilitating the development of even deeper connections the college has made so far spanning all five schools," says Anantha Chandrakasan, chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

Other college programs and activities that are being supported in the building include the MIT Quest for Intelligence, Center for Computational Science and Engineering, and MIT-IBM Watson AI Lab. There are also dedicated areas for the dean’s office, as well as for the cross-cutting areas of the college — the Social and Ethical Responsibilities of Computing, Common Ground, and Special Semester Topics in Computing, a new experimental program designed to bring MIT researchers and visitors together in a common space for a semester around areas of interest.

Additional spaces include conference rooms on the third floor that are available for use by any college unit. These rooms are accessible to both residents and nonresidents of the building to host weekly group meetings or other computing-related activities.

For the MIT community at large, the building’s main event space, along with three conference rooms, is available for meetings, events, and conferences. Located eight stories high on the top floor with striking views across Cambridge and Boston and of the Great Dome, the event space is already in demand with bookings through next fall, and has quickly become a popular destination on campus.

The college inaugurated the event space over the January Independent Activities Period, welcoming students, faculty, and visitors to the building for Expanding Horizons in Computing — a weeklong series of bootcamps, workshops, short talks, panels, and roundtable discussions. Organized by various MIT faculty, the 12 sessions in the series delved into exciting areas of computing and AI, with topics ranging from security, intelligence, and deep learning to design, sustainability, and policy.

Form and function

Designed by Skidmore, Owings & Merrill, the state-of-the-art space for education, research, and collaboration took shape over four years of design and construction.

“In the design of a new multifunctional building like this, I view my job as the dean being to make sure that the building fulfills the functional needs of the college mission,” says Huttenlocher. “I think what has been most rewarding for me, now that the building is finished, is to see its form supporting its wide range of intended functions.”

In keeping with MIT’s commitment to environmental sustainability, the building is designed to meet Leadership in Energy and Environmental Design (LEED) Gold certification. The final review with the U.S. Green Building Council is tracking toward a Platinum certification.

The glass shingles on the building’s south-facing side serve a dual purpose in that they allow abundant natural light in and form a double-skin façade constructed of interlocking units that create a deep sealed cavity, which is anticipated to notably lower energy consumption.

Other sustainability features include embodied carbon tracking, on-site stormwater management, fixtures that reduce indoor potable water usage, and a large green roof. The building is also the first to utilize heat from a newly completed utilities plant built on top of Building 42, which converted conventional steam-based distributed systems into more efficient hot-water systems. This conversion significantly enhances the building’s capacity to deliver more efficient medium-temperature hot water across the entire facility.

Grand unveiling

A dedication ceremony for the building is planned for the spring.

The momentous event will mark the official completion and opening of the new building and celebrate the culmination of hard work, commitment, and collaboration in bringing it to fruition.

It will also celebrate the 2018 foundational gift that established the college from Stephen A. Schwarzman, the chair, CEO, and co-founder of Blackstone, the global asset management and financial services firm. In addition, it will acknowledge Sebastian Man ’79, SM ’80, the first donor to support the building after Schwarzman. Man’s gift will be recognized with the naming of a key space in the building that will enrich the academic and research activities of the MIT Schwarzman College of Computing and the Institute.


With inspiration from “Tetris,” MIT researchers develop a better radiation detector

The device, based on simple tetromino shapes, could determine the direction and distance of a radiation source, with fewer detector pixels.


The spread of radioactive isotopes from the Fukushima Daiichi Nuclear Power Plant in Japan in 2011 and the ongoing threat of a possible release of radiation from the Zaporizhzhia nuclear complex in the Ukrainian war zone have underscored the need for effective and reliable ways of detecting and monitoring radioactive isotopes. Less dramatically, everyday operations of nuclear reactors, mining and processing of uranium into fuel rods, and the disposal of spent nuclear fuel also require monitoring of radioisotope release.

Now, researchers at MIT and the Lawrence Berkeley National Laboratory (LBNL) have come up with a computational basis for designing very simple, streamlined versions of sensor setups that can pinpoint the direction of a distributed source of radiation. They also demonstrated that by moving that sensor around to get multiple readings, they can pinpoint the physical location of the source. The inspiration for their clever innovation came from a surprising source: the popular computer game “Tetris.”

The team’s findings, which could likely be generalized to detectors for other kinds of radiation, are described in a paper published in Nature Communications, by MIT professors Mingda Li and Benoit Forget, senior research scientist Lin-Wen Hu, and principal research scientist Gordon Kohse; graduate students Ryotaro Okabe and Shangjie Xue; research scientist Jayson Vavrek SM ’16, PhD ’19 at LBNL; and a number of others at MIT and Lawrence Berkeley.

Radiation is usually detected using semiconductor materials, such as cadmium zinc telluride, that produce an electrical response when struck by high-energy radiation such as gamma rays. But because radiation penetrates so readily through matter, it’s difficult to determine the direction that signal came from with simple counting. Geiger counters, for example, simply provide a click sound when receiving radiation, without resolving the energy or type, so finding a source requires moving around to try to find the maximum sound, similarly to how handheld metal detectors work. The process requires the user to move closer to the source of radiation, which can add risk.

To provide directional information from a stationary device without getting too close, researchers use an array of detector grids along with another grid called a mask, which imprints a pattern on the array that differs depending on the direction of the source. An algorithm interprets the different timings and intensities of signals received by each separate detector or pixel. This often leads to a complex design of detectors.  

Typical detector arrays for sensing the direction of radiation sources are large and expensive and include at least 100 pixels in a 10 by 10 array. However, the group found that using as few as four pixels arranged in the tetromino shapes of the figures in the “Tetris” game can come close to matching the accuracy of the large, expensive systems. The key is proper computerized reconstruction of the angles of arrival of the rays, based on the times each sensor detects the signal and the relative intensity each one detects, as reconstructed through an AI-guided study of simulated systems.

Of the different configurations of four pixels the researchers tried — square, S-, J-, or T-shaped — they found through repeated experiments that the most precise results were provided by the S-shaped array. This array gave directional readings that were accurate to within about 1 degree, but all three of the irregular shapes performed better than the square. This approach, Li says, “was literally inspired by ‘Tetris.’”

Key to making the system work is placing an insulating material such as a lead sheet between the pixels to increase the contrast between radiation readings coming into the detector from different directions. The lead between the pixels in these simplified arrays serves the same function as the more elaborate shadow masks used in the larger-array systems. Less symmetrical arrangements, the team found, provide more useful information from a small array, explains Okabe, who is the lead author of the work.
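The contrast mechanism lends itself to a maximum-likelihood reading: each candidate direction predicts a distinct pattern of relative counts across the four pixels, and the direction whose prediction best matches the measurement wins. The toy model below is a hypothetical sketch of that idea only — the pixel coordinates, the single shielding factor, the far-field source, the Poisson counting model, and the brute-force grid search are illustrative stand-ins, not the paper's AI-guided reconstruction (which also exploits timing information):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pixel centers for an S-shaped tetromino (arbitrary units, hypothetical layout)
pixels = np.array([[0, 0], [1, 0], [1, 1], [2, 1]], dtype=float)

def expected_counts(angle, strength=1000.0, shielding=0.3):
    """Expected counts per pixel for a distant source at `angle` (radians).
    A pixel is partially shadowed when another pixel (with its lead sheet)
    sits between it and the source direction; shadowed pixels see
    attenuated flux, so the count pattern encodes the direction."""
    direction = np.array([np.cos(angle), np.sin(angle)])
    perp = np.array([-direction[1], direction[0]])
    counts = np.full(len(pixels), strength / len(pixels))
    for i, p in enumerate(pixels):
        for j, q in enumerate(pixels):
            if i == j:
                continue
            rel = q - p
            # q shadows p if it lies "upstream" toward the source and
            # close to the line of sight
            if rel @ direction > 0 and abs(rel @ perp) < 0.5:
                counts[i] *= shielding
    return counts

# Simulate one measurement from a source at 40 degrees
true_angle = np.deg2rad(40.0)
observed = rng.poisson(expected_counts(true_angle))

# Maximum-likelihood grid search over candidate directions
angles = np.deg2rad(np.arange(0.0, 360.0, 1.0))
loglik = [np.sum(observed * np.log(expected_counts(a)) - expected_counts(a))
          for a in angles]
best = float(np.rad2deg(angles[int(np.argmax(loglik))]))
print(f"estimated source direction: {best:.0f} degrees")
```

Because the toy shadowing pattern changes only at a few discrete angles, the estimate lands on a plateau of directions consistent with the observed counts; the real system sharpens this to roughly 1 degree by combining intensity with timing and a learned reconstruction.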

“The merit of using a small detector is in terms of engineering costs,” he says. Not only are the individual detector elements expensive, typically made of cadmium-zinc-telluride, or CZT, but all of the interconnections carrying information from those pixels also become much more complex. “The smaller and simpler the detector is, the better it is in terms of applications,” adds Li.

While there have been other versions of simplified arrays for radiation detection, many are only effective if the radiation is coming from a single localized source. They can be confused by multiple sources or those that are spread out in space, while the “Tetris”-based version can handle these situations well, adds Xue, co-lead author of the work.

In a single-blind field test at Berkeley Lab with a real cesium radiation source, led by Vavrek, in which the MIT researchers did not know the ground-truth source location, a test device performed with high accuracy in finding the direction and distance to the source.

“Radiation mapping is of utmost importance to the nuclear industry, as it can help rapidly locate sources of radiation and keep everyone safe,” says co-author Forget, an MIT professor of nuclear engineering and head of the Department of Nuclear Science and Engineering.

Vavrek, another co-lead author, says that while their study focused on gamma-ray sources, he believes the computational tools they developed to extract directional information from the limited number of pixels are “much, much more general.” The approach isn’t restricted to certain wavelengths; it can also be used for neutrons, or even other forms of light, such as ultraviolet. Using this machine learning-based algorithm and aerial radiation detection “will allow real-time monitoring and integrated emergency planning of radiological accidents,” adds Hu, a senior scientist at the MIT Nuclear Reactor Lab.

Nick Mann, a scientist with the Defense Systems branch at the Idaho National Laboratory, says, "This work is critical to the U.S. response community and the ever-increasing threat of a radiological incident or accident.”

Additional research team members include Ryan Pavlovsky, Victor Negut, Brian Quiter, and Joshua Cates at Lawrence Berkeley National Laboratory, and Jiankai Yu, Tongtong Liu, and Stephanie Jegelka at MIT. The work was supported by the U.S. Department of Energy.


QS World University Rankings rates MIT No. 1 in 11 subjects for 2024

The Institute also ranks second in five subject areas.


QS World University Rankings has placed MIT in the No. 1 spot in 11 subject areas for 2024, the organization announced today.

The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; Physics and Astronomy; and Statistics and Operational Research.

MIT also placed second in five subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Chemistry; and Economics and Econometrics.

For 2024, universities were evaluated in 55 specific subjects and five broader subject areas. MIT was ranked No. 1 in the broader subject area of Engineering and Technology and No. 2 in Natural Sciences.

Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings are based on research quality and accomplishments, academic reputation, and graduate employment.

MIT has been ranked as the No. 1 university in the world by QS World University Rankings for 12 straight years.


Physicist Netta Engelhardt is searching black holes for universal truths

She says one question drives her work: “Which pillars of gravitational physics are just not true?”


As Netta Engelhardt sees it, secrets never die. Not even in a black hole.

Engelhardt is a theoretical physicist at MIT who is teasing out the convoluted physics in and around black holes, in search of the fundamental ingredients that shape our universe.  In the process, she’s upending popular ideas in the fields of quantum and gravitational physics.

One of the biggest revelations from her work to date is the way in which information that falls into a black hole can avoid being lost forever. In 2019, shortly before coming to MIT, she and other physicists used gravitational methods to demonstrate that whatever might happen to the information inside a black hole can in principle be undone as the black hole evaporates away.

The team’s conclusion stunned the physics community, as it constituted the most quantitative direct advance toward resolving the longstanding black hole information paradox — a conundrum raised in the work of physicist Stephen Hawking. The paradox pits in opposition two theories that both appear to be true: one, the pillar of “unitarity,” which is the principle that information in the universe is neither created nor destroyed; and two, a calculation by Hawking from standard gravitational physics showing that information can indeed be destroyed, specifically, when radiating out from an evaporating black hole.

“Imagine you had a diary and you set it on fire in the lab,” Engelhardt explains. “According to unitarity, if you knew the fundamental dynamics of the universe, you could take the ashes and reverse-engineer them to see the diary and its contents. It would be very difficult, but you could do it. But Hawking’s calculation shows that, even if you knew the fundamental dynamics of the universe, you still couldn’t reverse-engineer the process of black hole evaporation.”

Engelhardt, then at Princeton University, and her colleagues showed that, contrary to Hawking’s calculation, it is possible to use gravitational physics to see that the process of black hole evaporation does in fact conserve information.

As a newly tenured member of the MIT faculty, Engelhardt is now tackling other longstanding questions about gravity, hoping to fill the last, largest gaps in physicists’ understanding of the universe at the most fundamental scales.

“At the end of the day, I’m driven by questions about nature and how the universe works,” says Engelhardt, who is now an associate professor of physics. “Answering these questions is a vocation.”

Gateway to gravity

Engelhardt was born in Jerusalem, where she developed an early interest in all things science. When she was 9, she and her family moved to Boston, partly so that her mother could enroll in a visiting scholars program in MIT Linguistics. New to America, and having only learned to read in Hebrew, Engelhardt spent those first weeks reading every book the family brought with them, some of them atypical for a 9-year-old.

“I read all the books we had left in Hebrew, until at long last, there was just one left, which was Stephen Hawking’s ‘A Brief History of Time.’”

Hawking’s book was Engelhardt’s first introduction to black holes, the Big Bang, and the fundamental forces and building blocks that shape the universe. What she found especially exciting were the missing pieces to physicists’ understanding.

“People can spend their entire life searching for answers to these very foundational questions that I just found completely fascinating,” Engelhardt says. “Where does the universe come from? What are the fundamental building blocks? Those are questions I realized I just wanted to know the answer to. And from that point on, I wasn’t just set on physics — I was set on quantum gravity at 9.”

She fed that early spark through college, double-majoring in physics and math at Brandeis University. She went on to the University of California at Santa Barbara, where she pursued a PhD in physics and really began to dig into the puzzle of quantum gravity, a field that seeks to describe the effects of gravity according to the principles of quantum mechanics.

The theory of quantum mechanics is a remarkably good blueprint for describing the interactions in nature at the scale of atoms and smaller. These quantum interactions are governed by three of the four fundamental forces that physicists know of. But the fourth force, gravity, has eluded quantum mechanical explanation, particularly in situations where the effect of gravity is overwhelming, such as deep inside black holes.

In such extreme regimes, there is no prediction for how matter and gravity behave. Such a theory would complete physicists’ understanding of the universe’s workings at the most fundamental scales.

For Engelhardt, quantum gravity is also a gateway to other mysterious questions, such as how space and time emerge from something even more fundamental. Engelhardt spent much of her graduate work focused on questions about the geometry of spacetime, and how its curvature may emerge from something more basic as described by quantum gravity.

“Those are big questions to tackle,” Engelhardt admits. “The largest bulk of my time is spent thinking, hmm, how do I take this vague intuition and condense it into a question that can be concretely answered, quantitatively? That’s a large part of the progress you can make.”

A black hole imprint

In 2014, midway through her PhD work, Engelhardt honed one of her questions about quantum gravity and spacetime emergence to a specific problem: how to compute the quantum corrections to the entropy of gravitating systems.

“There are surfaces (in spacetime) that are sensitive to gravitational (curving) called extremal surfaces,” Engelhardt explains. “There already was a formula that used such surfaces to compute the entropy of gravitational systems in the absence of quantum effects. But in realistic quantum gravity, there are quantum effects, and I wanted a formula that took that into account.”

She and postdoc Aron Wall worked to construct a general equation that would describe how entropy of gravitating regions should be computed when quantum effects are taken into account. The result: quantum extremal surfaces, a quantum generalization of the old classical surfaces.
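The resulting prescription can be written schematically as follows (this is the commonly quoted form of the Engelhardt–Wall generalized entropy; sign and factor conventions vary across the literature):

```latex
S_{\text{gen}}(X) \;=\; \frac{\mathrm{Area}(X)}{4 G_N \hbar} \;+\; S_{\text{out}}(X)
```

Here $X$ is a candidate surface, the first term is the classical gravitational (area) contribution that the old formula used on its own, and $S_{\text{out}}(X)$ is the entropy of the quantum fields outside $X$. The quantum extremal surface is the surface that extremizes $S_{\text{gen}}$; when quantum effects are negligible, $S_{\text{out}}$ drops out and the prescription reduces to the classical one.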

At the time, the exercise was purely theoretical, as the quantum effects from most processes in the universe are too small to even slightly wobble the surrounding spacetime. Their new equation would therefore yield essentially the same predictions as its purely classical counterpart.

But in 2019, as a postdoc at Princeton, Engelhardt and others realized that this equation might give a very different prediction for what a quantum extremal surface might do, and what the corresponding quantum gravitational entropy would be, in one specific situation: as a black hole evaporates. What’s more, what the equation predicts could be the key to resolving the longstanding black hole information paradox.

“This was a very dramatic moment,” she recalls. “Everyone was working around the clock to try to figure this out, not really sleeping at night because we were so excited.”

After three sleep-deprived weeks, the physicists were convinced that they had made a dramatic step toward resolving the paradox: As a black hole evaporates and releases radiation in a scrambled form of the information that originally fell into it, a new, completely nonclassical quantum extremal surface emerges, resulting in a gravitational entropy that shrinks as more information radiates away. They reasoned that this surface can serve as an imprint of the radiated information, which could in principle be used to reconstruct the original information that Hawking’s calculation had suggested would be lost forever.

“That was a Eureka! moment,” she says. “I remember driving home, and thinking, and maybe even saying out loud, ‘I think this is it!’”

It’s not yet clear what quantity Hawking was actually calculating when he arrived at the contrary result. But Engelhardt considers the paradox close to resolved, at least in broad strokes, and her team’s work has held up to repeated checks and careful scrutiny. In the meantime, she has set her sights on other questions.

Testing pillars

Engelhardt’s breakthrough came in May of 2019. Just two months later, she headed to Cambridge to start her faculty position at MIT. She first visited the campus and interviewed for the position in 2017.

“There was a palpable sense of excitement about science in the Center for Theoretical Physics, and you feel it everywhere — it permeates the Institute,” she recalls. “That was one of the reasons I wanted to be at MIT.”

She was offered the position, which she accepted and chose to defer for a year to complete her postdoc at Princeton. In July 2019, she started at MIT as an assistant professor of physics.

In the early days on campus, as she set up her research group, Engelhardt followed up on the black hole information paradox, to see if she could find out not only how Hawking got it wrong but what he was actually calculating, if not the entropy of the radiation.

“At the end of the day, if you really want to resolve the paradox, we have to explain what Hawking’s mistake was,” Engelhardt says. 

Her hunch is that he was in a way computing a different quantity altogether. She believes Hawking’s work, which raised the paradox to begin with, might have been computing a different type of gravitational entropy, that appears to result in information loss when run forward as a black hole evaporates. However, this other form of gravitational entropy does not correspond to information content, and so its increase would not be paradoxical.

Today, she and her students are following up on questions related to quantum gravity as well as a thornier concept having to do with singularities — instances when an object such as a star collapses into a region so gravitationally intense as to destroy spacetime itself. Physicists historically have predicted that singularities should only be present behind a black hole’s event horizon, though others have seen hints that they exist outside of these gravitational boundaries. 

“A lot of my work now is going into understanding how many pillars of gravitational physics are just not true as we currently understand them,” she says. “Answering these questions is the ultimate motivation.”


Reevaluating an approach to functional brain imaging

An MRI method purported to detect neurons’ rapid impulses produces its own misleading signals instead, an MIT study finds.


A new way of imaging the brain with magnetic resonance imaging (MRI) does not directly detect neural activity as originally reported, according to scientists at MIT’s McGovern Institute for Brain Research.

The method, first described in 2022, generated excitement within the neuroscience community as a potentially transformative approach. But a study from the lab of MIT Professor Alan Jasanoff, reported March 27 in the journal Science Advances, demonstrates that MRI signals produced by the new method are generated in large part by the imaging process itself, not neuronal activity.

Jasanoff, a professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, as well as an associate investigator of the McGovern Institute, explains that having a noninvasive means of seeing neuronal activity in the brain is a long-sought goal for neuroscientists. The functional MRI methods that researchers currently use to monitor brain activity don’t actually detect neural signaling. Instead, they use blood flow changes triggered by brain activity as a proxy. This reveals which parts of the brain are engaged during imaging, but it cannot pinpoint neural activity to precise locations, and it is too slow to truly track neurons’ rapid-fire communications.

So when a team of scientists reported in 2022 a new MRI method called DIANA, for “direct imaging of neuronal activity,” neuroscientists paid attention. The authors claimed that DIANA detected MRI signals in the brain that corresponded to the electrical signals of neurons, and that it acquired signals far faster than the methods now used for functional MRI.

“Everyone wants this,” Jasanoff says. “If we could look at the whole brain and follow its activity with millisecond precision and know that all the signals that we’re seeing have to do with cellular activity, this would be just wonderful. It could tell us all kinds of things about how the brain works and what goes wrong in disease.”

Jasanoff adds that from the initial report, it was not clear what brain changes DIANA was detecting to produce such a rapid readout of neural activity. Curious, he and his team began to experiment with the method. “We wanted to reproduce it, and we wanted to understand how it worked,” he says.

Recreating the MRI procedure reported by DIANA’s developers, postdoc Valerie Doan Phi Van imaged the brain of a rat as an electric stimulus was delivered to one paw. Phi Van says she was excited to see an MRI signal appear in the brain’s sensory cortex, exactly when and where neurons were expected to respond to the sensation on the paw. “I was able to reproduce it,” she says. “I could see the signal.”

With further tests of the system, however, her enthusiasm waned. To investigate the source of the signal, she disconnected the device used to stimulate the animal’s paw, then repeated the imaging. Again, signals showed up in the sensory processing part of the brain. But this time, there was no reason for neurons in that area to be activated. In fact, Phi Van found, the MRI produced the same kinds of signals when the animal inside the scanner was replaced with a tube of water. It was clear DIANA’s functional signals were not arising from neural activity.

Phi Van traced the source of the specious signals to the pulse program that directs DIANA’s imaging process, detailing the sequence of steps the MRI scanner uses to collect data. Embedded within DIANA’s pulse program was a trigger for the device that delivers sensory input to the animal inside the scanner. That synchronizes the two processes, so the stimulation occurs at a precise moment during data acquisition. That trigger appeared to be causing signals that DIANA’s developers had concluded indicated neural activity.

Phi Van altered the pulse program, changing the way the stimulator was triggered. Using the updated program, the MRI scanner detected no functional signal in the brain in response to the same paw stimulation that had produced a signal before. “If you take this part of the code out, then the signal will also be gone. So that means the signal we see is an artifact of the trigger,” she says.
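The logic of that test can be illustrated with a toy simulation (everything below is invented for illustration: a stand-in acquisition loop, not the actual DIANA pulse program or any real MRI interface). A trigger wired into the acquisition sequence injects a small artifact into exactly the frames where a “response” is expected, so a signal appears even with no subject at all, and disabling the trigger removes it:

```python
import random

def acquire(n_frames, trigger_enabled, neural_response=0.0, noise=0.05):
    """Toy acquisition: each frame is noise plus any genuine neural response,
    plus an artifact injected whenever the stimulator trigger fires."""
    random.seed(0)  # fixed seed so the demonstration is reproducible
    trigger_artifact = 0.3  # spurious pickup synchronized with the trigger
    frames = []
    for t in range(n_frames):
        signal = random.gauss(0, noise) + neural_response
        if trigger_enabled and t % 10 == 0:  # trigger fires every 10th frame
            signal += trigger_artifact
        frames.append(signal)
    return frames

def mean_at_trigger(frames):
    """Average signal over the frames where the stimulus was timed to occur."""
    trigger_frames = frames[::10]
    return sum(trigger_frames) / len(trigger_frames)

# With the trigger wired into the sequence, a "response" shows up even for a
# water phantom with no neural source at all:
phantom = acquire(100, trigger_enabled=True, neural_response=0.0)
# Removing the trigger from the sequence removes the signal:
clean = acquire(100, trigger_enabled=False, neural_response=0.0)
print(mean_at_trigger(phantom) > 0.2)     # True: the artifact mimics activity
print(abs(mean_at_trigger(clean)) < 0.1)  # True: no trigger, no signal
```

The point of the sketch is the control experiment: the only difference between the two runs is the trigger, so any signal present in the first but not the second is an artifact of the acquisition, not of the subject.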

Jasanoff and Phi Van went on to find reasons why other researchers have struggled to reproduce the results of the original DIANA report, noting that the trigger-generated signals can disappear with slight variations in the imaging process. With their postdoctoral colleague Sajal Sen, they also found evidence that cellular changes that DIANA’s developers had proposed might give rise to a functional MRI signal were not related to neuronal activity.

Jasanoff and Phi Van say it was important to share their findings with the research community, particularly as efforts continue to develop new neuroimaging methods. “If people want to try to repeat any part of the study or implement any kind of approach like this, they have to avoid falling into these pits,” Jasanoff says. He adds that they admire the authors of the original study for their ambition: “The community needs scientists who are willing to take risks to move the field ahead.”


MIT Haystack scientists prepare a constellation of instruments to observe the solar eclipse’s effects

In a first, four different technologies will monitor changes in the upper atmosphere, locally and across the continent, as the sun’s radiation dips.


On April 8, the moon’s shadow will sweep through North America, trailing a diagonal ribbon of momentary, midday darkness across parts of the continent. Those who happen to be within the “path of totality” will experience a total solar eclipse — a few eerie minutes when the sun, moon, and Earth align, such that the moon perfectly blocks out the sun.

The last solar eclipse to pass over the continental United States occurred in August 2017, when the moon’s shadow swept from Oregon down to South Carolina. This time, the moon will be closer to the Earth and will track a wider ribbon, from Mexico through Texas and on up into Maine and eastern Canada. The shadow will move across more populated regions than in 2017, and will completely block the sun for more than 31 million people who live in its path. The eclipse will also partly shade many more regions, giving much of the country a partial eclipse, depending on the local weather.

While many of us ready our eclipse-grade eyewear, scientists at MIT’s Haystack Observatory are preparing a constellation of instruments to study the eclipse and how it will affect the topmost layers of the atmosphere. In particular, they will be focused on the ionosphere — the atmosphere’s outermost layer where many satellites orbit. The ionosphere stretches from 50 to 400 miles above the Earth’s surface and is continually blasted by the sun’s extreme ultraviolet and X-ray radiation. This daily solar exposure ionizes gas molecules in the atmosphere, creating a charged sea of electrons and ions that shifts with changes in the sun’s energy.

As they did in 2017, Haystack researchers will study how the ionosphere responds before, during, and after the eclipse, as the sun’s radiation suddenly dips. With this year’s event, the scientists will be adding two new technologies to the mix, giving them a first opportunity to observe the eclipse’s effects at local, regional, and national scales. What they observe will help scientists better understand how the atmosphere reacts to other sudden changes in solar radiation, such as solar storms and flares.

Two lead members of Haystack’s eclipse effort are research scientists Larisa Goncharenko, who studies the physics of the ionosphere using measurements from multiple observational sources, and John Swoboda, who develops instruments to observe near-Earth space phenomena. While preparing for eclipse day, Goncharenko and Swoboda took a break to chat with MIT News about the ways in which they will be watching the event and what they hope to learn from Monday’s rare planetary alignment.

Q: There’s a lot of excitement around this solar eclipse. Before we dive into how you’ll be observing it, let’s take a step back to talk about what we know so far: How does a total eclipse affect the atmosphere?

Goncharenko: We know quite a bit. One of the largest effects is, as the moon’s shadow moves over part of the continent, we have a significant decrease in electron, or plasma, density in the ionosphere. The sun is an ionization source, and as soon as that source is removed, we have a decrease in electron density. So, we sort of have a hole in the ionosphere that moves behind the moon’s shadow.

During an eclipse, solar heating shuts off and it’s like a rapid sunset and sunrise, and we have significant cooling in the atmosphere. So, we have this cold area of low ionization, moving in latitude and longitude. And because of this change in temperature, you also have disturbances in the wind system that affect how plasma, or electrons in the ionosphere, are distributed. And these are changes on large scales.

From this cold area that follows totality, we also have different kinds of waves emanating. Like a boat moving on the water, you have bow shock waves moving from the shadow. These are waves in electron density. They are small perturbations but can cover really large areas. We saw similar waves in the 2017 eclipse. But every eclipse is different. So, we will be using this eclipse as a unique lab experiment. And we will be able to see changes in electron density, temperature, and winds in the upper atmosphere as the eclipse moves over the continental United States.

Q: How will you be seeing all this? What experiments will you be running to catch the eclipse and its effects on the atmosphere?

Swoboda: We’re going to measure local changes in the atmosphere and ionosphere using two new radar technologies. The first is Zephyr, which was developed by [Haystack research scientist] Ryan Volz. Zephyr looks at how meteors break up in our atmosphere. There are always little bits of sand that burn up in the Earth’s atmosphere, and when they burn up, they leave a trail of plasma that follows the wind patterns in the upper atmosphere. Zephyr sends out a signal that bounces off these plasma trails, so we can see how they are carried by winds moving at very high altitude. We will use Zephyr to observe how these winds in the upper atmosphere change during the eclipse.

The other radar system is EMVSIS [Electro-Magnetic Vector Sensor Ionospheric Sounder], which will measure the electron or plasma density and the bulk velocity of the charged particles in the ionosphere. Both these systems comprise a distributed array of transmitters and receivers that send and receive radio waves at various frequencies to do their measurements. Traditional ionospheric sounders require high-power transmitters and large towers on the order of hundreds of feet, and can cover an area the size of a football field. But we’ve developed a lower-power and physically smaller system, about the size of a refrigerator, and we’re deploying multiple of these systems around New England to make local and regional measurements.

Goncharenko: We will also make regional observations with two antennas at the Millstone Hill Geospace Facility [in Westford, Massachusetts]. One antenna is a fixed vertical antenna, 220 feet in diameter, that we can use to observe parameters in the ionosphere over a huge range of altitudes, from 90 to 1,000 kilometers above the ground. The other is a steerable antenna that’s 150 feet in diameter, which we can move to look at what happens as far away as Florida and all the way to the central United States. We are planning to use both antennas to see changes during the eclipse.

We’ll also be processing data from a national network of almost 3,000 GNSS [Global Navigation Satellite System] receivers across the United States, and we’re installing new receivers in undersampled regions along the area of totality. These receivers will measure how the ionosphere’s electron content changes before, during, and after the eclipse.

One of the most exciting things is, this is the first time we’ll have all four of these technologies working together. Each of these technologies provides a unique point of view. And for me as a scientist, I feel like a little kid on Christmas Eve. You know great things are coming, and you know you’ll have new things to play with and new data to analyze.

Q: And speaking of what you’ll find, what do you expect to see from the measurements you collect?

Goncharenko: I expect to see the unexpected. It will be the first time for us to look at near-Earth space with a combination of four very different technologies at the same time and in the same geographic region. We expect higher sensitivity that translates into better resolution in time and space. Probing the upper atmosphere with a combination of these diagnostic tools will provide simultaneous observations we never had before: four-dimensional wind flow, electron density, ion temperature, and plasma motion. We will observe how they change during the eclipse and study how and why changes in one area of the upper atmosphere are linked to perturbations in other areas in space and time.

Swoboda: We’re also sort of thinking longer term. What the eclipse is giving us is a chance to show what these technologies can do, and say, what if we could have these going all the time? We could run it as a sort of radar network for space weather, like how we monitor weather in the lower atmosphere. And we need to monitor space weather, because we have so much going on in the near-Earth space environment, with satellites launching all the time that are affected by space weather.

Goncharenko: We have a lot of space to study. The eclipse is just the highlight. But overall, these systems can produce more data to get a look at what happens in the upper atmosphere and ionosphere during other disturbances, such as storms and lightning periods, or coronal mass ejections and solar flares. And all of this is part of a large effort to build up our understanding of near-Earth space to meet demands of modern technological society.


A new computational technique could make it easier to engineer useful proteins

MIT researchers plan to search for proteins that could be used to measure electrical activity in the brain.


To engineer proteins with useful functions, researchers usually begin with a natural protein that has a desirable function, such as emitting fluorescent light, and put it through many rounds of random mutation that eventually generate an optimized version of the protein.

This process has yielded optimized versions of many important proteins, including green fluorescent protein (GFP). However, for other proteins, it has proven difficult to generate an optimized version. MIT researchers have now developed a computational approach that makes it easier to predict mutations that will lead to better proteins, based on a relatively small amount of data.

Using this model, the researchers generated proteins with mutations that were predicted to lead to improved versions of GFP and a protein from adeno-associated virus (AAV), which is used to deliver DNA for gene therapy. They hope it could also be used to develop additional tools for neuroscience research and medical applications.

“Protein design is a hard problem because the mapping from DNA sequence to protein structure and function is really complex. There might be a great protein 10 changes away in the sequence, but each intermediate change might correspond to a totally nonfunctional protein. It’s like trying to find your way to the river basin in a mountain range, when there are craggy peaks along the way that block your view. The current work tries to make the riverbed easier to find,” says Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, director of the K. Lisa Yang Integrative Computational Neuroscience Center, and one of the senior authors of the study.

Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health at MIT, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT, are also senior authors of an open-access paper on the work, which will be presented at the International Conference on Learning Representations in May. MIT graduate students Andrew Kirjner and Jason Yim are the lead authors of the study. Other authors include Shahar Bracha, an MIT postdoc, and Raman Samusevich, a graduate student at Czech Technical University.

Optimizing proteins

Many naturally occurring proteins have functions that could make them useful for research or medical applications, but they need a little extra engineering to optimize them. In this study, the researchers were originally interested in developing proteins that could be used in living cells as voltage indicators. These proteins, produced by some bacteria and algae, emit fluorescent light when an electric potential is detected. If engineered for use in mammalian cells, such proteins could allow researchers to measure neuron activity without using electrodes.

While decades of research have gone into engineering these proteins to produce a stronger fluorescent signal, on a faster timescale, they haven’t become effective enough for widespread use. Bracha, who works in Edward Boyden’s lab at the McGovern Institute, reached out to Fiete’s lab to see if they could work together on a computational approach that might help speed up the process of optimizing the proteins.

“This work exemplifies the human serendipity that characterizes so much science discovery,” Fiete says. “It grew out of the Yang Tan Collective retreat, a scientific meeting of researchers from multiple centers at MIT with distinct missions unified by the shared support of K. Lisa Yang. We learned that some of our interests and tools in modeling how brains learn and optimize could be applied in the totally different domain of protein design, as being practiced in the Boyden lab.”

For any given protein that researchers might want to optimize, there is a nearly infinite number of possible sequences that could be generated by swapping in different amino acids at each point within the sequence. With so many possible variants, it is impossible to test all of them experimentally, so researchers have turned to computational modeling to try to predict which ones will work best.
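To make “nearly infinite” concrete: with 20 standard amino acids, a protein of length L has 20^L possible sequences. A quick calculation (the GFP length of roughly 238 residues used below is an approximation for illustration, not a figure from the article):

```python
# Size of the sequence space for a protein of length L: each of the L
# positions can hold any of the 20 standard amino acids, giving 20**L variants.
def sequence_space_size(length: int) -> int:
    return 20 ** length

# Even a 10-residue peptide already has over ten trillion variants:
print(sequence_space_size(10))             # 10240000000000
# GFP is roughly 238 residues long; the count of its possible sequences runs
# to hundreds of digits, vastly more than could ever be tested in the lab:
print(len(str(sequence_space_size(238))))  # 310
```

This is why the researchers train a model on about 1,000 measured variants rather than attempting any exhaustive search.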

In this study, the researchers set out to overcome those challenges, using data from GFP to develop and test a computational model that could predict better versions of the protein.

They began by training a type of model known as a convolutional neural network (CNN) on experimental data consisting of GFP sequences and their brightness — the feature that they wanted to optimize.

The model was able to create a “fitness landscape” — a three-dimensional map that depicts the fitness of a given protein and how much it differs from the original sequence — based on a relatively small amount of experimental data (from about 1,000 variants of GFP).

These landscapes contain peaks that represent fitter proteins and valleys that represent less fit proteins. Predicting the path that a protein needs to follow to reach the peaks of fitness can be difficult, because often a protein will need to undergo a mutation that makes it less fit before it reaches a nearby peak of higher fitness. To overcome this problem, the researchers used an existing computational technique to “smooth” the fitness landscape.

Once these small bumps in the landscape were smoothed, the researchers retrained the CNN model and found that it was able to reach greater fitness peaks more easily. The model was able to predict optimized GFP sequences that differed by as many as seven amino acids from the sequence they started with, and the best of these proteins were estimated to be about 2.5 times fitter than the original.

“Once we have this landscape that represents what the model thinks is nearby, we smooth it out and then we retrain the model on the smoother version of the landscape,” Kirjner says. “Now there is a smooth path from your starting point to the top, which the model is now able to reach by iteratively making small improvements. The same is often impossible for unsmoothed landscapes.” 
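The smooth-then-climb idea can be sketched on a toy one-dimensional landscape (the landscape values, the neighbor-averaging smoother, and the greedy climber below are all invented stand-ins for illustration; the actual work uses a trained CNN over protein sequences and a different smoothing technique):

```python
def smooth(landscape, passes=1):
    """Average each point with its immediate neighbors to fill in small dips."""
    for _ in range(passes):
        landscape = [
            sum(landscape[max(0, i - 1): i + 2]) / len(landscape[max(0, i - 1): i + 2])
            for i in range(len(landscape))
        ]
    return landscape

def hill_climb(landscape, start):
    """Greedily step to the higher neighbor until no improvement remains."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best = max(neighbors, key=lambda j: landscape[j])
        if landscape[best] <= landscape[i]:
            return i
        i = best

# A rugged landscape: the global peak (value 9, at index 6) is separated from
# the start by a dip, so a greedy climber stalls on the local peak (value 3).
rugged = [0, 1, 3, 1, 2, 5, 9, 4, 2, 0]
print(hill_climb(rugged, start=0))                    # 2: stuck on local peak
print(hill_climb(smooth(rugged, passes=2), start=0))  # 6: reaches global peak
```

Smoothing trades away fine detail in exchange for a landscape on which each small step is an improvement, which is exactly the property the iterative optimizer needs.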

Proof-of-concept

The researchers also showed that this approach worked well in identifying new sequences for the viral capsid of adeno-associated virus (AAV), a viral vector that is commonly used to deliver DNA. In that case, they optimized the capsid for its ability to package a DNA payload.

“We used GFP and AAV as a proof-of-concept to show that this is a method that works on data sets that are very well-characterized, and because of that, it should be applicable to other protein engineering problems,” Bracha says.

The researchers now plan to use this computational technique on data that Bracha has been generating on voltage indicator proteins.

“Dozens of labs have been working on that for two decades, and still there isn’t anything better,” she says. “The hope is that now, with the generation of a smaller data set, we could train a model in silico and make predictions that could be better than the past two decades of manual testing.”

The research was funded, in part, by the U.S. National Science Foundation, the Machine Learning for Pharmaceutical Discovery and Synthesis consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the DTRA Discovery of Medical Countermeasures Against New and Emerging Threats program, the DARPA Accelerated Molecular Discovery program, the Sanofi Computational Antibody Design grant, the U.S. Office of Naval Research, the Howard Hughes Medical Institute, the National Institutes of Health, the K. Lisa Yang ICoN Center, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT.


Second round of seed grants awarded to MIT scholars studying the impact and applications of generative AI

The 16 finalists — representing every school at MIT — will explore generative AI’s impact on privacy, art, drug discovery, aging, and more.


Last summer, MIT President Sally Kornbluth and Provost Cynthia Barnhart issued a call for papers to “articulate effective roadmaps, policy recommendations, and calls for action across the broad domain of generative AI.” The response to the call far exceeded expectations, with 75 proposals submitted. Of those, 27 proposals were selected for seed funding.

In light of this enthusiastic response, Kornbluth and Barnhart announced a second call for proposals this fall.

“The groundswell of interest and the caliber of the ideas overall made clear that a second round was in order,” they said in their email to MIT’s research community this fall. This second call for proposals resulted in 53 submissions.

Following the second call, the faculty committee from the first round considered the proposals and selected 16 proposals to receive exploratory funding. Co-authored by interdisciplinary teams of faculty and researchers affiliated with all five of the Institute’s schools and the MIT Schwarzman College of Computing, the proposals offer insights and perspectives on the potential impact and applications of generative AI across a broad range of topics and disciplines.

Each selected research group will receive between $50,000 and $70,000 to create 10-page impact papers. Those papers will be shared widely via a publication venue managed and hosted by the MIT Press under the auspices of the MIT Open Publishing Services program.

As with the first round of papers, Thomas Tull, a member of the MIT School of Engineering Dean’s Advisory Council and a former innovation scholar at the School of Engineering, contributed funding to support the effort.

The selected papers are:


Persistent “hiccups” in a far-off galaxy draw astronomers to new black hole behavior

Analysis reveals a tiny black hole repeatedly punching through a larger black hole’s disk of gas.


At the heart of a far-off galaxy, a supermassive black hole appears to have had a case of the hiccups.

Astronomers from MIT, Italy, the Czech Republic, and elsewhere have found that a previously quiet black hole, which sits at the center of a galaxy about 800 million light-years away, has suddenly erupted, giving off plumes of gas every 8.5 days before settling back to its normal, quiet state.

The periodic hiccups represent a behavior that has not been observed in black holes until now. The scientists believe the most likely explanation for the outbursts stems from a second, smaller black hole that is zinging around the central, supermassive black hole and slinging material out from the larger black hole’s disk of gas every 8.5 days.

The team’s findings, which are published today in the journal Science Advances, challenge the conventional picture of black hole accretion disks, which scientists had assumed are relatively uniform disks of gas that rotate around a central black hole. The new results suggest that accretion disks may be more varied in their contents, possibly containing other black holes and even entire stars.


“We thought we knew a lot about black holes, but this is telling us there are a lot more things they can do,” says study author Dheeraj “DJ” Pasham, a research scientist in MIT’s Kavli Institute for Astrophysics and Space Research. “We think there will be many more systems like this, and we just need to take more data to find them.”

The study’s MIT co-authors include postdoc Peter Kosec, graduate student Megan Masterson, Associate Professor Erin Kara, Principal Research Scientist Ronald Remillard, and former research scientist Michael Fausnaugh, along with collaborators from multiple institutions, including the Tor Vergata University of Rome, the Astronomical Institute of the Czech Academy of Sciences, and Masaryk University in the Czech Republic.

“Use it or lose it”

The team’s findings grew out of an automated detection by ASAS-SN (the All Sky Automated Survey for SuperNovae), a network of 20 robotic telescopes situated in various locations across the Northern and Southern Hemispheres. The telescopes automatically survey the entire sky once a day for signs of supernovae and other transient phenomena.

In December of 2020, the survey spotted a burst of light in a galaxy about 800 million light-years away. That particular part of the sky had been relatively quiet and dark until the telescopes’ detection, when the galaxy suddenly brightened by a factor of 1,000. Pasham, who happened to see the detection reported in a community alert, chose to focus on the flare with NASA’s NICER (the Neutron star Interior Composition Explorer), an X-ray telescope aboard the International Space Station that continuously monitors the sky for X-ray bursts that could signal activity from neutron stars, black holes, and other extreme gravitational phenomena. The timing was fortuitous, as it was getting toward the end of the yearlong period during which Pasham had permission to point, or “trigger,” the telescope.

“It was either use it or lose it, and it turned out to be my luckiest break,” he says.

He trained NICER to observe the far-off galaxy as it continued to flare. The outburst lasted about four months before petering out. During that time, NICER took measurements of the galaxy’s X-ray emissions on a daily, high-cadence basis. When Pasham looked closely at the data, he noticed a curious pattern within the four-month flare: subtle dips, in a very narrow band of X-rays, that seemed to reappear every 8.5 days.

It seemed that the galaxy’s burst of energy periodically dipped every 8.5 days. The signal is similar to what astronomers see when an orbiting planet crosses in front of its host star, briefly blocking the star’s light. But no star would be able to block a flare from an entire galaxy.
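A recurring dip like this can be pulled out of a light curve with a standard epoch-folding search. The sketch below is purely illustrative, using synthetic data rather than the NICER measurements: it injects a narrow dip every 8.5 days into a noisy flux series, then folds the data at trial periods, since the correct period concentrates the dip into one phase bin and maximizes the bin-to-bin scatter:

```python
import numpy as np

# Synthetic light curve (invented for illustration, not NICER data):
# 120 days at 6-hour cadence, with a narrow dip once every 8.5 days.
rng = np.random.default_rng(1)
t = np.arange(0, 120, 0.25)
flux = 1.0 + 0.02 * rng.standard_normal(t.size)
true_period = 8.5
phase = (t % true_period) / true_period
flux[np.abs(phase - 0.5) < 0.03] -= 0.3     # the recurring dip

def folded_scatter(period, t, flux, nbins=20):
    """Fold the light curve at a trial period and bin it by phase.
    A correct period piles the dip into one bin, so the variance of
    the bin means is largest there."""
    bins = ((t % period) / period * nbins).astype(int)
    means = np.array([flux[bins == b].mean() for b in range(nbins)])
    return means.var()

trial_periods = np.arange(5.0, 12.0, 0.01)
scores = [folded_scatter(p, t, flux) for p in trial_periods]
best = trial_periods[np.argmax(scores)]
print(f"best-fit period: {best:.2f} days")   # should land near 8.5
```

In practice astronomers use more sophisticated tools (e.g., Lomb-Scargle periodograms) on unevenly sampled X-ray data, but the folding idea is the same: a periodic occultation betrays itself when the data are stacked at the right period.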

“I was scratching my head as to what this means because this pattern doesn’t fit anything that we know about these systems,” Pasham recalls.

Punch it

As he was looking for an explanation to the periodic dips, Pasham came across a recent paper by theoretical physicists in the Czech Republic. The theorists had separately worked out that it would be possible, in theory, for a galaxy’s central supermassive black hole to host a second, much smaller black hole. That smaller black hole could orbit at an angle from its larger companion’s accretion disk.

As the theorists proposed, the secondary would periodically punch through the primary black hole’s disk as it orbits. In the process, it would release a plume of gas, like a bee flying through a cloud of pollen. Powerful magnetic fields, to the north and south of the black hole, could then slingshot the plume up and out of the disk. Each time the smaller black hole punches through the disk, it would eject another plume, in a regular, periodic pattern. If that plume happened to point in the direction of an observing telescope, the telescope would register it as a dip in the galaxy’s overall energy, as the ejected gas briefly blocked the disk’s light every so often.

“I was super excited by this theory, and I immediately emailed them to say, ‘I think we’re observing exactly what your theory predicted,’” Pasham says.

He and the Czech scientists teamed up to test the idea, with simulations that incorporated NICER’s observations of the original outburst, and the regular, 8.5-day dips. What they found supports the theory: The observed outburst was likely a signal of a second, smaller black hole, orbiting a central supermassive black hole, and periodically puncturing its disk.

Specifically, the team found that the galaxy was relatively quiet prior to the December 2020 detection. The team estimates the galaxy’s central supermassive black hole is as massive as 50 million suns. Prior to the outburst, the black hole may have had a faint, diffuse accretion disk rotating around it, as a second, smaller black hole, measuring 100 to 10,000 solar masses, was orbiting in relative obscurity.

The researchers suspect that, in December 2020, a third object — likely a nearby star — swung too close to the system and was shredded to pieces by the supermassive black hole’s immense gravity — an event that astronomers know as a “tidal disruption event.” The sudden influx of stellar material momentarily brightened the black hole’s accretion disk as the star’s debris swirled into the black hole. Over four months, the black hole feasted on the stellar debris as the second black hole continued orbiting. As it punched through the disk, it ejected a much larger plume than it normally would, which happened to shoot straight out toward NICER’s scope.

The team carried out numerous simulations to test the periodic dips. The most likely explanation, they conclude, is a new kind of David-and-Goliath system — a tiny, intermediate-mass black hole, zipping around a supermassive black hole.

“This is a different beast,” Pasham says. “It doesn’t fit anything that we know about these systems. We’re seeing evidence of objects going in and through the disk, at different angles, which challenges the traditional picture of a simple gaseous disk around black holes. We think there is a huge population of these systems out there.”

“This is a brilliant example of how to use the debris from a disrupted star to illuminate the interior of a galactic nucleus which would otherwise remain dark. It is akin to using fluorescent dye to find a leak in a pipe,” says Richard Saxton, an X-ray astronomer from the European Space Astronomy Centre (ESAC) in Madrid, who was not involved in the study. “This result shows that very close super-massive black hole binaries could be common in galactic nuclei, which is a very exciting development for future gravitational wave detectors.”

This research was supported, in part, by NASA.