How far would you go for a good meal? For some of the ocean’s top predators, maintaining a decent diet requires some surprisingly long-distance dives.
MIT oceanographers have found that big fish like tuna and swordfish get a large fraction of their food from the ocean’s twilight zone — a cold and dark layer of the ocean about half a mile below the surface, where sunlight rarely penetrates. Tuna and swordfish have been known to take extreme plunges, but it was unclear whether these deep dives were for food, and to what extent the fishes’ diet depends on prey in the twilight zone.
In a study published recently in the ICES Journal of Marine Science, the MIT student-led team reports that the twilight zone is a major food destination for three predatory fish — bigeye tuna, yellowfin tuna, and swordfish. While the three species swim primarily in the shallow open ocean, the scientists found these fish are sourcing between 50 and 60 percent of their diet from the twilight zone.
The findings suggest that tuna and swordfish rely more heavily on the twilight zone than scientists had assumed. This implies that any change to the twilight zone’s food web, such as through increased fishing there, could negatively impact the valuable surface fisheries for tuna and swordfish.
“There is increasing interest in commercial fishing in the ocean’s twilight zone,” says Ciara Willis, the study’s lead author, who was a PhD student in the MIT-Woods Hole Oceanographic Institution (WHOI) Joint Program when conducting the research and is now a postdoc at WHOI. “If we start heavily fishing that layer of the ocean, our study suggests that could have profound implications for tuna and swordfish, which are very reliant on the twilight zone and are highly valuable existing fisheries.”
The study’s co-authors include Kayla Gardener of MIT-WHOI, and WHOI researchers Martin Arostegui, Camrin Braun, Leah Houghton, Joel Llopiz, Annette Govindarajan, and Simon Thorrold, along with Walt Golet at the University of Maine.
Deep-ocean buffet
The ocean’s twilight zone is a vast and dim layer that lies between the sunlit surface waters and the ocean’s permanently dark, midnight zone. Also known as the midwater, or mesopelagic layer, the twilight zone stretches between 200 and 1,000 meters below the ocean’s surface and is home to a huge variety of organisms that have adapted to live in the darkness.
“This is a really understudied region of the ocean, and it’s filled with all these fantastic, weird animals,” Willis says.
In fact, it’s estimated that the biomass of fish in the twilight zone is somewhere close to 10 billion tons, much of which is concentrated in layers at certain depths. By comparison, the marine life that lives closer to the surface, Willis says, is “a thin soup,” which is slim pickings for large predators.
“It’s important for predators in the open ocean to find concentrated layers of food. And I think that’s what drives them to be interested in the ocean’s twilight zone,” Willis says. “We call it the ‘deep ocean buffet.’”
And much of this buffet is on the move. Many kinds of fish, squid, and other deep-sea organisms in the twilight zone will swim up to the surface each night to find food. This twilight community will descend back into darkness at dawn to avoid detection.
Scientists have observed that many large predatory fish will make regular dives into the twilight zone, presumably to feast on the deep-sea bounty. For instance, bigeye tuna spend much of their day making multiple short, quick plunges into the twilight zone, while yellowfin tuna dive down every few days to weeks. Swordfish, in contrast, appear to follow the daily twilight migration, feeding on the community as it rises and falls each day.
“We’ve known for a long time that these fish and many other predators feed on twilight zone prey,” Willis says. “But the extent to which they rely on this deep-sea food web for their forage has been unclear.”
Twilight signal
For years, scientists and fishers have found remnants of fish from the twilight zone in the stomach contents of larger, surface-based predators. This suggests that predator fish do indeed feed on twilight food, such as lanternfish, certain types of squid, and long, snake-like fish called barracudina. But, as Willis notes, stomach contents give just a “snapshot” of what a fish ate that day.
She and her colleagues wanted to know how big a role twilight food plays in the general diet of predator fish. For their new study, the team collaborated with commercial fishermen based in New Jersey and Florida who fish in the open ocean. They supplied the team with small tissue samples of their catch, including samples of bigeye tuna, yellowfin tuna, and swordfish.
Willis and her advisor, Senior Scientist Simon Thorrold, brought the samples back to Thorrold’s lab at WHOI and analyzed the fish bits for essential amino acids — the key building blocks of proteins. Essential amino acids are only made by primary producers, or members of the base of the food web, such as phytoplankton, microbes, and fungi. Each of these producers makes essential amino acids with a slightly different carbon isotope configuration that then is conserved as the producers are consumed on up their respective food chains.
“One of the hypotheses we had was that we’d be able to distinguish the carbon isotopic signature of the shallow ocean, which would logically be more phytoplankton-based, versus the deep ocean, which is more microbially based,” Willis says.
The researchers figured that if a fish sample had one carbon isotopic make-up over another, it would be a sign that that fish feeds more on food from the deep, rather than shallow waters.
“We can use this [carbon isotope signature] to infer a lot about what food webs they’ve been feeding in, over the last five to eight months,” Willis says.
The team analyzed carbon isotopes in more than 120 tissue samples from bigeye tuna, yellowfin tuna, and swordfish. They found that individuals from all three species contained a substantial amount of carbon derived from sources in the twilight zone. The researchers estimate that, on average, food from the twilight zone makes up 50 to 60 percent of the diet of the three predator species, with some slight variations among species.
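The logic behind such an estimate can be sketched as a simple two-endmember mixing model: if shallow and deep food webs imprint distinct carbon isotope signatures on prey, a predator’s tissue signature falls between the two endmembers in proportion to its diet. The sketch below is illustrative only; all δ13C values are hypothetical placeholders, and the study itself used compound-specific amino acid isotope analysis with more sophisticated statistical mixing models.

```python
def twilight_fraction(sample_d13c, shallow_d13c, deep_d13c):
    """Two-endmember linear mixing: estimated fraction of diet from the deep source."""
    f = (sample_d13c - shallow_d13c) / (deep_d13c - shallow_d13c)
    return min(max(f, 0.0), 1.0)  # clamp to a valid proportion

# Hypothetical delta-13C values (per mil) -- not taken from the study.
shallow = -20.0   # phytoplankton-based shallow food web
deep = -24.0      # microbially based twilight-zone food web
sample = -22.2    # value measured in a predator's tissue

print(round(twilight_fraction(sample, shallow, deep), 2))  # -> 0.55
```

A tissue value lying a little past the midpoint between the two endmembers thus corresponds to the roughly 50-60 percent twilight-zone reliance the team reports.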
“We saw the bigeye tuna were far and away the most consistent in where they got their food from. They didn’t vary much from individual to individual,” Willis says. “Whereas the swordfish and yellowfin tuna were more variable. That means if you start having big-scale fishing in the twilight zone, the bigeye tuna might be the ones who are most at risk from food web effects.”
The researchers note there has been increased interest in commercially fishing the twilight zone. While many fish in that region are not edible for humans, they are starting to be harvested as fishmeal and fish oil products. In ongoing work, Willis and her colleagues are evaluating the potential impacts to tuna fisheries if the twilight zone becomes a target for large-scale fishing.
“If predatory fish like tunas have 50 percent reliance on twilight zone food webs, and we start heavily fishing that region, that could lead to uncertainty around the profitability of tuna fisheries,” Willis says. “So we need to be very cautious about impacts on the twilight zone and the larger ocean ecosystem.”
This work was part of the Woods Hole Oceanographic Institution’s Ocean Twilight Zone Project, funded as part of the Audacious Project housed at TED. Willis was additionally supported by the Natural Sciences and Engineering Research Council of Canada and the MIT Martin Family Society of Fellows for Sustainability.
Professor Emeritus Frederick Greene, influential chemist who focused on free radicals, dies at 97
The physical organic chemist and MIT professor for over 40 years is celebrated for his lasting impact on generations of chemists.
Frederick “Fred” Davis Greene II, professor emeritus in the MIT Department of Chemistry who was accomplished in the field of physical organic chemistry and free radicals, passed away peacefully after a brief illness, surrounded by his family, on Saturday, March 22. He had been a member of the MIT community for over 70 years.
“Greene’s dedication to teaching, mentorship, and the field of physical organic chemistry is notable,” said Professor Troy Van Voorhis, head of the Department of Chemistry, upon learning of Greene’s passing. “He was also a constant source of joy to those who interacted with him, and his commitment to students and education was legendary. He will be sorely missed.”
Greene, a native of Glen Ridge, New Jersey, was born on July 7, 1927, to parents Phillips Foster Greene and Ruth Altman Greene. He spent his early years in China, where his father was a medical missionary with Yale-In-China. Greene and his family moved to the Philippines just ahead of the Japanese invasion prior to World War II, then back to the French Concession of Shanghai, and to the United States in 1940. He joined the U.S. Navy in December 1944, and afterwards earned his bachelor’s degree from Amherst College in 1949 and a PhD from Harvard University in 1952. Following a year at the University of California at Los Angeles as a research associate, he was appointed a professor of chemistry at MIT by then-Department Head Arthur C. Cope in 1953. Greene retired in 1995.
Greene’s research focused on peroxide decompositions and free radical chemistry, and he reported the remarkable bimolecular reaction between certain diacyl peroxides and electron-rich olefins and aromatics. He was also interested in small-ring heterocycles, e.g., the three-membered ring 2,3-diaziridinones. His research also covered strained olefins, the Greene-Viavattene diene, and 9,9′,10,10′-tetradehydrodianthracene.
Greene was elected to the American Academy of Arts and Sciences in 1965 and received an honorary doctorate from Amherst College for his research in free radicals. He served as editor-in-chief of the Journal of Organic Chemistry of the American Chemical Society from 1962 to 1988. He was awarded a special fellowship from the National Science Foundation, spent a year at the University of Cambridge in England, and was a member of the Chemical Society of London.
Greene and Professor James Moore of the University of Philadelphia worked closely with Greene’s wife, Theodora “Theo” W. Greene, in the conversion of her PhD thesis, which was overseen by Professor Elias J. Corey of Harvard University, into her book “Greene’s Protective Groups in Organic Synthesis.” The book became an indispensable reference for any practicing synthetic organic or medicinal chemist and is now in its fifth edition. Theo, who predeceased Fred in July 2005, was a tremendous partner to Greene, both personally and professionally. A careful researcher in her own right, she served as associate editor of the Journal of Organic Chemistry for many years.
Fred Greene was recently featured in a series of video interviews, spearheaded by Professor Rick Danheiser, with Professor Emeritus Dietmar Seyferth (who passed away in 2020). The videos cover a range of topics, including Seyferth and Greene’s memories during the 1950s to mid-1970s of their fellow faculty members, how they came to be hired, the construction of various lab spaces, developments in teaching and research, the evolution of the department’s graduate program, and much more.
Danheiser notes that it was a privilege to share responsibility for the undergraduate class 5.43 (Advanced Organic Chemistry) with Greene. “Fred Greene was a fantastic teacher and inspired several generations of MIT undergraduate and graduate students with his superb lectures,” Danheiser recalls. The course they shared was Danheiser’s first teaching assignment at MIT, and he states that Greene’s “counsel and mentoring was invaluable to me.”
The Department of Chemistry recognized Greene’s contributions to its academic program by naming the annual student teaching award the “Frederick D. Greene Teaching Award.” This award recognizes outstanding contributions in teaching in chemistry by undergraduates. Since 1993 the award has been given to 46 students.
Dabney White Dixon PhD ’76 was one of many students with whom Greene formed a lifelong friendship and mentorship. Dixon shares, “Fred Greene was an outstanding scientist — intelligent, ethical, and compassionate in every aspect of his life. He possessed an exceptional breadth of knowledge in organic chemistry, particularly in mechanistic organic chemistry, as evidenced by his long tenure as editor of the Journal of Organic Chemistry (1962 to 1988). Weekly, large numbers of manuscripts flowed through his office. He had an acute sense of fairness in evaluating submissions and was helpful to those submitting manuscripts. His ability to navigate conflicting scientific viewpoints was especially evident during the heated debates over non-classical carbonium ions in the 1970s.
“Perhaps Fred’s greatest contribution to science was his mentorship. At a time when women were rare in chemistry PhD programs, Fred’s mentorship was particularly meaningful. I was the first woman in my scientific genealogical lineage to study chemistry, and his guidance gave me the confidence to overcome challenges. He and Theo provided a supportive and joyful environment, helping me forge a career in academia where I have since mentored 13 PhD students — an even mix of men and women — a testament to the social progress in science that Fred helped foster.
“Fred’s meticulous attention to detail was legendary. He insisted that every new molecule be fully characterized spectroscopically before he would examine the data. Through this, his students learned the importance of thoroughness, accuracy, and organization. He was also an exceptional judge of character, entrusting students with as much responsibility as they could handle. His honesty was unwavering — he openly acknowledged mistakes, setting a powerful example for his students.
“Shortly before the pandemic, I had the privilege of meeting Fred with two of his scientific ‘granddaughters’ — Elizabeth Draganova, then a postdoc at Tufts (now an assistant professor at Emory), and Cyrianne Keutcha, then a graduate student at Harvard (now a postdoc at Yale). As we discussed our work, it was striking how much science had evolved — from IR and NMR of small-ring heterocycles to surface plasmon resonance and cryo-electron microscopy of large biochemical systems. Yet, Fred’s intellectual curiosity remained as sharp as ever. His commitment to excellence, attention to detail, and passion for uncovering chemical mechanisms lived on in his scientific descendants.
“He leaves a scientific legacy of chemists who internalized his lessons on integrity, kindness, and rigorous analysis, carrying them forward to their own students and research. His impact on the field of chemistry — and on the lives of those fortunate enough to have known him — will endure.”
Carl Renner PhD ’74 felt fortunate and privileged to be a doctoral student in the Greene group from 1969 to 1973, and also his teaching assistant for his 5.43 course. Renner recalls, “He possessed a curious mind of remarkable clarity and discipline. He prepared his lectures meticulously and loved his students. He was extremely generous with his time and knowledge. I never heard him complain or say anything unkind. Everyone he encountered came away better for it.”
Gary Breton PhD ’91 credits the development of his interest in physical organic chemistry to his time spent in Greene’s class. Breton says, “During my time in the graduate chemistry program at MIT (1987-91) I had the privilege of learning from some of the world’s greatest minds in chemistry, including Dr. Fred Greene. At that time, all incoming graduate students in organic chemistry were assigned in small groups to a seminar-type course that met each week to work on the elucidation of reaction mechanisms, and I was assigned to Dr. Greene’s class. It was here that not only did Dr. Greene afford me a confidence in how to approach reaction mechanisms, but he also ignited my fascination with physical organic chemistry. I was only too happy to join his research group, and begin a love/hate relationship with reactive nitrogen-containing heterocycles that continues to this day in my own research lab as a chemistry professor.
“Anyone that knew Dr. Greene quickly recognized that he was highly intelligent and exceptionally knowledgeable about all things organic, but under his mentorship I also saw his creativity and cleverness. Beyond that, and even more importantly, I witnessed his kindness and generosity, and his subtle sense of humor. Dr. Greene’s enduring legacy is the large number of undergraduate students, graduate students, and postdocs whose lives he touched over his many years. He will be greatly missed.”
John Dolhun PhD ’73 recalls Greene’s love for learning, and that he “was one of the kindest persons that I have known.” Dolhun shares, “I met Fred Greene when I was a graduate student. His organic chemistry course was one of the most popular, and he was a top choice for many students’ thesis committees. When I returned to MIT in 2008 and reconnected with him, he was still endlessly curious — always learning, asking questions. A few years ago, he visited me and we had lunch. Back at the chemistry building, I reached for the elevator button and he said, ‘I always walk up the five flights of stairs.’ So, I walked up with him. Fred knew how to keep both mind and body in shape. He was truly a beacon of light in the department.”
Liz McGrath, retired chemistry staff member, warmly recalls the regular coffees and conversations she shared with Fred over two decades at the Institute. She shares, “Fred, who was already emeritus by the time of my arrival, imparted to me a deep interest in the history of MIT Chemistry’s events and colorful faculty. He had a phenomenal memory, which made his telling of the history so rich in its content. He was a true gentleman and sweet and kind to boot. ... I will remember him with much fondness.”
Greene is survived by his children, Alan, Carol, Elizabeth, and Phillips; nine grandchildren; and six great grandchildren. A memorial service will be held on April 5 at 11 a.m. at the First Congregational Church in Winchester, Massachusetts.
Collaboration between MIT and GE Vernova aims to develop and scale sustainable energy systems
The MIT-GE Vernova Energy and Climate Alliance includes research, education, and career opportunities across the Institute.
MIT and GE Vernova today announced the creation of the MIT-GE Vernova Energy and Climate Alliance to help develop and scale sustainable energy systems across the globe.
The alliance launches a five-year collaboration between MIT and GE Vernova, a global energy company that spun off from General Electric’s energy business in 2024. The endeavor will encompass research, education, and career opportunities for students, faculty, and staff across MIT’s five schools and the MIT Schwarzman College of Computing. It will focus on three main themes: decarbonization, electrification, and renewables acceleration.
“This alliance will provide MIT students and researchers with a tremendous opportunity to work on energy solutions that could have real-world impact,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer and dean of the School of Engineering. “GE Vernova brings domain knowledge and expertise deploying these at scale. When our researchers develop new innovative technologies, GE Vernova is strongly positioned to bring them to global markets.”
Through the alliance, GE Vernova is sponsoring research projects at MIT and providing philanthropic support for MIT research fellowships. The company will also engage with MIT’s community through participation in corporate membership programs and professional education.
“It’s a privilege to combine forces with MIT’s world-class faculty and students as we work together to realize an optimistic, innovation-driven approach to solving the world’s most pressing challenges,” says Scott Strazik, GE Vernova CEO. “Through this alliance, we are proud to be able to help drive new technologies while at the same time inspire future leaders to play a meaningful role in deploying technology to improve the planet at companies like GE Vernova.”
“This alliance embodies the spirit of the MIT Climate Project — combining cutting-edge research, a shared drive to tackle today’s toughest energy challenges, and a deep sense of optimism about what we can achieve together,” says Sally Kornbluth, president of MIT. “With the combined strengths of MIT and GE Vernova, we have a unique opportunity to make transformative progress in the flagship areas of electrification, decarbonization, and renewables acceleration.”
The alliance, comprising a $50 million commitment, will operate within MIT’s Office of Innovation and Strategy. It will fund approximately 12 annual research projects relating to the three themes, as well as three master’s student projects in MIT’s Technology and Policy Program. The research projects will address challenges like developing and storing clean energy, as well as creating robust system architectures that help sustainable energy sources like solar, wind, advanced nuclear reactors, green hydrogen, and more compete with carbon-emitting sources.
The projects will be selected by a joint steering committee composed of representatives from MIT and GE Vernova, following an annual Institute-wide call for proposals.
The collaboration will also create approximately eight endowed GE Vernova research fellowships for MIT students, to be selected by faculty and beginning in the fall. In addition, 10 student internships will span GE Vernova’s global operations, and GE Vernova will sponsor programming through MIT’s New Engineering Education Transformation (NEET), which equips students with career-oriented experiential opportunities. The alliance will also create professional education programming for GE Vernova employees.
“The internships and fellowships will be designed to bring students into our ecosystem,” says GE Vernova Chief Corporate Affairs Officer Roger Martella. “Students will walk our factory floor, come to our labs, be a part of our management teams, and see how we operate as business leaders. They’ll get a sense for how what they’re learning in the classroom is being applied in the real world.”
Philanthropic support from GE Vernova will also support projects in MIT’s Human Insight Collaborative (MITHIC), which launched last fall to elevate human-centered research and teaching. The projects will allow faculty to explore how areas like energy and cybersecurity influence human behavior and experiences.
In connection with the alliance, GE Vernova is expected to join several MIT consortia and membership programs, helping foster collaborations and dialogue between industry experts and researchers and educators across campus.
With operations across more than 100 countries, GE Vernova designs, manufactures, and services technologies to generate, transfer, and store electricity with a mission to decarbonize the world. The company is headquartered in Kendall Square, right down the road from MIT, which its leaders say is not a coincidence.
“We’re really good at taking proven technologies and commercializing them and scaling them up through our labs,” Martella says. “MIT excels at coming up with those ideas and being a sort of time machine that thinks outside the box to create the future. That’s why this is such a great fit: We both have a commitment to research, innovation, and technology.”
The alliance is the latest in MIT’s rapidly growing portfolio of research and innovation initiatives around sustainable energy systems, which also includes the Climate Project at MIT. Separate from, but complementary to, the MIT-GE Vernova Alliance, the Climate Project is a campus-wide effort to develop technological, behavioral, and policy solutions to some of the toughest problems impeding an effective global climate response.
MIT affiliates named 2024 AAAS Fellows
The American Association for the Advancement of Science recognizes six current affiliates and 27 additional MIT alumni for their efforts to advance science and related fields.
Six current MIT affiliates and 27 additional MIT alumni have been elected as fellows of the American Association for the Advancement of Science (AAAS).
The 2024 class of AAAS Fellows includes 471 scientists, engineers, and innovators, spanning all 24 of AAAS disciplinary sections, who are being recognized for their scientifically and socially distinguished achievements.
Noubar Afeyan PhD ’87, life member of the MIT Corporation, was named a AAAS Fellow “for outstanding leadership in biotechnology, in particular mRNA therapeutics, and for advocacy for recognition of the contributions of immigrants to economic and scientific progress.” Afeyan is the founder and CEO of the venture creation company Flagship Pioneering, which has built over 100 science-based companies to transform human health and sustainability. He is also the chairman and cofounder of Moderna, which was awarded a 2024 National Medal of Technology and Innovation for the development of its Covid-19 vaccine. Afeyan earned his PhD in biochemical engineering at MIT in 1987 and was a senior lecturer at the MIT Sloan School of Management for 16 years, starting in 2000. Among other activities at the Institute, he serves on the advisory board of the MIT Abdul Latif Jameel Clinic for Machine Learning and delivered MIT’s 2024 Commencement address.
Cynthia Breazeal SM ’93, ScD ’00 is a professor of media arts and sciences at MIT, where she founded and directs the Personal Robots group in the MIT Media Lab. At MIT Open Learning, she is the MIT dean for digital learning, and in this role, she leverages her experience in emerging digital technologies and business, research, and strategic initiatives to lead Open Learning’s business and research and engagement units. She is also the director of the MIT-wide Initiative on Responsible AI for Social Empowerment and Education (raise.mit.edu). She co-founded the consumer social robotics company, Jibo, Inc., where she served as chief scientist and chief experience officer. She is recognized for distinguished contributions in the field of artificial intelligence education, particularly around the use of social robots, and learning at scale.
Alan Edelman PhD ’89 is an applied mathematics professor for the Department of Mathematics and leads the Applied Computing Group of the Computer Science and Artificial Intelligence Laboratory, the MIT Julia Lab. He is recognized as a 2024 AAAS fellow for distinguished contributions and outstanding breakthroughs in high-performance computing, linear algebra, random matrix theory, computational science, and in particular for the development of the Julia programming language. Edelman has been elected a fellow of five different societies — AMS, the Society for Industrial and Applied Mathematics, the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, and AAAS.
Robert B. Millard ’73, life member and chairman emeritus of the MIT Corporation, was named a 2024 AAAS Fellow for outstanding contributions to the scientific community and U.S. higher education “through exemplary leadership service to such storied institutions as AAAS and MIT.” Millard joined the MIT Corporation as a term member in 2003 and was elected a life member in 2013. He served on the Executive Committee for 10 years and on the Investment Company Management Board for seven years, including serving as its chair for the last four years. He served as a member of the Visiting Committees for Physics, Architecture, and Chemistry. In addition, Millard has served as a member of the Linguistics and Philosophy Visiting Committee, the Corporation Development Committee, and the Advisory Council for the Council for the Arts. In 2011, Millard received the Bronze Beaver Award, the MIT Alumni Association’s highest honor for distinguished service.
Jagadeesh S. Moodera is a senior research scientist in the Department of Physics. His research interests include experimental condensed matter physics: spin polarized tunneling and nano spintronics; exchange coupled ferromagnet/superconductor interfaces, triplet pairing, nonreciprocal current transport and memory toward superconducting spintronics for quantum technology; and topological insulators/superconductors, including Majorana bound state studies in metallic systems. His research in the area of spin polarized tunneling led to a breakthrough in observing tunnel magnetoresistance (TMR) at room temperature in magnetic tunnel junctions. This resulted in a huge surge of research in the area, which remains one of the most active in the field. The TMR effect is used in all ultra-high-density magnetic data storage, as well as for the development of nonvolatile magnetic random access memory (MRAM) that is currently being advanced further in various electronic devices, including for neuromorphic computing architecture. For his leadership in spintronics, the discovery of TMR, the development of MRAM, and for mentoring the next generation of scientists, Moodera was named a 2024 AAAS Fellow. For his TMR discovery he was awarded the Oliver Buckley Prize (2009) by the American Physical Society (APS), named an American National Science Foundation Competitiveness and Innovation Fellow (2008-10), won IBM and TDK Research Awards (1995-98), and became a Fellow of APS (2000).
Noelle Eckley Selin, the director of the MIT Center for Sustainability Science and Strategy and a professor in the Institute for Data, Systems and Society and the Department of Earth, Atmospheric and Planetary Sciences, uses atmospheric chemistry modeling to inform decision-making strategies on air pollution, climate change, and toxic substances, including mercury and persistent organic pollutants. She has also published articles and book chapters on the interactions between science and policy in international environmental negotiations, in particular focusing on global efforts to regulate hazardous chemicals and persistent organic pollutants. She is named a 2024 AAAS Fellow for world-recognized leadership in modeling the impacts of air pollution on human health, in assessing the costs and benefits of related policies, and in integrating technology dynamics into sustainability science.
Additional MIT alumni honored as 2024 AAAS Fellows include: Danah Boyd SM ’02 (Media Arts and Sciences); Michael S. Branicky ScD ’95 (EECS); Jane P. Chang SM ’95, PhD ’98 (Chemical Engineering); Yong Chen SM ’99 (Mathematics); Roger Nelson Clark PhD ’80 (EAPS); Mark Stephen Daskin ’74, PhD ’78 (Civil and Environmental Engineering); Marla L. Dowell PhD ’94 (Physics); Raissa M. D’Souza PhD ’99 (Physics); Cynthia Joan Ebinger SM ’86, PhD ’88 (EAPS/WHOI); Thomas Henry Epps III ’98, SM ’99 (Chemical Engineering); Daniel Goldman ’94 (Physics); Kenneth Keiler PhD ’96 (Biology); Karen Jean Meech PhD ’87 (EAPS); Christopher B. Murray PhD ’95 (Chemistry); Jason Nieh ’89 (EECS); William Nordhaus PhD ’67 (Economics); Milica Radisic PhD ’04 (Chemical Engineering); James G. Rheinwald PhD ’76 (Biology); Adina L. Roskies PhD ’04 (Philosophy); Linda Rothschild (Preiss) PhD ’70 (Mathematics); Soni Lacefield Shimoda PhD ’03 (Biology); Dawn Y. Sumner PhD ’95 (EAPS); Tina L. Tootle PhD ’04 (Biology); Karen Viskupic PhD ’03 (EAPS); Brant M. Weinstein PhD ’92 (Biology); Chee Wei Wong SM ’01, ScD ’03 (Mechanical Engineering); and Fei Xu PhD ’95 (Brain and Cognitive Sciences).
Professor Emeritus Earle Lomon, nuclear theorist, dies at 94
On the physics faculty for nearly 40 years and a member of the Center for Theoretical Physics, he focused on the interactions of hadrons and developed an R-matrix formulation of scattering theory.
Earle Leonard Lomon PhD ’54, MIT professor emeritus of physics, died on March 7 in Newton, Massachusetts, at the age of 94.
A longtime member of the Center for Theoretical Physics, Lomon was interested primarily in the forces between protons and neutrons at low energies, where the effects of quarks and gluons are hidden by their confinement.
His research focused on the interactions of hadrons — protons, neutrons, mesons, and nuclei — before it was understood that they were composed of quarks and gluons.
“Earle developed an R-matrix formulation of scattering theory that allowed him to separate known effects at long distance from then-unknown forces at short distances,” says longtime colleague Robert Jaffe, the Jane and Otto Morningstar Professor of Physics.
“When QCD [quantum chromodynamics] emerged as the correct field theory of hadrons, Earle moved quickly to incorporate the effects of quarks and gluons at short distance and high energies,” says Jaffe. “Earle’s work can be interpreted as a precursor to modern chiral effective field theory, where the pertinent degrees of freedom at low energy, which are hadrons, are matched smoothly onto the quark and gluon degrees of freedom that dominate at higher energy.”
“He was a truly cosmopolitan scientist, given his open mind and deep kindness,” says Bruno Coppi, MIT professor emeritus of physics.
Early years
Born Nov. 15, 1930, in Montreal, Quebec, Earle was the only son of Harry Lomon and Etta Rappaport. At Montreal High School, he met his future wife, Ruth Jones. Their shared love for classical music drew them both to the school's Classical Music Club, where Lomon served as president and Ruth was an accomplished musician.
While studying at McGill University, he was a research physicist for the Canada Defense Research Board from 1950 to 1951. After graduating in 1951, he married Jones, and they moved to Cambridge, where he pursued his doctorate at MIT in theoretical physics, mentored by Professor Hermann Feshbach.
Lomon spent 1954 to 1955 at the Institute for Theoretical Physics (now the Niels Bohr Institute) in Copenhagen. “With the presence of Niels Bohr, Aage Bohr, Ben Mottelson, and Willem V.R. Malkus, there were many physicists from Europe and elsewhere, including MIT’s Dave Frisch, making the Institute for Physics an exciting place to be,” recalled Lomon.
He received his PhD from MIT in 1954 and did postdoctoral work at the Institute for Theoretical Physics in Copenhagen, the Weizmann Institute of Science in Israel, and Cornell University, where he was a research associate at the Laboratory for Nuclear Studies in 1956-57. He was an associate professor at McGill from 1957 until 1960, when he joined the MIT faculty.
In 1965, Lomon was awarded a Guggenheim Memorial Foundation Fellowship and was a visiting scientist at CERN. In 1968, he joined the newly formed MIT Center for Theoretical Physics. He became a full professor in 1970 and retired in 1999.
Los Alamos and math theory
From 1968 to 2015, Lomon was an affiliate researcher at the Los Alamos National Laboratory. During this time, he collaborated with Fred Begay, a Navajo nuclear physicist and medicine man. New Mexico became the Lomon family’s second home, and Lomon enjoyed hiking the area’s trails and climbing Baldy Mountain.
Lomon also developed educational materials for mathematics, including textbooks, educational tools, research, and a creative problem-solving curriculum for the Unified Science and Mathematics for Elementary Schools program. His children recall Earle reviewing the educational tools with them at the dinner table. From 2001 to 2013, he was program director for mathematical theory for the U.S. National Science Foundation’s Theoretical Physics research hub.
Lomon was an American Physical Society Fellow and a member of the Canadian Association of Physicists.
Husband of the late Ruth Lomon, he is survived by his daughters Glynis Lomon and Deirdre Lomon; his son, Dylan Lomon; grandchildren Devin Lomon, Alexia Layne-Lomon, and Benjamin Garner; and six great-grandchildren. There will be a memorial service at a later date; instead of flowers, please consider donating to the Los Alamos National Laboratory Foundation.
Mathematicians uncover the logic behind how people walk in crowds
The findings could help planners design safer, more efficient pedestrian thoroughfares.
Next time you cross a crowded plaza, crosswalk, or airport concourse, take note of the pedestrian flow. Are people walking in orderly lanes, single-file, to their respective destinations? Or is it a haphazard tangle of personal trajectories, as people dodge and weave through the crowd?
MIT instructor Karol Bacik and his colleagues studied the flow of human crowds and developed a first-of-its-kind way to predict when pedestrian paths will transition from orderly to entangled. Their findings may help inform the design of public spaces that promote safe and efficient thoroughfares.
In a paper appearing this week in the Proceedings of the National Academy of Sciences, the researchers consider a common scenario in which pedestrians navigate a busy crosswalk. The team analyzed the scenario through mathematical analysis and simulations, considering the many angles at which individuals may cross and the dodging maneuvers they may make as they attempt to reach their destinations while avoiding bumping into other pedestrians along the way.
The researchers also carried out controlled crowd experiments and studied how real participants walked through a crowd to reach certain locations. Through their mathematical and experimental work, the team identified a key measure that determines whether pedestrian traffic is ordered, such that clear lanes form in the flow, or disordered, in which there are no discernible paths through the crowd. Called “angular spread,” this parameter describes how widely the walking directions of people in the crowd vary.
If a crowd has a relatively small angular spread, this means that most pedestrians walk in opposite directions and meet the oncoming traffic head-on, such as in a crosswalk. In this case, more orderly, lane-like traffic is likely. If, however, a crowd has a larger angular spread, such as in a concourse, it means there are many more directions that pedestrians can take to cross, with more chance for disorder.
In fact, the researchers calculated the point at which a moving crowd can transition from order to disorder. That point, they found, was an angular spread of around 13 degrees: if the average pedestrian veers more than 13 degrees away from walking straight across, the crowd can tip into disordered flow.
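To make the criterion concrete, here is a small sketch of how one might score a crowd against the 13-degree transition. The definition of angular spread used below — the mean deviation of each walker’s heading from the nearest of the two principal crossing directions — is an illustrative proxy, not necessarily the paper’s exact formula.

```python
ORDER_THRESHOLD_DEG = 13.0  # transition point reported in the study

def _circ_dist(a, b):
    """Smallest angular distance between two headings, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def angular_spread(headings_deg):
    """Mean deviation of each heading from the nearest of the two
    principal crossing directions (0 or 180 degrees)."""
    devs = [min(_circ_dist(h, 0.0), _circ_dist(h, 180.0)) for h in headings_deg]
    return sum(devs) / len(devs)

def predicts_lanes(headings_deg):
    """True if the crowd sits below the ~13-degree transition, where
    ordered, lane-forming flow is expected."""
    return angular_spread(headings_deg) < ORDER_THRESHOLD_DEG

print(predicts_lanes([5, -8, 175, 182, 3]))  # mostly head-on traffic: True
print(predicts_lanes([40, -35, 140, 210]))   # wide mix of angles: False
```

Headings measured from, say, overhead tracking could be fed straight into such a classifier to label a given flow as ordered or disordered.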
“This all is very commonsense,” says Bacik, who is an instructor of applied mathematics at MIT. “The question is whether we can tackle it precisely and mathematically, and where the transition is. Now we have a way to quantify when to expect lanes — this spontaneous, organized, safe flow — versus disordered, less efficient, potentially more dangerous flow.”
The study’s co-authors include Grzegorz Sobota and Bogdan Bacik of the Academy of Physical Education in Katowice, Poland, and Tim Rogers at the University of Bath in the United Kingdom.
Right, left, center
Bacik, who is trained in fluid dynamics and granular flow, came to study pedestrian flow during 2021, when he and his collaborators looked into the impacts of social distancing, and ways in which people might walk among each other while maintaining safe distances. That work inspired them to look more generally into the dynamics of crowd flow.
In 2023, he and his collaborators explored “lane formation,” a phenomenon by which particles, grains, and, yes, people have been observed to spontaneously form lanes, moving single-file when forced to cross a region from two opposite directions. In that work, the team identified the mechanism by which such lanes form, which Bacik sums up as “an imbalance of turning left versus right.” Essentially, they found that as soon as something in a crowd starts to look like a lane, individuals around that fledgling lane either join it or are forced to one side of it, walking parallel to the original lane in paths that others can follow. In this way, a crowd can spontaneously organize into regular, structured lanes.
“Now we’re asking, how robust is this mechanism?” Bacik says. “Does it only work in this very idealized situation, or can lane formation tolerate some imperfections, such as some people not going perfectly straight, as they might do in a crowd?”
Lane change
For their new study, the team looked to identify a key transition in crowd flow: When do pedestrians switch from orderly, lane-like traffic, to less organized, messy flow? The researchers first probed the question mathematically, with an equation that is typically used to describe fluid flow, in terms of the average motion of many individual molecules.
“If you think about the whole crowd flowing, rather than individuals, you can use fluid-like descriptions,” Bacik explains. “It’s this art of averaging, where, even if some people may cross more assertively than others, these effects are likely to average out in a sufficiently large crowd. If you only care about the global characteristics like, are there lanes or not, then you can make predictions without detailed knowledge of everyone in the crowd.”
Bacik and his colleagues used equations of fluid flow, and applied them to the scenario of pedestrians flowing across a crosswalk. The team tweaked certain parameters in the equation, such as the width of the fluid channel (in this case, the crosswalk), and the angle at which molecules (or people) flowed across, along with various directions that people can “dodge,” or move around each other to avoid colliding.
Based on these calculations, the researchers found that pedestrians in a crosswalk are more likely to form lanes when they walk relatively straight across, from opposite directions. This order largely holds until people start veering across at more extreme angles. Then, the equation predicts that the pedestrian flow is likely to be disordered, with few to no lanes forming.
The researchers were curious to see whether the math bears out in reality. For this, they carried out experiments in a gymnasium, where they recorded the movements of pedestrians using an overhead camera. Each volunteer wore a paper hat, depicting a unique barcode that the overhead camera could track.
In their experiments, the team assigned volunteers various start and end positions along opposite sides of a simulated crosswalk, and tasked them with simultaneously walking across the crosswalk to their target location without bumping into anyone. They repeated the experiment many times, each time having volunteers assume different start and end positions. In the end, the researchers were able to gather visual data of multiple crowd flows, with pedestrians taking many different crossing angles.
When they analyzed the data and noted when lanes spontaneously formed, and when they did not, the team found that, much like the equation predicted, the angular spread mattered. Their experiments confirmed that the transition from ordered to disordered flow occurred somewhere around the theoretically predicted 13 degrees. That is, if an average person veered more than 13 degrees away from straight ahead, the pedestrian flow could tip into disorder, with little lane formation. What’s more, they found that the more disorder there is in a crowd, the less efficiently it moves.
The team plans to test their predictions on real-world crowds and pedestrian thoroughfares.
“We would like to analyze footage and compare that with our theory,” Bacik says. “And we can imagine that, for anyone designing a public space, if they want to have a safe and efficient pedestrian flow, our work could provide a simpler guideline, or some rules of thumb.”
This work is supported, in part, by the Engineering and Physical Sciences Research Council of UK Research and Innovation.
MIT scientists engineer starfish cells to shape-shift in response to light
The research may enable the design of synthetic, light-activated cells for wound healing or drug delivery.
Life takes shape with the motion of a single cell. In response to signals from certain proteins and enzymes, a cell can start to move and shake, leading to contractions that cause it to squeeze, pinch, and eventually divide. As daughter cells follow suit down the generational line, they grow, differentiate, and ultimately arrange themselves into a fully formed organism.
Now MIT scientists have used light to control how a single cell jiggles and moves during its earliest stage of development. The team studied the motion of egg cells produced by starfish — an organism that scientists have long used as a classic model for understanding cell growth and development.
The researchers focused on a key enzyme that triggers a cascade of motion within a starfish egg cell. They genetically designed a light-sensitive version of the same enzyme, which they injected into egg cells, and then stimulated the cells with different patterns of light.
They found that the light successfully triggered the enzyme, which in turn prompted the cells to jiggle and move in predictable patterns. For instance, the scientists could stimulate cells to exhibit small pinches or sweeping contractions, depending on the pattern of light they applied. They could even shine light at specific points around a cell to stretch its shape from a circle to a square.
Their results, appearing today in the journal Nature Physics, provide scientists with a new optical tool for controlling cell shape in its earliest developmental stages. Such a tool, they envision, could guide the design of synthetic cells, such as therapeutic “patch” cells that contract in response to light signals to help close wounds, or drug-delivering “carrier” cells that release their contents only when illuminated at specific locations in the body. Overall, the researchers see their findings as a new way to probe how life takes shape from a single cell.
“By revealing how a light-activated switch can reshape cells in real time, we’re uncovering basic design principles for how living systems self-organize and evolve shape,” says the study’s senior author, Nikta Fakhri, associate professor of physics at MIT. “The power of these tools is that they are guiding us to decode all these processes of growth and development, to help us understand how nature does it.”
The study’s MIT authors include first author Jinghui Liu, Yu-Chen Chao, and Tzer Han Tan; along with Tom Burkart, Alexander Ziepke, and Erwin Frey of Ludwig Maximilian University of Munich; John Reinhard of Saarland University; and S. Zachary Swartz of the Whitehead Institute for Biomedical Research.
Cell circuitry
Fakhri’s group at MIT studies the physical dynamics that drive cell growth and development. She is particularly interested in symmetry, and the processes that govern how cells follow or break symmetry as they grow and divide. The five-limbed starfish, she says, is an ideal organism for exploring such questions of growth, symmetry, and early development.
“A starfish is a fascinating system because it starts with a symmetrical cell and becomes a bilaterally symmetric larva at early stages, and then develops into pentameral adult symmetry,” Fakhri says. “So there’s all these signaling processes that happen along the way to tell the cell how it needs to organize.”
Scientists have long studied the starfish and its various stages of development. Among many revelations, researchers have discovered a key “circuitry” within a starfish egg cell that controls its motion and shape. This circuitry involves an enzyme, GEF, that naturally circulates in a cell’s cytoplasm. When this enzyme is activated, it induces a change in a protein, called Rho, that is known to be essential for regulating cell mechanics.
When the GEF enzyme stimulates Rho, it causes the protein to switch from an essentially free-floating state to a state that binds the protein to the cell’s membrane. In this membrane-bound state, the protein then triggers the growth of microscopic, muscle-like fibers that thread out across the membrane and subsequently twitch, enabling the cell to contract and move.
In previous work, Fakhri’s group showed that a cell’s movements can be manipulated by varying the cell’s concentrations of GEF enzyme: The more enzyme they introduced into a cell, the more contractions the cell would exhibit.
“This whole idea made us think whether it’s possible to hack this circuitry, to not just change a cell’s pattern of movements but get a desired mechanical response,” Fakhri says.
Lights and action
To precisely manipulate a cell’s movements, the team looked to optogenetics — an approach that involves genetically engineering cells and cellular components such as proteins and enzymes so that they activate in response to light.
Using established optogenetic techniques, the researchers developed a light-sensitive version of the GEF enzyme. From this engineered enzyme, they isolated its mRNA — essentially, the genetic blueprint for building the enzyme. They then injected this blueprint into egg cells that the team harvested from a single starfish ovary, which can hold millions of unfertilized cells. The cells, infused with the new mRNA, then began to produce light-sensitive GEF enzymes on their own.
In experiments, the researchers then placed each enzyme-infused egg cell under a microscope and shone light onto the cell in different patterns and from different points along the cell’s periphery. They took videos of the cell’s movements in response.
They found that when they aimed the light at specific points, the GEF enzyme became activated and recruited Rho protein to the light-targeted sites. There, the protein then set off its characteristic cascade of muscle-like fibers that pulled or pinched the cell in the same, light-stimulated spots. Much like pulling the strings of a marionette, they were able to control the cell’s movements, for instance directing it to morph into various shapes, including a square.
Surprisingly, they also found they could stimulate the cell to undergo sweeping contractions by shining light on a single spot, once enough enzyme accumulated there to exceed a certain concentration threshold.
“We realized this Rho-GEF circuitry is an excitable system, where a small, well-timed stimulus can trigger a large, all-or-nothing response,” Fakhri says. “So we can either illuminate the whole cell, or just a tiny place on the cell, such that enough enzyme is recruited to that region so the system gets kickstarted to contract or pinch on its own.”
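This all-or-nothing behavior is the signature of excitable dynamics in general. As an illustration — using the textbook FitzHugh-Nagumo model of excitability, not the authors’ Rho-GEF circuitry — the sketch below shows how a brief stimulus below threshold barely perturbs the resting state, while one above threshold triggers a large excursion:

```python
def peak_response(amplitude, dt=0.01, steps=5000):
    """Euler-integrate the FitzHugh-Nagumo model from rest and return the
    peak of the fast variable v. A square stimulus of the given amplitude
    is applied for t in [1, 2)."""
    a, b, eps = 0.7, 0.8, 0.08       # standard textbook parameters
    v, w = -1.1994, -0.6243          # resting state for these parameters
    peak = v
    for i in range(steps):
        t = i * dt
        stim = amplitude if 1.0 <= t < 2.0 else 0.0
        dv = v - v**3 / 3.0 - w + stim
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        peak = max(peak, v)
    return peak

print(peak_response(0.05))  # subthreshold: v stays near rest
print(peak_response(1.0))   # suprathreshold: full all-or-nothing spike
```

The same qualitative threshold logic is what lets a small, localized light pulse kick off a cell-wide contraction.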
The researchers compiled their observations and derived a theoretical framework to predict how a cell’s shape will change, given how it is stimulated with light. The framework, Fakhri says, opens a window into “the ‘excitability’ at the heart of cellular remodeling, which is a fundamental process in embryo development and wound healing.”
She adds: “This work provides a blueprint for designing ‘programmable’ synthetic cells, letting researchers orchestrate shape changes at will for future biomedical applications.”
This work was supported, in part, by the Sloan Foundation, and the National Science Foundation.
Device enables direct communication among multiple quantum processors
MIT researchers developed a photon-shuttling “interconnect” that can facilitate remote entanglement, a key step toward a practical quantum computer.
Quantum computers have the potential to solve complex problems that would be impossible for the most powerful classical supercomputer to crack.
Just like a classical computer has separate, yet interconnected, components that must work together, such as a memory chip and a CPU on a motherboard, a quantum computer will need to communicate quantum information between multiple processors.
Current architectures used to interconnect superconducting quantum processors are “point-to-point” in connectivity, meaning they require a series of transfers between network nodes, with compounding error rates.
On the way to overcoming these challenges, MIT researchers developed a new interconnect device that can support scalable, “all-to-all” communication, such that all superconducting quantum processors in a network can communicate directly with each other.
They created a network of two quantum processors and used their interconnect to send microwave photons back and forth on demand in a user-defined direction. Photons are particles of light that can carry quantum information.
The device includes a superconducting wire, or waveguide, that shuttles photons between processors and can be routed as far as needed. The researchers can couple any number of modules to it, efficiently transmitting information between a scalable network of processors.
They used this interconnect to demonstrate remote entanglement, a type of correlation between quantum processors that are not physically connected. Remote entanglement is a key step toward developing a powerful, distributed network of many quantum processors.
“In the future, a quantum computer will probably need both local and nonlocal interconnects. Local interconnects are natural in arrays of superconducting qubits. Ours allows for more nonlocal connections. We can send photons at different frequencies, times, and in two propagation directions, which gives our network more flexibility and throughput,” says Aziza Almanakly, an electrical engineering and computer science graduate student in the Engineering Quantum Systems group of the Research Laboratory of Electronics (RLE) and lead author of a paper on the interconnect.
Her co-authors include Beatriz Yankelevich, a graduate student in the EQuS Group; senior author William D. Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science (EECS) and professor of Physics, director of the Center for Quantum Engineering, and associate director of RLE; and others at MIT and Lincoln Laboratory. The research appears today in Nature Physics.
A scalable architecture
The researchers previously developed a quantum computing module, which enabled them to send information-carrying microwave photons in either direction along a waveguide.
In the new work, they took that architecture a step further by connecting two modules to a waveguide in order to emit photons in a desired direction and then absorb them at the other end.
Each module is composed of four qubits, which serve as an interface between the waveguide carrying the photons and the larger quantum processors.
The qubits coupled to the waveguide emit and absorb photons, which are then transferred to nearby data qubits.
The researchers use a series of microwave pulses to add energy to a qubit, which then emits a photon. Carefully controlling the phase of those pulses enables a quantum interference effect that allows them to emit the photon in either direction along the waveguide. Reversing the pulses in time enables a qubit in another module any arbitrary distance away to absorb the photon.
“Pitching and catching photons enables us to create a ‘quantum interconnect’ between nonlocal quantum processors, and with quantum interconnects comes remote entanglement,” explains Oliver.
“Generating remote entanglement is a crucial step toward building a large-scale quantum processor from smaller-scale modules. Even after that photon is gone, we have a correlation between two distant, or ‘nonlocal,’ qubits. Remote entanglement allows us to take advantage of these correlations and perform parallel operations between two qubits, even though they are no longer connected and may be far apart,” Yankelevich explains.
However, transferring a photon between two modules is not enough to generate remote entanglement. The researchers need to prepare the qubits and the photon so the modules “share” the photon at the end of the protocol.
Generating entanglement
The team did this by halting the photon emission pulses halfway through their duration. In quantum mechanical terms, the photon is both retained and emitted. Classically, one can think that half-a-photon is retained and half is emitted.
Once the receiver module absorbs that “half-photon,” the two modules become entangled.
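In the idealized, lossless picture, this shared “half-photon” leaves the sender (call it A) and receiver (B) in an equal superposition of “A kept the photon” and “B absorbed it” — a maximally entangled state of the two modules (the labels and the exact relative phase here are illustrative):

```latex
|\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|1\rangle_A |0\rangle_B + |0\rangle_A |1\rangle_B\big)
```

Neither module alone holds the excitation; only the pair, jointly, does — which is what the subsequent operations on the two distant qubits exploit.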
But as the photon travels, joints, wire bonds, and connections in the waveguide distort the photon and limit the absorption efficiency of the receiving module.
To generate remote entanglement with high enough fidelity, or accuracy, the researchers needed to maximize how often the photon is absorbed at the other end.
“The challenge in this work was shaping the photon appropriately so we could maximize the absorption efficiency,” Almanakly says.
They used a reinforcement learning algorithm to “predistort” the photon. The algorithm optimized the protocol pulses in order to shape the photon for maximal absorption efficiency.
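A cartoon of the optimization problem: treat the waveguide as something that smears the photon’s temporal envelope, score a candidate pulse by the squared overlap of the arriving envelope with the receiver’s ideal absorption mode, and search the pulse parameters for the best score. Everything below — the Gaussian envelope, the exponential channel response, and the grid search standing in for the reinforcement-learning optimizer — is an invented stand-in for the real microwave control problem.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1000)
dt = t[1] - t[0]

def normalize(f):
    return f / np.sqrt(np.sum(np.abs(f) ** 2) * dt)

# Receiver's ideal absorption mode (assumed, for illustration)
target = normalize(np.exp(-((t - 5.0) / 1.0) ** 2))

def channel(f):
    """Toy waveguide: smears the envelope with an exponential response."""
    kernel = np.exp(-t / 0.8)
    return np.convolve(f, kernel)[: len(t)] * dt

def efficiency(center, width):
    """Squared overlap of the (normalized) arriving envelope with the
    receiver's ideal mode -- a proxy for absorption efficiency."""
    emitted = normalize(np.exp(-((t - center) / width) ** 2))
    arriving = normalize(channel(emitted))
    return float(np.sum(target * arriving) * dt) ** 2

naive = efficiency(5.0, 1.0)  # emit the ideal shape with no predistortion
best = max(
    (efficiency(c, w), c, w)
    for c in np.arange(3.5, 5.75, 0.25)
    for w in np.arange(0.4, 1.45, 0.1)
)
print(f"naive: {naive:.3f}, predistorted: {best[0]:.3f}")
```

In this toy channel the naive pulse loses overlap mainly to the delay and skew the channel adds; the search recovers much of it by emitting earlier, which is the spirit of “predistortion.”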
When they implemented this optimized absorption protocol, they were able to show photon absorption efficiency greater than 60 percent.
This absorption efficiency is high enough to prove that the resulting state at the end of the protocol is entangled, a major milestone in this demonstration.
“We can use this architecture to create a network with all-to-all connectivity. This means we can have multiple modules, all along the same bus, and we can create remote entanglement among any pair of our choosing,” Yankelevich says.
In the future, they could improve the absorption efficiency by optimizing the path over which the photons propagate, perhaps by integrating modules in 3D instead of having a superconducting wire connecting separate microwave packages. They could also make the protocol faster so there are fewer chances for errors to accumulate.
“In principle, our remote entanglement generation protocol can also be expanded to other kinds of quantum computers and bigger quantum internet systems,” Almanakly says.
This work was funded, in part, by the U.S. Army Research Office, the AWS Center for Quantum Computing, and the U.S. Air Force Office of Scientific Research.
Professor Emeritus Lee Grodzins, pioneer in nuclear physics, dies at 98
An MIT faculty member for 40 years, Grodzins performed groundbreaking studies of the weak interaction, led in detection technology, and co-founded the Union of Concerned Scientists.
Nuclear physicist and MIT Professor Emeritus Lee Grodzins died on March 6 at his home in the Maplewood Senior Living Community at Weston, Massachusetts. He was 98.
Grodzins was a pioneer in nuclear physics research. He was perhaps best known for the highly influential experiment determining the helicity of the neutrino, which led to a key understanding of what's known as the weak interaction. He was also the founder of Niton Corp. and the nonprofit Cornerstones of Science, and was a co-founder of the Union of Concerned Scientists.
He retired in 1999 after serving as an MIT physics faculty member for 40 years. As a member of the Laboratory for Nuclear Science (LNS), he initiated the relativistic heavy-ion physics program. He published over 170 scientific papers and held 64 U.S. patents.
“Lee was a very good experimental physicist, especially with his hands making gadgets,” says Heavy Ion Group and Francis L. Friedman Professor Emeritus Wit Busza PhD ’64. “His enthusiasm for physics spilled into his enthusiasm for how physics was taught in our department.”
Industrious son of immigrants
Grodzins was born July 10, 1926, in Lowell, Massachusetts, the middle child of Eastern European Jewish immigrants David and Taube Grodzins. He grew up in Manchester, New Hampshire. His two sisters were Ethel Grodzins Romm, journalist, author, and businesswoman who later ran his company, Niton Corp.; and Anne Lipow, who became a librarian and library science expert.
His father, who ran a gas station and a used-tire business, died when Lee was 15. To help support his family, Lee sold newspapers, a business he grew into the second-largest newspaper distributor in Manchester.
At 17, Grodzins attended the University of New Hampshire, graduating in less than three years with a degree in mechanical engineering. However, he decided to be a physicist after disagreeing with a textbook that used the word “never.”
“I was pretty good in math and was undecided about my future,” Grodzins said in a 1958 New York Daily News article. “It wasn’t until my senior year that I unexpectedly realized I wanted to be a physicist. I was reading a physics text one day when suddenly this sentence hit me: ‘We will never be able to see the atom.’ I said to myself that that was as stupid a statement as I’d ever read. What did he mean ‘never!’ I got so annoyed that I started devouring other writers to see what they had to say and all at once I found myself in the midst of modern physics.”
He wrote his senior thesis on “Atomic Theory.”
After graduating in 1946, he approached potential employers by saying, “I have a degree in mechanical engineering, but I don’t want to be one. I’d like to be a physicist, and I’ll take anything in that line at whatever you will pay me.”
He accepted an offer from General Electric’s Research Laboratory in Schenectady, New York, where he worked in fundamental nuclear research building cosmic ray detectors, while also pursuing his master’s degree at Union College. “I had a ball,” he recalled. “I stayed in the lab 12 hours a day. They had to kick me out at night.”
Brookhaven
After earning his PhD from Purdue University in 1954, he spent a year as a lecturer there, before becoming a researcher at Brookhaven National Laboratory (BNL) with Maurice Goldhaber’s nuclear physics group, probing the properties of the nuclei of atoms.
In 1957, he, Goldhaber, and Andy Sunyar used a simple table-top experiment to measure the helicity of the neutrino. Helicity characterizes the alignment of a particle’s intrinsic spin vector with that particle’s direction of motion.
The research provided new support for the discovery, made just a year earlier and honored with the 1957 Nobel Prize in Physics, that the principle of conservation of parity — accepted for 30 years as a basic law of nature — is not inviolable, and does not apply to the behavior of some subatomic particles.
The experiment took about 10 days to complete, followed by a month of checks and rechecks. They submitted a letter on “Helicity of Neutrinos” to Physical Review on Dec. 11, 1957, and a week later, Goldhaber told a Stanford University audience that the neutrino is left-handed, meaning that the weak interaction was probably one force. This work proved crucial to our understanding of the weak interaction, the force that governs nuclear beta decay.
“It was a real upheaval in our understanding of physics,” says Grodzins’ longtime colleague Stephen Steadman. The breakthrough was commemorated in 2008, with a conference at BNL on “Neutrino Helicity at 50.”
Steadman also recalls Grodzins’ story about one night at Brookhaven, when he was working on an experiment that involved a radioactive source inside a chamber. Lee noticed that a vacuum pump wasn’t working, so he tinkered with it a while before heading home. Later that night, he got a call from the lab. “They said, ‘Don’t go anywhere!’” recalls Steadman. It turned out the radiation source in the lab had exploded, and the pump had filled the lab with radiation. “They were actually able to trace his radioactive footprints from the lab to his home,” says Steadman. “He kind of shrugged it off.”
The MIT years
Grodzins joined the faculty of MIT in 1959, where he taught physics for four decades. He inherited Robley Evans’ Radiation Laboratory, which used radioactive sources to study properties of nuclei, and led the Relativistic Heavy Ion Group, which was affiliated with the LNS.
In 1972, he launched a program at BNL using the then-new Tandem Van de Graaff accelerator to study interactions of heavy ions with nuclei. “As the BNL tandem was getting commissioned, we started a program, together with Doug Cline at the University of Rochester, to investigate Coulomb-nuclear interference,” says Steadman, a senior research scientist at LNS. “The experimental results were decisive but somewhat controversial at the time. We clearly detected the interference effect.” The experimental work was published in Physical Review Letters.
Grodzins’ team looked for superheavy elements using the Lawrence Berkeley National Laboratory SuperHILAC, investigated heavy-ion fission and other heavy-ion reactions, and explored heavy-ion transfer reactions. The latter research showed in precise detail the underlying statistical behavior of the transfer of nucleons between the heavy-ion projectile and target, using a theoretical model of surprisal analysis developed by Rafi Levine and his graduate student. Recalls Steadman, “These results were both outstanding in their precision and initially controversial in interpretation.”
In 1985, he carried out the first computed axial tomography experiment using synchrotron radiation, and in 1987 his group was involved in the first run of Experiment 802, a collaborative experiment with about 50 scientists from around the world that studied relativistic heavy-ion collisions at Brookhaven. MIT’s responsibility was to build the drift chambers and design the bending magnet for the experiment.
“He made significant contributions to the initial design and construction phases, where his broad expertise and knowledge of small local companies with unique capabilities was invaluable,” says George Stephans, physics senior lecturer and senior research scientist at MIT.
Professor emeritus of physics Rainer Weiss ’55, PhD ’62 recalls working on a Mössbauer experiment to establish whether photons changed frequency as they traveled through bright regions. “It was an idea held by some to explain the ‘apparent’ redshift with distance in our universe,” says Weiss. “We became great friends in the process, and of course, amateur cosmologists.”
“Lee was great for developing good ideas,” Steadman says. “He would get started on one idea, but then get distracted with another great idea. So, it was essential that the team would carry these experiments to their conclusion: they would get the papers published.”
MIT mentor
Before retiring in 1999, Lee supervised 21 doctoral dissertations and was an early advocate for women graduate students in physics. He also oversaw the undergraduate thesis of Sidney Altman, who decades later won the Nobel Prize in Chemistry. For many years, he helped teach the Junior Lab course required of all undergraduate physics majors. His favorite student evaluation, however, came from a different course, billed as offering a “superficial overview” of nuclear physics. The comment read: “This physics course was not superficial enough for me.”
“He really liked to work with students,” says Steadman. “They could always go into his office anytime. He was a very supportive mentor.”
“He was a wonderful mentor, avuncular and supportive of all of us,” agrees Karl van Bibber ’72, PhD ’76, now at the University of California at Berkeley. He recalls handing his first paper to Grodzins for comments. “I was sitting at my desk expecting a pat on the head. Quite to the contrary, he scowled, threw the manuscript on my desk and scolded, ‘Don't even pick up a pencil again until you've read a Hemingway novel!’ … The next version of the paper had an average sentence length of about six words; we submitted it, and it was immediately accepted by Physical Review Letters.”
Van Bibber has since taught the “Grodzins Method” in his graduate seminars on professional orientation for scientists and engineers, including passing around a few anthologies of Hemingway short stories. “I gave a copy of one of the dog-eared anthologies to Lee at his 90th birthday lecture, which elicited tears of laughter.”
Early in George Stephans’ MIT career as a research scientist, he worked with Grodzins’ newly formed Relativistic Heavy Ion Group. “Despite his wide range of interests, he paid close attention to what was going on and was always very supportive of us, especially the students. He was a very encouraging and helpful mentor to me, as well as being always pleasant and engaging to work with. He actively pushed to get me promoted to principal research scientist relatively early, in recognition of my contributions.”
“He always seemed to know a lot about everything, but never acted condescending,” says Stephans. “He seemed happiest when he was deeply engaged digging into the nitty-gritty details of whatever unique and unusual work one of these companies was doing for us.”
Al Lazzarini ’74, PhD ’78 recalls Grodzins’ investigations using proton-induced X-ray emission (PIXE) as a sensitive tool to measure trace elemental amounts. “Lee was a superb physicist,” says Lazzarini. “He gave an enthralling seminar on an investigation he had carried out on a lock of Napoleon’s hair, looking for evidence of arsenic poisoning.”
Robert Ledoux ’78, PhD ’81, a former professor of physics at MIT who is now program director of the U.S. Advanced Research Projects Agency with the Department of Energy, worked with Grodzins as both a student and colleague. “He was a ‘nuclear physicist’s physicist’ — a superb experimentalist who truly loved building and performing experiments in many areas of nuclear physics. His passion for discovery was matched only by his generosity in sharing knowledge.”
The research funding crisis starting in 1969 led Grodzins to become concerned that his graduate students would not find careers in the field. He helped form the Economic Concerns Committee of the American Physical Society, for which he produced a major report on the “Manpower Crisis in Physics” (1971), and presented his results before the American Association for the Advancement of Science, and at the Karlsruhe National Lab in Germany.
Grodzins played a significant role in bringing the first Chinese graduate students to MIT in the 1970s and 1980s.
One of the students he welcomed was Huan Huang PhD ’90. “I am forever grateful to him for changing my trajectory,” says Huang, now at the University of California at Los Angeles. “His unwavering support and ‘go do it’ attitude inspired us to explore physics at the beginning of a new research field of high energy heavy ion collisions in the 1980s. I have been trying to be a ‘nice professor’ like Lee all my academic career.”
Even after he left MIT, Grodzins remained available to his former students. “Many tell me how much my lifestyle has influenced them, which is gratifying,” Grodzins once said of his students. “They’ve been a central part of my life. My biography would be grossly incomplete without them.”
Niton Corp. and post-MIT work
Grodzins liked what he called “tabletop experiments,” like the one used in his 1957 neutrino experiment, in which a few people built a device that could fit on a tabletop. “He didn’t enjoy working in large collaborations, which nuclear physics embraced,” says Steadman. “I think that’s why he ultimately left MIT.”
In the 1980s, he launched what amounted to a new career in detection technology. In 1987, after developing a scanning proton-induced X-ray microspectrometer for measuring elemental concentrations in air, he founded the Niton Corp., which developed, manufactured, and marketed test kits and instruments to measure radon gas in buildings, detect lead-based paint, and perform other nondestructive testing. (“Niton” is an obsolete term for radon.)
“At the time, there was a big scare about radon in New England, and he thought he could develop a radon detector that was inexpensive and easy to use,” says Steadman. “His radon detector became a big business.”
He later developed devices to detect explosives, drugs, and other contraband in luggage and cargo containers. Handheld devices used X-ray fluorescence to determine the composition of metal alloys and to detect other materials. The handheld XL Spectrum Analyzer could detect buried and surface lead on painted surfaces, to protect children living in older homes. Three Niton X-ray fluorescence analyzers earned R&D 100 awards.
“Lee was very technically gifted,” says Steadman.
In 1999, Grodzins retired from MIT and devoted his energies to industry, including directing the R&D group at Niton.
His sister Ethel Grodzins Romm was the president and CEO of Niton, followed by his son Hal. Many of Niton’s employees were MIT graduates. In 2005, he and his family sold Niton to Thermo Fisher Scientific, where Lee remained as a principal scientist until 2010.
In the 1990s, he was vice president of American Science and Engineering, and between the ages of 70 and 90, he was awarded three patents a year.
“Curiosity and creativity don’t stop after a certain age,” Grodzins said to UNH Today. “You decide you know certain things, and you don’t want to change that thinking. But thinking outside the box really means thinking outside your box.”
“I miss his enthusiasm,” says Steadman. “I saw him a couple of years ago, and he was still on the move, always ready to launch a new effort, and he was always trying to pull you into those efforts.”
A better world
In the 1950s, Grodzins and other Brookhaven scientists joined the American delegation at the Second United Nations International Conference on the Peaceful Uses of Atomic Energy in Geneva.
Early on, he joined several Manhattan Project alums at MIT in their concern about the consequences of nuclear bombs. In Vietnam-era 1969, Grodzins co-founded the Union of Concerned Scientists, which calls for scientific research to be directed away from military technologies and toward solving pressing environmental and social problems. He served as its chair in 1970 and 1972. He also chaired committees for the American Physical Society and the National Research Council.
As vice president for advanced products at American Science and Engineering, which made homeland security equipment, he became a consultant on airport security, especially following the 9/11 attacks. As an expert witness, he testified at the celebrated trial to determine whether Pan Am was negligent for the bombing of Flight 103 over Lockerbie, Scotland, and he took part in a weapons inspection trip on the Black Sea. He also was frequently called as an expert witness on patent cases.
In 1999, Grodzins founded the nonprofit Cornerstones of Science, a public library initiative to improve public engagement with science. Based originally at the Curtis Memorial Library in Brunswick, Maine, Cornerstones now partners with libraries in Maine, Arizona, Texas, Massachusetts, North Carolina, and California. Among its initiatives is a program that has helped supply telescopes to libraries and astronomy clubs around the country.
“He had a strong sense of wanting to do good for mankind,” says Steadman.
Awards
Grodzins authored more than 170 technical papers and held more than 60 U.S. patents. His numerous accolades included being named a Guggenheim Fellow in 1964 and 1971, and a senior von Humboldt fellow in 1980. He was a fellow of the American Physical Society and the American Academy of Arts and Sciences, and received an honorary doctor of science degree from Purdue University in 1998.
In 2021, the Denver X-ray Conference gave Grodzins the Birks Award in X-ray Fluorescence Spectrometry, for having introduced “a handheld XRF unit which expanded analysis to in-field applications such as environmental studies, archeological exploration, mining, and more.”
Personal life
One evening in 1955, shortly after starting his work at Brookhaven, Grodzins decided to take a walk and explore the BNL campus. He found just one building with its lights on and doors open, so he went in. Inside, a group was rehearsing a play. He was immediately smitten with one of the actors, Lulu Anderson, a young biologist. “I joined the acting company, and a year and a half later, Lulu and I were married,” Grodzins recalled. They were happily married for 62 years, until Lulu’s death in 2019.
They raised two sons, Dean, now of Cambridge, Massachusetts, and Hal Grodzins, who lives in Maitland, Florida. Lee and Lulu owned a succession of beloved huskies, most of them named after physicists.
After living in Arlington, Massachusetts, the Grodzins family moved to Lexington, Massachusetts, in 1972 and bought a second home a few years later in Brunswick, Maine. Starting around 1990, Lee and Lulu spent every weekend, year-round, in Brunswick. In both places, they were avid supporters of their local libraries, museums, theaters, symphonies, botanical gardens, public radio, and TV stations.
Grodzins took his family along to conferences, fellowships, and other invitations. They all lived in Denmark for two sabbaticals, in 1964-65 and 1971-72, while Lee worked at the Niels Bohr Institute. They also traveled together to China for a month in 1975, and for two months in 1980. As part of the latter trip, they were among the first American visitors to Tibet since the 1940s. Lee and Lulu also traveled the world, from Antarctica to the Galapagos Islands to Greece.
His homes had basement workshops well-stocked with tools. His sons enjoyed a playroom he built for them in their Arlington home. He also once constructed his own high-fidelity record player, patched his old Volvo with fiberglass, changed his own oil, and put on the winter tires and chains himself. He was an early adopter of the home computer.
“His work in science and technology was part of a general love of gadgets and of fixing and making things,” his son, Dean, wrote in a Facebook post.
Lee is survived by Dean, his wife, Nora Nykiel Grodzins, and their daughter, Lily; and by Hal and his wife Cathy Salmons.
A remembrance and celebration for Lee Grodzins is planned for this summer. Donations in his name may be made to Cornerstones of Science.
Drawing inspiration from ancient chemical reactions

By studying cellular enzymes that perform difficult reactions, MIT chemist Dan Suess hopes to find new solutions to global energy challenges.

To help find solutions to the planet’s climate crisis, MIT Associate Professor Daniel Suess is looking to Earth’s ancient past.
Early in the evolution of life, cells gained the ability to perform reactions such as transferring electrons from one atom to another. These reactions, which help cells to build carbon-containing or nitrogen-containing compounds, rely on specialized enzymes with clusters of metal atoms.
By learning more about how those enzymes work, Suess hopes to eventually devise new ways to perform fundamental chemical reactions that could help capture carbon from the atmosphere or enable the development of alternative fuels.
“We have to find some way of rewiring society so that we are not just relying on vast reserves of reduced carbon, fossil fuels, and burning them using oxygen,” he says. “What we’re doing is we’re looking backward, up to a billion years before oxygen and photosynthesis came along, to see if we can identify the chemical principles that underlie processes that aren’t reliant on burning carbon.”
His work could also shed light on other important cellular reactions such as the conversion of nitrogen gas to ammonia, which is also the key step in the production of synthetic fertilizer.
Exploring chemistry
Suess, who grew up in Spokane, Washington, became interested in math at a young age, but ended up majoring in chemistry and English at Williams College, which he chose based on its appealing selection of courses.
“I was interested in schools that were more focused on the liberal arts model, Williams being one of those. And I just thought they had the right combination of really interesting courses and freedom to take classes that you wanted,” he says. “I went in not expecting to major in chemistry, but then I really enjoyed my chemistry classes and chemistry teachers.”
In his classes, he explored all aspects of chemistry and found them all appealing.
“I liked organic chemistry, because there’s an emphasis on making things. And I liked physical chemistry because there was an attempt to have at least a semiquantitative way of understanding the world. Physical chemistry describes some of the most important developments in science in the 20th century, including quantum mechanics and its application to atoms and molecules,” he says.
After college, Suess came to MIT for graduate school and began working with chemistry professor Jonas Peters, who had recently arrived from Caltech. A couple of years later, Peters ended up moving back to Caltech, and Suess followed, continuing his PhD thesis research on new ways to synthesize inorganic molecules.
His project focused on molecules that consist of a metal such as iron or cobalt bound to a nonmetallic group known as a ligand. Within these molecules, the metal atom typically pulls in electrons from the ligand. However, the molecules Suess worked on were designed so that the metal would give up its own electrons to the ligand. Such molecules can be used to speed up difficult reactions that require breaking very strong bonds, like the nitrogen-nitrogen triple bond in N2.
During a postdoc at the University of California at Davis, Suess switched gears and began working on biomolecules — specifically, metalloproteins. These are protein enzymes that have metals tucked into their active sites, where they help to catalyze reactions.
Suess studied how cells synthesize the metal-containing active sites in these proteins, focusing on an enzyme called iron-iron hydrogenase. This enzyme, found mainly in anaerobic bacteria, including some that live in the human digestive tract, catalyzes reactions involving the transfer of protons and electrons. Specifically, it can combine two protons and two electrons to make H2, or can perform the reverse reaction, breaking H2 into protons and electrons.
“That enzyme is really important because a lot of cellular metabolic processes either generate excess electrons or require excess electrons. If you generate excess electrons, they have to go somewhere, and one solution is to put them on protons to make H2,” Suess says.
Global scale reactions
Since joining the MIT faculty in 2017, Suess has continued his investigations of metalloproteins and the reactions that they catalyze.
“We’re interested in global-scale chemical reactions, meaning they’re occurring on the microscopic scale but happening on a huge scale,” he says. “They impact the planet and have determined what the molecular composition of the biosphere is and what it’s going to be.”
Photosynthesis, which emerged around 2.4 billion years ago, has had the biggest impact on the atmosphere, filling it with oxygen, but Suess focuses on reactions that cells began using even earlier, when the atmosphere lacked oxygen and cell metabolism could not be driven by respiration.
Many of these ancient reactions, which are still used by cells today, involve a class of metalloproteins called iron-sulfur proteins. These enzymes, which are found in all kingdoms of life, are involved in catalyzing many of the most difficult reactions that occur in cells, such as forming carbon radicals and converting nitrogen to ammonia.
To study the metalloenzymes that catalyze these reactions, Suess’s lab takes two different approaches. In one, they create synthetic versions of the proteins that may contain fewer metal atoms, which allows for greater control over the composition and shape of the protein, making them easier to study.
In another approach, they use the natural version of the protein but substitute one of the metal atoms with an isotope that makes it easier to use spectroscopic techniques to analyze the protein’s structure.
“That allows us to study both the bonding in the resting state of an enzyme, as well as the bonding and structures of reaction intermediates that you can only characterize spectroscopically,” Suess says.
Understanding how enzymes perform these reactions could help researchers find new ways to remove carbon dioxide from the atmosphere by combining it with other molecules to create larger compounds. Finding alternative ways to convert nitrogen gas to ammonia could also have a big impact on greenhouse gas emissions, as the Haber-Bosch process now used to synthesize fertilizer requires huge amounts of energy.
“Our primary focus is on understanding the natural world, but I think that as we’re looking at different ways to wire biological catalysts to do efficient reactions that impact society, we need to know how that wiring works. And so that is what we’re trying to figure out,” he says.
At the core of problem-solving

Stuart Levine ’97, director of MIT’s BioMicro Center, keeps departmental researchers at the forefront of systems biology.

As director of the MIT BioMicro Center (BMC), Stuart Levine ’97 wholeheartedly embraces the variety of challenges he tackles each day. One of over 50 core facilities providing shared resources across the Institute, the BMC supplies integrated high-throughput genomics, single-cell and spatial transcriptomic analysis, bioinformatics support, and data management to researchers across MIT. The BioMicro Center is part of the Integrated Genomics and Bioinformatics core facility at the Robert A. Swanson (1969) Biotechnology Center.
“Every day is a different day,” Levine says. “There are always new problems, new challenges, and the technology is continuing to move at an incredible pace.” After more than 15 years in the role, Levine is grateful that the breadth of his work allows him to seek solutions for so many scientific problems.
By combining bioinformatics expertise with biotech relationships and a focus on maximizing the impact of the center’s work, Levine brings the broad range of skills required to match the diversity of questions asked by investigators in MIT’s Department of Biology and Koch Institute for Integrative Cancer Research, as well as researchers across MIT’s campus.
Expansive expertise
Biology first appealed to Levine as an MIT undergraduate taking class 7.012 (Introduction to Biology), thanks to the charisma of instructors Professor Eric Lander and Amgen Professor Emerita Nancy Hopkins. After earning his PhD in biochemistry from Harvard University and Massachusetts General Hospital, Levine returned to MIT for postdoctoral work with Professor Richard Young, core member at the Whitehead Institute for Biomedical Research.
In the Young Lab, Levine found his calling as an informaticist and ultimately decided to stay at MIT. Here, his work has a wide-ranging impact: the BMC serves over 100 labs annually, from the Computer Science and Artificial Intelligence Laboratory and the departments of Brain and Cognitive Sciences; Earth, Atmospheric and Planetary Sciences; Chemical Engineering; Mechanical Engineering; and, of course, Biology.
“It’s a fun way to think about science,” Levine says, noting that he applies his knowledge and streamlines workflows across these many disciplines by “truly and deeply understanding the instrumentation complexities.”
This depth of understanding and experience allows Levine to lead what longtime colleague Professor Laurie Boyer describes as “a state-of-the-art core that has served so many faculty and provides key training opportunities for all.” He and his team work with cutting-edge, finely tuned scientific instruments that generate vast amounts of bioinformatics data, then use powerful computational tools to store, organize, and visualize the data collected, contributing to research on topics ranging from host-parasite interactions to proposed tools for NASA’s planetary protection policy.
Staying ahead of the curve
With a scientist directing the core, the BMC aims to enable researchers to “take the best advantage of systems biology methods,” says Levine. These methods use advanced research technologies to do things like prepare large sets of DNA and RNA for sequencing, read DNA and RNA sequences from single cells, and localize gene expression to specific tissues.
Levine presents a lightweight, clear rectangle about the width of a cell phone and the length of a VHS cassette.
“This is a flow cell that can do 20 human genomes to clinical significance in two days — 8 billion reads,” he says. “There are newer instruments with several times that capacity available as well.”
The vast majority of research labs do not need that kind of power, but the Institute, and its researchers as a whole, certainly do. Levine emphasizes that “the ROI [return on investment] for supporting shared resources is extremely high because whatever support we receive impacts not just one lab, but all of the labs we support. Keeping MIT’s shared resources at the bleeding edge of science is critical to our ability to make a difference in the world.”
To stay at the edge of research technology, Levine maintains company relationships, while his scientific understanding allows him to educate researchers on what is possible in the space of modern systems biology. Altogether, these attributes enable Levine to help his researcher clients “push the limits of what is achievable.”
The man behind the machines
Each core facility operates like a small business, offering specialized services to a diverse client base across academic and industry research, according to Amy Keating, Jay A. Stein (1968) Professor of Biology and head of the Department of Biology. She explains that “the PhD-level education and scientific and technological expertise of MIT’s core directors are critical to the success of life science research at MIT and beyond.”
While Levine clearly has the education and expertise, the success of the BMC “business” is also in part due to his tenacity and focus on results for the core’s users.
He was recognized by the Institute with the MIT Infinite Mile Award in 2015 and the MIT Excellence Award in 2017, for which one nominator wrote, “What makes Stuart’s leadership of the BMC truly invaluable to the MIT community is his unwavering dedication to producing high-quality data and his steadfast persistence in tackling any type of troubleshooting needed for a project. These attributes, fostered by Stuart, permeate the entire culture of the BMC.”
“He puts researchers and their research first, whether providing education, technical services, general tech support, or networking to collaborators outside of MIT,” says Noelani Kamelamela, lab manager of the BMC. “It’s all in service to users and their projects.”
Tucked into the far back corner of the BMC lab space, Levine’s office is a fitting symbol of his humility. While his guidance and knowledge sit at the center of what elevates the BMC beyond technical support, he himself sits away from the spotlight, resolutely supporting others to advance science.
“Stuart has always been the person, often behind the scenes, that pushes great science, ideas, and people forward,” Boyer says. “His knowledge and advice have truly allowed us to be at the leading edge in our work.”
To the brain, Esperanto and Klingon appear the same as English or Mandarin

A new study finds natural and invented languages elicit similar responses in the brain’s language-processing network.

Within the human brain, a network of regions has evolved to process language. These regions are consistently activated whenever people listen to their native language or any language in which they are proficient.
A new study by MIT researchers finds that this network also responds to languages that are completely invented, such as Esperanto, which was created in the late 1800s as a way to promote international communication, and even to languages made up for television shows such as “Star Trek” and “Game of Thrones.”
To study how the brain responds to these artificial languages, MIT neuroscientists convened nearly 50 speakers of these languages over a single weekend. Using functional magnetic resonance imaging (fMRI), the researchers found that when participants listened to a constructed language in which they were proficient, the same brain regions lit up as those activated when they processed their native language.
“We find that constructed languages very much recruit the same system as natural languages, which suggests that the key feature that is necessary to engage the system may have to do with the kinds of meanings that both kinds of languages can express,” says Evelina Fedorenko, an associate professor of neuroscience at MIT, a member of MIT’s McGovern Institute for Brain Research and the senior author of the study.
The findings help to define some of the key properties of language, the researchers say, and suggest that it’s not necessary for languages to have naturally evolved over a long period of time or to have a large number of speakers.
“It helps us narrow down this question of what a language is, and do it empirically, by testing how our brain responds to stimuli that might or might not be language-like,” says Saima Malik-Moraleda, an MIT postdoc and the lead author of the paper, which appears this week in the Proceedings of the National Academy of Sciences.
Convening the conlang community
Unlike natural languages, which evolve within communities and are shaped over time, constructed languages, or “conlangs,” are typically created by one person who decides what sounds will be used, how to label different concepts, and what the grammatical rules are.
Esperanto, the most widely spoken conlang, was created in 1887 by L.L. Zamenhof, who intended it to be used as a universal language for international communication. Currently, it is estimated that around 60,000 people worldwide are proficient in Esperanto.
In previous work, Fedorenko and her students have found that computer programming languages, such as Python — another type of invented language — do not activate the brain network that is used to process natural language. Instead, people who read computer code rely on the so-called multiple demand network, a brain system that is often recruited for difficult cognitive tasks.
Fedorenko and others have also investigated how the brain responds to other stimuli that share features with language, including music and nonverbal communication such as gestures and facial expressions.
“We spent a lot of time looking at all these various kinds of stimuli, finding again and again that none of them engage the language-processing mechanisms,” Fedorenko says. “So then the question becomes, what is it that natural languages have that none of those other systems do?”
That led the researchers to wonder if artificial languages like Esperanto would be processed more like programming languages or more like natural languages. Similar to programming languages, constructed languages are created by an individual for a specific purpose, without natural evolution within a community. However, unlike programming languages, both conlangs and natural languages can be used to convey meanings about the state of the external world or the speaker’s internal state.
To explore how the brain processes conlangs, the researchers invited speakers of Esperanto and several other constructed languages to MIT for a weekend conference in November 2022. The other languages included Klingon (from “Star Trek”), Na’vi (from “Avatar”), and two languages from “Game of Thrones” (High Valyrian and Dothraki). For all of these languages, there are texts available for people who want to learn the language, and for Esperanto, Klingon, and High Valyrian, there is even a Duolingo app available.
“It was a really fun event where all the communities came to participate, and over a weekend, we collected all the data,” says Malik-Moraleda, who co-led the data collection effort with former MIT postbac Maya Taliaferro, now a PhD student at New York University.
During that event, which also featured talks from several of the conlang creators, the researchers used fMRI to scan 44 conlang speakers as they listened to sentences from the constructed language in which they were proficient. The creators of these languages — who are co-authors on the paper — helped construct the sentences that were presented to the participants.
While in the scanner, the participants also either listened to or read sentences in their native language, and performed some nonlinguistic tasks for comparison. The researchers found that when people listened to a conlang, the same language regions in the brain were activated as when they listened to their native language.
Common features
The findings help to identify some of the key features that are necessary to recruit the brain’s language processing areas, the researchers say. One of the main characteristics driving language responses seems to be the ability to convey meanings about the interior and exterior world — a trait that is shared by natural and constructed languages, but not programming languages.
“All of the languages, both natural and constructed, express meanings related to inner and outer worlds. They refer to objects in the world, to properties of objects, to events,” Fedorenko says. “Whereas programming languages are much more similar to math. A programming language is a symbolic generative system that allows you to express complex meanings, but it’s a self-contained system: The meanings are highly abstract and mostly relational, and not connected to the real world that we experience.”
Some other characteristics of natural languages, which are not shared by constructed languages, don’t seem to be necessary to generate a response in the language network.
“It doesn’t matter whether the language is created and shaped over time by a community of speakers, because these constructed languages are not,” Malik-Moraleda says. “It doesn’t matter how old they are, because conlangs that are just a decade old engage the same brain regions as natural languages that have been around for many hundreds of years.”
To further refine the features of language that activate the brain’s language network, Fedorenko’s lab is now planning to study how the brain responds to a conlang called Lojban, which was created by the Logical Language Group in the 1990s and was designed to prevent ambiguity of meanings and promote more efficient communication.
The research was funded by MIT’s McGovern Institute for Brain Research, Brain and Cognitive Sciences Department, the Simons Center for the Social Brain, the Frederick A. and Carole J. Middleton Career Development Professorship, and the U.S. National Institutes of Health.
A dive into the “almost magical” potential of photonic crystals
In MIT’s 2025 Killian Lecture, physicist John Joannopoulos recounts highlights from a career at the vanguard of photonics research and innovation.
When you’re challenging a century-old assumption, you’re bound to meet a bit of resistance. That’s exactly what John Joannopoulos and his group at MIT faced in 1998, when they put forth a new theory on how materials can be made to bend light in entirely new ways.
“Because it was such a big difference in what people expected, we wrote down the theory for this, but it was very difficult to get it published,” Joannopoulos told a capacity crowd in MIT’s Huntington Hall on Friday, as he delivered MIT’s James R. Killian, Jr. Faculty Achievement Award Lecture.
Joannopoulos’ theory offered a new take on a type of material known as a one-dimensional photonic crystal. Photonic crystals are built from alternating layers of materials with different refractive indices, and the arrangement of those layers determines how incoming light is reflected or absorbed.
In 1887, the English physicist John William Strutt, better known as Lord Rayleigh, established a theory for how light should bend through a similar structure composed of multiple refractive layers. Rayleigh predicted that such a structure could reflect light, but only if that light arrives from a very specific angle. In other words, such a structure could act as a mirror for light shining from one direction only.
More than a century later, Joannopoulos and his group found that, in fact, quite the opposite was true. They proved in theoretical terms that, if a one-dimensional photonic crystal were made from layers of materials with certain “refractive indices,” bending light to different degrees, then the crystal as a whole should be able to reflect light coming from any and all directions. Such an arrangement could act as a “perfect mirror.”
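The flavor of that kind of calculation can be sketched with the standard transfer-matrix method for multilayer stacks (a generic textbook technique, not the authors’ code). The refractive indices and layer counts below are illustrative assumptions, and for simplicity the sketch treats only normal incidence at the design wavelength, whereas the perfect-mirror result concerns all angles:

```python
# Minimal transfer-matrix sketch: normal-incidence reflectance of a
# one-dimensional photonic crystal (a quarter-wave stack). The indices
# n_hi, n_lo and the pair counts are illustrative, not from the paper.
import numpy as np

def stack_reflectance(n_hi, n_lo, n_pairs, n_in=1.0, n_sub=1.0):
    """Reflectance at the design wavelength, where every layer is a
    quarter-wave thick (phase thickness delta = pi/2)."""
    m = np.eye(2, dtype=complex)
    for _ in range(n_pairs):
        for n in (n_hi, n_lo):
            # Characteristic matrix of one quarter-wave layer:
            # [[cos d, i sin d / n], [i n sin d, cos d]] with d = pi/2.
            m = m @ np.array([[0, 1j / n], [1j * n, 0]])
    b = m[0, 0] + m[0, 1] * n_sub
    c = m[1, 0] + m[1, 1] * n_sub
    r = (n_in * b - c) / (n_in * b + c)   # amplitude reflection coefficient
    return abs(r) ** 2

# Reflectance climbs toward 1 (a near-perfect mirror at this angle)
# as more layer pairs are added.
reflectances = [stack_reflectance(2.3, 1.5, n) for n in (1, 4, 8)]
```

Each added pair of high/low-index layers multiplies the mismatch seen by the incoming wave, which is why even a modest index contrast yields near-total reflection after a handful of periods.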
The idea was a huge departure from what scientists had long assumed, and as such, when Joannopoulos submitted the research for peer review, it took some time for the journal, and the community, to come around. But he and his students kept at it, ultimately verifying the theory with experiments.
That work led to a high-profile publication, which helped the group turn the idea into a device: Using the principles that they laid out, they effectively fabricated a perfect mirror and folded it into a tube to form a hollow-core fiber. When they shone light through, the inside of the fiber reflected all the light, trapping it entirely in the core as the light pinged through the fiber. In 2000, the team launched a startup to further develop the fiber into a flexible, highly precise, and minimally invasive “photonics scalpel,” which has since been used in hundreds of thousands of medical procedures, including surgeries of the brain and spine.
“And get this: We have estimated more than 500,000 procedures across hospitals in the U.S. and abroad,” Joannopoulos proudly stated, to appreciative applause.
Joannopoulos is the recipient of the 2024-2025 James R. Killian, Jr. Faculty Achievement Award, and is the Francis Wright Davis Professor of Physics and director of the Institute for Soldier Nanotechnologies at MIT. In response to an audience member who asked what motivated him in the face of initial skepticism, he replied, “You have to persevere if you believe what you have is correct.”
Immeasurable impact
The Killian Award was established in 1971 to honor MIT’s 10th president, James Killian. Each year, a member of the MIT faculty is honored with the award in recognition of their extraordinary professional accomplishments.
Joannopoulos received his PhD from the University of California at Berkeley in 1974, then immediately joined MIT’s physics faculty. In introducing his lecture, Mary Fuller, professor of literature and chair of the MIT faculty, noted: “If you do the math, you’ll know he just celebrated 50 years at MIT.” Throughout that remarkable tenure, Fuller noted, Joannopoulos has had a profound impact on generations of MIT students.
“We recognize you as a leader, a visionary scientist, beloved mentor, and a believer in the goodness of people,” Fuller said. “Your legendary impact at MIT and the broader scientific community is immeasurable.”
Bending light
In his lecture, which he titled “Working at the Speed of Light,” Joannopoulos took the audience through the basic concepts underlying photonic crystals, and the ways in which he and others have shown that these materials can bend and twist incoming light in a controlled way.
As he described it, photonic crystals are “artificial materials” that can be designed to influence the properties of photons in a way that’s similar to how physical features in semiconductors affect the flow of electrons. In the case of semiconductors, such materials have a specific “band gap,” or a range of energies in which electrons cannot exist.
In the 1990s, Joannopoulos and others wondered whether the same effects could be realized for optical materials, to intentionally reflect, or keep out, some kinds of light while letting others through. And even more intriguing: Could a single material be designed such that incoming light pinballs away from certain regions in a material in predesigned paths?
“The answer was a resounding yes,” he said.
Joannopoulos described the excitement within the emerging field by quoting an editor from the journal Nature, who wrote at the time: “If only it were possible to make materials in which electromagnetic waves cannot propagate at certain frequencies, all kinds of almost-magical things would be possible.”
Joannopoulos and his group at MIT began in earnest to elucidate the ways in which light interacts with matter and air. The team worked first with two-dimensional photonic crystals made from a horizontal matrix-like pattern of silicon dots surrounded by air. Silicon has a high refractive index, meaning it can greatly bend or reflect light, while air has a much lower index. Joannopoulos predicted that the silicon could be patterned to ping light away, forcing it to travel through the air in predetermined paths.
In multiple works, he and his students showed through theory and experiments that they could design photonic crystals to, for instance, bend incoming light by 90 degrees and force light to circulate only at the edges of a crystal under an applied magnetic field.
“Over the years there have been quite a few examples we’ve discovered of very anomalous, strange behavior of light that cannot exist in normal objects,” he said.
In 1998, after showing that light can be reflected from all directions from a stacked, one-dimensional photonic crystal, he and his students rolled the crystal structure into a fiber, which they tested in a lab. In a video that Joannopoulos played for the audience, a student carefully aimed the end of the long, flexible fiber at a sheet of the same material as the fiber’s casing. As light pumped through the multilayered photonic lining of the fiber and out the other end, the student used the light to slowly etch a smiley face design in the sheet, drawing laughter from the crowd.
As the video demonstrated, although the light was intense enough to melt the material of the fiber’s coating, it was nevertheless entirely contained within the fiber’s core, thanks to the multilayered design of its photonic lining. What’s more, the light was focused enough to make precise patterns when it shone out of the fiber.
“We had originally developed this [optical fiber] as a military device,” Joannopoulos said. “But then the obvious choice to use it for the civilian population was quite clear.”
“Believing in the goodness of people and what they can do”
He and others co-founded Omniguide in 2000, which has since grown into a medical device company that develops and commercializes minimally invasive surgical tools such as the fiber-based “photonics scalpel.” In illustrating the fiber’s impact, Joannopoulos played a news video highlighting the fiber’s use in performing precise and effective neurosurgery. The optical scalpel has also been used to perform procedures in laryngology, head and neck surgery, and gynecology, along with brain and spinal surgeries.
Omniguide is one of several startups that Joannopoulos has helped found, along with Luminus Devices, Inc., WiTricity Corporation, Typhoon HIL, Inc., and Lightelligence. He is author or co-author of over 750 refereed journal articles, four textbooks, and 126 issued U.S. patents. He has earned numerous recognitions and awards, including his election to the National Academy of Sciences and the American Academy of Arts and Sciences.
The Killian Award citation states: “Professor Joannopoulos has been a consistent role model not just in what he does, but in how he does it. … Through all these individuals he has impacted — not to mention their academic descendants — Professor Joannopoulos has had a vast influence on the development of science in recent decades.”
At the end of the talk, Yoel Fink, Joannopoulos’ former student and frequent collaborator, who is now professor of materials science, asked Joannopoulos how, particularly in current times, he has been able to “maintain such a positive and optimistic outlook, of humans and human nature.”
“It’s a matter of believing in the goodness of people and what they can do, what they accomplish, and giving them an environment to work in where they feel extremely comfortable,” Joannopoulos offered. “That includes creating a sense of trust between the faculty and the students, which is key. That helps enormously.”
Evidence that 40Hz gamma stimulation promotes brain health is expanding
A decade of studies provides a growing evidence base that increasing the power of the brain’s gamma rhythms could help fight Alzheimer’s, and perhaps other neurological diseases.
A decade after scientists in The Picower Institute for Learning and Memory at MIT first began testing whether sensory stimulation of the brain’s 40Hz “gamma” frequency rhythms could treat Alzheimer’s disease in mice, a growing evidence base supporting the idea that it can improve brain health — in humans as well as animals — has emerged from the work of labs all over the world. A new open-access review article in PLOS Biology describes the state of the research so far and presents some of the fundamental and clinical questions now at the forefront of noninvasive gamma stimulation.
“As we’ve made all our observations, many other people in the field have published results that are very consistent,” says Li-Huei Tsai, Picower professor of neuroscience at MIT, director of MIT’s Aging Brain Initiative, and senior author of the new review, with postdoc Jung Park. “People have used many different ways to induce gamma including sensory stimulation, transcranial alternating current stimulation, or transcranial magnetic stimulation, but the key is delivering stimulation at 40 hertz. They all see beneficial effects.”
A decade of discovery at MIT
Starting with a paper in Nature in 2016, a collaboration led by Tsai has produced a series of studies showing that 40Hz stimulation via light, sound, the two combined, or tactile vibration reduces hallmarks of Alzheimer’s pathology such as amyloid and tau proteins, prevents neuron death, decreases synapse loss, and sustains memory and cognition in various Alzheimer’s mouse models. The collaboration’s investigations of the underlying mechanisms that produce these benefits have so far identified specific cellular and molecular responses in many brain cell types including neurons, microglia, astrocytes, oligodendrocytes, and the brain’s blood vessels. Last year, for instance, the lab reported in Nature that 40Hz audio and visual stimulation induced interneurons in mice to increase release of the peptide VIP, prompting increased clearance of amyloid from brain tissue via the brain’s glymphatic “plumbing” system.
Meanwhile, at MIT and at the MIT spinoff company Cognito Therapeutics, phase II clinical studies have shown that people with Alzheimer’s exposed to 40Hz light and sound experienced a significant slowing of brain atrophy and improvements on some cognitive measures, compared to untreated controls. Cognito, which has also measured significant preservation of the brain’s “white matter” in volunteers, has been conducting a pivotal, nationwide phase III clinical trial of sensory gamma stimulation for more than a year.
“Neuroscientists often lament that it is a great time to have AD [Alzheimer’s disease] if you are a mouse,” Park and Tsai wrote in the review. “Our ultimate goal, therefore, is to translate GENUS discoveries into a safe, accessible, and noninvasive therapy for AD patients.” The MIT team often refers to 40Hz stimulation as “GENUS” for Gamma Entrainment Using Sensory Stimulation.
A growing field
As Tsai’s collaboration, which includes MIT colleagues Edward Boyden and Emery N. Brown, has published its results, many other labs have produced studies adding to the evidence that various methods of noninvasive gamma sensory stimulation can combat Alzheimer’s pathology. Among many examples cited in the new review, in 2024 a research team in China independently corroborated that 40Hz sensory stimulation increases glymphatic fluid flows in mice. In another example, a Harvard Medical School-based team in 2022 showed that 40Hz gamma stimulation using transcranial alternating current stimulation significantly reduced the burden of tau in three out of four human volunteers. And in another study involving more than 100 people, researchers in Scotland in 2023 used audio and visual gamma stimulation (at 37.5Hz) to improve memory recall.
Open questions
Amid the growing number of publications describing preclinical studies with mice and clinical trials with people, open questions remain, Tsai and Park acknowledge. The MIT team and others are still exploring the cellular and molecular mechanisms that underlie GENUS’s effects. Tsai says her lab is looking at other neuropeptide and neuromodulatory systems to better understand the cascade of events linking sensory stimulation to the observed cellular responses. Meanwhile, the nature of how some cells, such as microglia, respond to gamma stimulation and how that affects pathology remains unclear, Tsai adds.
Even with a national phase III clinical trial underway, it is still important to investigate these fundamental mechanisms, Tsai says, because new insights into how noninvasive gamma stimulation affects the brain could improve and expand its therapeutic potential.
“The more we understand the mechanisms, the more we will have good ideas about how to further optimize the treatment,” Tsai says. “And the more we understand its action and the circuits it affects, the more we will know beyond Alzheimer’s disease what other neurological disorders will benefit from this.”
Indeed, the review points to studies at MIT and other institutions providing at least some evidence that GENUS might be able to help with Parkinson’s disease, stroke, anxiety, epilepsy, and the cognitive side effects of chemotherapy and conditions that reduce myelin, such as multiple sclerosis. Tsai’s lab has been studying whether it can help with Down syndrome as well.
The open questions may help define the next decade of GENUS research.
QS World University Rankings rates MIT No. 1 in 11 subjects for 2025
The Institute also ranks second in seven subject areas.
QS World University Rankings has placed MIT in the No. 1 spot in 11 subject areas for 2025, the organization announced today.
The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; Physics and Astronomy; and Statistics and Operational Research.
MIT also placed second in seven subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Business and Management Studies; Chemistry; Earth and Marine Sciences; and Economics and Econometrics.
For 2025, universities were evaluated in 55 specific subjects and five broader subject areas. MIT was ranked No. 1 in the broader subject area of Engineering and Technology and No. 2 in Natural Sciences.
Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings are based on research quality and accomplishments, academic reputation, and graduate employment.
MIT has been ranked as the No. 1 university in the world by QS World University Rankings for 13 straight years.
Look around, and you’ll see it everywhere: the way trees form branches, the way cities divide into neighborhoods, the way the brain organizes into regions. Nature loves modularity — a limited number of self-contained units that combine in different ways to perform many functions. But how does this organization arise? Does it follow a detailed genetic blueprint, or can these structures emerge on their own?
A new study from MIT Professor Ila Fiete suggests a surprising answer.
In findings published Feb. 18 in Nature, Fiete, an associate investigator in the McGovern Institute for Brain Research and director of the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, reports that a mathematical model called peak selection can explain how modules emerge without strict genetic instructions. Her team’s findings, which apply to brain systems and ecosystems, help explain how modularity occurs across nature, no matter the scale.
Joining two big ideas
“Scientists have debated how modular structures form. One hypothesis suggests that various genes are turned on at different locations to begin or end a structure. This explains how insect embryos develop body segments, with genes turning on or off at specific concentrations of a smooth chemical gradient in the insect egg,” says Fiete, who is the senior author of the paper. Mikail Khona PhD '25, a former graduate student and K. Lisa Yang ICoN Center graduate fellow, and postdoc Sarthak Chandra also led the study.
Another idea, inspired by mathematician Alan Turing, suggests that a structure could emerge from competition — small-scale interactions can create repeating patterns, like the spots on a cheetah or the ripples in sand dunes.
Both ideas work well in some cases, but fail in others. The new research suggests that nature need not pick one approach over the other. The authors propose a simple mathematical principle called peak selection, showing that when a smooth gradient is paired with local interactions that are competitive, modular structures emerge naturally. “In this way, biological systems can organize themselves into sharp modules without detailed top-down instruction,” says Chandra.
Modular systems in the brain
The researchers tested their idea on grid cells, which play a critical role in spatial navigation as well as the storage of episodic memories. Grid cells fire in a repeating triangular pattern as animals move through space, but they don’t all work at the same scale — they are organized into distinct modules, each responsible for mapping space at slightly different resolutions.
No one knows how these modules form, but Fiete’s model shows that gradual variations in cellular properties along one dimension in the brain, combined with local neural interactions, could explain the entire structure. The grid cells naturally sort themselves into distinct groups with clear boundaries, without external maps or genetic programs telling them where to go. “Our work explains how grid cell modules could emerge. The explanation tips the balance toward the possibility of self-organization. It predicts that there might be no gene or intrinsic cell property that jumps when the grid cell scale jumps to another module,” notes Khona.
Modular systems in nature
The same principle applies beyond neuroscience. Imagine a landscape where temperatures and rainfall vary gradually over a space. You might expect species to be spread, and also to vary, smoothly over this region. But in reality, ecosystems often form species clusters with sharp boundaries — distinct ecological “neighborhoods” that don’t overlap.
Fiete’s study suggests why: local competition, cooperation, and predation between species interact with the global environmental gradients to create natural separations, even when the underlying conditions change gradually. This phenomenon can be explained using peak selection — and suggests that the same principle that shapes brain circuits could also be at play in forests and oceans.
A self-organizing world
One of the researchers’ most striking findings is that modularity in these systems is remarkably robust. Change the size of the system, and the number of modules stays the same — they just scale up or down. That means a mouse brain and a human brain could use the same fundamental rules to form their navigation circuits, just at different sizes.
The model also makes testable predictions. If it’s correct, grid cell modules should follow simple spacing ratios. In ecosystems, species distributions should form distinct clusters even without sharp environmental shifts.
Fiete notes that their work adds another conceptual framework to biology. “Peak selection can inform future experiments, not only in grid cell research but across developmental biology.”
Study: The ozone hole is healing, thanks to global reduction of CFCs
New results show with high statistical confidence that ozone recovery is going strong.
A new MIT-led study confirms that the Antarctic ozone layer is healing, as a direct result of global efforts to reduce ozone-depleting substances.
Scientists including the MIT team have observed signs of ozone recovery in the past. But the new study is the first to show, with high statistical confidence, that this recovery is due primarily to the reduction of ozone-depleting substances, versus other influences such as natural weather variability or increased greenhouse gas emissions to the stratosphere.
“There’s been a lot of qualitative evidence showing that the Antarctic ozone hole is getting better. This is really the first study that has quantified confidence in the recovery of the ozone hole,” says study author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry. “The conclusion is, with 95 percent confidence, it is recovering. Which is awesome. And it shows we can actually solve environmental problems.”
The new study appears today in the journal Nature. Graduate student Peidong Wang from the Solomon group in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) is the lead author. His co-authors include Solomon and EAPS Research Scientist Kane Stone, along with collaborators from multiple other institutions.
Roots of ozone recovery
Within the Earth’s stratosphere, ozone is a naturally occurring gas that acts as a sort of sunscreen, protecting the planet from the sun’s harmful ultraviolet radiation. In 1985, scientists discovered a “hole” in the ozone layer over Antarctica that opened up during the austral spring, between September and December. This seasonal ozone depletion was suddenly allowing UV rays to filter down to the surface, leading to skin cancer and other adverse health effects.
In 1986, Solomon, who was then working at the National Oceanic and Atmospheric Administration (NOAA), led expeditions to the Antarctic, where she and her colleagues gathered evidence that quickly confirmed the ozone hole’s cause: chlorofluorocarbons, or CFCs — chemicals that were then used in refrigeration, air conditioning, insulation, and aerosol propellants. When CFCs drift up into the stratosphere, they can break down ozone under certain seasonal conditions.
The following year, those revelations led to the drafting of the Montreal Protocol — an international treaty that aimed to phase out the production of CFCs and other ozone-depleting substances, in hopes of healing the ozone hole.
In 2016, Solomon led a study reporting key signs of ozone recovery. The ozone hole seemed to be shrinking with each year, especially in September, the time of year when it opens up. Still, these observations were qualitative. The study left large uncertainties about how much of this recovery was due to concerted efforts to reduce ozone-depleting substances, or whether the shrinking ozone hole was instead a result of other “forcings,” such as year-to-year weather variability from El Niño, La Niña, and the polar vortex.
“While detecting a statistically significant increase in ozone is relatively straightforward, attributing these changes to specific forcings is more challenging,” says Wang.
Anthropogenic healing
In their new study, the MIT team took a quantitative approach to identify the cause of Antarctic ozone recovery. The researchers borrowed a method from the climate change community, known as “fingerprinting,” which was pioneered by Klaus Hasselmann, who was awarded the Nobel Prize in Physics in 2021 for the technique. In the context of climate, fingerprinting refers to a method that isolates the influence of specific climate factors, apart from natural, meteorological noise. Hasselmann applied fingerprinting to identify, confirm, and quantify the anthropogenic fingerprint of climate change.
Solomon and Wang looked to apply the fingerprinting method to identify another anthropogenic signal: the effect of human reductions in ozone-depleting substances on the recovery of the ozone hole.
“The atmosphere has really chaotic variability within it,” Solomon says. “What we’re trying to detect is the emerging signal of ozone recovery against that kind of variability, which also occurs in the stratosphere.”
The researchers started with simulations of the Earth’s atmosphere and generated multiple “parallel worlds,” or simulations of the same global atmosphere, under different starting conditions. For instance, they ran simulations under conditions that assumed no increase in greenhouse gases or ozone-depleting substances. Under these conditions, any changes in ozone should be the result of natural weather variability. They also ran simulations with only increasing greenhouse gases, as well as only decreasing ozone-depleting substances.
They compared these simulations to see how ozone in the Antarctic stratosphere changed with season and across different altitudes in response to each set of starting conditions. From these simulations, they mapped out the times and altitudes at which ozone recovered, month by month over several decades, and identified a key “fingerprint,” or pattern, of ozone recovery that was specifically attributable to declining ozone-depleting substances.
The team then looked for this fingerprint in actual satellite observations of the Antarctic ozone hole from 2005 to the present day. They found that, over time, the fingerprint that they identified in simulations became clearer and clearer in observations. In 2018, the fingerprint was at its strongest, and the team could say with 95 percent confidence that ozone recovery was due mainly to reductions in ozone-depleting substances.
“After 15 years of observational records, we see this signal to noise with 95 percent confidence, suggesting there’s only a very small chance that the observed pattern similarity can be explained by variability noise,” Wang says. “This gives us confidence in the fingerprint. It also gives us confidence that we can solve environmental problems. What we can learn from ozone studies is how different countries can swiftly follow these treaties to decrease emissions.”
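The logic of that detection step can be sketched with synthetic data (every number and pattern below is invented for illustration; this is not the study’s analysis): project each year’s noisy field onto a known spatial fingerprint, estimate the trend of that projection, and compare it against a null distribution built from variability-only simulations:

```python
# Hedged sketch of fingerprint detection on synthetic data. The spatial
# pattern, trend amplitude, and noise level are all illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_years = 100, 20

fingerprint = rng.standard_normal(n_grid)
fingerprint /= np.linalg.norm(fingerprint)   # unit-norm spatial pattern

years = np.arange(n_years)
# "Observed" fields: the fingerprint emerging linearly out of weather-like noise.
fields = 0.3 * years[:, None] * fingerprint + rng.standard_normal((n_years, n_grid))

proj = fields @ fingerprint                  # yearly projection onto the pattern
obs_slope = np.polyfit(years, proj, 1)[0]    # estimated trend of the signal

# Null distribution: the same statistic computed on variability-only worlds
# (no forced signal), mimicking the "parallel world" control simulations.
null_slopes = np.array([
    np.polyfit(years, rng.standard_normal((n_years, n_grid)) @ fingerprint, 1)[0]
    for _ in range(1000)
])
detected = obs_slope > np.quantile(null_slopes, 0.95)
```

Detection is declared when the observed trend exceeds the 95th percentile of trends produced by internal variability alone, which is the sense in which the study’s 95 percent confidence statement is framed.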
If the trend continues, and the fingerprint of ozone recovery grows stronger, Solomon anticipates that soon there will be a year, here and there, when the ozone layer stays entirely intact. And eventually, the ozone hole should stay shut for good.
“By something like 2035, we might see a year when there’s no ozone hole depletion at all in the Antarctic. And that will be very exciting for me,” she says. “And some of you will see the ozone hole go away completely in your lifetimes. And people did that.”
This research was supported, in part, by the National Science Foundation and NASA.
Study suggests new molecular strategy for treating fragile X syndrome
Enhancing activity of a specific component of neurons’ “NMDA” receptors normalized protein synthesis, neural activity, and seizure susceptibility in the hippocampus of fragile X lab mice.
Building on more than two decades of research, a study by MIT neuroscientists at The Picower Institute for Learning and Memory reports a new way to treat pathology and symptoms of fragile X syndrome, the most common genetically caused autism spectrum disorder. The team showed that augmenting a novel type of neurotransmitter signaling reduced hallmarks of fragile X in mouse models of the disorder.
The new approach, described in Cell Reports, works by targeting a specific molecular subunit of “NMDA” receptors that they discovered plays a key role in how neurons synthesize proteins to regulate their connections, or “synapses,” with other neurons in brain circuits. The scientists showed that in fragile X model mice, increasing the receptor’s activity caused neurons in the hippocampus region of the brain to increase molecular signaling that suppressed excessive bulk protein synthesis, leading to other key improvements.
Setting the table
“One of the things I find most satisfying about this study is that the pieces of the puzzle fit so nicely into what had come before,” says study senior author Mark Bear, Picower Professor in MIT’s Department of Brain and Cognitive Sciences. Former postdoc Stephanie Barnes, now a lecturer at the University of Glasgow, is the study’s lead author.
Bear’s lab studies how neurons continually edit their circuit connections, a process called “synaptic plasticity” that scientists believe to underlie the brain’s ability to adapt to experience and to form and process memories. These studies led to two discoveries that set the table for the newly published advance. In 2011, Bear’s lab showed that fragile X and another autism disorder, tuberous sclerosis (Tsc), represented two ends of a continuum of a kind of protein synthesis in the same neurons. In fragile X there was too much. In Tsc there was too little. When lab members crossbred fragile X and Tsc mice, in fact, their offspring emerged healthy, as the mutations of each disorder essentially canceled each other out.
More recently, Bear’s lab showed a different dichotomy. It has long been understood from their influential work in the 1990s that the flow of calcium ions through NMDA receptors can trigger a form of synaptic plasticity called “long-term depression” (LTD). But in 2020, they found that another mode of signaling by the receptor — one that did not require ion flow — altered protein synthesis in the neuron and caused a physical shrinking of the dendritic “spine” structures housing synapses.
For Bear and Barnes, these studies raised the prospect that if they could pinpoint how NMDA receptors affect protein synthesis they might identify a new mechanism that could be manipulated therapeutically to address fragile X (and perhaps tuberous sclerosis) pathology and symptoms. That would be an important advance to complement ongoing work Bear’s lab has done to correct fragile X protein synthesis levels via another receptor called mGluR5.
Receptor dissection
In the new study, Bear and Barnes’ team decided to use the non-ionic effect on spine shrinkage as a readout to dissect how NMDARs signal protein synthesis for synaptic plasticity in hippocampus neurons. They hypothesized that the dichotomy of ionic effects on synaptic function and non-ionic effects on spine structure might derive from the presence of two distinct components of NMDA receptors: “subunits” called GluN2A and GluN2B. To test that, they used genetic manipulations to knock out each of the subunits. When they did so, they found that knocking out “2A” or “2B” could eliminate LTD, but that only knocking out 2B affected spine size. Further experiments clarified that 2A and 2B are required for LTD, but that spine shrinkage solely depends on the 2B subunit.
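The knockout results above can be summarized schematically. The sketch below is purely illustrative bookkeeping of the reported dependencies (LTD requires both subunits; spine shrinkage requires only 2B), not a biophysical model:

```python
def plasticity_outcomes(has_2a: bool, has_2b: bool) -> dict:
    """Which plasticity readouts remain possible for a given genotype.

    Schematic summary of the reported findings: LTD requires both the
    GluN2A and GluN2B subunits, while spine shrinkage depends on GluN2B alone.
    """
    return {
        "LTD": has_2a and has_2b,     # both subunits required
        "spine_shrinkage": has_2b,    # only the 2B subunit required
    }

# Wild type: both readouts intact.
assert plasticity_outcomes(True, True) == {"LTD": True, "spine_shrinkage": True}
# 2A knockout: LTD lost, spine shrinkage preserved.
assert plasticity_outcomes(False, True) == {"LTD": False, "spine_shrinkage": True}
# 2B knockout: both readouts lost.
assert plasticity_outcomes(True, False) == {"LTD": False, "spine_shrinkage": False}
```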
The next task was to resolve how the 2B subunit signals spine shrinkage. A promising possibility was a part of the subunit called the “carboxyterminal domain,” or CTD. So, in a new experiment Bear and Barnes took advantage of a mouse that had been genetically engineered by researchers at the University of Edinburgh so that the 2A and 2B CTDs could be swapped with one another. A telling result was that when the 2B subunit lacked its proper CTD, the effect on spine structure disappeared. The result affirmed that the 2B subunit signals spine shrinkage via its CTD.
Another consequence of replacing the CTD of the 2B subunit was an increase in bulk protein synthesis that resembled findings in fragile X. Conversely, augmenting the non-ionic signaling through the 2B subunit suppressed bulk protein synthesis, reminiscent of Tsc.
Treating fragile X
Putting the pieces together, the findings indicated that augmenting signaling through the 2B subunit might, like introducing the mutation causing Tsc, rescue aspects of fragile X.
Indeed, when the scientists swapped in the CTD of the NMDA receptor’s 2B subunit in fragile X model mice, they found correction not only of the excessive bulk protein synthesis, but also of the altered synaptic plasticity and increased electrical excitability that are hallmarks of the disease. To see if a treatment that targets NMDA receptors might be effective in fragile X, they tried an experimental drug called Glyx-13. This drug binds to the 2B subunit of NMDA receptors to augment signaling. The researchers found that this treatment also normalized protein synthesis and reduced sound-induced seizures in the fragile X mice.
The team now hypothesizes, based on another prior study in the lab, that the beneficial effect to fragile X mice of the 2B subunit’s CTD signaling is that it shifts the balance of protein synthesis away from an all-too-efficient translation of short messenger RNAs (which leads to excessive bulk protein synthesis) toward a lower-efficiency translation of longer messenger RNAs.
Bear says he does not know what the prospects are for Glyx-13 as a clinical drug, but he noted that there are some drugs in clinical development that specifically target the 2B subunit of NMDA receptors.
In addition to Bear and Barnes, the study’s other authors are Aurore Thomazeau, Peter Finnie, Max Heinreich, Arnold Heynen, Noboru Komiyama, Seth Grant, Frank Menniti, and Emily Osterweil.
The FRAXA Foundation, The Picower Institute for Learning and Memory, The Freedom Together Foundation, and the National Institutes of Health funded the study.
Breakfast of champions: MIT hosts top young scientists

At an MIT-led event at AJAS/AAAS, researchers connect with MIT faculty, Nobel laureates, and industry leaders to share their work, gain mentorship, and explore future careers in science.

On Feb. 14, some of the nation’s most talented high school researchers convened in Boston for the annual American Junior Academy of Science (AJAS) conference, held alongside the American Association for the Advancement of Science (AAAS) annual meeting. As a highlight of the event, MIT once again hosted its renowned “Breakfast with Scientists,” offering students a unique opportunity to connect with leading scientific minds from around the world.
The AJAS conference began with an opening reception at the MIT Schwarzman College of Computing, where professor of biology and chemistry Catherine Drennan delivered the keynote address, welcoming 162 high school students from 21 states. Delegates were selected through state Academy of Science competitions, earning the chance to share their work and connect with peers and professionals in science, technology, engineering, and mathematics (STEM).
Over breakfast, students engaged with distinguished scientists, including MIT faculty, Nobel laureates, and industry leaders, discussing research, career paths, and the broader impact of scientific discovery.
Amy Keating, MIT biology department head, sat at a table with students ranging from high school juniors to college sophomores. The group engaged in an open discussion about life as a scientist at a leading institution like MIT. One student expressed concern about the competitive nature of innovative research environments, prompting Keating to reassure them, saying, “MIT has a collaborative philosophy rather than a competitive one.”
At another table, Nobel laureate and former MIT postdoc Gary Ruvkun shared a lighthearted moment with students, laughing at a TikTok video they had created to explain their science fair project. The interaction reflected the innate curiosity and excitement that drives discovery at all stages of a scientific career.
Donna Gerardi, executive director of the National Association of Academies of Science, highlighted the significance of the AJAS program. “These students are not just competing in science fairs; they are becoming part of a larger scientific community. The connections they make here can shape their careers and future contributions to science.”
Alongside the breakfast, AJAS delegates participated in a variety of enriching experiences, including laboratory tours, conference sessions, and hands-on research activities.
“I am so excited to be able to discuss my research with experts and get some guidance on the next steps in my academic trajectory,” said Andrew Wesel, a delegate from California.
A defining feature of the AJAS experience was its emphasis on mentorship and collaboration rather than competition. Delegates were officially inducted as lifetime Fellows of the American Junior Academy of Science at the conclusion of the conference, joining a distinguished network of scientists and researchers.
Sponsored by the MIT School of Science and School of Engineering, the breakfast underscored MIT’s longstanding commitment to fostering young scientific talent. Faculty and researchers took the opportunity to encourage students to pursue careers in STEM fields, providing insights into the pathways available to them.
“It was a joy to spend time with such passionate students,” says Kristala Prather, head of the Department of Chemical Engineering at MIT. “One of the brightest moments for me was sitting next to a young woman who will be joining MIT in the fall — I just have to convince her to study ChemE!”
Seeing more in expansion microscopy

New methods light up lipid membranes and let researchers see sets of proteins inside cells with high resolution.

In biology, seeing can lead to understanding, and researchers in Professor Edward Boyden’s lab at the McGovern Institute for Brain Research are committed to bringing life into sharper focus. With a pair of new methods, they are expanding the capabilities of expansion microscopy — a high-resolution imaging technique the group introduced in 2015 — so researchers everywhere can see more when they look at cells and tissues under a light microscope.
“We want to see everything, so we’re always trying to improve it,” says Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT. “A snapshot of all life, down to its fundamental building blocks, is really the goal.” Boyden is also a Howard Hughes Medical Institute investigator and a member of the Yang Tan Collective at MIT.
With new ways of staining their samples and processing images, users of expansion microscopy can now see vivid outlines of the shapes of cells in their images and pinpoint the locations of many different proteins inside a single tissue sample with resolution that far exceeds that of conventional light microscopy. These advances, both reported in open-access form in the journal Nature Communications, enable new ways of tracing the slender projections of neurons and visualizing spatial relationships between molecules that contribute to health and disease.
Expansion microscopy uses a water-absorbing hydrogel to physically expand biological tissues. After a tissue sample has been permeated by the hydrogel, it is hydrated. The hydrogel swells as it absorbs water, preserving the relative locations of molecules in the tissue as it gently pulls them away from one another. As a result, crowded cellular components appear separate and distinct when the expanded tissue is viewed under a light microscope. The approach, which can be performed using standard laboratory equipment, has made super-resolution imaging accessible to most research teams.
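The gain in resolution follows from simple arithmetic: physically expanding the specimen divides the microscope’s diffraction limit by the linear expansion factor, measured in the sample’s original coordinates. The ~300 nm optical limit and ~4.5x expansion factor below are illustrative assumptions (the latter is typical of the original 2015 protocol), not figures from this article:

```python
def effective_resolution_nm(diffraction_limit_nm: float, expansion_factor: float) -> float:
    """Effective resolution in pre-expansion (biological) coordinates.

    Expanding the tissue by a linear factor F means features separated by
    d nanometers in the original sample sit F*d apart under the microscope,
    so the optical limit is effectively divided by F.
    """
    return diffraction_limit_nm / expansion_factor

# Illustrative values only: ~300 nm diffraction limit, ~4.5x linear expansion.
res = effective_resolution_nm(300.0, 4.5)
print(f"~{res:.0f} nm effective resolution")  # roughly 67 nm, well below the optical limit
```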
Since first developing expansion microscopy, Boyden and his team have continued to enhance the method — increasing its resolution, simplifying the procedure, devising new features, and integrating it with other tools.
Visualizing cell membranes
One of the team’s latest advances is a method called ultrastructural membrane expansion microscopy (umExM), which they described in the Feb. 12 issue of Nature Communications. With it, biologists can use expansion microscopy to visualize the thin membranes that form the boundaries of cells and enclose the organelles inside them. These membranes, built mostly of molecules called lipids, have been notoriously difficult to densely label in intact tissues for imaging with light microscopy. Now, researchers can use umExM to study cellular ultrastructure and organization within tissues.
Tay Shin SM ’20, PhD ’23, a former graduate student in Boyden’s lab and a J. Douglas Tan Fellow in the Tan-Yang Center for Autism Research at MIT, led the development of umExM. “Our goal was very simple at first: Let’s label membranes in intact tissue, much like how an electron microscope uses osmium tetroxide to label membranes to visualize the membranes in tissue,” he says. “It turns out that it’s extremely hard to achieve this.”
The team first needed to design a label that would make the membranes in tissue samples visible under a light microscope. “We almost had to start from scratch,” Shin says. “We really had to think about the fundamental characteristics of the probe that is going to label the plasma membrane, and then think about how to incorporate them into expansion microscopy.” That meant engineering a molecule that would associate with the lipids that make up the membrane and link it to both the hydrogel used to expand the tissue sample and a fluorescent molecule for visibility.
After optimizing the expansion microscopy protocol for membrane visualization and extensively testing and improving potential probes, Shin found success one late night in the lab. He placed an expanded tissue sample on a microscope and saw sharp outlines of cells.
Because of the high resolution enabled by expansion, the method allowed Boyden’s team to identify even the tiny dendrites that protrude from neurons and clearly see the long extensions of their slender axons. That kind of clarity could help researchers follow individual neurons’ paths within the densely interconnected networks of the brain, the researchers say.
Boyden calls tracing these neural processes “a top priority of our time in brain science.” Such tracing has traditionally relied heavily on electron microscopy, which requires specialized skills and expensive equipment. Shin says that because expansion microscopy uses a standard light microscope, it is far more accessible to laboratories worldwide.
Shin and Boyden point out that users of expansion microscopy can learn even more about their samples when they pair the new ability to reveal lipid membranes with fluorescent labels that show where specific proteins are located. “That’s important, because proteins do a lot of the work of the cell, but you want to know where they are with respect to the cell’s structure,” Boyden says.
One sample, many proteins
To that end, researchers no longer have to choose just a few proteins to see when they use expansion microscopy. With a new method called multiplexed expansion revealing (multiExR), users can now label and see more than 20 different proteins in a single sample. Biologists can use the method to visualize sets of proteins, see how they are organized with respect to one another, and generate new hypotheses about how they might interact.
A key to that new method, reported Nov. 9, 2024, in Nature Communications, is the ability to repeatedly link fluorescently labeled antibodies to specific proteins in an expanded tissue sample, image them, then strip these away and use a new set of antibodies to reveal a new set of proteins. Postdoc Jinyoung Kang fine-tuned each step of this process, assuring tissue samples stayed intact and the labeled proteins produced bright signals in each round of imaging.
After capturing many images of a single sample, Boyden’s team faced another challenge: how to ensure those images were in perfect alignment so they could be overlaid with one another, producing a final picture that showed the precise positions of all of the proteins that had been labeled and visualized one by one.
Expansion microscopy lets biologists visualize some of cells’ tiniest features — but to find the same features over and over again during multiple rounds of imaging, Boyden’s team first needed to home in on a larger structure. “These fields of view are really tiny, and you’re trying to find this really tiny field of view in a gel that’s actually become quite large once you’ve expanded it,” explains Margaret Schroeder, a graduate student in Boyden’s lab who, with Kang, led the development of multiExR.
To navigate to the right spot every time, the team decided to label the blood vessels that pass through each tissue sample and use these as a guide. To enable precise alignment, certain fine details also needed to consistently appear in every image; for this, the team labeled several structural proteins. With these reference points and customized image-processing software, the team was able to integrate all of their images of a sample into one, revealing how proteins that had been visualized separately were arranged relative to one another.
The team used multiExR to look at amyloid plaques — the aberrant protein clusters that notoriously develop in brains affected by Alzheimer’s disease. “We could look inside those amyloid plaques and ask, what’s inside of them? And because we can stain for many different proteins, we could do a high-throughput exploration,” Boyden says. The team chose 23 different proteins to view in their images. The approach revealed some surprises, such as the presence of certain neurotransmitter receptors (AMPARs). “Here’s one of the most famous receptors in all of neuroscience, and there it is, hiding out in one of the most famous molecular hallmarks of pathology in neuroscience,” says Boyden. It’s unclear what role, if any, the receptors play in Alzheimer’s disease — but the finding illustrates how the ability to see more inside cells can expose unexpected aspects of biology and raise new questions for research.
Funding for this work came from MIT, Lisa Yang and Y. Eva Tan, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, the U.S. Army, Cancer Research U.K., the New York Stem Cell Foundation, the U.S. National Institutes of Health, Lore McGovern, Good Ventures, Schmidt Futures, Samsung, MathWorks, the Collamore-Rogers Fellowship, the U.S. National Science Foundation, Alana Foundation USA, the Halis Family Foundation, Lester A. Gimpelson, Donald and Glenda Mattes, David B. Emmes, Thomas A. Stocky, Avni U. Shah, Kathleen Octavio, Good Ventures/Open Philanthropy, and the European Union’s Horizon 2020 program.
Five years, five triumphs in Putnam Math Competition

Undergrads sweep Putnam Fellows for fifth year in a row and continue Elizabeth Lowell Putnam winning streak.

For the fifth time in the history of the annual William Lowell Putnam Mathematical Competition, and for the fifth year in a row, MIT swept all five of the contest’s top spots.
The top five scorers each year are named Putnam Fellows. Senior Brian Liu and juniors Papon Lapate and Luke Robitaille are now three-time Putnam Fellows, sophomore Jiangqi Dai earned his second win, and first-year Qiao Sun earned his first. Each receives a $2,500 award. This is also the fifth time that any school has had all five Putnam Fellows.
MIT’s team also came in first. The team was made up of Lapate, Robitaille, and Sun (in alphabetical order); Lapate and Robitaille were also on last year’s winning team. This is MIT’s ninth first-place win in the past 11 competitions. Teams consist of the three top scorers from each institution. The institution with the first-place team receives a $25,000 award, and each team member receives $1,000.
First-year Jessica Wan was the top-scoring woman, finishing in the top 25, which earned her the $1,000 Elizabeth Lowell Putnam Prize. She is the eighth MIT student to receive this honor since the award was created in 1992. This is the sixth year in a row that an MIT woman has won the prize.
In total, 69 MIT students scored within the top 100. Beyond the top five scorers, MIT took nine of the next 11 spots (each receiving a $1,000 award), and seven of the next nine spots (earning $250 awards). Of the 75 receiving honorable mentions, 48 were from MIT. A total of 3,988 students took the exam in December, including 222 MIT students.
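A quick check using only the counts stated above shows how lopsided the results are: MIT students were roughly one in eighteen exam takers but about two-thirds of every top tier.

```python
# All counts are taken directly from the article's figures.
top_100_mit = 69                      # MIT students in the top 100
honorable_mit, honorable_total = 48, 75   # honorable mentions
takers_mit, takers_total = 222, 3988      # exam participants

print(f"MIT share of top 100: {top_100_mit}%")
print(f"MIT share of honorable mentions: {100 * honorable_mit / honorable_total:.0f}%")   # 64%
print(f"MIT share of all exam takers: {100 * takers_mit / takers_total:.1f}%")            # 5.6%
```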
This exam is considered to be the most prestigious university-level mathematics competition in the United States and Canada.
The Putnam is known for its difficulty: While a perfect score is 120, this year’s top score was 90, and the median was just 2. While many MIT students scored well, the Department of Mathematics is proud of everyone who took the exam, says its head, Professor Michel Goemans.
“Year after year, I am so impressed by the sheer number of students at MIT that participate in the Putnam competition,” Goemans says. “In no other college or university in the world can one find hundreds of students who get a kick out of thinking about math problems. So refreshing!”
Adds Professor Bjorn Poonen, who helped MIT students prepare for the exam this year, “The incredible competition performance is just one manifestation of MIT’s vibrant community of students who love doing math and discussing math with each other, students who through their hard work in this environment excel in ways beyond competitions, too.”
While the annual Putnam Competition is administered to thousands of undergraduate mathematics students across the United States and Canada, in recent years around 70 of its top 100 performers have been MIT students. Since 2000, MIT has placed among the top five teams 23 times.
MIT’s success in the Putnam exam isn’t surprising. MIT’s recent Putnam coaches are four-time Putnam Fellow Bjorn Poonen and three-time Putnam Fellow Yufei Zhao ’10, PhD ’15.
MIT is also a top destination for medalists participating in the International Mathematics Olympiad (IMO) for high school students. Indeed, over the last decade MIT has enrolled almost every American IMO medalist, and more international IMO gold medalists than the universities of any other single country, according to forthcoming research from the Global Talent Fund (GTF), which offers scholarship and training programs for math Olympiad students and coaches.
IMO participation is a strong predictor of future achievement. According to the International Mathematics Olympiad Foundation, about half of Fields Medal winners are IMO alums — but it’s not the only ingredient.
“Recruiting the most talented students is only the beginning. A top-tier university education — with excellent professors, supportive mentors, and an engaging peer community — is key to unlocking their full potential,” says GTF President Ruchir Agarwal. “MIT’s sustained Putnam success shows how the right conditions deliver spectacular results. The catalytic reaction of MIT’s concentration of math talent and the nurturing environment of Building 2 should accelerate advancements in fundamental science for years and decades to come.”
Many MIT mathletes see competitions not only as a way to hone their mathematical aptitude, but also as a way to create a strong sense of community, to help inspire and educate the next generation.
Chris Peterson SM ’13, director of communications and special projects at MIT Admissions and Student Financial Services, points out that many MIT students with competition math experience volunteer to help run programs for K-12 students including HMMT and Math Prize for Girls, and mentor research projects through the Program for Research in Mathematics, Engineering and Science (PRIMES).
Many of the top scorers are also alumni of the PRIMES high school outreach program. Two of this year’s Putnam Fellows, Liu and Robitaille, are PRIMES alumni, as are four of the next top 11, and six out of the next nine winners, along with many of the students receiving honorable mentions. Pavel Etingof, a math professor who is also PRIMES’ chief research advisor, states that among the 25 top winners, 12 (48 percent) are PRIMES alumni.
“We at PRIMES are very proud of our alumni’s fantastic showing at the Putnam Competition,” says PRIMES director Slava Gerovitch PhD ’99. “PRIMES serves as a pipeline of mathematical excellence from high school through undergraduate studies, and beyond.”
Along the same lines, a collaboration between the MIT Department of Mathematics and MISTI-Africa has sent MIT students with Olympiad experience abroad during the Independent Activities Period (IAP) to coach high school students who hope to compete for their national teams.
First-years at MIT also take class 18.A34 (Mathematical Problem Solving), known informally as the Putnam Seminar, not only to hone their Putnam exam skills, but also to make new friends.
“Many people think of math competitions as primarily a way to identify and recognize talent, which of course they are,” says Peterson. “But the community convened by and through these competitions generates educational externalities that collectively exceed the sum of individual accomplishment.”
Math Community and Outreach Officer Michael King also notes the camaraderie that forms around the test.
“My favorite time of the Putnam day is right after the problem session, when the students all jump up, run over to their friends, and begin talking animatedly,” says King, who also took the exam as an undergraduate student. “They cheer each other’s successes, debate problem solutions, commiserate over missed answers, and share funny stories. It’s always amazing to work with the best math students in the world, but the most rewarding aspect is seeing the friendships that develop.”
A full list of the winners can be found on the Putnam website.
An ancient RNA-guided system could simplify delivery of gene editing therapies

The programmable proteins are compact, modular, and can be directed to modify DNA in human cells.

A vast search of natural diversity has led scientists at MIT’s McGovern Institute for Brain Research and the Broad Institute of MIT and Harvard to uncover ancient systems with potential to expand the genome editing toolbox.
These systems, which the researchers call TIGR (Tandem Interspaced Guide RNA) systems, use RNA to guide them to specific sites on DNA. TIGR systems can be reprogrammed to target any DNA sequence of interest, and they have distinct functional modules that can act on the targeted DNA. In addition to its modularity, TIGR is very compact compared to other RNA-guided systems, like CRISPR, which is a major advantage for delivering it in a therapeutic context.
These findings are reported online Feb. 27 in the journal Science.
“This is a very versatile RNA-guided system with a lot of diverse functionalities,” says Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT, who led the research. The TIGR-associated (Tas) proteins that Zhang’s team found share a characteristic RNA-binding component that interacts with an RNA guide that directs it to a specific site in the genome. Some cut the DNA at that site, using an adjacent DNA-cutting segment of the protein. That modularity could facilitate tool development, allowing researchers to swap useful new features into natural Tas proteins.
“Nature is pretty incredible,” says Zhang, who is also an investigator at the McGovern Institute and the Howard Hughes Medical Institute, a core member of the Broad Institute, a professor of brain and cognitive sciences and biological engineering at MIT, and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT. “It’s got a tremendous amount of diversity, and we have been exploring that natural diversity to find new biological mechanisms and harnessing them for different applications to manipulate biological processes,” he says. Previously, Zhang’s team adapted bacterial CRISPR systems into gene editing tools that have transformed modern biology. His team has also found a variety of programmable proteins, both from CRISPR systems and beyond.
In their new work, to find novel programmable systems, the team began by zeroing in on a structural feature of the CRISPR-Cas9 protein that binds to the enzyme’s RNA guide. That is a key feature that has made Cas9 such a powerful tool: “Being RNA-guided makes it relatively easy to reprogram, because we know how RNA binds to other DNA or other RNA,” Zhang explains. His team searched hundreds of millions of biological proteins with known or predicted structures, looking for any that shared a similar domain. To find more distantly related proteins, they used an iterative process: from Cas9, they identified a protein called IS110, which had previously been shown by others to bind RNA. They then zeroed in on the structural features of IS110 that enable RNA binding and repeated their search.
At this point, the search had turned up so many distantly related proteins that the team turned to artificial intelligence to make sense of the list. “When you are doing iterative, deep mining, the resulting hits can be so diverse that they are difficult to analyze using standard phylogenetic methods, which rely on conserved sequence,” explains Guilhem Faure, a computational biologist in Zhang’s lab. With a protein large language model, the team was able to cluster the proteins they had found into groups according to their likely evolutionary relationships. One group stood apart from the rest, and its members were particularly intriguing because they were encoded by genes with regularly spaced repetitive sequences reminiscent of an essential component of CRISPR systems. These were the TIGR-Tas systems.
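The core idea — grouping proteins by embedding similarity when sequences are too divergent for alignment-based phylogenetics — can be sketched in miniature. The article does not specify the team’s actual model, distance measure, or clustering algorithm; the 2-D vectors below are made-up stand-ins for protein language model embeddings, and the greedy single-linkage grouping is purely illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two 2-D vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def cluster(embeddings, threshold=0.95):
    """Greedy single-linkage grouping: a vector joins the first cluster
    that already contains a sufficiently similar member."""
    clusters = []
    for name, vec in embeddings.items():
        for group in clusters:
            if any(cosine(vec, embeddings[m]) >= threshold for m in group):
                group.append(name)
                break
        else:
            clusters.append([name])
    return clusters

toy = {  # hypothetical proteins: two similar pairs and one outlier
    "protA": (1.0, 0.1), "protB": (0.9, 0.15),
    "protC": (0.1, 1.0), "protD": (0.05, 0.9),
    "protE": (-1.0, 0.0),
}
print(cluster(toy))  # → [['protA', 'protB'], ['protC', 'protD'], ['protE']]
```

In practice, real embeddings have hundreds of dimensions and clustering is done with more robust methods, but the principle is the same: nearness in embedding space stands in for evolutionary relatedness when sequence conservation is too weak to detect.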
Zhang’s team discovered more than 20,000 different Tas proteins, mostly occurring in bacteria-infecting viruses. Sequences within each gene’s repetitive region — its TIGR arrays — encode an RNA guide that interacts with the RNA-binding part of the protein. In some, the RNA-binding region is adjacent to a DNA-cutting part of the protein. Others appear to bind to other proteins, which suggests they might help direct those proteins to DNA targets.
Zhang and his team experimented with dozens of Tas proteins, demonstrating that some can be programmed to make targeted cuts to DNA in human cells. As they think about developing TIGR-Tas systems into programmable tools, the researchers are encouraged by features that could make those tools particularly flexible and precise.
They note that CRISPR systems can only be directed to segments of DNA that are flanked by short motifs known as PAMs (protospacer adjacent motifs). TIGR-Tas proteins, in contrast, have no such requirement. “This means theoretically, any site in the genome should be targetable,” says scientific advisor Rhiannon Macrae. The team’s experiments also show that TIGR systems have what Faure calls a “dual-guide system,” interacting with both strands of the DNA double helix to home in on their target sequences, which should ensure they act only where they are directed by their RNA guide. What’s more, Tas proteins are compact — a quarter of the size of Cas9, on average — making them easier to deliver, which could overcome a major obstacle to therapeutic deployment of gene editing tools.
Excited by their discovery, Zhang’s team is now investigating the natural role of TIGR systems in viruses, as well as how they can be adapted for research or therapeutics. They have determined the molecular structure of one of the Tas proteins they found to work in human cells, and will use that information to guide their efforts to make it more efficient. Additionally, they note connections between TIGR-Tas systems and certain RNA-processing proteins in human cells. “I think there’s more there to study in terms of what some of those relationships may be, and it may help us better understand how these systems are used in humans,” Zhang says.
This work was supported by the Helen Hay Whitney Foundation, Howard Hughes Medical Institute, K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics, Broad Institute Programmable Therapeutics Gift Donors, Pershing Square Foundation, William Ackman, Neri Oxman, the Phillips family, J. and P. Poitras, and the BT Charitable Foundation.
MIT physicists find unexpected crystals of electrons in an ultrathin material

Rhombohedral graphene reveals new exotic interacting electron states.

MIT physicists report the unexpected discovery of electrons forming crystalline structures in a material only billionths of a meter thick. The work adds to a gold mine of discoveries originating from the material, which the same team discovered about three years ago.
In a paper published Jan. 22 in Nature, the team describes how electrons in devices made, in part, of the material can become solid, or form crystals, by changing the voltage applied to the devices when they are kept at a temperature similar to that of outer space. Under the same conditions, they also showed the emergence of two new electronic states that add to work they reported last year showing that electrons can split into fractions of themselves.
The physicists were able to make the discoveries thanks to new custom-made filters for better insulation of the equipment involved in the work. These allowed them to cool their devices to a temperature an order of magnitude colder than they achieved for the earlier results.
The team also observed all of these phenomena using two slightly different “versions” of the material, one composed of five layers of atomically thin carbon; the other composed of four layers. This indicates “that there’s a family of materials where you can get this kind of behavior, which is exciting,” says Long Ju, an assistant professor in the MIT Department of Physics who led the work. Ju is also affiliated with MIT’s Materials Research Laboratory and Research Lab of Electronics.
Referring to the material, known as rhombohedral pentalayer graphene, Ju says, “We found a gold mine, and every scoop is revealing something new.”
New material
Rhombohedral pentalayer graphene is essentially a special form of pencil lead. Pencil lead, or graphite, is composed of graphene, a single layer of carbon atoms arranged in hexagons resembling a honeycomb structure. Rhombohedral pentalayer graphene is composed of five layers of graphene stacked in a specific overlapping order.
Since Ju and colleagues discovered the material, they have tinkered with it by adding layers of another material they thought might accentuate the graphene’s properties, or even produce new phenomena. For example, in 2023 they created a sandwich of rhombohedral pentalayer graphene with “buns” made of hexagonal boron nitride. By applying different voltages, or amounts of electricity, to the sandwich, they discovered three important properties never before seen in natural graphite.
Last year, Ju and colleagues reported yet another important and even more surprising phenomenon: Electrons became fractions of themselves upon applying a current to a new device composed of rhombohedral pentalayer graphene and hexagonal boron nitride. This is important because this “fractional quantum Hall effect” has only been seen in a few systems, usually under very high magnetic fields. The Ju work showed that the phenomenon could occur in a fairly simple material without a magnetic field. As a result, it is called the “fractional quantum anomalous Hall effect” (anomalous indicates that no magnetic field is necessary).
New results
In the current work, the Ju team reports yet more unexpected phenomena from the general rhombohedral graphene/boron nitride system when it is cooled to 30 millikelvins (1 millikelvin is equivalent to -459.668 degrees Fahrenheit). In last year’s paper, Ju and colleagues reported six fractional states of electrons. In the current work, they report discovering two more of these fractional states.
They also found another unusual electronic phenomenon: the integer quantum anomalous Hall effect in a wide range of electron densities. The fractional quantum anomalous Hall effect was understood to emerge in an electron “liquid” phase, analogous to water. In contrast, the new state that the team has now observed can be interpreted as an electron “solid” phase — resembling the formation of electronic “ice” — that can also coexist with the fractional quantum anomalous Hall states when the system’s voltage is carefully tuned at ultra-low temperatures.
One way to think about the relation between the integer and fractional states is to imagine a map created by tuning electric voltages: By tuning the system with different voltages, you can create a “landscape” similar to a river (which represents the liquid-like fractional states) cutting through glaciers (which represent the solid-like integer effect), Ju explains.
Ju notes that his team observed all of these phenomena not only in pentalayer rhombohedral graphene, but also in rhombohedral graphene composed of four layers. This creates a family of materials, and indicates that other “relatives” may exist.
“This work shows how rich this material is in exhibiting exotic phenomena. We’ve just added more flavor to this already very interesting material,” says Zhengguang Lu, a co-first author of the paper. Lu, who conducted the work as a postdoc at MIT, is now on the faculty at Florida State University.
In addition to Ju and Lu, other principal authors of the Nature paper are Tonghang Han and Yuxuan Yao, both of MIT. Lu, Han, and Yao are co-first authors of the paper who contributed equally to the work. Other MIT authors are Jixiang Yang, Junseok Seo, Lihan Shi, and Shenyong Ye. Additional members of the team are Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.
This work was supported by a Sloan Fellowship, a Mathworks Fellowship, the U.S. Department of Energy, the Japan Society for the Promotion of Science KAKENHI, and the World Premier International Research Initiative of Japan. Device fabrication was performed at the Harvard Center for Nanoscale Systems and MIT.nano.
Helping the immune system attack tumors
Stefani Spranger is working to discover why some cancers don’t respond to immunotherapy, in hopes of making them more vulnerable to it.
In addition to patrolling the body for foreign invaders, the immune system also hunts down and destroys cells that have become cancerous or precancerous. However, some cancer cells end up evading this surveillance and growing into tumors.
Once established, tumor cells often send out immunosuppressive signals, which leads T cells to become “exhausted” and unable to attack the tumor. In recent years, some cancer immunotherapy drugs have shown great success in rejuvenating those T cells so they can begin attacking tumors again.
While this approach has proven effective against cancers such as melanoma, it doesn’t work as well for others, including lung and ovarian cancer. MIT Associate Professor Stefani Spranger is trying to figure out how those tumors are able to suppress immune responses, in hopes of finding new ways to galvanize T cells into attacking them.
“We really want to understand why our immune system fails to recognize cancer,” Spranger says. “And I’m most excited about the really hard-to-treat cancers because I think that’s where we can make the biggest leaps.”
Her work has led to a better understanding of the factors that control T-cell responses to tumors, and raised the possibility of improving those responses through vaccination or treatment with immune-stimulating molecules called cytokines.
“We’re working on understanding what exactly the problem is, and then collaborating with engineers to find a good solution,” she says.
Jumpstarting T cells
As a student in Germany, where students often have to choose their college major while still in high school, Spranger envisioned going into the pharmaceutical industry and chose to major in biology. At Ludwig Maximilian University in Munich, her course of study began with classical biology subjects such as botany and zoology, and she began to doubt her choice. But, once she began taking courses in cell biology and immunology, her interest was revived and she continued into a biology graduate program at the university.
During a paper discussion class early in her graduate school program, Spranger was assigned to a Science paper on a promising new immunotherapy treatment for melanoma. This strategy involves isolating tumor-infiltrating T-cells during surgery, growing them into large numbers, and then returning them to the patient. For more than 50 percent of those patients, the tumors were completely eliminated.
“To me, that changed the world,” Spranger recalls. “You can take the patient’s own immune system, not really do all that much to it, and then the cancer goes away.”
Spranger completed her PhD studies in a lab that worked on further developing that approach, known as adoptive T-cell transfer therapy. At that point, she still was leaning toward going into pharma, but after finishing her PhD in 2011, her husband, also a biologist, convinced her that they should both apply for postdoc positions in the United States.
They ended up at the University of Chicago, where Spranger worked in a lab that studies how the immune system responds to tumors. There, she discovered that while melanoma is usually very responsive to immunotherapy, there is a small fraction of melanoma patients whose T cells don’t respond to the therapy at all. That got her interested in trying to figure out why the immune system doesn’t always respond to cancer the way that it should, and in finding ways to jumpstart it.
During her postdoc, Spranger also discovered that she enjoyed mentoring students, which she hadn’t done as a graduate student in Germany. That experience drew her away from going into the pharmaceutical industry, in favor of a career in academia.
“I had my first mentoring teaching experience having an undergrad in the lab, and seeing that person grow as a scientist, from barely asking questions to running full experiments and coming up with hypotheses, changed how I approached science and my view of what academia should be for,” she says.
Modeling the immune system
When applying for faculty jobs, Spranger was drawn to MIT by its collaborative environment and its Koch Institute for Integrative Cancer Research, which offered the chance to work with a large community of engineers in the field of immunology.
“That community is so vibrant, and it’s amazing to be a part of it,” she says.
Building on the research she had done as a postdoc, Spranger wanted to explore why some tumors respond well to immunotherapy, while others do not. For many of her early studies, she used a mouse model of non-small-cell lung cancer. In human patients, the majority of these tumors do not respond well to immunotherapy.
“We build model systems that resemble each of the different subsets of non-responsive non-small cell lung cancer, and we’re trying to really drill down to the mechanism of why the immune system is not appropriately responding,” she says.
As part of that work, she has investigated why the immune system behaves differently in different types of tissue. While immunotherapy drugs called checkpoint inhibitors can stimulate a strong T-cell response in the skin, they don’t do nearly as much in the lung. However, Spranger has shown that T cell responses in the lung can be improved when immune molecules called cytokines are also given along with the checkpoint inhibitor.
Those cytokines work, in part, by activating dendritic cells — a class of immune cells that help to initiate immune responses, including activation of T cells.
“Dendritic cells are the conductor for the orchestra of all the T cells, although they’re a very sparse cell population,” Spranger says. “They can communicate which type of danger they sense from stressed cells and then instruct the T cells on what they have to do and where they have to go.”
Spranger’s lab is now beginning to study other types of tumors that don’t respond at all to immunotherapy, including ovarian cancer and glioblastoma. Both the brain and the peritoneal cavity appear to suppress T-cell responses to tumors, and Spranger hopes to figure out how to overcome that immunosuppression.
“We’re specifically focusing on ovarian cancer and glioblastoma, because nothing’s working right now for those cancers,” she says. “We want to understand what we have to do in those sites to induce a really good anti-tumor immune response.”
Three from MIT named 2025 Gates Cambridge Scholars
Markey Freudenburg-Puricelli, Abigail Schipper ’24, and Rachel Zhang ’21 will pursue graduate studies at Cambridge University in the U.K.
MIT senior Markey Freudenburg-Puricelli and alumnae Abigail (“Abbie”) Schipper ’24 and Rachel Zhang ’21 have been selected as Gates Cambridge Scholars and will begin graduate studies this fall in the field of their choice at Cambridge University in the U.K.
Now celebrating its 25th year, the Gates Cambridge program provides fully funded post-graduate scholarships to outstanding applicants from countries outside of the U.K. The mission of Gates Cambridge is to build a global network of future leaders committed to changing the world for the better.
Students interested in applying to Gates Cambridge should contact Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development.
Markey Freudenburg-Puricelli
Freudenburg-Puricelli is majoring in Earth, atmospheric, and planetary sciences and minoring in Spanish. Her passion for geoscience has led her to travel to different corners of the world to conduct geologic fieldwork. These experiences have motivated her to pursue a career in developing scientific policy and environmental regulation that can protect those most vulnerable to climate change. As a Gates Cambridge Scholar, she will pursue an MPhil in environmental policy.
Arriving at MIT, Freudenburg-Puricelli joined the Terrascope first-year learning community, which focuses on hands-on education relating to global environmental issues. She then became an undergraduate research assistant in the McGee Lab for Paleoclimate and Geochronology, where she gathered and interpreted data used to understand climate features of permafrost across northern Canada.
Following a summer internship in Chile researching volcanoes at the Universidad Católica del Norte, Freudenburg-Puricelli joined the Gehring Lab for Plant Genetics, Epigenetics, and Seed Biology. Last summer, she traveled to Peru to work with the Department of Paleontology at the Universidad Nacional de Piura, conducting fieldwork and preserving and organizing fossil specimens. Freudenburg-Puricelli has also done fieldwork on sedimentology in New Mexico, geological mapping in the Mojave Desert, and field oceanography onboard the SSV Corwith Cramer.
On campus, Freudenburg-Puricelli is an avid glassblower and has been a teaching assistant at the MIT glassblowing lab. She is also a tour guide for the MIT Office of Admissions and has volunteered with the Department of Earth, Atmospheric and Planetary Sciences’ first-year pre-orientation program.
Abigail “Abbie” Schipper ’24
Originally from Portland, Oregon, Schipper graduated from MIT with a BS in mechanical engineering and a minor in biology. At Cambridge, she will pursue an MPhil in engineering, researching medical devices used in pre-hospital trauma systems in low- and middle-income countries with the Cambridge Health Systems Design group.
At MIT, Schipper was a member of MIT Emergency Medical Services, volunteering on the ambulance and serving as the heartsafe officer and director of ambulance operations. Inspired by her work in CPR education, she helped create the LifeSaveHer project, which aims to decrease the gender disparity in out-of-hospital cardiac arrest survival outcomes through the creation of female CPR mannequins and associated research. This team was the first-place winner of the 2023 PKG IDEAS Competition and a recipient of the Eloranta Research Fellowship.
Schipper’s work has also focused on designing medical devices for low-resource or extreme environments. As an undergraduate, she performed research in the lab of Professor Giovanni Traverso, where she worked on a project designing a drug delivery implant for regions with limited access to surgery. During a summer internship at the University College London Collaborative Center for Inclusion Health, she worked with the U.K.’s National Health Service to create durable, low-cost carbon dioxide sensors to approximate the risk of airborne infectious disease transmission in shelters for people experiencing homelessness.
After graduation, Schipper interned at SAGA Space Architecture through MISTI Denmark, designing life support systems for an underwater habitat that will be used for astronaut training and oceanographic research.
Schipper was a member of the Concourse learning community, Sigma Kappa Sorority, and her living group, Burton 3rd. In her free time, she enjoys fixing bicycles and playing the piano.
Rachel Zhang ’21
Zhang graduated from MIT with a BS in physics in 2021. During her senior year, she was a recipient of the Joel Matthews Orloff Award. She then earned an MS in astronomy at Northwestern University. An internship at the Center for Computational Astrophysics at the Flatiron Institute deepened her interest in the applications of machine learning for astronomy. At Cambridge, she will pursue a PhD in applied mathematics and theoretical physics.
Study: Even after learning the right idea, humans and animals still seem to test other approaches
New research adds evidence that learning a successful strategy for approaching a task doesn’t prevent further exploration, even if doing so reduces performance.
Maybe it’s a life hack or a liability, or a little of both. A surprising result in a new MIT study may suggest that people and animals alike share an inherent propensity to keep updating their approach to a task even when they have already learned how they should approach it, and even if the deviations sometimes lead to unnecessary error.
The behavior of “exploring” when one could just be “exploiting” could make sense for at least two reasons, says Mriganka Sur, senior author of the study published Feb. 18 in Current Biology. Just because a task’s rules seem set one moment doesn’t mean they’ll stay that way in this uncertain world, so altering behavior from the optimal condition every so often could help reveal needed adjustments. Moreover, trying new things when you already know what you like is a way of finding out whether there might be something even better out there than the good thing you’ve got going on right now.
“If the goal is to maximize reward, you should never deviate once you have found the perfect solution, yet you keep exploring,” says Sur, the Paul and Lilah Newton Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. “Why? It’s like food. We all like certain foods, but we still keep trying different foods because you never know, there might be something you could discover.”
Predicting timing
Former research technician Tudor Dragoi, now a graduate student at Boston University, led the study in which he and fellow members of the Sur Lab explored how humans and marmosets, a small primate, make predictions about event timing.
Three humans and two marmosets were given a simple task. They’d see an image on a screen for some amount of time — the amount of time varied from one trial to the next within a limited range — and they simply had to hit a button (marmosets poked a tablet while humans clicked a mouse) when the image disappeared. Success was defined as reacting as quickly as possible to the image’s disappearance without hitting the button too soon. Marmosets received a juice reward on successful trials.
Though marmosets needed more training time than humans, the subjects all settled into the same reasonable pattern of behavior regarding the task. The longer the image stayed on the screen, the faster their reaction time to its disappearance. This behavior follows the “hazard model” of prediction in which, if the image can only last for so long, the longer it’s still there, the more likely it must be to disappear very soon. The subjects learned this and overall, with more experience, their reaction times became faster.
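The hazard model described above can be made concrete with a small sketch. This is an illustration under assumed parameters (a uniform distribution of image durations with hypothetical bounds), not the study's actual analysis code: when durations are uniform on an interval, the hazard rate — the chance the image disappears in the next instant given it is still on screen — rises as time passes, which is why longer displays should produce faster reactions.

```python
# Sketch of the "hazard model" of timing prediction. Assumes image
# durations are drawn uniformly from [t_min, t_max]; the hazard rate
# h(t) = pdf(t) / survival(t) grows as the image stays on screen.
def uniform_hazard(t, t_min=0.5, t_max=2.0):
    """Hazard rate of a Uniform(t_min, t_max) duration at elapsed time t."""
    if t < t_min:
        return 0.0                 # disappearance cannot happen yet
    if t >= t_max:
        return float("inf")        # image must disappear by t_max
    # pdf = 1/(t_max - t_min); survival = (t_max - t)/(t_max - t_min)
    return 1.0 / (t_max - t)

# Monotonically increasing hazard: the longer the image has lasted,
# the more imminent its disappearance, so anticipation can build.
assert uniform_hazard(1.0) < uniform_hazard(1.5) < uniform_hazard(1.9)
```

A subject whose readiness tracks this rising hazard would react fastest to the longest displays, matching the pattern the team observed.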
But as the experiment continued, Sur and Dragoi’s team noticed something surprising was also going on. Mathematical modeling of the reaction time data revealed that both the humans and marmosets were letting the results of the immediate previous trial influence what they did on the next trial, even though they had already learned what to do. If the image was only on the screen briefly in one trial, on the next round subjects would decrease reaction time a bit (presumably expecting a shorter image duration again) whereas if the image lingered, they’d increase reaction time (presumably because they figured they’d have a longer wait).
Those results add to ones from a similar study Sur’s lab published in 2023, in which they found that even after mice learned the rules of a different cognitive task, they’d arbitrarily deviate from the winning strategy every so often. In that study, like this one, learning the successful strategy didn’t prevent subjects from continuing to test alternatives, even if it meant sacrificing reward.
“The persistence of behavioral changes even after task learning may reflect exploration as a strategy for seeking and settling on an optimal internal model of the environment,” the scientists wrote in the new study.
Relevance for autism
The similarity of the human and marmoset behaviors is an important finding as well, Sur says. That’s because differences in making predictions about one’s environment is posited to be a salient characteristic of autism spectrum disorders. Because marmosets are small, are inherently social, and are more cognitively complex than mice, work has begun in some labs to establish marmoset autism models, but a key component was establishing that they model autism-related behaviors well. By demonstrating that marmosets model neurotypical human behavior regarding predictions, the study therefore adds weight to the emerging idea that marmosets can indeed provide informative models for autism studies.
In addition to Dragoi and Sur, other authors of the paper are Hiroki Sugihara, Nhat Le, Elie Adam, Jitendra Sharma, Guoping Feng, and Robert Desimone.
The Simons Foundation Autism Research Initiative supported the research through the Simons Center for the Social Brain at MIT.
AI system predicts protein fragments that can bind to or inhibit a target
FragFold, developed by MIT Biology researchers, is a computational method with potential for impact on biological research and therapeutic applications.
All biological function is dependent on how different proteins interact with each other. Protein-protein interactions facilitate everything from transcribing DNA and controlling cell division to higher-level functions in complex organisms.
Much remains unclear, however, about how these functions are orchestrated on the molecular level, and how proteins interact with each other — either with other proteins or with copies of themselves.
Recent findings have revealed that small protein fragments have a lot of functional potential. Even though they are incomplete pieces, short stretches of amino acids can still bind to interfaces of a target protein, recapitulating native interactions. Through this process, they can alter that protein’s function or disrupt its interactions with other proteins.
Protein fragments could therefore empower both basic research on protein interactions and cellular processes, and could potentially have therapeutic applications.
Recently published in Proceedings of the National Academy of Sciences, a new method developed in the Department of Biology builds on existing artificial intelligence models to computationally predict protein fragments that can bind to and inhibit full-length proteins in E. coli. Theoretically, this tool could lead to genetically encodable inhibitors against any protein.
The work was done in the lab of associate professor of biology and Howard Hughes Medical Institute investigator Gene-Wei Li in collaboration with the lab of Jay A. Stein (1968) Professor of Biology, professor of biological engineering, and department head Amy Keating.
Leveraging machine learning
The program, called FragFold, leverages AlphaFold, an AI model that has led to phenomenal advancements in biology in recent years due to its ability to predict protein folding and protein interactions.
The goal of the project was to predict fragment inhibitors, which is a novel application of AlphaFold. The researchers on this project confirmed experimentally that more than half of FragFold’s predictions for binding or inhibition were accurate, even when researchers had no previous structural data on the mechanisms of those interactions.
“Our results suggest that this is a generalizable approach to find binding modes that are likely to inhibit protein function, including for novel protein targets, and you can use these predictions as a starting point for further experiments,” says co-first and corresponding author Andrew Savinov, a postdoc in the Li Lab. “We can really apply this to proteins without known functions, without known interactions, without even known structures, and we can put some credence in these models we’re developing.”
One example is FtsZ, a protein that is key for cell division. It is well-studied but contains a region that is intrinsically disordered and, therefore, especially challenging to study. Disordered proteins are dynamic, and their functional interactions are very likely fleeting — occurring so briefly that current structural biology tools can’t capture a single structure or interaction.
The researchers leveraged FragFold to explore the activity of fragments of FtsZ, including fragments of the intrinsically disordered region, to identify several new binding interactions with various proteins. This leap in understanding confirms and expands upon previous experiments measuring FtsZ’s biological activity.
This progress is significant in part because it was made without solving the disordered region’s structure, and because it exhibits the potential power of FragFold.
“This is one example of how AlphaFold is fundamentally changing how we can study molecular and cell biology,” Keating says. “Creative applications of AI methods, such as our work on FragFold, open up unexpected capabilities and new research directions.”
Inhibition, and beyond
The researchers accomplished these predictions by computationally fragmenting each protein and then modeling how those fragments would bind to interaction partners they thought were relevant.
They compared the maps of predicted binding across the entire sequence to the effects of those same fragments in living cells, determined using high-throughput experimental measurements in which millions of cells each produce one type of protein fragment.
AlphaFold uses co-evolutionary information to predict folding, and typically evaluates the evolutionary history of proteins using multiple sequence alignments (MSAs) for every single prediction run. The MSAs are critical, but are a bottleneck for large-scale predictions — they can take a prohibitive amount of time and computational power.
For FragFold, the researchers instead pre-calculated the MSA for a full-length protein once, and used that result to guide the predictions for each fragment of that full-length protein.
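The two ideas above — tiling a protein into fragments and reusing one precomputed full-length MSA rather than recomputing it per fragment — can be sketched as follows. This is an illustrative outline, not FragFold's actual code; the fragment length, step size, and function names are assumptions for the example.

```python
# Illustrative sketch of FragFold's two key steps (hypothetical
# parameters): tile a protein into overlapping fragments, then reuse
# a single precomputed full-length MSA by slicing its columns for
# each fragment instead of rebuilding an alignment per prediction.
def fragments(seq, length=30, step=5):
    """Yield (start, fragment) sliding windows over a protein sequence."""
    for start in range(0, max(len(seq) - length + 1, 1), step):
        yield start, seq[start:start + length]

def slice_msa(msa_rows, start, length):
    """Take the columns of a full-length MSA that cover one fragment."""
    return [row[start:start + length] for row in msa_rows]

protein = "MKVLATNDGREQLSIPFFAWGEKRTCDNYHMILQVSP"  # toy sequence
msa = [protein, protein]                           # toy stand-in "MSA"
for start, frag in fragments(protein, length=10, step=10):
    sub_msa = slice_msa(msa, start, len(frag))
    # each (frag, sub_msa) pair would seed one structure-prediction run
```

Slicing the precomputed alignment turns the per-fragment cost from an expensive database search into a cheap array operation, which is what makes scanning every fragment of a proteome-scale target set tractable.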
Savinov, together with Keating Lab alumnus Sebastian Swanson PhD ’23, predicted inhibitory fragments of a diverse set of proteins in addition to FtsZ. Among the interactions they explored was a complex between lipopolysaccharide transport proteins LptF and LptG. A protein fragment of LptG inhibited this interaction, presumably disrupting the delivery of lipopolysaccharide, which is a crucial component of the E. coli outer cell membrane essential for cellular fitness.
“The big surprise was that we can predict binding with such high accuracy and, in fact, often predict binding that corresponds to inhibition,” Savinov says. “For every protein we’ve looked at, we’ve been able to find inhibitors.”
The researchers initially focused on protein fragments as inhibitors because whether a fragment could block an essential function in cells is a relatively simple outcome to measure systematically. Looking forward, Savinov is also interested in exploring fragment function outside inhibition, such as fragments that can stabilize the protein they bind to, enhance or alter its function, or trigger protein degradation.
Design, in principle
This research is a starting point for developing a systemic understanding of cellular design principles, and what elements deep-learning models may be drawing on to make accurate predictions.
“There’s a broader, further-reaching goal that we’re building towards,” Savinov says. “Now that we can predict them, can we use the data we have from predictions and experiments to pull out the salient features to figure out what AlphaFold has actually learned about what makes a good inhibitor?”
Savinov and collaborators also delved further into how protein fragments bind, exploring other protein interactions and mutating specific residues to see how those interactions change how the fragment interacts with its target.
Experimentally examining the behavior of thousands of mutated fragments within cells, an approach known as deep mutational scanning, revealed key amino acids that are responsible for inhibition. In some cases, the mutated fragments were even more potent inhibitors than their natural, full-length sequences.
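The library-design step behind a deep mutational scan can be sketched briefly. This is a hedged illustration of the general technique, not the authors' pipeline: enumerate every single-residue substitution of a fragment, after which each variant's inhibitory effect is measured in cells in one pooled experiment.

```python
# Sketch of deep mutational scanning's library design: generate every
# single-residue substitution of a peptide fragment. A fragment of
# length L yields L * 19 variants (19 alternatives per position).
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def single_mutants(fragment):
    """All single-point substitutions of a peptide fragment."""
    variants = []
    for i, wild_type in enumerate(fragment):
        for aa in AMINO_ACIDS:
            if aa != wild_type:
                variants.append(fragment[:i] + aa + fragment[i + 1:])
    return variants

muts = single_mutants("MKV")
assert len(muts) == 3 * 19  # 57 variants for a 3-residue fragment
```

Comparing the measured inhibition of each variant against the wild-type fragment is what reveals which residues are responsible for binding, as described above.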
“Unlike previous methods, we are not limited to identifying fragments in experimental structural data,” says Swanson. “The core strength of this work is the interplay between high-throughput experimental inhibition data and the predicted structural models: the experimental data guides us towards the fragments that are particularly interesting, while the structural models predicted by FragFold provide a specific, testable hypothesis for how the fragments function on a molecular level.”
Savinov is excited about the future of this approach and its myriad applications.
“By creating compact, genetically encodable binders, FragFold opens a wide range of possibilities to manipulate protein function,” Li agrees. “We can imagine delivering functionalized fragments that can modify native proteins, change their subcellular localization, and even reprogram them to create new tools for studying cell biology and treating diseases.”
MIT faculty, alumni named 2025 Sloan Research Fellows
Annual award honors early-career researchers for creativity, innovation, and research accomplishments.
Seven MIT faculty and 21 additional MIT alumni are among 126 early-career researchers honored with 2025 Sloan Research Fellowships by the Alfred P. Sloan Foundation.
The recipients represent the MIT departments of Biology; Chemical Engineering; Chemistry; Civil and Environmental Engineering; Earth, Atmospheric and Planetary Sciences; Economics; Electrical Engineering and Computer Science; Mathematics; and Physics as well as the Music and Theater Arts Section and the MIT Sloan School of Management.
The fellowships honor exceptional researchers at U.S. and Canadian educational institutions, whose creativity, innovation, and research accomplishments make them stand out as the next generation of leaders. Winners receive a two-year, $75,000 fellowship that can be used flexibly to advance the fellow’s research.
“The Sloan Research Fellows represent the very best of early-career science, embodying the creativity, ambition, and rigor that drive discovery forward,” says Adam F. Falk, president of the Alfred P. Sloan Foundation. “These extraordinary scholars are already making significant contributions, and we are confident they will shape the future of their fields in remarkable ways.”
Including this year’s recipients, a total of 333 MIT faculty have received Sloan Research Fellowships since the program’s inception in 1955. MIT and Northwestern University are tied for having the most faculty in the 2025 cohort of fellows, each with seven. The MIT recipients are:
Ariel L. Furst is the Paul M. Cook Career Development Professor of Chemical Engineering at MIT. Her lab combines biological, chemical, and materials engineering to solve challenges in human health and environmental sustainability, with lab members developing technologies for implementation in low-resource settings to ensure equitable access to technology. Furst completed her PhD in the lab of Professor Jacqueline K. Barton at Caltech developing new cancer diagnostic strategies based on DNA charge transport. She was then an A.O. Beckman Postdoctoral Fellow in the lab of Professor Matthew Francis at the University of California at Berkeley, developing sensors to monitor environmental pollutants. She is the recipient of the NIH New Innovator Award, the NSF CAREER Award, and the Dreyfus Teacher-Scholar Award. She is passionate about STEM outreach and increasing participation of underrepresented groups in engineering.
Mohsen Ghaffari SM ’13, PhD ’17 is an associate professor in the Department of Electrical Engineering and Computer Science (EECS) as well as the Computer Science and Artificial Intelligence Laboratory (CSAIL). His research explores the theory of distributed and parallel computation, and he has had influential work on a range of algorithmic problems, including generic derandomization methods for distributed computing and parallel computing (which resolved several decades-old open problems), improved distributed algorithms for graph problems, sublinear algorithms derived via distributed techniques, and algorithmic and impossibility results for massively parallel computation. His work has been recognized with best paper awards at the IEEE Symposium on Foundations of Computer Science (FOCS), the ACM-SIAM Symposium on Discrete Algorithms (SODA), the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), the ACM Symposium on Principles of Distributed Computing (PODC), and the International Symposium on Distributed Computing (DISC), as well as with a European Research Council Starting Grant and a Google Faculty Research Award, among other honors.
Marzyeh Ghassemi PhD ’17 is an associate professor within EECS and the Institute for Medical Engineering and Science (IMES). Ghassemi earned two bachelor’s degrees in computer science and electrical engineering from New Mexico State University as a Goldwater Scholar; her MS in biomedical engineering from Oxford University as a Marshall Scholar; and her PhD in computer science from MIT. Following stints as a visiting researcher with Alphabet’s Verily and an assistant professor at University of Toronto, Ghassemi joined EECS and IMES as an assistant professor in July 2021. (IMES is the home of the Harvard-MIT Program in Health Sciences and Technology.) She is affiliated with the Laboratory for Information and Decision Systems (LIDS), the MIT-IBM Watson AI Lab, the Abdul Latif Jameel Clinic for Machine Learning in Health, the Institute for Data, Systems, and Society (IDSS), and CSAIL. Ghassemi’s research in the Healthy ML Group creates a rigorous quantitative framework in which to design, develop, and place machine learning models in a way that is robust and useful, focusing on health settings. Her contributions range from socially-aware model construction to improving subgroup- and shift-robust learning methods to identifying important insights in model deployment scenarios that have implications in policy, health practice, and equity. Among other awards, Ghassemi has been named one of MIT Technology Review’s 35 Innovators Under 35 and an AI2050 Fellow, as well as receiving the 2018 Seth J. Teller Award, the 2023 MIT Prize for Open Data, a 2024 NSF CAREER Award, and the Google Research Scholar Award. She founded the nonprofit Association for Health, Inference and Learning (AHLI) and her work has been featured in popular press such as Forbes, Fortune, MIT News, and The Huffington Post.
Darcy McRose is the Thomas D. and Virginia W. Cabot Career Development Assistant Professor of Civil and Environmental Engineering. She is an environmental microbiologist who draws on techniques from genetics, chemistry, and geosciences to understand the ways microbes control nutrient cycling and plant health. Her laboratory uses small molecules, or “secondary metabolites,” made by plants and microbes as tractable experimental tools to study microbial activity in complex environments like soils and sediments. In the long term, this work aims to uncover fundamental controls on microbial physiology and community assembly that can be used to promote agricultural sustainability, ecosystem health, and human prosperity.
Sarah Millholland, an assistant professor of physics at MIT and member of the Kavli Institute for Astrophysics and Space Research, is a theoretical astrophysicist who studies extrasolar planets, including their formation and evolution, orbital dynamics, and interiors/atmospheres. She studies patterns in the observed planetary orbital architectures, referring to properties like the spacings, eccentricities, inclinations, axial tilts, and planetary size relationships. She specializes in investigating how gravitational interactions such as tides, resonances, and spin dynamics sculpt observable exoplanet properties. She is the 2024 recipient of the Vera Rubin Early Career Award for her contributions to the formation and dynamics of extrasolar planetary systems. She plans to use her Sloan Fellowship to explore how tidal physics shape the diversity of orbits and interiors of exoplanets orbiting close to their stars.
Emil Verner is the Albert F. (1942) and Jeanne P. Clear Career Development Associate Professor of Global Management and an associate professor of finance at the MIT Sloan School of Management. His research lies at the intersection of finance and macroeconomics, with a particular focus on understanding the causes and consequences of financial crises over the past 150 years. Verner’s recent work examines the drivers of bank runs and insolvency during banking crises, the role of debt booms in amplifying macroeconomic fluctuations, the effectiveness of debt relief policies during crises, and how financial crises impact political polarization and support for populist parties. Before joining MIT, he earned a PhD in economics from Princeton University.
Christian Wolf, the Rudi Dornbusch Career Development Assistant Professor of Economics and a faculty research fellow at the National Bureau of Economic Research, works in macroeconomics, monetary economics, and time series econometrics. His work focuses on the development and application of new empirical methods to address classic macroeconomic questions and to evaluate how robust the answers are to a range of common modeling assumptions. His research has provided path-breaking insights on monetary transmission mechanisms and fiscal policy. In a separate strand of work, Wolf has substantially deepened our understanding of the appropriate methods macroeconomists should use to estimate impulse response functions — how key economic variables respond to policy changes or unexpected shocks.
The following MIT alumni also received fellowships:
Jason Altschuler SM ’18, PhD ’22
David Bau III PhD ’21
Rene Boiteau PhD ’16
Lynne Chantranupong PhD ’17
Lydia B. Chilton ’06, ’07, MNG ’09
Jordan Cotler ’15
Alexander Ji PhD ’17
Sarah B. King ’10
Allison Z. Koenecke ’14
Eric Larson PhD ’18
Chen Lian ’15, PhD ’20
Huanqian Loh ’06
Ian J. Moult PhD ’16
Lisa Olshansky PhD ’15
Andrew Owens SM ’13, PhD ’16
Matthew Rognlie PhD ’16
David Rolnick ’12, PhD ’18
Shreya Saxena PhD ’17
Mark Sellke ’18
Amy X. Zhang PhD ’19
Aleksandr V. Zhukhovitskiy PhD ’16
Longtime MIT Professor Anthony “Tony” Sinskey ScD ’67, who was also the co-founder and faculty director of the Center for Biomedical Innovation (CBI), passed away on Feb. 12 at his home in New Hampshire. He was 84.
Deeply engaged with MIT, Sinskey left his mark on the Institute as much through the relationships he built as the research he conducted. Colleagues say that throughout his decades on the faculty, Sinskey’s door was always open.
“He was incredibly generous in so many ways,” says Graham Walker, an American Cancer Society Professor at MIT. “He was so willing to support people, and he did it out of sheer love and commitment. If you could just watch Tony in action, there was so much that was charming about the way he lived. I’ve said for years that after they made Tony, they broke the mold. He was truly one of a kind.”
Sinskey’s lab at MIT explored methods for metabolic engineering and the production of biomolecules. Over the course of his research career, he published more than 350 papers in leading peer-reviewed journals for biology, metabolic engineering, and biopolymer engineering, and filed more than 50 patents. Well-known in the biopharmaceutical industry, Sinskey contributed to the founding of multiple companies, including Metabolix, Tepha, Merrimack Pharmaceuticals, and Genzyme Corporation. Since its founding in 2005, Sinskey’s work with CBI has also led to impactful research papers, manufacturing initiatives, and educational content.
Across all of his work, Sinskey built a reputation as a supportive, collaborative, and highly entertaining friend who seemed to have a story for everything.
“Tony would always ask for my opinions — what did I think?” says Barbara Imperiali, MIT’s Class of 1922 Professor of Biology and Chemistry, who first met Sinskey as a graduate student. “Even though I was younger, he viewed me as an equal. It was exciting to be able to share my academic journey with him. Even later, he was continually opening doors for me, mentoring, connecting. He felt it was his job to get people into a room together to make new connections.”
Sinskey grew up in the small town of Collinsville, Illinois, and spent nights after school working on a farm. For his undergraduate degree, he attended the University of Illinois, where he got a job washing dishes at the dining hall. One day, as he recalled in a 2020 conversation, he complained to his advisor about the dishwashing job, so the advisor offered him a job washing equipment in his microbiology lab.
In a development that would repeat itself throughout Sinskey’s career, he befriended the researchers in the lab and started learning about their work. Soon he was showing up on weekends and helping out. The experience inspired Sinskey to go to graduate school, and he only applied to one place.
Sinskey earned his ScD from MIT in nutrition and food science in 1967. He joined MIT’s faculty a few years later and never left.
“He loved MIT and its excellence in research and education, which were incredibly important to him,” Walker says. “I don’t know of another institution this interdisciplinary — there’s barely a speed bump between departments — so you can collaborate with anybody. He loved that. He also loved the spirit of entrepreneurship, which he thrived on. If you heard somebody wanted to get a project done, you could run around, get 10 people, and put it together. He just loved doing stuff like that.”
Working across departments would become a signature of Sinskey’s research. His original office was on the first floor of MIT’s Building 56, right next to the parking lot, so he’d leave his door open in the mornings and afternoons and colleagues would stop in and chat.
“One of my favorite things to do was to drop in on Tony when I saw that his office door was open,” says Chris Kaiser, MIT’s Amgen Professor of Biology. “We had a whole range of things we liked to catch up on, but they always included his perspectives looking back on his long history at MIT. It also always included hopes for the future, including tracking trajectories of MIT students, whom he doted on.”
Long before the internet, colleagues describe Sinskey as a kind of internet unto himself, constantly leveraging his vast web of relationships to make connections and stay on top of the latest science news.
“He was an incredibly gracious person — and he knew everyone,” Imperiali says. “It was as if his Rolodex had no end. You would sit there and he would say, ‘Call this person.’ or ‘Call that person.’ And ‘Did you read this new article?’ He had a wonderful view of science and collaboration, and he always made that a cornerstone of what he did. Whenever I’d see his door open, I’d grab a cup of tea and just sit there and talk to him.”
When the first recombinant DNA molecules were produced in the 1970s, it became a hot area of research. Sinskey wanted to learn more about recombinant DNA, so he hosted a large symposium on the topic at MIT that brought in experts from around the world.
“He got his name associated with recombinant DNA for years because of that,” Walker recalls. “People started seeing him as Mr. Recombinant DNA. That kind of thing happened all the time with Tony.”
Sinskey’s research contributions extended beyond recombinant DNA into other microbial techniques to produce amino acids and biodegradable plastics. He co-founded CBI in 2005 to improve global health through the development and dispersion of biomedical innovations. The center adopted Sinskey’s collaborative approach in order to accelerate innovation in biotechnology and biomedical research, bringing together experts from across MIT’s schools.
“Tony was at the forefront of advancing cell culture engineering principles so that making biomedicines could become a reality. He knew early on that biomanufacturing was an important step on the critical path from discovering a drug to delivering it to a patient,” says Stacy Springs, the executive director of CBI. “Tony was not only my boss and mentor, but one of my closest friends. He was always working to help everyone reach their potential, whether that was a colleague, a former or current researcher, or a student. He had a gentle way of encouraging you to do your best.”
“MIT is one of the greatest places to be because you can do anything you want here as long as it’s not a crime,” Sinskey joked in 2020. “You can do science, you can teach, you can interact with people — and the faculty at MIT are spectacular to interact with.”
Sinskey shared his affection for MIT with his family. His wife, the late ChoKyun Rha ’62, SM ’64, SM ’66, ScD ’67, was a professor at MIT for more than four decades and the first woman of Asian descent to receive tenure at MIT. His two sons also attended MIT — Tong-ik Lee Sinskey ’79, SM ’80 and Taeminn Song MBA ’95, who is the director of strategy and strategic initiatives for MIT Information Systems and Technology (IS&T).
Song recalls: “He was driven by the same goal my mother had: to advance knowledge in science and technology by exploring new ideas and pushing everyone around them to be better.”
Around 10 years ago, Sinskey began teaching a class with Walker, Course 7.21/7.62 (Microbial Physiology). Walker says their approach was to treat the students as equals and learn as much from them as they taught. The lessons extended beyond the inner workings of microbes to what it takes to be a good scientist and how to be creative. Sinskey and Rha even started inviting the class over to their home for Thanksgiving dinner each year.
“At some point, we realized the class was turning into a close community,” Walker says. “Tony had this endless supply of stories. It didn’t seem like there was a topic in biology that Tony didn’t have a story about either starting a company or working with somebody who started a company.”
Over the last few years, Walker wasn’t sure they were going to continue teaching the class, but Sinskey remarked it was one of the things that gave his life meaning after his wife’s passing in 2021. That decided it.
After finishing up this past semester with a class-wide lunch at Legal Sea Foods, Sinskey and Walker agreed it was one of the best semesters they’d ever taught.
In addition to his two sons, Sinskey is survived by his daughter-in-law Hyunmee Elaine Song, five grandchildren, and two great-grandsons; his brother Timothy Sinskey; and his sister, Christine Sinskey Braudis. His brother Terry Sinskey died in 1975.
Gifts in Sinskey’s memory can be made to the ChoKyun Rha (1962) and Anthony J Sinskey (1967) Fund.
MIT biologists discover a new type of control over RNA splicing

They identified proteins that influence splicing of about half of all human introns, allowing for more complex types of gene regulation.

RNA splicing is a cellular process that is critical for gene expression. After genes are copied from DNA into messenger RNA, portions of the RNA that don’t code for proteins, called introns, are cut out and the coding portions are spliced back together.
This process is controlled by a large protein-RNA complex called the spliceosome. MIT biologists have now discovered a new layer of regulation that helps to determine which sites on the messenger RNA molecule the spliceosome will target.
The research team discovered that this type of regulation, which appears to influence the expression of about half of all human genes, is found throughout the animal kingdom, as well as in plants. The findings suggest that the control of RNA splicing, a process that is fundamental to gene expression, is more complex than previously known.
“Splicing in more complex organisms, like humans, is more complicated than it is in some model organisms like yeast, even though it’s a very conserved molecular process. There are bells and whistles on the human spliceosome that allow it to process specific introns more efficiently. One of the advantages of a system like this may be that it allows more complex types of gene regulation,” says Connor Kenny, an MIT graduate student and the lead author of the study.
Christopher Burge, the Uncas and Helen Whitaker Professor of Biology at MIT, is the senior author of the study, which appears today in Nature Communications.
Building proteins
RNA splicing, a process discovered in the late 1970s, allows cells to precisely control the content of the mRNA transcripts that carry the instructions for building proteins.
Each mRNA transcript contains coding regions, known as exons, and noncoding regions, known as introns. They also include sites that act as signals for where splicing should occur, allowing the cell to assemble the correct sequence for a desired protein. This process enables a single gene to produce multiple proteins; over evolutionary timescales, splicing can also change the size and content of genes and proteins, when different exons become included or excluded.
The spliceosome, which forms on introns, is composed of proteins and noncoding RNAs called small nuclear RNAs (snRNAs). In the first step of spliceosome assembly, an snRNA molecule known as U1 snRNA binds to the 5’ splice site at the beginning of the intron. Until now, it had been thought that the binding strength between the 5’ splice site and the U1 snRNA was the most important determinant of whether an intron would be spliced out of the mRNA transcript.
In the new study, the MIT team discovered that a family of proteins called LUC7 also helps to determine whether splicing will occur, but only for a subset of introns — in human cells, up to 50 percent.
Before this study, it was known that LUC7 proteins associate with U1 snRNA, but the exact function wasn’t clear. There are three different LUC7 proteins in human cells, and Kenny’s experiments revealed that two of these proteins interact specifically with one type of 5’ splice site, which the researchers called “right-handed.” A third human LUC7 protein interacts with a different type, which the researchers call “left-handed.”
The researchers found that about half of human introns contain a right- or left-handed site, while the other half do not appear to be controlled by interaction with LUC7 proteins. This type of control appears to add another layer of regulation that helps remove specific introns more efficiently, the researchers say.
“The paper shows that these two different 5’ splice site subclasses exist and can be regulated independently of one another,” Kenny says. “Some of these core splicing processes are actually more complex than we previously appreciated, which warrants more careful examination of what we believe to be true about these highly conserved molecular processes.”
“Complex splicing machinery”
Previous work has shown that mutation or deletion of one of the LUC7 proteins that bind to right-handed splice sites is linked to blood cancers, including about 10 percent of acute myeloid leukemias (AMLs). In this study, the researchers found that AMLs that lost a copy of the LUC7L2 gene have inefficient splicing of right-handed splice sites. These cancers also developed the same type of altered metabolism seen in earlier work.
“Understanding how the loss of this LUC7 protein in some AMLs alters splicing could help in the design of therapies that exploit these splicing differences to treat AML,” Burge says. “There are also small molecule drugs for other diseases such as spinal muscular atrophy that stabilize the interaction between U1 snRNA and specific 5’ splice sites. So the knowledge that particular LUC7 proteins influence these interactions at specific splice sites could aid in improving the specificity of this class of small molecules.”
Working with a lab led by Sascha Laubinger, a professor at Martin Luther University Halle-Wittenberg, the researchers found that introns in plants also have right- and left-handed 5’ splice sites that are regulated by Luc7 proteins.
The researchers’ analysis suggests that this type of splicing arose in a common ancestor of plants, animals, and fungi, but it was lost from fungi soon after they diverged from plants and animals.
“A lot of what we know about how splicing works and what the core components are actually comes from relatively old yeast genetics work,” Kenny says. “What we see is that humans and plants tend to have more complex splicing machinery, with additional components that can regulate different introns independently.”
The researchers now plan to further analyze the structures formed by the interactions of Luc7 proteins with mRNA and the rest of the spliceosome, which could help them figure out in more detail how different forms of Luc7 bind to different 5’ splice sites.
The research was funded by the U.S. National Institutes of Health and the German Research Foundation.
J-WAFS: Supporting food and water research across MIT

For the past decade, the Abdul Latif Jameel Water and Food Systems Lab has strengthened MIT faculty efforts in water and food research and innovation.

MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) has transformed the landscape of water and food research at MIT, driving faculty engagement and catalyzing new research and innovation in these critical areas. With philanthropic, corporate, and government support, J-WAFS’ strategic approach spans the entire research life cycle, from support for early-stage research to commercialization grants for more advanced projects.
Over the past decade, J-WAFS has invested approximately $25 million in direct research funding to support MIT faculty pursuing transformative research with the potential for significant impact. “Since awarding our first cohort of seed grants in 2015, it’s remarkable to look back and see that over 10 percent of the MIT faculty have benefited from J-WAFS funding,” observes J-WAFS Executive Director Renee J. Robins ’83. “Many of these professors hadn’t worked on water or food challenges before their first J-WAFS grant.”
By fostering interdisciplinary collaborations and supporting high-risk, high-reward projects, J-WAFS has amplified the capacity of MIT faculty to pursue groundbreaking research that addresses some of the world’s most pressing challenges facing our water and food systems.
Drawing MIT faculty to water and food research
J-WAFS open calls for proposals enable faculty to explore bold ideas and develop impactful approaches to tackling critical water and food system challenges. Professor Patrick Doyle’s work in water purification exemplifies this impact. “Without J-WAFS, I would have never ventured into the field of water purification,” Doyle reflects. While previously focused on pharmaceutical manufacturing and drug delivery, exposure to J-WAFS-funded peers led him to apply his expertise in soft materials to water purification. “Both the funding and the J-WAFS community led me to be deeply engaged in understanding some of the key challenges in water purification and water security,” he explains.
Similarly, Professor Otto Cordero of the Department of Civil and Environmental Engineering (CEE) leveraged J-WAFS funding to pivot his research into aquaculture. Cordero explains that his first J-WAFS seed grant “has been extremely influential for my lab because it allowed me to take a step in a new direction, with no preliminary data in hand.” Cordero’s expertise is in microbial communities. He was previously unfamiliar with aquaculture, but he saw the relevance of microbial communities to the health of farmed aquatic organisms.
Supporting early-career faculty
New assistant professors at MIT have particularly benefited from J-WAFS funding and support. J-WAFS has played a transformative role in shaping the careers and research trajectories of many new faculty members by encouraging them to explore novel research areas, and in many instances providing their first MIT research grant.
Professor Ariel Furst reflects on how pivotal J-WAFS’ investment has been in advancing her research. “This was one of the first grants I received after starting at MIT, and it has truly shaped the development of my group’s research program,” Furst explains. With J-WAFS’ backing, her lab has achieved breakthroughs in chemical detection and remediation technologies for water. “The support of J-WAFS has enabled us to develop the platform funded through this work beyond the initial applications to the general detection of environmental contaminants and degradation of those contaminants,” she elaborates.
Karthish Manthiram, now a professor of chemical engineering and chemistry at Caltech, explains how J-WAFS’ early investment enabled him and other young faculty to pursue ambitious ideas. “J-WAFS took a big risk on us,” Manthiram reflects. His research on breaking the nitrogen triple bond to make ammonia for fertilizer was initially met with skepticism. However, J-WAFS’ seed funding allowed his lab to lay the groundwork for breakthroughs that later attracted significant National Science Foundation (NSF) support. “That early funding from J-WAFS has been pivotal to our long-term success,” he notes.
These stories underscore the broad impact of J-WAFS’ support for early-career faculty, and its commitment to empowering them to address critical global challenges and innovate boldly.
Fueling follow-on funding
J-WAFS seed grants enable faculty to explore nascent research areas, but external funding for continued work is usually necessary to achieve the full potential of these novel ideas. “It’s often hard to get funding for early stage or out-of-the-box ideas,” notes J-WAFS Director Professor John H. Lienhard V. “My hope, when I founded J-WAFS in 2014, was that seed grants would allow PIs [principal investigators] to prove out novel ideas so that they would be attractive for follow-on funding. And after 10 years, J-WAFS-funded research projects have brought more than $21 million in subsequent awards to MIT.”
Professor Retsef Levi led a seed study on how agricultural supply chains affect food safety, with a team of faculty spanning the MIT schools of Engineering and Science as well as the MIT Sloan School of Management. The team parlayed their seed grant research into a multi-million-dollar follow-on initiative. Levi reflects, “The J-WAFS seed funding allowed us to establish the initial credibility of our team, which was key to our success in obtaining large funding from several other agencies.”
Dave Des Marais was an assistant professor in CEE when he received his first J-WAFS seed grant. The funding supported his research on how plant growth and physiology are controlled by genes and interact with the environment. The seed grant helped launch his lab’s work on enhancing climate change resilience in agricultural systems, which led to his Faculty Early Career Development (CAREER) Award from the NSF, a prestigious honor for junior faculty members. Now an associate professor, Des Marais continues to investigate the mechanisms and consequences of genomic and environmental interactions, supported by a five-year, $1,490,000 NSF grant. “J-WAFS provided essential funding to get my new research underway,” comments Des Marais.
Stimulating interdisciplinary collaboration
Des Marais’ seed grant was also key to developing new collaborations. He explains, “the J-WAFS grant supported me to develop a collaboration with Professor Caroline Uhler in EECS/IDSS [the Department of Electrical Engineering and Computer Science/Institute for Data, Systems, and Society] that really shaped how I think about framing and testing hypotheses. One of the best things about J-WAFS is facilitating unexpected connections among MIT faculty with diverse yet complementary skill sets.”
Professors A. John Hart of the Department of Mechanical Engineering and Benedetto Marelli of CEE also launched a new interdisciplinary collaboration with J-WAFS funding. They partnered to join expertise in biomaterials, microfabrication, and manufacturing, to create printed silk-based colorimetric sensors that detect food spoilage. “The J-WAFS Seed Grant provided a unique opportunity for multidisciplinary collaboration,” Hart notes.
Professors Stephen Graves in the MIT Sloan School of Management and Bishwapriya Sanyal in the Department of Urban Studies and Planning (DUSP) partnered to pursue new research on agricultural supply chains. With field work in Senegal, their J-WAFS-supported project brought together international development specialists and operations management experts to study how small firms and government agencies influence access to and uptake of irrigation technology by poorer farmers. “We used J-WAFS to spur a collaboration that would have been improbable without this grant,” they explain. Being part of the J-WAFS community also introduced them to researchers in Professor Amos Winter’s lab in the Department of Mechanical Engineering working on irrigation technologies for low-resource settings. DUSP doctoral candidate Mark Brennan notes, “We got to share our understanding of how irrigation markets and irrigation supply chains work in developing economies, and then we got to contrast that with their understanding of how irrigation system models work.”
Timothy Swager, professor of chemistry, and Rohit Karnik, professor of mechanical engineering and J-WAFS associate director, collaborated on a sponsored research project supported by Xylem, Inc. through the J-WAFS Research Affiliate program. The cross-disciplinary research, which targeted the development of ultra-sensitive sensors for toxic PFAS chemicals, was conceived following a series of workshops hosted by J-WAFS. Swager and Karnik were two of the participants, and their involvement led to the collaborative proposal that Xylem funded. “J-WAFS funding allowed us to combine Swager lab’s expertise in sensing with my lab’s expertise in microfluidics to develop a cartridge for field-portable detection of PFAS,” says Karnik. “J-WAFS has enriched my research program in so many ways,” adds Swager, who is now working to commercialize the technology.
Driving global collaboration and impact
J-WAFS has also helped MIT faculty establish and advance international collaboration and impactful global research. By funding and supporting projects that connect MIT researchers with international partners, J-WAFS has not only advanced technological solutions, but also strengthened cross-cultural understanding and engagement.
Professor Matthew Shoulders leads the inaugural J-WAFS Grand Challenge project. In response to the first J-WAFS call for “Grand Challenge” proposals, Shoulders assembled an interdisciplinary team based at MIT to enhance the climate resilience of agriculture by improving the most inefficient aspect of photosynthesis: RuBisCO, the notoriously inefficient carbon dioxide-fixing plant enzyme. J-WAFS funded this high-risk/high-reward project following a competitive process that engaged external reviewers through several rounds of iterative proposal development. The technical feedback led the team to add researchers with complementary expertise from the Australian National University. “Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists and field trial experts, yielding a robust feedback loop for enzyme engineering,” Shoulders says. “Together, this team will be able to make a concerted effort using the most modern, state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”
Professor Leon Glicksman and Research Engineer Eric Verploegen’s team designed a low-cost cooling chamber to preserve fruits and vegetables harvested by smallholder farmers with no access to cold chain storage. J-WAFS’ guidance motivated the team to prioritize practical considerations informed by local collaborators, ensuring market competitiveness. “As our new idea for a forced-air evaporative cooling chamber was taking shape, we continually checked that our solution was evolving in a direction that would be competitive in terms of cost, performance, and usability to existing commercial alternatives,” explains Verploegen, who is currently an MIT D-Lab affiliate. Following its initial seed grant, the team secured a J-WAFS Solutions commercialization grant, which Verploegen says “further motivated us to establish partnerships with local organizations capable of commercializing the technology earlier in the project than we might have done otherwise.” The team has since shared an open-source design as part of its commercialization strategy to maximize accessibility and impact.
Bringing corporate sponsored research opportunities to MIT faculty
J-WAFS also plays a role in driving private partnerships, enabling collaborations that bridge industry and academia. Through its Research Affiliate Program, for example, J-WAFS provides opportunities for faculty to collaborate with industry on sponsored research, helping to convert scientific discoveries into licensable intellectual property (IP) that companies can turn into commercial products and services.
J-WAFS introduced professor of mechanical engineering Alex Slocum to a challenge presented by its research affiliate company, Xylem: how to design a more energy-efficient pump for fluctuating flows. With centrifugal pumps consuming an estimated 6 percent of U.S. electricity annually, Slocum and his then-graduate student Hilary Johnson SM '18, PhD '22 developed an innovative variable volute mechanism that reduces energy usage. “Xylem envisions this as the first in a new category of adaptive pump geometry,” comments Johnson. The research produced a pump prototype and related IP that Xylem is working on commercializing. Johnson notes that these outcomes “would not have been possible without J-WAFS support and facilitation of the Xylem industry partnership.” Slocum adds, “J-WAFS enabled Hilary to begin her work on pumps, and Xylem sponsored the research to bring her to this point … where she has an opportunity to do far more than the original project called for.”
Swager speaks highly of the impact of corporate research sponsorship through J-WAFS on his research and technology translation efforts. His PFAS project with Karnik described above was also supported by Xylem. “Xylem was an excellent sponsor of our research. Their engagement and feedback were instrumental in advancing our PFAS detection technology, now on the path to commercialization,” Swager says.
Looking forward
What J-WAFS has accomplished is more than a collection of research projects; a decade of impact demonstrates how J-WAFS’ approach has been transformative for many MIT faculty members. As Professor Mathias Kolle puts it, his engagement with J-WAFS “had a significant influence on how we think about our research and its broader impacts.” He adds that it “opened my eyes to the challenges in the field of water and food systems and the many different creative ideas that are explored by MIT.”
This thriving ecosystem of innovation, collaboration, and academic growth around water and food research has not only helped faculty build interdisciplinary and international partnerships, but has also led to the commercialization of transformative technologies with real-world applications. C. Cem Taşan, the POSCO Associate Professor of Metallurgy who is leading a J-WAFS Solutions commercialization team that is about to launch a startup company, sums it up by noting, “Without J-WAFS, we wouldn’t be here at all.”
As J-WAFS looks to the future, its continued commitment — supported by the generosity of its donors and partners — builds on a decade of success enabling MIT faculty to advance water and food research that addresses some of the world’s most pressing challenges.
Unlocking the secrets of fusion’s core with AI-enhanced simulations

Fusion’s future depends on decoding plasma’s mysteries. Simulations can help keep research on track and reveal more efficient ways to generate fusion energy.

Creating and sustaining fusion reactions — essentially recreating star-like conditions on Earth — is extremely difficult, and Nathan Howard PhD ’12, a principal research scientist at the MIT Plasma Science and Fusion Center (PSFC), thinks it’s one of the most fascinating scientific challenges of our time. “Both the science and the overall promise of fusion as a clean energy source are really interesting. That motivated me to come to grad school [at MIT] and work at the PSFC,” he says.
Howard is a member of the Magnetic Fusion Experiments Integrated Modeling (MFE-IM) group at the PSFC. Along with MFE-IM group leader Pablo Rodriguez-Fernandez, Howard and the team use simulations and machine learning to predict how plasma will behave in a fusion device. MFE-IM and Howard’s research aims to forecast a given technology or configuration’s performance before it’s piloted in an actual fusion environment, allowing for smarter design choices. To ensure their accuracy, these models are continuously validated using data from previous experiments, keeping their simulations grounded in reality.
In a recent open-access paper titled “Prediction of Performance and Turbulence in ITER Burning Plasmas via Nonlinear Gyrokinetic Profile Prediction,” published in the January issue of Nuclear Fusion, Howard explains how he used high-resolution simulations of the swirling structures present in plasma, called turbulence, to confirm that the world’s largest experimental fusion device, currently under construction in Southern France, will perform as expected when switched on. He also demonstrates how a different operating setup could produce nearly the same amount of energy output but with less energy input, a discovery that could positively affect the efficiency of fusion devices in general.
The biggest and best of what’s never been built
Forty years ago, the United States and six other member nations came together to build ITER (Latin for “the way”), a fusion device that, once operational, would yield 500 megawatts of fusion power, and a plasma able to generate 10 times more energy than it absorbs from external heating. The plasma setup designed to achieve these goals — the most ambitious of any fusion experiment — is called the ITER baseline scenario, and as fusion science and plasma physics have progressed, ways to achieve this plasma have been refined using increasingly more powerful simulations like the modeling framework Howard used.
In his work to verify the baseline scenario, Howard used CGYRO, a computer code developed by Howard’s collaborators at General Atomics. CGYRO applies a complex plasma physics model to a set of defined fusion operating conditions. Although it is time-intensive, CGYRO generates very detailed simulations on how plasma behaves at different locations within a fusion device.
The comprehensive CGYRO simulations were then run through the PORTALS framework, a collection of tools originally developed at MIT by Rodriguez-Fernandez. “PORTALS takes the high-fidelity [CGYRO] runs and uses machine learning to build a quick model called a ‘surrogate’ that can mimic the results of the more complex runs, but much faster,” Rodriguez-Fernandez explains. “Only high-fidelity modeling tools like PORTALS give us a glimpse into the plasma core before it even forms. This predict-first approach allows us to create more efficient plasmas in a device like ITER.”
After the first pass, the surrogates’ accuracy was checked against the high-fidelity runs, and if a surrogate wasn’t producing results in line with CGYRO’s, PORTALS was run again to refine the surrogate until it better mimicked CGYRO’s results. “The nice thing is, once you have built a well-trained [surrogate] model, you can use it to predict conditions that are different, with a very much reduced need for the full complex runs.” Once they were fully trained, the surrogates were used to explore how different combinations of inputs might affect ITER’s predicted performance and how it achieved the baseline scenario. Notably, the surrogate runs took a fraction of the time, and they could be used in conjunction with CGYRO to give it a boost and produce detailed results more quickly.
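The train-check-refine loop described above is the core idea of surrogate modeling. The toy Python sketch below is purely illustrative — it is not the actual PORTALS or CGYRO code, which solves gyrokinetic turbulence physics; a simple analytic function stands in for the expensive simulation, and a polynomial fit stands in for the machine-learned surrogate. The sketch adds new high-fidelity samples wherever the surrogate disagrees most with the expensive model, then refits:

```python
import numpy as np

# Stand-in for an expensive high-fidelity simulation (CGYRO's role).
# The real code solves plasma turbulence equations; this is a toy function.
def high_fidelity(x):
    return np.sin(3 * x) + 0.5 * x

def fit_surrogate(xs, ys, degree=5):
    # Cheap surrogate: a least-squares polynomial fit to the samples.
    return np.polynomial.Polynomial.fit(xs, ys, degree)

# Start from a coarse set of high-fidelity samples.
xs = list(np.linspace(0.0, 2.0, 8))
ys = [high_fidelity(x) for x in xs]
grid = np.linspace(0.0, 2.0, 200)

for _ in range(4):
    surrogate = fit_surrogate(np.array(xs), np.array(ys))
    # For this demo only, we compare against the true model on a dense grid;
    # a real workflow estimates uncertainty instead, since dense evaluation
    # of the expensive model would defeat the purpose.
    errors = np.abs(surrogate(grid) - high_fidelity(grid))
    worst = grid[int(np.argmax(errors))]  # where the surrogate is least trusted
    xs.append(worst)
    ys.append(high_fidelity(worst))

print(f"max surrogate error after refinement: {errors.max():.4f}")
```

Once trained, the cheap surrogate can be queried thousands of times to scan operating conditions, with occasional high-fidelity runs to keep it honest — the same division of labor the article describes between PORTALS and CGYRO.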
“Just dropped in to see what condition my condition was in”
Howard’s work with CGYRO, PORTALS, and surrogates examined a specific combination of operating conditions that had been predicted to achieve the baseline scenario. Those conditions included the magnetic field used, the methods used to control plasma shape, the external heating applied, and many other variables. Using 14 iterations of CGYRO, Howard was able to confirm that the current baseline scenario configuration could achieve 10 times more power output than input into the plasma. Howard says of the results, “The modeling we performed is maybe the highest fidelity possible at this time, and almost certainly the highest fidelity published.”
The 14 iterations of CGYRO used to confirm the plasma performance included running PORTALS to build surrogate models for the input parameters and then tying the surrogates to CGYRO to work more efficiently. It only took three additional iterations of CGYRO to explore an alternate scenario that predicted ITER could produce almost the same amount of energy with about half the input power. The surrogate-enhanced CGYRO model revealed that the temperature of the plasma core — and thus the fusion reactions — wasn’t overly affected by less power input; less power input equals more efficient operation. Howard’s results are also a reminder that there may be other ways to improve ITER’s performance; they just haven’t been discovered yet.
Howard reflects, “The fact that we can use the results of this modeling to influence the planning of experiments like ITER is exciting. For years, I’ve been saying that this was the goal of our research, and now that we actually do it — it’s an amazing arc, and really fulfilling.”
Viewing the universe through ripples in space

Physicist Salvatore Vitale is looking for new sources of gravitational waves, to reach beyond what we can learn about the universe through light alone.

In early September 2015, Salvatore Vitale, who was then a research scientist at MIT, stopped home in Italy for a quick visit with his parents after attending a meeting in Budapest. The meeting had centered on the much-anticipated power-up of Advanced LIGO — a system scientists hoped would finally detect a passing ripple in space-time known as a gravitational wave.
Albert Einstein had predicted the existence of these cosmic reverberations nearly 100 years earlier and thought they would be impossible to measure. But scientists including Vitale believed they might have a shot with their new ripple detector, which was scheduled, finally, to turn on in a few days. At the meeting in Budapest, team members were excited, albeit cautious, acknowledging that it could be months or years before the instruments picked up any promising signs.
However, the day after he arrived for his long-overdue visit with his family, Vitale received a huge surprise.
“The next day, we detect the first gravitational wave, ever,” he remembers. “And of course I had to lock myself in a room and start working on it.”
Vitale and his colleagues had to work in secrecy to prevent the news from getting out before they could scientifically confirm the signal and characterize its source. That meant that no one — not even his parents — could know what he was working on. Vitale departed for MIT and promised that he would come back to visit for Christmas.
“And indeed, I fly back home on the 25th of December, and on the 26th we detect the second gravitational wave! At that point I had to swear them to secrecy and tell them what happened, or they would strike my name from the family record,” he says, only partly in jest.
With the family peace restored, Vitale could focus on the path ahead, which suddenly seemed bright with gravitational discoveries. He and his colleagues, as part of the LIGO Scientific Collaboration, announced the detection of the first gravitational wave in February 2016, confirming Einstein’s prediction. For Vitale, the moment also solidified his professional purpose.
“Had LIGO not detected gravitational waves when it did, I would not be where I am today,” Vitale says. “For sure I was very lucky to be doing this at the right time, for me, and for the instrument and the science.”
A few months later, Vitale joined the MIT faculty as an assistant professor of physics. Today, as a recently tenured associate professor, he is working with his students to analyze a bounty of gravitational signals, from Advanced LIGO as well as Virgo (a similar detector in Italy) and KAGRA, in Japan. The combined power of these observatories is enabling scientists to detect at least one gravitational wave a week, which has revealed a host of extreme sources, from merging black holes to colliding neutron stars.
“Gravitational waves give us a different view of the same universe, which could teach us about things that are very hard to see with just photons,” Vitale says.
Random motion
Vitale is from Reggio di Calabria, a small coastal city in the south of Italy, right at “the tip of the boot,” as he says. His family owned and ran a local grocery store, where he spent so much time as a child that he could recite the names of nearly all the wines in the store.
When he was 9 years old, he remembers stopping in at the local newsstand, which also sold used books. He gathered all the money he had in order to purchase two books, both by Albert Einstein. The first was a collection of letters from the physicist to his friends and family. The second was his theory of relativity.
“I read the letters, and then went through the second book and remember seeing these weird symbols that didn’t mean anything to me,” Vitale recalls.
Nevertheless, the kid was hooked, and continued reading up on physics, and later, quantum mechanics. Toward the end of high school, it wasn’t clear if Vitale could go on to college. Large grocery chains had run his parents’ store out of business, and in the process, the family lost their home and were struggling to recover their losses. But with his parents’ support, Vitale applied and was accepted to the University of Bologna, where he went on to earn a bachelor’s and a master’s in theoretical physics, specializing in general relativity and approximating ways to solve Einstein’s equations. He went on to pursue his PhD in theoretical physics at the Pierre and Marie Curie University in Paris.
“Then, things changed in a very, very random way,” he says.
Vitale’s PhD advisor was hosting a conference, and Vitale volunteered to hand out badges and flyers and help guests get their bearings. That first day, one guest drew his attention.
“I see this guy sitting on the floor, kind of banging his head against his computer because he could not connect his Ubuntu computer to the Wi-Fi, which back then was very common,” Vitale says. “So I tried to help him, and failed miserably, but we started chatting.”
The guest happened to be a professor from Arizona who specialized in analyzing gravitational-wave signals. Over the course of the conference, the two got to know each other, and the professor invited Vitale to Arizona to work with his research group. The unexpected opportunity opened a door to gravitational-wave physics that Vitale might have passed by otherwise.
“When I talk to undergrads and how they can plan their career, I say I don’t know that you can,” Vitale says. “The best you can hope for is a random motion that, overall, goes in the right direction.”
High risk, high reward
Vitale spent two months at Embry-Riddle Aeronautical University in Prescott, Arizona, where he analyzed simulated data of gravitational waves. At that time, around 2009, no one had detected actual signals of gravitational waves. The first iteration of the LIGO detectors began observations in 2002 but had so far come up empty.
“Most of my first few years was working entirely with simulated data because there was no real data in the first place. That led a lot of people to leave the field because it was not an obvious path,” Vitale says.
Nevertheless, the work he did in Arizona only piqued his interest, and Vitale chose to specialize in gravitational-wave physics, returning to Paris to finish up his PhD, then going on to a postdoc position at Nikhef, the Dutch National Institute for Subatomic Physics in Amsterdam. There, he joined as a member of the Virgo collaboration, making further connections among the gravitational-wave community.
In 2012, he made the move to Cambridge, Massachusetts, where he started as a postdoc at MIT’s LIGO Laboratory. At that time, scientists there were focused on fine-tuning Advanced LIGO’s detectors and simulating the types of signals that they might pick up. Vitale helped to develop an algorithm to search for signals likely to be gravitational waves.
Just before the detectors turned on for the first observing run, Vitale was promoted to research scientist. And as luck would have it, he was working with MIT students and colleagues on one of the two algorithms that picked up what would later be confirmed to be the first ever gravitational wave.
“It was exciting,” Vitale recalls. “Also, it took us several weeks to convince ourselves that it was real.”
In the whirlwind that followed the official announcement, Vitale became an assistant professor in MIT’s physics department. In 2017, in recognition of the discovery, the Nobel Prize in Physics was awarded to three pivotal members of the LIGO team, including MIT’s Rainer Weiss. Vitale and other members of the LIGO-Virgo collaboration attended the Nobel ceremony later on, in Stockholm, Sweden — a moment that was captured in a photograph displayed proudly in Vitale’s office.
Vitale was promoted to associate professor in 2022 and earned tenure in 2024. Unfortunately, his father passed away shortly before the tenure announcement. “He would have been very proud,” Vitale reflects.
Now, in addition to analyzing gravitational-wave signals from LIGO, Virgo, and KAGRA, Vitale is pushing ahead on plans for an even bigger, better LIGO successor. He is part of the Cosmic Explorer Project, which aims to build a gravitational-wave detector that is similar in design to LIGO but 10 times bigger. At that scale, scientists believe such an instrument could pick up signals from sources that are much farther away in space and time, even close to the beginning of the universe.
Then, scientists could look for never-before-detected sources, such as the very first black holes formed in the universe. They could also search within the same neighborhood as LIGO and Virgo, but with higher precision. Then, they might see gravitational signals that Einstein didn’t predict.
“Einstein developed the theory of relativity to explain everything from the motion of Mercury, which circles the sun every 88 days, to objects such as black holes that are 30 times the mass of the sun and move at half the speed of light,” Vitale says. “There’s no reason the same theory should work for both cases, but so far, it seems so, and we’ve found no departure from relativity. But you never know, and you have to keep looking. It’s high risk, for high reward.”
AI model deciphers the code in proteins that tells them where to go

Whitehead Institute and CSAIL researchers created a machine-learning model to predict and generate protein localization, with implications for understanding and remedying disease.

Proteins are the workhorses that keep our cells running, and there are many thousands of types of proteins in our cells, each performing a specialized function. Researchers have long known that the structure of a protein determines what it can do. More recently, researchers are coming to appreciate that a protein’s localization is also critical for its function. Cells are full of compartments that help to organize their many denizens. Along with the well-known organelles that adorn the pages of biology textbooks, these spaces also include a variety of dynamic, membrane-less compartments that concentrate certain molecules together to perform shared functions. Knowing where a given protein localizes, and who it co-localizes with, can therefore be useful for better understanding that protein and its role in the healthy or diseased cell, but researchers have lacked a systematic way to predict this information.
Meanwhile, protein structure has been studied for over half a century, culminating in the artificial intelligence tool AlphaFold, which can predict protein structure from a protein’s amino acid code, the linear string of building blocks within it that folds to create its structure. AlphaFold and models like it have become widely used tools in research.
Proteins also contain regions of amino acids that do not fold into a fixed structure, but are instead important for helping proteins join dynamic compartments in the cell. MIT Professor Richard Young and colleagues wondered whether the code in those regions could be used to predict protein localization in the same way that other regions are used to predict structure. Other researchers have discovered some protein sequences that code for protein localization, and some have begun developing predictive models for protein localization. However, researchers did not know whether a protein’s localization to any dynamic compartment could be predicted based on its sequence, nor did they have a comparable tool to AlphaFold for predicting localization.
Now, Young, also a member of the Whitehead Institute for Biological Research; Young lab postdoc Henry Kilgore; Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health in MIT’s Department of Electrical Engineering and Computer Science and principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL); and colleagues have built such a model, which they call ProtGPS. In a paper published on Feb. 6 in the journal Science, with first authors Kilgore and Barzilay lab graduate students Itamar Chinn, Peter Mikhael, and Ilan Mitnikov, the cross-disciplinary team debuts their model. The researchers show that ProtGPS can predict to which of 12 known types of compartments a protein will localize, as well as whether a disease-associated mutation will change that localization. Additionally, the research team developed a generative algorithm that can design novel proteins to localize to specific compartments.
“My hope is that this is a first step towards a powerful platform that enables people studying proteins to do their research,” Young says, “and that it helps us understand how humans develop into the complex organisms that they are, how mutations disrupt those natural processes, and how to generate therapeutic hypotheses and design drugs to treat dysfunction in a cell.”
The researchers also validated many of the model’s predictions with experimental tests in cells.
“It really excited me to be able to go from computational design all the way to trying these things in the lab,” Barzilay says. “There are a lot of exciting papers in this area of AI, but 99.9 percent of those never get tested in real systems. Thanks to our collaboration with the Young lab, we were able to test, and really learn how well our algorithm is doing.”
The researchers trained and tested ProtGPS on two batches of proteins with known localizations. They found that it could correctly predict where proteins end up with high accuracy. The researchers also tested how well ProtGPS could predict changes in protein localization based on disease-associated mutations within a protein. Many mutations — changes to the sequence for a gene and its corresponding protein — have been found to contribute to or cause disease based on association studies, but the ways in which the mutations lead to disease symptoms remain unknown.
Figuring out the mechanism for how a mutation contributes to disease is important because then researchers can develop therapies to fix that mechanism, preventing or treating the disease. Young and colleagues suspected that many disease-associated mutations might contribute to disease by changing protein localization. For example, a mutation could make a protein unable to join a compartment containing essential partners.
They tested this hypothesis by feeding ProtGPS more than 200,000 proteins with disease-associated mutations, and then asking it to both predict where those mutated proteins would localize and measure how much its prediction changed for a given protein from the normal to the mutated version. A large shift in the prediction indicates a likely change in localization.
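The shift-scoring idea is simple to sketch. The Python toy below is an assumption-laden illustration, not the ProtGPS code: `predict_localization` is a deterministic stub standing in for the real model, and the total-variation distance between the two predicted compartment distributions is just one plausible way to quantify the change (the paper’s exact metric may differ):

```python
import numpy as np

N_COMPARTMENTS = 12  # the study considers 12 known compartment types

def predict_localization(sequence):
    # Stub for a localization model: returns a probability distribution
    # over compartments. Seeded from the sequence so it is deterministic;
    # the real model is a learned neural network, not this.
    seed = sum(ord(c) for c in sequence)
    rng = np.random.default_rng(seed)
    p = rng.random(N_COMPARTMENTS)
    return p / p.sum()

def localization_shift(wild_type, mutant):
    """Total-variation distance between the predicted compartment
    distributions of the normal and mutated protein:
    0 = identical predictions, 1 = completely disjoint."""
    p = predict_localization(wild_type)
    q = predict_localization(mutant)
    return 0.5 * float(np.abs(p - q).sum())

# Hypothetical sequences for illustration; a single R -> W point mutation.
shift = localization_shift("MKTAYIAKQR", "MKTAYIAKQW")
print(f"predicted localization shift: {shift:.3f}")
```

Ranking 200,000 mutated proteins by a score like this, and following up on the largest shifts, mirrors the screening logic the researchers describe.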
The researchers found many cases in which a disease-associated mutation appeared to change a protein’s localization. They tested 20 examples in cells, using fluorescence to compare where in the cell a normal protein and the mutated version of it ended up. The experiments confirmed ProtGPS’s predictions. Altogether, the findings support the researchers’ suspicion that mis-localization may be an underappreciated mechanism of disease, and demonstrate the value of ProtGPS as a tool for understanding disease and identifying new therapeutic avenues.
“The cell is such a complicated system, with so many components and complex networks of interactions,” Mitnikov says. “It’s super interesting to think that with this approach, we can perturb the system, see the outcome of that, and so drive discovery of mechanisms in the cell, or even develop therapeutics based on that.”
The researchers hope that others begin using ProtGPS in the same way that they use predictive structural models like AlphaFold, advancing various projects on protein function, dysfunction, and disease.
The researchers were excited about the possible uses of their prediction model, but they also wanted their model to go beyond predicting localizations of existing proteins, and allow them to design completely new proteins. The goal was for the model to make up entirely new amino acid sequences that, when formed in a cell, would localize to a desired location. Generating a novel protein that can actually accomplish a function — in this case, the function of localizing to a specific cellular compartment — is incredibly difficult. In order to improve their model’s chances of success, the researchers constrained their algorithm to only design proteins like those found in nature. This is an approach commonly used in drug design, for logical reasons; nature has had billions of years to figure out which protein sequences work well and which do not.
Because of the collaboration with the Young lab, the machine learning team was able to test whether their protein generator worked. The model had good results. In one round, it generated 10 proteins intended to localize to the nucleolus. When the researchers tested these proteins in the cell, they found that four of them strongly localized to the nucleolus, and others may have had slight biases toward that location as well.
“The collaboration between our labs has been so generative for all of us,” Mikhael says. “We’ve learned how to speak each other’s languages, in our case learned a lot about how cells work, and by having the chance to experimentally test our model, we’ve been able to figure out what we need to do to actually make the model work, and then make it work better.”
Being able to generate functional proteins in this way could improve researchers’ ability to develop therapies. For example, if a drug must interact with a target that localizes within a certain compartment, then researchers could use this model to design a drug to also localize there. This should make the drug more effective and decrease side effects, since the drug will spend more time engaging with its target and less time interacting with other molecules, causing off-target effects.
The machine learning team members are enthused about the prospect of using what they have learned from this collaboration to design novel proteins with other functions beyond localization, which would expand the possibilities for therapeutic design and other applications.
“A lot of papers show they can design a protein that can be expressed in a cell, but not that the protein has a particular function,” Chinn says. “We actually had functional protein design, and a relatively huge success rate compared to other generative models. That’s really exciting to us, and something we would like to build on.”
All of the researchers involved see ProtGPS as an exciting beginning. They anticipate that their tool will be used to learn more about the roles of localization in protein function and mis-localization in disease. In addition, they are interested in expanding the model’s localization predictions to include more types of compartments, testing more therapeutic hypotheses, and designing increasingly functional proteins for therapies or other applications.
“Now that we know that this protein code for localization exists, and that machine learning models can make sense of that code and even create functional proteins using its logic, that opens up the door for so many potential studies and applications,” Kilgore says.
Study reveals the Phoenix galaxy cluster in the act of extreme cooling

Observations from NASA’s James Webb Space Telescope help to explain the cluster’s mysterious starburst, usually only seen in younger galaxies.

The core of a massive cluster of galaxies appears to be pumping out far more stars than it should. Now researchers at MIT and elsewhere have discovered a key ingredient within the cluster that explains the core’s prolific starburst.
In a new study published in Nature, the scientists report using NASA’s James Webb Space Telescope (JWST) to observe the Phoenix cluster — a sprawling collection of gravitationally bound galaxies that circle a central massive galaxy some 5.8 billion light years from Earth. The cluster is the largest of its kind that scientists have so far observed. For its size and estimated age, the Phoenix should be what astronomers call “red and dead” — long done with any star formation that is characteristic of younger galaxies.
But astronomers previously discovered that the core of the Phoenix cluster appeared surprisingly bright, and the central galaxy seemed to be churning out stars at an extremely vigorous rate. The observations raised a mystery: How was the Phoenix fueling such rapid star formation?
In younger galaxies, the “fuel” for forging stars is in the form of extremely cold and dense clouds of interstellar gas. For the much older Phoenix cluster, it was unclear whether the central galaxy could undergo the extreme cooling of gas that would be required to explain its stellar production, or whether cold gas migrated in from other, younger galaxies.
Now, the MIT team has gained a much clearer view of the cluster’s core, using JWST’s far-reaching, infrared-measuring capabilities. For the first time, they have been able to map regions within the core where there are pockets of “warm” gas. Astronomers have previously seen hints of both very hot gas, and very cold gas, but nothing in between.
The detection of warm gas confirms that the Phoenix cluster is actively cooling and able to generate a huge amount of stellar fuel on its own.
“For the first time we have a complete picture of the hot-to-warm-to-cold phase in star formation, which has really never been observed in any galaxy,” says study lead author Michael Reefe, a physics graduate student in MIT’s Kavli Institute for Astrophysics and Space Research. “There is a halo of this intermediate gas everywhere that we can see.”
“The question now is, why this system?” adds co-author Michael McDonald, associate professor of physics at MIT. “This huge starburst could be something every cluster goes through at some point, but we’re only seeing it happen currently in one cluster. The other possibility is that there’s something divergent about this system, and the Phoenix went down a path that other systems don’t go. That would be interesting to explore.”
Hot and cold
The Phoenix cluster was first spotted in 2010 by astronomers using the South Pole Telescope in Antarctica. The cluster comprises about 1,000 galaxies and lies in the constellation Phoenix, after which it is named. Two years later, McDonald led an effort to focus in on Phoenix using multiple telescopes, and discovered that the cluster’s central galaxy was extremely bright. The unexpected luminosity was due to a firehose of star formation. He and his colleagues estimated that this central galaxy was turning out stars at a staggering rate of about 1,000 per year.
“Previous to the Phoenix, the most star-forming galaxy cluster in the universe made about 100 stars per year, and even that was an outlier. The typical number is one-ish,” McDonald says. “The Phoenix is really offset from the rest of the population.”
Since that discovery, scientists have checked in on the cluster from time to time for clues to explain the abnormally high stellar production. They have observed pockets of both ultrahot gas, of about 1 million degrees Fahrenheit, and regions of extremely cold gas, of 10 kelvins, or 10 degrees above absolute zero.
The presence of very hot gas is no surprise: Most massive galaxies, young and old, host black holes at their cores that emit jets of extremely energetic particles, continually heating the galaxy’s gas and dust over its lifetime. Only in a galaxy’s early stages does some of this million-degree gas cool dramatically to the ultracold temperatures that can then form stars. For the Phoenix cluster’s central galaxy, which should be well past the stage of extreme cooling, the presence of ultracold gas presented a puzzle.
“The question has been: Where did this cold gas come from?” McDonald says. “It’s not a given that hot gas will ever cool, because there could be black hole or supernova feedback. So, there are a few viable options, the simplest being that this cold gas was flung into the center from other nearby galaxies. The other is that this gas somehow is directly cooling from the hot gas in the core.”
Neon signs
For their new study, the researchers worked under a key assumption: If the Phoenix cluster’s cold, star-forming gas is coming from within the central galaxy, rather than from the surrounding galaxies, the central galaxy should have not only pockets of hot and cold gas, but also gas that’s in a “warm” in-between phase. Detecting such intermediate gas would be like catching the gas in the midst of extreme cooling, serving as proof that the core of the cluster was indeed the source of the cold stellar fuel.
Following this reasoning, the team sought to detect any warm gas within the Phoenix core. They looked for gas that was somewhere between 10 kelvins and 1 million kelvins. To search for this Goldilocks gas in a system that is 5.8 billion light years away, the researchers looked to JWST, which is capable of observing farther and more clearly than any observatory to date.
The team used the Medium-Resolution Spectrometer on JWST’s Mid-Infrared Instrument (MIRI), which enables scientists to map light in the infrared spectrum. In July 2023, the team focused the instrument on the Phoenix core and collected 12 hours’ worth of infrared images. They looked for a specific wavelength that neon gas emits when it has been stripped of a certain number of electrons. That ionization state occurs at around 300,000 kelvins, or 540,000 degrees Fahrenheit — a temperature squarely within the “warm” range that the researchers sought to detect and map. The team analyzed the images and mapped the locations where warm gas was observed within the central galaxy.
“This 300,000-degree gas is like a neon sign that’s glowing in a specific wavelength of light, and we could see clumps and filaments of it throughout our entire field of view,” Reefe says. “You could see it everywhere.”
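As a quick sanity check on those units (my own arithmetic, not a figure from the study), the Fahrenheit value follows directly from the kelvin one:

```python
def kelvin_to_fahrenheit(k: float) -> float:
    """Standard kelvin-to-Fahrenheit conversion."""
    return (k - 273.15) * 9 / 5 + 32

# The ~300,000 K neon ionization temperature quoted above:
print(f"{kelvin_to_fahrenheit(300_000):,.0f} °F")  # ~540,000 °F
```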
Based on the extent of warm gas in the core, the team estimates that the central galaxy is undergoing a huge degree of extreme cooling and is generating an amount of ultracold gas each year that is equal to the mass of about 20,000 suns. With that kind of stellar fuel supply, the team says it’s very likely that the central galaxy is indeed generating its own starburst, rather than using fuel from surrounding galaxies.
“I think we understand pretty completely what is going on, in terms of what is generating all these stars,” McDonald says. “We don’t understand why. But this new work has opened a new way to observe these systems and understand them better.”
This work was funded, in part, by NASA.
Mapping mRNA through its life cycle within a cell
Xiao Wang’s studies of how and where RNA is translated could lead to the development of better RNA therapeutics and vaccines.
When Xiao Wang applied to faculty jobs, many of the institutions where she interviewed thought her research proposal — to study the life cycle of RNA in cells and how it influences normal development and disease — was too broad.
However, that was not the case when she interviewed at MIT, where her future colleagues embraced her ideas and encouraged her to be even more bold.
“What I’m doing now is even broader, even bolder than what I initially proposed,” says Wang, who holds joint appointments in the Department of Chemistry and the Broad Institute of MIT and Harvard. “I got great support from all my colleagues in my department and at Broad so that I could get the resources to conduct what I wanted to do. It’s also a demonstration of how brave the students are. There is a really innovative culture and environment here, so the students are not scared by taking on something that might sound weird or unrealistic.”
Wang’s work on RNA brings together students from chemistry, biology, computer science, neuroscience, and other fields. In her lab, research is focused on developing tools that pinpoint where in a given cell different types of messenger RNA are translated into proteins — information that can offer insight into how cells control their fate and what goes wrong in disease, especially in the brain.
“The joint position between MIT Chemistry and the Broad Institute was very attractive to me because I was trained as a chemist, and I would like to teach and recruit students from chemistry. But meanwhile, I also wanted to get exposure to biomedical topics and have collaborators outside chemistry. I can collaborate with biologists, doctors, as well as computational scientists who analyze all these daunting data,” she says.
Imaging RNA
Wang began her career at MIT in 2019, just before the Covid-19 pandemic began. Until that point, she hardly knew anyone in the Boston area, but she found a warm welcome.
“I wasn’t trained at MIT, and I had never lived in Boston before. At first, I had very small social circles, just with my colleagues and my students, but amazingly, even during the pandemic, I never felt socially isolated. I just felt so plugged in already, even though it’s a very close, small circle,” she says.
Growing up in China, Wang became interested in science in middle school, when she was chosen to participate in China’s National Olympiad in math and chemistry. That gave her the chance to learn college-level course material, and she ended up winning a gold medal in the nationwide chemistry competition.
“That exposure was enough to draw me into initially mathematics, but later on more into chemistry. That’s how I got interested in a more science-oriented major and then career path,” Wang says.
At Peking University, she majored in chemistry and molecular engineering. There, she worked with Professor Jian Pei, who gave her the opportunity to work independently on her own research project.
“I really like to do research because every day you have a hypothesis, you have a design, and you make it happen. It’s like playing a video game: You have this roughly daily feedback loop. Sometimes it’s a reward, sometimes it’s not. I feel it’s more interesting than taking a class, so I think that made me decide I should apply for graduate school,” she says.
As a graduate student at the University of Chicago, she became interested in RNA while doing a rotation in the lab of Chuan He, a professor of chemistry. He was studying chemical modifications that affect the function of messenger RNA — the molecules that carry protein-building instructions from DNA to ribosomes, where proteins are assembled.
Wang ended up joining He’s lab, where she studied a common mRNA modification known as m6A, which influences how efficiently mRNA is translated into protein and how fast it gets degraded in the cell. She also began to explore how mRNA modifications affect embryonic development. As a model for these studies, she was using zebrafish, which have transparent embryos that develop from fertilized eggs into free-swimming larvae within two days. That got her interested in developing methods that could reveal where different types of RNA were being expressed, by imaging the entire organism.
Such an approach, she soon realized, could also be useful for studying the brain. As a postdoc at Stanford University, she started to develop RNA imaging methods, working with Professor Karl Deisseroth. There are existing techniques for identifying mRNA molecules that are expressed in individual cells, but those don’t offer information about exactly where in the cells different types of mRNA are located. She began developing a technique called STARmap that could accomplish this type of “spatial transcriptomics.”
Using this technique, researchers first use formaldehyde to crosslink all of the mRNA molecules in place. Then, the tissue is washed with fluorescent DNA probes that are complementary to the target mRNA sequences. These probes can then be imaged and sequenced, revealing the locations of each mRNA sequence within a cell. This allows for the visualization of mRNA molecules that encode thousands of different genes within single cells.
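The base-pairing logic behind those complementary probes can be sketched in a few lines. This is a toy illustration only — actual STARmap probe design is considerably more elaborate, and the sequence here is made up:

```python
# Map each RNA base to the DNA base that pairs with it.
RNA_TO_DNA_COMPLEMENT = str.maketrans("AUCG", "TAGC")

def dna_probe(mrna_segment: str) -> str:
    """Return the DNA probe that hybridizes to an RNA segment
    (complement, then reverse, since paired strands are antiparallel)."""
    return mrna_segment.translate(RNA_TO_DNA_COMPLEMENT)[::-1]

print(dna_probe("AUGGCUUAC"))  # -> GTAAGCCAT
```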
“I was leveraging my background in the chemistry of RNA to develop this RNA-centered brain mapping technology, which allows you to use RNA expression profiles to define brain cell types and also visualize their spatial architecture,” Wang says.
Tracking the RNA life cycle
Members of Wang’s lab are now working on expanding the capability of the STARmap technique so that it can be used to analyze brain function and brain wiring. They are also developing tools that will allow them to map the entire life cycle of mRNA molecules, from synthesis to translation to degradation, and track how these molecules are transported within a cell during their lifetime.
One of these tools, known as RIBOmap, pinpoints the locations of mRNA molecules as they are being translated at ribosomes. Another tool allows the researchers to measure how quickly mRNA is degraded after being transcribed.
“We are trying to develop a toolkit that will let us visualize every step of the RNA life cycle inside cells and tissues,” Wang says. “These are newer generations of tool development centered around these RNA biological questions.”
One of these central questions is how different cell types control their RNA life cycles differently, and how that affects their differentiation. Differences in RNA control may also be a factor in diseases such as Alzheimer’s. In a 2023 study, Wang and MIT Professor Morgan Sheng used a version of STARmap to discover how cells called microglia become more inflammatory as amyloid-beta plaques form in the brain. Wang’s lab is also pursuing studies of how differences in mRNA translation might affect schizophrenia and other neurological disorders.
“The reason we think there will be a lot of interesting biology to discover is because the formation of neural circuits is through synapses, and synapse formation and learning and memory are strongly associated with localized RNA translation, which involves multiple steps including RNA transport and recycling,” she says.
In addition to investigating those biological questions, Wang is also working on ways to boost the efficiency of mRNA therapeutics and vaccines by changing their chemical modifications or their topological structure.
“Our goal is to create a toolbox and RNA synthesis strategy where we can precisely tune the chemical modification on every particle of RNA,” Wang says. “We want to establish how those modifications will influence how fast mRNA can produce protein, and in which cell types they could be used to more efficiently produce protein.”
MIT method enables ultrafast protein labeling of tens of millions of densely packed cells
Tissue processing advance can label proteins at the level of individual cells across large samples just as fast and uniformly as in dissociated single cells.
A new technology developed at MIT enables scientists to label proteins across millions of individual cells in fully intact 3D tissues with unprecedented speed, uniformity, and versatility. Using the technology, the team was able to richly label large tissue samples in a single day. In their new study in Nature Biotechnology, they also demonstrate that the ability to label proteins with antibodies at the single-cell level across large tissue samples can reveal insights left hidden by other widely used labeling methods.
Profiling the proteins that cells are making is a staple of studies in biology, neuroscience, and related fields because the proteins a cell is expressing at a given moment can reflect the functions the cell is trying to perform or its response to its circumstances, such as disease or treatment. As much as microscopy and labeling technologies have advanced, enabling innumerable discoveries, scientists have still lacked a reliable and practical way of tracking protein expression at the level of millions of densely packed individual cells in whole, 3D intact tissues. Often confined to thin tissue sections under slides, scientists therefore haven’t had tools to thoroughly appreciate cellular protein expression in the whole, connected systems in which it occurs.
“Conventionally, investigating the molecules within cells requires dissociating tissue into single cells or slicing it into thin sections, as light and chemicals required for analysis cannot penetrate deep into tissues. Our lab developed technologies such as CLARITY and SHIELD, which enable investigation of whole organs by rendering them transparent, but we now needed a way to chemically label whole organs to gain useful scientific insights,” says study senior author Kwanghun Chung, associate professor in The Picower Institute for Learning and Memory, the departments of Chemical Engineering and Brain and Cognitive Sciences, and the Institute for Medical Engineering and Science at MIT. “If cells within a tissue are not uniformly processed, they cannot be quantitatively compared. In conventional protein labeling, it can take weeks for these molecules to diffuse into intact organs, making uniform chemical processing of organ-scale tissues virtually impossible and extremely slow.”
The new approach, called “CuRVE,” represents a major advance — years in the making — toward that goal by demonstrating a fundamentally new approach to uniformly processing large and dense tissues whole. In the study, the researchers explain how they overcame the technical barriers via an implementation of CuRVE called “eFLASH,” and provide copious vivid demonstrations of the technology, including how it yielded new neuroscience insights.
“This is a significant leap, especially in terms of the actual performance of the technology,” says co-lead author Dae Hee Yun PhD '24, a recent MIT graduate who is now a senior application engineer at LifeCanvas Technologies, a startup company Chung founded to disseminate the tools his lab invents. The paper’s other lead author is Young-Gyun Park, a former MIT postdoc who’s now an assistant professor at KAIST in South Korea.
Clever chemistry
The fundamental reason why large, 3D tissue samples are hard to label uniformly is that antibodies seep into tissue very slowly, but are quick to bind to their target proteins. The practical effect of this speed mismatch is that simply soaking a brain in a bath of antibodies will mean that proteins are intensely well labeled on the outer edge of the tissue, but virtually none of the antibodies will find cells and proteins deeper inside.
To improve labeling, the team conceived of a way — the conceptual essence of CuRVE — to resolve the speed mismatch. The strategy was to continuously control the pace of antibody binding while at the same time speeding up antibody permeation throughout the tissue. To figure out how this could work and to optimize the approach, they built and ran a sophisticated computational simulation that enabled them to test different settings and parameters, including different binding rates and tissue densities and compositions.
Then they set out to implement their approach in real tissues. Their starting point was a previous technology, called “SWITCH,” in which Chung’s lab devised a way of temporarily turning off antibody binding, letting the antibodies permeate the tissue, and then turning binding back on. As well as it worked, Yun says, the team realized there could be substantial improvements if antibody binding speed could be controlled constantly, but the chemicals used in SWITCH were too harsh for such ongoing treatment. So the team screened a library of similar chemicals to find one that could more subtly and continuously throttle antibody binding speed. They found that deoxycholic acid was an ideal candidate. Using that chemical, the team could not only modulate antibody binding by varying the chemical’s concentration, but also by varying the labeling bath’s pH (or acidity).
Meanwhile, to speed up antibody movement through tissues, the team used another prior technology invented in the Chung Lab: stochastic electrotransport. That technology accelerates the dispersion of antibodies through tissue by applying electric fields.
Implementing this eFLASH system of accelerated dispersion with continuously modifiable binding speed produced the wide variety of labeling successes demonstrated in the paper. In all, the team reported using more than 60 different antibodies to label proteins in cells across large tissue samples.
Notably, each of these specimens was labeled within a day, an “ultra-fast” speed for whole, intact organs, the authors say. Moreover, different preparations did not require new optimization steps.
Valuable visualizations
Among the ways the team put eFLASH to the test was by comparing their labeling to another often-used method: genetically engineering cells to fluoresce when the gene for a protein of interest is being transcribed. The genetic method doesn’t require dispersing antibodies throughout tissue, but it can be prone to discrepancies because reporting gene transcription and actual protein production are not exactly the same thing. Yun added that while antibody labeling reliably and immediately reports on the presence of a target protein, the genetic method can be much less immediate and persistent, still fluorescing even when the actual protein is no longer present.
In the study the team employed both kinds of labeling simultaneously in samples. Visualizing the labels that way, they saw many examples in which antibody labeling and genetic labeling differed widely. In some areas of mouse brains, they found that two-thirds of the neurons expressing PV (a protein prominent in certain inhibitory neurons) according to antibody labeling did not show any genetically based fluorescence. In another example, only a tiny fraction of cells that reported expression of a protein called ChAT via the genetic method also reported it via antibody labeling. In other words, there were cases where genetic labeling either severely underreported or severely overreported protein expression compared to antibody labeling.
The researchers don’t mean to impugn the clear value of using the genetic reporting methods, but instead suggest that also using organ-wide antibody labeling, as eFLASH allows, can help put that data in a richer, more complete context. “Our discovery of large regionalized loss of PV-immunoreactive neurons in healthy adult mice and with high individual variability emphasizes the importance of holistic and unbiased phenotyping,” the authors write.
Or as Yun puts it, the two different kinds of labeling are “two different tools for the job.”
In addition to Yun, Park, and Chung, the paper’s other authors are Jae Hun Cho, Lee Kamentsky, Nicholas Evans, Nicholas DiNapoli, Katherine Xie, Seo Woo Choi, Alexandre Albanese, Yuxuan Tian, Chang Ho Sohn, Qiangge Zhang, Minyoung Kim, Justin Swaney, Webster Guan, Juhyuk Park, Gabi Drummond, Heejin Choi, Luzdary Ruelas, and Guoping Feng.
Funding for the study came from the Burroughs Wellcome Fund, the Searle Scholars Program, a Packard Award in Science and Engineering, a NARSAD Young Investigator Award, the McKnight Foundation, the Freedom Together Foundation, The Picower Institute for Learning and Memory, the NCSOFT Cultural Foundation, and the National Institutes of Health.
3 Questions: What the laws of physics tell us about CO2 removal
In a report on the feasibility of removing carbon dioxide from the atmosphere, physicists say these technologies are “not a magic bullet, but also not a no-go.”
Human activities continue to pump billions of tons of carbon dioxide into the atmosphere each year, raising global temperatures and driving extreme weather events. As countries grapple with climate impacts and ways to significantly reduce carbon emissions, there have been various efforts to advance carbon dioxide removal (CDR) technologies that directly remove carbon dioxide from the air and sequester it for long periods of time.
Unlike carbon capture and storage technologies, which are designed to remove carbon dioxide at point sources such as fossil-fuel plants, CDR aims to remove carbon dioxide molecules that are already circulating in the atmosphere.
A new report by the American Physical Society and led by an MIT physicist provides an overview of the major experimental CDR approaches and determines their fundamental physical limits. The report focuses on methods that have the biggest potential for removing carbon dioxide, at the scale of gigatons per year, which is the magnitude that would be required to have a climate-stabilizing impact.
The new report was commissioned by the American Physical Society's Panel on Public Affairs, and appeared last week in the journal PRX. The report was chaired by MIT professor of physics Washington Taylor, who spoke with MIT News about CDR’s physical limitations and why it’s worth pursuing in tandem with global efforts to reduce carbon emissions.
Q: What motivated you to look at carbon dioxide removal systems from a physical science perspective?
A: The number one thing driving climate change is the fact that we’re taking carbon that has been stuck in the ground for 100 million years, and putting it in the atmosphere, and that’s causing warming. In the last few years there’s been a lot of interest both by the government and private entities in finding technologies to directly remove the CO2 from the air.
How to manage atmospheric carbon is the critical question in dealing with our impact on Earth’s climate. So, it’s very important for us to understand whether we can affect the carbon levels not just by changing our emissions profile but also by directly taking carbon out of the atmosphere. Physics has a lot to say about this because the possibilities are very strongly constrained by thermodynamics, mass issues, and things like that.
Q: What carbon dioxide removal methods did you evaluate?
A: They’re all at an early stage. It's kind of the Wild West out there in terms of the different ways in which companies are proposing to remove carbon from the atmosphere. In this report, we break down CDR processes into two classes: cyclic and once-through.
Imagine we are in a boat that has a hole in the hull and is rapidly taking on water. Of course, we want to plug the hole as quickly as we can. But even once we have fixed the hole, we need to get the water out so we aren't in danger of sinking or getting swamped. And this is particularly urgent if we haven't completely fixed the hole so we still have a slow leak. Now, imagine we have a couple of options for how to get the water out so we don’t sink.
The first is a sponge that we can use to absorb water, that we can then squeeze out and reuse. That’s a cyclic process in the sense that we have some material that we’re using over and over. There are cyclic CDR processes like chemical “direct air capture” (DAC), which acts basically like a sponge. You set up a big system with fans that blow air past some material that captures carbon dioxide. When the material is saturated, you close off the system and then use energy to essentially squeeze out the carbon and store it in a deep repository. Then you can reuse the material, in a cyclic process.
The second class of approaches is what we call “once-through.” In the boat analogy, it would be as if you try to soak up the water using rolls of paper towels. You let them saturate and then throw them overboard, and you use each roll once.
There are once-through CDR approaches, like enhanced rock weathering, that are designed to accelerate a natural process, by which certain rocks, when exposed to air, will absorb carbon from the atmosphere. Worldwide, this natural rock weathering is estimated to remove about 1 gigaton of carbon each year. “Enhanced rock weathering” is a CDR approach where you would dig up a lot of this rock, grind it up really small, to less than the width of a human hair, to get the process to happen much faster. The idea is, you dig up something, spread it out, and absorb CO2 in one go.
The key difference between these two processes is that the cyclic process is subject to the second law of thermodynamics and there’s an energy constraint. You can set an actual limit from physics, saying any cyclic process is going to take a certain amount of energy, and that cannot be avoided. For example, we find that for cyclic direct-air-capture (DAC) plants, based on second law limits, the absolute minimum amount of energy you would need to capture a gigaton of carbon is comparable to the total yearly electric energy consumption of the state of Virginia. Systems currently under development use at least three to 10 times this much energy on a per ton basis (and capture tens of thousands, not billions, of tons). Such systems also need to move a lot of air; the air that would need to pass through a DAC system to capture a gigaton of CO2 is comparable to the amount of air that passes through all the air cooling systems on the planet.
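That Virginia-sized energy figure can be checked against the thermodynamic floor. The sketch below is my own back-of-the-envelope arithmetic, not taken from the report, and assumes a 420 ppm atmospheric CO2 fraction at 25 °C with the dilute-limit minimum work of separation:

```python
import math

R = 8.314        # J/(mol*K), gas constant
T = 298.0        # K, ambient temperature (assumed)
x_co2 = 420e-6   # atmospheric CO2 mole fraction (assumed)
M_co2 = 0.044    # kg/mol

# Second-law minimum work to pull CO2 out of air into a pure stream,
# in the dilute limit: W_min = R*T*ln(1/x) per mole captured.
w_mol = R * T * math.log(1 / x_co2)   # J/mol (~19 kJ/mol)
w_gigaton_J = w_mol / M_co2 * 1e12    # J per gigaton (1e12 kg) of CO2
w_gigaton_TWh = w_gigaton_J / 3.6e15  # 1 TWh = 3.6e15 J

print(f"{w_mol/1e3:.1f} kJ/mol -> {w_gigaton_TWh:.0f} TWh per gigaton")
```

This lands at roughly 120 TWh per gigaton, which is indeed on the order of a U.S. state's annual electricity consumption, and real systems run several times above this ideal limit, as the report notes.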
On the other hand, if you have a once-through process, you could in some respects avoid the energy constraint, but now you’ve got a materials constraint due to the central laws of chemistry. For once-through processes like enhanced rock weathering, that means that if you want to capture a gigaton of CO2, roughly speaking, you’re going to need a billion tons of rock.
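The billion-ton figure follows from simple stoichiometry. As a rough check (my own arithmetic, assuming the rock is pure forsterite olivine fully carbonated via Mg2SiO4 + 2 CO2 -> 2 MgCO3 + SiO2, which is optimistic):

```python
# Molar masses in g/mol
M_forsterite = 2 * 24.31 + 28.09 + 4 * 16.00  # Mg2SiO4, ~140.7
M_co2 = 44.01

# Each mole of forsterite binds two moles of CO2 when fully carbonated,
# so the mass ratio of rock to captured CO2 is:
rock_per_ton_co2 = M_forsterite / (2 * M_co2)
print(f"{rock_per_ton_co2:.1f} tons of rock per ton of CO2")
```

Even under these ideal assumptions the ratio is of order one, so a gigaton of CO2 calls for on the order of a billion tons of ground rock, consistent with the report's rough figure.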
So, to capture gigatons of carbon through engineered methods requires tremendous amounts of physical material, air movement, and energy. On the other hand, everything we’re doing to put that CO2 in the atmosphere is extensive too, so large-scale emissions reductions face comparable challenges.
Q: What does the report conclude, in terms of whether and how to remove carbon dioxide from the atmosphere?
A: Our initial prejudice was, CDR is just going to take so much energy, and there’s no way around that because of the second law of thermodynamics, regardless of the method.
But as we discussed, there is this nuance about cyclic versus once-through systems. And there are two points of view that we ended up threading a needle between. One is the view that CDR is a silver bullet, and we’ll just do CDR and not worry about emissions — we’ll just suck it all out of the atmosphere. And that’s not the case. It will be really expensive, and will take a lot of energy and materials to do large-scale CDR. But there’s another view, where people say, don’t even think about CDR. Even thinking about CDR will compromise our efforts toward emissions reductions. The report comes down somewhere in the middle, saying that CDR is not a magic bullet, but also not a no-go.
If we are serious about managing climate change, we will likely want substantial CDR in addition to aggressive emissions reductions. The report concludes that research and development on CDR methods should be selectively and prudently pursued despite the expected cost and energy and material requirements.
At a policy level, the main message is that we need an economic and policy framework that incentivizes emissions reductions and CDR in a common framework; this would naturally allow the market to optimize climate solutions. Since in many cases it is much easier and cheaper to cut emissions than it will likely ever be to remove atmospheric carbon, clearly understanding the challenges of CDR should help motivate rapid emissions reductions.
For me, I’m optimistic in the sense that scientifically we understand what it will take to reduce emissions and to use CDR to bring CO2 levels down to a slightly lower level. Now, it’s really a societal and economic problem. I think humanity has the potential to solve these problems. I hope that we can find common ground so that we can take actions as a society that will benefit both humanity and the broader ecosystems on the planet, before we end up having bigger problems than we already have.
Seeking climate connections among the oceans’ smallest organisms
MIT oceanographer and biogeochemist Andrew Babbin has voyaged around the globe to investigate marine microbes and their influence on ocean health.
Andrew Babbin tries to pack light for work trips. Along with the travel essentials, though, he also brings a roll each of electrical tape, duct tape, lab tape, a pack of cable ties, and some bungee cords.
“It’s my MacGyver kit: You never know when you have to rig something on the fly in the field or fix a broken bag,” Babbin says.
The trips Babbin takes are far out to sea, on month-long cruises, where he works to sample waters off the Pacific coast and out in the open ocean. In remote locations, repair essentials often come in handy, as when Babbin had to zip-tie a wrench to a sampling device to help it sink through an icy Antarctic lake.
Babbin is an oceanographer and marine biogeochemist who studies marine microbes and the ways in which they control the cycling of nitrogen between the ocean and the atmosphere. This exchange helps maintain healthy ocean ecosystems and supports the ocean’s capacity to store carbon.
By combining measurements that he takes in the ocean with experiments in his MIT lab, Babbin is working to understand the connections between microbes and ocean nitrogen, which could in turn help scientists identify ways to maintain the ocean’s health and productivity. His work has taken him to many coastal and open-ocean regions around the globe.
“You really become an oceanographer and an Earth scientist to see the world,” says Babbin, who recently earned tenure as the Cecil and Ida Green Career Development Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “We embrace the diversity of places and cultures on this planet. To see just a small fraction of that is special.”
A powerful cycle
The ocean has been a constant presence for Babbin since childhood. His family is from Monmouth County, New Jersey, where he and his twin sister grew up playing along the Jersey shore. When they were teenagers, their parents took the kids on family cruise vacations.
“I always loved being on the water,” he says. “My favorite parts of any of those cruises were the days at sea, where you were just in the middle of some ocean basin with water all around you.”
In school, Babbin gravitated to the sciences, and chemistry in particular. After high school, he attended Columbia University, where a visit to the school’s Earth and environmental engineering department catalyzed a realization.
“For me, it was always this excitement about the water and about chemistry, and it was this pop of, ‘Oh wow, it doesn’t have to be one or the other,’” Babbin says.
He chose to major in Earth and environmental engineering, with a concentration in water resources and climate risks. After graduating in 2008, Babbin returned to his home state, where he attended Princeton University and set a course for a PhD in geosciences, with a focus on chemical oceanography and environmental microbiology. His advisor, oceanographer Bess Ward, took Babbin on as a member of her research group and invited him on several month-long cruises to various parts of the eastern tropical Pacific.
“I still remember that first trip,” Babbin recalls. “It was a whirlwind. Everyone else had been to sea a gazillion times and was loading the boat and strapping things down, and I had no idea of anything. And within a few hours, I was doing an experiment as the ship rocked back and forth!”
Babbin learned to deploy sampling canisters overboard, then haul them back up and analyze the seawater inside for signs of nitrogen — an essential nutrient for all living things on Earth.
As it turns out, the plants and animals that depend on nitrogen to survive are unable to take it up from the atmosphere themselves. They require a sort of go-between, in the form of microbes that “fix” nitrogen, converting it from nitrogen gas to more digestible forms. In the ocean, this nitrogen fixation is done by highly specialized microbial species, which work to make nitrogen available to phytoplankton — microscopic plant-like organisms that are the foundation of the marine food chain. Phytoplankton are also a main route by which the ocean absorbs carbon dioxide from the atmosphere.
Microorganisms may also use these biologically available forms of nitrogen for energy under certain conditions, returning nitrogen to the atmosphere. These microbes can also release nitrous oxide as a byproduct, a potent greenhouse gas that can also catalyze ozone loss in the stratosphere.
Through his graduate work, at sea and in the lab, Babbin became fascinated with the cycling of nitrogen and the role that nitrogen-fixing microbes play in supporting the ocean’s ecosystems and the climate overall. A balance of nitrogen inputs and outputs sustains phytoplankton and maintains the ocean’s ability to soak up carbon dioxide.
“Some of the really pressing questions in ocean biogeochemistry pertain to this cycling of nitrogen,” Babbin says. “Understanding the ways in which this one element cycles through the ocean, and how it is central to ecosystem health and the planet’s climate, has been really powerful.”
In the lab and out to sea
After completing his PhD in 2014, Babbin arrived at MIT as a postdoc in the Department of Civil and Environmental Engineering.
“My first feeling when I came here was, wow, this really is a nerd’s playground,” Babbin says. “I embraced being part of a culture where we seek to understand the world better, while also doing the things we really want to do.”
In 2017, he accepted a faculty position in MIT’s Department of Earth, Atmospheric and Planetary Sciences. He set up his laboratory space, painted in his favorite brilliant orange, on the top floor of the Green Building.
His group uses 3D printers to fabricate microfluidic devices in which they reproduce the conditions of the ocean environment and study microbe metabolism and its effects on marine chemistry. In the field, Babbin has led research expeditions to the Galapagos Islands and parts of the eastern Pacific, where he has collected and analyzed samples of air and water for signs of nitrogen transformations and microbial activity. His new measuring station in the Galapagos is able to infer marine emissions of nitrous oxide across a large swath of the eastern tropical Pacific Ocean. His group has also sailed to southern Cuba, where the researchers studied interactions of microbes in coral reefs.
Most recently, Babbin traveled to Antarctica, where he set up camp next to frozen lakes and plumbed for samples of pristine ice water that he will analyze for genetic remnants of ancient microbes. Such preserved bacterial DNA could help scientists understand how microbes evolved and influenced the Earth’s climate over billions of years.
“Microbes are the terraformers,” Babbin notes. “They have been, since life evolved more than 3 billion years ago. We have to think about how they shape the natural world and how they will respond to the Anthropocene as humans monkey with the planet ourselves.”
Collective action
Babbin is now charting new research directions. In addition to his work at sea and in the lab, he is venturing into engineering, with a new project to design denitrifying capsules. While nitrogen is an essential nutrient for maintaining a marine ecosystem, too much nitrogen, such as from fertilizer that runs off into lakes and streams, can generate blooms of toxic algae. Babbin is looking to design eco-friendly capsules that scrub excess anthropogenic nitrogen from local waterways.
He’s also beginning the process of designing a new sensor to measure low-oxygen concentrations in the ocean. As the planet warms, the oceans are losing oxygen, creating “dead zones” where fish cannot survive. While others including Babbin have tried to map these oxygen minimum zones, or OMZs, they have done so sporadically, by dropping sensors into the ocean over limited range, depth, and times. Babbin’s sensors could potentially provide a more complete map of OMZs, as they would be deployed on wide-ranging, deep-diving, and naturally propulsive vehicles: sharks.
“We want to measure oxygen. Sharks need oxygen. And if you look at where the sharks don’t go, you might have a sense of where the oxygen is not,” says Babbin, who is working with marine biologists on ways to tag sharks with oxygen sensors. “A number of these large pelagic fish move up and down the water column frequently, so you can map the depth to which they dive to, and infer something about the behavior. And my suggestion is, you might also infer something about the ocean’s chemistry.”
When he reflects on what stimulates new ideas and research directions, Babbin credits working with others, in his own group and across MIT.
“My best thoughts come from this collective action,” Babbin says. “Particularly because we all have different upbringings and approach things from a different perspective.”
He’s bringing this collaborative spirit to his new role, as a mission director for MIT’s Climate Project. Along with Jesse Kroll, who is a professor of civil and environmental engineering and of chemical engineering, Babbin co-leads one of the project’s six missions: Restoring the Atmosphere, Protecting the Land and Oceans. Babbin and Kroll are planning a number of workshops across campus that they hope will generate new connections, and spark new ideas, particularly around ways to evaluate the effectiveness of different climate mitigation strategies and better assess the impacts of climate on society.
“One area we want to promote is thinking of climate science and climate interventions as two sides of the same coin,” Babbin says. “There’s so much action that’s trying to be catalyzed. But we want it to be the best action. Because we really have one shot at doing this. Time is of the essence.”
David McGee named head of the Department of Earth, Atmospheric and Planetary Sciences

Specialist in paleoclimate and geochronology is known for contributions to education and community.

David McGee, the William R. Kenan Jr. Professor of Earth and Planetary Sciences at MIT, was recently appointed head of the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS), effective Jan. 15. He assumes the role from Professor Robert van der Hilst, the Schlumberger Professor of Earth and Planetary Sciences, who led the department for 13 years.
McGee specializes in applying isotope geochemistry and geochronology to reconstruct Earth’s climate history, helping to ground-truth our understanding of how the climate system responds during periods of rapid change. He has also been instrumental in the growth of the department’s community and culture, having served as EAPS associate department head since 2020.
“David is an amazing researcher who brings crucial, data-based insights to aid our response to climate change,” says dean of the School of Science and the Curtis (1963) and Kathleen Marble Professor of Astrophysics Nergis Mavalvala. “He is also a committed and caring educator, providing extraordinary investment in his students’ learning experiences, and through his direction of Terrascope, one of our unique first-year learning communities focused on generating solutions to sustainability challenges.”
“I am energized by the incredible EAPS community, by Rob’s leadership over the last 13 years, and by President Kornbluth’s call for MIT to innovate effective and wise responses to climate change,” says McGee. “EAPS has a unique role in this time of reckoning with planetary boundaries — our collective path forward needs to be guided by a deep understanding of the Earth system and a clear sense of our place in the universe.”
McGee’s research seeks to understand the Earth system’s response to past climate changes. Using geochemical analysis and uranium-series dating, McGee and his group investigate stalagmites, ancient lake deposits, and deep-sea sediments from field sites around the world to trace patterns of wind and precipitation, water availability in drylands, and permafrost stability through space and time. Armed with precise chronologies, he aims to shed light on drivers of historical hydroclimatic shifts and provide quantitative tests of climate model performance.
Beyond research, McGee has helped shape numerous Institute initiatives focused on environment, climate, and sustainability, including serving on the MIT Climate and Sustainability Consortium Faculty Steering Committee and the faculty advisory board for the MIT Environment and Sustainability Minor.
McGee also co-chaired MIT's Climate Education Working Group, one of three working groups established under the Institute's Fast Forward climate action plan. The group identified opportunities to strengthen climate- and sustainability-related education at the Institute, from curricular offerings to experiential learning opportunities and beyond.
In April 2023, the working group hosted the MIT Symposium for Advancing Climate Education, featuring talks by McGee and others on how colleges and universities can innovate and help students develop the skills, capacities, and perspectives they’ll need to live, lead, and thrive in a world being remade by the accelerating climate crisis.
“David is reimagining MIT undergraduate education to include meaningful collaborations with communities outside of MIT, teaching students that scientific discovery is important, but not always enough to make impact for society,” says van der Hilst. “He will help shape the future of the department with this vital perspective.”
From the start of his career, McGee has been dedicated to sharing his love of exploration with students. He earned a master’s degree in teaching and spent seven years as a teacher in middle school and high school classrooms before earning his PhD in Earth and environmental sciences from Columbia University. He joined the MIT faculty in 2012, and in 2018 received the Excellence in Mentoring Award from MIT’s Undergraduate Advising and Academic Programming office. In 2015, he became the director of MIT’s Terrascope first-year learning community.
“David's exemplary teaching in Terrascope comes through his understanding that effective solutions must be found where science intersects with community engagement to forge ethical paths forward,” adds van der Hilst. In 2023, for his work with Terrascope, McGee received the school’s highest award, the School of Science Teaching Prize. In 2022, he was named a Margaret MacVicar Faculty Fellow, the highest teaching honor at MIT.
As associate department head, McGee worked alongside van der Hilst and student leaders to promote EAPS community engagement, improve internal supports and reporting structures, and bolster opportunities for students to pursue advanced degrees and STEM careers.
Superconducting materials are similar to the carpool lane in a congested interstate. Like commuters who ride together, electrons that pair up can bypass the regular traffic, moving through the material with zero friction.
But just as with carpools, how easily electron pairs can flow depends on a number of conditions, including the density of pairs that are moving through the material. This “superfluid stiffness,” or the ease with which a current of electron pairs can flow, is a key measure of a material’s superconductivity.
Physicists at MIT and Harvard University have now directly measured superfluid stiffness for the first time in “magic-angle” graphene — materials that are made from two or more atomically thin sheets of graphene twisted with respect to each other at just the right angle to enable a host of exceptional properties, including unconventional superconductivity.
This superconductivity makes magic-angle graphene a promising building block for future quantum-computing devices, but exactly how the material superconducts is not well-understood. Knowing the material’s superfluid stiffness will help scientists identify the mechanism of superconductivity in magic-angle graphene.
The team’s measurements suggest that magic-angle graphene’s superconductivity is primarily governed by quantum geometry, which refers to the conceptual “shape” of quantum states that can exist in a given material.
The results, which are reported today in the journal Nature, represent the first time scientists have directly measured superfluid stiffness in a two-dimensional material. To do so, the team developed a new experimental method which can now be used to make similar measurements of other two-dimensional superconducting materials.
“There’s a whole family of 2D superconductors that is waiting to be probed, and we are really just scratching the surface,” says study co-lead author Joel Wang, a research scientist in MIT’s Research Laboratory of Electronics (RLE).
The study’s co-authors from MIT’s main campus and MIT Lincoln Laboratory include co-lead author and former RLE postdoc Miuko Tanaka as well as Thao Dinh, Daniel Rodan-Legrain, Sameia Zaman, Max Hays, Bharath Kannan, Aziza Almanakly, David Kim, Bethany Niedzielski, Kyle Serniak, Mollie Schwartz, Jeffrey Grover, Terry Orlando, Simon Gustavsson, Pablo Jarillo-Herrero, and William D. Oliver, along with Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.
Magic resonance
Since its first isolation and characterization in 2004, graphene has proven to be a wonder substance of sorts. The material is effectively a single, atom-thin sheet of graphite consisting of a precise, chicken-wire lattice of carbon atoms. This simple configuration can exhibit a host of superlative qualities in terms of graphene’s strength, durability, and ability to conduct electricity and heat.
In 2018, Jarillo-Herrero and colleagues discovered that when two graphene sheets are stacked on top of each other, at a precise “magic” angle, the twisted structure — now known as magic-angle twisted bilayer graphene, or MATBG — exhibits entirely new properties, including superconductivity, in which electrons pair up, rather than repelling each other as they do in everyday materials. These so-called Cooper pairs can form a superfluid, with the potential to superconduct, meaning they could move through a material as an effortless, friction-free current.
“But even though Cooper pairs have no resistance, you have to apply some push, in the form of an electric field, to get the current to move,” Wang explains. “Superfluid stiffness refers to how easy it is to get these particles to move, in order to drive superconductivity.”
Today, scientists can measure superfluid stiffness in superconducting materials through methods that generally involve placing a material in a microwave resonator — a device which has a characteristic resonance frequency at which an electrical signal will oscillate, at microwave frequencies, much like a vibrating violin string. If a superconducting material is placed within a microwave resonator, it can change the device’s resonance frequency, and in particular, its “kinetic inductance,” by an amount that scientists can directly relate to the material’s superfluid stiffness.
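The resonance-shift logic described above can be sketched numerically. This is an illustrative toy calculation, not the paper's actual analysis: it assumes a simple LC resonator whose sample adds a series kinetic inductance, and all numerical values below are invented for the example.

```python
# Illustrative sketch: infer kinetic inductance from a resonator's
# downward frequency shift. For an LC resonator,
#   f = 1 / (2*pi*sqrt((L_geom + L_k) * C)),
# so the ratio of bare to loaded frequency gives L_k directly,
# independent of C. All numbers are hypothetical.

def kinetic_inductance(f_bare, f_loaded, L_geom):
    """Series kinetic inductance L_k implied by a resonance shift
    from f_bare down to f_loaded, given geometric inductance L_geom."""
    return L_geom * ((f_bare / f_loaded) ** 2 - 1.0)

# Hypothetical values: a 5 GHz aluminum resonator with 1 nH of
# geometric inductance that shifts down by 10 MHz with the sample.
f_bare = 5.000e9     # Hz, bare resonance
f_loaded = 4.990e9   # Hz, resonance with sample attached
L_geom = 1.0e-9      # H, geometric inductance

L_k = kinetic_inductance(f_bare, f_loaded, L_geom)
print(f"inferred kinetic inductance: {L_k * 1e12:.1f} pH")

# Superfluid stiffness scales inversely with the sheet kinetic
# inductance: a stiffer superfluid gives a smaller L_k and hence a
# smaller frequency shift.
```

A larger frequency shift therefore signals a larger kinetic inductance and a softer (less stiff) superfluid, which is how the shift can be converted into a stiffness estimate.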
However, to date, such approaches have only been compatible with large, thick material samples. The MIT team realized that to measure superfluid stiffness in atomically thin materials like MATBG would require a new approach.
“Compared to MATBG, the typical superconductor that is probed using resonators is 10 to 100 times thicker and larger in area,” Wang says. “We weren’t sure if such a tiny material would generate any measurable inductance at all.”
A captured signal
The challenge to measuring superfluid stiffness in MATBG has to do with attaching the supremely delicate material to the surface of the microwave resonator as seamlessly as possible.
“To make this work, you want to make an ideally lossless — i.e., superconducting — contact between the two materials,” Wang explains. “Otherwise, the microwave signal you send in will be degraded or even just bounce back instead of going into your target material.”
Will Oliver’s group at MIT has been developing techniques to precisely connect extremely delicate, two-dimensional materials, with the goal of building new types of quantum bits for future quantum-computing devices. For their new study, Tanaka, Wang, and their colleagues applied these techniques to seamlessly connect a tiny sample of MATBG to the end of an aluminum microwave resonator. To do so, the group first used conventional methods to assemble MATBG, then sandwiched the structure between two insulating layers of hexagonal boron nitride, to help maintain MATBG’s atomic structure and properties.
“Aluminum is a material we use regularly in our superconducting quantum computing research, for example, aluminum resonators to read out aluminum quantum bits (qubits),” Oliver explains. “So, we thought, why not make most of the resonator from aluminum, which is relatively straightforward for us, and then add a little MATBG to the end of it? It turned out to be a good idea.”
“To contact the MATBG, we etch it very sharply, like cutting through layers of a cake with a very sharp knife,” Wang says. “We expose a side of the freshly-cut MATBG, onto which we then deposit aluminum — the same material as the resonator — to make a good contact and form an aluminum lead.”
The researchers then connected the aluminum leads of the MATBG structure to the larger aluminum microwave resonator. They sent a microwave signal through the resonator and measured the resulting shift in its resonance frequency, from which they could infer the kinetic inductance of the MATBG.
When they converted the measured inductance to a value of superfluid stiffness, however, the researchers found that it was much larger than what conventional theories of superconductivity would have predicted. They had a hunch that the surplus had to do with MATBG’s quantum geometry — the way the quantum states of electrons correlate to one another.
“We saw a tenfold increase in superfluid stiffness compared to conventional expectations, with a temperature dependence consistent with what the theory of quantum geometry predicts,” Tanaka says. “This was a ‘smoking gun’ that pointed to the role of quantum geometry in governing superfluid stiffness in this two-dimensional material.”
“This work represents a great example of how one can use sophisticated quantum technology currently used in quantum circuits to investigate condensed matter systems consisting of strongly interacting particles,” adds Jarillo-Herrero.
This research was funded, in part, by the U.S. Army Research Office, the National Science Foundation, the U.S. Air Force Office of Scientific Research, and the U.S. Under Secretary of Defense for Research and Engineering. The work was carried out, in part, through the use of MIT.nano’s facilities.
A complementary study on magic-angle twisted trilayer graphene (MATTG), conducted by a collaboration between Philip Kim’s group at Harvard University and Jarillo-Herrero’s group at MIT, appears in the same issue of Nature.
How telecommunications cables can image the ground beneath us

By making use of MIT’s existing fiber optic infrastructure, PhD student Hilary Chang imaged the ground underneath campus, a method that can be used to characterize seismic hazards.

When people think about fiber optic cables, it’s usually about how they’re used for telecommunications and accessing the internet. But fiber optic cables — strands of glass or plastic that allow for the transmission of light — can be used for another purpose: imaging the ground beneath our feet.
MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) PhD student Hilary Chang recently used the MIT fiber optic cable network to successfully image the ground underneath campus using a method known as distributed acoustic sensing (DAS). By using existing infrastructure, DAS can be an efficient and effective way to understand ground composition, a critical component for assessing the seismic hazard of areas, or how at risk they are from earthquake damage.
“We were able to extract very nice, coherent waves from the surroundings, and then use that to get some information about the subsurface,” says Chang, the lead author of a recent paper describing her work that was co-authored with EAPS Principal Research Scientist Nori Nakata.
Dark fibers
The MIT campus fiber optic system, installed from 2000 to 2003, services internal data transport between labs and buildings as well as external transport, such as the campus internet (MITNet). There are three major cable hubs on campus from which lines branch out into buildings and underground, much like a spiderweb.
The network allocates a certain number of strands per building, some of which are “dark fibers,” or cables that are not actively transporting information. Each campus fiber hub has redundant backbone cables between them so that, in the event of a failure, network transmission can switch to the dark fibers without loss of network services.
DAS can use existing telecommunication cables and ambient wavefields to extract information about the materials they pass through, making it a valuable tool for places like cities or the ocean floor, where conventional sensors can’t be deployed. Chang, who studies earthquake waveforms and the information we can extract from them, decided to try it out on the MIT campus.
In order to get access to the fiber optic network for the experiment, Chang reached out to John Morgante, a manager of infrastructure project engineering with MIT Information Systems and Technology (IS&T). Morgante has been at MIT since 1998 and was involved with the original project installing the fiber optic network, and was thus able to provide personal insight into selecting a route.
“It was interesting to listen to what they were trying to accomplish with the testing,” says Morgante. While IS&T has worked with students before on various projects involving the school’s network, he said that “in the physical plant area, this is the first that I can remember that we’ve actually collaborated on an experiment together.”
They decided on a path starting from a hub in Building 24, because it was the longest running path that was entirely underground; above-ground wires that cut through buildings wouldn’t work because they weren’t grounded, and thus were useless for the experiment. The path ran from east to west, beginning in Building 24, traveling under a section of Massachusetts Ave., along parts of Amherst and Vassar streets, and ending at Building W92.
“[Morgante] was really helpful,” says Chang, describing it as “a very good experience working with the campus IT team.”
Locating the cables
After renting an interrogator, a device that sends laser pulses to sense ambient vibrations along the fiber optic cables, Chang and a group of volunteers were given special access to connect it to the hub in Building 24. They let it run for five days.
To validate the route and make sure that the interrogator was working, Chang conducted a tap test, in which she hit the ground with a hammer several times to record the precise GPS coordinates of the cable. Conveniently, the underground route is marked by maintenance hole covers that serve as good locations to do the test. And, because she needed the environment to be as quiet as possible to collect clean data, she had to do it around 2 a.m.
“I was hitting it next to a dorm and someone yelled ‘shut up,’ probably because the hammer blows woke them up,” Chang recalls. “I was sorry.” Thankfully, she only had to tap at a few spots and could interpolate the locations for the rest.
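The interpolation step mentioned above can be sketched simply: tap tests at a few maintenance holes tie known distances along the fiber (the interrogator's channel axis) to GPS coordinates, and positions in between are filled in by linear interpolation. This is a hypothetical illustration with invented coordinates, not the study's actual calibration data.

```python
# Hypothetical sketch of mapping fiber-channel distance to GPS
# position. Tap tests pin down a few (distance, lat, lon) anchors;
# np.interp fills in the channels between them. All values invented.
import numpy as np

tap_distances = np.array([120.0, 480.0, 910.0])   # meters along fiber
tap_lat = np.array([42.3601, 42.3598, 42.3592])   # made-up latitudes
tap_lon = np.array([-71.0942, -71.0975, -71.1010])  # made-up longitudes

def channel_to_gps(distances):
    """Linearly interpolate lat/lon for arbitrary distances along
    the fiber, using the tap-test anchor points."""
    lat = np.interp(distances, tap_distances, tap_lat)
    lon = np.interp(distances, tap_distances, tap_lon)
    return lat, lon

lat, lon = channel_to_gps(np.array([300.0]))
print(lat[0], lon[0])
```

With only a handful of anchors, this assumes the cable runs roughly straight between taps; real routes with corners would need more tap points near the bends.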
During the day, Chang and her fellow students — Denzel Segbefia, Congcong Yuan, and Jared Bryan — performed an additional test with geophones, another instrument that detects seismic waves, out on Brigg’s Field where the cable passed under it to compare the signals. It was an enjoyable experience for Chang; when the data were collected in 2022, the campus was coming out of pandemic measures, with remote classes sometimes still in place. “It was very nice to have everyone on the field and do something with their hands,” she says.
The noise around us
Once Chang collected the data, she was able to see plenty of environmental activity in the waveforms, including the passing of cars, bikes, and even when the train that runs along the northern edge of campus made its nightly passes.
After identifying the noise sources, Chang and Nakata extracted coherent surface waves from the ambient noise and used the wave speeds associated with different frequencies to understand the properties of the ground the cables passed through. Stiffer materials allow faster wave velocities, while softer materials slow them.
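The depth sensitivity behind this frequency-to-depth reasoning can be sketched with a common rule of thumb: a surface wave samples roughly the top third of its wavelength. The numbers and the one-third rule below are illustrative assumptions, not values or methods from the study.

```python
# Illustrative sketch (made-up numbers): each frequency's surface-wave
# phase velocity samples a depth of roughly one-third of a wavelength,
# so sweeping frequency profiles stiffness with depth. Low velocities
# at high frequencies indicate soft shallow material.

def sampling_depth(phase_velocity_mps, freq_hz):
    """Rule-of-thumb depth sampled by a surface wave: about one-third
    of its wavelength (an assumed heuristic for this sketch)."""
    return (phase_velocity_mps / freq_hz) / 3.0

freqs_hz = [2.0, 5.0, 10.0, 20.0]              # hypothetical frequencies
phase_vel_mps = [900.0, 500.0, 250.0, 180.0]   # hypothetical velocities

for f, c in zip(freqs_hz, phase_vel_mps):
    d = sampling_depth(c, f)
    print(f"{f:5.1f} Hz: c = {c:5.0f} m/s, samples ~{d:5.1f} m depth")

# Velocities that increase toward low frequency (greater depth) are
# consistent with soft sediments overlying harder bedrock.
```

In this toy profile, the slow high-frequency waves see the soft near-surface layer while the fast low-frequency waves reach stiffer material below, mirroring the soft-over-hard structure the study found.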
“We found out that the MIT campus is built on soft materials overlaying a relatively hard bedrock,” Chang says, which confirms previously known, albeit lower-resolution, information about the geology of the area that had been collected using seismometers.
Information like this is critical for regions that are susceptible to destructive earthquakes and other seismic hazards, including the Commonwealth of Massachusetts, which has experienced earthquakes as recently as this past week. Areas of Boston and Cambridge characterized by artificial fill during rapid urbanization are especially at risk, because their subsurface structure is more likely to amplify seismic frequencies and damage buildings. This non-intrusive method for site characterization can help ensure that buildings meet code for the correct seismic hazard level.
“Destructive seismic events do happen, and we need to be prepared,” she says.
Eleven MIT faculty receive Presidential Early Career Awards

Faculty members and additional MIT alumni are among 400 scientists and engineers recognized for outstanding leadership potential.

Eleven MIT faculty, including nine from the School of Engineering and two from the School of Science, were awarded the Presidential Early Career Award for Scientists and Engineers (PECASE). Fifteen additional MIT alumni were also honored.
Established in 1996 by President Bill Clinton, the PECASE is awarded to scientists and engineers “who show exceptional potential for leadership early in their research careers.” The latest recipients were announced by the White House on Jan. 14 under President Joe Biden. Fourteen government agencies recommended researchers for the award.
The MIT faculty and alumni honorees are among 400 scientists and engineers recognized for innovation and scientific contributions. Those from the School of Engineering and School of Science who were honored are:
Additional MIT alumni who were honored include: Ambika Bajpayee MNG ’07, PhD ’15; Katherine Bouman SM ’13, PhD ’17; Walter Cheng-Wan Lee ’95, MNG ’95, PhD ’05; Ismaila Dabo PhD ’08; Ying Diao SM ’10, PhD ’12; Eno Ebong ’99; Soheil Feizi-Khankandi SM ’10, PhD ’16; Mark Finlayson SM ’01, PhD ’12; Chelsea B. Finn ’14; Grace Xiang Gu SM ’14, PhD ’18; David Michael Isaacson PhD ’06, AF ’16; Lewei Lin ’05; Michelle Sander PhD ’12; Kevin Solomon SM ’08, PhD ’12; and Zhiting Tian PhD ’14.
Introducing the MIT Generative AI Impact Consortium

The consortium will bring researchers and industry together to focus on impact.

From crafting complex code to revolutionizing the hiring process, generative artificial intelligence is reshaping industries faster than ever before — pushing the boundaries of creativity, productivity, and collaboration across countless domains.
Enter the MIT Generative AI Impact Consortium, a collaboration between industry leaders and MIT’s top minds. As MIT President Sally Kornbluth highlighted last year, the Institute is poised to address the societal impacts of generative AI through bold collaborations. Building on this momentum and established through MIT’s Generative AI Week and impact papers, the consortium aims to harness AI’s transformative power for societal good, tackling challenges before they shape the future in unintended ways.
“Generative AI and large language models [LLMs] are reshaping everything, with applications stretching across diverse sectors,” says Anantha Chandrakasan, dean of the School of Engineering and MIT’s chief innovation and strategy officer, who leads the consortium. “As we push forward with newer and more efficient models, MIT is committed to guiding their development and impact on the world.”
Chandrakasan adds that the consortium’s vision is rooted in MIT’s core mission. “I am thrilled and honored to help advance one of President Kornbluth’s strategic priorities around artificial intelligence,” he says. “This initiative is uniquely MIT — it thrives on breaking down barriers, bringing together disciplines, and partnering with industry to create real, lasting impact. The collaborations ahead are something we’re truly excited about.”
Developing the blueprint for generative AI’s next leap
The consortium is guided by three pivotal questions, framed by Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and co-chair of the GenAI Dean’s oversight group, that go beyond AI’s technical capabilities and into its potential to transform industries and lives:
Generative AI continues to advance at lightning speed, but its future depends on building a solid foundation. “Everybody recognizes that large language models will transform entire industries, but there's no strong foundation yet around design principles,” says Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-faculty director of the consortium.
“Now is a perfect time to look at the fundamentals — the building blocks that will make generative AI more effective and safer to use,” adds Kraska.
"What excites me is that this consortium isn’t just academic research for the distant future — we’re working on problems where our timelines align with industry needs, driving meaningful progress in real time," says Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management, and co-faculty director of the consortium.
A “perfect match” of academia and industry
At the heart of the Generative AI Impact Consortium are six founding members: Analog Devices, The Coca-Cola Co., OpenAI, Tata Group, SK Telecom, and TWG Global. Together, they will work hand-in-hand with MIT researchers to accelerate breakthroughs and address industry-shaping problems.
The consortium taps into MIT’s expertise, working across schools and disciplines — led by MIT’s Office of Innovation and Strategy, in collaboration with the MIT Schwarzman College of Computing and all five of MIT’s schools.
“This initiative is the ideal bridge between academia and industry,” says Chandrakasan. “With companies spanning diverse sectors, the consortium brings together real-world challenges, data, and expertise. MIT researchers will dive into these problems to develop cutting-edge models and applications for these different domains.”
Industry partners: Collaborating on AI’s evolution
At the core of the consortium’s mission is collaboration — bringing MIT researchers and industry partners together to unlock generative AI’s potential while ensuring its benefits are felt across society.
Among the founding members is OpenAI, the creator of the generative AI chatbot ChatGPT.
“This type of collaboration between academics, practitioners, and labs is key to ensuring that generative AI evolves in ways that meaningfully benefit society,” says Anna Makanju, vice president of global impact at OpenAI, adding that OpenAI “is eager to work alongside MIT’s Generative AI Consortium to bridge the gap between cutting-edge AI research and the real-world expertise of diverse industries.”
The Coca-Cola Co. recognizes an opportunity to leverage AI innovation on a global scale. “We see a tremendous opportunity to innovate at the speed of AI and, leveraging The Coca-Cola Company's global footprint, make these cutting-edge solutions accessible to everyone,” says Pratik Thakar, global vice president and head of generative AI. “Both MIT and The Coca-Cola Company are deeply committed to innovation, while also placing equal emphasis on the legally and ethically responsible development and use of technology.”
For TWG Global, the consortium offers the ideal environment to share knowledge and drive advancements. “The strength of the consortium is its unique combination of industry leaders and academia, which fosters the exchange of valuable lessons, technological advancements, and access to pioneering research,” says Drew Cukor, head of data and artificial intelligence transformation. Cukor adds that TWG Global “is keen to share its insights and actively engage with leading executives and academics to gain a broader perspective of how others are configuring and adopting AI, which is why we believe in the work of the consortium.”
The Tata Group views the collaboration as a platform to address some of AI’s most pressing challenges. “The consortium enables Tata to collaborate, share knowledge, and collectively shape the future of generative AI, particularly in addressing urgent challenges such as ethical considerations, data privacy, and algorithmic biases,” says Aparna Ganesh, vice president of Tata Sons Ltd.
Similarly, SK Telecom sees its involvement as a launchpad for growth and innovation. “Joining the consortium presents a significant opportunity for SK Telecom to enhance its AI competitiveness in core business areas, including AI agents, AI semiconductors, data centers (AIDC), and physical AI,” says Suk-geun (SG) Chung, SK Telecom executive vice president and chief AI global officer. “By collaborating with MIT and leveraging the SK AI R&D Center as a technology control tower, we aim to forecast next-generation generative AI technology trends, propose innovative business models, and drive commercialization through academic-industrial collaboration.”
Alan Lee, chief technology officer of Analog Devices (ADI), highlights how the consortium bridges key knowledge gaps for both his company and the industry at large. “ADI can’t hire a world-leading expert in every single corner case, but the consortium will enable us to access top MIT researchers and get them involved in addressing problems we care about, as we also work together with others in the industry towards common goals,” he says.
The consortium will host interactive workshops and discussions to identify and prioritize challenges. “It’s going to be a two-way conversation, with the faculty coming together with industry partners, but also industry partners talking with each other,” says Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research and statistics, who serves alongside Huttenlocher as co-chair of the GenAI Dean’s oversight group.
Preparing for the AI-enabled workforce of the future
With AI poised to disrupt industries and create new opportunities, one of the consortium’s core goals is to guide that change in a way that benefits both businesses and society.
“When the first commercial digital computers were introduced [the UNIVAC was delivered to the U.S. Census Bureau in 1951], people were worried about losing their jobs,” says Kraska. “And yes, jobs like large-scale, manual data entry clerks and human ‘computers,’ people tasked with doing manual calculations, largely disappeared over time. But the people impacted by those first computers were trained to do other jobs.”
The consortium aims to play a key role in preparing the workforce of tomorrow by educating global business leaders and employees on generative AI’s evolving uses and applications. With the pace of innovation accelerating, leaders face a flood of information and uncertainty.
“When it comes to educating leaders about generative AI, it’s about helping them navigate the complexity of the space right now, because there’s so much hype and hundreds of papers published daily,” says Kraska. “The hard part is understanding which developments could actually have a chance of changing the field and which are just tiny improvements. There's a kind of FOMO [fear of missing out] for leaders that we can help reduce.”
Defining success: Shared goals for generative AI impact
Success within the initiative is defined by shared progress, open innovation, and mutual growth. “Consortium participants recognize, I think, that when I share my ideas with you, and you share your ideas with me, we’re both fundamentally better off,” explains Farias. “Progress on generative AI is not zero-sum, so it makes sense for this to be an open-source initiative.”
While participants may approach success from different angles, they share a common goal of advancing generative AI for broad societal benefit. “There will be many success metrics,” says Perakis. “We’ll educate students, who will be networking with companies. Companies will come together and learn from each other. Business leaders will come to MIT and have discussions that will help all of us, not just the leaders themselves.”
For Analog Devices’ Alan Lee, success is measured in tangible improvements that drive efficiency and product innovation: “For us at ADI, it’s a better, faster quality of experience for our customers, and that could mean better products. It could mean faster design cycles, faster verification cycles, and faster tuning of equipment that we already have or that we’re going to develop for the future. But beyond that, we want to help the world be a better, more efficient place.”
Ganesh highlights success through the lens of real-world application. “Success will also be defined by accelerating AI adoption within Tata companies, generating actionable knowledge that can be applied in real-world scenarios, and delivering significant advantages to our customers and stakeholders,” she says.
Generative AI is no longer confined to isolated research labs — it’s driving innovation across industries and disciplines. At MIT, the technology has become a campus-wide priority, connecting researchers, students, and industry leaders to solve complex challenges and uncover new opportunities. “It's truly an MIT initiative,” says Farias, “one that’s much larger than any individual or department on campus.”
David Darmofal SM ’91, PhD ’93 named vice chancellor for undergraduate and graduate education

Longtime AeroAstro professor brings deep experience with academic and student life.

David L. Darmofal SM ’91, PhD ’93 will serve as MIT’s next vice chancellor for undergraduate and graduate education, effective Feb. 17. Chancellor Melissa Nobles announced Darmofal’s appointment today in a letter to the MIT community.
Darmofal succeeds Ian A. Waitz, who stepped down in May to become MIT’s vice president for research, and Daniel E. Hastings, who has been serving in an interim capacity.
A creative innovator in research-based teaching and learning, Darmofal is the Jerome C. Hunsaker Professor of Aeronautics and Astronautics. Since 2017, he and his wife Claudia have served as heads of house at The Warehouse, an MIT graduate residence.
“Dave knows the ins and outs of education and student life at MIT in a way that few do,” Nobles says. “He’s a head of house, an alum, and the parent of a graduate. Dave will bring decades of first-hand experience to the role.”
“An MIT education is incredibly special, combining passionate students, staff, and faculty striving to use knowledge and discovery to drive positive change for the world,” says Darmofal. “I am grateful for this opportunity to play a part in supporting MIT’s academic mission.”
Darmofal’s leadership experience includes service from 2008 to 2011 as associate and interim department head in the Department of Aeronautics and Astronautics, overseeing undergraduate and graduate programs. He was the AeroAstro director of digital education from 2020 to 2022, including leading the department’s response to remote learning during the Covid-19 pandemic. He currently serves as director of the MIT Aerospace Computational Science and Engineering Laboratory and is a member of the Center for Computational Science and Engineering (CCSE) in the MIT Stephen A. Schwarzman College of Computing.
As an MIT faculty member and administrator, Darmofal has been involved in designing more flexible degree programs, developing open digital-learning opportunities, creating first-year advising seminars, and enhancing professional and personal development opportunities for students. He also contributed his expertise in engineering pedagogy to the development of the Schwarzman College of Computing’s Common Ground efforts, to address the need for computing education across many disciplines.
“MIT students, staff, and faculty share a common bond as problem solvers. Talk to any of us about an MIT education, and you will get an earful on not only what we need to do better, but also how we can actually do it. The Office of the Vice Chancellor can help bring our community of problem solvers together to enable improvements in our academics,” says Darmofal.
Overseeing the academic arm of the Chancellor’s Office, the vice chancellor’s portfolio is extensive. Darmofal will lead professionals across more than a dozen units, covering areas such as recruitment and admissions, financial aid, student systems, advising, professional and career development, pedagogy, experiential learning, and support for MIT’s more than 100 graduate programs. He will also work collaboratively with many of MIT’s student organizations and groups, including with the leaders of the Undergraduate Association and the Graduate Student Council, and administer the relationship with the graduate student union.
“Dave will be a critical part of my office’s efforts to strengthen and expand critical connections across all areas of student life and learning,” Nobles says. She credits the search advisory group, co-chaired by professors Laurie Boyer and Will Tisdale, with setting the right tenor for such an important role and leading a thorough, inclusive process.
Darmofal’s research is focused on computational methods for partial differential equations, especially fluid dynamics. He earned his SM and PhD degrees in aeronautics and astronautics in 1991 and 1993, respectively, from MIT, and his BS in aerospace engineering in 1989 from the University of Michigan. Prior to joining MIT in 1998, he was an assistant professor in the Department of Aerospace Engineering at Texas A&M University from 1995 to 1998. Currently, he is the chair of AeroAstro’s Undergraduate Committee and the graduate officer for the CCSE PhD program.
“I want to echo something that Dan Hastings said recently,” Darmofal says. “We have a lot to be proud of when it comes to an MIT education. It’s more accessible than it has ever been. It’s innovative, with unmatched learning opportunities here and around the world. It’s home to academic research labs that attract the most talented scholars, creators, experimenters, and engineers. And ultimately, it prepares graduates who do good.”
Every cell in your body contains the same genetic sequence, yet each cell expresses only a subset of those genes. These cell-specific gene expression patterns, which ensure that a brain cell is different from a skin cell, are partly determined by the three-dimensional structure of the genetic material, which controls the accessibility of each gene.
MIT chemists have now come up with a new way to determine those 3D genome structures, using generative artificial intelligence. Their technique can predict thousands of structures in just minutes, making it much speedier than existing experimental methods for analyzing the structures.
Using this technique, researchers could more easily study how the 3D organization of the genome affects individual cells’ gene expression patterns and functions.
“Our goal was to try to predict the three-dimensional genome structure from the underlying DNA sequence,” says Bin Zhang, an associate professor of chemistry and the senior author of the study. “Now that we can do that, which puts this technique on par with the cutting-edge experimental techniques, it can really open up a lot of interesting opportunities.”
MIT graduate students Greg Schuette and Zhuohan Lao are the lead authors of the paper, which appears today in Science Advances.
From sequence to structure
Inside the cell nucleus, DNA and proteins form a complex called chromatin, which has several levels of organization, allowing cells to cram 2 meters of DNA into a nucleus that is only one-hundredth of a millimeter in diameter. Long strands of DNA wind around proteins called histones, giving rise to a structure somewhat like beads on a string.
Chemical tags known as epigenetic modifications can be attached to DNA at specific locations, and these tags, which vary by cell type, affect the folding of the chromatin and the accessibility of nearby genes. These differences in chromatin conformation help determine which genes are expressed in different cell types, or at different times within a given cell.
Over the past 20 years, scientists have developed experimental techniques for determining chromatin structures. One widely used technique, known as Hi-C, works by linking together neighboring DNA strands in the cell’s nucleus. Researchers can then determine which segments are located near each other by shredding the DNA into many tiny pieces and sequencing it.
This method can be used on large populations of cells to calculate an average structure for a section of chromatin, or on single cells to determine structures within that specific cell. However, Hi-C and similar techniques are labor-intensive, and it can take about a week to generate data from one cell.
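The proximity data Hi-C produces can be summarized in a contact matrix: each ligated pair of DNA fragments votes for one cell of the matrix. The following is a minimal, illustrative sketch of that bookkeeping step (hypothetical function and variable names, not code from any Hi-C pipeline), assuming the genome has already been divided into numbered bins:

```python
# Toy sketch: turning Hi-C-style ligation pairs into a contact matrix.
# Each pair (i, j) records that genomic bins i and j were cross-linked,
# i.e., found near each other in the nucleus.
from collections import Counter

def contact_matrix(pairs, n_bins):
    """Build a symmetric n_bins x n_bins contact-count matrix."""
    counts = Counter()
    for i, j in pairs:
        a, b = sorted((i, j))  # (i, j) and (j, i) are the same contact
        counts[(a, b)] += 1
    matrix = [[0] * n_bins for _ in range(n_bins)]
    for (a, b), c in counts.items():
        matrix[a][b] += c
        matrix[b][a] += c  # mirror: contact of a with b is contact of b with a
    return matrix

# Hypothetical ligation events among 4 genomic bins
pairs = [(0, 1), (1, 0), (2, 3), (0, 3)]
m = contact_matrix(pairs, 4)
print(m[0][1])  # 2 -- bins 0 and 1 were found together twice
```

Averaging such matrices over many cells gives the population-level structure the article describes; a matrix from a single cell gives that cell's individual conformation.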
To overcome those limitations, Zhang and his students developed a model that takes advantage of recent advances in generative AI to create a fast, accurate way to predict chromatin structures in single cells. The AI model that they designed can quickly analyze DNA sequences and predict the chromatin structures that those sequences might produce in a cell.
“Deep learning is really good at pattern recognition,” Zhang says. “It allows us to analyze very long DNA segments, thousands of base pairs, and figure out what is the important information encoded in those DNA base pairs.”
ChromoGen, the model that the researchers created, has two components. The first component, a deep learning model taught to “read” the genome, analyzes the information encoded in the underlying DNA sequence and chromatin accessibility data, the latter of which is widely available and cell type-specific.
The second component is a generative AI model that predicts physically accurate chromatin conformations, having been trained on more than 11 million chromatin conformations. These data were generated from experiments using Dip-C (a variant of Hi-C) on 16 cells from a line of human B lymphocytes.
When integrated, the first component informs the generative model how the cell type-specific environment influences the formation of different chromatin structures, and this scheme effectively captures sequence-structure relationships. For each sequence, the researchers use their model to generate many possible structures. That’s because DNA is a very disordered molecule, so a single DNA sequence can give rise to many different possible conformations.
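The shape of this two-part pipeline can be sketched in miniature. The code below is purely illustrative (all names and the toy "model" logic are invented for this sketch, not the authors' ChromoGen code): an encoder condenses a DNA sequence and its accessibility signal into a conditioning summary, and a sampler then draws many candidate 3D conformations from a distribution shaped by that summary, reflecting that one sequence yields an ensemble of structures rather than a single answer:

```python
# Illustrative two-component sketch in the spirit of the described model.
import random

def encode(sequence, accessibility):
    """Toy stand-in for the deep-learning 'reader': summarize the inputs."""
    gc = sum(base in "GC" for base in sequence) / len(sequence)
    openness = sum(accessibility) / len(accessibility)
    return (gc, openness)

def sample_conformations(condition, n_samples, n_beads=5, seed=0):
    """Toy stand-in for the generative component: draw an ensemble of
    3D bead positions whose spread grows with chromatin 'openness'."""
    gc, openness = condition
    rng = random.Random(seed)
    spread = 0.5 + openness  # more accessible chromatin -> looser structures
    ensemble = []
    for _ in range(n_samples):
        conf = [(rng.gauss(i, spread), rng.gauss(gc, spread), rng.gauss(0, spread))
                for i in range(n_beads)]
        ensemble.append(conf)
    return ensemble

cond = encode("ACGTGGCC", [0.2, 0.8, 0.5, 0.9])
structures = sample_conformations(cond, n_samples=1000)
print(len(structures), len(structures[0]))  # 1000 conformations of 5 beads
```

The key design point mirrored here is the split of labor: the first component makes the prediction cell-type-specific (via accessibility data), while the second handles the fact that the target is a distribution of structures, not one structure.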
“A major complicating factor of predicting the structure of the genome is that there isn’t a single solution that we’re aiming for. There’s a distribution of structures, no matter what portion of the genome you’re looking at. Predicting that very complicated, high-dimensional statistical distribution is something that is incredibly challenging to do,” Schuette says.
Rapid analysis
Once trained, the model can generate predictions on a much faster timescale than Hi-C or other experimental techniques.
“Whereas you might spend six months running experiments to get a few dozen structures in a given cell type, you can generate a thousand structures in a particular region with our model in 20 minutes on just one GPU,” Schuette says.
After training their model, the researchers used it to generate structure predictions for more than 2,000 DNA sequences, then compared them to the experimentally determined structures for those sequences. They found that the structures generated by the model were the same or very similar to those seen in the experimental data.
“We typically look at hundreds or thousands of conformations for each sequence, and that gives you a reasonable representation of the diversity of the structures that a particular region can have,” Zhang says. “If you repeat your experiment multiple times, in different cells, you will very likely end up with a very different conformation. That’s what our model is trying to predict.”
The researchers also found that the model could make accurate predictions for data from cell types other than the one it was trained on. This suggests that the model could be useful for analyzing how chromatin structures differ between cell types, and how those differences affect their function. The model could also be used to explore different chromatin states that can exist within a single cell, and how those changes affect gene expression.
“ChromoGen provides a new framework for AI-driven discovery of genome folding principles and demonstrates that generative AI can bridge genomic and epigenomic features with 3D genome structure, pointing to future work on studying the variation of genome structure and function across a broad range of biological contexts,” says Jian Ma, a professor of computational biology at Carnegie Mellon University, who was not involved in the research.
Another possible application would be to explore how mutations in a particular DNA sequence change the chromatin conformation, which could shed light on how such mutations may cause disease.
“There are a lot of interesting questions that I think we can address with this type of model,” Zhang says.
The researchers have made all of their data and the model available to others who wish to use it.
The research was funded by the National Institutes of Health.
From bench to bedside, and beyond

In the United States and abroad, Matthew Dolan ’81 has served as a leader in immunology and virology.

In medical school, Matthew Dolan ’81 briefly considered specializing in orthopedic surgery because of the materials science nature of the work — but he soon realized that he didn’t have the innate skills required for that type of work.
“I’ll be honest with you — I can’t parallel park,” he jokes. “You can consider a lot of things, but if you find the things that you’re good at and that excite you, you can hopefully move forward with those.”
Dolan certainly has, tackling problems from bench to bedside and beyond. Both in the United States and abroad through the U.S. Air Force, Dolan has emerged as a leader in immunology and virology, and has served as director of the Defense Institute for Medical Operations. He’s worked on everything from foodborne illnesses and Ebola to biological weapons and Covid-19, and has even been a guest speaker on NPR’s “Science Friday.”
“This is fun and interesting, and I believe that, and I work hard to convey that — and it’s contagious,” he says. “You can affect people with that excitement.”
Pieces of the puzzle
Dolan fondly recalls his years at MIT, and is still in touch with many of the “brilliant” and “interesting” friends he made while in Cambridge.
He notes that the challenges that were the most rewarding in his career were also the ones that MIT had uniquely prepared him for. Dolan, a Course 7 major, naturally took many classes outside of biology as part of his undergraduate studies: organic chemistry was foundational for understanding toxicology while studying chemical weapons, while outbreaks of pathogens like Legionella, which causes pneumonia and can spread through water systems such as ice machines or air conditioners, are problems solved at the interface between public health and ecology.
“I learned that learning can be a high-intensity experience,” Dolan recalls. “You can be aggressive in your learning; you can learn and excel in a wide variety of things and gather up all the knowledge and knowledgeable people to work together towards solutions.”
Dolan, for example, worked in the Amazon Basin in Peru on a public health crisis of a sharp rise in childhood mortality due to malaria. The cause was a few degrees removed from the immediate problem: human agriculture had affected the Amazon’s tributaries, leading to still and stagnant water where before there had been rushing streams and rivers. This change in the environment allowed a certain mosquito species of “avid human biters” to thrive.
“It can be helpful and important for some people to have a really comprehensive and contextual view of scientific problems and biological problems,” he says. “It’s very rewarding to put the pieces in a puzzle like that together.”
Choosing to serve
Dolan says a key to finding meaning in his work, especially during difficult times, is a sentiment from Alsatian polymath and Nobel Peace Prize winner Albert Schweitzer: “The only ones among you who will be really happy are those who will have sought and found how to serve.”
One of Dolan’s early formative experiences was working in the heart of the HIV/AIDS epidemic, at a time when there was no effective treatment. No matter how hard he worked, the patients would still die.
“Failure is not an option — unless you have to fail. You can’t let the failures destroy you,” he says. “There are a lot of other battles out there, and it’s self-indulgent to ignore them and focus on your woe.”
Lasting impacts
Dolan couldn’t pick a favorite country, but notes that he’s always impressed seeing how people value the chance to excel with science and medicine when offered resources and respect. Ultimately, everyone he’s worked with, no matter their differences, was committed to solving problems and improving lives.
Dolan worked in Russia after the Berlin Wall fell, on HIV/AIDS in Moscow and tuberculosis in the Russian Far East. Although relations with Russia are currently tense, to say the least, Dolan remains optimistic for a brighter future.
“People that were staunch adversaries can go on to do well together,” he says. “Sometimes, peace leads to partnership. Remembering that it was once possible gives me great hope.”
Dolan understands that the most lasting impact he has had is, likely, teaching: Time marches on, and discoveries can be lost to history, but teaching and training people continues and propagates. In addition to guiding the next generation of health-care specialists, Dolan also developed programs in laboratory biosafety and biosecurity with the U.S. departments of State and Defense, and taught those programs around the world.
“Working in prevention gives you the chance to take care of process problems before they become people problems — patient care problems,” he says. “I have been so impressed with the courageous and giving people that have worked with me.”
Rare and mysterious cosmic explosion: Gamma-ray burst or jetted tidal disruption event?

Researchers characterize the peculiar Einstein Probe transient EP240408a.

Highly energetic explosions in the sky are commonly attributed to gamma-ray bursts. We now understand that these bursts originate from either the merger of two neutron stars or the collapse of a massive star. In these scenarios, a newborn black hole is formed, emitting a jet that travels at nearly the speed of light. When these jets are directed toward Earth, we can observe them from vast distances — sometimes billions of light-years away — due to a relativistic effect known as Doppler boosting. Over the past decade, thousands of such gamma-ray bursts have been detected.
Since its launch in 2024, the Einstein Probe — an X-ray space telescope developed by the Chinese Academy of Sciences (CAS) in partnership with the European Space Agency (ESA) and the Max Planck Institute for Extraterrestrial Physics — has been scanning the skies for energetic explosions, and in April the telescope observed an unusual event designated EP240408a. Now an international team of astronomers, including Dheeraj Pasham of MIT, Igor Andreoni of the University of North Carolina at Chapel Hill, and Brendan O’Connor of Carnegie Mellon University, has investigated this explosion using a slew of ground-based and space-based telescopes, including NuSTAR, Swift, Gemini, Keck, DECam, VLA, ATCA, and NICER, which was developed in collaboration with MIT.
An open-access report of their findings, published Jan. 27 in The Astrophysical Journal Letters, indicates that the characteristics of this explosion do not match those of typical gamma-ray bursts. Instead, it may represent a rare new class of powerful cosmic explosion — a jetted tidal disruption event, which occurs when a supermassive black hole tears apart a star.
“NICER’s ability to steer to pretty much any part of the sky and monitor for weeks has been instrumental in our understanding of these unusual cosmic explosions,” says Pasham, a research scientist at the MIT Kavli Institute for Astrophysics and Space Research.
While a jetted tidal disruption event is plausible, the researchers say the lack of radio emissions from this jet is puzzling. O’Connor surmises, “EP240408a ticks some of the boxes for several different kinds of phenomena, but it doesn’t tick all the boxes for anything. In particular, the short duration and high luminosity are hard to explain in other scenarios. The alternative is that we are seeing something entirely new!”
According to Pasham, the Einstein Probe is just beginning to scratch the surface of what seems possible. “I’m excited to chase the next weird explosion from the Einstein Probe,” he says, echoing astronomers worldwide who look forward to the prospect of discovering more unusual explosions from the farthest reaches of the cosmos.
Professor Emeritus Gerald Schneider, discoverer of the “two visual systems,” dies at 84

An MIT affiliate for some 60 years, Schneider was an authority on the relationships between brain structure and behavior.

Gerald E. Schneider, a professor emeritus of psychology and member of the MIT community for over 60 years, passed away on Dec. 11, 2024. He was 84.
Schneider was an authority on the relationships between brain structure and behavior, concentrating on neuronal development, regeneration or altered growth after brain injury, and the behavioral consequences of altered connections in the brain.
Using the Syrian golden hamster as his test subject of choice, Schneider made numerous contributions to the advancement of neuroscience. He laid out the concept of two visual systems — one for locating objects and one for the identification of objects — in a 1969 issue of Science, a milestone in the study of brain-behavior relationships. In 1973, he described a “pruning effect” in the optic tract axons of adult hamsters who had brain lesions early in life. In 2006, his lab reported in Biological Sciences a new nanobiomedical technology for tissue repair and restoration. The paper showed how a designed self-assembling peptide nanofiber scaffold could create a permissive environment for axons, not only to regenerate through the site of an acute injury in the optic tract of hamsters, but also to knit the brain tissue together.
His work shaped the research and thinking of numerous colleagues and trainees. Mriganka Sur, the Newton Professor of Neuroscience and former head of the Department of Brain and Cognitive Sciences (BCS), recalls how Schneider’s paper, “Is it really better to have your brain lesion early? A revision of the ‘Kennard Principle,’” published in 1979 in the journal Neuropsychologia, influenced his work on rewiring retinal projections to the auditory thalamus, which was used to derive principles of functional plasticity in the cortex.
“Jerry was an extremely innovative thinker. His hypothesis of two visual systems — for detailed spatial processing and for movement processing — based on his analysis of visual pathways in hamsters presaged and inspired later work on form and motion pathways in the primate brain,” Sur says. “His description of conservation of axonal arbor during development laid the foundation for later ideas about homeostatic mechanisms that co-regulate neuronal plasticity.”
Institute Professor Ann Graybiel was a colleague of Schneider’s for over five decades. She recalls early in her career being asked by Schneider to help make a map of the superior colliculus.
“I took it as an honor to be asked, and I worked very hard on this, with great excitement. It was my first such mapping, to be followed by much more in the future,” Graybiel recalls. “Jerry was fascinated by animal behavior, and from early on he made many discoveries using hamsters as his main animals of choice. He found that they could play. He found that they could operate in ways that seemed very sophisticated. And, yes, he mapped out pathways in their brains.”
Schneider was raised in Wheaton, Illinois, and graduated from Wheaton College in 1962 with a degree in physics. He was recruited to MIT by Hans-Lukas Teuber, one of the founders of the Department of Psychology, which eventually became the Department of Brain and Cognitive Sciences. Walle Nauta, another founder of the department, taught Schneider neuroanatomy. The pair were deeply influential in shaping his interests in neuroscience and his research.
“He admired them both very much and was very attached to them,” his daughter, Nimisha Schneider, says. “He was an interdisciplinary scholar and he liked that aspect of neuroscience, and he was fascinated by the mysteries of the human brain.”
Shortly after completing his PhD in psychology in 1966, he was hired as an assistant professor in 1967. He was named an associate professor in 1970, received tenure in 1975, and was appointed a full professor in 1977.
After his retirement in 2017, Schneider remained involved with the Department of BCS. Professor Pawan Sinha brought Schneider to campus for what would be his last on-campus engagement, as part of the “SilverMinds Series,” an initiative in the Sinha Lab to engage with scientists now in their “silver years.”
Schneider’s research made an indelible impact on Sinha, beginning when Sinha, then a graduate student, was inspired by Schneider’s work linking brain structure and function. Schneider’s work on nerve regeneration, which merged fundamental science and real-world impact, served as a “North Star” that guided Sinha’s own work as he established his lab as a junior faculty member.
“Even through the sadness of his loss, I am grateful for the inspiring example he has left for us of a life that so seamlessly combined brilliance, kindness, modesty, and tenacity,” Sinha says. “He will be missed.”
Schneider’s life centered around his research and teaching, but he also had many other skills and hobbies. Early in his life, he enjoyed painting, and as he grew older he was drawn to poetry. He was also skilled in carpentry and making furniture. He built the original hamster cages for his lab himself, along with numerous pieces of home furniture and shelving. He enjoyed nature anywhere it could be found, from the bees in his backyard to hiking and visiting state and national parks.
He was a Type 1 diabetic, and at the time of his death, he was nearing the completion of a book on the effects of hypoglycemia on the brain, which his family hopes to have published in the future. He was also the author of “Brain Structure and Its Origins,” published in 2014 by MIT Press.
He is survived by his wife, Aiping; his children, Cybele, Aniket, and Nimisha; and step-daughter Anna. He was predeceased by a daughter, Brenna. He is also survived by eight grandchildren and 10 great-grandchildren. A memorial in his honor was held on Jan. 11 at Saint James Episcopal Church in Cambridge.
A new vaccine approach could help combat future coronavirus pandemics

The nanoparticle-based vaccine shows promise against many variants of SARS-CoV-2, as well as related sarbecoviruses that could jump to humans.

A new experimental vaccine developed by researchers at MIT and Caltech could offer protection against emerging variants of SARS-CoV-2, as well as related coronaviruses, known as sarbecoviruses, that could spill over from animals to humans.
In addition to SARS-CoV-2, the virus that causes COVID-19, sarbecoviruses — a subgenus of coronaviruses — include the virus that led to the outbreak of the original SARS in the early 2000s. Sarbecoviruses that currently circulate in bats and other mammals may also hold the potential to spread to humans in the future.
By attaching up to eight different versions of sarbecovirus receptor-binding proteins (RBDs) to nanoparticles, the researchers created a vaccine that generates antibodies that recognize regions of RBDs that tend to remain unchanged across all strains of the viruses. That makes it much more difficult for viruses to evolve to escape vaccine-induced antibodies.
“This work is an example of how bringing together computation and immunological experiments can be fruitful,” says Arup K. Chakraborty, the John M. Deutch Institute Professor at MIT and a member of MIT’s Institute for Medical Engineering and Science and the Ragon Institute of MIT, MGH and Harvard University.
Chakraborty and Pamela Bjorkman, a professor of biology and biological engineering at Caltech, are the senior authors of the study, which appears today in Cell. The paper’s lead authors are Eric Wang PhD ’24, Caltech postdoc Alexander Cohen, and Caltech graduate student Luis Caldera.
Mosaic nanoparticles
The new study builds on a project begun in Bjorkman’s lab, in which she and Cohen created a “mosaic” 60-mer nanoparticle that presents eight different sarbecovirus RBD proteins. The RBD is the part of the viral spike protein that helps the virus get into host cells. It is also the region of the coronavirus spike protein that is usually targeted by antibodies against sarbecoviruses.
RBDs contain some regions that are variable and can easily mutate to escape antibodies. Most of the antibodies generated by mRNA COVID-19 vaccines target those variable regions because they are more easily accessible. That is one reason why mRNA vaccines need to be updated to keep up with the emergence of new strains.
If researchers could create a vaccine that stimulates production of antibodies that target RBD regions that can’t easily change and are shared across viral strains, it could offer broader protection against a variety of sarbecoviruses.
Such a vaccine would have to stimulate B cells that have receptors (which then become antibodies) that target those shared, or “conserved,” regions. When B cells circulating in the body encounter a vaccine or other antigen, their B cell receptors, each of which has two “arms,” are more effectively activated if two copies of the antigen are available for binding to each arm. The conserved regions tend to be less accessible to B cell receptors, so if a nanoparticle vaccine presents just one type of RBD, B cells with receptors that bind to the more accessible variable regions are most likely to be activated.
To overcome this, the Caltech researchers designed a nanoparticle vaccine that includes 60 copies of RBDs from eight different related sarbecoviruses, which have different variable regions but similar conserved regions. Because eight different RBDs are displayed on each nanoparticle, it’s unlikely that two identical RBDs will end up next to each other. Therefore, when a B cell receptor encounters the nanoparticle immunogen, the B cell is more likely to become activated if its receptor can recognize the conserved regions of the RBD.
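The adjacency argument above can be checked with a back-of-the-envelope simulation. This is a simplified sketch, not the study's model: it assumes the eight RBD types are assigned independently and uniformly at random to 60 sites arranged in a ring, whereas the real nanoparticle geometry is more complex. Under that assumption, any given pair of neighboring sites carries the same RBD only about 1/8 of the time, so a bivalent B cell receptor spanning two adjacent RBDs usually sees two different variable regions but two matching conserved regions.

```python
import random

def fraction_adjacent_identical(n_sites=60, n_types=8, trials=5_000):
    """Estimate the fraction of neighboring site pairs carrying the same
    RBD type, assuming types are assigned independently and uniformly at
    random around a simplified ring of n_sites positions."""
    total_pairs = 0
    identical_pairs = 0
    for _ in range(trials):
        sites = [random.randrange(n_types) for _ in range(n_sites)]
        for i in range(n_sites):
            total_pairs += 1
            if sites[i] == sites[(i + 1) % n_sites]:
                identical_pairs += 1
    return identical_pairs / total_pairs

# With 8 types, a given neighboring pair matches with probability 1/8,
# so the estimate should land close to 0.125.
print(fraction_adjacent_identical())
```

In other words, roughly seven times out of eight a receptor arm pair bridging two neighboring RBDs can only bind via the regions the RBDs share, which is exactly the selection pressure the mosaic design exploits.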
“The concept behind the vaccine is that by co-displaying all these different RBDs on the nanoparticle, you are selecting for B cells that recognize the conserved regions that are shared between them,” Cohen says. “As a result, you’re selecting for B cells that are more cross-reactive. Therefore, the antibody response would be more cross-reactive and you could potentially get broader protection.”
In studies conducted in animals, the researchers showed that this vaccine, known as mosaic-8, produced strong antibody responses against diverse strains of SARS-CoV-2 and other sarbecoviruses and protected from challenges by both SARS-CoV-2 and SARS-CoV (original SARS).
Broadly neutralizing antibodies
After these studies were published in 2021 and 2022, the Caltech researchers teamed up with Chakraborty’s lab at MIT to pursue computational strategies that could allow them to identify RBD combinations that would generate even better antibody responses against a wider variety of sarbecoviruses.
Led by Wang, the MIT researchers pursued two different strategies — first, a large-scale computational screen of many possible mutations to the RBD of SARS-CoV-2, and second, an analysis of naturally occurring RBD proteins from zoonotic sarbecoviruses.
For the first approach, the researchers began with the original strain of SARS-CoV-2 and generated sequences of about 800,000 RBD candidates by making substitutions in locations that are known to affect antibody binding to variable portions of the RBD. Then, they screened those candidates for their stability and solubility, to make sure they could withstand attachment to the nanoparticle and injection as a vaccine.
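A screen of this kind can be sketched very loosely as follows. Everything specific here is a placeholder, not the study's actual method: the sequence fragment is illustrative, the mutated positions and allowed substitutions are assumptions, and the stability and solubility scorers stand in for whatever predictors the researchers used.

```python
from itertools import product

# Illustrative fragment only -- not a real RBD sequence from the study.
WT_RBD = "NITNLCPFGEVFNATRFASVYAWNRKRISNCVADYSVLYNS"
POSITIONS = [5, 12, 20, 33]   # assumed antibody-facing, variable sites
ALLOWED = "ADEGKNQRST"        # assumed tolerated amino acids

def enumerate_candidates(wt, positions, allowed):
    """Yield every sequence obtained by substituting the chosen positions
    with every combination of the allowed amino acids."""
    for combo in product(allowed, repeat=len(positions)):
        seq = list(wt)
        for pos, aa in zip(positions, combo):
            seq[pos] = aa
        yield "".join(seq)

def passes_filters(seq, stability_score, solubility_score,
                   s_min=0.0, q_min=0.0):
    """Keep candidates predicted stable and soluble enough to survive
    attachment to a nanoparticle; the scorers are placeholders."""
    return stability_score(seq) >= s_min and solubility_score(seq) >= q_min

# 10 allowed residues at 4 positions gives 10^4 candidates here; the
# actual screen covered roughly 800,000.
print(sum(1 for _ in enumerate_candidates(WT_RBD, POSITIONS, ALLOWED)))
```

The key design point is simply that combinatorial substitution at a handful of antibody-contact positions blows up fast, so cheap computable filters (predicted stability, solubility) are applied before any candidate is synthesized.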
From the remaining candidates, the researchers chose 10 based on how different their variable regions were. They then used these to create mosaic nanoparticles coated with either two or five different RBD proteins (mosaic-2COM and mosaic-5COM).
In their second approach, instead of mutating the RBD sequences, the researchers chose seven naturally occurring RBD proteins, using computational techniques to select RBDs that were different from each other in regions that are variable, but retained their conserved regions. They used these to create another vaccine, mosaic-7COM.
Once the researchers produced the RBD-nanoparticles, they evaluated each one in mice. After each mouse received three doses of one of the vaccines, the researchers analyzed how well the resulting antibodies bound to and neutralized seven variants of SARS-CoV-2 and four other sarbecoviruses.
They also compared the mosaic nanoparticle vaccines to a nanoparticle with only one type of RBD displayed, and to the original mosaic-8 particle from their earlier studies. They found that mosaic-2COM and mosaic-5COM outperformed both of those vaccines, and mosaic-7COM showed the best responses of all. Mosaic-7COM elicited antibodies that bound to most of the viruses tested, and these antibodies were also able to prevent the viruses from entering cells.
The researchers saw similar results when they tested the new vaccines in mice that were previously vaccinated with a bivalent mRNA COVID-19 vaccine.
“We wanted to simulate the fact that people have already been infected and/or vaccinated against SARS-CoV-2,” Wang says. “In pre-vaccinated mice, mosaic-7COM is consistently giving the highest binding titers for both SARS-CoV-2 variants and other sarbecoviruses.”
Bjorkman’s lab has received funding from the Coalition for Epidemic Preparedness Innovations to do a clinical trial of the mosaic-8 RBD-nanoparticle. They also hope to move mosaic-7COM, which performed better in the current study, into clinical trials. The researchers plan to work on redesigning the vaccines so that they could be delivered as mRNA, which would make them easier to manufacture.
The research was funded by a National Science Foundation Graduate Research Fellowship, the National Institutes of Health, Wellcome Leap, the Bill and Melinda Gates Foundation, the Coalition for Epidemic Preparedness Innovations, and the Caltech Merkin Institute for Translational Research.