General news from MIT (Massachusetts Institute of Technology)

Here you will find the recent daily general news from MIT (Massachusetts Institute of Technology).

MIT News
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
MIT releases financials and endowment figures for 2024

The Institute’s pooled investments returned 8.9 percent last year; endowment stands at $24.6 billion.


The Massachusetts Institute of Technology Investment Management Company (MITIMCo) announced today that MIT’s unitized pool of endowment and other MIT funds generated an investment return of 8.9 percent during the fiscal year ending June 30, 2024, as measured using valuations received within one month of fiscal year end. At the end of the fiscal year, MIT’s endowment funds totaled $24.6 billion, excluding pledges. Over the 10 years ending June 30, 2024, MIT generated an annualized return of 10.5 percent.
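
As a rough illustration of what an annualized figure implies (a worked example for context, not a calculation from the report), a 10.5 percent return compounded annually for 10 years corresponds to a cumulative growth factor of

\[
(1 + 0.105)^{10} \approx 2.71,
\]

meaning a dollar invested in the pool at the start of that decade would have grown to roughly $2.71, before accounting for spending distributions, gifts, and other cash flows.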

MIT’s endowment is intended to support current and future generations of MIT scholars with the resources needed to advance knowledge, research, and innovation. As such, endowment funds are used for Institute activities including education, research, campus renewal, faculty work, and student financial aid.

The Institute’s need-blind undergraduate admissions policy ensures that an MIT education is accessible to all qualified candidates regardless of financial resources. MIT works closely with all families who qualify for financial aid to develop an individual affordability plan tailored to their financial circumstances. In 2023-24, the average need-based MIT scholarship was $59,510. Fifty-eight percent of MIT undergraduates received need-based financial aid, and 39 percent of MIT undergraduate students received scholarship funding from MIT and other sources sufficient to cover the total cost of tuition.

Effective in fiscal 2023, MIT enhanced undergraduate financial aid, ensuring that all families with incomes below $140,000 and typical assets have tuition fully covered by scholarships. MIT further enhanced undergraduate financial aid effective in fiscal 2025, and families with incomes below $75,000 and typical assets have no expectation of parental contribution. Eighty-seven percent of seniors who graduated in academic year 2024 graduated with no debt.

MITIMCo is a unit of MIT, created to manage and oversee the investment of the Institute’s endowment, retirement, and operating funds.

MIT’s Report of the Treasurer for fiscal year 2024 was made available publicly today.


Tiny magnetic discs offer remote brain stimulation without transgenes

The devices could be a useful tool for biomedical research, and possible clinical use in the future.


Novel magnetic nanodiscs could provide a much less invasive way of stimulating parts of the brain, paving the way for stimulation therapies without implants or genetic modification, MIT researchers report.

The scientists envision that the tiny discs, which are about 250 nanometers across (about 1/500 the width of a human hair), would be injected directly into the desired location in the brain. From there, they could be activated at any time simply by applying a magnetic field outside the body. The new particles could quickly find applications in biomedical research, and eventually, after sufficient testing, might be applied to clinical uses.

The development of these nanoparticles is described in the journal Nature Nanotechnology, in a paper by Polina Anikeeva, a professor in MIT’s departments of Materials Science and Engineering and Brain and Cognitive Sciences, graduate student Ye Ji Kim, and 17 others at MIT and in Germany.

Deep brain stimulation (DBS) is a common clinical procedure that uses electrodes implanted in the target brain regions to treat symptoms of neurological and psychiatric conditions such as Parkinson’s disease and obsessive-compulsive disorder. Despite its efficacy, the surgical difficulty and clinical complications associated with DBS limit the number of cases where such an invasive procedure is warranted. The new nanodiscs could provide a much more benign way of achieving the same results.

Over the past decade, other implant-free methods of brain stimulation have been developed, but these approaches were often limited by their spatial resolution or by their ability to reach deep brain regions. During that time, Anikeeva’s Bioelectronics group, along with others in the field, used magnetic nanomaterials to transduce remote magnetic signals into brain stimulation. However, those magnetic methods relied on genetic modification and cannot be used in humans.

Since all nerve cells are sensitive to electrical signals, Kim, a graduate student in Anikeeva’s group, hypothesized that a magnetoelectric nanomaterial that can efficiently convert magnetization into electrical potential could offer a path toward remote magnetic brain stimulation. Creating a nanoscale magnetoelectric material was, however, a formidable challenge.

Kim synthesized novel magnetoelectric nanodiscs and collaborated with Noah Kent, a postdoc in Anikeeva’s lab with a background in physics who is a second author of the study, to understand the properties of these particles.

The structure of the new nanodiscs consists of a two-layer magnetic core and a piezoelectric shell. The magnetic core is magnetostrictive, which means it changes shape when magnetized. This deformation then induces strain in the piezoelectric shell, which produces a varying electrical polarization. Through the combination of the two effects, these composite particles can deliver electrical pulses to neurons when exposed to magnetic fields.
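
In schematic terms (a simplified relation for intuition, not an equation given by the authors), the overall magnetoelectric response can be viewed as a chain of two conversions: the applied magnetic field produces strain in the magnetostrictive core, and that strain produces polarization in the piezoelectric shell,

\[
\alpha_{\mathrm{ME}} \sim \frac{\partial P}{\partial H} = \frac{\partial P}{\partial \varepsilon} \cdot \frac{\partial \varepsilon}{\partial H},
\]

where \( H \) is the applied field, \( \varepsilon \) is the strain transferred across the core-shell interface, and \( P \) is the electrical polarization. This chain view also helps explain a result discussed later in the article: a large gain in magnetostriction (the \( \partial \varepsilon / \partial H \) term) does not automatically yield an equally large gain in overall output, because the strain must still be transferred to, and converted by, the shell.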

One key to the particles’ effectiveness is their disc shape. Previous attempts to use magnetic nanoparticles had relied on spherical particles, in which the magnetoelectric effect was very weak, says Kim. The shape anisotropy of the discs enhances magnetostriction by more than 1,000-fold, adds Kent.

The team first added their nanodiscs to cultured neurons, which allowed them to activate these cells on demand with short pulses of magnetic field. This stimulation did not require any genetic modification.

They then injected small droplets of the magnetoelectric nanodisc solution into specific regions of the brains of mice. Simply turning on a relatively weak electromagnet nearby triggered the particles to release a tiny jolt of electricity in that brain region, and the stimulation could be switched on and off remotely by switching the electromagnet. That electrical stimulation “had an impact on neuron activity and on behavior,” Kim says.

The team found that the magnetoelectric nanodiscs could stimulate a deep brain region, the ventral tegmental area, that is associated with feelings of reward.

The team also stimulated another brain area, the subthalamic nucleus, associated with motor control. “This is the region where electrodes typically get implanted to manage Parkinson’s disease,” Kim explains. The researchers successfully demonstrated modulation of motor control through the particles: by injecting nanodiscs into only one hemisphere, they could induce rotations in healthy mice by applying a magnetic field.

The nanodiscs could trigger neuronal activity comparable to that of conventional implanted electrodes delivering mild electrical stimulation. The authors achieved subsecond temporal precision for neural stimulation with their method, yet observed significantly reduced foreign-body responses compared to electrodes, potentially allowing for even safer deep brain stimulation.

The layered chemical composition, together with the physical shape and size of the new nanodiscs, is what made precise stimulation possible.

While the researchers successfully increased the magnetostrictive effect, the second part of the process, converting the magnetic effect into an electrical output, still needs more work, Anikeeva says. While the magnetic response was a thousand times greater, the conversion to an electric impulse was only four times greater than with conventional spherical particles.

“This massive enhancement of a thousand times didn’t completely translate into the magnetoelectric enhancement,” says Kim. “That’s where a lot of the future work will be focused, on making sure that the thousand times amplification in magnetostriction can be converted into a thousand times amplification in the magnetoelectric coupling.”

What the team found about the way the particles’ shape affects their magnetostriction was quite unexpected. “It’s kind of a new thing that just appeared when we tried to figure out why these particles worked so well,” says Kent.

Anikeeva adds: “Yes, it’s a record-breaking particle, but it’s not as record-breaking as it could be.” That remains a topic for further work, but the team has ideas about how to make further progress.

While these nanodiscs could in principle already be applied to basic research using animal models, to translate them to clinical use in humans would require several more steps, including large-scale safety studies, “which is something academic researchers are not necessarily most well-positioned to do,” Anikeeva says. “When we find that these particles are really useful in a particular clinical context, then we imagine that there will be a pathway for them to undergo more rigorous large animal safety studies.”

The team included researchers affiliated with MIT’s departments of Materials Science and Engineering, Electrical Engineering and Computer Science, Chemistry, and Brain and Cognitive Sciences; the Research Laboratory of Electronics; the McGovern Institute for Brain Research; and the Koch Institute for Integrative Cancer Research; and from the Friedrich-Alexander University of Erlangen, Germany. The work was supported, in part, by the National Institutes of Health, the National Center for Complementary and Integrative Health, the National Institute for Neurological Disorders and Stroke, the McGovern Institute for Brain Research, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience.


A new method makes high-resolution imaging more accessible

Labs that can’t afford expensive super-resolution microscopes could use a new expansion technique to image nanoscale structures inside cells.


A classical way to image nanoscale structures in cells is with high-powered, expensive super-resolution microscopes. As an alternative, MIT researchers have developed a way to expand tissue before imaging it — a technique that allows them to achieve nanoscale resolution with a conventional light microscope.

In the newest version of this technique, the researchers have made it possible to expand tissue 20-fold in a single step. This simple, inexpensive method could pave the way for nearly any biology lab to perform nanoscale imaging.

“This democratizes imaging,” says Laura Kiessling, the Novartis Professor of Chemistry at MIT and a member of the Broad Institute of MIT and Harvard and MIT’s Koch Institute for Integrative Cancer Research. “Without this method, if you want to see things with a high resolution, you have to use very expensive microscopes. What this new technique allows you to do is see things that you couldn’t normally see with standard microscopes. It drives down the cost of imaging because you can see nanoscale things without the need for a specialized facility.”

At the resolution achieved by this technique, which is around 20 nanometers, scientists can see organelles inside cells, as well as clusters of proteins.

“Twenty-fold expansion gets you into the realm that biological molecules operate in. The building blocks of life are nanoscale things: biomolecules, genes, and gene products,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT; a professor of biological engineering, media arts and sciences, and brain and cognitive sciences; a Howard Hughes Medical Institute investigator; and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

Boyden and Kiessling are the senior authors of the new study, which appears today in Nature Methods. MIT graduate student Shiwei Wang and Tay Won Shin PhD ’23 are the lead authors of the paper.

A single expansion

Boyden’s lab invented expansion microscopy in 2015. The technique requires embedding tissue into an absorbent polymer and breaking apart the proteins that normally hold tissue together. When water is added, the gel swells and pulls biomolecules apart from each other.

The original version of this technique, which expanded tissue about fourfold, allowed researchers to obtain images with a resolution of around 70 nanometers. In 2017, Boyden’s lab modified the process to include a second expansion step, achieving an overall 20-fold expansion. This enables even higher resolution, but the process is more complicated.

“We’ve developed several 20-fold expansion technologies in the past, but they require multiple expansion steps,” Boyden says. “If you could do that amount of expansion in a single step, that could simplify things quite a bit.”

With 20-fold expansion, researchers can get down to a resolution of about 20 nanometers using a conventional light microscope. This allows them to see cell structures like microtubules and mitochondria, as well as clusters of proteins.
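
As a back-of-the-envelope check (assuming a diffraction-limited optical resolution of roughly 300 nanometers, a typical figure for visible light), physically expanding the sample divides the effective resolution by the expansion factor:

\[
\text{effective resolution} \approx \frac{\text{optical resolution}}{\text{expansion factor}}, \qquad \frac{300~\text{nm}}{4} \approx 75~\text{nm}, \qquad \frac{300~\text{nm}}{20} \approx 15~\text{nm},
\]

which is consistent with the roughly 70-nanometer and 20-nanometer figures quoted above.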

In the new study, the researchers set out to perform 20-fold expansion with only a single step. This meant that they had to find a gel that was both extremely absorbent and mechanically stable, so that it wouldn’t fall apart when expanded 20-fold.

To achieve that, they used a gel assembled from N,N-dimethylacrylamide (DMAA) and sodium acrylate. Unlike previous expansion gels that rely on adding another molecule to form crosslinks between the polymer strands, this gel forms crosslinks spontaneously and exhibits strong mechanical properties. Such gel components previously had been used in expansion microscopy protocols, but the resulting gels could expand only about tenfold. The MIT team optimized the gel and the polymerization process to make the gel more robust, and to allow for 20-fold expansion.

To further stabilize the gel and enhance its reproducibility, the researchers removed oxygen from the polymer solution prior to gelation, which prevents side reactions that interfere with crosslinking. This step requires running nitrogen gas through the polymer solution, which replaces most of the oxygen in the system.

Once the gel is formed, select bonds in the proteins that hold the tissue together are broken and water is added to make the gel expand. After the expansion is performed, target proteins in tissue can be labeled and imaged.

“This approach may require more sample preparation compared to other super-resolution techniques, but it’s much simpler when it comes to the actual imaging process, especially for 3D imaging,” Shin says. “We document the step-by-step protocol in the manuscript so that readers can go through it easily.”

Imaging tiny structures

Using this technique, the researchers were able to image many tiny structures within brain cells, including structures called synaptic nanocolumns. These are clusters of proteins that are arranged in a specific way at neuronal synapses, allowing neurons to communicate with each other via secretion of neurotransmitters such as dopamine.

In studies of cancer cells, the researchers also imaged microtubules — hollow tubes that help give cells their structure and play important roles in cell division. They were also able to see mitochondria (organelles that generate energy) and even the organization of individual nuclear pore complexes (clusters of proteins that control access to the cell nucleus).

Wang is now using this technique to image carbohydrates known as glycans, which are found on cell surfaces and help control cells’ interactions with their environment. This method could also be used to image tumor cells, allowing scientists to glimpse how proteins are organized within those cells, much more easily than has previously been possible.

The researchers envision that any biology lab should be able to use this technique at a low cost, since it relies on standard, off-the-shelf chemicals and common equipment such as confocal microscopes and glove bags, which most labs already have or can easily access.

“Our hope is that with this new technology, any conventional biology lab can use this protocol with their existing microscopes, allowing them to approach resolution that can only be achieved with very specialized and costly state-of-the-art microscopes,” Wang says.

The research was funded, in part, by the U.S. National Institutes of Health, an MIT Presidential Graduate Fellowship, U.S. National Science Foundation Graduate Research Fellowship grants, Open Philanthropy, Good Ventures, the Howard Hughes Medical Institute, Lisa Yang, Ashar Aziz, and the European Research Council.


The way sensory prediction changes under anesthesia tells us how conscious cognition works

A new study adds evidence that consciousness requires communication between sensory and cognitive regions of the brain’s cortex.


Our brains constantly work to make predictions about what’s going on around us, for instance to ensure that we can attend to and consider the unexpected. A new study examines how this predictive process works during consciousness and how it breaks down under general anesthesia. The results add evidence to the idea that conscious thought requires synchronized communication — mediated by brain rhythms in specific frequency bands — between basic sensory and higher-order cognitive regions of the brain.

Previously, members of the research team in The Picower Institute for Learning and Memory at MIT and at Vanderbilt University had described how brain rhythms enable the brain to remain prepared to attend to surprises. Cognition-oriented brain regions (generally at the front of the brain) use relatively low-frequency alpha and beta rhythms to suppress processing by sensory regions (generally toward the back of the brain) of stimuli that have become familiar and mundane in the environment (e.g., your co-worker’s music). When sensory regions detect a surprise (e.g., the office fire alarm), they use faster-frequency gamma rhythms to tell the higher regions about it, and the higher regions process that at gamma frequencies to decide what to do (e.g., exit the building).

The new results, published Oct. 7 in the Proceedings of the National Academy of Sciences, show that when animals were under propofol-induced general anesthesia, a sensory region retained the capacity to detect simple surprises but communication with a higher cognitive region toward the front of the brain was lost, making that region unable to engage in its “top-down” regulation of the activity of the sensory region and keeping it oblivious to simple and more complex surprises alike.

What we've got here is failure to communicate

“What we are doing here speaks to the nature of consciousness,” says co-senior author Earl K. Miller, Picower Professor in The Picower Institute for Learning and Memory and MIT’s Department of Brain and Cognitive Sciences. “Propofol general anesthesia deactivates the top-down processes that underlie cognition. It essentially disconnects communication between the front and back halves of the brain.”

Co-senior author Andre Bastos, an assistant professor in the psychology department at Vanderbilt and a former member of Miller’s MIT lab, adds that the study results highlight the key role of frontal areas in consciousness.

“These results are particularly important given the newfound scientific interest in the mechanisms of consciousness, and how consciousness relates to the ability of the brain to form predictions,” Bastos says.

The brain’s ability to predict is dramatically altered during anesthesia. It was interesting that the front of the brain, the areas associated with cognition, was more strongly diminished in its predictive abilities than the sensory areas were. This suggests that prefrontal areas help to spark an “ignition” event that allows sensory information to become conscious. Sensory cortex activation by itself does not lead to conscious perception. These observations help narrow down possible models for the mechanisms of consciousness.

Yihan Sophy Xiong, a graduate student in Bastos’ lab who led the study, says the anesthetic reduces the times in which inter-regional communication within the cortex can occur.

“In the awake brain, brain waves give short windows of opportunity for neurons to fire optimally — the ‘refresh rate’ of the brain, so to speak,” Xiong says. “This refresh rate helps organize different brain areas to communicate effectively. Anesthesia both slows down the refresh rate, which narrows these time windows for brain areas to talk to each other and makes the refresh rate less effective, so that neurons become more disorganized about when they can fire. When the refresh rate no longer works as intended, our ability to make predictions is weakened.”

Learning from oddballs

To conduct the research, the neuroscientists measured the electrical signals, or “spiking,” of hundreds of individual neurons and the coordinated rhythms of their aggregated activity (at alpha/beta and gamma frequencies), in two areas on the surface, or cortex, of the brain of two animals as they listened to sequences of tones. Sometimes the sequences would all be the same note (e.g., AAAAA). Sometimes there’d be a simple surprise that the researchers called a “local oddball” (e.g., AAAAB). But sometimes the surprise would be more complicated, or a “global oddball.” For example, after hearing a series of AAAABs, there’d all of a sudden be AAAAA, which violates the global but not the local pattern.
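
For concreteness, the sketch below (hypothetical Python, not code from the study; the tone labels and the 20 percent rarity are illustrative choices) shows how such trials can be constructed so that a frequent AAAAB pattern makes a rare AAAAA trial a global oddball even though it contains no local oddball:

import random

def make_trial(standard="A", deviant="B", n_standards=4, local_oddball=False):
    # Build one five-tone trial: AAAAA (no local oddball) or AAAAB (local oddball).
    tones = [standard] * n_standards
    tones.append(deviant if local_oddball else standard)
    return "".join(tones)

def make_block(n_trials=100, frequent_has_local_oddball=True, p_rare=0.2):
    # Frequent trials establish the global rule; rare trials violate it.
    # If AAAAB is the frequent trial, a rare AAAAA violates the global (but not local) pattern.
    trials = []
    for _ in range(n_trials):
        rare = random.random() < p_rare
        local = frequent_has_local_oddball != rare
        trials.append(make_trial(local_oddball=local))
    return trials

if __name__ == "__main__":
    for trial in make_block(n_trials=8):
        print(trial)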

Prior work has suggested that a sensory region (in this case the temporoparietal area, or Tpt) can spot local oddballs on its own, Miller says. Detecting the more complicated global oddball requires the participation of a higher order region (in this case the frontal eye fields, or FEF).

The animals heard the tone sequences both while awake and while under propofol anesthesia. The waking-state results held no surprises: the researchers reaffirmed that top-down alpha/beta rhythms from FEF carried predictions to the Tpt and that Tpt would increase gamma rhythms when an oddball came up, causing FEF (and the prefrontal cortex) to respond with upticks of gamma activity as well.

But by several measures and analyses, the scientists could see these dynamics break down after the animals lost consciousness.

Under propofol, for instance, spiking activity declined overall. When a local oddball came along, Tpt spiking still increased notably, but spiking in FEF no longer followed suit as it does during wakefulness.

Meanwhile, when a global oddball was presented during wakefulness, the researchers could use software to “decode” representation of that among neurons in FEF and the prefrontal cortex (another cognition-oriented region). They could also decode local oddballs in the Tpt. But under anesthesia the decoder could no longer reliably detect representation of local or global oddballs in FEF or the prefrontal cortex.

Moreover, when they compared rhythms in the regions across wakeful versus unconscious states, they found stark differences. When the animals were awake, oddballs increased gamma activity in both Tpt and FEF and alpha/beta rhythms decreased. Regular, non-oddball stimulation increased alpha/beta rhythms. But when the animals lost consciousness, the increase in gamma rhythms from a local oddball was even greater in Tpt than when the animal was awake.

“Under propofol-mediated loss of consciousness, the inhibitory function of alpha/beta became diminished and/or eliminated, leading to disinhibition of oddballs in sensory cortex,” the authors wrote.

Other analyses of inter-region connectivity and synchrony revealed that the regions lost the ability to communicate during anesthesia.

In all, the study’s evidence suggests that conscious thought requires coordination across the cortex, from front to back, the researchers wrote.

“Our results therefore suggest an important role for prefrontal cortex activation, in addition to sensory cortex activation, for conscious perception,” the researchers wrote.

In addition to Xiong, Miller, and Bastos, the paper’s other authors are Jacob Donoghue, Mikael Lundqvist, Meredith Mahnke, Alex Major, and Emery N. Brown.

The National Institutes of Health, The JPB Foundation, and The Picower Institute for Learning and Memory funded the study.


Mixing joy and resolve, event celebrates women in science and addresses persistent inequalities

The Kuggie Vallee Distinguished Lectures and Workshops presented inspiring examples of success, even as the event evoked frank discussions of the barriers that still hinder many women in science.


For two days at The Picower Institute for Learning and Memory at MIT, participants in the Kuggie Vallee Distinguished Lectures and Workshops celebrated the success of women in science and shared strategies to persist through, or better yet dissipate, the stiff headwinds women still face in the field.

“Everyone is here to celebrate and to inspire and advance the accomplishments of all women in science,” said host Li-Huei Tsai, Picower Professor in the Department of Brain and Cognitive Sciences and director of the Picower Institute, as she welcomed an audience that included scores of students, postdocs, and other research trainees. “It is a great feeling to have the opportunity to showcase examples of our successes and to help lift up the next generation.”

Tsai earned the honor of hosting the event after she was named a Vallee Visiting Professor in 2022 by the Vallee Foundation. Foundation president Peter Howley, a professor of pathological anatomy at Harvard University, said the global series of lectureships and workshops were created to honor Kuggie Vallee, a former Lesley College professor who worked to advance the careers of women.

During the program Sept. 24-25, speakers and audience members alike made it clear that helping women succeed requires both recognizing their achievements and resolving to change social structures in which they face marginalization.

Inspiring achievements

Lectures on the first day featured two brain scientists who have each led acclaimed discoveries that have been transforming their fields.

Michelle Monje, a pediatric neuro-oncologist at Stanford University whose recognitions include a MacArthur Fellowship, described her lab’s studies of brain cancers in children, which emerge at specific times in development as young brains adapt to their world by wiring up new circuits and insulating neurons with a fatty sheathing called myelin. Monje has discovered that when the precursors to myelinating cells, called oligodendrocyte precursor cells, harbor cancerous mutations, the tumors that arise — called gliomas — can hijack those cellular and molecular mechanisms. To promote their own growth, gliomas tap directly into the electrical activity of neural circuits by forging functional neuron-to-cancer connections, akin to the “synapse” junctions healthy neurons make with each other. Years of her lab’s studies, often led by female trainees, have not only revealed this insidious behavior (and linked aberrant myelination to many other diseases as well), but also revealed specific molecular factors involved. Those findings, Monje said, present completely novel potential avenues for therapeutic intervention.

“This cancer is an electrically active tissue and that is not how we have been approaching understanding it,” she said.

Erin Schuman, who directs the Max Planck Institute for Brain Research in Frankfurt, Germany, and has won honors including the Brain Prize, described her groundbreaking discoveries related to how neurons form and edit synapses along the very long branches — axons and dendrites — that give the cells their exotic shapes. Synapses form very far from the cell body where scientists had long thought all proteins, including those needed for synapse structure and activity, must be made. In the mid-1990s, Schuman showed that the protein-making process can occur at the synapse and that neurons stage the needed infrastructure — mRNA and ribosomes — near those sites. Her lab has continued to develop innovative tools to build on that insight, cataloging the stunning array of thousands of mRNAs involved, including about 800 that are primarily translated at the synapse, studying the diversity of synapses that arise from that collection, and imaging individual ribosomes such that her lab can detect when they are actively making proteins in synaptic neighborhoods.

Persistent headwinds

While the first day’s lectures showcased examples of women’s success, the second day’s workshops turned the spotlight on the social and systemic hindrances that continue to make such achievements an uphill climb. Speakers and audience members engaged in frank dialogues aimed at calling out those barriers, overcoming them, and dismantling them.

Susan Silbey, the Leon and Anne Goldberg Professor of Humanities, Sociology and Anthropology at MIT and professor of behavioral and policy sciences in the MIT Sloan School of Management, told the group that as bad as sexual harassment and assault in the workplace are, the more pervasive, damaging, and persistent headwinds for women across a variety of professions are “deeply sedimented cultural habits” that marginalize their expertise and contributions in workplaces, rendering them invisible to male counterparts, even when they are in powerful positions. High-ranking women in Silicon Valley who answered the “Elephant in the Valley” survey, for instance, reported high rates of demeaning comments and condescending treatment, as well as exclusion from social circles. Even U.S. Supreme Court justices are not immune, she noted, citing research showing that for decades female justices have been interrupted with disproportionate frequency during oral arguments at the court. Silbey’s research has shown that young women entering the engineering workforce often become discouraged by a system that appears meritocratic, but in which they are often excluded from opportunities to demonstrate or be credited for that merit and are paid significantly less.

“Women’s occupational inequality is a consequence of being ignored, having contributions overlooked or appropriated, of being assigned to lower-status roles, while men are pushed ahead, honored and celebrated, often on the basis of women’s work,” Silbey said.

Often relatively small in numbers, women in such workplaces become tokens — visible as different, but still treated as outsiders, Silbey said. Women tend to internalize this status, becoming very cautious about their work while some men surge ahead in more cavalier fashion. Silbey and speakers who followed illustrated the effect this can have on women’s careers in science. Kara McKinley, an assistant professor of stem cell and regenerative biology at Harvard, noted that while the scientific career “pipeline” in some areas of science is full of female graduate students and postdocs, only about 20 percent of natural sciences faculty positions are held by women. Strikingly, women are already significantly depleted in the applicant pools for assistant professor positions, she said. Those who do apply tend to wait until they are more qualified than the men they are competing against. 

McKinley and Silbey each noted that women scientists submit fewer papers to prestigious journals, with Silbey explaining that it’s often because women are more likely to worry that their studies need to tie up every loose end. Yet, said Stacie Weninger, a venture capitalist and president of the F-Prime Biomedical Research Initiative and a former editor at Cell Press, women were also less likely than men to rebut rejections from journal editors, thereby accepting the rejection even though rebuttals sometimes work.

Several speakers, including Weninger and Silbey, said pedagogy must change to help women overcome a social tendency to couch their assertions in caveats when many men speak with confidence and are therefore perceived as more knowledgeable.

At lunch, trainees sat in small groups with the speakers. They shared sometimes harrowing personal stories of gender-related difficulties in their young careers and sought advice on how to persist and remain resilient. Schuman advised the trainees to report mistreatment, even if they aren’t confident that university officials will be able to effect change, to at least make sure patterns of mistreatment get on the record. Reflecting on discouraging comments she experienced early in her career, Monje advised students to build up and maintain an inner voice of confidence and draw upon it when criticism is unfair.

“It feels terrible in the moment, but cream rises,” Monje said. “Believe in yourself. It will be OK in the end.”

Lifting each other up

Speakers at the conference shared many ideas to help overcome inequalities. McKinley described a program she launched in 2020 to ensure that a diversity of well-qualified women and non-binary postdocs are recruited for, and apply for, life sciences faculty jobs: the Leading Edge Symposium. The program identifies and names fellows — 200 so far — and provides career mentoring advice, a supportive community, and a platform to ensure they are visible to recruiters. Since the program began, 99 of the fellows have gone on to accept faculty positions at various institutions.

In a talk tracing the arc of her career, Weninger, who trained as a neuroscientist at Harvard, said she left bench work for a job as an editor because she wanted to enjoy the breadth of science, but also noted that her postdoc salary didn’t even cover the cost of child care. She left Cell Press in 2005 to help lead a task force on women in science that Harvard formed in the wake of comments by then-president Lawrence Summers widely understood as suggesting that women lacked “natural ability” in science and engineering. Working feverishly for months, the task force recommended steps to increase the number of senior women in science, including providing financial support for researchers who were also caregivers at home so they’d have the money to hire a technician. That extra set of hands would afford them the flexibility to keep research running even as they also attended to their families. Notably, Monje said she does this for the postdocs in her lab.

A graduate student asked Silbey at the end of her talk how to change a culture in which traditionally male-oriented norms marginalize women. Silbey said it starts with calling out those norms and recognizing that they are the issue, rather than increasing women’s representation in, or asking them to adapt to, existing systems.

“To make change, it requires that you do recognize the differences of the experiences and not try to make women exactly like men, or continue the past practices and think, ‘Oh, we just have to add women into it’,” she said.

Silbey also praised the Kuggie Vallee event at MIT for assembling a new community around these issues. Women in science need more social networks where they can exchange information and resources, she said.

“This is where an organ, an event like this, is an example of making just that kind of change: women making new networks for women,” she said.


New 3D printing technique creates unique objects quickly and with less waste

By using a 3D printer like an iron, researchers can precisely control the color, shade, and texture of fabricated objects, using only one material.


Multimaterial 3D printing enables makers to fabricate customized devices with multiple colors and varied textures. But the process can be time-consuming and wasteful because existing 3D printers must switch between multiple nozzles, often discarding one material before they can start depositing another.

Researchers from MIT and Delft University of Technology have now introduced a more efficient, less wasteful, and higher-precision technique that leverages heat-responsive materials to print objects that have multiple colors, shades, and textures in one step.

Their method, called speed-modulated ironing, utilizes a dual-nozzle 3D printer. The first nozzle deposits a heat-responsive filament and the second nozzle passes over the printed material to activate certain responses, such as changes in opacity or coarseness, using heat.

By controlling the speed of the second nozzle, the researchers can heat the material to specific temperatures, finely tuning the color, shade, and roughness of the heat-responsive filaments. Importantly, this method does not require any hardware modifications.

The researchers developed a model that predicts the amount of heat the “ironing” nozzle will transfer to the material based on its speed. They used this model as the foundation for a user interface that automatically generates printing instructions which achieve color, shade, and texture specifications.

One could use speed-modulated ironing to create artistic effects by varying the color on a printed object. The technique could also produce textured handles that would be easier to grasp for individuals with weakness in their hands.

“Today, we have desktop printers that use a smart combination of a few inks to generate a range of shades and textures. We want to be able to do the same thing with a 3D printer — use a limited set of materials to create a much more diverse set of characteristics for 3D-printed objects,” says Mustafa Doğa Doğan PhD ’24, co-author of a paper on speed-modulated ironing.

This project is a collaboration between the research groups of Zjenja Doubrovski, assistant professor at TU Delft, and Stefanie Mueller, the TIBCO Career Development Professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Doğan worked closely with lead author Mehmet Ozdemir of TU Delft; Marwa AlAlawi, a mechanical engineering graduate student at MIT; and Jose Martinez Castro of TU Delft. The research will be presented at the ACM Symposium on User Interface Software and Technology.

Modulating speed to control temperature

The researchers launched the project to explore better ways to achieve multiproperty 3D printing with a single material. The use of heat-responsive filaments was promising, but most existing methods use a single nozzle to do printing and heating. The printer always needs to first heat the nozzle to the desired target temperature before depositing the material.

However, heating and cooling the nozzle takes a long time, and there is a danger that the filament in the nozzle might degrade as it reaches higher temperatures.

To prevent these problems, the team developed an ironing technique where material is printed using one nozzle, then activated by a second, empty nozzle which only reheats it. Instead of adjusting the temperature to trigger the material response, the researchers keep the temperature of the second nozzle constant and vary the speed at which it moves over the printed material, slightly touching the top of the layer.
 

[Animation: a rectangular ironing nozzle sweeps the top layer of a printed block while an infrared inset shows the thermal activity.]


“As we modulate the speed, that allows the printed layer we are ironing to reach different temperatures. It is similar to what happens if you move your finger over a flame. If you move it quickly, you might not be burned, but if you drag it across the flame slowly, your finger will reach a higher temperature,” AlAlawi says.

The MIT team collaborated with the TU Delft researchers to develop the theoretical model that predicts how fast the second nozzle must move to heat the material to a specific temperature.

The model correlates a material’s output temperature with its heat-responsive properties to determine the exact nozzle speed which will achieve certain colors, shades, or textures in the printed object.
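
A minimal sketch of that idea follows (hypothetical Python with placeholder coefficients, not the researchers’ actual model): assume a calibrated relation between ironing speed and the peak temperature the printed layer reaches, then invert it to find the speed for a desired temperature.

import math

# Placeholder calibration constants -- illustrative values, not from the paper.
T_NOZZLE = 240.0   # ironing-nozzle temperature in deg C (held constant)
T_AMBIENT = 25.0   # baseline temperature of the printed layer in deg C
K = 30.0           # lumped heat-transfer constant in mm/s, found by calibration

def peak_layer_temperature(speed_mm_s):
    # Assumed model: slower passes leave the layer closer to the nozzle temperature.
    frac = 1.0 - math.exp(-K / speed_mm_s)
    return T_AMBIENT + (T_NOZZLE - T_AMBIENT) * frac

def speed_for_temperature(target_c):
    # Invert the assumed model: find the ironing speed that reaches a target peak temperature.
    frac = (target_c - T_AMBIENT) / (T_NOZZLE - T_AMBIENT)
    if not 0.0 < frac < 1.0:
        raise ValueError("target must lie between ambient and nozzle temperature")
    return K / (-math.log(1.0 - frac))

if __name__ == "__main__":
    # e.g., a foaming filament that changes opacity near 110 deg C (illustrative threshold)
    for target in (90.0, 110.0, 150.0):
        v = speed_for_temperature(target)
        print(f"target {target:5.1f} C -> ironing speed ~{v:6.2f} mm/s "
              f"(model check: {peak_layer_temperature(v):5.1f} C)")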

“There are a lot of inputs that can affect the results we get. We are modeling something that is very complicated, but we also want to make sure the results are fine-grained,” AlAlawi says.

The team dug into scientific literature to determine proper heat transfer coefficients for a set of unique materials, which they built into their model. They also had to contend with an array of unpredictable variables, such as heat that may be dissipated by fans and the air temperature in the room where the object is being printed.

They incorporated the model into a user-friendly interface that simplifies the scientific process, automatically translating the pixels in a maker’s 3D model into a set of machine instructions that control the speed at which the object is printed and ironed by the dual nozzles.

Faster, finer fabrication

They tested their approach with three heat-responsive filaments. The first, a foaming polymer with particles that expand as they are heated, yields different shades, translucencies, and textures. They also experimented with a filament filled with wood fibers and one with cork fibers, both of which can be charred to produce increasingly darker shades.

The researchers demonstrated how their method could produce objects like water bottles that are partially translucent. To make the water bottles, they ironed the foaming polymer at low speeds to create opaque regions and higher speeds to create translucent ones. They also utilized the foaming polymer to fabricate a bike handle with varied roughness to improve a rider’s grip.

Trying to produce similar objects using traditional multimaterial 3D printing took far more time, sometimes adding hours to the printing process, and consumed more energy and material. In addition, speed-modulated ironing could produce fine-grained shade and texture gradients that other methods could not achieve.

In the future, the researchers want to experiment with other thermally responsive materials, such as plastics. They also hope to explore the use of speed-modulated ironing to modify the mechanical and acoustic properties of certain materials.


Uplifting West African communities, one cashew at a time

GRIA Food Company, founded by Joshua Reed-Diawuoh MBA ’20, ethically sources cashews from the region and sells them internationally to support local food economies.


Ever wonder how your favorite snack was sourced? Joshua Reed-Diawuoh thinks more people should.

Reed-Diawuoh MBA ’20 is the founder and CEO of GRIA Food Company, which partners with companies that ethically source and process food in West Africa to support local food economies and help communities in the region more broadly.

“It’s very difficult for these agribusinesses and producers to start sustainable businesses and build up that value chain in the area,” says Reed-Diawuoh, who started the company as a student in the MIT Sloan School of Management. “We want to support these companies that put in the work to build integrated businesses that are employing people and uplifting communities.”

GRIA, which stands for “Grown in Africa,” is currently selling six types of flavored cashews sourced from Benin, Togo, and Burkina Faso. All of the cashews are certified by Fairtrade International, which means in addition to offering sustainable wages, access to financing, and decent working conditions, the companies receive a “Fairtrade Premium” on top of the selling price that allows them to invest in the long-term health of their communities.

“That premium is transformational,” Reed-Diawuoh says. “The premium goes to the producer cooperatives, or the farmers working the land, and they can invest that in any way they choose. They can put it back into their business, they can start new community development projects, like building schools or improving wastewater infrastructure, whatever they want.”

Cracking the nut

Reed-Diawuoh’s family is from Ghana, and before coming to MIT Sloan, he worked to support agriculture and food manufacturing for countries in Sub-Saharan Africa, with particular focus on uplifting small-scale farmers. That’s where he learned about difficulties with financing and infrastructure constraints that held many companies back.

“I wanted to get my hands dirty and start my own business that contributed to improving agricultural development in West Africa,” Reed-Diawuoh says.

He entered MIT Sloan in 2018, taking entrepreneurship classes and exploring several business ideas before deciding to ethically source produce from farmers and sell directly to consumers. He says MIT Sloan’s Sustainability Business Lab offered particularly valuable lessons for how to structure his business.

In his second year, Reed-Diawuoh was selected for a fellowship at the Legatum Center, which connected him to other entrepreneurs working in emerging markets around the world.

“Legatum was a pivotal milestone for me,” he says. “It provided me with some structure and space to develop this idea. It also gave me an incredible opportunity to take risks and explore different business concepts in a way I couldn’t have done if I was working in industry.”

The business model Reed-Diawuoh settled on for GRIA sources product from agribusiness partners in West Africa that adhere to the strictest environmental and labor standards. Reed-Diawuoh decided to start with cashews because they have many manual processing steps — from shelling to peeling and roasting — that are often done after the cashews are shipped out of West Africa, limiting the growth of local food economies and taking wealth out of communities.

Each of GRIA’s partners, from the companies harvesting cashews to the processing facilities, works directly with farmer cooperatives and small-scale farmers and is certified by Fairtrade International.

“Without proper oversight and regulations, workers oftentimes get exploited, and child labor is a huge problem across the agriculture sector,” Reed-Diawuoh says. “Fairtrade certifications try and take a robust and rigorous approach to auditing all of the businesses and their supply chains, from producers to farmers to processors. They do on-site visits and they audit financial documents. We went through this over the course of a thorough three-month review.”

After importing cashew kernels, GRIA flavors and packages them at a production facility in Boston. Reed-Diawuoh started by selling to small independent retailers in Greater Boston before scaling up GRIA’s online sales. He started ramping up production in the beginning of 2023.

“Every time we sell our product, if people weren’t already familiar with Fairtrade or ethical sourcing, we provide information on our packaging and all of our collateral,” Reed-Diawuoh says. “We want to spread this message about the importance of ethical sourcing and the importance of building up food manufacturing in West Africa in particular, but also in rising economies throughout the world.”

Making ethical sourcing mainstream

GRIA currently imports about a ton of Fairtrade cashews and kernels each quarter, and Reed-Diawuoh hopes to double that number each year for the foreseeable future.

“For each pound, we pay premiums for the kernels, and that supports this ecosystem where producers get compensated fairly for their work on the land, and agribusinesses are able to build more robust and profitable business models, because they have an end market for these Fairtrade-certified products.”

Reed-Diawuoh is currently trying out different packaging and flavors and is in discussions with partners to expand production capacity and move into Ghana. He’s also exploring corporate collaborations and has provided MIT with product over the past two years for conferences and other events.

“We’re experimenting with different growth strategies,” Reed-Diawuoh says. “We’re very much still in startup mode, but really trying to ramp up our sales and production.”

As GRIA scales, Reed-Diawuoh hopes it pushes consumers to start asking more of their favorite food brands.

“It’s absolutely critical that, if we’re sourcing produce in markets like the U.S. from places like West Africa, we’re hyper-focused on doing it in an ethical manner,” Reed-Diawuoh says. “The overall goal of GRIA is to ensure we are adhering to and promoting strict sourcing standards and being rigorous and thoughtful about the way we import product.”


Jane-Jane Chen: A model scientist who inspires the next generation

A research scientist and internationally recognized authority in the field of blood cell development reflects on 45 years at MIT.


Growing up in Taiwan, Jane-Jane Chen excelled at math and science, which, at that time, were promoted heavily by the government, and were taught at a high level. Learning rudimentary English as well, the budding scientist knew she wanted to come to the United States to continue her studies, after she earned a bachelor of science in agricultural chemistry from the National Taiwan University in Taipei.

But the journey to becoming a respected scientist, with many years of notable National Institutes of Health (NIH) and National Science Foundation-funded research findings, would require Chen to be uncommonly determined, to move far from her childhood home, to overcome cultural obstacles — and to have the energy to be a trailblazer — in a field where barriers to being a woman in science were significantly higher than they are today.

Today, Chen is looking back on her journey, and on her long career as a principal research scientist at the MIT Institute for Medical Engineering and Science (IMES), a position from which she recently retired after 45 dedicated years.

At MIT, Chen established herself as an internationally recognized authority in the field of blood cell development — specifically red blood cells, says Lee Gehrke, the Hermann L.F. Helmholtz Professor and core faculty in IMES, professor of microbiology and immunobiology and health science and technology at Harvard Medical School, and one of the scientists Chen worked with most closely. 

“Red cells are essential because they carry oxygen to our cells and tissues, requiring iron in the form of a co-factor called heme,” Gehrke says. “Both insufficient heme availability and excess heme are detrimental to red cell development, and Dr. Chen explored the molecular mechanisms allowing cells to adapt to variable heme levels to maintain blood cell production.”

During her MIT career, Chen produced influential biochemistry research on the heme-regulated eIF2 alpha kinase (first identified as the heme-regulated inhibitor of translation, HRI) and on the regulation of gene expression at the level of translation in anemia.

“Dr. Chen’s signature discovery is the molecular cloning of the cDNA of the heme regulated inhibitor protein (HRI), a master regulatory protein in gene expression under stress and disease conditions,” Gehrke says, adding that Chen “subsequently devoted her career to defining a molecular and biochemical understanding of this key protein kinase” and that she “has also contributed several invited review articles on the subject of red cell development, and her papers are seminal contributions to her field.”

Forging her path

Shortly after graduating from college, in 1973, Chen received a scholarship to come to California to study for her PhD in biochemistry at the University of Southern California School of Medicine. In Taiwan, Chen recalls, the balance between male and female students was even, about 50 percent each. Once she was in medical school in the United States, she found there were fewer female students, closer to 30 percent at that time.

But she says she was fortunate to have important female mentors while at USC, including her PhD advisor, Mary Ellen Jones, a renowned biochemist who is notable for her discovery of carbamyl phosphate, a chemical substance that is key to the biosynthesis of both pyrimidine nucleotides, and arginine and urea. Jones, whom The New York Times called a “crucial researcher on DNA” and a foundational basic cancer researcher, had worked with eventual Nobel laureate Fritz Lipmann at Massachusetts General Hospital. 

When Chen arrived, while there were other Taiwanese students at USC, there were not many at the medical school. Chen says she bonded with a young female scientist and student from Hong Kong and with another female student who was Korean and Chinese, but who was born in America. Forming these friendships was crucial for blunting the isolation she could sometimes feel as a newcomer to America, particularly her connection with the American-born young woman: “She helped me a lot with getting used to the language,” and the culture, Chen says. “It was very hard to be so far away from my family and friends,” she adds. “It was the very first time I had left home. By coincidence, I had a very nice roommate who was not Chinese, but knew the Chinese language conversationally, so that was so lucky … I still have the letters that my parents wrote to me. I was the only girl, and the eldest child (Chen has three younger brothers), so it was hard for all of us.”

“Mostly, the culture I learned was in the lab,” Chen remembers. “I had to work a long day in the lab, and I knew it was such a great opportunity — to go to seminars with professors to listen to speakers who had won, or would win, Nobel Prizes. My monthly living stipend was $300, so that had to stretch far. In my second year, more of my college friends had come to USC and Caltech, and I began to have more interactions with other Taiwanese students who were studying here.”

Chen’s first scientific discovery, in Jones’ laboratory, was that the fourth enzyme of pyrimidine biosynthesis, dihydroorotate dehydrogenase, is localized in the inner membrane of the mitochondria. As it more recently turned out, this enzyme plays dual roles, not only in pyrimidine biosynthesis but also in cellular redox homeostasis, and it has been demonstrated to be an important target for the development of cancer treatments.

Coming to MIT

After completing her degree, Chen received a postdoctoral fellowship to work at the Roche Institute of Molecular Biology, in New Jersey, for nine months. In 1979, she married Zong-Long Liau, who was then working at MIT Lincoln Laboratory, from which he also recently retired. She accepted a postdoctoral position to continue her scientific training in the laboratory of Irving M. London at MIT, and Jane-Jane and Zong-Long have lived in the Boston area ever since, raising two sons.

Looking back at her career, Chen says she is most proud of “being an established woman scientist with decades of NIH findings, and for being a mother of two wonderful sons.” During her time at MIT and IMES, she has worked with many renowned scientists, including Gehrke and London, professor of biology at MIT, professor of medicine at Harvard Medical School (HMS), founding director of the Harvard-MIT Program in Health Sciences and Technology (HST), and a recognized expert in molecular regulation of hemoglobin synthesis. She says that she is also in debt to the colleagues and collaborators at HMS and Children’s Hospital Boston for their scientific interests and support at the time when her research branched into the field of hematology, far different from her expertise in biochemistry. All of them are HST-educated physician scientists, including Stuart H. Orkin, Nancy C. Andrews, Mark D. Fleming, and Vijay G. Sankaran.

“We will miss Dr. Chen’s sage counsel on all matters scientific and communal,” says Elazer R. Edelman, the Edward J. Poitras Professor in Medical Engineering and Science, and the director of the Center for Clinical and Translational Research (CCTR), who was the director of IMES when Chen retired in June. “For generations, she has been an inspiration and guide to generations of students and established leaders across multiple communities — a model for all.”

She says her life in retirement “is a work in progress” — but she is working on a scientific review article, so that she can have “my last words on the research topics of my lab for the past 40 years.” Chen is pondering writing a memoir “reflecting on the journey of my life thus far, from Taiwan to MIT.” She also plans to travel to Taiwan more frequently, to better nurture and treasure the relationships with her three younger brothers, one of whom lives in Los Angeles.

She says that in looking back, she is grateful to have participated in a special grant program, awarded by the National Science Foundation, aimed at helping women scientists get their careers back on track after having a family. And she says she also remembers the advice of a female scientist in Jones’ lab during her last year of graduate study, who had stepped back from her research for a while after having two children: “She was not happy that she had done that, and she told me: Never drop out, try to always keep your hands in the research, and the work. So that is what I did.”


MIT Energy and Climate Club mobilizes future leaders to address global climate issues

One of the largest MIT clubs sees itself as “the umbrella of all things related to energy and climate on campus.”


One of MIT’s missions is helping to solve the world’s greatest problems, with a major focus on one of the most pressing issues facing the world today: climate change. The MIT Energy and Climate Club (MITEC), formerly known as the MIT Energy Club, has been working since 2004 to inform and educate the entire MIT community about this urgent issue and related matters.

MITEC, one of the largest clubs on campus, has hundreds of active members from every major, including both undergraduate and graduate students. With a broad reach across the Institute, MITEC is the hub for thought leadership and relationship-building across campus.

The club’s co-presidents Laurențiu Anton, doctoral candidate in electrical engineering and computer science; Rosie Keller, an MBA student in the MIT Sloan School of Management; and Thomas Lee, doctoral candidate in the Institute for Data, Systems, and Society, say that faculty, staff, and alumni are also welcome to join and interact with the continuously growing club.

While they closely collaborate on all aspects of the club, each of the co-presidents has a focus area to support the student managing directors and vice presidents for several of the club’s committees. Keller oversees the External Relations, Social, Launchpad, and Energy and Climate Hackathon leadership teams. Lee supports the leadership team for next spring’s Energy Conference. He also assists the club treasurer on budget and finance and guides the industry Sponsorships team. Anton oversees marketing, community and education as well as the Energy and Climate Night and Energy and Climate Career Fair leadership teams.

“We think of MITEC as the umbrella of all things related to energy and climate on campus. Our goal is to share actionable information and not just have discussions. We work with other organizations on campus, including the MIT Environmental Solutions Initiative, to bring awareness,” says Anton. “Our Community and Education team is currently working with the MIT ESI [Environmental Solutions Initiative] to create an ecosystem map that we’re excited to produce for the MIT community.”

To share their knowledge and get more people interested in solving climate and energy problems, each year MITEC hosts a variety of events including the MIT Energy and Climate Night, the MIT Energy and Climate Hack, the MIT Energy and Climate Career Fair, and the MIT Energy Conference to be held next spring March 3-4. The club also offers students the opportunity to gain valuable work experience while engaging with top companies, such as Constellation Energy and GE Vernova, on real climate and energy issues through their Launchpad Program.

Founded in 2006, the annual MIT Energy Conference is the largest student-run conference in North America focused on energy and climate issues. Hundreds of participants gather every year with CEOs, policymakers, investors, and scholars at the forefront of the global energy transition.

“The 2025 MIT Energy Conference’s theme is ‘Breakthrough to Deployment: Driving Climate Innovation to Market’ — which focuses on the importance of both cutting-edge research innovation as well as large-scale commercial deployment to successfully reach climate goals,” says Lee.

Anton notes that the first of MITEC’s four flagship events is the MIT Energy and Climate Night, a research symposium held each fall at the MIT Museum; this year it takes place on Nov. 8. The club invites a select number of keynote speakers and features several dozen student posters. Guests can walk around and engage with the students, who in turn get practice presenting their research. The club’s career fair will take place in the spring semester, shortly after Independent Activities Period.

MITEC also provides members opportunities to meet with companies that are working to improve the energy sector, work that helps to mitigate, as well as adapt to, the effects of climate change.

“We recently went to Provincetown and toured Eversource’s battery energy storage facility. This helped open doors for club members,” says Keller. “The Provincetown battery helps address grid reliability problems after extreme storms on Cape Cod — which speaks to energy’s connection to both the mitigation and adaptation aspects of climate change,” adds Lee.

“MITEC is also a great way to meet other students at MIT that you might not otherwise have a chance to,” says Keller.

“We’d always welcome more undergraduate students to join MITEC. There are lots of leadership opportunities within the club for them to take advantage of and build their resumes. We also have good and growing collaboration between different centers on campus such as the Sloan Sustainability Initiative and the MIT Energy Initiative. They support us with resources, introductions, and help amplify what we're doing. But students are the drivers of the club and set the agendas,” says Lee.

All three co-presidents are excited to hear that MIT President Sally Kornbluth wants to bring climate change solutions to the next level, and that she recently launched The Climate Project at MIT to kick off the Institute’s major new effort to accelerate and scale up climate change solutions.

“We look forward to connecting with the new directors of the Climate Project at MIT and Interim Vice President for Climate Change Richard Lester in the near future. We are eager to explore how MITEC can support and collaborate with the Climate Project at MIT,” says Anton.

Lee, Keller, and Anton want MITEC to continue fostering solutions to climate issues. They emphasized that while individual actions like bringing your own thermos, using public transportation, or recycling are necessary, there’s a bigger picture to consider. They encourage the MIT community to think critically about the infrastructure and extensive supply chains behind the products everyone uses daily.

“It’s not just about bringing a thermos; it’s also understanding the life cycle of that thermos, from production to disposal, and how our everyday choices are interconnected with global climate impacts,” says Anton.

“Everyone should get involved with this worldwide problem. We’d like to see more people think about how they can use their careers for change. To think how they can navigate the type of role they can play — whether it’s in finance or on the technical side. I think exploring what that looks like as a career is also a really interesting way of thinking about how to get involved with the problem,” says Keller.

“MITEC’s newsletter reaches more than 4,000 people. We’re grateful that so many people are interested in energy and climate change,” says Anton.


The changing geography of “energy poverty”

Study of the U.S. shows homes in the South and Southwest could use more aid for energy costs, due to a growing need for air conditioning in a warming climate.


A growing portion of Americans who are struggling to pay for their household energy live in the South and Southwest, reflecting a climate-driven shift away from heating needs and toward air conditioning use, an MIT study finds.

The newly published research also reveals that a major U.S. federal program that provides energy subsidies to households, by assigning block grants to states, does not yet fully match these recent trends.

The work evaluates the “energy burden” on households, defined as the percentage of income needed to pay for energy necessities, over the period from 2015 to 2020. Households with an energy burden greater than 6 percent of income are considered to be in “energy poverty.” With climate change, rising temperatures are expected to add financial stress in the South, where air conditioning is increasingly needed. Meanwhile, milder winters are expected to reduce heating costs in some colder regions.
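
As a rough illustration of that definition (the household figures below are invented, not drawn from the study), the burden calculation and the 6 percent test can be written as:

    # Illustrative only: hypothetical household figures, not data from the study.
    def energy_burden(annual_energy_cost, annual_income):
        """Share of income spent on energy necessities."""
        return annual_energy_cost / annual_income

    def in_energy_poverty(burden, threshold=0.06):
        """The paper's working definition: a burden above 6 percent of income."""
        return burden > threshold

    burden = energy_burden(annual_energy_cost=2400, annual_income=30000)
    print(round(burden, 2), in_energy_poverty(burden))  # 0.08 True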

“From 2015 to 2020, there is an increase in burden generally, and you do also see this southern shift,” says Christopher Knittel, an MIT energy economist and co-author of a new paper detailing the study’s results. About federal aid, he adds, “When you compare the distribution of the energy burden to where the money is going, it’s not aligned too well.”

The paper, “U.S. federal resource allocations are inconsistent with concentrations of energy poverty,” is published today in Science Advances.

The authors are Carlos Batlle, a professor at Comillas University in Spain and a senior lecturer with the MIT Energy Initiative; Peter Heller SM ’24, a recent graduate of the MIT Technology and Policy Program; Knittel, the George P. Shultz Professor at the MIT Sloan School of Management and associate dean for climate and sustainability at MIT; and Tim Schittekatte, a senior lecturer at MIT Sloan.

A scorching decade

The study, which grew out of graduate research that Heller conducted at MIT, deploys a machine-learning estimation technique that the scholars applied to U.S. energy use data.

Specifically, the researchers took a sample of about 20,000 households from the U.S. Energy Information Administration’s Residential Energy Consumption Survey, which includes a wide variety of demographic characteristics about residents, along with building-type and geographic information. Then, using the U.S. Census Bureau’s American Community Survey data for 2015 and 2020, the research team estimated the average household energy burden for every census tract in the lower 48 states — 73,057 in 2015, and 84,414 in 2020.

That allowed the researchers to chart the changes in energy burden in recent years, including the shift toward a greater energy burden in southern states. In 2015, Maine, Mississippi, Arkansas, Vermont, and Alabama were the five states (ranked in descending order) with the highest energy burden across census tracts. In 2020, that had shifted somewhat, with Maine and Vermont dropping on the list and southern states increasingly carrying a larger energy burden. That year, the top five states in descending order were Mississippi, Arkansas, Alabama, West Virginia, and Maine.

The data also reflect an urban-rural shift. In 2015, 23 percent of the census tracts where the average household lives in energy poverty were urban. That figure shrank to 14 percent by 2020.

All told, the data are consistent with the picture of a warming world, in which milder winters in the North, Northwest, and Mountain West require less heating fuel, while more extreme summer temperatures in the South require more air conditioning.

“Who’s going to be harmed most from climate change?” asks Knittel. “In the U.S., not surprisingly, it’s going to be the southern part of the U.S. And our study is confirming that, but also suggesting it’s the southern part of the U.S. that’s least able to respond. If you’re already burdened, the burden’s growing.”

An evolution for LIHEAP?

In addition to identifying the shift in energy needs during the last decade, the study also illuminates a longer-term change in U.S. household energy needs, dating back to the 1980s. The researchers compared the present-day geography of U.S. energy burden to the help currently provided by the federal Low Income Home Energy Assistance Program (LIHEAP), which dates to 1981.

Federal aid for energy needs actually predates LIHEAP, but the current program was introduced in 1981, then updated in 1984 to include cooling needs such as air conditioning. When the formula was updated in 1984, two “hold harmless” clauses were also adopted, guaranteeing states a minimum amount of funding.

Still, LIHEAP’s parameters also predate the rise of temperatures over the last 40 years, and the current study shows that, compared to the current landscape of energy poverty, LIHEAP distributes relatively less of its funding to southern and southwestern states.

“The way Congress uses formulas set in the 1980s keeps funding distributions nearly the same as they were in the 1980s,” Heller observes. “Our paper illustrates the shift in need that has occurred over the decades since then.”

Currently, it would take a fourfold increase in LIHEAP funding to ensure that no U.S. household experiences energy poverty. But the researchers tested a new funding design, which would help the worst-off households first, nationally, ensuring that no household would have an energy burden greater than 20.3 percent.
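
A minimal sketch of that “worst-off first” idea, assuming a simple rule that tops up each household just enough to bring its burden down to the cap (both the data and the rule are illustrative, not the paper’s actual allocation model):

    # Illustrative "worst-off-first" subsidy rule; all figures are hypothetical.
    def total_aid_needed(households, cap=0.203):
        """households: list of (annual_energy_cost, annual_income) pairs.
        Returns the total subsidy required so that no burden exceeds the cap."""
        total = 0.0
        for cost, income in households:
            if cost / income > cap:
                total += cost - cap * income  # aid that brings this burden down to the cap
        return total

    print(total_aid_needed([(2400, 30000), (3500, 12000), (1800, 60000)]))  # 1064.0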

“We think that’s probably the most equitable way to allocate the money, and by doing that, you now have a different amount of money that should go to each state, so that no one state is worse off than the others,” Knittel says.

And while the new distribution concept would require a certain amount of subsidy reallocation among states, it would be with the goal of helping all households avoid a certain level of energy poverty, across the country, at a time of changing climate, warming weather, and shifting energy needs in the U.S.

“We can optimize where we spend the money, and that optimization approach is an important thing to think about,” Knittel says. 


Institute Professor Emeritus John Little, a founder of operations research and marketing science, dies at 96

The MIT Sloan scholar was a part of the Institute community for nearly eight decades.


MIT Institute Professor Emeritus John D.C. Little ’48, PhD ’55, an inventive scholar whose work significantly influenced operations research and marketing, died on Sept. 27, at age 96. Having entered MIT as an undergraduate in 1945, he was part of the Institute community over a span of nearly 80 years and served as a faculty member at the MIT Sloan School of Management since 1962.

Little’s career was characterized by innovative computing work, an interdisciplinary and expansive research agenda, and research that was both theoretically robust and useful in practical terms for business managers. Little had a strong commitment to supporting and mentoring others at the Institute, and played a key role in helping shape the professional societies in his fields, such as the Institute for Operations Research and the Management Sciences (INFORMS).

He may be best known for his formulation of “Little’s Law,” a concept applied in operations research that generalizes the dynamics of queuing. Broadly, the theorem, expressed as L = λW, states that the average number of customers in a system equals their arrival rate multiplied by the average time each spends in the system. This result can be applied to many systems, from manufacturing to health care to customer service, and helps quantify and fix business bottlenecks, among other things.
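
As a quick, purely hypothetical illustration of the formula (the numbers below are invented for clarity):

    # Hypothetical illustration of Little's Law, L = lambda * W.
    arrival_rate = 12          # lambda: customers arriving per hour
    avg_time_in_system = 0.25  # W: hours each customer spends in the system, on average
    avg_in_system = arrival_rate * avg_time_in_system  # L
    print(avg_in_system)       # 3.0 customers in the system, on average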

Little is widely considered to have been instrumental in the development of both operations research and marketing science, where he also made a range of advances, starting in the 1960s. Drawing on innovations in computer modeling, he analyzed a broad range of issues in marketing, from customer behavior and brand loyalty to firm-level decisions, often about advertising deployment strategy. Little’s research methods evolved to incorporate the new streams of data that information technology increasingly made available, such as the purchasing information obtained from barcodes.

“John Little was a mentor and friend to so many of us at MIT and beyond,” says Georgia Perakis, the interim John C. Head III Dean of MIT Sloan. “He was also a pioneer — as the first doctoral student in the field of operations research, as the founder of the Marketing Group at MIT Sloan, and with his research, including Little’s Law, published in 1961. Many of us at MIT Sloan are lucky to have followed in John’s footsteps, learning from his research and his leadership both at the school and in many professional organizations, including the INFORMS society where he served as its first president. I am grateful to have known and learned from John myself.”

Little’s longtime colleagues in the marketing group at MIT Sloan shared those sentiments.

“John was truly an academic giant with pioneering work in queuing, optimization, decision sciences, and marketing science,” says Stephen Graves, the Abraham J. Siegel Professor Post Tenure of Management at MIT Sloan. “He also was an exceptional academic leader, being very influential in the shaping and strengthening of the professional societies for operations research and for marketing science. And he was a remarkable person as a mentor and colleague, always caring, thoughtful, wise, and with a New England sense of humor.”

John Dutton Conant Little was born in Boston and grew up in Andover, Massachusetts. At MIT he majored in physics and edited the campus humor magazine. Working at General Electric after graduation, he met his future wife, Elizabeth Alden PhD ’54; they both became doctoral students in physics at MIT, starting in 1951.

Alden studied ferroelectric materials, which exhibit complex properties of polarization, and produced a thesis titled, “The Dynamic Behavior of Domain Walls in Barium Titanate,” working with Professor Arthur R. von Hippel. Little, advised by Professor Philip Morse, used MIT’s famous Whirlwind I computer for his dissertation work. His thesis, titled “Use of Storage Water in a Hydroelectric System,” modeled the optimally low-cost approach to distributing water held by dams. It was a thesis in both physics and operations research, and appears to be the first one ever granted in operations research.

Little then served in the U.S. Army and spent five years on the faculty at what is now Case Western Reserve University, before returning to the Institute in 1962 as an associate professor of operations research and management at MIT Sloan. Having worked at the leading edge of using computing to tackle operations problems, Little began applying computer modeling to marketing questions. His research included models of consumer choice and promotional spending, among other topics.

Little published several dozen scholarly papers across operations research and marketing, and co-edited, along with Robert C. Blattberg and Rashi Glazer, the 1994 book “The Marketing Information Revolution,” published by Harvard Business School Press. Ever the wide-ranging scholar, he even published several studies about optimizing traffic signals and traffic flow.

Still, in addition to Little’s Law, some of his key work came from studies in marketing and management. In an influential 1970 paper in Management Science,  Little outlined the specifications that a good data-driven management model should have, emphasizing that business leaders should be given tools they could thoroughly grasp.

In a 1979 paper in Operations Research, Little described the elements needed to develop a robust model of ad expenditures for businesses, such as the geographic distribution of spending, and a firm’s spending over time. And in a 1983 paper with Peter Guadagni, published in Marketing Science, Little used the advent of scanner data for consumer goods to build a powerful model of consumer behavior and brand loyalty, which has remained influential.

Separate though these topics might be, Little always sought to explain the dynamics at work in each case. As a scholar, he “had the vision to perceive marketing as a source of interesting and relevant unexplored opportunities for OR [operations research] and management science,” wrote Little’s MIT colleagues John Hauser and Glen Urban in a biographical chapter about him, “Profile of John D.C. Little,” for the book “Profiles in Operations Research,” published in 2011. In it, Hauser and Urban detail the lasting contributions these papers and others made.

By 1967, Little had co-founded the firm Management Decisions Systems, which modeled marketing problems for major companies and was later purchased by Information Resources, Inc., on whose board Little served.

In 1989, Little was named Institute Professor, MIT’s highest faculty honor. He had previously served as director of the MIT Operations Research Center. At MIT Sloan he was the former head of the Management Science Area and the Behavioral and Policy Sciences Area.

For all his productivity as a scholar, Little also served as a valued mentor to many, while opening his family home outside of Boston to overseas-based faculty and students for annual Thanksgiving dinners. He also took pride in encouraging women to enter management and academia. In just one example, he was the principal faculty advisor for the late Asha Seth Kapadia SM ’65, one of the first international and female students at Sloan, who studied queuing theory and later became a longtime professor at the University of Texas School of Public Health.

Additionally, current MIT Sloan professor Juanjuan Zhang credits Little for inspiring her interest in the field; today Zhang is the John D.C. Little Professor of Marketing at MIT Sloan.

"John was a larger-than-life person," Zhang says. "His foundational work transformed marketing from art, to art, science, and engineering, making it a process that ordinary people can follow to succeed. He democratized marketing.”

Little’s presence as an innovative, interdisciplinary scholar who also encouraged others to pursue their own work is fundamental to the way he is remembered at MIT.

“John pioneered in operations research at MIT and is widely known for Little’s Law, but he did even more work in marketing science,” said Urban, an emeritus dean of MIT Sloan and the David Austin Professor in Marketing, Emeritus. “He founded the field of operations research modeling in marketing, with analytic work on adaptive advertising, and did fundamental work on marketing response. He was true to our MIT philosophy of ‘mens et manus’ [‘mind and hand’], as he proposed that models should be usable by managers as well as theoretically strong. Personally, John hired me as an assistant professor in 1966 and supported my work over the following 55 years at MIT. I am grateful to him, and sad to lose a friend and mentor.”

Hauser, the Kirin Professor of Marketing at MIT Sloan, added: “John made seminal contributions to many fields from operations to management science to founding marketing science. More importantly, he was a unique colleague who mentored countless faculty and students and who, by example, led with integrity and wit. I, and many others, owe our love of operations research and marketing science to John.”

In recognition of his scholarship, Little was elected to the National Academy of Engineering, and was a fellow of the American Association for the Advancement of Science. Among other honors, the American Marketing Association gave Little its Charles Parlin Award for contributions to the practice of marketing research, in 1979, and its Paul D. Converse Award for lifetime achievement, in 1992. Little was the first president of INFORMS, which honored him with its George E. Kimball Medal. Little was also president of The Institute of Management Sciences (TIMS), and the Operations Research Society of America (ORSA).

An avid jogger, biker, and seafood chef, Little was dedicated to his family. He was predeceased by his wife, Elizabeth, and his two sisters, Margaret and Francis. Little is survived by his children Jack, Sarah, Thomas, and Ruel; eight grandchildren; and two great-grandchildren. Arrangements for a memorial service have been entrusted to the Dee Funeral Home in Concord, Massachusetts.


Artificial intelligence meets “blisk” in new DARPA-funded collaboration

Collaborative multi-university team will pursue new AI-enhanced design tools and high-throughput testing methods for next-generation turbomachinery.


A recent award from the U.S. Defense Advanced Research Projects Agency (DARPA) brings together researchers from Massachusetts Institute of Technology (MIT), Carnegie Mellon University (CMU), and Lehigh University (Lehigh) under the Multiobjective Engineering and Testing of Alloy Structures (METALS) program. The team will research novel design tools for the simultaneous optimization of shape and compositional gradients in multi-material structures that complement new high-throughput materials testing techniques, with particular attention paid to the bladed disk (blisk) geometry commonly found in turbomachinery (including jet and rocket engines) as an exemplary challenge problem.

“This project could have important implications across a wide range of aerospace technologies. Insights from this work may enable more reliable, reusable rocket engines that will power the next generation of heavy-lift launch vehicles,” says Zachary Cordero, the Esther and Harold E. Edgerton Associate Professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and the project’s lead principal investigator. “This project merges classical mechanics analyses with cutting-edge generative AI design technologies to unlock the plastic reserve of compositionally graded alloys, allowing safe operation in previously inaccessible conditions.”

Different locations in blisks require different thermomechanical properties and performance, such as creep resistance, low-cycle fatigue resistance, and high strength. Large-scale production also necessitates consideration of cost and sustainability metrics, such as the sourcing and recycling of alloys, in the design.

“Currently, with standard manufacturing and design procedures, one must come up with a single magical material, composition, and processing parameters to meet ‘one part-one material’ constraints,” says Cordero. “Desired properties are also often mutually exclusive, prompting inefficient design tradeoffs and compromises.”

Although a one-material approach may be optimal for a singular location in a component, it may leave other locations exposed to failure, or may require a critical material to be carried throughout an entire part when it is only needed in a specific location. With the rapid advancement of additive manufacturing processes that enable voxel-based composition and property control, the team sees that leap-ahead performance in structural components is now possible.

Cordero’s collaborators include Zoltan Spakovszky, the T. Wilson (1953) Professor in Aeronautics in AeroAstro; A. John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering; Faez Ahmed, ABS Career Development Assistant Professor of mechanical engineering at MIT; S. Mohadeseh Taheri-Mousavi, assistant professor of materials science and engineering at CMU; and Natasha Vermaak, associate professor of mechanical engineering and mechanics at Lehigh.

The team’s expertise spans hybrid integrated computational material engineering and machine-learning-based material and process design, precision instrumentation, metrology, topology optimization, deep generative modeling, additive manufacturing, materials characterization, thermostructural analysis, and turbomachinery.

“It is especially rewarding to work with the graduate students and postdoctoral researchers collaborating on the METALS project, spanning from developing new computational approaches to building test rigs operating under extreme conditions,” says Hart. “It is a truly unique opportunity to build breakthrough capabilities that could underlie propulsion systems of the future, leveraging digital design and manufacturing technologies.”

This research is funded by DARPA under contract HR00112420303. The views, opinions, and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. government and no official endorsement should be inferred.


Study finds mercury pollution from human activities is declining

Models show that an unexpected reduction in human-driven emissions led to a 10 percent decline in atmospheric mercury concentrations.


MIT researchers have some good environmental news: Mercury emissions from human activity have been declining over the past two decades, despite global emissions inventories that indicate otherwise.

In a new study, the researchers analyzed measurements from all available monitoring stations in the Northern Hemisphere and found that atmospheric concentrations of mercury declined by about 10 percent between 2005 and 2020.

They used two separate modeling methods to determine what is driving that trend. Both techniques pointed to a decline in mercury emissions from human activity as the most likely cause.

Global inventories, on the other hand, have reported opposite trends. These inventories estimate atmospheric emissions using models that incorporate average emission rates of polluting activities and the scale of these activities worldwide.

“Our work shows that it is very important to learn from actual, on-the-ground data to try and improve our models and these emissions estimates. This is very relevant for policy because, if we are not able to accurately estimate past mercury emissions, how are we going to predict how mercury pollution will evolve in the future?” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.

The new results could help inform scientists who are embarking on a collaborative, global effort to evaluate pollution models and develop a more in-depth understanding of what drives global atmospheric concentrations of mercury.

However, due to a lack of data from global monitoring stations and limitations in the scientific understanding of mercury pollution, the researchers couldn’t pinpoint a definitive reason for the mismatch between the inventories and the recorded measurements.

“It seems like mercury emissions are moving in the right direction, and could continue to do so, which is heartening to see. But this was as far as we could get with mercury. We need to keep measuring and advancing the science,” adds co-author Noelle Selin, an MIT professor in the IDSS and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).

Feinberg and Selin, his MIT postdoctoral advisor, are joined on the paper by an international team of researchers that contributed atmospheric mercury measurement data and statistical methods to the study. The research appears this week in the Proceedings of the National Academy of Sciences.

Mercury mismatch

The Minamata Convention is a global treaty that aims to cut human-caused emissions of mercury, a potent neurotoxin that enters the atmosphere from sources like coal-fired power plants and small-scale gold mining.

The treaty, which was signed in 2013 and went into force in 2017, is evaluated every five years. The first meeting of its conference of parties coincided with disheartening news reports that said global inventories of mercury emissions, compiled in part from information from national inventories, had increased despite international efforts to reduce them.

This was puzzling news for environmental scientists like Selin. Data from monitoring stations showed atmospheric mercury concentrations declining during the same period.

Bottom-up inventories combine emission factors, such as the amount of mercury that enters the atmosphere when coal mined in a certain region is burned, with estimates of pollution-causing activities, like how much of that coal is burned in power plants.

“The big question we wanted to answer was: What is actually happening to mercury in the atmosphere and what does that say about anthropogenic emissions over time?” Selin says.

Modeling mercury emissions is especially tricky. First, mercury is the only metal that is in liquid form at room temperature, so it has unique properties. Moreover, mercury that has been removed from the atmosphere by sinks like the ocean or land can be re-emitted later, making it hard to identify primary emission sources.

At the same time, mercury is more difficult to study in laboratory settings than many other air pollutants, especially due to its toxicity, so scientists have limited understanding of all chemical reactions mercury can undergo. There is also a much smaller network of mercury monitoring stations, compared to other polluting gases like methane and nitrous oxide.

“One of the challenges of our study was to come up with statistical methods that can address those data gaps, because available measurements come from different time periods and different measurement networks,” Feinberg says.

Multifaceted models

The researchers compiled data from 51 stations in the Northern Hemisphere. They used statistical techniques to aggregate data from nearby stations, which helped them overcome data gaps and evaluate regional trends.

By combining data from 11 regions, their analysis indicated that Northern Hemisphere atmospheric mercury concentrations declined by about 10 percent between 2005 and 2020.

Then the researchers used two modeling methods — biogeochemical box modeling and chemical transport modeling — to explore possible causes of that decline.  Box modeling was used to run hundreds of thousands of simulations to evaluate a wide array of emission scenarios. Chemical transport modeling is more computationally expensive but enables researchers to assess the impacts of meteorology and spatial variations on trends in selected scenarios.
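
The paper’s code is not reproduced here, but the box-modeling idea can be sketched in a few lines: treat the atmosphere as a single reservoir whose mercury mass is driven by an assumed emissions trajectory and first-order removal, then sweep many candidate trajectories to see which reproduce the observed decline of roughly 10 percent. All parameter values below are placeholders, not the study’s.

    import random

    # Toy one-box model of atmospheric mercury; all numbers are placeholders,
    # not values from the study.
    def simulate(trend, years=15, tau=1.0, dt=1/12, m0=4000.0, e0=4000.0):
        """Fractional change in the atmospheric burden for a linear emissions
        trend (fraction per year), with first-order removal (lifetime tau)."""
        m = m0
        for step in range(int(years / dt)):
            e = e0 * (1 + trend * step * dt)  # emissions at this time
            m += (e - m / tau) * dt           # mass balance: in minus out
        return (m - m0) / m0

    # Screen many candidate emission trends for those consistent with a
    # roughly 10 percent decline over the study period.
    consistent = [t for t in (random.uniform(-0.03, 0.03) for _ in range(10000))
                  if abs(simulate(t) + 0.10) < 0.01]
    print(len(consistent), sum(consistent) / max(len(consistent), 1))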

For instance, they tested one hypothesis that there may be an additional environmental sink that is removing more mercury from the atmosphere than previously thought. The models would indicate the feasibility of an unknown sink of that magnitude.

“As we went through each hypothesis systematically, we were pretty surprised that we could really point to declines in anthropogenic emissions as being the most likely cause,” Selin says.

Their work underscores the importance of long-term mercury monitoring stations, Feinberg adds. Many stations the researchers evaluated are no longer operational because of a lack of funding.

While their analysis couldn’t zero in on exactly why the emissions inventories didn’t match up with actual data, they have a few hypotheses.

One possibility is that global inventories are missing key information from certain countries. For instance, the researchers resolved some discrepancies when they used a more detailed regional inventory from China. But there was still a gap between observations and estimates.

They also suspect the discrepancy might be the result of changes in two large sources of mercury that are particularly uncertain: emissions from small-scale gold mining and mercury-containing products.

Small-scale gold mining involves using mercury to extract gold from soil and is often performed in remote parts of developing countries, making it hard to estimate. Yet small-scale gold mining contributes about 40 percent of human-made emissions.

In addition, it’s difficult to determine how long it takes the pollutant to be released into the atmosphere from discarded products like thermometers or scientific equipment.

“We’re not there yet where we can really pinpoint which source is responsible for this discrepancy,” Feinberg says.

In the future, researchers from multiple countries, including MIT, will collaborate to study and improve the models they use to estimate and evaluate emissions. This research will be influential in helping that project move the needle on monitoring mercury, he says.

This research was funded by the Swiss National Science Foundation, the U.S. National Science Foundation, and the U.S. Environmental Protection Agency.


Bubble findings could unlock better electrode and electrolyzer designs

A new study of bubbles on electrode surfaces could help improve the efficiency of electrochemical processes that produce fuels, chemicals, and materials.


Industrial electrochemical processes that use electrodes to produce fuels and chemical products are hampered by the formation of bubbles that block parts of the electrode surface, reducing the area available for the active reaction. Such blockage reduces the performance of the electrodes by anywhere from 10 to 25 percent.

But new research reveals a decades-long misunderstanding about the extent of that interference. The findings show exactly how the blocking effect works and could lead to new ways of designing electrode surfaces to minimize inefficiencies in these widely used electrochemical processes.

It has long been assumed that the entire area of the electrode shadowed by each bubble is effectively inactivated. But it turns out that a much smaller area — roughly the area where the bubble actually contacts the surface — is blocked from electrochemical activity. The new insights could lead directly to new ways of patterning the surfaces to minimize the contact area and improve overall efficiency.
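
The difference is easiest to see with a hypothetical spherical bubble geometry (the dimensions below are invented, purely for illustration): the shadowed area scales with the bubble’s radius, while the contact area scales with the much smaller radius of the patch actually touching the electrode.

    import math

    # Illustrative geometry; these dimensions are not from the study.
    bubble_radius = 50e-6    # m: radius of the bubble itself
    contact_radius = 10e-6   # m: radius of the patch touching the electrode

    shadow_area = math.pi * bubble_radius ** 2    # area shadowed by the bubble
    contact_area = math.pi * contact_radius ** 2  # area actually deactivated
    print(f"blocked fraction of the shadow: {contact_area / shadow_area:.0%}")  # 4%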

The findings are reported today in the journal Nanoscale, in a paper by recent MIT graduate Jack Lake PhD ’23, graduate student Simon Rufer, professor of mechanical engineering Kripa Varanasi, research scientist Ben Blaiszik, and six others at the University of Chicago and Argonne National Laboratory. The team has made available an open-source, AI-based software tool that engineers and scientists can now use to automatically recognize and quantify bubbles formed on a given surface, as a first step toward controlling the electrode material’s properties.

Gas-evolving electrodes, often with catalytic surfaces that promote chemical reactions, are used in a wide variety of processes, including the production of “green” hydrogen without the use of fossil fuels, carbon-capture processes that can reduce greenhouse gas emissions, aluminum production, and the chlor-alkali process that is used to make widely used chemical products.

These are very widespread processes. The chlor-alkali process alone accounts for 2 percent of all U.S. electricity usage; aluminum production accounts for 3 percent of global electricity; and both carbon capture and hydrogen production are likely to grow rapidly in coming years as the world strives to meet greenhouse-gas reduction targets. So, the new findings could make a real difference, Varanasi says.

“Our work demonstrates that engineering the contact and growth of bubbles on electrodes can have dramatic effects” on how bubbles form and how they leave the surface, he says. “The knowledge that the area under bubbles can be significantly active ushers in a new set of design rules for high-performance electrodes to avoid the deleterious effects of bubbles.”

“The broader literature built over the last couple of decades has suggested that not only that small area of contact but the entire area under the bubble is passivated,” Rufer says. The new study reveals “a significant difference between the two models because it changes how you would develop and design an electrode to minimize these losses.”

To test and demonstrate the implications of this effect, the team produced different versions of electrode surfaces with patterns of dots that nucleated and trapped bubbles at different sizes and spacings. They were able to show that surfaces with widely spaced dots promoted large bubble sizes but only tiny areas of surface contact, which helped to make clear the difference between the expected and actual effects of bubble coverage.

Developing the software to detect and quantify bubble formation was necessary for the team’s analysis, Rufer explains. “We wanted to collect a lot of data and look at a lot of different electrodes and different reactions and different bubbles, and they all look slightly different,” he says. Creating a program that could deal with different materials and different lighting and reliably identify and track the bubbles was a tricky process, and machine learning was key to making it work, he says.

Using that tool, he says, they were able to collect “really significant amounts of data about the bubbles on a surface, where they are, how big they are, how fast they’re growing, all these different things.” The tool is now freely available for anyone to use via the GitHub repository.

By using that tool to correlate the visual measures of bubble formation and evolution with electrical measurements of the electrode’s performance, the researchers were able to disprove the accepted theory and to show that only the area of direct contact is affected. Videos further proved the point, revealing new bubbles actively evolving directly under parts of a larger bubble.

The researchers developed a very general methodology that can be applied to characterize and understand the impact of bubbles on any electrode or catalyst surface. They were able to quantify the bubble passivation effects in a new performance metric they call BECSA (bubble-induced electrochemically active surface area), as opposed to the ECSA (electrochemically active surface area) metric used in the field. “The BECSA metric was a concept we defined in an earlier study but did not have an effective method to estimate until this work,” says Varanasi.

The knowledge that the area under bubbles can be significantly active ushers in a new set of design rules for high-performance electrodes. This means that electrode designers should seek to minimize bubble contact area rather than simply bubble coverage, which can be achieved by controlling the morphology and chemistry of the electrodes. Surfaces engineered to control bubbles can not only improve the overall efficiency of the processes, and thus reduce energy use, but can also save on upfront materials costs. Many of these gas-evolving electrodes are coated with catalysts made of expensive metals like platinum or iridium, and the findings from this work can be used to engineer electrodes that reduce the material wasted by reaction-blocking bubbles.

Varanasi says that “the insights from this work could inspire new electrode architectures that not only reduce the usage of precious materials, but also improve the overall electrolyzer performance,” both of which would provide large-scale environmental benefits.

The research team included Jim James, Nathan Pruyne, Aristana Scourtas, Marcus Schwarting, Aadit Ambalkar, Ian Foster, and Ben Blaiszik at the University of Chicago and Argonne National Laboratory. The work was supported by the U.S. Department of Energy under the ARPA-E program. This work made use of the MIT.nano facilities.


Solar-powered desalination system requires no extra batteries

Because it doesn’t need expensive energy storage for times without sunshine, the technology could provide communities with drinking water at low costs.


MIT engineers have built a new desalination system that runs with the rhythms of the sun.

The solar-powered system removes salt from water at a pace that closely follows changes in solar energy. As sunlight increases through the day, the system ramps up its desalting process and automatically adjusts to any sudden variation in sunlight, for example by dialing down in response to a passing cloud or revving up as the skies clear.

Because the system can quickly react to subtle changes in sunlight, it maximizes the utility of solar energy, producing large quantities of clean water despite variations in sunlight throughout the day. In contrast to other solar-driven desalination designs, the MIT system requires no extra batteries for energy storage, nor a supplemental power supply, such as from the grid.

The engineers tested a community-scale prototype on groundwater wells in New Mexico over six months, working in variable weather conditions and water types. The system harnessed on average over 94 percent of the electrical energy generated from the system’s solar panels to produce up to 5,000 liters of water per day despite large swings in weather and available sunlight.

“Conventional desalination technologies require steady power and need battery storage to smooth out a variable power source like solar. By continually varying power consumption in sync with the sun, our technology directly and efficiently uses solar power to make water,” says Amos Winter, the Germeshausen Professor of Mechanical Engineering and director of the K. Lisa Yang Global Engineering and Research (GEAR) Center at MIT. “Being able to make drinking water with renewables, without requiring battery storage, is a massive grand challenge. And we’ve done it.”

The system is geared toward desalinating brackish groundwater — a salty source of water that is found in underground reservoirs and is more prevalent than fresh groundwater resources. The researchers see brackish groundwater as a huge untapped source of potential drinking water, particularly as reserves of fresh water are stressed in parts of the world. They envision that the new renewable, battery-free system could provide much-needed drinking water at low costs, especially for inland communities where access to seawater and grid power are limited.

“The majority of the population actually lives far enough from the coast that seawater desalination could never reach them. They consequently rely heavily on groundwater, especially in remote, low-income regions. And unfortunately, this groundwater is becoming more and more saline due to climate change,” says Jonathan Bessette, an MIT PhD student in mechanical engineering. “This technology could bring sustainable, affordable clean water to underreached places around the world.”

The researchers detail the new system in a paper appearing today in Nature Water. The study’s co-authors are Bessette, Winter, and staff engineer Shane Pratt.

Pump and flow

The new system builds on a previous design, which Winter and his colleagues, including former MIT postdoc Wei He, reported earlier this year. That system aimed to desalinate water through “flexible batch electrodialysis.”

Electrodialysis and reverse osmosis are two of the main methods used to desalinate brackish groundwater. With reverse osmosis, pressure is used to pump salty water through a membrane and filter out salts. Electrodialysis uses an electric field to draw out salt ions as water is pumped through a stack of ion-exchange membranes.

Scientists have looked to power both methods with renewable sources. But this has been especially challenging for reverse osmosis systems, which traditionally run at a steady power level that’s incompatible with naturally variable energy sources such as the sun.

Winter, He, and their colleagues focused on electrodialysis, seeking ways to make a more flexible, “time-variant” system that would be responsive to variations in renewable, solar power.

In their previous design, the team built an electrodialysis system consisting of water pumps, an ion-exchange membrane stack, and a solar panel array. The innovation in this system was a model-based control system that used sensor readings from every part of the system to predict the optimal rate at which to pump water through the stack and the voltage that should be applied to the stack to maximize the amount of salt drawn out of the water.

When the team tested this system in the field, it was able to vary its water production with the sun’s natural variations. On average, the system directly used 77 percent of the available electrical energy produced by the solar panels, which the team estimated was 91 percent more than traditionally designed solar-powered electrodialysis systems.

Still, the researchers felt they could do better.

“We could only calculate every three minutes, and in that time, a cloud could literally come by and block the sun,” Winter says. “The system could be saying, ‘I need to run at this high power.’ But some of that power has suddenly dropped because there’s now less sunlight. So, we had to make up that power with extra batteries.”

Solar commands

In their latest work, the researchers looked to eliminate the need for batteries by shaving the system’s response time to a fraction of a second. The new system is able to update its desalination rate three to five times per second. The faster response time enables the system to adjust to changes in sunlight throughout the day, without having to make up any lag in power with additional power supplies.

The key to the nimbler desalting is a simpler control strategy, devised by Bessette and Pratt. The new strategy is one of “flow-commanded current control,” in which the system first senses the amount of solar power that is being produced by the system’s solar panels. If the panels are generating more power than the system is using, the controller automatically “commands” the system to dial up its pumping, pushing more water through the electrodialysis stacks. Simultaneously, the system diverts some of the additional solar power by increasing the electrical current delivered to the stack, to drive more salt out of the faster-flowing water.

“Let’s say the sun is rising every few seconds,” Winter explains. “So, three times a second, we’re looking at the solar panels and saying, ‘Oh, we have more power — let’s bump up our flow rate and current a little bit.’ When we look again and see there’s still more excess power, we’ll up it again. As we do that, we’re able to closely match our consumed power with available solar power really accurately, throughout the day. And the quicker we loop this, the less battery buffering we need.”
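
A minimal sketch of that flow-commanded current control loop, with hypothetical stub functions standing in for the real sensor and actuator interfaces, and arbitrary illustrative gains, might look like this:

    import time

    # Hedged sketch of "flow-commanded current control." The sensor/actuator
    # functions below are hypothetical stubs, not the system's actual interfaces,
    # and the gains are arbitrary illustrative values.
    FLOW_PER_WATT = 0.01      # liters/min of pumping commanded per watt available
    CURRENT_PER_WATT = 0.02   # amps of stack current commanded per watt available

    def read_solar_power():   # stub: would query the solar panel array
        return 1200.0         # watts

    def set_pump_flow(lpm):   # stub: would command the pumps
        print(f"flow -> {lpm:.1f} L/min")

    def set_stack_current(amps):  # stub: would set the electrodialysis stack current
        print(f"current -> {amps:.1f} A")

    def control_step():
        p = read_solar_power()
        set_pump_flow(FLOW_PER_WATT * p)         # push more water when power rises
        set_stack_current(CURRENT_PER_WATT * p)  # drive more salt from the faster flow

    for _ in range(3):        # the real system loops three to five times per second
        control_step()
        time.sleep(0.25)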

The engineers incorporated the new control strategy into a fully automated system that they sized to desalinate brackish groundwater at a daily volume that would be enough to supply a small community of about 3,000 people. They operated the system for six months on several wells at the Brackish Groundwater National Desalination Research Facility in Alamogordo, New Mexico. Throughout the trial, the prototype operated under a wide range of solar conditions, harnessing over 94 percent of the solar panels’ electrical energy, on average, to directly power desalination.

“Compared to how you would traditionally design a solar desal system, we cut our required battery capacity by almost 100 percent,” Winter says.

The engineers plan to further test and scale up the system in hopes of supplying larger communities, and even whole municipalities, with low-cost, fully sun-driven drinking water.

“While this is a major step forward, we’re still working diligently to continue developing lower cost, more sustainable desalination methods,” Bessette says.

“Our focus now is on testing, maximizing reliability, and building out a product line that can provide desalinated water using renewables to multiple markets around the world," Pratt adds.

The team will be launching a company based on their technology in the coming months.

This research was supported in part by the National Science Foundation, the Julia Burke Foundation, and the MIT Morningside Academy of Design. This work was additionally supported in-kind by Veolia Water Technologies and Solutions and Xylem Goulds. 


Teen uses pharmacology learned through MIT OpenCourseWare to extract and study medicinal properties of plants

Inspired by traditional medicine, 17-year-old Tomás Orellana is on a mission to identify plants that can help treat students’ health issues.


Tomás Orellana, a 17-year-old high school student in Chile, had a vision: to create a kit of medicinal plants for Chilean school infirmaries. But first, he needed to understand the basic principles of pharmacology. That’s when Orellana turned to the internet and stumbled upon a gold mine of free educational resources and courses on the MIT OpenCourseWare website.

Right away, Orellana completed class HST.151 (Principles of Pharmacology), learning about the mechanisms of drug action, dose-response relations, pharmacokinetics, drug delivery systems, and more. He then shared this newly acquired knowledge with 16 members of his school science group so that together they could make Orellana’s vision a reality.

“I used the course to guide my classmates in the development of a phyto-medicinal school project, demonstrating in practice the innovation that the OpenCourseWare platform offers,” Orellana says in Spanish. “Thanks to the pharmacology course, I can collect and synthesize the information we need to learn to prepare the medicines for our project.”

OpenCourseWare, part of MIT Open Learning, offers free educational resources on its website from more than 2,500 courses that span the MIT curriculum, from introductory to advanced classes. A global model for open sharing in higher education, OpenCourseWare has an open license that allows the remix and reuse of its educational resources, which include video lectures, syllabi, lecture notes, problem sets, assignments, audiovisual content, and insights.

After completing the Principles of Pharmacology course, Orellana and members of his science group began extracting medicinal properties from plants, such as cedron, and studying them in an effort to determine which plants are best to grow in a school environment. Their goal, Orellana says, is to help solve students’ health problems during the school day, including menstrual, mental, intestinal, and respiratory issues.

“There is a tradition regarding the use of medicinal plants, but there is no scientific evidence that says that these properties really exist,” the 11th-grader explains. “What we want to do is know which plants are the best to grow in a school environment.”

Orellana’s science group discussed their scientific project on “Que Sucede,” a Chilean television show, and their interview will air soon. The group plans to continue working on their medicinal project during this academic year.

Next up on Orellana’s learning journey are the mysteries of the human brain. He plans to complete class 9.01 (Introduction to Neuroscience) through OpenCourseWare. His ultimate goal? To pursue a career in health sciences and become a professor so that he may continue to share knowledge — widely.

“I dream of becoming a university academic to have an even greater impact on current affairs in my country and internationally,” Orellana says. “All that will happen if I try hard enough.”

Orellana encourages learners to explore MIT Open Learning's free educational resources, including OpenCourseWare.

“Take advantage of MIT's free digital technologies and tools,” he says. “Keep an open mind as to how the knowledge can be applied.”


Applying risk and reliability analysis across industries

After an illustrious career at Idaho National Laboratory spanning three decades, Curtis Smith is now sharing his expertise in risk analysis and management with future generations of engineers at MIT.


On Feb. 1, 2003, the space shuttle Columbia disintegrated as it returned to Earth, killing all seven astronauts on board. The tragic incident compelled NASA to step up its risk assessments and safety protocols. The agency knew whom to call: Curtis Smith PhD ’02, who is now the KEPCO Professor of the Practice of Nuclear Science and Engineering at MIT.

The nuclear community has always been a leader in probabilistic risk analysis, and Smith’s work in risk-related research had made him an established expert in the field. When NASA came knocking, Smith had been working for the Nuclear Regulatory Commission (NRC) at the Idaho National Laboratory (INL). He pivoted quickly. For the next decade, Smith worked with NASA’s Office of Safety and Mission Assurance, supporting its increased use of risk analysis. It was a software tool that Smith helped develop, SAPHIRE, that NASA would adopt to bolster its own risk analysis program.

At MIT, Smith’s focus is on both sides of system operation: risk and reliability. A research project he has proposed involves evaluating the reliability of 3D-printed components and parts for nuclear reactors.

Growing up in Idaho

MIT is a long way from where Smith grew up, on the Shoshone-Bannock Native American reservation in Fort Hall, Idaho. His father worked at a chemical manufacturing plant, while his mother and grandmother operated a small restaurant on the reservation.

Southeast Idaho had a significant population of migrant workers, and Smith grew up with a diverse group of friends, mostly Native American and Hispanic. “It was a largely positive time and set a worldview for me in many wonderful ways,” Smith remembers. When he was a junior in high school, the family moved to Pingree, Idaho, a small town of barely 500. Smith attended Snake River High, a regional school, and remembers the deep impact his teachers had. “I learned a lot in grade school and had great teachers, so my love for education probably started there. I tried to emulate my teachers,” Smith says.

Smith went to Idaho State University in Pocatello for college, a 45-minute drive from his family. Drawn to science, he decided he wanted to study a subject that would benefit humanity the most: nuclear engineering. Fortunately, Idaho State has a strong nuclear engineering program. Smith completed a master’s degree in the same field at ISU while working for the Federal Bureau of Investigation in the security department during the swing shift — 5 p.m. to 1 a.m. — at the FBI offices in Pocatello. “It was a perfect job while attending grad school,” Smith says.

His KEPCO Professor of the Practice appointment is the second stint for Smith at MIT: He completed his PhD in the Department of Nuclear Science and Engineering (NSE) under the advisement of Professor George Apostolakis in 2002.

A career in risk analysis and management

After a doctorate at MIT, Smith returned to Idaho, conducting research in risk analysis for the NRC. He also taught technical courses and developed risk analysis software. “We did a whole host of work that supported the current fleet of nuclear reactors that we have,” Smith says.

He was 10 years into his career at INL when NASA recruited him, drawing on his expertise in risk analysis and applying it to space missions. “I didn’t really have a background in aerospace, but I was able to bring all the engineering I knew, conducting risk analysis for nuclear missions. It was really exciting and I learned a lot about aerospace,” Smith says.

Risk analysis uses statistics and data to answer complex questions involving safety. Among his projects: analyzing the risk involved in a Mars rover mission powered by a radioisotope power source. Even if the necessary plutonium is encased in extremely strong material, risk calculations have to factor in all eventualities, including the rocket blowing up.
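
To make that arithmetic concrete, the sketch below is a minimal illustration in Python — hypothetical numbers, not SAPHIRE output and not any actual mission model — of the basic move in probabilistic risk analysis: list the eventualities, estimate how likely each one is and how likely containment is to fail if it occurs, and add up the contributions.

# Minimal probabilistic risk sketch (hypothetical numbers, purely illustrative).
# Each scenario pairs the probability that the event occurs with the
# probability that containment fails if it does.
scenarios = {
    "launch_explosion": (0.02, 0.001),
    "reentry_breakup": (0.005, 0.01),
    "ground_impact": (0.001, 0.05),
}

# Treating the scenarios as rare and mutually exclusive, contributions add.
total_risk = 0.0
for name, (p_event, p_containment_fails) in scenarios.items():
    contribution = p_event * p_containment_fails
    total_risk += contribution
    print(f"{name:16s} contributes {contribution:.1e}")
print(f"Estimated probability of a release: {total_risk:.1e}")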

After the Fukushima incident in 2011, the Department of Energy (DoE) became more supportive of safety and risk analysis research. Smith found himself at the center of the action again, supporting large DoE research programs. He then moved to become the director of the Nuclear Safety and Regulatory Research Division at INL. Smith found he loved the role, mentoring and nurturing the careers of a diverse set of scientists. “It turned out to be much more rewarding than I had expected,” Smith says. Under his leadership, the division grew from 45 to almost 90 research staff and won multiple national awards.

Return to MIT

MIT NSE came calling in 2022, looking to fill the position of professor of the practice, an offer Smith couldn’t refuse. The department was looking to bulk up its risk and reliability offerings, and Smith was a great fit. The DoE division he had been supervising had also matured to the point where he felt ready to seek out something new.

“Just getting back to Boston is exciting,” Smith says. The last go-around involved bringing the family to the city and included a lot of sleepless nights. Smith’s wife, Jacquie, is also excited about being closer to the New England fan base. The couple has invested in season tickets for the Patriots and looks to attend as many sporting events as possible.

Smith is most excited about adding to the risk and reliability offerings at MIT at a time when the subject has become especially important for nuclear power. “I’m grateful for the opportunity to bring my knowledge and expertise from the last 30 years to the field,” he says. Being a professor of the practice of NSE carries with it a responsibility to unite theory and practice, something Smith is especially good at. “We always have to answer the question of, ‘How do I take the research and make that practical,’ especially for something important like nuclear power, because we need much more of these ideas in industry,” he says.

He is particularly excited about developing the next generation of nuclear scientists. “Having the ability to do this at a place like MIT is especially fulfilling and something I have been desiring my whole career,” Smith says.


Cancer biologists discover a new mechanism for an old drug

Study reveals the drug, 5-fluorouracil, acts differently in different types of cancer — a finding that could help researchers design better drug combinations.


Since the 1950s, a chemotherapy drug known as 5-fluorouracil has been used to treat many types of cancer, including blood cancers and cancers of the digestive tract.

Doctors have long believed that this drug works by damaging the building blocks of DNA. However, a new study from MIT has found that in cancers of the colon and other gastrointestinal cancers, it actually kills cells by interfering with RNA synthesis.

The findings could have a significant effect on how doctors treat many cancer patients. Usually, 5-fluorouracil is given in combination with chemotherapy drugs that damage DNA, but the new study found that for colon cancer, this combination does not achieve the synergistic effects that were hoped for. Instead, combining 5-FU with drugs that affect RNA synthesis could make it more effective in patients with GI cancers, the researchers say.

“Our work is the most definitive study to date showing that RNA incorporation of the drug, leading to an RNA damage response, is responsible for how the drug works in GI cancers,” says Michael Yaffe, a David H. Koch Professor of Science at MIT, the director of the MIT Center for Precision Cancer Medicine, and a member of MIT’s Koch Institute for Integrative Cancer Research. “Textbooks implicate the DNA effects of the drug as the mechanism in all cancer types, but our data shows that RNA damage is what’s really important for the types of tumors, like GI cancers, where the drug is used clinically.”

Yaffe, the senior author of the new study, hopes to plan clinical trials of 5-fluorouracil with drugs that would enhance its RNA-damaging effects and kill cancer cells more effectively.

Jung-Kuei Chen, a Koch Institute research scientist, and Karl Merrick, a former MIT postdoc, are the lead authors of the paper, which appears today in Cell Reports Medicine.

An unexpected mechanism

Clinicians use 5-fluorouracil (5-FU) as a first-line drug for colon, rectal, and pancreatic cancers. It’s usually given in combination with oxaliplatin or irinotecan, which damage DNA in cancer cells. The combination was thought to be effective because 5-FU can disrupt the synthesis of DNA nucleotides. Without those building blocks, cells with damaged DNA wouldn’t be able to efficiently repair the damage and would undergo cell death.

Yaffe’s lab, which studies cell signaling pathways, wanted to further explore the underlying mechanisms of how these drug combinations preferentially kill cancer cells.

The researchers began by testing 5-FU in combination with oxaliplatin or irinotecan in colon cancer cells grown in the lab. To their surprise, they found that not only were the drugs not synergistic, in many cases they were less effective at killing cancer cells than one would expect by simply adding together the effects of 5-FU and the DNA-damaging drug, each given alone.

“One would have expected these combinations to cause synergistic cancer cell death because you are targeting two different aspects of a shared process: breaking DNA, and making nucleotides,” Yaffe says. “Karl looked at a dozen colon cancer cell lines, and not only were the drugs not synergistic, in most cases they were antagonistic. One drug seemed to be undoing what the other drug was doing.”

Yaffe’s lab then teamed up with Adam Palmer, an assistant professor of pharmacology at the University of North Carolina School of Medicine, who specializes in analyzing data from clinical trials. Palmer’s research group examined data from colon cancer patients who had been on one or more of these drugs and showed that the drugs did not show synergistic effects on survival in most patients.

“This confirmed that when you give these combinations to people, it’s not generally true that the drugs are actually working together in a beneficial way within an individual patient,” Yaffe says. “Instead, it appears that one drug in the combination works well for some patients while another drug in the combination works well in other patients. We just cannot yet predict which drug by itself is best for which patient, so everyone gets the combination.”

These results led the researchers to wonder just how 5-FU was working, if not by disrupting DNA repair. Studies in yeast and mammalian cells had shown that the drug also gets incorporated into RNA nucleotides, but there has been dispute over how much this RNA damage contributes to the drug’s toxic effects on cancer cells.

Inside cells, 5-FU is broken down into two different metabolites. One of these gets incorporated into DNA nucleotides, and the other into RNA nucleotides. In studies of colon cancer cells, the researchers found that the metabolite that interferes with RNA was much more effective at killing colon cancer cells than the one that disrupts DNA.

That RNA damage appears to primarily affect ribosomal RNA, a molecule that forms part of the ribosome — a cell organelle responsible for assembling new proteins. If cells can’t form new ribosomes, they can’t produce enough proteins to function. Additionally, the lack of undamaged ribosomal RNA causes cells to destroy a large set of proteins that normally bind up the RNA to make new functional ribosomes.

The researchers are now exploring how this ribosomal RNA damage leads cells to undergo programmed cell death, or apoptosis. They hypothesize that sensing of the damaged RNAs within cell structures called lysosomes somehow triggers an apoptotic signal.

“My lab is very interested in trying to understand the signaling events during disruption of ribosome biogenesis, particularly in GI cancers and even some ovarian cancers, that cause the cells to die. Somehow, they must be monitoring the quality control of new ribosome synthesis, which somehow is connected to the death pathway machinery,” Yaffe says.

New combinations

The findings suggest that drugs that stimulate ribosome production could work together with 5-FU to make a highly synergistic combination. In their study, the researchers showed that a molecule that inhibits KDM2A, a suppressor of ribosome production, helped to boost the rate of cell death in colon cancer cells treated with 5-FU.

The findings also suggest a possible explanation for why combining 5-FU with a DNA-damaging drug often makes both drugs less effective. Some DNA-damaging drugs send a signal to the cell to stop making new ribosomes, which would negate 5-FU’s effect on RNA. A better approach may be to give the two drugs a few days apart, which would give patients the potential benefits of each drug without having them cancel each other out.

“Importantly, our data doesn’t say that these combination therapies are wrong. We know they’re effective clinically. It just says that if you adjust how you give these drugs, you could potentially make those therapies even better, with relatively minor changes in the timing of when the drugs are given,” Yaffe says.

He is now hoping to work with collaborators at other institutions to run a phase 2 or 3 clinical trial in which patients receive the drugs on an altered schedule.

“A trial is clearly needed to look for efficacy, but it should be straightforward to initiate because these are already clinically accepted drugs that form the standard of care for GI cancers. All we’re doing is changing the timing with which we give them,” he says.

The researchers also hope that their work could lead to the identification of biomarkers that predict which patients’ tumors will be more susceptible to drug combinations that include 5-FU. One such biomarker could be RNA polymerase I, which is active when cells are producing a lot of ribosomal RNA.

The research was funded by the Damon Runyon Cancer Research Foundation, a fellowship from the Ludwig Center at MIT, the National Institutes of Health, the Ovarian Cancer Research Fund, the Charles and Marjorie Holloway Foundation, and the STARR Cancer Consortium.


Victor Ambros ’75, PhD ’79 and Gary Ruvkun share Nobel Prize in Physiology or Medicine

The scientists, who worked together as postdocs at MIT, are honored for their discovery of microRNA — a class of molecules that are critical for gene regulation.


MIT alumnus Victor Ambros ’75, PhD ’79 and Gary Ruvkun, who did his postdoctoral training at MIT, will share the 2024 Nobel Prize in Physiology or Medicine, the Nobel Assembly at the Karolinska Institute announced this morning in Stockholm.

Ambros, a professor at the University of Massachusetts Chan Medical School, and Ruvkun, a professor at Harvard Medical School and Massachusetts General Hospital, were honored for their discovery of microRNA, a class of tiny RNA molecules that play a critical role in gene control.

“Their groundbreaking discovery revealed a completely new principle of gene regulation that turned out to be essential for multicellular organisms, including humans. It is now known that the human genome codes for over one thousand microRNAs. Their surprising discovery revealed an entirely new dimension to gene regulation. MicroRNAs are proving to be fundamentally important for how organisms develop and function,” the Nobel committee said in its announcement today.

During the late 1980s, Ambros and Ruvkun both worked as postdocs in the laboratory of H. Robert Horvitz, a David H. Koch Professor at MIT, who was awarded the Nobel Prize in 2002.

While in Horvitz’s lab, the pair began studying gene control in the roundworm C. elegans — an effort that laid the groundwork for their Nobel discoveries. They studied two mutant forms of the worm, known as lin-4 and lin-14, that showed defects in the timing of the activation of genetic programs that control development.

In the early 1990s, while Ambros was a faculty member at Harvard University, he made a surprising discovery. The lin-4 gene, instead of encoding a protein, produced a very short RNA molecule that appeared to inhibit the expression of lin-14.

At the same time, Ruvkun was continuing to study these C. elegans genes in his lab at MGH and Harvard. He showed that lin-4 did not inhibit lin-14 by preventing the lin-14 gene from being transcribed into messenger RNA; instead, it appeared to turn off the gene’s expression later on, by preventing production of the protein encoded by lin-14.

The two compared results and realized that the sequence of lin-4 was complementary to some short sequences of lin-14. Lin-4, they showed, was binding to messenger RNA encoding lin-14 and blocking it from being translated into protein — a mechanism for gene control that had never been seen before. Those results were published in two articles in the journal Cell in 1993.
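
To make the word “complementary” concrete, the toy Python sketch below uses made-up sequences — not the real lin-4 or lin-14 — to show how a short RNA can pair base for base (A with U, G with C) with a site on a target messenger RNA.

# Toy illustration of RNA complementarity (made-up sequences, not the actual
# lin-4 or lin-14). A pairs with U and G pairs with C; the two strands run in
# opposite directions, so the target site is read reversed.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def is_complementary(small_rna, mrna_site):
    """True if the small RNA base-pairs with the site read in the opposite direction."""
    if len(small_rna) != len(mrna_site):
        return False
    return all(PAIR[b] == t for b, t in zip(small_rna, reversed(mrna_site)))

small_rna = "UCCCUGAGA"   # hypothetical short regulatory RNA
mrna_site = "UCUCAGGGA"   # hypothetical binding site in a target messenger RNA
print(is_complementary(small_rna, mrna_site))  # True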

In an interview with the Journal of Cell Biology, Ambros credited the contributions of his collaborators, including his wife, Rosalind “Candy” Lee ’76, and postdoc Rhonda Feinbaum, who both worked in his lab, cloned and characterized the lin-4 microRNA, and were co-authors on one of the 1993 Cell papers.

In 2000, Ruvkun published the discovery of another microRNA molecule, encoded by a gene called let-7, which is found throughout the animal kingdom. Since then, more than 1,000 microRNA genes have been found in humans.

“Ambros and Ruvkun’s seminal discovery in the small worm C. elegans was unexpected, and revealed a new dimension to gene regulation, essential for all complex life forms,” the Nobel citation declared.

Ambros, who was born in New Hampshire and grew up in Vermont, earned his PhD at MIT under the supervision of David Baltimore, then an MIT professor of biology, who received a Nobel Prize in 1975. Ambros was a longtime faculty member at Dartmouth College before joining the faculty at the University of Massachusetts Chan Medical School in 2008.

Ruvkun is a graduate of the University of California at Berkeley and earned his PhD at Harvard University before joining Horvitz’s lab at MIT.


On technology in schools, think evolution, not revolution

Associate Professor Justin Reich’s work shows high-tech tools infuse into education one step at a time, as schools keep adapting and changing.


Back in 1913, Thomas Edison confidently proclaimed, “Books will soon be obsolete in the public schools.” At the time, Edison was advocating for motion pictures as an educational device. “Our school system will be completely changed inside of 10 years,” he added.

Edison was not wrong that video recordings could help people learn. On the other hand, students still read books today. Like others before and after him, Edison thought one particular technology was going to completely revolutionize education. In fact, technologies do get adopted into schools, but usually quite gradually and without altering the fundamentals of education: a good classroom with good teachers and a community of willing students.

The idea that technology changes education incrementally is central to Justin Reich’s work. Reich is an associate professor in MIT’s Comparative Media Studies/Writing program who has been studying schools for a couple of decades, as a teacher, consultant, and scholar. Reich is an advocate for technology, but with a realistic perspective.

Time after time, entrepreneurs claim tech will upend what they depict as stagnation in schools. Both parts of those claims usually miss the mark: Tech tools produce not revolution but evolution, in schools that are frequently changing anyway. Reich’s work emphasizes this alternate framework.

“In the history of education technology, the two most common findings are, first, when teachers get new technology, they use it to do what they were already doing,” Reich says. “It takes quite a bit of time, practice, coaching, messing up, trying again, and iteration, to have new technologies lead to new and better practices.”

The second finding, meanwhile, is that ed-tech tools are most readily adopted by the well-off.

“Almost every educational technology we’ve ever developed disproportionately benefits the affluent,” Reich says. “Even when we make things available for free, people with more financial, social, and technical capital are more likely to take advantage of innovations. Those are two findings from the research literature that people don’t want to hear.”

Some people must want to hear them: Reich has written two well-regarded books about education, and for his scholarship and teaching was awarded tenure earlier this year at MIT, where he founded the Teaching Systems Lab.

“I’ve spent a substantial portion of my career reminding people of those two things, and demonstrating them again and again,” Reich says. 

Optimized like a shark

Long before he made a living by studying schools, Reich pictured himself working in them. Indeed, that was his career plan.

“I wanted to be a teacher,” Reich says. He received his undergraduate degree in interdisciplinary studies from the University of Virginia, then earned an MA in history from the same university, writing a thesis about the U.S. National Parks system.

Reich then got a job in the early 2000s as a history teacher at a private school in the Boston area. Soon the school administrators gave Reich a cart of laptops and encouraged him to put the new tools to use. Many history archives were becoming digitized, so Reich happily integrated the laptops and web-based sources into his lessons.

Before long Reich co-founded EdTechTeacher, a consulting firm helping schools use technology productively. And his own teaching reinforced a lesson: When larger practices in a discipline change, schools can use technology to follow suit; it will make less difference otherwise. Then too, schools also adapt and evolve in ways unrelated to technology. For instance, we now educate a greater breadth of people than ever.

“You can absolutely improve schools,” Reich says. “And we improve schools all the time. It’s just a long, slow process, and everything is kind of incremental.”

Eventually Reich went back to school himself, earning his PhD from Harvard University’s Graduate School of Education in 2012. At the time, large-scale online college courses were seen as a potentially disruptive force in higher education. But that proposed revolution became an evolution, with online learning producing uneven results for K-12 students and undergraduates while being used more effectively in some graduate programs. Reich examines the subject in his 2020 book, “Failure to Disrupt,” about technologies intended to enhance education at scale.

“Online learning is good for people who are already well-equipped for learning, and those tend to be well-off, educated people,” Reich says. The Covid-19 pandemic also helped reinforce the value of in-person learning. The physical classroom may date to ancient times, but it is a durable innovation.

“Technology gets introduced into educational systems, when it’s possible that the systems are already pretty optimized for what they want to do,” Reich says. Citing another scholar of education, he notes, “Mike Caulfield says, ‘We think of schools as old and ancient, but maybe they are in the way a great white shark is, optimized for its environment.’”

Okay, but what about AI?

Reich has now seen many supposed ed-tech revolutions firsthand and studied many others from the past. The latest such potential revolution, of course, is artificial intelligence, currently the subject of massive investment and attention. Will AI be different, and fundamentally transform the way we learn? Reich and a colleague, Jesse Dukes, are conducting a research project to find out how schools are currently using AI. So far, Reich thinks, the impact is not huge.

“A lot of folks are saying, ‘AI is going to be amazing! It’s going to transform everything!’” Reich says. “And we’re spending a lot of time with teachers and students asking what they’re actually doing. And of course AI is not transformative. Teachers are finding modest ways to integrate it into their practice, but the main function of AI in schools is kids using it to do their homework, which is probably not good for learning, on net.”

To some degree, Reich suspects, teachers are now devoting more time to in-class writing assignments, to work around students substituting ChatGPT for their own writing. As he notes, “Using in-class time differently to accommodate for changes in technology is something educators have gotten really good at doing over the last decade. This doesn’t seem like a tidal wave crashing over them.”

Reich, again, is not an opponent of technology, but a realist about it, including AI. “A lot of new things are probably helpful in some way, some place, so let’s find it,” he says. In the meantime, schools will be grappling with a lot of hard problems that tech alone will not solve.

“If you’re working at a school serving kids furthest from opportunity in the country, the biggest problem you’re facing right now is chronic absenteeism,” Reich says. “You’re having a really hard time getting kids to show up. AI doesn’t really have anything to do with that.”

Overall, Reich thinks, the key in sustaining good schools is to keep tinkering on many fronts. Educators should “act in short design spirals,” as he wrote in his 2023 book, “Iterate: The Secret to Innovation in Schools,” rather than waiting for radical technology solutions. In education, the tortoise will usually beat the disruptor.

“Improving education is a lot of hard work, and it’s a long process, but at the other end of it, you can get genuine improvement,” Reich concludes.


Modeling relationships to solve complex problems efficiently

Associate Professor Julian Shun develops high-performance algorithms and frameworks for large-scale graph processing.


The German philosopher Friedrich Nietzsche once said that “invisible threads are the strongest ties.” One could think of “invisible threads” as tying together related objects, like the homes on a delivery driver’s route, or more nebulous entities, such as transactions in a financial network or users in a social network.

Computer scientist Julian Shun studies these types of multifaceted but often invisible connections using graphs, where objects are represented as points, or vertices, and relationships between them are modeled by line segments, or edges.

Shun, a newly tenured associate professor in the Department of Electrical Engineering and Computer Science, designs graph algorithms that could be used to find the shortest path between homes on the delivery driver’s route or detect fraudulent transactions made by malicious actors in a financial network.

But with the increasing volume of data, such networks have grown to include billions or even trillions of objects and connections. To find efficient solutions, Shun builds high-performance algorithms that leverage parallel computing to rapidly analyze even the most enormous graphs. As parallel programming is notoriously difficult, he also develops user-friendly programming frameworks that make it easier for others to write efficient graph algorithms of their own.

“If you are searching for something in a search engine or social network, you want to get your results very quickly. If you are trying to identify fraudulent financial transactions at a bank, you want to do so in real-time to minimize damages. Parallel algorithms can speed things up by using more computing resources,” explains Shun, who is also a principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Such algorithms are frequently used in online recommendation systems. Search for a product on an e-commerce website and odds are you’ll quickly see a list of related items you could also add to your cart. That list is generated with the help of graph algorithms that leverage parallelism to rapidly find related items across a massive network of users and available products.
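
For readers new to the graph abstraction, the short sketch below — ordinary sequential Python, not one of Shun’s parallel frameworks — shows a network stored as an adjacency list and a breadth-first search that finds a fewest-stop route between two points, the kind of traversal his parallel algorithms scale up to graphs with billions of edges.

from collections import deque

# A tiny graph as an adjacency list: each vertex maps to its neighbors.
# Vertices might be stops on a delivery route; edges are direct roads.
graph = {
    "depot": ["A", "B"],
    "A": ["depot", "C"],
    "B": ["depot", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search: returns a fewest-edge path from start to goal."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        if vertex == goal:
            path = []           # reconstruct by walking parent pointers back
            while vertex is not None:
                path.append(vertex)
                vertex = parents[vertex]
            return path[::-1]
        for neighbor in graph[vertex]:
            if neighbor not in parents:
                parents[neighbor] = vertex
                queue.append(neighbor)
    return None                 # goal unreachable

print(shortest_path(graph, "depot", "D"))  # ['depot', 'B', 'D']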

Campus connections

As a teenager, Shun had only one experience with computers: a high school class on building websites. More interested in math and the natural sciences than technology, he intended to major in one of those subjects when he enrolled as an undergraduate at the University of California at Berkeley.

But during his first year, a friend recommended he take an introduction to computer science class. While he wasn’t sure what to expect, he decided to sign up.

“I fell in love with programming and designing algorithms. I switched to computer science and never looked back,” he recalls.

That initial computer science course was self-paced, so Shun taught himself most of the material. He enjoyed the logical aspects of developing algorithms and the short feedback loop of computer science problems. Shun could input his solutions into the computer and immediately see whether he was right or wrong. And the errors in the wrong solutions would guide him toward the right answer.

“I’ve always thought that it was fun to build things, and in programming, you are building solutions that do something useful. That appealed to me,” he adds.

After graduation, Shun spent some time in industry but soon realized he wanted to pursue an academic career. At a university, he knew he would have the freedom to study problems that interested him.

Getting into graphs

He enrolled as a graduate student at Carnegie Mellon University, where he focused his research on applied algorithms and parallel computing.

As an undergraduate, Shun had taken theoretical algorithms classes and practical programming courses, but the two worlds didn’t connect. He wanted to conduct research that combined theory and application. Parallel algorithms were the perfect fit.

“In parallel computing, you have to care about practical applications. The goal of parallel computing is to speed things up in real life, so if your algorithms aren’t fast in practice, then they aren’t that useful,” he says.

At Carnegie Mellon, he was introduced to graph datasets, where objects in a network are modeled as vertices connected by edges. He felt drawn to the many applications of these types of datasets, and the challenging problem of developing efficient algorithms to handle them.

After completing a postdoctoral fellowship at Berkeley, Shun sought a faculty position and decided to join MIT. He had been collaborating with several MIT faculty members on parallel computing research, and was excited to join an institute with such a breadth of expertise.

In one of his first projects after joining MIT, Shun joined forces with Department of Electrical Engineering and Computer Science professor and fellow CSAIL member Saman Amarasinghe, an expert on programming languages and compilers, to develop a programming framework for graph processing known as GraphIt. The easy-to-use framework, which generates efficient code from high-level specifications, performed about five times faster than the next best approach.

“That was a very fruitful collaboration. I couldn’t have created a solution that powerful if I had worked by myself,” he says.

Shun also expanded his research focus to include clustering algorithms, which seek to group related datapoints together. He and his students build parallel algorithms and frameworks for quickly solving complex clustering problems, which can be used for applications like anomaly detection and community detection.

Dynamic problems

Recently, he and his collaborators have been focusing on dynamic problems where data in a graph network change over time.

When a dataset has billions or trillions of data points, running an algorithm from scratch to make one small change could be extremely expensive from a computational point of view. He and his students design parallel algorithms that process many updates at the same time, improving efficiency while preserving accuracy.
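
As a toy illustration of that incremental idea — a sequential sketch, not one of the group’s parallel algorithms — the code below maintains connected components with a union-find structure as a batch of new edges arrives, so each update touches only a few vertices instead of triggering a recomputation over the whole graph.

# Incremental connectivity under edge insertions (illustrative only; real
# dynamic graph algorithms are parallel and also handle deletions).
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        root_a, root_b = self.find(a), self.find(b)
        if root_a != root_b:
            self.parent[root_a] = root_b

uf = UnionFind(6)                 # vertices 0..5, no edges yet
batch = [(0, 1), (2, 3), (1, 2)]  # a batch of edge insertions arrives
for u, v in batch:
    uf.union(u, v)                # each update does a little local work

print(uf.find(0) == uf.find(3))   # True: 0 and 3 are now connected
print(uf.find(0) == uf.find(5))   # False: vertex 5 is still isolated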

But these dynamic problems also pose one of the biggest challenges Shun and his team must work to overcome. Because there aren’t many dynamic datasets available for testing algorithms, the team often must generate synthetic data, which may not be realistic and could hamper the performance of their algorithms in the real world.

In the end, his goal is to develop dynamic graph algorithms that perform efficiently in practice while also being backed by theoretical guarantees. That ensures they will be applicable across a broad range of settings, he says.

Shun expects dynamic parallel algorithms to become an even greater focus of research in the future. As datasets continue to become larger, more complex, and more rapidly changing, researchers will need to build more efficient algorithms to keep up.

He also expects new challenges to come from advancements in computing technology, since researchers will need to design new algorithms to leverage the properties of novel hardware.

“That’s the beauty of research — I get to try and solve problems other people haven’t solved before and contribute something useful to society,” he says.


Laura Lewis and Jing Kong receive postdoctoral mentoring award

Advisors commended for providing exceptional individualized mentoring for postdocs.


MIT professors Laura Lewis and Jing Kong have been recognized with the MIT Postdoctoral Association’s Award for Excellence in Postdoctoral Mentoring. The award is given annually to faculty or other principal investigators (PIs) whose current and former postdoctoral scholars say they stand out in their efforts to create a supportive work environment for postdocs and support postdocs’ professional development.

This year, the award identified exceptional mentors in two categories. Lewis, the Athinoula A. Martinos Associate Professor in the Institute for Medical Engineering and Science and the Department of Electrical Engineering and Computer Science (EECS), was recognized as an early-career mentor. Kong, the Jerry McAfee (1940) Professor in Engineering in the Research Laboratory of Electronics and EECS, was recognized as an established mentor.

“It’s a very diverse kind of mentoring that you need for a postdoc,” said Vipindev Adat Vasudevan, who chaired the Postdoctoral Association committee organizing the award. “Every postdoc has different requirements. Some of the people will be going to industry, some of the people are going for academia… so everyone comes with a different objective.”

Vasudevan presented the award at a luncheon hosted by the Office of the Vice President for Research on Sept. 25 in recognition of National Postdoc Appreciation Week. The annual luncheon, celebrating the postdoctoral community’s contributions to MIT, is attended by hundreds of postdocs and faculty.

“The award recognizes faculty members who go above and beyond to create a professional, supportive, and inclusive environment to foster postdocs’ growth and success,” said Ian Waitz, vice president for research, who spoke at the luncheon. He noted the vital role postdocs play in advancing MIT research, mentoring undergraduate and graduate students, and connecting with colleagues from around the globe, while working toward launching independent research careers of their own. 

“The best part of my job”

Nomination letters for Lewis spoke to her ability to create an inclusive and welcoming lab. In the words of one nominator, “She invests considerable time and effort in cultivating personalized mentoring relationships, ensuring each postdoc in her lab receives guidance and support tailored to their individual goals and circumstances.”

Other nominators commented on Lewis’ ability to facilitate collaborations that furthered postdocs’ research goals. Lewis encouraged them to work with other PIs to build their independence and professional development, and to develop their own research questions, they said. “I was never pushed to work on her projects — rather, she guided me towards finding and developing my own,” wrote one.

Lewis’ lab explores new ways to image the human brain, integrating engineering with neuroscience. Better neuroimaging techniques can deepen our understanding of the brain’s activity during sleep and wakefulness, allowing researchers to understand sleep’s impact on brain health.

“I love working with my postdocs and trainees; it’s honestly the best part of my job,” Lewis says. “It’s important for any individual to be in an environment to help them grow toward what they want to do.”

Recognized as an early-career mentor, Lewis looks forward to seeing her postdocs’ career trajectories over time. Group members returning as collaborators come back with fresh ideas and creative approaches, she says, adding, “I view this mentoring relationship as lifelong.”

“No ego, no bias, just solid facts”

Kong’s nomination also speaks to the lifelong nature of the mentoring relationship. The 13 letters supporting Kong’s nomination came from past and current postdocs. Nearly all touched on Kong’s kindness and the culture of respect she maintains in the lab, alongside high expectations of scientific rigor.

“No ego, no bias, just solid facts and direct evidence,” wrote one nominator: “In discussions, she would ask you many questions that make you think ‘I should have asked that to myself’ or ‘why didn’t I think of this.’”

Kong was also praised for her ability to take the long view on projects and mentor postdocs through temporary challenges. One nominator wrote of a period when the results of a project were less promising than anticipated, saying, “Jing didn't push me to switch my direction; instead, she was always glad to listen and discuss the new results. Because of her encouragement and long-term support, I eventually got very good results on this project.”

Kong’s lab focuses on the chemical synthesis of nanomaterials, such as carbon nanotubes, with the goal of characterizing their structures and identifying applications. Kong says postdocs are instrumental in bringing new ideas into the lab.

“I learn a lot from each one of them. They always have a different perspective, and also, they each have their unique talents. So we learn from each other,” she says. As a mentor, she sees her role as developing postdocs’ individual talents, while encouraging them to collaborate with group members who have different strengths.

The collaborations that Kong facilitates extend beyond the postdocs’ time at MIT. She views the postdoctoral period as a key stage in developing a professional network: “Their networking starts from the first day they join the group. They already in this process establish connections with other group members, and also our collaborators, that will continue on for many years.”

About the award

The Award for Excellence in Postdoctoral Mentoring has been awarded since 2022. With support from Ann Skoczenski, director of Postdoctoral Services in the Office of the VPR, and the Faculty Postdoctoral Advisory Committee, nominations are reviewed on four criteria.

The Award for Excellence in Postdoctoral Mentoring provides a celebratory lunch for the recipient’s research group, as well as the opportunity to participate in a mentoring seminar or panel discussion for the postdoctoral community. Last year’s award was given to Jesse Kroll, the Peter de Florez Professor of Civil and Environmental Engineering, professor of chemical engineering, and director of the Ralph M. Parsons Laboratory.


MIT engineers create a chip-based tractor beam for biological particles

The tiny device uses a tightly focused beam of light to capture and manipulate cells.


MIT researchers have developed a miniature, chip-based “tractor beam,” like the one that captures the Millennium Falcon in the film “Star Wars,” that could someday help biologists and clinicians study DNA, classify cells, and investigate the mechanisms of disease.

Small enough to fit in the palm of your hand, the device uses a beam of light emitted by a silicon-photonics chip to manipulate particles millimeters away from the chip surface. The light can penetrate the glass cover slips that protect samples used in biological experiments, enabling cells to remain in a sterile environment.

Traditional optical tweezers, which trap and manipulate particles using light, usually require bulky microscope setups, but chip-based optical tweezers could offer a more compact, mass manufacturable, broadly accessible, and high-throughput solution for optical manipulation in biological experiments.

However, other similar integrated optical tweezers can only capture and manipulate cells that are very close to or directly on the chip surface. This contaminates the chip and can stress the cells, limiting compatibility with standard biological experiments.

Using a system called an integrated optical phased array, the MIT researchers have developed a new modality for integrated optical tweezers that enables trapping and tweezing of cells more than a hundred times further away from the chip surface.

“This work opens up new possibilities for chip-based optical tweezers by enabling trapping and tweezing of cells at much larger distances than previously demonstrated. It’s exciting to think about the different applications that could be enabled by this technology,” says Jelena Notaros, the Robert J. Shillman Career Development Professor in Electrical Engineering and Computer Science (EECS), and a member of the Research Laboratory of Electronics.

Joining Notaros on the paper are lead author and EECS graduate student Tal Sneh; Sabrina Corsetti, an EECS graduate student; Milica Notaros PhD ’23; Kruthika Kikkeri PhD ’24; and Joel Voldman, the William R. Brody Professor of EECS. The research appears today in Nature Communications.

A new trapping modality

Optical traps and tweezers use a focused beam of light to capture and manipulate tiny particles. The forces exerted by the beam will pull microparticles toward the intensely focused light in the center, capturing them. By steering the beam of light, researchers can pull the microparticles along with it, enabling them to manipulate tiny objects using noncontact forces.

However, optical tweezers traditionally require a large microscope setup in a lab, as well as multiple devices to form and control light, which limits where and how they can be utilized.

“With silicon photonics, we can take this large, typically lab-scale system and integrate it onto a chip. This presents a great solution for biologists, since it provides them with optical trapping and tweezing functionality without the overhead of a complicated bulk-optical setup,” Notaros says.

But so far, chip-based optical tweezers have only been capable of emitting light very close to the chip surface, so these prior devices could only capture particles a few microns off the chip surface. Biological specimens are typically held in sterile environments using glass cover slips that are about 150 microns thick, so the only way to manipulate them with such a chip is to take the cells out and place them on its surface.

However, that leads to chip contamination. Every time a new experiment is done, the chip has to be thrown away and the cells need to be put onto a new chip.

To overcome these challenges, the MIT researchers developed a silicon photonics chip that emits a beam of light that focuses about 5 millimeters above its surface. This way, they can capture and manipulate biological particles that remain inside a sterile cover slip, protecting both the chip and particles from contamination.

Manipulating light

The researchers accomplish this using a system called an integrated optical phased array. This technology involves a series of microscale antennas fabricated on a chip using semiconductor manufacturing processes. By electronically controlling the optical signal emitted by each antenna, researchers can shape and steer the beam of light emitted by the chip.

Because most prior integrated optical phased arrays were developed for long-range applications like lidar, they weren’t designed to generate the tightly focused beams needed for optical tweezing. The MIT team discovered that, by creating specific phase patterns for each antenna, they could form an intensely focused beam of light, which can be used for optical trapping and tweezing millimeters from the chip’s surface.
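
The focusing principle itself is standard phased-array physics. The rough numerical sketch below — illustrative wavelength, antenna spacing, and array size, not the team’s actual design — shows the idea: drive each antenna with a phase that cancels its path length to the focal point, so every emitted wave arrives there in step and interferes constructively.

import numpy as np

# Illustrative phased-array focusing (assumed parameters, not the MIT device).
wavelength = 1.55e-6                 # assumed near-infrared wavelength, meters
k = 2 * np.pi / wavelength           # free-space wavenumber
pitch = 2e-6                         # assumed antenna spacing, meters
n_antennas = 64

x = (np.arange(n_antennas) - n_antennas / 2) * pitch  # antenna positions on the chip
focus_x, focus_height = 0.0, 5e-3                     # focal spot 5 mm above the chip

# Distance from each antenna (at height 0) to the focal point.
dist = np.sqrt((x - focus_x) ** 2 + focus_height ** 2)

# Drive phase per antenna: cancel the propagation phase, wrapped to [0, 2*pi).
phases = (-k * dist) % (2 * np.pi)

# Check: with these phases, the fields from all antennas add coherently at the
# focus, so the summed field magnitude equals the number of antennas.
field_at_focus = np.abs(np.sum(np.exp(1j * (phases + k * dist))))
print(round(field_at_focus, 3))      # ~64.0, i.e. fully constructive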

“No one had created silicon-photonics-based optical tweezers capable of trapping microparticles over a millimeter-scale distance before. This is an improvement of several orders of magnitude higher compared to prior demonstrations,” says Notaros.

By varying the wavelength of the optical signal that powers the chip, the researchers could steer the focused beam over a range larger than a millimeter and with microscale accuracy.

To test their device, the researchers started by trying to capture and manipulate tiny polystyrene spheres. Once they succeeded, they moved on to trapping and tweezing cancer cells provided by the Voldman group.

“There were many unique challenges that came up in the process of applying silicon photonics to biophysics,” Sneh adds.

The researchers had to determine how to track the motion of sample particles in a semiautomated fashion, ascertain the proper trap strength to hold the particles in place, and effectively postprocess data, for instance.

In the end, they were able to show the first cell experiments with single-beam optical tweezers.

Building off these results, the team hopes to refine the system to enable an adjustable focal height for the beam of light. They also want to apply the device to different biological systems and use multiple trap sites at the same time to manipulate biological particles in more complex ways.

“This is a very creative and important paper in many ways,” says Ben Miller, Dean’s Professor of Dermatology and professor of biochemistry and biophysics at the University of Rochester, who was not involved with this work. “For one, given that silicon photonic chips can be made at low cost, it potentially democratizes optical tweezing experiments. That may sound like something that only would be of interest to a few scientists, but in reality having these systems widely available will allow us to study fundamental problems in single-cell biophysics in ways previously only available to a few labs given the high cost and complexity of the instrumentation. I can also imagine many applications where one of these devices (or possibly an array of them) could be used to improve the sensitivity of disease diagnostic.”

This research is funded by the National Science Foundation (NSF), an MIT Frederick and Barbara Cronin Fellowship, and the MIT Rolf G. Locher Endowed Fellowship.


Celebrating the people behind Kendall Square’s innovation ecosystem

The 16th Annual Meeting of the Kendall Square Association honored community members for their work bringing impactful innovations to bear on humanity’s biggest challenges.


While it’s easy to be amazed by the constant drumbeat of innovations coming from Kendall Square in Cambridge, Massachusetts, sometimes overlooked are the dedicated individuals working to make those scientific and technological breakthroughs a reality. Every day, people in the neighborhood tackle previously intractable problems and push the frontiers of their fields.

This year’s Kendall Square Association (KSA) Annual Meeting centered around celebrating the people behind the area’s prolific innovation ecosystem. That included a new slate of awards and recognitions for community members and a panel discussion featuring MIT President Sally Kornbluth.

“It’s truly inspiring to be surrounded by all of you: people who seem to share an exuberant curiosity, a pervasive ethic of service, and the baseline expectation that we’re all interested in impact — in making a difference for people and the planet,” Kornbluth said.

The gathering took place in MIT’s Walker Memorial (Building 50) on Memorial Drive and attracted entrepreneurs, life science workers, local students, restaurant and retail shop owners, and leaders of nonprofits.

The KSA itself is a nonprofit organization made up of over 150 organizations across the greater Kendall Square region, from large companies to universities like MIT and Harvard, along with the independent shops and restaurants that give Kendall Square its distinct character.

New to this year’s event were two Founder Awards, which were given to Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and Michal Preminger, head of Johnson & Johnson Innovation, for their work bringing people together to achieve hard things that benefit humanity.

The KSA will donate $2,500 to the Science Club for Girls in Bhatia’s honor and $2,500 to Innovators for Purpose in honor of Preminger.

Recognition was also given to Alex Cheung of the Cambridge Innovation Center and Shazia Mir of LabCentral for their work bringing Kendall Square’s community members together.

Cambridge Mayor Denise Simmons also spoke at the event, noting the vital role the Kendall Square community has played in things like Covid-19 vaccine development and in the fight against climate change.

“As many of you know, Cambridge has a long and proud history of innovation, with the presence of MIT and the remarkable growth of the tech and life science industry examples of that,” Simmons said. “We are leaving a lasting, positive impact in our city. This community has made and continues to make enormous contributions, not just to our city but to the world.”

In her talk, Kornbluth also introduced the Kendall Square community to her plans for The Climate Project at MIT, which is designed to focus the Institute’s talent and resources to achieve real-world impact on climate change faster. The project will provide funding and catalyze partnerships around six climate “missions,” or broad areas where MIT researchers will seek to identify gaps in the global climate response that MIT can help fill.

“The Climate Project is a whole-of-MIT mobilization that’s mission driven, solution focused, and outward looking,” Kornbluth explained. “If you want to make progress, faster and at scale, that’s the way!”

After mingling with Kendall community members, Kornbluth said she still considers herself a newbie to the area but is coming to see the success of Kendall Square and MIT as more than a coincidence.

“The more time I spend here, the more I come to understand the incredible synergies between MIT and Kendall Square,” Kornbluth said. “We know, for example, that proximity is an essential ingredient in our collective and distinctive recipe for impact. That proximity, and the cross-fertilization that comes with it, helps us churn out new technologies and patents, found startups, and course-correct our work as we try to keep pace with the world’s challenges. We can’t do any of this separately. Our work together — all of us in this thriving, wildly entrepreneurial community — is what drives the success of our innovation ecosystem.”


Translating MIT research into real-world results

MIT’s innovation and entrepreneurship system helps launch water, food, and ag startups with social and economic benefits.


Inventive solutions to some of the world’s most critical problems are being discovered in labs, classrooms, and centers across MIT every day. Many of these solutions move from the lab to the commercial world with the help of over 85 Institute resources that comprise MIT’s robust innovation and entrepreneurship (I&E) ecosystem. The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) draws on MIT’s wealth of I&E knowledge and experience to help researchers commercialize their breakthrough technologies through the J-WAFS Solutions grant program. By collaborating with I&E programs on campus, J-WAFS prepares MIT researchers for the commercial world, where their novel innovations aim to improve productivity, accessibility, and sustainability of water and food systems, creating economic, environmental, and societal benefits along the way.

The J-WAFS Solutions program launched in 2015 with support from Community Jameel, an international organization that advances science and learning for communities to thrive. Since 2015, J-WAFS Solutions has supported 19 projects with one-year grants of up to $150,000, with some projects receiving renewal grants for a second year of support. Solutions projects all address challenges related to water or food. Modeled after the esteemed grant program of MIT’s Deshpande Center for Technological Innovation, and initially administered by Deshpande Center staff, the J-WAFS Solutions program follows a similar approach by supporting projects that have already completed the basic research and proof-of-concept phases. With technologies that are one to three years away from commercialization, grantees work on identifying their potential markets and learn to focus on how their technology can meet the needs of future customers.

“Ingenuity thrives at MIT, driving inventions that can be translated into real-world applications for widespread adoption, implementation, and use,” says J-WAFS Director Professor John H. Lienhard V. “But successful commercialization of MIT technology requires engineers to focus on many challenges beyond making the technology work. MIT’s I&E network offers a variety of programs that help researchers develop technology readiness, investigate markets, conduct customer discovery, and initiate product design and development,” Lienhard adds. “With this strong I&E framework, many J-WAFS Solutions teams have established startup companies by the completion of the grant. J-WAFS-supported technologies have had powerful, positive effects on human welfare. Together, the J-WAFS Solutions program and MIT’s I&E ecosystem demonstrate how academic research can evolve into business innovations that make a better world,” Lienhard says.

Creating I&E collaborations

In addition to support for furthering research, J-WAFS Solutions grants allow faculty, students, postdocs, and research staff to learn the fundamentals of how to transform their work into commercial products and companies. As part of the grant requirements, researchers must interact with mentors through MIT Venture Mentoring Service (VMS). VMS connects MIT entrepreneurs with teams of carefully selected professionals who provide free and confidential mentorship, guidance, and other services to help advance ideas into for-profit, for-benefit, or nonprofit ventures. Since 2000, VMS has mentored over 4,600 MIT entrepreneurs across all industries, through a dynamic and accomplished group of nearly 200 mentors who volunteer their time so that others may succeed. The mentors provide impartial and unbiased advice to members of the MIT community, including MIT alumni in the Boston area. J-WAFS Solutions teams have been guided by 21 mentors from numerous companies and nonprofits. Mentors often attend project events and progress meetings throughout the grant period.

“Working with VMS has provided me and my organization with a valuable sounding board for a range of topics, big and small,” says Eric Verploegen PhD ’08, former research engineer in the MIT D-Lab and founder of J-WAFS spinout CoolVeg. Along with professors Leon Glicksman and Daniel Frey, Verploegen received a J-WAFS Solutions grant in 2021 to commercialize cold-storage chambers that use evaporative cooling to help farmers preserve fruits and vegetables in rural off-grid communities. Verploegen started CoolVeg in 2022 to increase access and adoption of open-source, evaporative cooling technologies through collaborations with businesses, research institutions, nongovernmental organizations, and government agencies. “Working as a solo founder at my nonprofit venture, it is always great to have avenues to get feedback on communications approaches, overall strategy, and operational issues that my mentors have experience with,” Verploegen says. Three years after the initial Solutions grant, one of the VMS mentors assigned to the evaporative cooling team still acts as a mentor to Verploegen today.

Another Solutions grant requirement is for teams to participate in the Spark program — a free, three-week course that provides an entry point for researchers to explore the potential value of their innovation. Spark is part of the National Science Foundation’s (NSF) Innovation Corps (I-Corps), which is an “immersive, entrepreneurial training program that facilitates the transformation of invention to impact.” In 2018, MIT received an award from the NSF, establishing the New England Regional Innovation Corps Node (NE I-Corps) to deliver I-Corps training to participants across New England. Trainings are open to researchers, engineers, scientists, and others who want to engage in a customer discovery process for their technology. Offered regularly throughout the year, the Spark course helps participants identify markets and explore customer needs in order to understand how their technologies can be positioned competitively in their target markets. They learn to assess barriers to adoption, as well as potential regulatory issues or other challenges to commercialization. NE-I-Corps reports that since its start, over 1,200 researchers from MIT have completed the program and have gone on to launch 175 ventures, raising over $3.3 billion in funding from grants and investors, and creating over 1,800 jobs.

Constantinos Katsimpouras, a research scientist in the Department of Chemical Engineering, went through the NE I-Corps Spark program to better understand the customer base for a technology he developed with professors Gregory Stephanopoulos and Anthony Sinskey. The group received a J-WAFS Solutions grant in 2021 for their microbial platform that converts food waste from the dairy industry into valuable products. “As a scientist with no prior experience in entrepreneurship, the program introduced me to important concepts and tools for conducting customer interviews and adopting a new mindset,” notes Katsimpouras. “Most importantly, it encouraged me to get out of the building and engage in interviews with potential customers and stakeholders, providing me with invaluable insights and a deeper understanding of my industry,” he adds. These interviews also helped connect the team with companies willing to provide resources to test and improve their technology — a critical step to the scale-up of any lab invention.

In the case of Professor Cem Tasan’s research group in the Department of Materials Science and Engineering, the I-Corps program led them to the J-WAFS Solutions grant, instead of the other way around. Tasan is currently working with postdoc Onur Guvenc on a J-WAFS Solutions project to manufacture formable sheet metal by consolidating steel scrap without melting, thereby reducing water use compared to traditional steel processing. Before applying for the Solutions grant, Guvenc took part in NE I-Corps. Like Katsimpouras, Guvenc benefited from the interaction with industry. “This program required me to step out of the lab and engage with potential customers, allowing me to learn about their immediate challenges and test my initial assumptions about the market,” Guvenc recalls. “My interviews with industry professionals also made me aware of the connection between water consumption and steelmaking processes, which ultimately led to the J-WAFS 2023 Solutions Grant,” says Guvenc.

After completing the Spark program, participants may be eligible to apply for the Fusion program, which provides microgrants of up to $1,500 to conduct further customer discovery. The Fusion program is self-paced, requiring teams to conduct 12 additional customer interviews and craft a final presentation summarizing their key learnings. Professor Patrick Doyle’s J-WAFS Solutions team completed the Spark and Fusion programs at MIT. Most recently, their team was accepted to join the NSF I-Corps National program with a $50,000 award. The intensive program requires teams to complete an additional 100 customer discovery interviews over seven weeks. Located in the Department of Chemical Engineering, the Doyle lab is working on a sustainable microparticle hydrogel system to rapidly remove micropollutants from water. The team’s focus has expanded to higher value purifications in amino acid and biopharmaceutical manufacturing applications. Devashish Gokhale PhD ’24 worked with Doyle on much of the underlying science.

“Our platform technology could potentially be used for selective separations in very diverse market segments, ranging from individual consumers to large industries and government bodies with varied use-cases,” Gokhale explains. He goes on to say, “The I-Corps Spark program added significant value by providing me with an effective framework to approach this problem ... I was assigned a mentor who provided critical feedback, teaching me how to formulate effective questions and identify promising opportunities.” Gokhale says that by the end of Spark, the team was able to identify the best target markets for their products. He also says that the program provided valuable seminars on topics like intellectual property, which was helpful in subsequent discussions the team had with MIT’s Technology Licensing Office.

Another member of Doyle’s team, Arjav Shah, a recent PhD graduate of MIT’s Department of Chemical Engineering and a current MBA candidate at the MIT Sloan School of Management, is spearheading the team’s commercialization plans. Shah attended Fusion last fall and hopes to lead efforts to incorporate a startup company called hydroGel. “I admire the hypothesis-driven approach of the I-Corps program,” says Shah. “It has enabled us to identify our customers’ biggest pain points, which will hopefully lead us to finding a product-market fit.” He adds, “Based on our learnings from the program, we have been able to pivot to impact-driven, higher-value applications in the food processing and biopharmaceutical industries.” Postdoc Luca Mazzaferro will lead the technical team at hydroGel alongside Shah.

In a different project, Qinmin Zheng, a postdoc in the Department of Civil and Environmental Engineering, is working with Professor Andrew Whittle and Lecturer Fábio Duarte. Zheng plans to take the Fusion course this fall to advance their J-WAFS Solutions project that aims to commercialize a novel sensor to quantify the relative abundance of major algal species and provide early detection of harmful algal blooms. After completing Spark, Zheng says he’s “excited to participate in the Fusion program, and potentially the National I-Corps program, to further explore market opportunities and minimize risks in our future product development.”

Economic and societal benefits

Commercializing technologies developed at MIT is one of the ways J-WAFS helps ensure that MIT research advances will have real-world impacts in water and food systems. Since its inception, the J-WAFS Solutions program has awarded 28 grants (including renewals), which have supported 19 projects that address a wide range of global water and food challenges. The program has distributed over $4 million to 24 professors, 11 research staff, 15 postdocs, and 30 students across MIT. Nearly half of all J-WAFS Solutions projects have resulted in spinout companies or commercialized products, including eight companies to date plus two open-source technologies.

Nona Technologies is an example of a J-WAFS spinout that is helping the world by developing new approaches to produce freshwater for drinking. Desalination — the process of removing salts from seawater — typically requires a large-scale technology called reverse osmosis. But Nona created a desalination device that can work in remote off-grid locations. By separating salt and bacteria from water using electric current through a process called ion concentration polarization (ICP), their technology also reduces overall energy consumption. The novel method was developed by Jongyoon Han, professor of electrical engineering and biological engineering, and research scientist Junghyo Yoon. Along with Bruce Crawford, a Sloan MBA alum, Han and Yoon created Nona Technologies to bring their lightweight, energy-efficient desalination technology to the market.

“My feeling early on was that once you have technology, commercialization will take care of itself,” admits Crawford. The team completed both the Spark and Fusion programs and quickly realized that much more work would be required. “Even in our first 24 interviews, we learned that the two first markets we envisioned would not be viable in the near term, and we also got our first hints at the beachhead we ultimately selected,” says Crawford. Nona Technologies has since won MIT’s $100K Entrepreneurship Competition, received media attention from outlets like Newsweek and Fortune, and hired a team that continues to further the technology for deployment in resource-limited areas where clean drinking water may be scarce. 

Food-borne diseases sicken millions of people worldwide each year, but J-WAFS researchers are addressing this issue by integrating molecular engineering, nanotechnology, and artificial intelligence to revolutionize food pathogen testing. Professors Tim Swager and Alexander Klibanov, of the Department of Chemistry, were awarded one of the first J-WAFS Solutions grants for their sensor that targets food safety pathogens. The sensor uses specialized droplets that behave like a dynamic lens, changing in the presence of target bacteria in order to detect dangerous bacterial contamination in food. In 2018, Swager launched Xibus Systems Inc. to bring the sensor to market and advance food safety for greater public health, sustainability, and economic security.

“Our involvement with the J-WAFS Solutions Program has been vital,” says Swager. “It has provided us with a bridge between the academic world and the business world and allowed us to perform more detailed work to create a usable application,” he adds. In 2022, Xibus developed a product called XiSafe, which enables the detection of contaminants like salmonella and listeria faster and with higher sensitivity than other food testing products. The innovation could save food processors billions of dollars worldwide and prevent thousands of food-borne fatalities annually.

J-WAFS Solutions companies have raised nearly $66 million in venture capital and other funding. Just this past June, J-WAFS spinout SiTration announced that it raised an $11.8 million seed round. Jeffrey Grossman, a professor in MIT’s Department of Materials Science and Engineering, was another early J-WAFS Solutions grantee for his work on low-cost energy-efficient filters for desalination. The project enabled the development of nanoporous membranes and resulted in two spinout companies, Via Separations and SiTration. SiTration was co-founded by Brendan Smith PhD ’18, who was a part of the original J-WAFS team. Smith is CEO of the company and has overseen the advancement of the membrane technology, which has gone on to reduce cost and resource consumption in industrial wastewater treatment, advanced manufacturing, and resource extraction of materials such as lithium, cobalt, and nickel from recycled electric vehicle batteries. The company also recently announced that it is working with the mining company Rio Tinto to handle harmful wastewater generated at mines.

But it's not just J-WAFS spinout companies that are producing real-world results. Products like the ECC Vial — a portable, low-cost method for E. coli detection in water — have been brought to the market and helped thousands of people. The test kit was developed by MIT D-Lab Lecturer Susan Murcott and Professor Jeffrey Ravel of the MIT History Section. The duo received a J-WAFS Solutions grant in 2018 to promote safely managed drinking water and improved public health in Nepal, where it is difficult to identify which wells are contaminated by E. coli. By the end of their grant period, the team had manufactured approximately 3,200 units, of which 2,350 were distributed — enough to help 12,000 people in Nepal. The researchers also trained local Nepalese on best manufacturing practices.

“It’s very important, in my life experience, to follow your dream and to serve others,” says Murcott. Economic success is important to the health of any venture, whether it’s a company or a product, but equally important is the social impact — a philosophy that J-WAFS research strives to uphold. “Do something because it’s worth doing and because it changes people’s lives and saves lives,” Murcott adds.

As J-WAFS prepares to celebrate its 10th anniversary this year, we look forward to continued collaboration with MIT’s many I&E programs to advance knowledge and develop solutions that will have tangible effects on the world’s water and food systems.

Learn more about the J-WAFS Solutions program and about innovation and entrepreneurship at MIT.


3 Questions: Bridging anthropology and engineering for clean energy in Mongolia

Anthropologists Manduhai Buyandelger and Lauren Bonilla discuss the humanistic perspective they bring to a project that is yielding promising results.


In 2021, Michael Short, an associate professor of nuclear science and engineering, approached professor of anthropology Manduhai Buyandelger with an unusual pitch: collaborating on a project to prototype a molten salt heat bank in Mongolia, Buyandelger’s country of origin and the focus of her scholarship. It was also an invitation to forge a novel partnership between two disciplines that rarely overlap. Developed in collaboration with the National University of Mongolia (NUM), the device was built to provide heat for people in colder climates and in places where clean energy is a challenge.

Buyandelger and Short teamed up to launch Anthro-Engineering Decarbonization at the Million-Person Scale, an initiative intended to advance the heat bank idea in Mongolia, and ultimately demonstrate its potential as a scalable clean heat source in comparably challenging sites around the world. This project received funding from the inaugural MIT Climate and Sustainability Consortium Seed Awards program. In order to fund various components of the project, especially student involvement and additional staff, the project also received support from the MIT Global Seed Fund, New Engineering Education Transformation (NEET), Experiential Learning Office, Vice Provost for International Activities, and d’Arbeloff Fund for Excellence in Education.

As part of this initiative, the partners developed a special topic course in anthropology to teach MIT undergraduates about Mongolia’s unique energy and climate challenges, as well as the historical, social, and economic context in which the heat bank would ideally find a place. The class 21A.S01 (Anthro-Engineering: Decarbonization at the Million-Person Scale) prepares MIT students for a January Independent Activities Period (IAP) trip to the Mongolian capital of Ulaanbaatar, where they embed with Mongolian families, conduct research, and collaborate with their peers. Mongolian students also engaged in the project. Anthropology research scientist and lecturer Lauren Bonilla, who has spent the past two decades working in Mongolia, joined to co-teach the class and lead the IAP trips to Mongolia. 

With the project now in its third year and yielding some promising solutions on the ground, Buyandelger and Bonilla reflect on the challenges for anthropologists of advancing a clean energy technology in a developing nation with a unique history, politics, and culture. 

Q: Your roles in the molten salt heat bank project mark departures from your typical academic routine. How did you first approach this venture?

Buyandelger: As an anthropologist of contemporary religion, politics, and gender in Mongolia, I have had little contact with the hard sciences or with building and prototyping technology. What I do best is listening to people and working with narratives. When I first learned about this device for off-the-grid heating, a host of issues came to mind right away, based on the socioeconomic and cultural context of the place. The salt brick, which is encased in steel, must be heated to 400 degrees Celsius in a central facility, then driven to people’s homes. Transportation is difficult in Ulaanbaatar, and I worried about road safety when driving the salt brick to gers [traditional Mongolian homes] where many residents live. The device seemed a bit utopian to me, but I realized that this was an amazing educational opportunity: We could use the heat bank as part of an ethnographic project, so students could learn about the everyday lives of people — crucially, in the dead of winter — and how they might respond to this new energy technology in the neighborhoods of Ulaanbaatar.

Bonilla: When I first went to Mongolia in the early 2000s as an undergraduate student, the impacts of climate change were already being felt. There had been a massive migration to the capital after a series of terrible weather events that devastated the rural economy. Coal mining had emerged as a vital part of the economy, and I was interested in how people regarded this industry that both provided jobs and damaged the air they breathed. I am trained as a human geographer, which involves seeing how things happening in a local place correspond to things happening at a global scale. Thinking about climate or sustainability from this perspective means making linkages between social life and environmental life. In Mongolia, people associated coal with national progress. Based on historical experience, they had low expectations for interventions brought by outsiders to improve their lives. So my first take on the molten salt project was that this was no silver bullet solution. At the same time, I wanted to see how we could make this a great project-based learning experience for students, getting them to think about the kind of research necessary to see if some version of the molten salt would work.

Q: After two years, what lessons have you and the students drawn from both the class and the Ulaanbaatar field trips?

Buyandelger: We wanted to make sure MIT students would not go to Mongolia and act like consultants. We taught them anthropological methods so they could understand the experiences of real people and think about how to bring people and new technologies together. The students, from engineering and anthropological and social science backgrounds, became critical thinkers who could analyze how people live in ger districts. When they stay with families in Ulaanbaatar in January, they not only experience the cold and the pollution, but they observe what people do for work, how parents care for their children, how they cook, sleep, and get from one place to another. This enables them to better imagine and test out how these people might utilize the molten salt heat bank in their homes.

Bonilla: In class, students learn that interventions like this often fail because the implementation process doesn’t work, or the technology doesn’t meet people’s real needs. This is where anthropology is so important, because it opens up the wider landscape in which you’re intervening. We had really difficult conversations about the professional socialization of engineers and social scientists. Engineers love to work within boxes, but don’t necessarily appreciate the context in which their invention will be used.

As a group, we discussed the provocative notion that engineers construct and anthropologists deconstruct. This makes it seem as if engineers are creators, and anthropologists are brought in as add-ons to consult and critique engineers’ creations. Our group conversation concluded that a project such as ours benefits from an iterative back-and-forth between the techno-scientific and humanistic disciplines.

Q: So where does the molten salt brick project stand?

Bonilla: Our research in Mongolia helped us produce a prototype that can work: Our partners at NUM are developing a hybrid stove that incorporates the molten salt brick. Supervised by instructor Nathan Melenbrink of MIT’s NEET program, our engineering students have been involved in this prototyping as well.

The concept is for a family to heat the salt brick with a coal fire once a day so that it warms their home overnight. Based on our anthropological research, we believe that this stove would work better than the device as originally conceived. It won’t eliminate coal use in residences, but it will reduce emissions enough to have a meaningful impact on ger districts in Ulaanbaatar. The challenge now is getting funding to NUM so they can test different salt combinations and stove models and employ local blacksmiths to work on the design.

This integrated stove/heat bank will not be the ultimate solution to the heating and pollution crisis in Mongolia. But it will be something that can inspire even more ideas. We feel with this project we are planting all kinds of seeds that will germinate in ways we cannot anticipate. It has sparked new relationships between MIT and Mongolian students, and catalyzed engineers to integrate a more humanistic, anthropological perspective in their work.

Buyandelger: Our work illustrates the importance of anthropology in responding to the unpredictable and diverse impacts of climate change. Without our ethnographic research — based on participant observation and interviews led by Dr. Bonilla — it would have been impossible to see how the prototyping and modifications could be done, where the molten salt brick could work, and what shape it needed to take. This project demonstrates how indispensable anthropology is in moving engineering out of labs and companies and directly into communities.

Bonilla: This is where the real solutions for climate change are going to come from. Even though we need solutions quickly, it will also take time for new technologies like molten salt bricks to take root and grow. We don’t know where the outcomes of these experiments will take us. But there’s so much that’s emerging from this project that I feel very hopeful about.


How AI is improving simulations with smarter sampling techniques

MIT CSAIL researchers created an AI-powered method for low-discrepancy sampling, which uniformly distributes data points to boost simulation accuracy.


Imagine you’re tasked with sending a team of football players onto a field to assess the condition of the grass (a likely task for them, of course). If you pick their positions randomly, they might cluster together in some areas while completely neglecting others. But if you give them a strategy, like spreading out uniformly across the field, you might get a far more accurate picture of the grass condition.

Now, imagine needing to spread out not just in two dimensions, but across tens or even hundreds. That's the challenge MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers are getting ahead of. They've developed an AI-driven approach to “low-discrepancy sampling,” a method that improves simulation accuracy by distributing data points more uniformly across space.

A key novelty lies in using graph neural networks (GNNs), which allow points to “communicate” and self-optimize for better uniformity. Their approach marks a pivotal enhancement for simulations in fields like robotics, finance, and computational science, particularly in handling complex, multidimensional problems critical for accurate simulations and numerical computations.

“In many problems, the more uniformly you can spread out points, the more accurately you can simulate complex systems,” says T. Konstantin Rusch, lead author of the new paper and MIT CSAIL postdoc. “We've developed a method called Message-Passing Monte Carlo (MPMC) to generate uniformly spaced points, using geometric deep learning techniques. This further allows us to generate points that emphasize dimensions that are particularly important for a problem at hand, a property that is highly important in many applications. The model’s underlying graph neural networks let the points 'talk' with each other, achieving far better uniformity than previous methods.”
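To make the idea of points “talking” to one another concrete, below is a minimal, hypothetical sketch of a single message-passing layer over a fully connected point graph, written in PyTorch. It is not the authors’ MPMC architecture; the layer sizes, mean aggregation, and sigmoid update are illustrative assumptions only.

```python
# A toy message-passing layer for point sets (not the MPMC architecture).
# Each point aggregates messages from every other point and proposes a small
# displacement; MPMC stacks learned layers of this kind and trains them
# end to end against a discrepancy loss.
import torch
import torch.nn as nn

class PointMessagePassing(nn.Module):
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.message = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.update = nn.Sequential(nn.Linear(dim + hidden, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n, dim) points in the unit cube
        n = x.shape[0]
        pairs = torch.cat(
            [x[:, None, :].expand(n, n, -1), x[None, :, :].expand(n, n, -1)], dim=-1
        )
        msgs = self.message(pairs).mean(dim=1)             # aggregate messages from all points
        delta = self.update(torch.cat([x, msgs], dim=-1))  # per-point displacement
        return torch.sigmoid(x + delta)                    # keep the output inside [0, 1]^dim

points = torch.rand(64, 2)
layer = PointMessagePassing(dim=2)
print(layer(points).shape)  # torch.Size([64, 2])
```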

Their work was published in the September issue of the Proceedings of the National Academy of Sciences.

Take me to Monte Carlo

The idea of Monte Carlo methods is to learn about a system by simulating it with random sampling. Sampling is the selection of a subset of a population to estimate characteristics of the whole population. The technique dates back at least to the 18th century, when mathematician Pierre-Simon Laplace used it to estimate the population of France without having to count each individual.

Low-discrepancy sequences, such as the Sobol’, Halton, and Niederreiter sequences, place points with high uniformity (that is, low discrepancy) and have long been the gold standard for quasi-random sampling, which replaces purely random samples with these more evenly spread points. They are widely used in fields like computer graphics and computational finance, for everything from pricing options to risk assessment, where uniformly filling a space with points can lead to more accurate results.
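To see why practitioners reach for these sequences, the short sketch below (not from the paper) compares plain Monte Carlo with a scrambled Sobol’ sequence on a toy integral whose true value is 1, using SciPy’s quasi-Monte Carlo module; the dimension and sample count are arbitrary choices for illustration.

```python
# Plain Monte Carlo vs. a Sobol' low-discrepancy sequence on a toy integral.
# Assumes SciPy >= 1.7 for the scipy.stats.qmc module.
import numpy as np
from scipy.stats import qmc

def f(x):
    # Each factor (pi/2) * sin(pi * x_i) integrates to 1 on [0, 1], so the
    # product has a known mean of exactly 1 over the unit cube.
    return np.prod((np.pi / 2.0) * np.sin(np.pi * x), axis=1)

d, n = 6, 1024
rng = np.random.default_rng(0)

random_pts = rng.random((n, d))                              # i.i.d. uniform samples
sobol_pts = qmc.Sobol(d=d, scramble=True, seed=0).random(n)  # low-discrepancy samples

print("true value        :", 1.0)
print("plain Monte Carlo :", f(random_pts).mean())
print("Sobol' (quasi-MC) :", f(sobol_pts).mean())
```

With the same budget of points, the quasi-random estimate typically lands noticeably closer to the true value than the purely random one, which is the effect the MPMC work pushes further.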

The MPMC framework suggested by the team transforms random samples into points with high uniformity. This is done by processing the random samples with a GNN that minimizes a specific discrepancy measure.

One big challenge of using AI for generating highly uniform points is that the usual way to measure point uniformity is very slow to compute and hard to work with. To solve this, the team switched to a quicker and more flexible uniformity measure called L2-discrepancy. For high-dimensional problems, where this method isn’t enough on its own, they use a novel technique that focuses on important lower-dimensional projections of the points. This way, they can create point sets that are better suited for specific applications.
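The paper’s exact training objective is not reproduced here, but the L2-discrepancy family it builds on has convenient closed forms. As a minimal sketch, the function below computes one standard member, the L2 star discrepancy, using Warnock’s classical formula; lower values mean a more uniform point set. It is offered only to make the notion of a computable uniformity score concrete, not as the precise measure MPMC optimizes.

```python
# Warnock's closed-form L2 star discrepancy for a point set in the unit cube.
# Lower values indicate that the points fill the cube more uniformly.
import numpy as np

def l2_star_discrepancy(points: np.ndarray) -> float:
    """points: (n, d) array with all coordinates in [0, 1]."""
    n, d = points.shape
    # Pairwise term: (1/n^2) * sum_{i,k} prod_j (1 - max(x_ij, x_kj))
    pairwise_max = np.maximum(points[:, None, :], points[None, :, :])
    cross = np.prod(1.0 - pairwise_max, axis=2).sum() / n**2
    # Single-point term: (2/n) * sum_i prod_j (1 - x_ij^2) / 2
    single = (2.0 / n) * np.prod((1.0 - points**2) / 2.0, axis=1).sum()
    return float(np.sqrt(cross - single + 3.0 ** (-d)))

rng = np.random.default_rng(0)
print(l2_star_discrepancy(rng.random((256, 2))))  # random points give a larger value
```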

The implications extend far beyond academia, the team says. In computational finance, for example, simulations rely heavily on the quality of the sampling points. “With these types of methods, random points are often inefficient, but our GNN-generated low-discrepancy points lead to higher precision,” says Rusch. “For instance, we considered a classical problem from computational finance in 32 dimensions, where our MPMC points beat previous state-of-the-art quasi-random sampling methods by a factor of four to 24.”

Robots in Monte Carlo

In robotics, path and motion planning often rely on sampling-based algorithms, which guide robots through real-time decision-making processes. The improved uniformity of MPMC could lead to more efficient robotic navigation and real-time adaptations for things like autonomous driving or drone technology. “In fact, in a recent preprint, we demonstrated that our MPMC points achieve a fourfold improvement over previous low-discrepancy methods when applied to real-world robotics motion planning problems,” says Rusch.

“Traditional low-discrepancy sequences were a major advancement in their time, but the world has become more complex, and the problems we're solving now often exist in 10, 20, or even 100-dimensional spaces,” says Daniela Rus, CSAIL director and MIT professor of electrical engineering and computer science. “We needed something smarter, something that adapts as the dimensionality grows. GNNs are a paradigm shift in how we generate low-discrepancy point sets. Unlike traditional methods, where points are generated independently, GNNs allow points to 'chat' with one another so the network learns to place points in a way that reduces clustering and gaps — common issues with typical approaches.”

Going forward, the team plans to make MPMC points even more accessible to everyone, addressing the current limitation of training a new GNN for every fixed number of points and dimensions.

“Much of applied mathematics uses continuously varying quantities, but computation typically allows us to only use a finite number of points,” says Art B. Owen, Stanford University professor of statistics, who wasn’t involved in the research. “The century-plus-old field of discrepancy uses abstract algebra and number theory to define effective sampling points. This paper uses graph neural networks to find input points with low discrepancy compared to a continuous distribution. That approach already comes very close to the best-known low-discrepancy point sets in small problems and is showing great promise for a 32-dimensional integral from computational finance. We can expect this to be the first of many efforts to use neural methods to find good input points for numerical computation.”

Rusch and Rus wrote the paper with University of Waterloo researcher Nathan Kirk, Oxford University’s DeepMind Professor of AI and former CSAIL affiliate Michael Bronstein, and University of Waterloo Statistics and Actuarial Science Professor Christiane Lemieux. Their research was supported, in part, by the AI2050 program at Schmidt Sciences, Boeing, the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator, the Swiss National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, and an EPSRC Turing AI World-Leading Research Fellowship.


An interstellar instrument takes a final bow

The Plasma Science Experiment aboard NASA’s Voyager 2 spacecraft turns off after 47 years and 15 billion miles.


They planned to fly for four years and to get as far as Jupiter and Saturn. But nearly half a century and 15 billion miles later, NASA’s twin Voyager spacecraft have far exceeded their original mission, winging past the outer planets and busting out of our heliosphere, beyond the influence of the sun. The probes are currently making their way through interstellar space, traveling farther than any human-made object.

Along their improbable journey, the Voyagers made first-of-their-kind observations at all four giant outer planets and their moons using only a handful of instruments, including MIT’s Plasma Science Experiments — identical plasma sensors that were designed and built in the 1970s in Building 37 by MIT scientists and engineers.

The Plasma Science Experiment (also known as the Plasma Spectrometer, or PLS for short) measured charged particles in planetary magnetospheres, the solar wind, and the interstellar medium, the material between stars. Since launching on the Voyager 2 spacecraft in 1977, the PLS has revealed new phenomena near all the outer planets and in the solar wind across the solar system. The experiment played a crucial role in confirming the moment when Voyager 2 crossed the heliosphere and moved outside of the sun’s regime, into interstellar space.

Now, to conserve the little power left on Voyager 2 and prolong the mission’s life, the Voyager scientists and engineers have made the decision to shut off MIT’s Plasma Science Experiment. It’s the first in a line of science instruments that will progressively blink off over the coming years. On Sept. 26, the Voyager 2 PLS sent its last communication from 12.7 billion miles away, before it received the command to shut down.

MIT News spoke with John Belcher, the Class of 1922 Professor of Physics at MIT, who was a member of the original team that designed and built the plasma spectrometers, and John Richardson, principal research scientist at MIT’s Kavli Institute for Astrophysics and Space Research, who is the experiment’s principal investigator. Both Belcher and Richardson offered their reflections on the retirement of this interstellar piece of MIT history.

Q: Looking back at the experiment’s contributions, what are the greatest hits, in terms of what MIT’s Plasma Spectrometer has revealed about the solar system and interstellar space?

Richardson: A key PLS finding at Jupiter was the discovery of the Io torus, a plasma donut surrounding Jupiter, formed from sulphur and oxygen from Io’s volcanos (which were discovered in Voyager images). At Saturn, PLS found a magnetosphere full of water and oxygen that had been knocked off of Saturn’s icy moons. At Uranus and Neptune, the tilt of the magnetic fields led to PLS seeing smaller density features, with Uranus’ plasma disappearing near the planet. Another key PLS observation was of the termination shock, which was the first observation of the plasma at the largest shock in the solar system, where the solar wind stopped being supersonic. This boundary had a huge drop in speed and an increase in the density and temperature of the solar wind. And finally, PLS documented Voyager 2’s crossing of the heliopause by detecting a stopping of outward-flowing plasma. This signaled the end of the solar wind and the beginning of the local interstellar medium (LISM). Although not designed to measure the LISM, PLS constantly measured the interstellar plasma currents beyond the heliosphere. It is very sad to lose this instrument and data!

Belcher: It is important to emphasize that PLS was the result of decades of development by MIT Professor Herbert Bridge (1919-1995) and Alan Lazarus (1931-2014). The first version of the instrument they designed was flown on Explorer 10 in 1961. And the most recent version is flying on the Solar Probe, which is collecting measurements very close to the sun to understand the origins of solar wind. Bridge was the principal investigator for plasma probes on spacecraft which visited the sun and every major planetary body in the solar system.

Q: During their tenure aboard the Voyager probes, how did the plasma sensors do their job over the last 47 years?

Richardson: There were four Faraday cup detectors designed by Herb Bridge that measured currents from ions and electrons that entered the detectors. By measuring these particles at different energies, we could find the plasma velocity, density, and temperature in the solar wind and in the four planetary magnetospheres Voyager encountered. Voyager data were (and are still) sent to Earth every day and received by NASA’s deep space network of antennae. Keeping two 1970s-era spacecraft going for 47 years and counting has been an amazing feat of JPL engineering prowess — you can google the most recent rescue when Voyager 1 lost some memory in November of 2023 and stopped sending data. JPL figured out the problem and was able to reprogram the flight data system from 15 billion miles away, and all is back to normal now. Shutting down PLS involves sending a command which will get to Voyager 2 about 19 hours later, providing the rest of the spacecraft enough power to continue.

Q: Once the plasma sensors have shut down, how much more could Voyager do, and how far might it still go?

Richardson: Voyager will still measure the galactic cosmic rays, magnetic fields, and plasma waves. The available power decreases about 4 watts per year as the plutonium which powers them decays. We hope to keep some of the instruments running until the mid-2030s, but that will be a challenge as power levels decrease.

Belcher: Nick Oberg at the Kapteyn Astronomical Institute in the Netherlands has made an exhaustive study of the future of the spacecraft, using data from the European Space Agency’s spacecraft Gaia. In about 30,000 years, the spacecraft will reach the distance to the nearest stars. Because space is so vast, there is zero chance that the spacecraft will collide directly with a star in the lifetime of the universe. The spacecraft’s surfaces will erode through microcollisions with vast clouds of interstellar dust, but this happens very slowly.

In Oberg’s estimate, the Golden Records [identical records placed aboard each probe that contain selected sounds and images representing life on Earth] are likely to survive for a span of over 5 billion years. After those 5 billion years, things are difficult to predict, since at that point the Milky Way will collide with its massive neighbor, the Andromeda galaxy. During this collision, there is a one in five chance that the spacecraft will be flung into the intergalactic medium, where there is little dust and little weathering. In that case, it is possible that the spacecraft will survive for trillions of years. A trillion years is about 100 times the current age of the universe. The Earth ceases to exist in about 6 billion years, when the sun enters its red giant phase and engulfs it.

In a “poor man’s” version of the Golden Record, Robert Butler, the chief engineer of the Plasma Instrument, inscribed the names of the MIT engineers and scientists who had worked on the spacecraft on the collector plate of the side-looking cup. Butler’s home state was New Hampshire, and he put the state motto, “Live Free or Die,” at the top of the list of names. Thanks to Butler, although New Hampshire will not survive for a trillion years, its state motto might. The flight spare of the PLS instrument is now displayed at the MIT Museum, where you can see the text of Butler’s message by peering into the side-looking sensor. 


Q&A: A new initiative to help strengthen democracy

David Singer, head of the MIT Department of Political Science, discusses the Strengthening Democracy Initiative, focused on the rigorous study of elections, public opinion, and political participation.


In the United States and around the world, democracy is under threat. Anti-democratic attitudes have become more prevalent, partisan polarization is growing, misinformation is omnipresent, and politicians and citizens sometimes question the integrity of elections. 

With this backdrop, the MIT Department of Political Science is launching an effort to establish a Strengthening Democracy Initiative. In this Q&A, department head David Singer, the Raphael Dorman-Helen Starbuck Professor of Political Science, discusses the goals and scope of the initiative.

Q: What is the purpose of the Strengthening Democracy Initiative?

A: Well-functioning democracies require accountable representatives, accurate and freely available information, equitable citizen voice and participation, free and fair elections, and an abiding respect for democratic institutions. It is unsettling for the political science community to see more and more evidence of democratic backsliding in Europe, Latin America, and even here in the U.S. While we cannot single-handedly stop the erosion of democratic norms and practices, we can focus our energies on understanding and explaining the root causes of the problem, and devising interventions to maintain the healthy functioning of democracies.

MIT political science has a history of generating important research on many facets of the democratic process, including voting behavior, election administration, information and misinformation, public opinion and political responsiveness, and lobbying. The goals of the Strengthening Democracy Initiative are to place these various research programs under one umbrella, to foster synergies among our various research projects and between political science and other disciplines, and to mark MIT as the country’s leading center for rigorous, evidence-based analysis of democratic resiliency.

Q: What is the initiative’s research focus?

A: The initiative is built upon three research pillars. One pillar is election science and administration. Democracy cannot function without well-run elections and, just as important, popular trust in those elections. Even within the U.S., let alone other countries, there is tremendous variation in the electoral process: whether and how people register to vote, whether they vote in person or by mail, how polling places are run, how votes are counted and validated, and how the results are communicated to citizens.

The MIT Election Data and Science Lab is already the country’s leading center for the collection and analysis of election-related data and dissemination of electoral best practices, and it is well positioned to increase the scale and scope of its activities.

The second pillar is public opinion, a rich area of study that includes experimental studies of public responses to misinformation and analyses of government responsiveness to mass attitudes. Our faculty employ survey and experimental methods to study a range of substantive areas, including taxation and health policy, state and local politics, and strategies for countering political rumors in the U.S. and abroad. Faculty research programs form the basis for this pillar, along with longstanding collaborations such as the Political Experiments Research Lab, an annual omnibus survey in which students and faculty can participate, and frequent conferences and seminars.

The third pillar is political participation, which includes the impact of the criminal justice system and other negative interactions with the state on voting, the creation of citizen assemblies, and the lobbying behavior of firms on Congressional legislation. Some of this research relies on machine learning and AI to cull and parse an enormous amount of data, giving researchers visibility into phenomena that were previously difficult to analyze. A related research area on political deliberation brings together computer science, AI, and the social sciences to analyze the dynamics of political discourse in online forums and the possible interventions that can attenuate political polarization and foster consensus.

The initiative’s flexible design will allow for new pillars to be added over time, including international and homeland security, strengthening democracies in different regions of the world, and tackling new challenges to democratic processes that we cannot see yet.

Q: Why is MIT well-suited to host this new initiative?

A: Many people view MIT as a STEM-focused, highly technical place. And indeed it is, but there is a tremendous amount of collaboration across and within schools at MIT — for example, between political science and the Schwarzman College of Computing and the Sloan School of Management, and between the social science fields and the schools of science and engineering. The Strengthening Democracy Initiative will benefit from these collaborations and create new bridges between political science and other fields. It’s also important to note that this is a nonpartisan research endeavor. The MIT political science department has a reputation for rigorous, data-driven approaches to the study of politics, and its position within the MIT ecosystem will help us to maintain a reputation as an “honest broker,” and to disseminate path-breaking, evidence-based research and interventions to help democracies become more resilient.

Q: Will the new initiative have an educational mission?

A: Of course! The department has a long history of bringing in scores of undergraduate researchers via MIT’s Undergraduate Research Opportunities Program. The initiative will be structured to provide these students with opportunities to study various facets of the democratic process, and for faculty to have a ready pool of talented students to assist with their projects. My hope is to provide students with the resources and opportunities to test their own theories by designing and implementing surveys in the U.S. and abroad, and use insights and tools from computer science, applied statistics, and other disciplines to study political phenomena. As the initiative grows, I expect more opportunities for students to collaborate with state and local officials on improvements to election administration, and to study new puzzles related to healthy democracies.

Postdoctoral researchers will also play a prominent role by advancing research across the initiative’s pillars, supervising undergraduate researchers, and handling some of the administrative aspects of the work.

Q: This sounds like a long-term endeavor. Do you expect this initiative to be permanent?

A: Yes. We already have the pieces in place to create a leading center for the study of healthy democracies (and how to make them healthier). But we need to build capacity, including resources for a pool of researchers to shift from one project to another, which will permit synergies between projects and foster new ones. A permanent initiative will also provide the infrastructure for faculty and students to respond swiftly to current events and new research findings — for example, by launching a nationwide survey experiment, or collecting new data on an aspect of the electoral process, or testing the impact of a new AI technology on political perceptions. As I like to tell our supporters, there are new challenges to healthy democracies that were not on our radar 10 years ago, and no doubt there will be others 10 years from now that we have not imagined. We need to be prepared to do the rigorous analysis on whatever challenges come our way. And MIT Political Science is the best place in the world to undertake this ambitious agenda in the long term.


Microelectronics projects awarded CHIPS and Science Act funding

MIT and Lincoln Laboratory are among awardees of $38 million in project awards to the Northeast Microelectronics Coalition to boost U.S. chip technology innovation.


MIT and Lincoln Laboratory are participants in four microelectronics proposals selected for funding to the Northeast Microelectronics Coalition (NEMC) Hub. The funding comes from the Microelectronics Commons, a $2 billion initiative of the CHIPS and Science Act to strengthen U.S. leadership in semiconductor manufacturing and innovation. The regional awards are among 33 projects announced as part of a $269 million federal investment.

U.S. Department of Defense (DoD) and White House officials announced the awards during an event on Sept. 18, hosted by the NEMC Hub at MIT Lincoln Laboratory. The NEMC Hub, a division of the Massachusetts Technology Collaborative, leads a network of more than 200 member organizations across the region to enable the lab-to-fab transition of critical microelectronics technologies for the DoD. The NEMC Hub is one of eight regional hubs forming a nationwide chip network under the Microelectronics Commons and is executed through the Naval Surface Warfare Center Crane Division and the National Security Technology Accelerator (NSTXL).

"The $38 million in project awards to the NEMC Hub are a recognition of the capability, capacity, and commitment of our members," said Mark Halfman, NEMC Hub director. "We have a tremendous opportunity to grow microelectronics lab-to-fab capabilities across the Northeast region and spur the growth of game-changing technologies."

"We are very pleased to have Lincoln Laboratory be a central part of the vibrant ecosystem that has formed within the Microelectronics Commons program," said Mark Gouker, assistant head of the laboratory's Advanced Technology Division and NEMC Hub advisory group representative. "We have made strong connections to academia, startups, DoD contractors, and commercial sector companies through collaborations with our technical staff and by offering our microelectronics fabrication infrastructure to assist in these projects. We believe this tighter ecosystem will be important to future Microelectronics Commons programs as well as other CHIPS and Science Act programs."

The nearly $38 million award to the NEMC Hub is expected to support six collaborative projects, four of which will involve MIT and/or Lincoln Laboratory.

"These projects promise significant gains in advanced microelectronics technologies," said Ian A. Waitz, MIT's vice president for research. "We look forward to working alongside industry and government organizations in the NEMC Hub to strengthen U.S. microelectronics innovation, workforce and education, and lab-to-fab translation."

The projects selected for funding support key technology areas identified in the federal call for competitive proposals. MIT campus researchers will participate in a project advancing commercial leap-ahead technologies, titled "Advancing DoD High Power Systems: Transition of High Al% AlGaN from Lab to Fab," and another in the area of 5G/6G, called "Wideband, Scalable MIMO arrays for NextG Systems: From Antennas to Decoders."

Researchers both at Lincoln Laboratory and on campus will contribute to a quantum technology project called "Community‐driven Hybrid Integrated Quantum‐Photonic Integrated circuits (CHIQPI)."

Lincoln Laboratory researchers will also participate in the "Wideband Same‐Frequency STAR Array Platform Based on Heterogeneous Multi-Domain Self‐Interference Cancellation" project.

The anticipated funding for these four projects follows a $7.7 million grant awarded earlier this year to MIT from the NEMC Hub, alongside an agreement between MIT and Applied Materials, to add advanced nanofabrication equipment and capabilities to MIT.nano.

The funding comes amid construction of the Compound Semiconductor Laboratory – Microsystem Integration Facility (CSL-MIF) at Lincoln Laboratory. The CSL-MIF will complement Lincoln Laboratory's existing Microelectronics Laboratory, which has remained the U.S. government's most advanced silicon-based research and fabrication facility for decades. When completed in 2028, the CSL-MIF is expected to play a vital role in the greater CHIPS and Science Act ecosystem.

"Lincoln Laboratory has a long history of developing advanced microelectronics to enable critical national security systems," said Melissa Choi, Lincoln Laboratory director. "We are excited to embark on these awarded projects, leveraging our microelectronics facilities and partnering with fellow hub members to be at the forefront of U.S. microelectronics innovation."

Officials who spoke at the Sept. 18 event emphasized the national security and economic imperatives of building a robust microelectronics workforce and innovation network.

"The Microelectronics Commons is an essential part of the CHIPS and Science Act's whole-of-government approach to strengthen the U.S. microelectronics ecosystem and secure lasting technical leadership in this critical sector," said Dev Shenoy, the principal director for microelectronics in the Office of the Under Secretary of Defense for Research and Engineering. "I believe in the incredible impact this work will have for American economies, American defense, and the American people."

"The secret sauce of what made the U.S. the lead innovator in the world for the last 100 years was the coming together of the U.S. government and the public sector, together with the private sector and teaming up with academia and research," said Amos Hochstein, special presidential coordinator for global infrastructure and energy security at the U.S. Department of State. "That is what enabled us to be the forefront of innovation and technology, and that is what we have to do again."


AI simulation gives people a glimpse of their potential future self

By enabling users to chat with an older version of themselves, Future You is aimed at reducing anxiety and guiding young people to make better choices.


Have you ever wanted to travel through time to see what your future self might be like? Now, thanks to the power of generative AI, you can.

Researchers from MIT and elsewhere created a system that enables users to have an online, text-based conversation with an AI-generated simulation of their potential future self.

Dubbed Future You, the system is aimed at helping young people improve their sense of future self-continuity, a psychological concept that describes how connected a person feels with their future self.

Research has shown that a stronger sense of future self-continuity can positively influence how people make long-term decisions, from a person’s likelihood of contributing to financial savings to their focus on achieving academic success.

Future You utilizes a large language model that draws on information provided by the user to generate a relatable, virtual version of the individual at age 60. This simulated future self can answer questions about what someone’s life in the future could be like, as well as offer advice or insights on the path they could follow.

In an initial user study, the researchers found that after interacting with Future You for about half an hour, people reported decreased anxiety and felt a stronger sense of connection with their future selves.

“We don’t have a real time machine yet, but AI can be a type of virtual time machine. We can use this simulation to help people think more about the consequences of the choices they are making today,” says Pat Pataranutaporn, a recent Media Lab doctoral graduate who is actively developing a program to advance human-AI interaction research at MIT, and co-lead author of a paper on Future You.

Pataranutaporn is joined on the paper by co-lead authors Kavin Winson, a researcher at KASIKORN Labs, and Peggy Yin, a Harvard University undergraduate; Auttasak Lapapirojn and Pichayoot Ouppaphan of KASIKORN Labs; and senior authors Monchai Lertsutthiwong, head of AI research at the KASIKORN Business-Technology Group; Pattie Maes, the Germeshausen Professor of Media Arts and Sciences and head of the Fluid Interfaces group at MIT; and Hal Hershfield, professor of marketing, behavioral decision making, and psychology at the University of California at Los Angeles. The research will be presented at the IEEE Conference on Frontiers in Education.

A realistic simulation

Studies about conceptualizing one’s future self go back to at least the 1960s. One early method aimed at improving future self-continuity had people write letters to their future selves. More recently, researchers utilized virtual reality goggles to help people visualize future versions of themselves.

But none of these methods were very interactive, limiting the impact they could have on a user.

With the advent of generative AI and large language models like ChatGPT, the researchers saw an opportunity to make a simulated future self that could discuss someone’s actual goals and aspirations during a normal conversation.

“The system makes the simulation very realistic. Future You is much more detailed than what a person could come up with by just imagining their future selves,” says Maes.

Users begin by answering a series of questions about their current lives, things that are important to them, and goals for the future.

The AI system uses this information to create what the researchers call “future self memories,” which provide a backstory the model pulls from when interacting with the user.

For instance, the chatbot could talk about the highlights of someone’s future career or answer questions about how the user overcame a particular challenge. This is possible because ChatGPT has been trained on extensive data involving people talking about their lives, careers, and good and bad experiences.
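To illustrate how such a backstory might be wired into a prompt, here is a minimal, hypothetical sketch. None of it comes from the Future You paper: the profile fields, the memory strings, and the call_llm() helper are invented placeholders, and any chat-completion API could stand in for that call.

```python
# A hypothetical sketch of prompting an LLM with "future self memories".
# call_llm() is a placeholder, not a real API; swap in whatever chat API you use.
def build_future_self_prompt(profile: dict, memories: list[str]) -> str:
    """Assemble a system prompt that casts the model as the user's 60-year-old self."""
    memory_text = "\n".join(f"- {m}" for m in memories)
    return (
        f"You are {profile['name']} at age 60, speaking warmly to your younger self "
        f"(currently {profile['age']}). Stay consistent with these synthesized memories:\n"
        f"{memory_text}\n"
        "Use phrases like 'when I was your age', and remind the user that this is only "
        "one possible future, not a prophecy."
    )

profile = {"name": "Alex", "age": 22}
memories = [
    "Finished a degree in environmental engineering and moved into water policy.",
    "A stressful job search at 22 eventually led to a mentor who changed everything.",
]
system_prompt = build_future_self_prompt(profile, memories)
# reply = call_llm(system=system_prompt, user="Did the career risk I'm weighing pay off?")
```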

The user engages with the tool in two ways: through introspection, when they consider their life and goals as they construct their future selves, and retrospection, when they contemplate whether the simulation reflects who they see themselves becoming, says Yin.

“You can imagine Future You as a story search space. You have a chance to hear how some of your experiences, which may still be emotionally charged for you now, could be metabolized over the course of time,” she says.

To help people visualize their future selves, the system generates an age-progressed photo of the user. The chatbot is also designed to provide vivid answers using phrases like “when I was your age,” so the simulation feels more like an actual future version of the individual.

The ability to take advice from an older version of oneself, rather than a generic AI, can have a stronger positive impact on a user contemplating an uncertain future, Hershfield says.

“The interactive, vivid components of the platform give the user an anchor point and take something that could result in anxious rumination and make it more concrete and productive,” he adds.

But that realism could backfire if the simulation moves in a negative direction. To prevent this, the researchers ensure Future You cautions users that it shows only one potential version of their future self, and that they have the agency to change their lives. Providing alternate answers to the questionnaire yields a totally different conversation.

“This is not a prophecy, but rather a possibility,” Pataranutaporn says.

Aiding self-development

To evaluate Future You, the researchers conducted a user study with 344 individuals. Some users interacted with the system for 10-30 minutes, while others either interacted with a generic chatbot or only filled out surveys.

Participants who used Future You were able to build a closer relationship with their ideal future selves, based on a statistical analysis of their responses. These users also reported less anxiety about the future after their interactions. In addition, Future You users said the conversation felt sincere and that their values and beliefs seemed consistent in their simulated future identities.

“This work forges a new path by taking a well-established psychological technique to visualize times to come — an avatar of the future self — with cutting edge AI. This is exactly the type of work academics should be focusing on as technology to build virtual self models merges with large language models,” says Jeremy Bailenson, the Thomas More Storke Professor of Communication at Stanford University, who was not involved with this research.

Building off the results of this initial user study, the researchers continue to fine-tune the ways they establish context and prime users so they have conversations that help build a stronger sense of future self-continuity.

“We want to guide the user to talk about certain topics, rather than asking their future selves who the next president will be,” Pataranutaporn says.

They are also adding safeguards to prevent people from misusing the system. For instance, one could imagine a company creating a “future you” of a potential customer who achieves some great outcome in life because they purchased a particular product.

Moving forward, the researchers want to study specific applications of Future You, perhaps by enabling people to explore different careers or visualize how their everyday choices could impact climate change.

They are also gathering data from the Future You pilot to better understand how people use the system.

“We don’t want people to become dependent on this tool. Rather, we hope it is a meaningful experience that helps them see themselves and the world differently, and helps with self-development,” Maes says.

The researchers acknowledge the support of Thanawit Prasongpongchai, a designer at KBTG and visiting scientist at the Media Lab.


State of Supply Chain Sustainability report reveals growing investor pressure, challenges with emissions tracking

The 2024 report highlights five years of global progress but uncovers gaps between companies’ sustainability goals and the investments required to achieve them.


The MIT Center for Transportation and Logistics (MIT CTL) and the Council of Supply Chain Management Professionals (CSCMP) have released the 2024 State of Supply Chain Sustainability report, marking the fifth edition of this influential research. The report highlights how supply chain sustainability practices have evolved over the past five years, assessing their global implementation and implications for industries, professionals, and the environment.

This year’s report is based on four years of comprehensive international surveys with responses from over 7,000 supply chain professionals representing more than 80 countries, coupled with insights from executive interviews. It explores how external pressures on firms, such as the growing investor demand and climate regulations, are driving sustainability initiatives. However, it also reveals persistent gaps between companies’ sustainability goals and the actual investments required to achieve them.

"Over the past five years, we have seen supply chains face unprecedented global challenges. While companies have made strides, our analysis shows that many are still struggling to align their sustainability ambitions with real progress, particularly when it comes to tackling Scope 3 emissions," says Josué Velázquez Martínez, MIT CTL research scientist and lead investigator. "Scope 3 emissions, which account for the vast majority of a company’s carbon footprint, remain a major hurdle due to the complexity of tracking emissions from indirect supply chain activities. The margin of error of the most common approach to estimate emissions are drastic, which disincentivizes companies to make more sustainable choices at the expense of investing in green alternatives."

Among the key findings: investor pressure has become a leading driver of corporate sustainability efforts; Scope 3 emissions remain the most difficult to track; and a persistent gap separates companies’ sustainability goals from the investments required to achieve them.

Mark Baxa, president and CEO of CSCMP, emphasized the importance of collaboration: "Businesses and consumers alike are putting pressure on us to source and supply products to live up to their social and environmental standards. The State of Supply Chain Sustainability 2024 provides a thorough analysis of our current understanding, along with valuable insights on how to improve our Scope 3 emissions accounting to have a greater impact on lowering our emissions."

The report also underscores the importance of technological innovations, such as machine learning, advanced data analytics, and standardization to improve the accuracy of emissions tracking and help firms make data-driven sustainability decisions.

The 2024 State of Supply Chain Sustainability can be accessed online or in PDF format at sustainable.mit.edu.

The MIT CTL is a world leader in supply chain management research and education, with over 50 years of expertise. The center's work spans industry partnerships, cutting-edge research, and the advancement of sustainable supply chain practices. CSCMP is the leading global association for supply chain professionals. Established in 1963, CSCMP provides its members with education, research, and networking opportunities to advance the field of supply chain management.


Aligning economic and regulatory frameworks for today’s nuclear reactor technology

Today’s regulations for nuclear reactors are unprepared for how the field is evolving. PhD student Liam Hines wants to ensure that policy keeps up with the technology.


Liam Hines ’22 didn't move to Sarasota, Florida, until high school, but he’s a Floridian through and through. He jokes that he’s even got a floral shirt, what he calls a “Florida formal,” for every occasion.

Which is why it broke his heart when toxic red algae devastated the Sunshine State’s coastline, including at his favorite beach, Caspersen. The outbreak made headline news during his high school years, with the blooms destroying marine wildlife and damaging the state’s tourism-driven economy.

In Florida, Hines says, environmental awareness is pretty high because everyday citizens are being directly impacted by climate change. After all, it’s hard not to worry when beautiful white sand beaches are covered in dead fish. Ongoing concerns about the climate cemented Hines’ resolve to pick a career that would have a strong “positive environmental impact.” He chose nuclear, as he saw it as “a green, low-carbon-emissions energy source with a pretty straightforward path to implementation.”

Undergraduate studies at MIT

Knowing he wanted a career in the sciences, Hines applied to MIT and was accepted for undergraduate studies in fall 2018. An orientation program hosted by the Department of Nuclear Science and Engineering (NSE) sold him on the idea of pursuing the field. “The department is just a really tight-knit community, and that really appealed to me,” Hines says.

During his undergraduate years, Hines realized he needed a job to pay part of his bills. “Instead of answering calls at the dorm front desk or working in the dining halls, I decided I’m going to become a licensed nuclear operator onsite,” he says. “Reactor operations offer so much hands-on experience with real nuclear systems. It doesn’t hurt that it pays better.” Becoming a licensed nuclear reactor operator is hard work, however, involving a year-long training process studying maintenance, operations, and equipment oversight. A bonus: The job, supervising the MIT Nuclear Reactor Laboratory, taught him the fundamentals of nuclear physics and engineering.

Always interested in research, Hines got an early start by exploring the regulatory challenges of advanced fusion systems, including open questions about licensing requirements and the safety consequences of the onsite radionuclide inventory. His undergraduate research involved studying precedent for such fusion facilities and comparing them to experimental facilities such as the Tokamak Fusion Test Reactor at the Princeton Plasma Physics Laboratory.

Doctoral focus on legal and regulatory frameworks

When scientists want to make technologies as safe as possible, they have to do two things in concert: evaluate the safety of the technology itself, and make sure legal and regulatory structures keep pace as these advanced technologies evolve. Hines is taking such a two-pronged approach to his doctoral work on nuclear fission systems.

Under the guidance of Professor Koroush Shirvan, Hines is conducting systems modeling of various reactor cores that include graphite, and simulating operations over long time spans. He then studies radionuclide transport from low-level waste facilities — the consequences of offsite storage after 50, 100, or even 10,000 years. The work has to hit safety and engineering margins while treading a fine line. “You want to make sure you’re not over-engineering systems and adding undue cost, but also making sure to assess the unique hazards of these advanced technologies as accurately as possible,” Hines says.

On a parallel track, under Professor Haruko Wainwright’s advisement, Hines is applying the current science on radionuclide geochemistry to track radionuclide wastes and map their profile for hazards. One of the challenges fission reactors face is that existing low-level waste regulations were fine-tuned to old reactors. Regulations have not kept up: “Now that we have new technologies with new wastes, some of the hazards of the new waste are completely missed by existing standards,” Hines says. He is working to seal these gaps.

A philosophy-driven outlook

Hines is grateful for the dynamic learning environment at NSE. “A lot of the faculty have that go-getter attitude,” he points out, impressed by the entrepreneurial spirit on campus. “It’s made me confident to really tackle the things that I care about.”

An ethics class as an undergraduate made Hines realize there were discussions in class he could apply to the nuclear realm, especially when it came to teasing apart the implications of the technology — where the devices would be built and who they would serve. He eventually went on to double-major in NSE and philosophy.

The framework style of reading and reasoning involved in studying philosophy is particularly relevant in his current line of work, where he has to extract key points regarding nuclear regulatory issues. Much like philosophy discussions today, which revisit material that has been debated for centuries and frame it through new perspectives, nuclear regulatory issues need to take the long view.

“In philosophy, we have to insert ourselves into very large conversations. Similarly, in nuclear engineering, you have to understand how to take apart the discourse that’s most relevant to your research and frame it,” Hines says. This technique is especially necessary, he adds, because nuclear regulatory issues can seem like wading through the weeds of nitty-gritty technical matters, yet they can have a huge impact on the public and on public perception.

As for Florida, Hines visits every chance he can get. The red tide still surfaces but not as consistently as it once did. And since he started his job as a nuclear operator in his undergraduate days, Hines has progressed to senior reactor operator. This time around he gets to sign off on the checklists. “It’s much like when I was shift lead at Dunkin’ Donuts in high school,” Hines says, “everyone is kind of doing the same thing, but you get to be in charge for the afternoon.”


AI pareidolia: Can machines spot faces in inanimate objects?

New dataset of “illusory” faces reveals differences between human and algorithmic face detection, links to animal face recognition, and a formula predicting where people most often perceive faces.


In 1994, Florida jewelry designer Diana Duyser discovered what she believed to be the Virgin Mary’s image in a grilled cheese sandwich, which she preserved and later auctioned for $28,000. But how much do we really understand about pareidolia, the phenomenon of seeing faces and patterns in objects when they aren’t really there? 

A new study from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) delves into this phenomenon, introducing an extensive, human-labeled dataset of 5,000 pareidolic images, far surpassing previous collections. Using this dataset, the team discovered several surprising results about the differences between human and machine perception, and how the ability to see faces in a slice of toast might have saved your distant relatives’ lives.

“Face pareidolia has long fascinated psychologists, but it’s been largely unexplored in the computer vision community,” says Mark Hamilton, MIT PhD student in electrical engineering and computer science, CSAIL affiliate, and lead researcher on the work. “We wanted to create a resource that could help us understand how both humans and AI systems process these illusory faces.”

So what did all of these fake faces reveal? For one, AI models don’t seem to recognize pareidolic faces like we do. Surprisingly, the team found that it wasn’t until they trained algorithms to recognize animal faces that they became significantly better at detecting pareidolic faces. This unexpected connection hints at a possible evolutionary link between our ability to spot animal faces — crucial for survival — and our tendency to see faces in inanimate objects. “A result like this seems to suggest that pareidolia might not arise from human social behavior, but from something deeper: like quickly spotting a lurking tiger, or identifying which way a deer is looking so our primordial ancestors could hunt,” says Hamilton.

[Image: a row of five photos of animal faces above five photos of inanimate objects that resemble faces]

Another intriguing discovery is what the researchers call the “Goldilocks Zone of Pareidolia,” a class of images where pareidolia is most likely to occur. “There’s a specific range of visual complexity where both humans and machines are most likely to perceive faces in non-face objects,” says William T. Freeman, MIT professor of electrical engineering and computer science and principal investigator of the project. “Too simple, and there’s not enough detail to form a face. Too complex, and it becomes visual noise.”

To uncover this, the team developed an equation that models how people and algorithms detect illusory faces.  When analyzing this equation, they found a clear “pareidolic peak” where the likelihood of seeing faces is highest, corresponding to images that have “just the right amount” of complexity. This predicted “Goldilocks zone” was then validated in tests with both real human subjects and AI face detection systems.
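
The paper’s equation is not reproduced here, but the intuition behind a “pareidolic peak” can be illustrated with a toy model in which the chance of seeing a face is the product of two competing factors: enough structure to suggest features, but not so much that the image dissolves into noise. The function form and parameters below are illustrative assumptions, not the researchers’ formula.

```python
import numpy as np

def p_face(complexity, a=8.0, c_low=0.3, b=8.0, c_high=0.7):
    """Toy pareidolia model: probability of perceiving a face as the product of
    an 'enough detail' term and a 'not too noisy' term (illustrative only)."""
    enough_detail = 1.0 / (1.0 + np.exp(-a * (complexity - c_low)))
    not_too_noisy = 1.0 / (1.0 + np.exp(-b * (c_high - complexity)))
    return enough_detail * not_too_noisy

# Scan image complexity on a 0-1 scale and locate the toy "pareidolic peak".
c = np.linspace(0.0, 1.0, 501)
peak = c[np.argmax(p_face(c))]
print(f"Toy model peaks at complexity of about {peak:.2f}")  # lands between c_low and c_high
```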

[Image: three photos of clouds above three photos of a fruit tart; in each set the left image is “Too Simple” to perceive a face, the middle is “Just Right,” and the right is “Too Complex”]

This new dataset, “Faces in Things,” dwarfs those of previous studies that typically used only 20-30 stimuli. This scale allowed the researchers to explore how state-of-the-art face detection algorithms behaved after fine-tuning on pareidolic faces, showing that not only could these algorithms be edited to detect these faces, but that they could also act as a silicon stand-in for our own brain, allowing the team to ask and answer questions about the origins of pareidolic face detection that are impossible to ask in humans. 

To build this dataset, the team curated approximately 20,000 candidate images from the LAION-5B dataset, which were then meticulously labeled and judged by human annotators. This process involved drawing bounding boxes around perceived faces and answering detailed questions about each face, such as the perceived emotion, age, and whether the face was accidental or intentional. “Gathering and annotating thousands of images was a monumental task,” says Hamilton. “Much of the dataset owes its existence to my mom,” a retired banker, “who spent countless hours lovingly labeling images for our analysis.”
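
To make the annotation process concrete, the sketch below shows what one such record might look like; the field names and types are hypothetical, not the actual schema of the “Faces in Things” dataset.

```python
from dataclasses import dataclass

@dataclass
class PareidolicFaceAnnotation:
    """One hypothetical annotation record for an image containing an illusory face."""
    image_id: str            # identifier of the source image
    bbox: tuple              # bounding box around the perceived face: (x, y, width, height) in pixels
    perceived_emotion: str   # e.g., "happy", "surprised", "neutral"
    perceived_age: str       # e.g., "child", "adult", "elderly"
    accidental: bool         # True if the face appears unintentional, False if designed to look like a face
    annotator_notes: str = ""  # free-text comments from the human labeler

example = PareidolicFaceAnnotation(
    image_id="candidate_000123",
    bbox=(412, 88, 150, 160),
    perceived_emotion="surprised",
    perceived_age="adult",
    accidental=True,
)
print(example)
```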

The study also has potential applications in improving face detection systems by reducing false positives, which could have implications for fields like self-driving cars, human-computer interaction, and robotics. The dataset and models could also help areas like product design, where understanding and controlling pareidolia could create better products. “Imagine being able to automatically tweak the design of a car or a child’s toy so it looks friendlier, or ensuring a medical device doesn’t inadvertently appear threatening,” says Hamilton.

“It’s fascinating how humans instinctively interpret inanimate objects with human-like traits. For instance, when you glance at an electrical socket, you might immediately envision it singing, and you can even imagine how it would ‘move its lips.’ Algorithms, however, don’t naturally recognize these cartoonish faces in the same way we do,” says Hamilton. “This raises intriguing questions: What accounts for this difference between human perception and algorithmic interpretation? Is pareidolia beneficial or detrimental? Why don’t algorithms experience this effect as we do? These questions sparked our investigation, as this classic psychological phenomenon in humans had not been thoroughly explored in algorithms.”

As the researchers prepare to share their dataset with the scientific community, they’re already looking ahead. Future work may involve training vision-language models to understand and describe pareidolic faces, potentially leading to AI systems that can engage with visual stimuli in more human-like ways.

“This is a delightful paper! It is fun to read and it makes me think. Hamilton et al. propose a tantalizing question: Why do we see faces in things?” says Pietro Perona, the Allen E. Puckett Professor of Electrical Engineering at Caltech, who was not involved in the work. “As they point out, learning from examples, including animal faces, goes only half-way to explaining the phenomenon. I bet that thinking about this question will teach us something important about how our visual system generalizes beyond the training it receives through life.”

Hamilton and Freeman’s co-authors include Simon Stent, staff research scientist at the Toyota Research Institute; Ruth Rosenholtz, principal research scientist in the Department of Brain and Cognitive Sciences, NVIDIA research scientist, and former CSAIL member; and CSAIL affiliates postdoc Vasha DuTell, Anne Harrington MEng ’23, and Research Scientist Jennifer Corbett. Their work was supported, in part, by the National Science Foundation and the CSAIL MEnTorEd Opportunities in Research (METEOR) Fellowship, while being sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator. The MIT SuperCloud and Lincoln Laboratory Supercomputing Center provided HPC resources for the researchers’ results.

This work is being presented this week at the European Conference on Computer Vision.


Where flood policy helps most — and where it could do more

A U.S. program provides important flood insurance relief, but it’s used more in communities with greater means to protect themselves.


Flooding, including the devastation caused recently by Hurricane Helene, is responsible for $5 billion in annual damages in the U.S. That’s more than any other type of weather-related extreme event.

To address the problem, the federal government instituted a program in 1990 that helps reduce flood insurance costs in communities enacting measures to better handle flooding. If, say, a town preserves open space as a buffer against coastal flooding, or develops better stormwater management, area policyholders get discounts on their premiums. Studies show the program works well: It has reduced overall flood damage in participating communities.

However, a new study led by an MIT researcher shows that the effects of the program differ greatly from place to place. For instance, higher-population communities, which likely have more means to introduce flood defenses, benefit more than smaller communities, to the tune of about $4,000 per insured household.

“When we evaluate it, the effects of the same policy vary widely among different types of communities,” says study co-author Lidia Cano Pecharromán, a PhD candidate in MIT’s Department of Urban Studies and Planning.

Referring to climate and environmental justice concerns, she adds: “It’s important to understand not just if a policy is effective, but who is benefitting, so that we can make necessary adjustments and reach all the targets we want to reach.”

The paper, “Exposing Disparities in Flood Adaptation for Equitable Future Interventions in the USA,” is published today in Nature Communications. The authors are Cano Pecharromán and ChangHoon Hahn, an associate research scholar at Princeton University.

Able to afford help

The program in question was developed by the Federal Emergency Management Agency (FEMA), which has a division, the Federal Insurance and Mitigation Administration, focused on this issue. In 1990, FEMA initiated the National Flood Insurance Program’s Community Rating System, which incentivizes communities to enact measures that help prevent or reduce flooding.

Communities can engage in a broad set of related activities, including floodplain mapping, preservation of open spaces, stormwater management activities, creating flood warning systems, or even developing public information and participation programs. In exchange, area residents receive a discount on their flood insurance premium rates.

To conduct the study, the researchers examined 2.5 million flood insurance claims filed with FEMA since then. They also examined U.S. Census Bureau data to analyze demographic and economic data about communities, and incorporated flood risk data from the First Street Foundation.

By comparing over 1,500 communities in the FEMA program, the researchers were able to quantify its different relative effects — depending on community characteristics such as population, race, income or flood risk. For instance, higher-income communities seem better able to make more flood-control and mitigation investments, earning better FEMA ratings and, ultimately, enacting more effective measures.

“You see some positive effects for low-income communities, but as the risks go up, these disappear, while only high-income communities continue seeing these positive effects,” says Cano Pecharromán. “They are likely able to afford measures that handle higher risk indices for flooding.”

Similarly, the researchers found, communities with higher overall levels of education fare better from the flood-insurance program, with about $2,000 more in savings per individual policy than communities with lower levels of education. One way or another, communities with more assets in the first place — size, wealth, education — are better able to deploy or hire the civic and technical expertise necessary to enact more best practices against flood damage.

And even among lower-income communities in the program, communities with less population diversity see greater effectiveness from their flood program activities, realizing a gain of about $6,000 per household compared to communities where racial and ethnic minorities are predominant.

“These are substantial effects, and we should consider these things when making decisions and reviewing if our climate adaptation policies work,” Cano Pecharromán says.

An even larger number of communities is not in the FEMA program at all. The study identified 14,729 unique U.S. communities with flood issues. Many of them likely lack the capacity to engage on flooding in the way that even the lower-ranked communities within the FEMA program have.

“If we are able to consider all the communities that are not in the program because they can’t afford to do the basics, we would likely see that the effects are even larger among different communities,” Cano Pecharromán says.

Getting communities started

To make the program more effective for more people, Cano Pecharromán suggests that the federal government should consider how to help communities enact flood-control and mitigation measures in the first place.

“When we set out these kinds of policies, we need to consider how certain types of communities might need help with implementation,” she says.

Methodologically, the researchers arrived at their conclusions using an advanced statistical approach that Hahn, who is an astrophysicist by training, has applied to the study of dark energy and galaxies. Instead of finding one “average treatment effect” of the FEMA program across all participating communities, they quantified the program’s impact while subdividing the set of participating communities according to their characteristics.

“We are able to calculate the causal effect of [the program], not as an average, which can hide these inequalities, but at every given level of the specific characteristic of communities we’re looking at, different levels of income, different levels of education, and more,” Cano Pecharromán says.
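
The authors’ estimator itself is specialized, but the underlying idea, measuring a program’s effect at each level of a community characteristic rather than as one average, can be sketched with a simple stratified comparison on synthetic data. Everything below (the data, the column names, and the quartile split) is an illustrative assumption, not the study’s method or numbers.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000

# Synthetic community-level data (illustrative only): the program's effect grows with income.
income = rng.uniform(30_000, 150_000, n)           # community median income
in_program = rng.integers(0, 2, n)                 # 1 if the community participates
effect = 0.05 * (income - 30_000) * in_program     # treatment effect rises with income
savings = 1_000 + 0.01 * income + effect + rng.normal(0, 500, n)

df = pd.DataFrame({"income": income, "in_program": in_program, "savings": savings})

# A single average difference hides the heterogeneity...
ate = df.loc[df.in_program == 1, "savings"].mean() - df.loc[df.in_program == 0, "savings"].mean()
print(f"Average difference: ${ate:,.0f} per policy")

# ...so compare participating and non-participating communities within income quartiles.
df["income_bin"] = pd.qcut(df["income"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
by_bin = (df.groupby(["income_bin", "in_program"], observed=True)["savings"].mean()
            .unstack("in_program"))
print((by_bin[1] - by_bin[0]).rename("effect_by_income_quartile"))
```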

Government officials have seen Cano Pecharromán present the preliminary findings at meetings, and expressed interest in the results. Currently, she is also working on a follow-up study, which aims to pinpoint which types of local flood-mitigation programs provide the biggest benefits for local communities.

Support for the research was provided, in part, by the La Caixa Foundation, the MIT Martin Family Society of Fellows for Sustainability, and the AI Accelerator program of Schmidt Sciences.


Helping robots zero in on the objects that matter

A new method called Clio enables robots to quickly map a scene and identify the items they need to complete a given set of tasks.


Imagine having to straighten up a messy kitchen, starting with a counter littered with sauce packets. If your goal is to wipe the counter clean, you might sweep up the packets as a group. If, however, you wanted to first pick out the mustard packets before throwing the rest away, you would sort more discriminately, by sauce type. And if, among the mustards, you had a hankering for Grey Poupon, finding this specific brand would entail a more careful search.

MIT engineers have developed a method that enables robots to make similarly intuitive, task-relevant decisions.

The team’s new approach, named Clio, enables a robot to identify the parts of a scene that matter, given the tasks at hand. With Clio, a robot takes in a list of tasks described in natural language and, based on those tasks, it then determines the level of granularity required to interpret its surroundings and “remember” only the parts of a scene that are relevant.

In real experiments ranging from a cluttered cubicle to a five-story building on MIT’s campus, the team used Clio to automatically segment a scene at different levels of granularity, based on a set of tasks specified in natural-language prompts such as “move rack of magazines” and “get first aid kit.”

The team also ran Clio in real-time on a quadruped robot. As the robot explored an office building, Clio identified and mapped only those parts of the scene that related to the robot’s tasks (such as retrieving a dog toy while ignoring piles of office supplies), allowing the robot to grasp the objects of interest.

Clio is named after the Greek muse of history, for its ability to identify and remember only the elements that matter for a given task. The researchers envision that Clio would be useful in many situations and environments in which a robot would have to quickly survey and make sense of its surroundings in the context of its given task.

“Search and rescue is the motivating application for this work, but Clio can also power domestic robots and robots working on a factory floor alongside humans,” says Luca Carlone, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), principal investigator in the Laboratory for Information and Decision Systems (LIDS), and director of the MIT SPARK Laboratory. “It’s really about helping the robot understand the environment and what it has to remember in order to carry out its mission.”

The team details their results in a study appearing today in the journal Robotics and Automation Letters. Carlone’s co-authors include members of the SPARK Lab: Dominic Maggio, Yun Chang, Nathan Hughes, and Lukas Schmid; and members of MIT Lincoln Laboratory: Matthew Trang, Dan Griffith, Carlyn Dougherty, and Eric Cristofalo.

Open fields

Huge advances in the fields of computer vision and natural language processing have enabled robots to identify objects in their surroundings. But until recently, robots were only able to do so in “closed-set” scenarios, where they are programmed to work in a carefully curated and controlled environment, with a finite number of objects that the robot has been pretrained to recognize.

In recent years, researchers have taken a more “open” approach to enable robots to recognize objects in more realistic settings. In the field of open-set recognition, researchers have leveraged deep-learning tools to build neural networks that can process billions of images from the internet, along with each image’s associated text (such as a friend’s Facebook picture of a dog, captioned “Meet my new puppy!”).

From millions of image-text pairs, a neural network learns to identify the segments of a scene that are characteristic of certain terms, such as a dog. A robot can then apply that neural network to spot a dog in a totally new scene.

But a challenge still remains as to how to parse a scene in a useful way that is relevant for a particular task.

“Typical methods will pick some arbitrary, fixed level of granularity for determining how to fuse segments of a scene into what you can consider as one ‘object,’” Maggio says. “However, the granularity of what you call an ‘object’ is actually related to what the robot has to do. If that granularity is fixed without considering the tasks, then the robot may end up with a map that isn’t useful for its tasks.”

Information bottleneck

With Clio, the MIT team aimed to enable robots to interpret their surroundings with a level of granularity that can be automatically tuned to the tasks at hand.

For instance, given a task of moving a stack of books to a shelf, the robot should be able to  determine that the entire stack of books is the task-relevant object. Likewise, if the task were to move only the green book from the rest of the stack, the robot should distinguish the green book as a single target object and disregard the rest of the scene — including the other books in the stack.

The team’s approach combines state-of-the-art computer vision and large language models comprising neural networks that make connections among millions of open-source images and semantic text. They also incorporate mapping tools that automatically split an image into many small segments, which can be fed into the neural network to determine if certain segments are semantically similar. The researchers then leverage an idea from classic information theory called the “information bottleneck,” which they use to compress a number of image segments in a way that picks out and stores segments that are semantically most relevant to a given task.

“For example, say there is a pile of books in the scene and my task is just to get the green book. In that case we push all this information about the scene through this bottleneck and end up with a cluster of segments that represent the green book,” Maggio explains. “All the other segments that are not relevant just get grouped in a cluster which we can simply remove. And we’re left with an object at the right granularity that is needed to support my task.”
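
As a rough sketch of that idea, one could embed each image segment and the task description with a vision-language model, merge near-duplicate segments into candidate objects, and discard candidates unrelated to the task. The greedy merging and thresholds below are illustrative assumptions, a simplified stand-in for Clio’s information-bottleneck optimization rather than its actual implementation.

```python
import numpy as np

def l2_normalize(x):
    """Scale embeddings to unit length so dot products act as cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def select_task_relevant_objects(segment_embs, task_emb, merge_thresh=0.85, keep_thresh=0.25):
    """Group semantically similar image segments, then keep only groups relevant to the task.

    segment_embs: (n_segments, d) embeddings of image segments (e.g., from a vision-language model)
    task_emb:     (d,) embedding of the natural-language task
    Returns a list of segment-index groups judged relevant to the task.
    """
    segs = l2_normalize(np.asarray(segment_embs, dtype=float))
    task = l2_normalize(np.asarray(task_emb, dtype=float))

    # Greedy merging: segments with very similar embeddings become one candidate object.
    groups = []
    for i, emb in enumerate(segs):
        for g in groups:
            centroid = l2_normalize(segs[g].mean(axis=0))
            if float(centroid @ emb) > merge_thresh:
                g.append(i)
                break
        else:
            groups.append([i])

    # Keep only candidate objects whose centroid is similar enough to the task description;
    # everything else is discarded, analogous to compressing away task-irrelevant detail.
    relevant = []
    for g in groups:
        centroid = l2_normalize(segs[g].mean(axis=0))
        if float(centroid @ task) > keep_thresh:
            relevant.append(g)
    return relevant
```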

The researchers demonstrated Clio in different real-world environments.

“What we thought would be a really no-nonsense experiment would be to run Clio in my apartment, where I didn’t do any cleaning beforehand,” Maggio says.

The team drew up a list of natural-language tasks, such as “move pile of clothes” and then applied Clio to images of Maggio’s cluttered apartment. In these cases, Clio was able to quickly segment scenes of the apartment and feed the segments through the Information Bottleneck algorithm to identify those segments that made up the pile of clothes.

They also ran Clio on Boston Dynamics’ quadruped robot, Spot. They gave the robot a list of tasks to complete, and as the robot explored and mapped the inside of an office building, Clio ran in real-time on an on-board computer mounted to Spot, to pick out segments in the mapped scenes that visually related to the given task. The method generated an overlaying map showing just the target objects, which the robot then used to approach the identified objects and physically complete the task.

“Running Clio in real-time was a big accomplishment for the team,” Maggio says. “A lot of prior work can take several hours to run.”

Going forward, the team plans to adapt Clio to be able to handle higher-level tasks and build upon recent advances in photorealistic visual scene representations.

“We’re still giving Clio tasks that are somewhat specific, like ‘find deck of cards,’” Maggio says. “For search and rescue, you need to give it more high-level tasks, like ‘find survivors,’ or ‘get power back on.’ So, we want to get to a more human-level understanding of how to accomplish more complex tasks.”

This research was supported, in part, by the U.S. National Science Foundation, the Swiss National Science Foundation, MIT Lincoln Laboratory, the U.S. Office of Naval Research, and the U.S. Army Research Lab Distributed and Collaborative Intelligent Systems and Technology Collaborative Research Alliance.


MIT launches new Music Technology and Computation Graduate Program

The program will invite students to investigate new vistas at the intersection of music, computing, and technology.


A new, multidisciplinary MIT graduate program in music technology and computation will feature faculty, labs, and curricula from across the Institute.

The program is a collaboration between the Music and Theater Arts Section in the School of Humanities, Arts, and Social Sciences (SHASS) and the School of Engineering. Faculty for the program share appointments between the Music and Theater Arts Section, the Department of Electrical Engineering and Computer Science (EECS), and the MIT Schwarzman College of Computing.

“The launch of a new graduate program in music technology strikes me as both a necessary and a provocative gesture — an important leap in an era being rapidly redefined by exponential growth in computation, artificial intelligence, and human-computer interactions of every conceivable kind,” says Jay Scheib,​​ head of the MIT Music and Theater Arts Section and the Class of 1949 Professor.

“Music plays an elegant role at the fore of a remarkable convergence of art and technology,” adds Scheib. “It’s the right time to launch this program and if not at MIT, then where?”

MIT’s practitioners define music technology as the field of scientific inquiry where they study, discover, and develop new computational approaches to music that include music information retrieval; artificial intelligence; machine learning; generative algorithms; interaction and performance systems; digital instrument design; conceptual and perceptual modeling of music; acoustics; audio signal processing; and software development for creative expression and music applications.

Eran Egozy, professor of the practice in music technology and one of the program leads, says MIT’s focus is technical research in music technology that always centers the humanistic and artistic aspects of making music.

“There are so many MIT students who are fabulous musicians,” says Egozy. “We'll approach music technology as computer scientists, mathematicians, and musicians.”

With the launch of this new program — an offering alongside those available in MIT’s Media Lab and elsewhere — Egozy sees MIT becoming the obvious destination for students interested in music and computation study, preparing high-impact graduates for roles in academia and industry, while also helping mold creative, big-picture thinkers who can tackle large challenges.

Investigating big ideas

The program will encompass two master’s degrees and a PhD.

Anna Huang, a new MIT assistant professor who holds a shared faculty position between the MIT Music and Theater Arts Section and the MIT Schwarzman College of Computing, is collaborating with Egozy to develop and launch the program. Huang arrived at MIT this fall after spending eight years with Magenta at Google Brain and DeepMind, spearheading efforts in generative modeling, reinforcement learning, and human-computer interaction to support human-AI partnerships in music-making.

“As a composer turned AI researcher who specializes in generative music technology, my long-term goal is to develop AI systems that can shed new light on how we understand, learn, and create music, and to learn from interactions between musicians in order to transform how we approach human-AI collaboration,” says Huang. “This new program will let us further investigate how musical applications can illuminate problems in understanding neural networks, for example.”

MIT’s new Edward and Joyce Linde Music Building, featuring enhanced music technology spaces, will also help transform music education with versatile performance venues and optimized rehearsal facilities.

A natural home for music technology

MIT’s world-class, top-ranked engineering program, combined with its focus on computation and its conservatory-level music education offerings, makes the Institute a natural home for the continued expansion of music technology education.

The collaborative nature of the new program is the latest example of interdisciplinary work happening across the Institute.

“I am thrilled that the School of Engineering is partnering with the MIT Music and Theater Arts Section on this important initiative, which represents the convergence of various engineering areas — such as AI and design — with music,” says Anantha Chandrakasan, dean of the School of Engineering, chief innovation and strategy officer, and the Vannevar Bush Professor of EECS. “I can’t wait to see the innovative projects the students will create and how they will drive this new field forward.”

“Everyone on campus knows that MIT is a great place to do music. But I want people to come to MIT because of what we do in music,” says Agustin Rayo, the Kenan Sahin Dean of SHASS. “This outstanding collaboration with the Schwarzman College of Computing and the School of Engineering will make that dream a reality, by bringing together the world’s best engineers with our extraordinary musicians to create the next generation of music technologies.”

“The new master’s program offers students an unparalleled opportunity to explore the intersection of music and technology,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of EECS. “It equips them with a deep understanding of this confluence, preparing them to advance new approaches to computational models of music and be at the forefront of an evolving area.” 


How social structure influences the way people share money

A new study shows that belonging to age-based groups, common in some global regions, influences finances and health.


People around the globe often depend on informal financial arrangements, borrowing and lending money through social networks. Understanding this sheds light on local economies and helps fight poverty.

Now, a study co-authored by an MIT economist illuminates a striking case of informal finance: In East Africa, money moves in very different patterns depending on whether local societies are structured around family units or age-based groups.

That is, while much of the world uses the extended family as a basic social unit, hundreds of millions of people live in societies with stronger age-based cohorts. In these cases, people are initiated into adulthood together and maintain closer social ties with each other than with extended family. That affects their finances, too.

“We found there are major impacts in that social structure really does matter for how people form financial ties,” says Jacob Moscona, an MIT economist and co-author of a newly published paper detailing the results.

He adds: “In age-based societies when someone gets a cash transfer, the money flows in a big way to other members of their age cohort but not to other [younger or older] members of an extended family. And you see the exact opposite pattern in kin-based groups, where money is transferred within the family but not the age cohort.”

This leads to measurable health effects. In kin-based societies, grandparents often share their pension payments with grandchildren. In Uganda, the study reveals, an additional year of pension payments to a senior citizen in a kin-based society reduces the likelihood of child malnourishment by 5.5 percent, compared to an age-based society where payments are less likely to move across generations.

The paper, “Age Set versus Kin: Culture and Financial Ties in East Africa,” is published in the September issue of the American Economic Review. The authors are Moscona, the 3M Career Development Assistant Professor of Economics in MIT’s Department of Economics; and Awa Ambra Seck, an assistant professor at Harvard Business School.

Studying informal financial arrangements has long been an important research domain for economists. MIT Professor Robert Townsend, for one, helped advance this area of scholarship with innovative studies of finances in rural Thailand.

At the same time, the specific matter of analyzing how age-based social groups function, in comparison to the more common kin-based groups, has tended to be addressed more by anthropologists than economists. Among the Maasai people in Northern Kenya, for example, anthropologists have observed that age-group friends have closer ties to each other than anyone apart from a spouse and children. Maasai age-group cohorts frequently share food and lodging, and more extensively than they do even with siblings. The current study adds economic data points to this body of knowledge.

To conduct the research, the scholars first analyzed the Kenyan government’s Hunger Safety Net Program (HSNP), a cash transfer project initiated in 2009 covering 48 locations in Northern Kenya. The program included both age-based and kin-based social groups, allowing for a comparison of its effects.

In age-based societies, the study shows, there was a spillover in spending by HSNP recipients on others in the age cohort, with zero additional cash flows to those in other generations; in kin-based societies, they also found a spillover across generations, but without informal cash flows otherwise.

In Uganda, where both kin-based and age-based societies exist, the researchers studied the national roll-out of the Senior Citizen Grant (SCG) program, initiated in 2011, which consists of a monthly cash transfer to seniors of about $7.50, equivalent to roughly 20 percent of per-capita spending. Similar programs exist or are being rolled out across sub-Saharan Africa, including in regions where age-based organization is common.

Here again, the researchers found financial flows aligned to kin-based and age-based social ties. In particular, they show that the pension program had large positive effects on child nutrition in kin-based households, where ties across generations are strong; the team found zero evidence of these effects in age-based societies.

“These policies had vastly different effects on these two groups, on account of the very different structure of financial ties,” Moscona says.

To Moscona, there are at least two large reasons to evaluate the variation between these financial flows: understanding society more thoroughly and rethinking how to design social programs in these circumstances.

“It’s telling us something about how the world works, that social structure is really important for shaping these [financial] relationships,” Moscona says. “But it also has a big potential impact on policy.”

After all, if a social policy is designed to help limit childhood poverty, or senior poverty, experts will want to know how the informal flow of cash in a society interacts with it. The current study shows that understanding social structure should be a high-order concern for making policies more effective.

“In these two ways of organizing society, different people are on average more vulnerable,” Moscona says. “In the kin-based groups, because the young and the old share with each other, you don’t see as much inequality across generations. But in age-based groups, the young and the old are left systematically more vulnerable. And in kin-based groups, some entire families are doing much worse than others, while in age-based societies the age sets often cut across lineages or extended families, making them more equal. That’s worth considering if you’re thinking about poverty reduction.”


New security protocol shields data from attackers during cloud-based computation

The technique leverages quantum properties of light to guarantee security while preserving the accuracy of a deep-learning model.


Deep-learning models are being used in many fields, from health care diagnostics to financial forecasting. However, these models are so computationally intensive that they require the use of powerful cloud-based servers.

This reliance on cloud computing poses significant security risks, particularly in areas like health care, where hospitals may be hesitant to use AI tools to analyze confidential patient data due to privacy concerns.

To tackle this pressing issue, MIT researchers have developed a security protocol that leverages the quantum properties of light to guarantee that data sent to and from a cloud server remain secure during deep-learning computations.

By encoding data into the laser light used in fiber optic communications systems, the protocol exploits the fundamental principles of quantum mechanics, making it impossible for attackers to copy or intercept the information without detection.

Moreover, the technique guarantees security without compromising the accuracy of the deep-learning models. In tests, the researchers demonstrated that their protocol could maintain 96 percent accuracy while ensuring robust security measures.

“Deep learning models like GPT-4 have unprecedented capabilities but require massive computational resources. Our protocol enables users to harness these powerful models without compromising the privacy of their data or the proprietary nature of the models themselves,” says Kfir Sulimany, an MIT postdoc in the Research Laboratory for Electronics (RLE) and lead author of a paper on this security protocol.

Sulimany is joined on the paper by Sri Krishna Vadlamani, an MIT postdoc; Ryan Hamerly, a former postdoc now at NTT Research, Inc.; Prahlad Iyengar, an electrical engineering and computer science (EECS) graduate student; and senior author Dirk Englund, a professor in EECS, principal investigator of the Quantum Photonics and Artificial Intelligence Group and of RLE. The research was recently presented at the Annual Conference on Quantum Cryptography.

A two-way street for security in deep learning

The cloud-based computation scenario the researchers focused on involves two parties — a client that has confidential data, like medical images, and a central server that controls a deep learning model.

The client wants to use the deep-learning model to make a prediction, such as whether a patient has cancer based on medical images, without revealing information about the patient.

In this scenario, sensitive data must be sent to generate a prediction. However, during the process the patient data must remain secure.

Also, the server does not want to reveal any parts of the proprietary model that a company like OpenAI spent years and millions of dollars building.

“Both parties have something they want to hide,” adds Vadlamani.

In digital computation, a bad actor could easily copy the data sent from the server or the client.

Quantum information, on the other hand, cannot be perfectly copied. The researchers leverage this property, known as the no-cloning principle, in their security protocol.

For the researchers’ protocol, the server encodes the weights of a deep neural network into an optical field using laser light.

A neural network is a deep-learning model that consists of layers of interconnected nodes, or neurons, that perform computation on data. The weights are the components of the model that do the mathematical operations on each input, one layer at a time. The output of one layer is fed into the next layer until the final layer generates a prediction.

The server transmits the network’s weights to the client, which implements operations to get a result based on their private data. The data remain shielded from the server.

At the same time, the security protocol allows the client to measure only one result, and it prevents the client from copying the weights because of the quantum nature of light.

Once the client feeds the first result into the next layer, the protocol is designed to cancel out the first layer so the client can’t learn anything else about the model.

“Instead of measuring all the incoming light from the server, the client only measures the light that is necessary to run the deep neural network and feed the result into the next layer. Then the client sends the residual light back to the server for security checks,” Sulimany explains.

Due to the no-cloning theorem, the client unavoidably applies tiny errors to the model while measuring its result. When the server receives the residual light from the client, the server can measure these errors to determine if any information was leaked. Importantly, this residual light is proven to not reveal the client data.
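
The security comes from the optical encoding itself, which ordinary code cannot reproduce, but the message flow described above, in which the server releases one layer at a time and the client computes only that layer’s output on its private data before returning a residual the server can check, can be sketched classically. The names, the noise stand-in for measurement back-action, and the tolerance check below are illustrative assumptions, not the actual protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def client_layer(x, weights, measurement_noise=1e-3):
    """Client side: compute one layer's activation on private data.
    In the real protocol the weights arrive encoded in light; measuring them
    to do the computation unavoidably perturbs what is sent back."""
    noise = rng.normal(0.0, measurement_noise, size=weights.shape)
    activation = relu(x @ (weights + noise))
    residual = weights + noise          # classical stand-in for the residual light returned to the server
    return activation, residual

def server_check(original_weights, residual, tolerance=1e-2):
    """Server side: compare the returned residual with what was sent.
    Deviations beyond the expected measurement back-action would signal that
    the client tried to extract more information than allowed."""
    deviation = np.abs(residual - original_weights).mean()
    return deviation < tolerance

# Toy two-layer inference under the sketched flow.
x = rng.normal(size=(1, 8))                      # client's private input
layers = [rng.normal(size=(8, 16)), rng.normal(size=(16, 4))]
for w in layers:
    x, residual = client_layer(x, w)
    assert server_check(w, residual), "possible information leak detected"
print("prediction:", x)
```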

A practical protocol

Modern telecommunications equipment typically relies on optical fibers to transfer information because of the need to support massive bandwidth over long distances. Because this equipment already incorporates optical lasers, the researchers can encode data into light for their security protocol without any special hardware.

When they tested their approach, the researchers found that it could guarantee security for server and client while enabling the deep neural network to achieve 96 percent accuracy.

The tiny bit of information about the model that leaks when the client performs operations amounts to less than 10 percent of what an adversary would need to recover any hidden information. Working in the other direction, a malicious server could only obtain about 1 percent of the information it would need to steal the client’s data.

“You can be guaranteed that it is secure in both ways — from the client to the server and from the server to the client,” Sulimany says.

“A few years ago, when we developed our demonstration of distributed machine learning inference between MIT’s main campus and MIT Lincoln Laboratory, it dawned on me that we could do something entirely new to provide physical-layer security, building on years of quantum cryptography work that had also been shown on that testbed,” says Englund. “However, there were many deep theoretical challenges that had to be overcome to see if this prospect of privacy-guaranteed distributed machine learning could be realized. This didn’t become possible until Kfir joined our team, as Kfir uniquely understood the experimental as well as theory components to develop the unified framework underpinning this work.”

In the future, the researchers want to study how this protocol could be applied to a technique called federated learning, where multiple parties use their data to train a central deep-learning model. It could also be used in quantum operations, rather than the classical operations they studied for this work, which could provide advantages in both accuracy and security.

“This work combines in a clever and intriguing way techniques drawing from fields that do not usually meet, in particular, deep learning and quantum key distribution. By using methods from the latter, it adds a security layer to the former, while also allowing for what appears to be a realistic implementation. This can be interesting for preserving privacy in distributed architectures. I am looking forward to seeing how the protocol behaves under experimental imperfections and its practical realization,” says Eleni Diamanti, a CNRS research director at Sorbonne University in Paris, who was not involved with this work.

This work was supported, in part, by the Israeli Council for Higher Education and the Zuckerman STEM Leadership Program.


Mars’ missing atmosphere could be hiding in plain sight

A new study shows Mars’ early thick atmosphere could be locked up in the planet’s clay surface.


Mars wasn’t always the cold desert we see today. There’s increasing evidence that water once flowed on the Red Planet’s surface, billions of years ago. And if there was water, there must also have been a thick atmosphere to keep that water from freezing. But sometime around 3.5 billion years ago, the water dried up, and the air, once heavy with carbon dioxide, dramatically thinned, leaving only the wisp of an atmosphere that clings to the planet today.

Where exactly did Mars’ atmosphere go? This question has been a central mystery of Mars’ 4.6-billion-year history.

For two MIT geologists, the answer may lie in the planet’s clay. In a paper appearing today in Science Advances, they propose that much of Mars’ missing atmosphere could be locked up in the planet’s clay-covered crust.

The team makes the case that, while water was present on Mars, the liquid could have trickled through certain rock types and set off a slow chain of reactions that progressively drew carbon dioxide out of the atmosphere and converted it into methane — a form of carbon that could be stored for eons in the planet’s clay surface.

Similar processes occur in some regions on Earth. The researchers used their knowledge of interactions between rocks and gases on Earth and applied that to how similar processes could play out on Mars. They found that, given how much clay is estimated to cover Mars’ surface, the planet’s clay could hold up to 1.7 bar of carbon dioxide, which would be equivalent to around 80 percent of the planet’s initial, early atmosphere.
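
For a rough sense of scale, 1.7 bar of carbon dioxide spread over the whole planet corresponds to an enormous mass of gas. The back-of-envelope conversion below uses standard values for Mars’ radius and surface gravity; it is an order-of-magnitude illustration, not a figure from the paper.

```python
import math

P = 1.7e5          # surface pressure, Pa (1.7 bar)
g = 3.71           # Mars surface gravity, m/s^2
r = 3.3895e6       # Mars mean radius, m

area = 4 * math.pi * r**2          # planetary surface area, m^2
column_mass = P / g                # mass of gas above each square meter, kg/m^2
total_mass = column_mass * area    # total mass of CO2, kg

print(f"~{column_mass:,.0f} kg of CO2 per square meter")
print(f"~{total_mass:.1e} kg of CO2 planet-wide")   # a few times 10^18 kg
```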

It’s possible that this sequestered Martian carbon could one day be recovered and converted into propellant to fuel future missions between Mars and Earth, the researchers propose.

“Based on our findings on Earth, we show that similar processes likely operated on Mars, and that copious amounts of atmospheric CO2 could have transformed to methane and been sequestered in clays,” says study author Oliver Jagoutz, professor of geology in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “This methane could still be present and maybe even used as an energy source on Mars in the future.”

The study’s lead author is recent EAPS graduate Joshua Murray PhD ’24.

In the folds

Jagoutz’ group at MIT seeks to identify the geologic processes and interactions that drive the evolution of Earth’s lithosphere — the hard and brittle outer layer that includes the crust and upper mantle, where tectonic plates lie.

In 2023, he and Murray focused on a type of surface clay mineral called smectite, which is known to be a highly effective trap for carbon. Within a single grain of smectite are a multitude of folds, within which carbon can sit undisturbed for billions of years. They showed that smectite on Earth was likely a product of tectonic activity, and that, once exposed at the surface, the clay minerals acted to draw down and store enough carbon dioxide from the atmosphere to cool the planet over millions of years.

Soon after the team reported their results, Jagoutz happened to look at a map of the surface of Mars and realized that much of that planet’s surface was covered in the same smectite clays. Could the clays have had a similar carbon-trapping effect on Mars, and if so, how much carbon could the clays hold?

“We know this process happens, and it is well-documented on Earth. And these rocks and clays exist on Mars,” Jagoutz says. “So, we wanted to try and connect the dots.”

“Every nook and cranny”

Unlike on Earth, where smectite is a consequence of continental plates shifting and uplifting to bring rocks from the mantle to the surface, there is no such tectonic activity on Mars. The team looked for ways in which the clays could have formed on Mars, based on what scientists know of the planet’s history and composition.

For instance, some remote measurements of Mars’ surface suggest that at least part of the planet’s crust contains ultramafic igneous rocks, similar to those that produce smectites through weathering on Earth. Other observations reveal geologic patterns similar to terrestrial rivers and tributaries, where water could have flowed and reacted with the underlying rock.

Jagoutz and Murray wondered whether water could have reacted with Mars’ deep ultramafic rocks in a way that would produce the clays that cover the surface today. They developed a simple model of rock chemistry, based on what is known of how igneous rocks interact with their environment on Earth.

They applied this model to Mars, where scientists believe the crust is mostly made up of igneous rock that is rich in the mineral olivine. The team used the model to estimate the changes that olivine-rich rock might undergo, assuming that water existed on the surface for at least a billion years, and the atmosphere was thick with carbon dioxide.

“At this time in Mars’ history, we think CO2 is everywhere, in every nook and cranny, and water percolating through the rocks is full of CO2 too,” Murray says.

Over about a billion years, water trickling through the crust would have slowly reacted with olivine — a mineral that is rich in a reduced form of iron. Oxygen molecules in water would have bound to the iron, releasing hydrogen as a result and forming the red, oxidized iron that gives the planet its iconic color. This free hydrogen would then have combined with carbon dioxide in the water to form methane. As this reaction progressed over time, olivine would have slowly transformed into another type of iron-rich rock known as serpentine, which then continued to react with water to form smectite.
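
In idealized, textbook form, that chain resembles two well-known reactions: the oxidation of olivine’s iron-rich end-member during serpentinization, which liberates hydrogen, followed by methanation of dissolved carbon dioxide. The stoichiometry below is the generic version of these reactions, offered as an illustration rather than the specific reaction network modeled in the paper.

```latex
% Oxidation of fayalite (the iron end-member of olivine) by water, releasing hydrogen:
\[
  3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}
\]
% Methanation of dissolved CO2 by the liberated hydrogen:
\[
  \mathrm{CO_2} + 4\,\mathrm{H_2} \;\rightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O}
\]
```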

“These smectite clays have so much capacity to store carbon,” Murray says. “So then we used existing knowledge of how these minerals are stored in clays on Earth, and extrapolate to say, if the Martian surface has this much clay in it, how much methane can you store in those clays?”

He and Jagoutz found that if Mars is covered in a layer of smectite that is 1,100 meters deep, this amount of clay could store a huge amount of methane, equivalent to most of the carbon dioxide in the atmosphere that is thought to have disappeared since the planet dried up.

“We find that estimates of global clay volumes on Mars are consistent with a significant fraction of Mars’ initial CO2 being sequestered as organic compounds within the clay-rich crust,” Murray says. “In some ways, Mars’ missing atmosphere could be hiding in plain sight.”

“Where the CO2 went from an early, thicker atmosphere is a fundamental question in the history of the Mars atmosphere, its climate, and the habitability by microbes,” says Bruce Jakosky, professor emeritus of geology at the University of Colorado and principal investigator on the Mars Atmosphere and Volatile Evolution (MAVEN) mission, which has been orbiting and studying Mars’ upper atmosphere since 2014. Jakosky was not involved with the current study. “Murray and Jagoutz examine the chemical interaction of rocks with the atmosphere as a means of removing CO2. At the high end of our estimates of how much weathering has occurred, this could be a major process in removing CO2 from Mars’ early atmosphere.”

This work was supported, in part, by the National Science Foundation.


Startup helps people fall asleep by aligning audio signals with brainwaves

Elemind, founded by researchers from MIT, has developed a headband that uses acoustic stimulation to move people into a sleep state.


Do you ever toss and turn in bed after a long day, wishing you could just program your brain to turn off and get some sleep?

That may sound like science fiction, but that’s the goal of the startup Elemind, which is using an electroencephalogram (EEG) headband that emits acoustic stimulation aligned with people’s brainwaves to move them into a sleep state more quickly.

In a small study of adults with sleep onset insomnia, 30 minutes of stimulation from the device decreased the time it took them to fall asleep by 10 to 15 minutes. This summer, Elemind began shipping its product to a small group of users as part of an early pilot program.

The company, which was founded by MIT Professor Ed Boyden ’99, MEng ’99; David Wang ’05, SM ’10, PhD ’15; former postdoc Nir Grossman; former Media Lab research affiliate Heather Read; and Meredith Perry, plans to collect feedback from early users before making the device more widely available.

Elemind’s team believes its device offers several advantages over sleeping pills, which can cause side effects and addiction.

“We wanted to create a nonchemical option for people who wanted to get great sleep without side effects, so you could get all the benefits of natural sleep without the risks,” says Perry, Elemind’s CEO. “There’s a number of people that we think would benefit from this device, whether you’re a breastfeeding mom that might not want to take a sleep drug, somebody traveling across time zones that wants to fight jet lag, or someone that simply wants to improve your next-day performance and feel like you have more control over your sleep.”

From research to product

Wang’s academic journey at MIT spanned nearly 15 years, during which he earned four degrees, culminating in a PhD in artificial intelligence in 2015. In 2014, Wang was co-teaching a class with Grossman when they began working together to noninvasively measure real-time biological oscillations in the brain and body. Through that work, they became fascinated with a technique for modulating the brain known as phase-locked stimulation, which uses precisely timed visual, physical, or auditory stimulation that lines up with brain activity.

“You’re measuring some kind of changing variable, and then you want to change your stimulus in real time in response to that variable,” explains Boyden, who pointed Wang and Grossman to a set of mathematical techniques that became some of the core intellectual property of Elemind.

For years, phase-locked stimulation has been used in conjunction with electrodes implanted in the brain to disrupt seizures and tremors. But in 2021, Wang, Grossman, Boyden, and their collaborators published a paper showing they could use electrical stimulation from outside the skull to suppress essential tremor syndrome, the most common adult movement disorder.

The results were promising, but the founders decided to start by proving their approach worked in a less regulated space: sleep. They developed a system to deliver auditory pulses timed to promote or suppress alpha oscillations in the brain, which are elevated in insomnia.

That kicked off a years-long product development process that led to the headband device Elemind uses today. The headband measures brainwaves through EEG and feeds the results into Elemind's proprietary algorithms, which are used to dynamically generate audio through a bone conduction driver. The moment the device detects that someone is asleep, the audio is slowly tapered out.
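
Elemind’s algorithms are proprietary and not described in detail here, but closed-loop, phase-locked stimulation of the kind the founders describe generally follows the same outline: estimate the instantaneous phase of a target brain rhythm from the EEG and time each stimulus to a chosen phase. A minimal illustrative sketch in Python, with hypothetical parameter values:

```python
# Minimal, illustrative sketch of closed-loop phase-locked auditory stimulation:
# estimate the instantaneous phase of the EEG alpha band and decide whether the
# newest sample is close enough to a target phase to trigger an audio pulse.
# This is NOT Elemind's proprietary algorithm; all parameter values are hypothetical.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250                # hypothetical EEG sampling rate, in Hz
TARGET_PHASE = np.pi    # hypothetical target phase for delivering the pulse

def alpha_phase(eeg_window):
    """Band-pass a short EEG window to the alpha band (8-12 Hz) and return
    the instantaneous phase of its analytic (Hilbert) signal."""
    b, a = butter(4, [8.0, 12.0], btype="band", fs=FS)
    alpha = filtfilt(b, a, eeg_window)
    return np.angle(hilbert(alpha))

def should_stimulate(eeg_window, tolerance=0.2):
    """Return True when the most recent sample falls within `tolerance`
    radians of the target phase."""
    latest_phase = alpha_phase(eeg_window)[-1]
    offset = np.angle(np.exp(1j * (latest_phase - TARGET_PHASE)))
    return abs(offset) < tolerance

# Example: simulate two seconds of a noisy 10 Hz rhythm and test the decision.
t = np.arange(0, 2.0, 1.0 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
print(should_stimulate(eeg))
```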

“We have a theory that the sound that we play triggers an auditory-evoked response in the brain,” Wang says. “That means we get your auditory cortex to basically release this voltage burst that sweeps across your brain and interferes with other regions. Some people who have worn Elemind call it a brain jammer. For folks that ruminate a lot before they go to sleep, their brains are actively running. This encourages their brain to quiet down.”

Beyond sleep

Elemind has established a collaboration with eight universities that allows researchers to explore the effectiveness of the company’s approach in a range of use cases, from tremors to memory formation, Alzheimer’s progression, and more.

“We’re not only developing this product, but also advancing the field of neuroscience by collecting high-resolution data to hopefully also help others conduct new research,” Wang says.

The collaborations have led to some exciting results. Researchers at McGill University found that using Elemind’s acoustic stimulation during sleep increased activity in areas of the cortex related to motor function and improved healthy adults’ performance in memory tasks. Other studies have shown the approach can be used to reduce essential tremors in patients and enhance sedation recovery.

Elemind is focused on its sleep application for now, but the company plans to develop other solutions, from medical interventions to memory and focus augmentation, as the science evolves.

“The vision is how do we move beyond sleep into what could ultimately become like an app store for the brain, where you can download a brain state like you download an app?” Perry says. “How can we make this a tool that can be applied to a bunch of different applications with a single piece of hardware that has a lot of different stimulation protocols?”


Study evaluates impacts of summer heat in U.S. prison environments

MIT researchers identify facility-level factors that could worsen heat impacts for incarcerated people.


When summer temperatures spike, so does our vulnerability to heat-related illness or even death. For the most part, people can take measures to reduce their heat exposure by opening a window, turning up the air conditioning, or simply getting a glass of water. But people who are incarcerated often lack the freedom to take such measures. Prison populations are therefore especially vulnerable to heat exposure because of their conditions of confinement.

A new study by MIT researchers examines summertime heat exposure in prisons across the United States and identifies characteristics within prison facilities that can further contribute to a population’s vulnerability to summer heat.

The study’s authors used high-spatial-resolution air temperature data to determine the daily average outdoor temperature for each of 1,614 prisons in the U.S., for every summer between the years 1990 and 2023. They found that the prisons that are exposed to the most extreme heat are located in the southwestern U.S., while prisons with the biggest changes in summertime heat, compared to the historical record, are in the Pacific Northwest, the Northeast, and parts of the Midwest.

Those findings are not entirely unique to prisons, as any non-prison facility or community in the same geographic locations would be exposed to similar outdoor air temperatures. But the team also looked at characteristics specific to prison facilities that could further exacerbate an incarcerated person’s vulnerability to heat exposure. They identified nine such facility-level characteristics, such as highly restricted movement, poor staffing, and inadequate mental health treatment. People living and working in prisons with any one of these characteristics may experience compounded risk to summertime heat. 

The team also looked at the demographics of 1,260 prisons in their study and found that the prisons with higher heat exposure on average also had higher proportions of non-white and Hispanic populations. The study, appearing today in the journal GeoHealth, provides policymakers and community leaders with ways to estimate, and take steps to address, a prison population’s heat risk, which they anticipate could worsen with climate change.

“This isn’t a problem because of climate change. It’s becoming a worse problem because of climate change,” says study lead author Ufuoma Ovienmhada SM ’20, PhD ’24, a graduate of the MIT Media Lab, who recently completed her doctorate in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “A lot of these prisons were not built to be comfortable or humane in the first place. Climate change is just aggravating the fact that prisons are not designed to enable incarcerated populations to moderate their own exposure to environmental risk factors such as extreme heat.”

The study’s co-authors include Danielle Wood ’04, SM ’08, PhD ’12, MIT associate professor of media arts and sciences, and of AeroAstro; and Brent Minchew, MIT associate professor of geophysics in the Department of Earth, Atmospheric and Planetary Sciences; along with Ahmed Diongue ’24, Mia Hines-Shanks of Grinnell College, and Michael Krisch of Columbia University.

Environmental intersections

The new study is an extension of work carried out at the Media Lab, where Wood leads the Space Enabled research group. The group aims to advance social and environmental justice issues through the use of satellite data and other space-enabled technologies.

The group’s motivation to look at heat exposure in prisons came in 2020 when, as co-president of MIT’s Black Graduate Student Union, Ovienmhada took part in community organizing efforts following the murder of George Floyd by Minneapolis police.

“We started to do more organizing on campus around policing and reimagining public safety. Through that lens I learned more about police and prisons as interconnected systems, and came across this intersection between prisons and environmental hazards,” says Ovienmhada, who is leading an effort to map the various environmental hazards that prisons, jails, and detention centers face. “In terms of environmental hazards, extreme heat causes some of the most acute impacts for incarcerated people.”

She, Wood, and their colleagues set out to use Earth observation data to characterize U.S. prison populations’ vulnerability, or their risk of experiencing negative impacts, from heat.

The team first looked through a database maintained by the U.S. Department of Homeland Security that lists the location and boundaries of carceral facilities in the U.S. From the database’s more than 6,000 prisons, jails, and detention centers, the researchers highlighted 1,614 prison-specific facilities, which together incarcerate nearly 1.4 million people, and employ about 337,000 staff.

They then looked to Daymet, a detailed weather and climate database that tracks daily temperatures across the United States at a 1-kilometer resolution. For each of the 1,614 prison locations, they mapped the daily outdoor temperature for every summer from 1990 to 2023, noting that the majority of current state and federal correctional facilities in the U.S. were built by 1990.

The team also obtained U.S. Census data on each facility’s demographic and facility-level characteristics, such as prison labor activities and conditions of confinement. One limitation of the study that the researchers acknowledge is a lack of information regarding a prison’s climate control.

“There’s no comprehensive public resource where you can look up whether a facility has air conditioning,” Ovienmhada notes. “Even in facilities with air conditioning, incarcerated people may not have regular access to those cooling systems, so our measurements of outdoor air temperature may not be far off from reality.”

Heat factors

From their analysis, the researchers found that more than 98 percent of all prisons in the U.S. experienced at least 10 summer days that were hotter than every previous summer, on average, at a given location. Their analysis also revealed that the most heat-exposed prisons, and the prisons that experienced the highest temperatures on average, were mostly in the southwestern U.S. The researchers note that, with the exception of New Mexico, the Southwest is a region with no universal air conditioning regulations in state-operated prisons.
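
One way to compute a metric of that kind from a Daymet-style daily temperature series is sketched below. This is an illustration of the general idea under one reading of the metric, not the study’s exact pipeline, and the data in the example are made up:

```python
# Illustrative sketch (not the study's exact pipeline) of one heat-exposure
# metric: for a single facility, count days in a given summer that were hotter
# than every previous summer's average temperature at that location.
import numpy as np

def days_hotter_than_every_prior_summer(daily_temps_by_year, year):
    """daily_temps_by_year: dict mapping a year to a NumPy array of that
    summer's daily mean temperatures (deg C) at one facility."""
    prior_means = [temps.mean() for y, temps in daily_temps_by_year.items() if y < year]
    baseline = max(prior_means)               # hottest prior summer, on average
    return int((daily_temps_by_year[year] > baseline).sum())

# Hypothetical example with three summers of made-up temperatures.
records = {
    1990: np.array([28.0, 30.1, 29.5, 31.2]),
    1991: np.array([29.0, 30.5, 32.0, 28.4]),
    2023: np.array([33.1, 30.2, 34.0, 29.9]),
}
print(days_hotter_than_every_prior_summer(records, 2023))  # -> 3
```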

“States run their own prison systems, and there is no uniformity of data collection or policy regarding air conditioning,” says Wood, who notes that there is some information on cooling systems in some states and individual prison facilities, but the data is sparse overall, and too inconsistent to include in the group’s nationwide study.

While the researchers could not incorporate air conditioning data, they did consider other facility-level factors that could worsen the effects that outdoor heat triggers. They looked through the scientific literature on heat, health impacts, and prison conditions, and focused on 17 measurable facility-level variables that contribute to heat-related health problems. These include factors such as overcrowding and understaffing.

“We know that whenever you’re in a room that has a lot of people, it’s going to feel hotter, even if there’s air conditioning in that environment,” Ovienmhada says. “Also, staffing is a huge factor. Facilities that don’t have air conditioning but still try to do heat risk-mitigation procedures might rely on staff to distribute ice or water every few hours. If that facility is understaffed or has neglectful staff, that may increase people’s susceptibility to hot days.”

The study found that prisons with any of nine of the 17 variables showed statistically significantly greater heat exposure than prisons without those variables. Additionally, if a prison exhibits any one of the nine variables, this could worsen people’s heat risk through the combination of elevated heat exposure and vulnerability. The variables, they say, could help state regulators and activists identify prisons to prioritize for heat interventions.

“The prison population is aging, and even if you’re not in a ‘hot state,’ every state has responsibility to respond,” Wood emphasizes. “For instance, areas in the Northwest, where you might expect to be temperate overall, have experienced a number of days in recent years of increasing heat risk. A few days out of the year can still be dangerous, particularly for a population with reduced agency to regulate their own exposure to heat.”

This work was supported, in part, by NASA, the MIT Media Lab, and MIT’s Institute for Data, Systems and Society’s Research Initiative on Combatting Systemic Racism.


Fifteen Lincoln Laboratory technologies receive 2024 R&D 100 Awards

The innovations map the ocean floor and the brain, prevent heat stroke and cognitive injury, expand AI processing and quantum system capabilities, and introduce new fabrication approaches.


Fifteen technologies developed either wholly or in part by MIT Lincoln Laboratory have been named recipients of 2024 R&D 100 Awards. The awards are given by R&D World, an online publication that serves research scientists and engineers worldwide. Dubbed the “Oscars of Innovation,” the awards recognize the 100 most significant technologies transitioned to use or introduced into the marketplace in the past year. An independent panel of expert judges selects the winners.

“The R&D 100 Awards are a significant recognition of the laboratory’s technical capabilities and its role in transitioning technology for real-world impact,” says Melissa Choi, director of Lincoln Laboratory. “It is exciting to see so many projects selected for this honor, and we are proud of everyone whose creativity, curiosity, and technical excellence made these and many other Lincoln Laboratory innovations possible.”

The awarded technologies have a wide range of applications. A handful of them are poised to prevent human harm — for example, by monitoring for heat stroke or cognitive injury. Others present new processes for 3D printing glass, fabricating silicon imaging sensors, and interconnecting integrated circuits. Some technologies take on long-held challenges, such as mapping the human brain and the ocean floor. Together, the winners exemplify the creativity and breadth of Lincoln Laboratory innovation. Since 2010, the laboratory has received 101 R&D 100 Awards.

This year’s R&D 100 Award–winning technologies are described below.

Protecting human health and safety

The Neuron Tracing and Active Learning Environment (NeuroTrALE) software uses artificial intelligence techniques to create high-resolution maps, or atlases, of the brain's network of neurons from high-dimensional biomedical data. NeuroTrALE addresses a major challenge in AI-assisted brain mapping: a lack of labeled data for training AI systems to build atlases essential for study of the brain’s neural structures and mechanisms. The software is the first end-to-end system to perform processing and annotation of dense microscopy data; generate segmentations of neurons; and enable experts to review, correct, and edit NeuroTrALE’s annotations from a web browser. This award is shared with the lab of Kwanghun (KC) Chung, associate professor in MIT’s Department of Chemical Engineering, Institute for Medical Engineering and Science, and Picower Institute for Learning and Memory.

Many military and law enforcement personnel are routinely exposed to low-level blasts in training settings. Often, these blasts don’t cause immediate diagnosable injury, but exposure over time has been linked to anxiety, depression, and other cognitive conditions. The Electrooculography and Balance Blast Overpressure Monitoring (EYEBOOM) is a wearable system developed to monitor individuals’ blast exposure and notify them if they are at an increased risk of harm. It uses two body-worn sensors, one to capture continuous eye and body movements and another to measure blast energy. An algorithm analyzes these data to detect subtle changes in physiology, which, when combined with cumulative blast exposure, can be predictive of cognitive injury. Today, the system is in use by select U.S. Special Forces units. The laboratory co-developed EYEBOOM with Creare LLC and Lifelens LLC.

Tunable knitted stem cell scaffolds: The development of artificial-tissue constructs that mimic the natural stretchability and toughness of living tissue is in high demand for regenerative medicine applications. A team from Lincoln Laboratory and the MIT Department of Mechanical Engineering developed new forms of biocompatible fabrics that mimic the mechanical properties of native tissues while nurturing growing stem cells. These wearable stem-cell scaffolds can expedite the regeneration of skin, muscle, and other soft tissues to reduce recovery time and limit complications from severe burns, lacerations, and other bodily wounds.

Mixture deconvolution pipeline for forensic investigative genetic genealogy: A rapidly growing field of forensic science is investigative genetic genealogy, wherein investigators submit a DNA profile to commercial genealogy databases to identify a missing person or criminal suspect. Lincoln Laboratory’s software invention addresses a large unmet need in this field: the ability to deconvolve, or unravel, mixed DNA profiles of multiple unknown persons to enable database searching. The software pipeline estimates the number of contributors in a DNA mixture, the percentage of DNA present from each contributor, and the sex of each contributor; then, it deconvolves the different DNA profiles in the mixture to isolate two contributors, without needing to match them to a reference profile of a known contributor, as required by previous software.

Each year, hundreds of people die or suffer serious injuries from heat stroke, especially personnel in high-risk outdoor occupations such as military, construction, or first response. The Heat Injury Prevention System (HIPS) provides accurate, early warning of heat stroke several minutes in advance of visible symptoms. The system collects data from a sensor worn on a chest strap and employs algorithms for estimating body temperature, gait instability, and adaptive physiological strain index. The system then provides an individual’s heat-injury prediction on a mobile app. The affordability, accuracy, and user-acceptability of HIPS have led to its integration into operational environments for the military.

Observing the world

More than 80 percent of the ocean floor remains virtually unmapped and unexplored. Historically, deep sea maps have been generated either at low resolution from a large sonar array mounted on a ship, or at higher resolution with slow and expensive underwater vehicles. New autonomous sparse-aperture multibeam echo sounder technology uses a swarm of about 20 autonomous surface vehicles that work together as a single large sonar array to achieve the best of both worlds: mapping the deep seabed at 100 times the resolution of a ship-mounted sonar and 50 times the coverage rate of an underwater vehicle. New estimation algorithms and acoustic signal processing techniques enable this technology. The system holds potential for significantly improving humanitarian search-and-rescue capabilities and ocean and climate modeling. The R&D 100 Award is shared with the MIT Department of Mechanical Engineering.

FocusNet is a machine-learning architecture for analyzing airborne ground-mapping lidar data. Airborne lidar works by scanning the ground with a laser and creating a digital 3D representation of the area, called a point cloud. Humans or algorithms then analyze the point cloud to categorize scene features such as buildings or roads. In recent years, lidar technology has both improved and diversified, and methods to analyze the data have struggled to keep up. FocusNet fills this gap by using a convolutional neural network — an algorithm that finds patterns in images to recognize objects — to automatically categorize objects within the point cloud. It can achieve this object recognition across different types of lidar system data without needing to be retrained, representing a major advancement in understanding 3D lidar scenes.

Atmospheric observations collected from aircraft, such as temperature and wind, provide the highest-value inputs to weather forecasting models. However, these data collections are sparse and delayed, currently obtained through specialized systems installed on select aircraft. The Portable Aircraft Derived Weather Observation System (PADWOS) offers a way to significantly expand the quality and quantity of these data by leveraging Mode S Enhanced Surveillance (EHS) transponders, which are already installed on more than 95 percent of commercial aircraft and the majority of general aviation aircraft. From the ground, PADWOS interrogates Mode S EHS–equipped aircraft, collecting in milliseconds aircraft state data reported by the transponder to make wind and temperature estimates. The system holds promise for improving forecasts, monitoring climate, and supporting other weather applications.
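
PADWOS’s estimation algorithms aren’t described here, but aircraft-derived wind estimates in general rest on the wind-triangle relation: the wind vector is the difference between an aircraft’s ground velocity and its air velocity, both of which can be formed from the kinds of state data a Mode S EHS transponder reports. A minimal sketch, with hypothetical inputs:

```python
# Illustrative sketch of the wind-triangle relation that underlies aircraft-derived
# wind estimates in general: wind vector = ground velocity - air velocity.
# PADWOS's actual processing is more involved; inputs here are hypothetical.
import numpy as np

def wind_estimate(ground_speed, track_deg, true_airspeed, heading_deg):
    """Speeds in m/s; angles in degrees clockwise from north (direction of motion).
    Returns (wind_speed, direction_the_wind_blows_from_deg)."""
    def to_vec(speed, bearing_deg):
        b = np.radians(bearing_deg)
        return speed * np.array([np.sin(b), np.cos(b)])   # [east, north]

    wind = to_vec(ground_speed, track_deg) - to_vec(true_airspeed, heading_deg)
    speed = float(np.linalg.norm(wind))
    # Meteorological convention reports the direction the wind comes FROM.
    direction_from = (np.degrees(np.arctan2(wind[0], wind[1])) + 180.0) % 360.0
    return speed, direction_from

# Hypothetical reading: airspeed 230 m/s heading due north, but tracking 5 degrees
# east of north at 240 m/s over the ground -> roughly a 23 m/s wind from the WSW.
print(wind_estimate(240.0, 5.0, 230.0, 0.0))
```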

Advancing computing and communications

Quantum networking has the potential to revolutionize connectivity across the globe, unlocking unprecedented capabilities in computing, sensing, and communications. To realize this potential, entangled photons distributed across a quantum network must arrive and interact with other photons in precisely controlled ways. Lincoln Laboratory's precision photon synchronization system for quantum networking is the first to provide an efficient solution to synchronize space-to-ground quantum networking links to sub-picosecond precision. Unlike other technologies, the system performs free-space quantum entanglement distribution via a satellite, without needing to locate complex entanglement sources in space. These sources are instead located on the ground, providing an easily accessible test environment that can be upgraded as new quantum entanglement generation technologies emerge.

Superconductive many-state memory and comparison logic: Lincoln Laboratory developed circuits that natively store and compare greater than two discrete states, utilizing the quantized magnetic fields of superconductive materials. This property allows the creation of digital logic circuitry that goes beyond binary logic to ternary logic, improving memory throughput without significantly increasing the number of devices required or the surface area of the circuits. Comparing their superconducting ternary-logic memory to a conventional memory, the research team found that the ternary memory could pattern match across the entire digital Library of Congress nearly 30 times faster. The circuits represent fundamental building blocks for advanced, ultrahigh-speed and low-power digital logic.

The Megachip is an approach to interconnect many small, specialized chips (called chiplets) into a single-chip-like monolithic integrated circuit. Capable of incorporating billions of transistors, this interconnected structure extends device performance beyond the limits imposed by traditional wafer-level packaging. Megachips can address the increasing size and performance demands made on microelectronics used for AI processing and high-performance computing, and in mobile devices and servers.

An in-band full-duplex (IBFD) wireless system with advanced interference mitigation addresses the growing congestion of wireless networks. Previous IBFD systems have demonstrated the ability for a wireless device to transmit and receive on the same frequency at the same time by suppressing self-interference, effectively doubling the device’s efficiency on the frequency spectrum. These systems, however, haven’t addressed interference from external wireless sources on the same frequency. Lincoln Laboratory’s technology, for the first time, allows IBFD to mitigate multiple interference sources, resulting in a wireless system that can increase the number of devices supported, their data rate, and their communications range. This IBFD system could enable future smart vehicles to simultaneously connect to wireless networks, share road information, and self-drive — a capability not possible today.

Fabricating with novel processes

Lincoln Laboratory developed a nanocomposite ink system for 3D printing functional materials. Deposition using an active-mixing nozzle allows the generation of graded structures that transition gradually from one material to another. This ability to control the electromagnetic and geometric properties of a material can enable smaller, lighter, and less-power-hungry RF components while accommodating large frequency bandwidths. Furthermore, introducing different particles into the ink in a modular fashion allows the absorption of a wide range of radiation types. This 3D-printed shielding is expected to be used for protecting electronics in small satellites. This award is shared with Professor Jennifer Lewis’ research group at Harvard University.

The laboratory’s engineered substrates for rapid advanced imaging sensor development dramatically reduce the time and cost of developing advanced silicon imaging sensors. These substrates prebuild most steps of the back-illumination process (a method to increase the amount of light that hits a pixel) directly into the starting wafer, before device fabrication begins. Then, a specialized process allows the detector substrate and readout circuits to be mated together and uniformly thinned to microns in thickness at the die level rather than at the wafer level. Both aspects can save a project millions of dollars in fabrication costs by enabling the production of small batches of detectors, instead of a full wafer run, while improving sensor noise and performance. This platform has allowed researchers to prototype new imaging sensor concepts — including detectors for future NASA autonomous lander missions — that would have taken years to develop in a traditional process.

Additive manufacturing, or 3D printing, holds promise for fabricating complex glass structures that would be unattainable with traditional glass manufacturing techniques. Lincoln Laboratory’s low-temperature additive manufacturing of glass composites allows 3D printing of multimaterial glass items without the need for costly high-temperature processing. This low-temperature technique, which cures the glass at 250 degrees Celsius as compared to the standard 1,000 C, relies on simple components: a liquid silicate solution, a structural filler, a fumed nanoparticle, and an optional functional additive to produce glass with optical, electrical, or chemical properties. The technique could facilitate the widespread adoption of 3D printing for glass devices such as microfluidic systems, free-form optical lenses or fiber, and high-temperature electronic components.

The researchers behind each R&D 100 Award–winning technology will be honored at an awards gala on Nov. 21 in Palm Springs, California.


3 Questions: Should we label AI systems like we do prescription drugs?

Researchers argue that in health care settings, “responsible use” labels could ensure AI systems are deployed appropriately.


AI systems are increasingly being deployed in safety-critical health care situations. Yet these models sometimes hallucinate incorrect information, make biased predictions, or fail for unexpected reasons, which could have serious consequences for patients and clinicians.

In a commentary article published today in Nature Computational Science, MIT Associate Professor Marzyeh Ghassemi and Boston University Associate Professor Elaine Nsoesie argue that, to mitigate these potential harms, AI systems should be accompanied by responsible-use labels, similar to U.S. Food and Drug Administration-mandated labels placed on prescription medications.

MIT News spoke with Ghassemi about the need for such labels, the information they should convey, and how labeling procedures could be implemented.

Q: Why do we need responsible use labels for AI systems in health care settings?

A: In a health setting, we have an interesting situation where doctors often rely on technology or treatments that are not fully understood. Sometimes this lack of understanding is fundamental — the mechanism behind acetaminophen, for instance — but other times this is just a limit of specialization. We don’t expect clinicians to know how to service an MRI machine, for instance. Instead, we have certification systems through the FDA or other federal agencies that certify the use of a medical device or drug in a specific setting.

Importantly, medical devices also have service contracts — a technician from the manufacturer will fix your MRI machine if it is miscalibrated. For approved drugs, there are postmarket surveillance and reporting systems so that adverse effects or events can be addressed, for instance if a lot of people taking a drug seem to be developing a condition or allergy.

Models and algorithms, whether they incorporate AI or not, skirt a lot of these approval and long-term monitoring processes, and that is something we need to be wary of. Many prior studies have shown that predictive models need more careful evaluation and monitoring. With more recent generative AI specifically, we cite work that has demonstrated generation is not guaranteed to be appropriate, robust, or unbiased. Because we don’t have the same level of surveillance on model predictions or generation, it would be even more difficult to catch a model’s problematic responses. The generative models being used by hospitals right now could be biased. Having use labels is one way of ensuring that models don’t automate biases that are learned from human practitioners or miscalibrated clinical decision support scores of the past.      

Q: Your article describes several components of a responsible use label for AI, following the FDA approach for creating prescription labels, including approved usage, ingredients, potential side effects, etc. What core information should these labels convey?

A: The things a label should make obvious are time, place, and manner of a model’s intended use. For instance, the user should know that models were trained at a specific time with data from a specific time point. For example, does the training data include or exclude the Covid-19 pandemic? There were very different health practices during Covid that could impact the data. This is why we advocate for the model “ingredients” and “completed studies” to be disclosed.

For place, we know from prior research that models trained in one location tend to have worse performance when moved to another location. Knowing where the data were from and how a model was optimized within that population can help to ensure that users are aware of “potential side effects,” any “warnings and precautions,” and “adverse reactions.”

With a model trained to predict one outcome, knowing the time and place of training could help you make intelligent judgments about deployment. But many generative models are incredibly flexible and can be used for many tasks. Here, time and place may not be as informative, and more explicit direction about “conditions of labeling” and “approved usage” versus “unapproved usage” comes into play. If a developer has evaluated a generative model for reading a patient’s clinical notes and generating prospective billing codes, they can disclose that it has bias toward overbilling for specific conditions or underrecognizing others. A user wouldn’t want to use this same generative model to decide who gets a referral to a specialist, even though they could. This flexibility is why we advocate for additional details on the manner in which models should be used.
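
To make the idea concrete, the label elements discussed here could be captured as structured metadata attached to a deployed model. The sketch below is purely illustrative; the field names and example values are hypothetical, not a format the authors propose:

```python
# Purely illustrative sketch of how responsible-use label elements might be
# captured as structured metadata for a deployed clinical model. Field names
# and example values are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResponsibleUseLabel:
    approved_usage: List[str]          # tasks the model was evaluated for
    unapproved_usage: List[str]        # tasks explicitly out of scope
    training_data_window: str          # e.g., "2015-01 through 2019-12 (pre-Covid)"
    training_population: str           # where and on whom the data were collected
    completed_studies: List[str]       # evaluations that back the approved uses
    ingredients: List[str] = field(default_factory=list)   # data sources, base models
    warnings_and_precautions: List[str] = field(default_factory=list)
    potential_side_effects: List[str] = field(default_factory=list)

# Hypothetical label for a note-to-billing-code generator like the one described above.
label = ResponsibleUseLabel(
    approved_usage=["draft prospective billing codes from clinical notes for human review"],
    unapproved_usage=["deciding specialist referrals"],
    training_data_window="2015-01 through 2019-12 (pre-Covid)",
    training_population="inpatient notes from a single hospital system",
    completed_studies=["retrospective coding-accuracy audit"],
    potential_side_effects=["overbilling for specific conditions", "underrecognizing others"],
)
print(label.approved_usage[0])
```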

In general, we advocate that you should train the best model you can, using the tools available to you. But even then, there should be a lot of disclosure. No model is going to be perfect. As a society, we now understand that no pill is perfect — there is always some risk. We should have the same understanding of AI models. Any model — with or without AI — is limited. It may be giving you realistic, well-trained, forecasts of potential futures, but take that with whatever grain of salt is appropriate.

Q: If AI labels were to be implemented, who would do the labeling and how would labels be regulated and enforced?

A: If you don’t intend for your model to be used in practice, then the disclosures you would make for a high-quality research publication are sufficient. But once you intend your model to be deployed in a human-facing setting, developers and deployers should do an initial labeling, based on some of the established frameworks. There should be a validation of these claims prior to deployment; in a safety-critical setting like health care, many agencies of the Department of Health and Human Services could be involved.

For model developers, I think that knowing you will need to label the limitations of a system induces more careful consideration of the process itself. If I know that at some point I am going to have to disclose the population upon which a model was trained, I would not want to disclose that it was trained only on dialogue from male chatbot users, for instance.

Thinking about things like who the data are collected on, over what time period, what the sample size was, and how you decided what data to include or exclude, can open your mind up to potential problems at deployment. 


MIT named No. 2 university by U.S. News for 2024-25

Undergraduate engineering is No. 1; undergraduate business and computer science programs are No. 2.


MIT has placed second in U.S. News and World Report’s annual rankings of the nation’s best colleges and universities, announced today. 

As in past years, MIT’s engineering program continues to lead the list of undergraduate engineering programs at a doctoral institution. The Institute also placed first in six out of nine engineering disciplines.

U.S. News placed MIT second in its evaluation of undergraduate computer science programs, along with Carnegie Mellon University and the University of California at Berkeley. The Institute placed first in four out of 10 computer science disciplines.

MIT remains the No. 2 undergraduate business program, a ranking it shares with UC Berkeley. Among business subfields, MIT is ranked first in three out of 10 specialties.

Within the magazine’s rankings of “academic programs to look for,” MIT topped the list in the category of undergraduate research and creative projects. The Institute also ranks as the third most innovative national university and the third best value, according to the U.S. News peer assessment survey of top academics.

MIT placed first in six engineering specialties: aerospace/aeronautical/astronautical engineering; chemical engineering; computer engineering; electrical/electronic/communication engineering; materials engineering; and mechanical engineering. It placed within the top five in two other engineering areas: biomedical engineering and civil engineering.

Other schools in the top five overall for undergraduate engineering programs are Stanford University, UC Berkeley, Georgia Tech, Caltech, the University of Illinois at Urbana-Champaign, and the University of Michigan at Ann Arbor.

In computer science, MIT placed first in four specialties: biocomputing/bioinformatics/biotechnology; computer systems; programming languages; and theory. It placed in the top five of five other disciplines: artificial intelligence; cybersecurity; data analytics/science; mobile/web applications; and software engineering.

The No. 1-ranked undergraduate computer science program overall is at Stanford. Other schools in the top five overall for undergraduate computer science programs are Carnegie Mellon, UC Berkeley, Princeton University, and the University of Illinois at Urbana-Champaign.

Among undergraduate business specialties, the MIT Sloan School of Management leads in analytics; production/operations management; and quantitative analysis. It also placed within the top five in three other categories: entrepreneurship; management information systems; and supply chain management/logistics.

The No. 1-ranked undergraduate business program overall is at the University of Pennsylvania; other schools ranking in the top five include UC Berkeley, the University of Michigan at Ann Arbor, and New York University.


Playing a new tune

After taking a pass on the family bagpiping tradition to try a new vocation, Andrew Sutherland has made noise as an innovative business scholar.


For generations, Andrew Sutherland’s family had the same calling: bagpipes. Growing up in Halifax, Nova Scotia, in a family with Scottish roots, Sutherland’s father, grandfather, and great-grandfather all played the bagpipes competitively, criss-crossing North America. Sutherland’s aunts and uncles were pipers too.

But Sutherland did not take to the instrument. He liked math, went to college, entered a PhD program, and emerged as a professor at the MIT Sloan School of Management. Sutherland is an enterprising scholar whose work delves into issues around the financing and auditing of private firms, the effects of financial technology, and even detecting business fraud.

“I was actually the first male in my family to not play the bagpipes, and the first to go to university,” Sutherland explains. “The joke is that I’m the shame of the family, since I never picked up the pipes and continued the tradition.”

The family bagpiping loss is MIT’s gain. While Sutherland’s area of specialty is nominally accounting, his work has illuminated business practices more broadly.

“A lot of what we know about the financial system and how companies perform, and about financial statements, comes from big public companies,” Sutherland says. “But we have a lot of entrepreneurs come through Sloan looking to found startups, and in the U.S., private firms generate more than half of employment and investment. Until recently, we haven’t known a lot about how they get capital, how they make decisions.”

For his research and teaching, Sutherland was awarded tenure at MIT last year.

Piper at the gates of college

Sutherland is proud of his family history; his grandfather and great-grandfather have taught generations of bagpipe players in Nova Scotia, with many of their students becoming successful pipers around the world. But Sutherland took to math and business studies, receiving his undergraduate degree in commerce, with honors in accounting, from York University in Toronto. Then he received an MBA from Carnegie Mellon University, with concentrations in finance and quantitative analysis.

Sutherland still wanted to research financial markets, though. How did banks evaluate the private businesses they were lending to? How much were those firms disclosing to investors? How much just comes down to trust? He entered the PhD program at the University of Chicago’s Booth School of Business and found scholars encouraging him to pursue those questions.

That included Sutherland’s advisor, Christian Leuz; the long-time Chicago professor Douglas Diamond, now a Nobel Prize winner, whom Sutherland calls “one of the most generous researchers I’ve met” in academia; and a then-assistant professor, Michael Minnis, who shared Sutherland’s interest in studying private firms and entrepreneurs.

Sutherland earned his PhD from Chicago in 2015, with a dissertation about the changing nature of banker-to-business relationships, published in 2018. That research studied the effects of transparency-improving technologies on how small businesses obtained credit.

“Twenty years ago, banking was very relationship-based,” Sutherland says. “You might play golf with your loan officer once a year and they knew your business and maybe your employees, and they would sponsor the local softball team. Whereas now banking has been really influenced by technology. A lot of companies provide credit through online applications, and the days when you had to supply audited financial statements have gone away.” As a result of the expansion in technology-based lending, credit markets have shifted from a relationship basis to a transactional focus.

Sutherland, who is currently an associate professor at MIT, joined the faculty in 2015 and has remained at the Institute ever since. A fan of modern art, he keeps an Andy Warhol print in his MIT Sloan office, part of MIT’s art-lending program, along with reproductions of some of Harold “Doc” Edgerton’s famous high-speed photographs.

Sutherland has since written five papers with Minnis (now a deputy dean at Chicago Booth), and other co-authors. Many of their findings highlight the variation in lending and contracting practices in the small business sector. In a 2017 study, they found that banks collected fewer verified financial statements from construction companies during the pre-2008 housing bubble than afterward; before 2008, lending had become lax, similar to what happened in the mortgage markets, and this contributed to the crisis. In another study from that year, they showed how banks with extensive industry and geographic expertise rely more on soft than hard information in lending.

“We’re trying to understand the ‘Wild West’ in accounting and finance more broadly,” Sutherland says. “For firms like entrepreneurs and privately held companies, largely unfettered by regulation, what choices do they make, and why? And how can we use economic theory to understand these choices?”

Business, trust, and fraud

Indeed, Sutherland has often homed in on issues around trust, rules, and financial misconduct, something students care about greatly.

“Students are always interested in talking about fraud,” Sutherland says. “Our financial system is based on trust. So many of us invest on an entirely anonymous basis — we don’t personally know our fund manager or closely watch what they do with our money.” And while regulations and a functioning justice system protect against problems, Sutherland notes, finance works partly because “people have some trust in the financial system. But that’s a fragile thing. Once people are swindled, they just keep their money in the bank or under the mattress. Often we’ll have students from countries with weak institutions or corruption, and they’ll say, ‘You would never do the things you can do in the U.S., in terms of investing your money.’ Without trust, it becomes harder for entrepreneurs to raise capital and undermines the whole vibrant economic system we have.”

Some measures can make a big difference. In a 2020 paper published in the Journal of Financial Economics, Sutherland and two co-authors found that a 2010 change to the investment adviser qualification exam, which reduced its focus on ethics, had significant effects: People who passed the exam when it featured more rules and ethics material are one-fourth less likely to commit misconduct. They are also more likely to depart employers during or even before scandals.

“It does seem to matter,” Sutherland says. “The person who has had less ethics training is more likely to get in trouble with the industry. You can predict future fraud in a firm by who is quitting. Those with more ethics training are more likely to leave before a scandal breaks.”

In the classroom

Sutherland also believes his interests are well-suited to the MIT Sloan School of Management, since many students are looking to found startups.

“One thing that really stands out about Sloan is that we attract a lot of entrepreneurs,” Sutherland says. “They’re curious about all this stuff: How do I get financing? Should I go to a bank? Should I raise equity? How do I compare myself to competitors? It’s striking to me that if that person wanted to work for a big public firm, I could hand them a textbook that answers many of these questions. But when it comes to private firms, a lot of that is unknown. And it motivates me to find answers.”

And while Sutherland is a prolific researcher, he views classroom time as being just as important. 

“What I hope with every project I work on is that I could take the findings to the classroom, and the students would find it relevant and interesting,” Sutherland says.

As much as Sutherland made a big departure from the family business, he still gets to teach, and in a sense perform for an audience. Ask Sutherland about his students, and he sounds an emphatically upbeat note.

“One of the best things about teaching at MIT,” Sutherland says, “is that the students are smart enough that you can explain how you did the study, and someone will put up a hand and say: ‘What about this, or that?’ You can bring research findings to the classroom and they absorb them and challenge you on them. It’s the best place in the world to teach, because the students are just so curious and so smart.”


A two-dose schedule could make HIV vaccines more effective

MIT researchers find that the first dose primes the immune system, helping it to generate a strong response to the second dose, a week later.


One major reason why it has been difficult to develop an effective HIV vaccine is that the virus mutates very rapidly, allowing it to evade the antibody response generated by vaccines.

Several years ago, MIT researchers showed that administering a series of escalating doses of an HIV vaccine over a two-week period could help overcome a part of that challenge by generating larger quantities of neutralizing antibodies. However, a multidose vaccine regimen administered over a short time is not practical for mass vaccination campaigns.

In a new study, the researchers have now found that they can achieve a similar immune response with just two doses, given one week apart. The first dose, which is much smaller, prepares the immune system to respond more powerfully to the second, larger dose.

This study, which was performed by bringing together computational modeling and experiments in mice, used an HIV envelope protein as the vaccine. A single-dose version of this vaccine is now in clinical trials, and the researchers hope to establish another study group that will receive the vaccine on a two-dose schedule.

“By bringing together the physical and life sciences, we shed light on some basic immunological questions that helped develop this two-dose schedule to mimic the multiple-dose regimen,” says Arup Chakraborty, the John M. Deutch Institute Professor at MIT and a member of MIT’s Institute for Medical Engineering and Science and the Ragon Institute of MIT, MGH and Harvard University.

This approach may also generalize to vaccines for other diseases, Chakraborty notes.

Chakraborty and Darrell Irvine, a former MIT professor of biological engineering and materials science and engineering and member of the Koch Institute for Integrative Cancer Research, who is now a professor of immunology and microbiology at the Scripps Research Institute, are the senior authors of the study, which appears today in Science Immunology. The lead authors of the paper are Sachin Bhagchandani PhD ’23 and Leerang Yang PhD ’24.

Neutralizing antibodies

Each year, HIV infects more than 1 million people around the world, and some of those people do not have access to antiviral drugs. An effective vaccine could prevent many of those infections. One promising vaccine now in clinical trials consists of an HIV protein called an envelope trimer, along with a nanoparticle called SMNP. The nanoparticle, developed by Irvine’s lab, acts as an adjuvant that helps recruit a stronger B cell response to the vaccine.

In clinical trials, this vaccine and other experimental vaccines have been given as just one dose. However, there is growing evidence that a series of doses is more effective at generating broadly neutralizing antibodies. The escalating, seven-dose regimen used in the earlier work, the researchers believe, works well because it mimics what happens when the body is exposed to a virus: The immune system builds up a strong response as more viral proteins, or antigens, accumulate in the body.

In the new study, the MIT team investigated how this response develops and explored whether they could achieve the same effect using a smaller number of vaccine doses.

“Giving seven doses just isn’t feasible for mass vaccination,” Bhagchandani says. “We wanted to identify some of the critical elements necessary for the success of this escalating dose, and to explore whether that knowledge could allow us to reduce the number of doses.”

The researchers began by comparing the effects of one, two, three, four, five, six, or seven doses, all given over a 12-day period. They initially found that while three or more doses generated strong antibody responses, two doses did not. However, by tweaking the dose intervals and ratios, the researchers discovered that giving 20 percent of the vaccine in the first dose and 80 percent in a second dose, seven days later, achieved just as good a response as the seven-dose schedule.

“It was clear that understanding the mechanisms behind this phenomenon would be crucial for future clinical translation,” Yang says. “Even if the ideal dosing ratio and timing may differ for humans, the underlying mechanistic principles will likely remain the same.”

Using a computational model, the researchers explored what was happening in each of these dosing scenarios. This work showed that when all of the vaccine is given as one dose, most of the antigen gets chopped into fragments before it reaches the lymph nodes. Lymph nodes are where B cells become activated to target a particular antigen, within structures known as germinal centers.

When only a tiny amount of the intact antigen reaches these germinal centers, B cells can’t come up with a strong response against that antigen.

However, a very small number of B cells do arise that produce antibodies targeting the intact antigen. So, giving a small amount in the first dose does not “waste” much antigen but allows some B cells and antibodies to develop. If a second, larger dose is given a week later, those antibodies bind to the antigen before it can be broken down and escort it into the lymph node. This allows more B cells to be exposed to that antigen and eventually leads to a large population of B cells that can target it.

“The early doses generate some small amounts of antibody, and that’s enough to then bind to the vaccine of the later doses, protect it, and target it to the lymph node. That's how we realized that we don't need to give seven doses,” Bhagchandani says. “A small initial dose will generate this antibody and then when you give the larger dose, it can again be protected because that antibody will bind to it and traffic it to the lymph node.”

T-cell boost

Those antigens may stay in the germinal centers for weeks or even longer, allowing more B cells to come in and be exposed to them, making it more likely that diverse types of antibodies will develop.

The researchers also found that the two-dose schedule induces a stronger T-cell response. The first dose activates dendritic cells, which promote inflammation and T-cell activation. Then, when the second dose arrives, even more dendritic cells are stimulated, further boosting the T-cell response.

Overall, the two-dose regimen resulted in a fivefold improvement in the T-cell response and a 60-fold improvement in the antibody response, compared to a single vaccine dose.

“Reducing the ‘escalating dose’ strategy down to two shots makes it much more practical for clinical implementation. Further, a number of technologies are in development that could mimic the two-dose exposure in a single shot, which could become ideal for mass vaccination campaigns,” Irvine says.

The researchers are now studying this vaccine strategy in a nonhuman primate model. They are also working on specialized materials that can deliver the second dose over an extended period of time, which could further enhance the immune response.

The research was funded by the Koch Institute Support (core) Grant from the National Cancer Institute, the National Institutes of Health, and the Ragon Institute of MIT, MGH, and Harvard.


Engineers 3D print sturdy glass bricks for building structures

The interlocking bricks, which can be repurposed many times over, can withstand similar pressures as their concrete counterparts.


What if construction materials could be put together and taken apart as easily as LEGO bricks? Such reconfigurable masonry would be disassembled at the end of a building’s lifetime and reassembled into a new structure, in a sustainable cycle that could supply generations of buildings using the same physical building blocks.

That’s the idea behind circular construction, which aims to reuse and repurpose a building’s materials whenever possible, to minimize the manufacturing of new materials and reduce the construction industry’s “embodied carbon,” which refers to the greenhouse gas emissions associated with every process throughout a building’s construction, from manufacturing to demolition.

Now MIT engineers, motivated by circular construction’s eco potential, are developing a new kind of reconfigurable masonry made from 3D-printed, recycled glass. Using a custom 3D glass printing technology provided by MIT spinoff Evenline, the team has made strong, multilayered glass bricks, each in the shape of a figure eight, that are designed to interlock, much like LEGO bricks.

In mechanical testing, a single glass brick withstood pressures comparable to those of a concrete block. As a structural demonstration, the researchers constructed a wall of interlocking glass bricks. They envision that 3D-printable glass masonry could be reused many times over as recyclable bricks for building facades and internal walls.

“Glass is a highly recyclable material,” says Kaitlyn Becker, assistant professor of mechanical engineering at MIT. “We’re taking glass and turning it into masonry that, at the end of a structure’s life, can be disassembled and reassembled into a new structure, or can be stuck back into the printer and turned into a completely different shape. All this builds into our idea of a sustainable, circular building material.”

“Glass as a structural material kind of breaks people’s brains a little bit,” says Michael Stern, a former MIT graduate student and researcher in both MIT’s Media Lab and Lincoln Laboratory, who is also founder and director of Evenline. “We’re showing this is an opportunity to push the limits of what’s been done in architecture.”

Becker and Stern, with their colleagues, detail their glass brick design in a study appearing today in the journal Glass Structures and Engineering. Their MIT co-authors include lead author Daniel Massimino and Charlotte Folinus, along with Ethan Townsend at Evenline.

Lock step

The inspiration for the new circular masonry design arose partly in MIT’s Glass Lab, where Becker and Stern, then undergraduate students, first learned the art and science of blowing glass.

“I found the material fascinating,” says Stern, who later designed a 3D printer capable of printing molten recycled glass — a project he took on while studying in the mechanical engineering department. “I started thinking of how glass printing can find its place and do interesting things, construction being one possible route.”

Meanwhile, Becker, who accepted a faculty position at MIT, began exploring the intersection of manufacturing and design, and ways to develop new processes that enable innovative designs.

“I get excited about expanding design and manufacturing spaces for challenging materials with interesting characteristics, like glass and its optical properties and recyclability,” Becker says. “As long as it’s not contaminated, you can recycle glass almost infinitely.”

She and Stern teamed up to see whether and how 3D-printable glass could be made into a structural masonry unit as sturdy and stackable as traditional bricks. For their new study, the team used the Glass 3D Printer 3 (G3DP3), the latest version of Evenline’s glass printer, which pairs with a furnace to melt crushed glass bottles into a molten, printable form that the printer then deposits in layered patterns.

The team printed prototype glass bricks using soda-lime glass, the type typically used in a glassblowing studio. They incorporated two round pegs onto each printed brick, similar to the studs on a LEGO brick. Like the toy blocks, the pegs enable bricks to interlock and assemble into larger structures. An interlayer material placed between the bricks prevents scratches and cracks between glass surfaces; it can be removed if a brick structure is dismantled and recycled, allowing the bricks to be remelted in the printer and formed into new shapes. The team decided to make the blocks into a figure-eight shape.

“With the figure-eight shape, we can constrain the bricks while also assembling them into walls that have some curvature,” Massimino says.

Stepping stones

The team printed glass bricks and tested their mechanical strength in an industrial hydraulic press that squeezed the bricks until they began to fracture. The researchers found that the strongest bricks were able to hold up to pressures that are comparable to what concrete blocks can withstand. Those strongest bricks were made mostly from printed glass, with a separately manufactured interlocking feature that attached to the bottom of the brick. These results suggest that most of a masonry brick could be made from printed glass, with an interlocking feature that could be printed, cast, or separately manufactured from a different material.

“Glass is a complicated material to work with,” Becker says. “The interlocking elements, made from a different material, showed the most promise at this stage.”

The group is looking into whether more of a brick’s interlocking feature could be made from printed glass, but doesn’t see this as a dealbreaker in moving forward to scale up the design. To demonstrate glass masonry’s potential, they constructed a curved wall of interlocking glass bricks. Next, they aim to build progressively bigger, self-supporting glass structures.

“We have more understanding of what the material’s limits are, and how to scale,” Stern says. “We’re thinking of stepping stones to buildings, and want to start with something like a pavilion — a temporary structure that humans can interact with, and that you could then reconfigure into a second design. And you could imagine that these blocks could go through a lot of lives.”

This research was supported, in part, by the Bose Research Grant Program and MIT’s Research Support Committee.


New AI JetPack accelerates the entrepreneurial process

The digital adviser helps users swiftly navigate the 24-step “Disciplined Entrepreneurship” process.


Apple co-founder Steve Jobs described the computer as a bicycle for the mind. What the Martin Trust Center for MIT Entrepreneurship just launched has a bit more horsepower.

“Maybe it’s not a Ferrari yet, but we have a car,” says Bill Aulet, the center’s managing director. The vehicle: the MIT Entrepreneurship JetPack, a generative artificial intelligence tool, grounded in Aulet’s 24-step Disciplined Entrepreneurship framework, that feeds structured prompts into large language models.

Introduce a startup idea to the Eship JetPack, “and it’s like having five or 10 or 12 MIT undergraduates who instantaneously run out and do all the research you want based on the question you asked, and then they bring back the answer,” Aulet says.

The tool is currently being used by entrepreneurship students and piloted outside MIT, and there is a waitlist that prospective users can join. The tool is accessed through the Trust Center’s Orbit digital entrepreneurship platform, which was launched for student use in 2019. Orbit grew out of a need for an alternative to the static Trust Center website, Aulet says.

“We weren’t following our own protocols of entrepreneurship,” he says. “You meet the students where they are, and more and more of them were on their phones. I said, ‘Let’s build an app that’s more dynamic than a static website, and that will be the way that we can get to the students.’”

With the help of Trust Center Executive Director Paul Cheek and Product Lead Doug Williams, Orbit has become a one-stop shop for student entrepreneurs. On the platform’s back end, leaders at the center are able to see what users are and are not clicking on.

Aulet and his team have been studying that user information since Orbit’s launch. It’s enabled them to learn how students want to access information: not just details about course offerings or startup competition applications, but also guidance on an idea they’re working on and connections to an entrepreneurial community of co-founders and advisers. The team also received advice from Ethan Mollick SM ’04, PhD ’10, an associate professor of management at the Wharton School and author of a new book, “Co-Intelligence: Living and Working With AI.”

Official work on the Eship JetPack began about six months ago. The name was inspired by the acceleration a jet pack provides, and the need for a human to take advantage of the boost and guide its direction.

“As we moved from our initial focus on capturing information to providing guidance, MIT's Disciplined Entrepreneurship and Startup Tactics frameworks were the perfect place to start,” Williams says.

One of the earliest beta users, Shari Van Cleave, MBA ’15, demonstrated how to use the AI tool in a YouTube video.

She submitted an experimental idea for mobile electric vehicle charging, and within seconds the AI tool suggested market segments, beachhead markets, a business model, pricing, assumptions, testing, and a product plan — and that’s only seven of the 24 steps of the Disciplined Entrepreneurship framework that she explored.

“I was impressed by how quickly the AI, with just a few details, generated recommendations for everything from market-sizing (TAM) to lifetime customer value models,” Van Cleave said in an email. “Having a high-quality rough draft means founders, whether new or experienced, can execute and fundraise faster.”

The tool can also be useful for entrepreneurs who already have an idea and are well along in the 24-step process, Aulet says. For example, they might want insights and quotes about how their company can improve its performance, or want to determine whether there’s a better market to be targeting.

“Our goal is to lift the field of entrepreneurship, and a tool like this would allow more people to be entrepreneurs, and be better entrepreneurs,” Aulet says.


AI model can reveal the structures of crystalline materials

By analyzing X-ray crystallography data, the model could help researchers develop new materials for many applications, including batteries and magnets.


For more than 100 years, scientists have been using X-ray crystallography to determine the structure of crystalline materials such as metals, rocks, and ceramics.

This technique works best when the crystal is intact, but in many cases, scientists have only a powdered version of the material, which contains random fragments of the crystal. This makes it more challenging to piece together the overall structure.

MIT chemists have now come up with a new generative AI model that can make it much easier to determine the structures of these powdered crystals. The prediction model could help researchers characterize materials for use in batteries, magnets, and many other applications.

“Structure is the first thing that you need to know for any material. It’s important for superconductivity, it’s important for magnets, it’s important for knowing what photovoltaic you created. It’s important for any application that you can think of which is materials-centric,” says Danna Freedman, the Frederick George Keyes Professor of Chemistry at MIT.

Freedman and Jure Leskovec, a professor of computer science at Stanford University, are the senior authors of the new study, which appears today in the Journal of the American Chemical Society. MIT graduate student Eric Riesel and Yale University undergraduate Tsach Mackey are the lead authors of the paper.

Distinctive patterns

Crystalline materials, which include metals and most other inorganic solid materials, are made of lattices that consist of many identical, repeating units. These units can be thought of as “boxes” with a distinctive shape and size, with atoms arranged precisely within them.

When X-rays are beamed at these lattices, they diffract off atoms at different angles and intensities, revealing information about the positions of the atoms and the bonds between them. Since the early 1900s, this technique has been used to analyze materials, including biological molecules that have a crystalline structure, such as DNA and some proteins.
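
For context, those diffraction angles obey Bragg’s law, nλ = 2d sin θ, which relates the X-ray wavelength λ and the spacing d between planes of atoms to the angles θ at which the diffracted beams reinforce one another and produce strong peaks.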

For materials that exist only as a powdered crystal, solving these structures becomes much more difficult because the fragments don’t carry the full 3D structure of the original crystal.

“The precise lattice still exists, because what we call a powder is really a collection of microcrystals. So, you have the same lattice as a large crystal, but they’re in a fully randomized orientation,” Freedman says.

For thousands of these materials, X-ray diffraction patterns exist but remain unsolved. To try to crack the structures of these materials, Freedman and her colleagues trained a machine-learning model on data from a database called the Materials Project, which contains more than 150,000 materials. First, they fed tens of thousands of these materials into an existing model that can simulate what the X-ray diffraction patterns would look like. Then, they used those patterns to train their AI model, which they call Crystalyze, to predict structures based on the X-ray patterns.
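
For readers who want to see the shape of that pipeline, here is a minimal sketch, assuming pymatgen’s XRDCalculator as the pattern simulator; StructurePredictor is a hypothetical stand-in for Crystalyze, not the authors’ actual code.

```python
# Illustrative sketch only (not the Crystalyze code). "StructurePredictor" is a
# hypothetical stand-in for the authors' generative model; pymatgen's
# XRDCalculator is one widely used simulator of powder diffraction patterns.
from pymatgen.analysis.diffraction.xrd import XRDCalculator
from pymatgen.core import Structure


def simulate_powder_pattern(structure: Structure):
    """Return a simulated powder pattern as (two-theta, intensity) pairs."""
    calc = XRDCalculator()  # defaults to Cu K-alpha radiation
    pattern = calc.get_pattern(structure)
    return list(zip(pattern.x, pattern.y))


def build_training_pairs(known_structures):
    """Pair each known structure with its simulated diffraction pattern."""
    return [(simulate_powder_pattern(s), s) for s in known_structures]


# model = StructurePredictor()                       # hypothetical generative model
# model.fit(build_training_pairs(known_structures))  # patterns in, structures out
```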

The model breaks the process of predicting structures into several subtasks. First, it determines the size and shape of the lattice “box” and which atoms will go into it. Then, it predicts the arrangement of atoms within the box. For each diffraction pattern, the model generates several possible structures, which can be tested by feeding the structures into a model that determines diffraction patterns for a given structure.

“Our model is generative AI, meaning that it generates something that it hasn’t seen before, and that allows us to generate several different guesses,” Riesel says. “We can make a hundred guesses, and then we can predict what the powder pattern should look like for our guesses. And then if the input looks exactly like the output, then we know we got it right.”
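
That generate-and-verify procedure amounts to the loop sketched below. It reuses the simulate_powder_pattern helper from the earlier sketch, and model.generate and pattern_distance are hypothetical placeholders for Crystalyze’s sampling step and its pattern-matching criterion.

```python
# Illustrative generate-then-verify loop (hypothetical interfaces, not the
# authors' code). model.generate and pattern_distance are placeholders.
def solve_structure(measured_pattern, model, n_guesses=100):
    best_structure, best_score = None, float("inf")
    for _ in range(n_guesses):
        # Generate one candidate: lattice "box", contents, and atom positions.
        candidate = model.generate(measured_pattern)
        # Forward-simulate the powder pattern that candidate would produce.
        simulated = simulate_powder_pattern(candidate)
        # Keep the candidate whose simulated pattern best matches the input.
        score = pattern_distance(measured_pattern, simulated)
        if score < best_score:
            best_structure, best_score = candidate, score
    return best_structure, best_score
```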

Solving unknown structures

The researchers tested the model on several thousand simulated diffraction patterns from the Materials Project. They also tested it on more than 100 experimental diffraction patterns from the RRUFF database, a collection of powder X-ray diffraction data for nearly 14,000 natural crystalline minerals; these experimental patterns had been held out of the training data. On these data, the model was accurate about 67 percent of the time. Then, they began testing the model on diffraction patterns that hadn’t been solved before. These data came from the Powder Diffraction File, which contains diffraction data for more than 400,000 solved and unsolved materials.

Using their model, the researchers came up with structures for more than 100 of these previously unsolved patterns. They also used their model to discover structures for three materials that Freedman’s lab created by forcing elements that do not react at atmospheric pressure to form compounds under high pressure. This approach can be used to generate new materials that have radically different crystal structures and physical properties, even though their chemical composition is the same.

Graphite and diamond — both made of pure carbon — are examples of such materials. The materials that Freedman has developed, which each contain bismuth and one other element, could be useful in the design of new materials for permanent magnets.

“We found a lot of new materials from existing data, and most importantly, solved three unknown structures from our lab that comprise the first new binary phases of those combinations of elements,” Freedman says.

Being able to determine the structures of powdered crystalline materials could help researchers working in nearly any materials-related field, according to the MIT team, which has posted a web interface for the model at crystalyze.org.

The research was funded by the U.S. Department of Energy and the National Science Foundation.