Using equations to boost racial equality

MIT mathematics graduate student Lucas Mason-Brown has been named one of 35 Echoing Green Fellows for his work with Data for Black Lives (D4BL), an organization he recently co-founded to mobilize scientists to use data science to fight racial bias in real estate, finance, criminal justice, and other areas.

Historically, data science has been used in ways that disproportionately and negatively affect black communities. Big data and algorithms have been instrumental in predatory lending practices, predictive policing, and redlining, the practice of denying services such as banking or insurance to residents of a specific area based on its racial or ethnic composition.

D4BL aims to turn data science tools such as statistical modeling, data visualization, and crowdsourcing into instruments for fighting bias, building progressive movements, and promoting civic engagement.

Until last summer, D4BL was just a Twitter account with a few dozen followers. Then Mason-Brown and his friend and co-founder Yeshimabeit Milner put their ideas into action with a third friend, Max Clermont, who works in public health.

Now, with $90,000 of seed funding that the Echoing Green Fellowship will provide over the next two years, D4BL will hire Milner as full-time executive director. Echoing Green is a nonprofit that has kickstarted both nonprofit and for-profit social entrepreneurs in more than 60 countries since 1987. Other fellowship benefits include health insurance, professional development, networking opportunities, technical support, pro bono partnerships, and a dedicated Echoing Green portfolio manager to help grow the organization. D4BL joins a community of social impact leaders that includes Teach for America, City Year, One Acre Fund, and SKS Microfinance.

Public launch

D4BL will host a conference at the MIT Media Lab in November, an event that will also serve as the organization’s public launch. The conference is expected to gather more than 200 activists, organizers, data scientists, computer programmers, and public officials.

Conference speakers from MIT will include President L. Rafael Reif, Dean of the School of Humanities, Arts, and Social Sciences Melissa Nobles, and Media Lab Director’s Fellows Adam Foss and Julia Angwin, as well as Cathy O’Neil, MIT instructor and author of “Weapons of Math Destruction.” Events will include a discussion of the mathematics of gerrymandering congressional districts led by the Metric Geometry and Gerrymandering Group, and a hackathon that will encourage mathematicians and data scientists to work with activists and organizers on pressing racial justice issues.

“The issues focused around people of color in this country deeply resonate with me,” said Isaiah Borne, a rising senior in chemical engineering, conference organizer, and political action co-chair of the MIT Black Students’ Union. “The D4BL conference is a unique opportunity for me — and anyone who’s involved — to look at these social issues from a different perspective and create real, meaningful change in our community.”

The conference has also received support from MIT Vice President Kirk Kolenbrander, Vice President of Student Life Suzy Nelson, Chancellor Cynthia Barnhart, Institute for Data Systems and Society Director Munther Dahleh, Media Lab Director Joi Ito, and Institute Community and Equity Officer Ed Bertschinger.

“We have been overwhelmed by the amount of support and interest we have received at MIT and beyond,” said Mason-Brown. “There is a real thirst and a real need for this kind of work.”

A student of representation theory

A native of Belmont, Massachusetts, Mason-Brown studied math and philosophy at Brown University, received his MS in mathematics from Trinity College in Dublin, and taught seventh grade math and science for a year at the Edward Brooke School in Roslindale, Massachusetts. He is now in his second year at MIT, where he studies representation theory — the study of abstract symmetries — with his “mathematical hero,” Professor David Vogan.

Custom “cache hierarchies” increase processing speed while reducing energy consumption

For decades, computer chips have increased efficiency by using “caches,” small, local memory banks that store frequently used data and cut down on time- and energy-consuming communication with off-chip memory.

Today’s chips generally have three or even four different levels of cache, each of which is more capacious but slower than the last. The sizes of the caches represent a compromise between the needs of different kinds of programs, but it’s rare that they’re exactly suited to any one program.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have designed a system that reallocates cache access on the fly, to create new “cache hierarchies” tailored to the needs of particular programs.

The researchers tested their system on a simulation of a chip with 36 cores, or processing units. They found that, compared to its best-performing predecessors, the system increased processing speed by 20 to 30 percent while reducing energy consumption by 30 to 85 percent.

“What you would like is to take these distributed physical memory resources and build application-specific hierarchies that maximize the performance for your particular application,” says Daniel Sanchez, an assistant professor in the Department of Electrical Engineering and Computer Science (EECS), whose group developed the new system.

“And that depends on many things in the application. What’s the size of the data it accesses? Does it have hierarchical reuse, so that it would benefit from a hierarchy of progressively larger memories? Or is it scanning through a data structure, so we’d be better off having a single but very large level? How often does it access data? How much would its performance suffer if we just let data drop to main memory? There are all these different tradeoffs.”
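To make those tradeoffs concrete, here is a minimal sketch of how a runtime might choose between a deep hierarchy and a single large level based on an application’s measured reuse pattern. The function name, thresholds, and sizes are all illustrative assumptions, not Jenga’s actual allocation algorithm:

```python
# Hypothetical sketch: pick a virtual cache configuration from a simple
# reuse profile. Thresholds and sizes are invented for illustration;
# Jenga's real allocation logic is far more sophisticated.

def choose_hierarchy(working_set_kb, reuse_fraction):
    """working_set_kb: approximate size of the data the app touches.
    reuse_fraction: share of accesses that revisit recently used data."""
    if reuse_fraction > 0.5 and working_set_kb > 512:
        # Hierarchical reuse: a small fast level backed by a larger one.
        return [("L1-virtual", 256), ("L2-virtual", working_set_kb)]
    else:
        # Scanning access: one large level avoids pointless copying
        # between levels.
        return [("single-level", working_set_kb)]

print(choose_hierarchy(4096, 0.8))  # -> two-level virtual hierarchy
print(choose_hierarchy(4096, 0.1))  # -> one large level
```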

Sanchez and his coauthors — Po-An Tsai, a graduate student in EECS at MIT, and Nathan Beckmann, who was an MIT graduate student when the work was done and is now an assistant professor of computer science at Carnegie Mellon University — presented the new system, dubbed Jenga, at the International Symposium on Computer Architecture last week.

Staying local

For the past 10 years or so, improvements in computer chips’ processing power have come from the addition of more cores. The chips in most of today’s desktop computers have four cores, but several major chipmakers have announced plans to move to six cores in the next year or so, and 16-core processors are not uncommon in high-end servers. Most industry watchers assume that the core count will continue to climb.

Each core in a multicore chip usually has two levels of private cache. All the cores share a third cache, which is actually broken up into discrete memory banks scattered around the chip. Some new chips also include a so-called DRAM cache, which is etched into a second chip that is mounted on top of the first.
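As a rough illustration of why this layered organization saves time, the sketch below models a lookup that tries each level in turn and falls back to main memory. The latencies are invented round numbers, not measurements from any real chip:

```python
# Illustrative model of a multi-level cache lookup. Latencies (in
# cycles) are made-up placeholders for demonstration only.
LEVELS = [
    ("L1 (private)", 4),
    ("L2 (private)", 12),
    ("L3 (shared, banked)", 40),
    ("DRAM cache (stacked)", 120),
]
MAIN_MEMORY_LATENCY = 300

def lookup(address, contents):
    """contents maps a level name to the set of addresses it holds."""
    total = 0
    for name, latency in LEVELS:
        total += latency
        if address in contents.get(name, set()):
            return name, total  # hit: cost is the cycles spent so far
    return "main memory", total + MAIN_MEMORY_LATENCY

contents = {"L1 (private)": {0x10}, "L3 (shared, banked)": {0x20}}
print(lookup(0x20, contents))  # ('L3 (shared, banked)', 56)
print(lookup(0x99, contents))  # ('main memory', 476)
```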

A new generation of computers for the coming superstorm of data

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature, by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on a chip, there is not enough room to place the two side by side, even as transistors continue to shrink (the miniaturization trend described by Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

However, such an architecture is not possible with existing silicon-based technology, according to Shulaker, who is also a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 degrees Celsius. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

A state-of-the-art facility for prototyping advanced fabrics

Just over a year after its funding award, a new center for the development and commercialization of advanced fabrics is officially opening its headquarters today in Cambridge, Massachusetts, and will be unveiling the first two advanced fabric products to be commercialized from the center’s work.

Advanced Functional Fabrics of America (AFFOA) is a public-private partnership, part of Manufacturing USA, that is working to develop and introduce U.S.-made high-tech fabrics that provide services such as health monitoring, communications, and dynamic design. In the process, AFFOA aims to facilitate economic growth through U.S. fiber and fabric manufacturing.

AFFOA’s national headquarters will open today, with an event featuring Under Secretary of Defense for Acquisition, Technology, and Logistics James MacStravic, U.S. Senator Elizabeth Warren, U.S. Rep. Niki Tsongas, U.S. Rep. Joe Kennedy, Massachusetts Governor Charlie Baker, New Balance CEO Robert DeMartini, MIT President L. Rafael Reif, and AFFOA CEO Yoel Fink. Sample versions of one of the center’s new products, a programmable backpack made of advanced fabric produced in North and South Carolina, will be distributed to attendees at the opening.

AFFOA was created last year with over $300 million in funding from the U.S. and state governments and from academic and corporate partners, to help foster the creation of revolutionary new developments in fabric and fiber-based products. The institute seeks to create “fabrics that see, hear, sense, communicate, store and convert energy, regulate temperature, monitor health, and change color,” says Fink, a professor of materials science and engineering at MIT. In short, he says, AFFOA aims to catalyze the creation of a whole new industry that envisions “fabrics as the new software.”

Under Fink’s leadership, the independent, nonprofit organization has already created a network of more than 100 partners, including much of the fabric manufacturing base in the U.S. as well as startups and universities spread across 28 states.

“AFFOA’s promise reflects the very best of MIT: It’s bold, innovative, and daring,” says MIT President L. Rafael Reif. “It leverages and drives technology to solve complex problems, in service to society. And it draws its strength from a rich network of collaborators — across governments, universities, and industries. It has been inspiring to watch the partnership’s development this past year, and it will be exciting to witness the new frontiers and opportunities it will open.”

Photon interactions at room temperature

Ordinarily, light particles — photons — don’t interact. If two photons collide in a vacuum, they simply pass through each other.

An efficient way to make photons interact could open new prospects for both classical optics and quantum computing, an experimental technology that promises large speedups on some types of calculations.

In recent years, physicists have enabled photon-photon interactions using atoms of rare elements cooled to very low temperatures.

But in the latest issue of Physical Review Letters, MIT researchers describe a new technique for enabling photon-photon interactions at room temperature, using a silicon crystal with distinctive patterns etched into it. In physics jargon, the crystal introduces “nonlinearities” into the transmission of an optical signal.
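“Nonlinearity” here has a precise meaning: the material’s response stops being proportional to the incoming field, which is what lets one light field affect another. In textbook form (this is general nonlinear optics, not the specific design in the paper):

```latex
% Material polarization expanded in powers of the optical field E.
% The chi^(2), chi^(3), ... terms are the "nonlinearities."
P = \epsilon_0 \left( \chi^{(1)} E + \chi^{(2)} E^2 + \chi^{(3)} E^3 + \cdots \right)
```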

“All of these approaches that had atoms or atom-like particles require low temperatures and work over a narrow frequency band,” says Dirk Englund, an associate professor of electrical engineering and computer science at MIT and senior author on the new paper. “It’s been a holy grail to come up with methods to realize single-photon-level nonlinearities at room temperature under ambient conditions.”

Joining Englund on the paper are Hyeongrak Choi, a graduate student in electrical engineering and computer science, and Mikkel Heuck, who was a postdoc in Englund’s lab when the work was done and is now at the Technical University of Denmark.

Photonic independence

Quantum computers harness a strange physical property called “superposition,” in which a quantum particle can be said to inhabit two contradictory states at the same time. The spin, or magnetic orientation, of an electron, for instance, could be both up and down at the same time; the polarization of a photon could be both vertical and horizontal.

If a string of quantum bits — or qubits, the quantum analog of the bits in a classical computer — is in superposition, it can, in some sense, canvass multiple solutions to the same problem simultaneously, which is why quantum computers promise speedups.
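In standard notation, a single qubit in superposition, and the n-qubit register state that lets a quantum computer “canvass” many values at once, can be written as follows (a textbook formulation, not specific to the MIT work):

```latex
% A single qubit in superposition of |0> and |1>:
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1.

% An n-qubit register in a uniform superposition of all 2^n bit strings:
|\Psi\rangle = \frac{1}{\sqrt{2^n}} \sum_{x=0}^{2^n - 1} |x\rangle
```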

Most experimental qubits use ions trapped in oscillating electromagnetic fields, superconducting circuits, or — like Englund’s own research — defects in the crystal structure of diamonds. With all these technologies, however, superpositions are difficult to maintain.

A significant MIT investment in advanced manufacturing innovation

These are not your grandmother’s fibers and textiles. These are tomorrow’s functional fabrics — designed and prototyped in Cambridge, Massachusetts, and manufactured across a network of U.S. partners. This is the vision of the new headquarters for the Manufacturing USA institute called Advanced Functional Fabrics of America (AFFOA) that opened Monday at 12 Emily Street, steps away from the MIT campus.

AFFOA headquarters represents a significant MIT investment in advanced manufacturing innovation. This facility includes a Fabric Discovery Center that provides end-to-end prototyping from fiber design to system integration of new textile-based products, and will be used for education and workforce development in the Cambridge and greater Boston community. AFFOA headquarters also includes startup incubation space for companies spun out from MIT and other partners who are innovating advanced fabrics and fibers for applications ranging from apparel and consumer electronics to automotive and medical devices.

MIT was a founding member of the AFFOA team that partnered with the Department of Defense in April 2016 to launch this new institute as a public-private partnership through an independent nonprofit also founded by MIT. AFFOA’s chief executive officer is Yoel Fink. Prior to his current role, Fink led the AFFOA proposal last year as professor of materials science and engineering and director of the Research Laboratory for Electronics at MIT, with his vision to create a “fabric revolution.” That revolution under Fink’s leadership was grounded in new fiber materials and textile manufacturing processes for fabrics that see, hear, sense, communicate, store and convert energy, and monitor health.

From the perspectives of research, education, and entrepreneurship, MIT’s engagement in AFFOA draws on many strengths. These include the multifunctional drawn fibers developed by Fink and others, which combine multiple materials and embed electronic capabilities so that the fibers themselves function as devices. That fiber concept, developed at MIT, has been applied to key challenges in the defense sector through MIT’s Institute for Soldier Nanotechnologies, commercialized through a startup called OmniGuide (now OmniGuide Surgical) for laser surgery devices, and extended to several new areas, including neural probes developed by Polina Anikeeva, MIT associate professor of materials science and engineering. Beyond these diverse uses of fiber devices, MIT faculty including Greg Rutledge, the Lammot du Pont Professor of Chemical Engineering, have also led innovation in predictive modeling and design of polymer nanofibers, fiber processing and characterization, and self-assembly of woven and nonwoven filters and textiles for diverse applications and industries.

Education, not income or housing costs, predicts neighborhood revitalization

Four years ago, researchers at MIT’s Media Lab developed a computer vision system that can analyze street-level photos taken in urban neighborhoods in order to gauge how safe the neighborhoods would appear to human observers.

Now, in an attempt to identify factors that predict urban change, the MIT team and colleagues at Harvard University have used the system to quantify the physical improvement or deterioration of neighborhoods in five American cities.

In work reported today in the Proceedings of the National Academy of Sciences, the system compared 1.6 million pairs of photos taken seven years apart. The researchers used the results of those comparisons to test several hypotheses popular in the social sciences about the causes of urban revitalization. They found that density of highly educated residents, proximity to central business districts and other physically attractive neighborhoods, and the initial safety score assigned by the system all correlate strongly with improvements in physical condition.

Perhaps more illuminating, however, are the factors that turn out not to predict change. Raw income levels do not, and neither do housing prices.

“So it’s not an income story — it’s not that there are rich people there, and they happen to be more educated,” says César Hidalgo, the Asahi Broadcasting Corporation Associate Professor of Media Arts and Sciences and senior author on the paper. “It appears to be more of a skill story.”

“That’s the first theory we found support for,” adds Nikhil Naik, a postdoc at MIT’s Abdul Latif Jameel Poverty Action Lab and first author on the new paper. “And the second theory was the so-called tipping theory, which says that neighborhoods that are already doing well will continue to do better, and neighborhoods that are not doing well will not improve as much.”

While the researchers found that, on average, higher initial safety scores did indeed translate to larger score increases over time, the relationship was linear: A neighborhood with twice the initial score of another would see about twice as much improvement. This contradicts the predictions of some theorists, who have argued that past some “tipping point,” improvements in a neighborhood’s quality should begin to accelerate.
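The linearity test the researchers describe amounts to fitting the relationship and checking for curvature. The sketch below shows the kind of check involved, on synthetic data fabricated for demonstration, not the study’s 1.6 million photo pairs:

```python
# Synthetic illustration of testing for a linear (vs. accelerating)
# relationship between initial safety score and later improvement.
import numpy as np

rng = np.random.default_rng(0)
initial_score = rng.uniform(1, 10, size=1000)
# Under the linear hypothesis, improvement is proportional to the score.
improvement = 0.3 * initial_score + rng.normal(0, 0.2, size=1000)

# Fit a line and a quadratic alternative; a near-zero quadratic term
# supports linearity (i.e., no "tipping point" acceleration).
linear = np.polyfit(initial_score, improvement, 1)
quadratic = np.polyfit(initial_score, improvement, 2)
print("linear slope:", round(linear[0], 3))       # ~0.3
print("quadratic term:", round(quadratic[0], 4))  # ~0 under linearity
```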

The researchers also tested the hypothesis that neighborhoods tend to be revitalized when their buildings have decayed enough to require replacement or renovation. But they found little correlation between the average age of a neighborhood’s buildings and its degree of physical improvement.

Joining Naik and Hidalgo on the paper are Ramesh Raskar, an associate professor of media arts and sciences, who, with Hidalgo, supervised Naik’s PhD thesis in the Media Lab, and two Harvard professors: Scott Kominers, an associate professor of entrepreneurial management at the Harvard Business School, and Edward Glaeser, an economics professor.

Peering into the inner workings of neural networks trained on visual data

Neural networks, which learn to perform computational tasks by analyzing large sets of training data, are responsible for today’s best-performing artificial intelligence systems, from speech recognition systems, to automatic translators, to self-driving cars.

But neural nets are black boxes. Once they’ve been trained, even their designers rarely have any idea what they’re doing — what data elements they’re processing and how.

Two years ago, a team of computer-vision researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) described a method for peering into the black box of a neural net trained to identify visual scenes. The method provided some interesting insights, but it required data to be sent to human reviewers recruited through Amazon’s Mechanical Turk crowdsourcing service.

At this year’s Computer Vision and Pattern Recognition conference, CSAIL researchers will present a fully automated version of the same system. Where the previous paper reported the analysis of one type of neural network trained to perform one task, the new paper reports the analysis of four types of neural networks trained to perform more than 20 tasks, including recognizing scenes and objects, colorizing grey images, and solving puzzles. Some of the new networks are so large that analyzing any one of them would have been cost-prohibitive under the old method.

The researchers also conducted several sets of experiments on their networks that not only shed light on the nature of several computer-vision and computational-photography algorithms, but could also provide some evidence about the organization of the human brain.

Neural networks are so called because they loosely resemble the human nervous system, with large numbers of fairly simple but densely connected information-processing “nodes.” Like neurons, a neural net’s nodes receive information signals from their neighbors and then either “fire” — emitting their own signals — or don’t. And as with neurons, the strength of a node’s firing response can vary.

In both the new paper and the earlier one, the MIT researchers doctored neural networks trained to perform computer vision tasks so that they disclosed the strength with which individual nodes fired in response to different input images. Then they selected the 10 input images that provoked the strongest response from each node.
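A minimal sketch of that selection step might look like the following. The array shapes and names are assumptions for illustration; the real instrumentation depends on the particular network:

```python
# Sketch: given recorded firing strengths for every node over a set of
# images, pick the 10 images that drive each node hardest.
import numpy as np

num_images, num_nodes = 5000, 512
# Stand-in for recorded activations, one row per image, one column per node.
activations = np.random.rand(num_images, num_nodes)

# Image indices sorted by activation; keep the strongest 10 per node.
top10 = np.argsort(activations, axis=0)[-10:][::-1]
# top10[:, j] holds the 10 image indices that most strongly activate node j.
print(top10[:, 0])
```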

In the earlier paper, the researchers sent the images to workers recruited through Mechanical Turk, who were asked to identify what the images had in common. In the new paper, they use a computer system instead.

“We catalogued 1,100 visual concepts — things like the color green, or a swirly texture, or wood material, or a human face, or a bicycle wheel, or a snowy mountaintop,” says David Bau, an MIT graduate student in electrical engineering and computer science and one of the paper’s two first authors. “We drew on several data sets that other people had developed, and merged them into a broadly and densely labeled data set of visual concepts. It’s got many, many labels, and for each label we know which pixels in which image correspond to that label.”
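Given such pixel-level labels, a natural way to match a node to a concept is to measure how well the node’s strongly activating region overlaps each concept’s labeled pixels, for example with an intersection-over-union score. This is a sketch under assumed shapes and an assumed threshold, not necessarily the paper’s exact procedure:

```python
import numpy as np

def iou(activation_map, concept_mask, threshold):
    """Overlap between a node's strongly activating pixels and a labeled
    concept mask. Both inputs are HxW arrays; the threshold is assumed."""
    active = activation_map > threshold
    inter = np.logical_and(active, concept_mask).sum()
    union = np.logical_or(active, concept_mask).sum()
    return inter / union if union else 0.0

# The concept with the highest IoU across images becomes the node's label.
```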

Streamlining industrial processes for drug manufacturing

When organic chemists identify a useful chemical compound — a new drug, for instance — it’s up to chemical engineers to determine how to mass-produce it.

There could be 100 different sequences of reactions that yield the same end product. But some of them use cheaper reagents and lower temperatures than others, and perhaps most importantly, some are much easier to run continuously, with technicians occasionally topping up reagents in different reaction chambers.

Historically, determining the most efficient and cost-effective way to produce a given molecule has been as much art as science. But MIT researchers are trying to put this process on a more secure empirical footing, with a computer system that’s trained on thousands of examples of experimental reactions and that learns to predict what a reaction’s major products will be.

The researchers’ work appears in the American Chemical Society’s journal Central Science. Like all machine-learning systems, theirs presents its results in terms of probabilities. In tests, the system was able to predict a reaction’s major product 72 percent of the time; 87 percent of the time, it ranked the major product among its three most likely results.
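Those two figures are top-1 and top-3 accuracy, computed from each reaction’s ranked list of predicted products. The sketch below shows the calculation on synthetic placeholder data:

```python
# Computing top-k accuracy from ranked product predictions.
# Predictions and labels here are synthetic placeholders.
def top_k_accuracy(ranked_predictions, true_products, k):
    hits = sum(truth in ranked[:k]
               for ranked, truth in zip(ranked_predictions, true_products))
    return hits / len(true_products)

ranked_predictions = [["A", "B", "C"], ["B", "A", "D"], ["C", "E", "A"]]
true_products = ["A", "D", "F"]
print(top_k_accuracy(ranked_predictions, true_products, 1))  # 0.333...
print(top_k_accuracy(ranked_predictions, true_products, 3))  # 0.666...
```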

“There’s clearly a lot understood about reactions today,” says Klavs Jensen, the Warren K. Lewis Professor of Chemical Engineering at MIT and one of four senior authors on the paper, “but it’s a highly evolved, acquired skill to look at a molecule and decide how you’re going to synthesize it from starting materials.”

With the new work, Jensen says, “the vision is that you’ll be able to walk up to a system and say, ‘I want to make this molecule.’ The software will tell you the route you should make it from, and the machine will make it.”

With a 72 percent chance of identifying a reaction’s chief product, the system is not yet ready to anchor the type of completely automated chemical synthesis that Jensen envisions. But it could help chemical engineers more quickly converge on the best sequence of reactions — and possibly suggest sequences that they might not otherwise have investigated.

Jensen is joined on the paper by first author Connor Coley, a graduate student in chemical engineering; William Green, the Hoyt C. Hottel Professor of Chemical Engineering, who, with Jensen, co-advises Coley; Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science; and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science.

Helping miniature smart drones get off the ground

In recent years, engineers have worked to shrink drone technology, building flying prototypes that are the size of a bumblebee and loaded with even tinier sensors and cameras. Thus far, they have managed to miniaturize almost every part of a drone, except for the brains of the entire operation — the computer chip.

Standard computer chips for quadcopters and other similarly sized drones process an enormous amount of streaming data from cameras and sensors, and interpret that data on the fly to autonomously direct a drone’s pitch, speed, and trajectory. To do so, these computers use between 10 and 30 watts of power, supplied by batteries that would weigh down a much smaller, bee-sized drone.

Now, engineers at MIT have taken a first step in designing a computer chip that uses a fraction of the power of larger drone computers and is tailored for a drone as small as a bottlecap. They will present a new methodology and design, which they call “Navion,” at the Robotics: Science and Systems conference, held this week at MIT.

The team, led by Sertac Karaman, the Class of 1948 Career Development Associate Professor of Aeronautics and Astronautics at MIT, and Vivienne Sze, an associate professor in MIT’s Department of Electrical Engineering and Computer Science, developed a low-power algorithm, in tandem with pared-down hardware, to create a specialized computer chip.

The key contribution of their work is a new approach for designing the chip hardware and the algorithms that run on the chip. “Traditionally, an algorithm is designed, and you throw it over to a hardware person to figure out how to map the algorithm to hardware,” Sze says. “But we found by designing the hardware and algorithms together, we can achieve more substantial power savings.”

“We are finding that this new approach to programming robots, which involves thinking about hardware and algorithms jointly, is key to scaling them down,” Karaman says.

The new chip processes streaming images at 20 frames per second and automatically carries out commands to adjust a drone’s orientation in space. The streamlined chip performs all these computations while using just below 2 watts of power — making it an order of magnitude more efficient than current drone-embedded chips.
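The power figures imply a concrete per-frame energy budget. The arithmetic below uses only the numbers quoted in the article:

```python
# Energy per processed frame, derived from the quoted figures.
fps = 20
navion_power_w = 2.0          # "just below 2 watts"
standard_power_w = (10, 30)   # quoted range for current drone-embedded chips

print(navion_power_w / fps)                 # 0.1 joule per frame
print([p / fps for p in standard_power_w])  # 0.5 to 1.5 joules per frame
```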

Karaman says the team’s design is the first step toward engineering “the smallest intelligent drone that can fly on its own.” He ultimately envisions disaster-response and search-and-rescue missions in which insect-sized drones flit in and out of tight spaces to examine a collapsed structure or look for trapped individuals. Karaman also foresees novel uses in consumer electronics.