
Get miniature smart drones off the ground

In recent years, engineers have worked to shrink drone technology, building flying prototypes that are the size of a bumblebee and loaded with even tinier sensors and cameras. Thus far, they have managed to miniaturize almost every part of a drone, except for the brains of the entire operation — the computer chip.

Standard computer chips for quadcopters and other similarly sized drones process an enormous amount of streaming data from cameras and sensors, and interpret that data on the fly to autonomously direct a drone's pitch, speed, and trajectory. To do so, these computers use between 10 and 30 watts of power, supplied by batteries that would weigh down a much smaller, bee-sized drone.

Now, engineers at MIT have taken a first step in designing a computer chip that uses a fraction of the power of larger drone computers and is tailored for a drone as small as a bottlecap. They will present a new methodology and design, which they call “Navion,” at the Robotics: Science and Systems conference, held this week at MIT.

The team, led by Sertac Karaman, the Class of 1948 Career Development Associate Professor of Aeronautics and Astronautics at MIT, and Vivienne Sze, an associate professor in MIT’s Department of Electrical Engineering and Computer Science, developed a low-power algorithm, in tandem with pared-down hardware, to create a specialized computer chip.

The key contribution of their work is a new approach for designing the chip hardware and the algorithms that run on the chip. “Traditionally, an algorithm is designed, and you throw it over to a hardware person to figure out how to map the algorithm to hardware,” Sze says. “But we found by designing the hardware and algorithms together, we can achieve more substantial power savings.”

“We are finding that this new approach to programming robots, which involves thinking about hardware and algorithms jointly, is key to scaling them down,” Karaman says.

The new chip processes streaming images at 20 frames per second and automatically carries out commands to adjust a drone’s orientation in space. The streamlined chip performs all these computations while using just below 2 watts of power — making it an order of magnitude more efficient than current drone-embedded chips.

Karaman says the team's design is the first step toward engineering "the smallest intelligent drone that can fly on its own." He ultimately envisions disaster-response and search-and-rescue missions in which insect-sized drones flit in and out of tight spaces to examine a collapsed structure or look for trapped individuals. Karaman also foresees novel uses in consumer electronics.

Turning 3-D movies into a more TV-friendly format

While 3-D movies continue to be popular in theaters, they haven’t made the leap to our homes just yet — and the reason rests largely on the ridge of your nose.

Ever wonder why we wear those pesky 3-D glasses? Theaters generally create a simulated sense of depth either with special polarized light or by rapidly alternating between a pair of offset images. To actually get the 3-D effect, though, you have to wear glasses, which have proven too inconvenient to create much of a market for 3-D TVs.

But researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aim to change that with “Home3D,” a new system that allows users to watch 3-D movies at home without having to wear special glasses.

Home3D converts traditional 3-D movies from stereo into a format that’s compatible with so-called “automultiscopic displays.” According to postdoc Petr Kellnhofer, these displays are rapidly improving in resolution and show great potential for home theater systems.

“Automultiscopic displays aren’t as popular as they could be because they can’t actually play the stereo formats that traditional 3-D movies use in theaters,” says Kellnhofer, who was the lead author on a paper about Home3D that he will present at this month’s SIGGRAPH computer graphics conference in Los Angeles. “By converting existing 3-D movies to this format, our system helps open the door to bringing 3-D TVs into people’s homes.”

Home3D can run in real-time on a graphics-processing unit (GPU), meaning it could run on a system such as an Xbox or a PlayStation. The team says that in the future Home3D could take the form of a chip that could be put into TVs or media players such as Google’s Chromecast.

The team’s algorithms for Home3D also let users customize the viewing experience, dialing up or down the desired level of 3-D for any given movie. In a user study involving clips from movies including “The Avengers” and “Big Buck Bunny,” participants rated Home3D videos as higher quality 60 percent of the time, compared to 3-D videos converted with other approaches.

Kellnhofer wrote the paper with MIT professors Fredo Durand, William Freeman, and Wojciech Matusik, as well as postdoc Pitchaya Sitthi-Amorn, former CSAIL postdoc Piotr Didyk, and former master's student Szu-Po Wang '14, MEng '16. Didyk is now at Saarland University and the Max Planck Institute in Germany.

How it works

Home3D converts 3-D movies from "stereoscopic" to "multiview" video, which means that, rather than showing just a pair of images, the screen displays three or more images that simulate what the scene looks like from different locations. As a result, each eye perceives roughly what it would see if the viewer were actually standing at that spot in the scene, which allows the brain to naturally compute the depth in the image.
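To make the idea concrete, here is a minimal sketch of one common way to synthesize extra views from a stereo pair: forward-warping pixels by a scaled, per-pixel disparity. It illustrates the general multiview principle under simplifying assumptions (a known disparity map, no hole filling); it is not the specific algorithm behind Home3D, and the function and variable names are hypothetical.

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Warp the left image toward a virtual camera position.

    left      : H x W x 3 float array, the left-eye image
    disparity : H x W float array, horizontal offset (in pixels)
                between the left and right views at each pixel
    alpha     : 0.0 keeps the left view, 1.0 approximates the right view,
                values in between give intermediate viewpoints
    """
    h, w, _ = left.shape
    out = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            # Forward-warp each pixel by a scaled disparity; round to the
            # nearest target column and skip pixels that fall off-screen.
            xt = int(round(x - alpha * disparity[y, x]))
            if 0 <= xt < w:
                out[y, xt] = left[y, x]
    return out

# Example: build a fan of 8 views spanning the original stereo baseline
# from a tiny random "image" and a constant 4-pixel disparity map.
left = np.random.rand(32, 48, 3)
disparity = np.full((32, 48), 4.0)
views = [synthesize_view(left, disparity, a) for a in np.linspace(0.0, 1.0, 8)]
```

A real converter also has to fill the holes left behind where foreground objects uncover background (disocclusions) and decide which pixel wins when several land on the same target location.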

Existing techniques for converting 3-D movies have major limitations. So-called "phase-based rendering" is fast, high-resolution, and largely accurate, but it doesn't perform well when the left-eye and right-eye images are too different from each other. Meanwhile, "depth image-based rendering" is much better at managing those differences, but it has to run at a low resolution that can sometimes lose small details. (One assumption it makes is that each pixel has only one depth value, which means that it can't reproduce effects such as transparency and motion blur.)
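For intuition about why phase-based rendering is fast but limited to small left/right differences, here is a simplified, hypothetical sketch in one dimension: a sub-pixel translation applied purely as a phase offset in the Fourier domain. Phase-based view synthesis applies the same principle to local image structure, which works well for small shifts and breaks down for large ones; this is only an illustration of that principle, not the method evaluated in the paper.

```python
import numpy as np

def phase_shift(signal, delta):
    """Translate a 1-D signal by `delta` samples using a Fourier phase ramp.

    Small translations become per-frequency phase offsets, which is the core
    trick behind phase-based view synthesis; large shifts wrap around and
    alias, mirroring the technique's trouble with big left/right differences.
    """
    n = signal.size
    freqs = np.fft.fftfreq(n)                      # signed frequency, cycles per sample
    spectrum = np.fft.fft(signal)
    shifted = spectrum * np.exp(-2j * np.pi * freqs * delta)
    return np.fft.ifft(shifted).real

# Example: shift a sine wave by half a sample without explicit resampling.
x = np.sin(np.linspace(0, 4 * np.pi, 256))
x_shifted = phase_shift(x, 0.5)
```

Because the translation lives entirely in the phase, no pixels need to be explicitly resampled, which is part of why the phase-based approach stays fast and preserves resolution.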