“What if you could capture the dimensions of your home simply by walking around with your phone before you went furniture shopping?”
“What if the visually-impaired could navigate unassisted in unfamiliar indoor places?”
These are the questions Google is asking, and with its latest research project the company believes both scenarios are now within reach.
Project Tango is a prototype smartphone that maps its user’s surroundings while simultaneously building navigable three-dimensional virtual models of those spaces, which could be used to give directions indoors.
Noting that humans live in a three-dimensional world, Google says the project aims to give mobile devices a “human-scale understanding of space and motion”.
The current prototype is a five-inch phone containing customised hardware and software, with sensors that allow it to make more than a quarter of a million 3D measurements every second. Those sensors let the phone update its position and orientation in real time, fusing the data into a single 3D model of the space around the user.
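Fusing those measurements into one model amounts to transforming each frame’s depth points from the device’s frame of reference into a shared world frame using the tracked pose. The sketch below illustrates that idea in plain Java; the class and method names are illustrative assumptions, not the Project Tango API.

```java
// Illustrative sketch (not the Tango API): accumulating per-frame depth
// points into a single world-frame point cloud using the device's pose.
import java.util.ArrayList;
import java.util.List;

public class PointCloudFusion {
    /** A 3D point. */
    static final class Vec3 {
        final double x, y, z;
        Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    /** Device pose: a rotation (3x3 row-major matrix) plus a translation. */
    static final class Pose {
        final double[] r; // 9 elements, row-major rotation matrix
        final Vec3 t;     // device position in the world frame

        Pose(double[] r, Vec3 t) { this.r = r; this.t = t; }

        /** Map a point from the device frame into the world frame. */
        Vec3 toWorld(Vec3 p) {
            return new Vec3(
                r[0] * p.x + r[1] * p.y + r[2] * p.z + t.x,
                r[3] * p.x + r[4] * p.y + r[5] * p.z + t.y,
                r[6] * p.x + r[7] * p.y + r[8] * p.z + t.z);
        }
    }

    private final List<Vec3> world = new ArrayList<>();

    /** Fold one frame of depth measurements into the shared world model. */
    void addFrame(Pose pose, List<Vec3> depthPoints) {
        for (Vec3 p : depthPoints) {
            world.add(pose.toWorld(p));
        }
    }

    List<Vec3> model() { return world; }

    public static void main(String[] args) {
        PointCloudFusion fusion = new PointCloudFusion();
        double[] identity = {1, 0, 0, 0, 1, 0, 0, 0, 1};
        // The device has moved 1 m along x; a point 2 m ahead of it
        // (along the device z axis) lands at (1, 0, 2) in the world frame.
        Pose pose = new Pose(identity, new Vec3(1, 0, 0));
        fusion.addFrame(pose, List.of(new Vec3(0, 0, 2)));
        Vec3 p = fusion.model().get(0);
        System.out.println(p.x + " " + p.y + " " + p.z); // 1.0 0.0 2.0
    }
}
```

A real pipeline would also estimate the pose itself from the cameras and inertial sensors, but the frame-to-world transform is the step that turns raw depth readings into one coherent model.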
It runs on Android and includes development APIs that expose position, orientation and depth data to standard Android applications written in Java or C/C++, as well as to the Unity game engine.
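An API of this kind would most naturally deliver pose data through a callback that fires on each sensor update. The following is a minimal sketch of that pattern; the interface and class names are hypothetical, invented here for illustration rather than taken from the actual Tango SDK.

```java
// Hypothetical sketch of a listener-style pose API; names are illustrative
// assumptions, not the real Project Tango SDK.
public class PoseListenerDemo {
    /** One pose sample: device position and orientation (quaternion). */
    static final class PoseData {
        final double[] translation;  // x, y, z in metres
        final double[] orientation;  // quaternion x, y, z, w
        PoseData(double[] t, double[] o) { translation = t; orientation = o; }
    }

    /** Callback invoked whenever the device publishes a new pose estimate. */
    interface OnPoseUpdateListener {
        void onPoseAvailable(PoseData pose);
    }

    /** Stand-in pose source; on a real device the sensors would feed this. */
    static final class PoseSource {
        private OnPoseUpdateListener listener;

        void connect(OnPoseUpdateListener l) { listener = l; }

        void publish(PoseData pose) {
            if (listener != null) listener.onPoseAvailable(pose);
        }
    }

    public static void main(String[] args) {
        PoseSource source = new PoseSource();
        source.connect(pose -> System.out.println(
            "device at x=" + pose.translation[0]));
        // Simulate one sensor update: the device has moved 0.5 m along x.
        source.publish(new PoseData(new double[] {0.5, 0.0, 0.0},
                                    new double[] {0, 0, 0, 1}));
    }
}
```

The appeal of the callback shape is that an ordinary Android app can react to each new pose estimate as it arrives, without polling the sensors itself.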
The phone uses a 4-megapixel camera, integrated depth sensing and a motion-tracking camera. It also includes Myriad 1, a low-power vision-processing chip from Movidius.
The Myriad 1 can run the complex algorithms that computer vision requires without draining the smartphone’s battery, something current smartphone chips cannot do.
The prototypes are still in active development, and Google will provide them to outside developers to encourage new applications, particularly ones that take the current technology for sensing 3D motion and geometry and push it towards richer user experiences.
Over the past year Google has worked with universities, research labs and industrial partners in nine countries, building on existing work in robotics and computer vision. Collaborators include researchers from the University of Minnesota, George Washington University, German tech firm Bosch and the Open Source Robotics Foundation.
Developers can sign up for a prototype dev kit on the Project Tango website. Google expects to distribute all 200 available units by 14 March 2014.