Complementing Google Genie, Project Tango could revolutionise on-site real-time data capture, and Geniebelt sees such innovations as the future.
Project Tango is Google’s latest project focused on mobile devices and interaction with the 3D physical world. Last November, there was a flurry of speculation about another Google project, Genie (see also Vannevar Technology), and its potential impact on the architecture, engineering and construction market. Project Tango could also have a direct bearing on how we use technology in the built environment.
From point clouds to real-time 3D scanning
Alongside the recent explosion of interest in building information modelling, laser scanning of existing buildings and other built assets has also been drawing attention, particularly when the resulting point clouds can be deployed to help create 3D models that can be imported into BIM authoring tools and other applications. For example:
- Nick Blenkarn of Severn Partnership demonstrated this at the May 2013 RICS Building Conference, showing how BIMs of building interiors can be retrospectively linked to databases of the assets in those spaces and used for computer-aided FM (post); his company now has a technology business, SEEABLE, focused on exploiting this approach and delivering it to mobile devices.
- 2013 also saw a successful Kickstarter campaign raise US$200,000 in R&D funding for Spike, a laser-based device that attaches to an iOS or Android smartphone and enables users to rapidly and accurately measure and model an object up to 200m away, with data managed using SketchUp.
Project Tango has clear similarities with the latter. Google has developed a prototype smartphone containing customized hardware and software designed to track the full 3D motion of the Android device, while simultaneously creating a map of the surrounding environment:
These sensors allow the phone to make over a quarter million 3D measurements every second, updating its position and orientation in real-time, combining that data into a single 3D model of the space around you.
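The fusion Google describes — a stream of depth measurements, each taken in the device's own frame, combined with continuously updated position and orientation into one model of the space — can be sketched in a few lines. This is an illustrative toy, not Tango's actual pipeline: the frame structure, function name and poses below are assumptions for the example.

```python
import numpy as np

def accumulate_point_cloud(frames):
    """Fuse per-frame depth points into a single world-frame cloud.

    Each frame is (R, t, points): a 3x3 rotation matrix and a
    3-vector giving the device's estimated pose, plus an (N, 3)
    array of points measured in the device's own coordinate frame.
    (Hypothetical structure, for illustration only.)
    """
    world_points = []
    for R, t, pts in frames:
        # Rotate and translate device-frame points into the shared
        # world frame, then collect them into one cloud.
        world_points.append(pts @ R.T + t)
    return np.vstack(world_points)

# Two toy frames: the device starts at the origin, then moves 1 m
# along x, seeing the same kind of point 2 m ahead each time.
identity = np.eye(3)
frame_a = (identity, np.zeros(3), np.array([[0.0, 0.0, 2.0]]))
frame_b = (identity, np.array([1.0, 0.0, 0.0]), np.array([[0.0, 0.0, 2.0]]))

cloud = accumulate_point_cloud([frame_a, frame_b])
print(cloud.shape)  # (2, 3)
```

The hard part in practice is, of course, estimating each pose (R, t) in real time from the sensors — which is precisely what the Tango hardware and software are built to do.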
The project has involved Google working with universities, research labs and industrial partners in nine countries, incorporating advances in robotics and computer vision into a unique mobile phone, which Google is now making available to developers to create new tools.
As befits any early stage research project, the potential uses of the technology have not been narrowed down, with navigation and gaming prominent opportunities:
What if you could capture the dimensions of your home simply by walking around with your phone before you went furniture shopping? What if directions to a new location didn’t stop at the street address? What if you never again found yourself lost in a new building? What if the visually-impaired could navigate unassisted in unfamiliar indoor places? What if you could search for a product and see where the exact shelf is located in a super-store?
Imagine playing hide-and-seek in your house with your favorite game character, or transforming the hallways into a tree-lined path. Imagine competing against a friend for control over territories in your home with your own miniature army, or hiding secret virtual treasures in physical places around the world?
I can immediately see how the technology might be used for on-site data capture – in site inspections for quality control or health and safety management, for example – with 3D as-built data captured to augment the outputs of the designers and constructors involved, in much the same way as laser-scanning is already used to detail completed structures. Or, taking the super-store search analogy into construction, users could search a BIM for a particular product and the mobile device would guide them to where it was installed (repair and maintenance opportunities abound here).
And it may not stop with the mobile device alone. Paired with wearable technologies such as Google Glass, it could give users a powerful way to view context- and situation-specific information from a building information model, relate that to their physical surroundings, and then perhaps hold real-time conversations with colleagues. Woobius Eye (prototyped in 2010 but never developed into a fully-fledged mobile product) showed signs of how mobile collaboration technology might develop – and with Woobius founder Bob Leung now part of the Geniebelt team, such “see what I mean” capabilities could yet be realised.
Coincidentally, I met up with Bob (chief UX and strategy) and three of his new Geniebelt colleagues, CEO Gari Nickson (left), CTO Nikolaj Berntsen and product marketing head Francisco Fernandez, when they were in London last week, and over a couple of beers we talked again about the core ideas. Like Bob’s simple Woobius SaaS collaboration product (post), Geniebelt is envisaged as simple and intuitive to use without training and consultancy support, but it is also intended to be optimised for mobile devices and for real-time change communication (I saw how, when a task was changed on a tablet app, its status was immediately updated on a smartphone with barely a flicker of latency).
The Copenhagen, Denmark-based start-up already has a name similar to the secretive Google AEC project, is already tinkering with Google Glass, and, assuming it successfully completes its next funding round, it could also be riding the Project Tango zeitgeist.
(But will you know when you’ve been tango’ed?)