Augmented reality is one of those technologies that never really seemed to have its day. While the arrival of the smartphone led to a surge of popularity, walking the streets holding your phone at arm’s length was never going to take off. However, it’s possible that the arrival of the Internet of Things, which is still frantically searching for a workable interface paradigm, will prove to be augmented reality’s killer application. If so, the Fluid Interfaces group at MIT might well be on to something with their latest project, the Reality Editor.
The Reality Editor picks up on real-world visual markers, using their position to overlay digital content into an augmented reality interface. This allows you to use virtual controls to tap into the capabilities of the Internet of Things. If something like this were picked up by manufacturers, it could provide the compatibility layer, and the link between devices, that the Internet of Things has lacked until now.
One of the first augmented reality applications to appear in Apple’s then-new App Store was the Nearest Tube application. It typified location-aware augmented reality, where objects are injected into the real-world view based on their position relative to your own. Marker-based augmented reality, like the Reality Editor, works differently: the real-world view is interpreted in real or near-real time, and objects are placed in the view based on markers or other characteristics of the image.
Location-aware augmented reality applications proliferated on smartphones because they made use of those phones’ then-new GPS capabilities; marker-based augmented reality now offers similar leverage over the new capabilities of the Internet of Things.
As our computing diffuses into our environment, the interfaces to our computing will have to change. Currently there is a huge debate as to which types of interfaces will work. There are those who feel that it will be more of the same, more screens with more buttons, and those who feel, like David Rose, author of Enchanted Objects, that the objects themselves will have to become the interface. The Reality Editor seems to be a workable compromise between the two warring camps.
Reality Editor. Courtesy of Fluid Interfaces/MIT
The Reality Editor allows you to point the camera of your smartphone at an object to expose and edit its capabilities. It allows you to drag a line from one object to another to create a new relationship between these objects — for instance, connecting a single switch to a light, or a group of lights. Effectively it lets you edit your reality and manipulate the way objects control and interact with one another. Interestingly, the same objects could theoretically have different relationships for other people. Everyone’s reality isn’t necessarily the same.
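The drag-a-line behaviour described above amounts to editing a per-user graph of links between objects. The following is a minimal, hypothetical sketch of that idea; the class and names are invented for illustration and are not the actual Open Hybrid or Reality Editor API:

```python
from collections import defaultdict

class RealityGraph:
    """One person's view of how objects connect: source -> targets."""

    def __init__(self):
        # Each source object maps to the set of objects it controls.
        self.links = defaultdict(set)

    def connect(self, source, target):
        """The drag-a-line gesture: route events from source to target."""
        self.links[source].add(target)

    def trigger(self, source, event):
        """Deliver an event (e.g. 'on') to every object linked to source."""
        return {target: event for target in self.links[source]}

# The same physical objects can carry different relationships per person.
alice = RealityGraph()
alice.connect("hall-switch", "hall-light")
alice.connect("hall-switch", "porch-light")

bob = RealityGraph()
bob.connect("hall-switch", "desk-lamp")

alice.trigger("hall-switch", "on")  # reaches both of Alice's lights
bob.trigger("hall-switch", "on")    # reaches only Bob's lamp
```

Keeping each user’s links in a separate graph over shared object identifiers is one simple way to capture the article’s point that everyone’s reality isn’t necessarily the same.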
This approach hides the interfaces and relationships of smart objects from our day-to-day world. This works because these relationships, from switch to light, aren’t needed every time you interact with the object. We don’t need to see all the capabilities and options every time we flip a switch to turn on a light. Not every choice needs to be presented to the user all the time.
The Arduino Yún “Hello World” example. Courtesy of Fluid Interfaces/MIT
However, what’s most interesting about MIT’s new Reality Editor is that we can play with it. You can download the Reality Editor from the App Store and use it alongside MIT’s Open Hybrid framework to build your own “enchanted objects” using the Arduino Yún or the Raspberry Pi.
Beyond this, especially if technologies like Magic Leap materialise as advertised and we can dispense with clunky phone interfaces, you can see this sort of technology driving more mainstream adoption of the Internet of Things, which, until now, has been going down the wrong road when it comes to how it interacts with its users.
Perhaps then, with the Internet of Things, augmented reality has finally found its killer app?