[Development] Augmented Business Card
Last update: 11-Oct-2015
Some time ago I mentioned a software tool called Vuforia that provides image recognition and a plugin for Unity. I particularly enjoyed developing in Unity because its physics engine handles a lot of the work, and the engine can export executables to many platforms, including mobile ones like Android and iOS.
Almost immediately I made a simple demo app on Android:
It's a very simple app, but it demonstrates how the app recognizes a predefined image and projects a 3D object (the fountain) anywhere and any way I want. Vuforia automatically calculates the angle of the image against the trained data and relays this to Unity's 3D renderer. The result is an augmented reality experience where the user can move the mobile device (or the physical object) around to see different sides of the 3D object, which can be animated or even made to behave according to the viewing angle.
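For those curious about the Unity side, the hookup is roughly the following C# script attached to the image target. This is only a sketch modelled on the Vuforia sample handler; the exact interface and status names depend on the SDK version, and the class name is my own:

using UnityEngine;
using Vuforia;

// Shows the 3D model only while the trained image is being tracked.
public class FountainToggle : MonoBehaviour, ITrackableEventHandler
{
    void Start()
    {
        // Attach to the image target so Vuforia notifies us of tracking changes
        TrackableBehaviour trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        // Enable or disable every renderer under the target, i.e. the fountain
        foreach (Renderer r in GetComponentsInChildren<Renderer>(true))
            r.enabled = found;
    }
}

The pose calculation itself (the angle of the card) is done by Vuforia; Unity simply receives an updated transform for the target every frame, so the model appears glued to the image.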
Meanwhile, I've managed to model a number of objects in 3D (including myself) using depth sensing and some 3D software [Link]. Combining these, I made a demo of my business card (that's the back of it):
This is very similar to the first demo app I made; I just replaced the fountain with my own 3D model. And in case you noticed, it's running on a BlackBerry device, via a side-loaded Android APK. This business card use case is a great example of what the next generation of business cards could look like: information beyond what is printed on the card, additional media content (you could play a song if you are a musician, or do a magic trick if you are a magician), and interactivity.
The Vuforia SDK provides virtual buttons that can be projected like other 3D objects. The basic mechanism is that when someone "touches" a button, the button area is occluded from the camera's view, and this occlusion is recognized as a button press, which in turn can be used as a trigger. Meanwhile, Unity provides access to the device's input, so the app knows when someone touches the screen or performs a gesture on the device. In a similar manner, an on-screen button press can be used as a trigger.
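The virtual button side looks roughly like this (again a sketch along the lines of the Vuforia VirtualButtons sample; CardButtonHandler and the model field are my own names, and the exact handler interface may differ between SDK versions):

using UnityEngine;
using Vuforia;

// Reacts to the virtual buttons defined on the image target.
public class CardButtonHandler : MonoBehaviour, IVirtualButtonEventHandler
{
    public GameObject model; // the projected 3D object to trigger

    void Start()
    {
        // Register for press/release events of every virtual button under this target
        foreach (VirtualButtonBehaviour vb in GetComponentsInChildren<VirtualButtonBehaviour>())
            vb.RegisterEventHandler(this);
    }

    // Called when the printed button area becomes occluded from the camera
    public void OnButtonPressed(VirtualButtonAbstractBehaviour vb)
    {
        Debug.Log("Virtual button pressed: " + vb.VirtualButtonName);
        model.GetComponent<Animation>().Play(); // e.g. start an animation on the model
    }

    // Called when the button area is visible to the camera again
    public void OnButtonReleased(VirtualButtonAbstractBehaviour vb)
    {
        Debug.Log("Virtual button released: " + vb.VirtualButtonName);
    }
}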
I've tried the virtual buttons. While they allow more interaction with the augmented reality (you are interacting with the physical object itself), they are not as responsive as on-screen buttons, since everything is done through computer vision. Touching the screen is more definite and helps keep all the interactions within a close, reachable distance. These differences suggest different use cases.
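By contrast, an on-screen trigger is ordinary Unity input handling and does not involve the camera image at all; something like the following sketch of mine, assuming the projected model has a collider:

using UnityEngine;

// Triggers an action when the user taps the projected model on screen.
public class ScreenTapTrigger : MonoBehaviour
{
    void Update()
    {
        // Screen taps come straight from the device's input, independent of tracking
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
            RaycastHit hit;
            // Did the tap land on this object?
            if (Physics.Raycast(ray, out hit) && hit.transform == transform)
            {
                Debug.Log("Model tapped on screen");
                // ...trigger the same action a virtual button would
            }
        }
    }
}

This responds immediately regardless of lighting or how well the card is tracked, which is exactly the responsiveness difference described above.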
Imagine the app being used in an exhibition, where multiple viewers are looking at the same artifact. If virtual buttons are used, touching or moving the artifact will also affect everyone else's view, and sometimes touching is not permitted anyway. On-screen buttons would therefore make more sense. However, if the app is used in an AR game, it would be desirable for touching or moving a game piece to result in an update on all viewing devices. So despite the current technological hurdles in computer vision, virtual buttons would prove themselves useful in some scenarios.
In a broader sense, the idea of augmenting reality and allowing interactivity through devices and/or physical objects is fascinating. It allows public/private information dissemination and, better yet, mixed inter-object interaction. How cool would it be if moving the object on-screen resulted in movement of the physical object in the real world!