We’re hearing more and more people talking about Augmented Reality (AR).
This technology overlays virtual objects onto our perception of reality in real time. It's basically every gamer's dream.
In AR, there are two big challenges:
- incorporating virtual objects so that they feel realistic.
- minimizing computation time so that everything runs in real time.
Incorporating a virtual object into a static scene is relatively easy, but when the camera moves, the computer has to constantly update the camera parameters, and speed issues come to the forefront.
To solve these problems, we first need to localize the camera based on information from the scene. If the scene contains an object whose dimensions we know, the computation gets easier: for example, if you want to place an ape on top of the Empire State Building, you can use the building itself as a marker, assuming you know all of its parameters. That gives us a reference point on which to place the virtual object.
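To make the "known dimensions" idea concrete, here is a toy sketch using the pinhole camera model: if we know an object's real width and measure its apparent width in pixels, we can estimate how far away the camera is. All numbers (focal length, building width) are made-up illustrative values, not real calibration data.

```python
# Toy pinhole-camera sketch: estimating camera distance from an
# object of known size (all numbers here are illustrative only).

def distance_from_known_size(focal_px, real_width_m, apparent_width_px):
    """Pinhole model: apparent_width_px = focal_px * real_width_m / distance,
    so distance = focal_px * real_width_m / apparent_width_px."""
    return focal_px * real_width_m / apparent_width_px

# Hypothetical example: a 57 m-wide facade spanning 600 px in an image
# taken with an (assumed) 800 px focal length.
d = distance_from_known_size(focal_px=800, real_width_m=57.0, apparent_width_px=600)
print(round(d, 1))  # -> 76.0 (estimated distance in metres)
```

Real AR systems estimate the full six-degree-of-freedom camera pose, not just distance, but the principle is the same: known geometry in the scene constrains the unknowns of the camera.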
If we don't have any information about the 3D structure of the scene, it gets harder. That's why most applications that use AR force the user to first download and print a marker specific to their app. By placing the marker inside the scene, the app can easily recover all the camera information it needs by analyzing the size and orientation of the marker. Most of the time a black square marker is used, because it is easy to locate in a scene.
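The "easy to locate" part can be sketched in a few lines: threshold the frame, then take the bounding box of the dark pixels. A real marker tracker also extracts corners, rectifies the square, and decodes its orientation; this synthetic NumPy example only shows the detection intuition, with an invented 100x100 test frame.

```python
import numpy as np

def find_dark_square(frame, thresh=50):
    """Return (top, left, bottom, right) bounding box of dark pixels."""
    dark = frame < thresh                 # binary mask of "marker" pixels
    rows = np.any(dark, axis=1)           # which rows contain dark pixels
    cols = np.any(dark, axis=0)           # which columns contain dark pixels
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return int(top), int(left), int(bottom), int(right)

# Synthetic 100x100 grayscale frame: white background, black square.
frame = np.full((100, 100), 255, dtype=np.uint8)
frame[30:60, 40:70] = 0  # the "marker"

print(find_dark_square(frame))  # -> (30, 40, 59, 69)
```

The high contrast between the black square and its surroundings is exactly why this kind of marker was the standard choice before markerless tracking matured.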
Once we have all the camera parameters we need, we apply several transformations (translation, rotation, scale) to the virtual objects in order to incorporate them as realistically as possible.
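Those transformations are usually composed into a single 4x4 homogeneous matrix and applied to every vertex of the virtual object. A minimal NumPy sketch, with purely illustrative values:

```python
import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scale(s):
    m = np.eye(4)
    m[0, 0] = m[1, 1] = m[2, 2] = s
    return m

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

# Compose: scale first, then rotate, then translate (read right-to-left).
M = translation(1.0, 2.0, 0.0) @ rotation_z(np.pi / 2) @ scale(2.0)

vertex = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous coordinates
print(np.round(M @ vertex, 6))  # -> [1. 4. 0. 1.]
```

The order matters: scaling after translating would also scale the translation, which is why engines conventionally apply scale, then rotation, then translation.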
But markers make AR very "user unfriendly." Before seeing any result, you must print a marker and hold it in front of the camera to make the virtual object appear on your device's screen. And because of lighting constraints and poor approximations by cameras, the result is not always ideal.
Here are two examples of AR markers using the ActionScript 3.0 Papervision3D library:
The video above is an example of AR boundary.
As you can see in the video, less movement yields better results.
To the joy of gamers, last November Microsoft released a new device for the Xbox 360: the Kinect. It features a depth sensor that combines an infrared light projector (output) with an infrared sensor (input), providing much more information about the 3D structure of a scene than a usual webcam.
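What the depth sensor buys you is that each pixel can be back-projected into a 3D point using the pinhole model, giving a point cloud instead of a flat image. A hedged NumPy sketch, where the intrinsics (fx, fy, cx, cy) are made-up illustrative values, not the Kinect's actual calibration:

```python
import numpy as np

def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Map pixel (u, v) with depth in metres to a 3D camera-space point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# A pixel at the image centre, 2 metres away, sits on the optical axis:
p = backproject(u=320, v=240, depth_m=2.0,
                fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(p)  # -> [0. 0. 2.]
```

With a dense grid of such points, the system can segment the user's body from the background by depth alone, which a regular webcam cannot do.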
Here are two examples of this technology in use:
As you can see in this video, once the Kinect knows the user's body shape, it transforms the user into a superhero who can then trigger superpowers with their movements. Note also that the 3D structure of the scene is taken into account.
This video shows that Kinect technology is useful not only for gamers but also as a utility tool.
Ten days ago, Sony also announced a markerless approach to Augmented Reality, named Smart AR. Get ready for some magic!
This video shows that you don't need a proprietary marker: the technology uses any flat surface or object in the environment as a marker, in real time.
We're getting closer and closer to truly immersive Augmented Reality thanks to all of these new devices. All we're waiting for now is for Sony to let us use their technology!