AR for Android

[Figure: Augmented reality]

Via: http://mobile.tutsplus.com/tutorials/android/android_augmented-reality/

The typical AR implementation contains two main parts: the “live” data we’re augmenting and the “meta” data used for the augmentation. For a real-world overlay example, the live data we’re augmenting will usually be a combination of information in the viewfinder of the rear-facing camera, the current location, and the direction the device is facing. This information is then cross-referenced with a list of “meta” data.

The augmentation data source can be anything, but often it’s a preloaded database or a web service that can filter to nearby points of interest.

The rest of the AR implementation consists of using device camera APIs, graphics APIs, and sensor APIs to overlay the augmentation data over the live data and create a pleasant augmented experience.

Key AR Component #1: Camera Data

Displaying the live feed from the Android camera is the reality in augmented reality. The camera data is available through the android.hardware.Camera class and its related APIs.

If your application doesn’t need to analyze frame data, then starting a preview in the normal way, by passing a SurfaceHolder object to the setPreviewDisplay() method, is appropriate. With this method, you’ll be able to display what the camera sees on the screen. However, if your application does need the frame data, it’s available by calling the setPreviewCallback() method with a valid Camera.PreviewCallback object. Both approaches are sketched below.
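Here’s a minimal sketch of both approaches, assuming the CAMERA permission is declared in the manifest; the activity name is illustrative, and the preview callback can be omitted if you don’t need frame data:

    import android.app.Activity;
    import android.hardware.Camera;
    import android.os.Bundle;
    import android.view.SurfaceHolder;
    import android.view.SurfaceView;

    public class CameraPreviewActivity extends Activity
            implements SurfaceHolder.Callback, Camera.PreviewCallback {

        private Camera camera;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            SurfaceView surfaceView = new SurfaceView(this);
            setContentView(surfaceView);
            surfaceView.getHolder().addCallback(this);
        }

        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            camera = Camera.open();
            try {
                // Route the live camera feed straight to the screen.
                camera.setPreviewDisplay(holder);
            } catch (java.io.IOException e) {
                camera.release();
                camera = null;
                return;
            }
            // Only needed if you want to analyze each frame yourself.
            camera.setPreviewCallback(this);
            camera.startPreview();
        }

        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            // data holds the raw frame (NV21 format by default).
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format,
                int width, int height) { }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            if (camera != null) {
                camera.setPreviewCallback(null);
                camera.stopPreview();
                camera.release();
                camera = null;
            }
        }
    }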

Key AR Component #2: Location Data

Just having the camera feed isn’t enough for most augmented reality applications. You’ll also need to determine the location of the device (and therefore its user). To do this, you’ll need access to fine or coarse location information, typically through the LocationManager class in the android.location package. This way, your application can listen for location events and use them to determine where “live” items of interest are located in relation to the device, as in the sketch below.
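A minimal sketch, assuming the ACCESS_FINE_LOCATION permission is declared and granted; the class and method names here are illustrative:

    import android.content.Context;
    import android.location.Location;
    import android.location.LocationListener;
    import android.location.LocationManager;
    import android.os.Bundle;

    public class PoiLocationHelper {

        // Call from an Activity or other Context once location access is available.
        public static void startListening(Context context) {
            LocationManager manager = (LocationManager)
                    context.getSystemService(Context.LOCATION_SERVICE);

            LocationListener listener = new LocationListener() {
                @Override
                public void onLocationChanged(Location location) {
                    // Recompute which points of interest are near the new fix.
                    double lat = location.getLatitude();
                    double lon = location.getLongitude();
                }
                @Override public void onStatusChanged(String provider,
                        int status, Bundle extras) { }
                @Override public void onProviderEnabled(String provider) { }
                @Override public void onProviderDisabled(String provider) { }
            };

            // Request updates at most every 5 seconds or 10 meters of movement.
            manager.requestLocationUpdates(
                    LocationManager.GPS_PROVIDER, 5000, 10, listener);
        }
    }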

If you’re building an augmented reality application that analyzes the camera feed with computer vision (that is, where the computer “sees” things by extracting the information it needs from the input images) to determine where to place augmentation data, you may not need to know the device location at all. Computer vision is, in itself, a deep topic and an active area of research. Most solutions we’ve seen use the OpenCV libraries; more information on OpenCV can be found at the OpenCV wiki.

When location data isn’t used, a “marker” or “tag” is often used instead: an easily recognizable object whose orientation and scale can be determined quickly, so that graphics can be drawn over it. For instance, AndAR uses a simple marker and draws a cube over it as a test of AR abilities.

[Figure: AndAR using a marker to render a cube at different orientations and scales]

Key AR Component #3: Sensor Data

Sensor data is often important to AR implementations. For example, knowing the orientation of the phone is usually very useful when trying to keep data synchronized with the camera feed.

To determine the orientation of an Android device, you’ll need to leverage the SensorManager class in the android.hardware package. Some sensors you’re likely to tap include the following (combining the first two is sketched after the list):

  • Sensor.TYPE_MAGNETIC_FIELD
  • Sensor.TYPE_ACCELEROMETER
  • Sensor.TYPE_ROTATION_VECTOR
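
Here is a minimal sketch that combines the accelerometer and magnetic field sensors to compute the device orientation; the class name is illustrative:

    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class OrientationTracker implements SensorEventListener {

        private final SensorManager sensorManager;
        private final float[] gravity = new float[3];
        private final float[] geomagnetic = new float[3];

        public OrientationTracker(Context context) {
            sensorManager = (SensorManager)
                    context.getSystemService(Context.SENSOR_SERVICE);
        }

        public void start() {
            sensorManager.registerListener(this,
                    sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                    SensorManager.SENSOR_DELAY_UI);
            sensorManager.registerListener(this,
                    sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                    SensorManager.SENSOR_DELAY_UI);
        }

        public void stop() {
            sensorManager.unregisterListener(this);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
                System.arraycopy(event.values, 0, gravity, 0, 3);
            } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
                System.arraycopy(event.values, 0, geomagnetic, 0, 3);
            }
            float[] rotation = new float[9];
            if (SensorManager.getRotationMatrix(rotation, null,
                    gravity, geomagnetic)) {
                float[] orientation = new float[3];
                SensorManager.getOrientation(rotation, orientation);
                // orientation[0] = azimuth, [1] = pitch, [2] = roll (radians);
                // use these to keep the overlay aligned with the camera view.
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }

On devices that provide it, Sensor.TYPE_ROTATION_VECTOR can stand in for the accelerometer/magnetometer pair, via SensorManager.getRotationMatrixFromVector().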

Letting the user move the device around and see the on-screen display change in response is what makes these applications feel immersive. This is critical when the camera feed is showing, but even in applications that explore pre-recorded image data (such as Google Sky Map or Street View), the technique remains useful and intuitive.

Bringing It Together: The Graphics Overlay

Of course, the whole point of augmented reality is to draw something over the camera feed that, well, augments what the user is seeing live. Conceptually, this is as simple as drawing over the camera feed. How you achieve this, though, is up to you.

You could read in each frame of the camera feed, add an overlay to it, and draw the frame on the screen (perhaps as a Bitmap, or as a texture on a 3D surface). For instance, you could leverage the android.hardware.Camera.PreviewCallback interface, which gives your application frame-by-frame image data.

Alternately, you could use a standard SurfaceHolder with the android.hardware.Camera object and simply draw over the top of the Surface as needed; this second approach is sketched below.
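A minimal sketch of this approach, assuming the overlay View is stacked above the camera SurfaceView (for example, in a FrameLayout); the class name and drawn label are placeholders:

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.view.View;

    // A transparent View layered over the camera preview that draws
    // 2D annotations with the android.graphics APIs.
    public class OverlayView extends View {

        private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

        public OverlayView(Context context) {
            super(context);
            paint.setColor(Color.GREEN);
            paint.setTextSize(48f);
        }

        @Override
        protected void onDraw(Canvas canvas) {
            super.onDraw(canvas);
            // In a real app, the label and position would come from the
            // location and sensor calculations described above.
            canvas.drawText("Point of interest", 100f, 100f, paint);
        }
    }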

Finally, what and how you draw depends upon your individual application requirements—there are both 2D and 3D graphics APIs available on Android, most notably the APIs within the android.graphics and android.opengl packages.

Storing and Accessing Augmentation Data

So where does the augmentation data come from? Generally speaking, you’ll get it either from your own database stored locally on the device or from an online database reached through a web or cloud service. If you’ve preloaded augmentation data on the device, you’ll likely want to use a SQLite database for quick and easy lookups; you’ll find the SQLite APIs in the android.database.sqlite package. For web-based data, you’ll connect to a web service using the normal methods: HTTP and (usually) XML parsing. For this, you can use the java.net.URL class with one of the XML parsing classes, such as XmlPullParser, to parse the results, as sketched below.
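A minimal sketch of the web-service path, assuming a hypothetical endpoint that returns points of interest as XML with <name> elements; substitute your own service’s schema, and run this off the main thread:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.ArrayList;
    import java.util.List;

    import org.xmlpull.v1.XmlPullParser;

    import android.util.Xml;

    public class PoiFetcher {

        // Downloads the XML response and collects the text of each <name> element.
        public static List<String> fetchPoiNames(String serviceUrl) throws Exception {
            List<String> names = new ArrayList<String>();
            HttpURLConnection connection =
                    (HttpURLConnection) new URL(serviceUrl).openConnection();
            try {
                InputStream in = connection.getInputStream();
                XmlPullParser parser = Xml.newPullParser();
                parser.setInput(in, null);
                int event = parser.getEventType();
                boolean inName = false;
                while (event != XmlPullParser.END_DOCUMENT) {
                    if (event == XmlPullParser.START_TAG) {
                        inName = "name".equals(parser.getName());
                    } else if (event == XmlPullParser.TEXT && inName) {
                        names.add(parser.getText());
                    } else if (event == XmlPullParser.END_TAG) {
                        inName = false;
                    }
                    event = parser.next();
                }
            } finally {
                connection.disconnect();
            }
            return names;
        }
    }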
