Igor's Website - Articles - Augmented reality in C#


Augmented reality using C#

Purpose

Here I will demonstrate how to use Graphics, fast bitmap access and a camera to create an augmented reality effect using C# in WPF.

Requirements

Using the code provided requires the XNA Framework to be installed (though it is not crucial, since I only use Vector2). WPF is not a strict requirement either: the same algorithm can be implemented in a plain Windows Forms application. The code is also parallelized, so you will need .NET Framework 4.0 to run it (or simply replace each Parallel.For with an ordinary for loop).

Introduction

The closest thing to holding a fireball or a lightning ball in your hand is probably seeing yourself with one in an augmented reality world. So, here I will show you how to create a simple magic effect that follows a light source. The effect should look like this:

Screenshot

Retrieving the image for processing

We will use a third-party library for obtaining frames from the camera. Using a DispatcherTimer, we will take the last frame obtained by the Capture object from the library.
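As a sketch, the wiring might look like this; the Capture class and its Frame property are placeholders for whatever the camera library actually exposes, so treat those names as assumptions:

```csharp
// Hypothetical wiring: "Capture" and "Frame" stand in for the camera
// library's real API; "image" is the WPF Image control on the window.
using System;
using System.Windows.Threading;

public partial class MainWindow
{
    private Capture capture;        // third-party camera wrapper (assumed name)
    private DispatcherTimer timer;

    private void StartCamera()
    {
        capture = new Capture();
        timer = new DispatcherTimer
        {
            Interval = TimeSpan.FromMilliseconds(33)    // roughly 30 fps
        };
        timer.Tick += (s, e) =>
        {
            // Grab the latest frame, process it and display the result.
            var frame = capture.Frame;                  // System.Drawing.Bitmap
            if (frame != null)
                image.Source = ConvertImage(Analyzer.Analyze(frame));
        };
        timer.Start();
    }
}
```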

Since we are using WPF, we need to convert the System.Drawing.Bitmap image to the BitmapImage used in WPF. This can be easily accomplished by saving the Bitmap to a memory stream and then reading it back as a BitmapImage.

// Converts a System.Drawing.Bitmap to a WPF BitmapImage by
// round-tripping the image through a memory stream.
private BitmapImage ConvertImage(img.Bitmap inputImage)
{
    using (MemoryStream ms = new MemoryStream())
    {
        inputImage.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
        System.Windows.Media.Imaging.BitmapImage bImg =
                new System.Windows.Media.Imaging.BitmapImage();
        bImg.BeginInit();
        // Copy the buffer so the BitmapImage does not depend on ms staying open.
        bImg.StreamSource = new MemoryStream(ms.ToArray());
        bImg.EndInit();
        return bImg;
    }
}
        

The retrieved image is then sent to the Analyzer class where it will be processed.

Finding the light source

The method Analyze from the Analyzer class determines the pixel with the highest color value. Since the SetPixel() and GetPixel() methods are quite slow, we will lock the bitmap and work directly with its pixel buffer. Since we are using pointers, the program has to be compiled with unsafe code allowed (the /unsafe compiler switch).

public static Bitmap Analyze(Bitmap bmp)
{
    BitmapData data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
             ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    int stride = data.Stride;
    int x = 0, y = 0;
    object sync = new object();
    unsafe
    {
        byte* ptr = (byte*)data.Scan0;
        int R = 0, G = 0, B = 0;
        int h = bmp.Height, w = bmp.Width;
        Parallel.For(0, h, i =>
        {
             for (int j = 0; j < w; ++j)
             {
                 // 24bpp pixel layout: blue, green, red.
                 int b = ptr[(j * 3) + i * stride];
                 int g = ptr[(j * 3) + i * stride + 1];
                 int r = ptr[(j * 3) + i * stride + 2];
                 if (g + r + b > G + R + B)
                 {
                      // The running maximum is shared between the parallel
                      // rows, so guard the update and re-check under the lock.
                      lock (sync)
                          if (g + r + b > G + R + B)
                          {
                               R = r;
                               G = g;
                               B = b;
                               x = i;   // row
                               y = j;   // column
                          }
                 }
            }
        });
    }
    bmp.UnlockBits(data);
    Update(y, x);   // column first: Update takes (x, y) in image coordinates
    using (Graphics gr = Graphics.FromImage(bmp))
        gr.DrawImage(bitmap, 0, 0);   // overlay the effect bitmap on the frame
    return bmp;
}
        

We parallelize the traversal through the matrix to make it quicker.

The Analyze() method could, of course, be modified so that the target pixel is the coordinate of a certain glyph or the location of a hand, but that is not a topic of this article (the AForge.NET library is an excellent library for glyph recognition, and could be used for this purpose).

After determining the target pixel, we update the location of the magic effect and draw it over the original image.

The magic effect

We will use a separate Bitmap for the magic effect.

Let our magic effect consist of a number of light points. We need to define a class for the light point. Each point will have its current location, previous location and velocity. These points could be considered particles. Each point has an Update() method that takes three parameters: x and y coordinates of the target pixel (the pixel the effect is moving towards) and the Graphics object used for drawing on the Bitmap.
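Put together, the light-point class could be skeletoned roughly like this (the field names and the speed value are my assumptions; xna is the alias used for the XNA namespace):

```csharp
// Rough skeleton of the light-point ("particle") class described above.
// The exact field names and the speed constant are assumptions.
public class Point
{
    private xna.Vector2 position;       // current location
    private xna.Vector2 lastPosition;   // location in the previous frame
    private xna.Vector2 velocity;       // current velocity vector
    private const float speed = 2f;     // scalar speed (assumed value)

    public Point(int x, int y)
    {
        position = new xna.Vector2(x, y);
        lastPosition = position;
        velocity = new xna.Vector2(0, 0);
    }

    // Update(int x, int y, Graphics gr) moves the particle towards the
    // target and draws its trail; its implementation is shown in this article.
}
```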

The particles are initialized in the system in the Initialize() method. The Analyzer class contains a list of particles which is filled with particles once the Initialize() method is called.

public static void Initialize(int w, int h)
{
    // Seed a particle every 20 pixels across the frame.
    for (int i = 0; i < w; i += 20)
        for (int j = 0; j < h; j += 20)
            list.Add(new Point(i, j));
    bitmap = new Bitmap(w, h);   // the overlay the effect is drawn on
    GC.Collect();                // reclaim any previously allocated bitmap
}
        

In the Update() method, we determine the distance from the given point to the desired point, and then calculate the velocity vector for that point. The velocity vector is calculated by subtracting the point's current location from the target location, normalizing the resulting vector (to get the unit vector pointing towards the target) and multiplying it by the speed we want to assign to the particle (notice that velocity is a vector and speed is a scalar, that is, the magnitude of the velocity).

public void Update(int x, int y, Graphics gr)
{
    // Vector from the particle to the target pixel.
    xna.Vector2 v = new xna.Vector2(x, y) - position;
    // The color component scales with the distance, capped at 128.
    byte color = v.Length() / 3 > 128 ? (byte)128 : (byte)(v.Length() / 3);
    v.Normalize();
    velocity += v * speed;
    // Clamp the speed to 20 pixels per update.
    if (velocity.Length() > 20)
    {
        velocity.Normalize();
        velocity *= 20;
    }
    lastPosition = position;
    position += velocity;
    if (!float.IsNaN(position.X) && !float.IsNaN(position.Y))
        using (Pen pen = new Pen(new SolidBrush(
            Color.FromArgb(64, 0, 127 + color, 255)), 3))
            gr.DrawLine(pen,
                lastPosition.ToPoint(),
                position.ToPoint()
            );
}
        

In the example, I used the XNA Vector2 because it has overloaded operators; however, the same can easily be achieved by defining a custom Vector2 class and overloading a few operators. What might confuse some is the normalization of the vector: it is done by calculating the length of the vector and dividing each of its components by that value. The conversion from Vector2 to PointF is done using an extension method.
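If you prefer not to depend on XNA, a minimal replacement might look like this. It only sketches the members the particle code actually uses; this is my stand-in, not XNA's implementation:

```csharp
using System;

// A minimal stand-in for XNA's Vector2: just the operators and
// Normalize() that the particle code relies on.
public struct Vector2
{
    public float X, Y;
    public Vector2(float x, float y) { X = x; Y = y; }

    public static Vector2 operator +(Vector2 a, Vector2 b) => new Vector2(a.X + b.X, a.Y + b.Y);
    public static Vector2 operator -(Vector2 a, Vector2 b) => new Vector2(a.X - b.X, a.Y - b.Y);
    public static Vector2 operator *(Vector2 a, float s) => new Vector2(a.X * s, a.Y * s);

    public float Length() => (float)Math.Sqrt(X * X + Y * Y);

    // Normalization: divide each component by the vector's length.
    public void Normalize()
    {
        float len = Length();
        X /= len;
        Y /= len;
    }
}
```

For example, the vector (3, 4) has length 5, and normalizing it yields (0.6, 0.8).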

Since time is discrete here, on each iteration (each call of the Update() method) we make the last position the current one, and advance the current position by the velocity. This gives each particle its discrete movement. After updating the two positions, we draw a line between them (the color of the line depends on the distance from the target, to make the effect more interesting).

The Analyzer class itself has an update method which calls the update method of each individual particle in the system. Since new lines are drawn on every call of the update method, we need to fade the previous lines so that the image does not become cluttered. This can be achieved elegantly by multiplying each pixel's alpha channel by 0.9 on each call.

public static void Update(int x, int y)
{
    using (Graphics gr = Graphics.FromImage(bitmap))
        list.ForEach(element => element.Update(x, y, gr));
    // Fade the previous frame's lines: scale every alpha value by 0.9.
    BitmapData data = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
            ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    int stride = data.Stride;
    unsafe
    {
        byte* ptr = (byte*)data.Scan0;
        int h = bitmap.Height, w = bitmap.Width;
        Parallel.For(0, h, i =>
        {
            for (int j = 0; j < w; ++j)
            {
                // 32bpp pixel layout: blue, green, red, alpha.
                int a = ptr[(j * 4) + i * stride + 3];
                if (a > 0)
                    ptr[(j * 4) + i * stride + 3] = (byte)(a * 0.9);
            }
        });
    }
    bitmap.UnlockBits(data);
}
        

Finally, after the positions are updated and the image is drawn on the frame obtained from the camera, the Bitmap is returned to the WPF form where it is assigned to the Image component.

Afterthoughts

The algorithm for adjusting the color and speed of the points can be modified so that each point has inertia. Another interesting effect could be achieved by keeping a short vector in each point and, on each call of the update method, rotating it slightly and adding it to the velocity: this would create curled lines. Each point could also be drawn using an image (like particles in games) to achieve fire and similar effects.
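The curling idea boils down to a plain 2D rotation applied to a small wobble vector each frame before adding it to the velocity. A sketch (the helper name and the angle step are my own choices):

```csharp
using System;

// Sketch of the "curled lines" idea: each particle would carry a small
// wobble vector that is rotated a little every update and added to the
// velocity. Rotate applies the standard 2D rotation matrix to (x, y).
public static class Curl
{
    public static (float X, float Y) Rotate(float x, float y, double degrees)
    {
        double rad = degrees * Math.PI / 180.0;
        float c = (float)Math.Cos(rad), s = (float)Math.Sin(rad);
        // [ c -s ] [x]
        // [ s  c ] [y]
        return (x * c - y * s, x * s + y * c);
    }
}
```

For example, rotating the vector (1, 0) by 90 degrees gives (0, 1); rotating the wobble by a few degrees per update and adding it to the velocity bends each trail into a curl.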

As I mentioned, another idea is to change what the target is: for example, to use a glyph, or to detect motion and make the points move towards the parts of the image that differ between two consecutive frames.

Source code

The source code for the application is available here: lightFollow.zip

9.2.2013
