Speaking at Mobile Days in Ankara, Turkey

This Sunday (April 6) I will be giving a talk at the Mobile Days conference in Ankara, Turkey. The conference is held at Bilkent University and covers Windows Mobile, iOS and Android.

I presented at another conference in Ankara last year and overall it was a very good experience. Friendly people, well organised and interesting talks.
So when the organisers of Mobile Days contacted me and asked me to give a talk, I simply couldn’t say no.

There are quite a few interesting talks at this conference about games, Google Glass, Project Tango and optimisation. Take a look at the full schedule. My talk will be about mobile Virtual Reality. Come say hi if you happen to be there!

Bilkent Mobile Days

Links For My AnDevCon San Francisco Tutorial and Presentation

Thanks to everyone who attended my half-day Augmented Reality tutorial and technical class about Rajawali.
I had a great time at AnDevCon San Francisco and I met a lot of interesting people. Most importantly, I learned many new things about Android!

Below are the links that are relevant to the presentations. Most of you asked about ways to learn Blender so I included links to video tutorial sites and books.

You can also find links to the Rajawali Vuforia project. This contains the example project that’ll get you up and running quickly. The Android 3D model is included as well.

Please let me know if you have questions or need anything else!

Rajawali Links

Blender Tutorials

Blender Books

Other Blender Links

Augmented Reality Links

Other Links

  • Durovis Dive – the world’s first hands-free VR headset for smartphones

WebGL GPS Running Data Visualizer

I’ve been playing with the Nike+ API, the Nike+ Running Android app and three.js. The original idea was to create a game based on GPS coordinates. Unfortunately I ran out of time because there is so much work to do on the Rajawali framework.

This visualization gets its data from the Nike+ API; the data itself was recorded using the Nike+ Running app. The GPS coordinates are converted into Cartesian coordinates and the road mesh is built from these points. The floor effect is created using a vertex and a fragment shader.
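
The demo itself is written in JavaScript with three.js, but the coordinate conversion is simple enough to sketch. Below is a rough C++ illustration of the kind of equirectangular approximation used to turn latitude/longitude into local Cartesian coordinates; the reference point and the choice of axes are my own assumptions, not the demo’s exact code.

#include <cmath>

struct Vec3 { double x, y, z; };

// Convert a GPS sample to local Cartesian coordinates (in metres) relative to
// a reference point, using a simple equirectangular approximation. This is
// fine for a single run's worth of data; it is not meant for large distances.
Vec3 gpsToCartesian(double lat, double lon, double elevation,
                    double refLat, double refLon)
{
	const double earthRadius = 6371000.0; // rough mean radius in metres
	const double degToRad = 3.14159265358979323846 / 180.0;

	Vec3 v;
	v.x = (lon - refLon) * degToRad * earthRadius * std::cos(refLat * degToRad);
	v.z = (lat - refLat) * degToRad * earthRadius;
	v.y = elevation; // use the recorded elevation directly as the height
	return v;
}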

Have a look!

WebGL GPS Running Data Visualizer using three.js

Augmented Reality + 3D Made Easy: Rajawali + Vuforia Integration

One of the things I received a lot of questions about was how to integrate Rajawali with Qualcomm’s Vuforia Augmented Reality SDK. Because I was busy developing core functionality for Rajawali, I never found the time to look into it.

I finally did find the time and this resulted in a new project on Github called RajawaliVuforia.

This integrates Rajawali with Vuforia and should help you get up and running quickly. It is still a work in progress but already supports both frame markers and image targets.

Here’s a demo that shows the example project output:

I’m very much impressed by Vuforia’s performance. In the video you can see how responsive it is when I shake the marker and then drop it.

Vuforia offers more functionality than is currently exposed by the integration template, and I hope to add more to it later on. Let me know if you have any suggestions!

Rajawali + Vuforia

Speaking at Google Developer Days in Ankara, Turkey

This Friday I’ll be giving a talk about Rajawali at the Google Developer Days conference in Ankara, Turkey. I’ll talk about the process of creating 3D content for Android using Rajawali.

It won’t be about Rajawali exclusively though. The presentation will also benefit people who are unfamiliar with 3D. I will be going through the basics and I will also discuss:

  • common techniques used in 3D
  • workflow
  • optimisation
  • animation

My presentation will be on Friday at 11:45am in Hall A. Hope to see you there!

http://www.androiddeveloperdays.com/
http://www.androiddeveloperdays.com/schedule/

Android Developer Days

Ogre3D + Intel Perceptual Computing SDK / Creative Gesture Cam

2013 will be the year of the 3D sensor. I just finished working on a Kinect project for one of our clients and it has been one of the most fun (and demanding) projects I’ve worked on in a while. I learned a lot of new things: the Microsoft Kinect SDK (I used OpenKinect and OpenNI in the past), C#, WPF and XNA. Apart from the technology, I also learned a load of new math.

I recently posted about the fantastic Leap Motion. At CES 2013 PrimeSense (the company behind the Kinect hardware) announced Capri, “The World’s Smallest 3D Sensing Device”.

Another interesting product comes from Intel and Creative. The latter is manufacturing a 3D sensor called “Creative Interactive Gesture Camera” and the former provides an SDK called “Perceptual Computing SDK”.

For the project I was working on I actually considered using the Creative camera & Intel SDK. Unfortunately the one thing I needed from the SDK (Face Pose) hasn’t been implemented yet. It’s a beta SDK after all.

Creative Gesture Camera

Despite that, the SDK is very promising. It has features like speech recognition, close-range tracking, gesture recognition, hand pose detection, finger tracking, facial analysis and a lot more. They also provide wrappers for C#, Unity and Processing.

However, my 3D engine of choice is Ogre3D. There isn’t much documentation to be found about the Perceptual Computing SDK yet, so I’m posting my basic Perceptual Computing SDK – Ogre3D integration code here. It should save you some time if you plan on using the two together.

I’ve used the Ogre App Wizard to generate my Visual Studio project and base class (BaseApplication.cpp & BaseApplication.h).

Here’s a brief explanation of the code. You can find all the source files on BitBucket: https://bitbucket.org/MasDennis/ogre3d-intel-perceptual-computing-sdk/src.

The main Ogre class (Perceptual.cpp) sets up the scene and creates two Rectangle2D instances with textures that will be updated on every frame. It also creates an instance of the class (CreativeSensor) that will connect to the camera:


void PerceptualDemo::createScene(void)
{
	//
	// -- Set up the camera
	//
	mCamera->setPosition(0, 0, 0);
	mCamera->setNearClipDistance(0.01);

	//
	// -- Create color rectangle and texture
	//
	mColorTexture = TextureManager::getSingleton().createManual("ColorTex", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, TEX_TYPE_2D, 1280, 720, 0, PF_R8G8B8, TU_DYNAMIC);

	MaterialPtr material = MaterialManager::getSingleton().create("ColorMat", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
	material->getTechnique(0)->getPass(0)->createTextureUnitState("ColorTex");

	Rectangle2D* rect = new Rectangle2D(true);
	rect->setCorners(-1.0, 1.0, 1.0, -1.0);
	rect->setMaterial("ColorMat");
	rect->setRenderQueueGroup(RENDER_QUEUE_BACKGROUND);

	AxisAlignedBox aabInf;
	aabInf.setInfinite();
	rect->setBoundingBox(aabInf);

	SceneNode* node = mSceneMgr->getRootSceneNode()->createChildSceneNode("ColorRect");
	node->attachObject(rect);

	//
	// -- Create depth rectangle and texture
	//
	mDepthTexture = TextureManager::getSingleton().createManual("DepthTex", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, TEX_TYPE_2D, 320, 240, 0, PF_R8G8B8, TU_DYNAMIC);

	material = MaterialManager::getSingleton().create("DepthMat", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
	material->getTechnique(0)->getPass(0)->createTextureUnitState("DepthTex");

	rect = new Rectangle2D(true);
	rect->setCorners(0, 0, 1.0, -1.0);
	rect->setMaterial("DepthMat");
	rect->setRenderQueueGroup(RENDER_QUEUE_BACKGROUND);
	rect->setBoundingBox(aabInf);

	node = mSceneMgr->getRootSceneNode()->createChildSceneNode("DepthRect");
	node->attachObject(rect);
	node->setVisible(true);

	//
	// -- Create a new sensor instance
	//
	mSensor = new CreativeSensor(this);
	mSensor->connect();
}

On each frame the sensor is updated:


bool PerceptualDemo::frameStarted(const FrameEvent& evt)
{
	// -- stop rendering when the sensor fails to deliver a new frame
	if(!mSensor->updateFrame()) return false;
	return true;
}

Two methods defined in an interface called IImageTarget are implemented in PerceptualDemo.cpp. They convert the raw data from the sensor into Ogre textures.


void PerceptualDemo::setColorBitmapData(char* data)
{
	HardwarePixelBufferSharedPtr pixelBuffer = mColorTexture->getBuffer();
	pixelBuffer->lock(HardwareBuffer::HBL_DISCARD);
	const PixelBox& pixelBox = pixelBuffer->getCurrentLock();
	uint8* pDest = static_cast<uint8*>(pixelBox.data);

	for(size_t i=0; i<720; i++)
	{
		for(size_t j=0; j<1280; j++)
		{
			*pDest++ = *data++;
			*pDest++ = *data++;
			*pDest++ = *data++;
			*pDest++ = 255;
		}
	}

	pixelBuffer->unlock();
}

void PerceptualDemo::setDepthBitmapData(short* data)
{
	HardwarePixelBufferSharedPtr pixelBuffer = mDepthTexture->getBuffer();
	pixelBuffer->lock(HardwareBuffer::HBL_DISCARD);
	const PixelBox& pixelBox = pixelBuffer->getCurrentLock();
	uint8* pDest = static_cast<uint8*>(pixelBox.data);

	for(size_t i=0; i<240; i++)
	{
		for(size_t j=0; j<320; j++)
		{
			short depthVal = *data++ / 16;
			*pDest++ = depthVal;
			*pDest++ = depthVal;
			*pDest++ = depthVal;
			*pDest++ = 255;
		}
	}

	pixelBuffer->unlock();
}
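
The IImageTarget interface itself isn’t shown above. Judging from the two methods PerceptualDemo implements, it looks roughly like this (a sketch of the shape, not necessarily the exact header from the repository):

// IImageTarget.h -- rough sketch of the callback interface the sensor class
// uses to hand image data back to the renderer; see the BitBucket repository
// for the actual file
class IImageTarget
{
public:
	virtual ~IImageTarget() {}
	// -- receives a 1280x720 RGB24 color frame
	virtual void setColorBitmapData(char* data) = 0;
	// -- receives a 320x240 16-bit depth frame
	virtual void setDepthBitmapData(short* data) = 0;
};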

The connection with the camera is set up in the CreativeSensor class. This class also sets up an SDK session:


bool CreativeSensor::connect()
{
	//
	// -- Set up an SDK session
	//
	if(PXCSession_Create(&mSession) < PXC_STATUS_NO_ERROR)
	{
		OutputDebugString("Failed to create a session");
		return false;
	}

	//
	// -- Configure the video streams
	//
	PXCCapture::VideoStream::DataDesc request;
	memset(&request, 0, sizeof(request));
	request.streams[0].format = PXCImage::COLOR_FORMAT_RGB24;
	request.streams[0].sizeMin.width = request.streams[0].sizeMax.width = 1280;
	request.streams[0].sizeMin.height = request.streams[0].sizeMax.height = 720;
	request.streams[1].format = PXCImage::COLOR_FORMAT_DEPTH;

	//
	// -- Create the streams
	//
	mCapture = new UtilCapture(mSession);
	mCapture->LocateStreams(&request);

	//
	// -- Get the profiles to verify that we got the desired streams
	//
	PXCCapture::VideoStream::ProfileInfo colorProfile;
	mCapture->QueryVideoStream(0)->QueryProfile(&colorProfile);
	PXCCapture::VideoStream::ProfileInfo depthProfile;
	mCapture->QueryVideoStream(1)->QueryProfile(&depthProfile);

	//
	// -- Output to console
	//
	char line[64];
	sprintf(line, "Depth %d x %d\n", depthProfile.imageInfo.width, depthProfile.imageInfo.height);
	OutputDebugString(line);
	sprintf(line, "Color %d x %d\n", colorProfile.imageInfo.width, colorProfile.imageInfo.height);
	OutputDebugString(line);

	return true;
}
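
For completeness, the class declaration behind this is small. The sketch below is inferred from the code in this post rather than copied from the repository, so treat the exact constructor and member layout as assumptions:

// CreativeSensor.h -- sketch of the sensor wrapper's declaration, inferred
// from the implementation shown in this post
class CreativeSensor
{
public:
	CreativeSensor(IImageTarget* target)
		: mPerceptualDemo(target), mSession(NULL), mCapture(NULL) {}

	bool connect();
	bool updateFrame();

private:
	IImageTarget*	mPerceptualDemo;	// -- receives the converted frames
	PXCSession*		mSession;			// -- Perceptual Computing SDK session
	UtilCapture*	mCapture;			// -- utility class that manages the streams
};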

Once this is set up we can add the code that will execute on every frame. The color and depth images are retrieved and then handed back to the PerceptualDemo class through the IImageTarget methods.


bool CreativeSensor::updateFrame()
{
	PXCSmartArray<PXCImage> images;
	PXCSmartSPArray syncPoints(1);

	pxcStatus status = mCapture->ReadStreamAsync(images, &syncPoints[0]);
	if(status < PXC_STATUS_NO_ERROR) return false;

	status = syncPoints.SynchronizeEx();
	if(syncPoints[0]->Synchronize(0) < PXC_STATUS_NO_ERROR) return false;

	//
	// -- get the color image
	//
	PXCImage *colorImage = mCapture->QueryImage(images, PXCImage::IMAGE_TYPE_COLOR);
	PXCImage::ImageData colorImageData;
	if(colorImage->AcquireAccess(PXCImage::ACCESS_READ, &colorImageData) < PXC_STATUS_NO_ERROR)
		return false;
	mPerceptualDemo->setColorBitmapData((char*)colorImageData.planes[0]);
	colorImage->ReleaseAccess(&colorImageData);

	//
	// -- get the depth image
	//
	PXCImage *depthImage = mCapture->QueryImage(images, PXCImage::IMAGE_TYPE_DEPTH);
	PXCImage::ImageData depthImageData;
	if(depthImage->AcquireAccess(PXCImage::ACCESS_READ, &depthImageData) < PXC_STATUS_NO_ERROR)
		return false;
	mPerceptualDemo->setDepthBitmapData((short*)depthImageData.planes[0]);
	depthImage->ReleaseAccess(&depthImageData);

	return true;
}

... and that's all there is to it.

Here's what it looks like:

Ogre3D and Intel Perceptual Computing SDK Integration

The complete source code can be found on BitBucket: https://bitbucket.org/MasDennis/ogre3d-intel-perceptual-computing-sdk/src

Have fun!

Leap Motion Demo

We managed to get our hands on a Leap Motion Developer Kit. If you have never heard about Leap Motion then I recommend you watch the video on their homepage.

This is how they describe it themselves:

The Leap is a small iPod sized USB peripheral that creates a 3D interaction space of 8 cubic feet to precisely interact with and control software on your laptop or desktop computer. It’s like being able to reach into the computer and pull out information as easily as reaching into a cookie jar.

The Leap senses your individual hand and finger movements independently, as well as items like a pen. In fact, it’s 200x more sensitive than existing touch-free products and technologies. It’s the difference between sensing an arm swiping through the air and being able to create a precise digital signature with a fingertip or pen.

These are some big promises and I was very happy to find that they live up to them. I am amazed by how accurate it is.

To familiarise myself with the Leap Motion and to check out its accuracy I created a demo using Ogre3D, PhysX and the Leap C++ API.
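
I’m not posting the demo’s source here, but to give an idea of how simple the Leap C++ API is to use, here is a minimal listener that just reads palm positions on every frame. In the demo those coordinates drive PhysX-controlled objects in the Ogre3D scene; the class and variable names below are mine, not the demo’s.

#include <iostream>
#include "Leap.h"

// -- minimal Leap Motion listener: prints the palm position of every tracked
// -- hand on each frame
class HandListener : public Leap::Listener
{
public:
	virtual void onFrame(const Leap::Controller& controller)
	{
		const Leap::Frame frame = controller.frame();
		const Leap::HandList hands = frame.hands();

		for(int i = 0; i < hands.count(); i++)
		{
			Leap::Vector palm = hands[i].palmPosition(); // millimetres
			std::cout << "Hand " << i << ": "
				<< palm.x << ", " << palm.y << ", " << palm.z << std::endl;
		}
	}
};

int main()
{
	Leap::Controller controller;
	HandListener listener;
	controller.addListener(listener);

	std::cin.get(); // -- keep listening until enter is pressed
	controller.removeListener(listener);
	return 0;
}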

You can also watch it here: http://productification.tumblr.com/post/36884902432/weve-now-got-the-first-release-of-the-leap

Some screen grabs:

Unused, Unfinished WebGL Prototypes

It’s been quite a while since I posted anything WebGL related. That doesn’t mean I haven’t touched it during this time. I have used it a lot to do quick 3D prototyping. Three.js is the engine of choice because of its rich feature set, consistent API, stability and general awesomeness.

I decided to upload these prototypes despite the fact that I’ve never finished them. They’re quite random and unpolished. I guess uploading them means they won’t disappear into the graveyard of digital content.

Anyway, here we go.

Goal Creator

WebGL Goal Creator
Click three times on the pitch and then in the goal. No realistic physics simulation whatsoever. Just a very lame curve.

Shirt Particles

WebGL Shirt Particles
Particles that build up the fabric of the shirt. Their final positions are the vertices of the shirt mesh.

Shirt Strands

WebGL Shirt Strands
Same thing as the previous demo but with strands instead of particles. The strands follow the path of the mesh vertices.

Pitch with Animated Player Positions

WebGL Football Pitch
This example uses player position data to move the objects across the pitch.

Track Created with GPS Data

WebGL GPS Run Track
I recorded GPS coordinates during one of my runs through London. The GPS coordinates are converted into Cartesian coordinates to create the track.

Kinect

Using OpenCV 2 with Kinect SDK 1.5

OpenCV always makes a great combination with the Kinect SDK. I’m currently working on a project where I need to combine the two. Optical Flow and Face Recognition are just two out of many powerful features that complement the Kinect SDK.
I thought this bit of code might be useful for anybody who wants to use these two frameworks together.
I’m just posting the whole class (C++) here (with links to the SDKs). Please feel free to ask any questions in the comments.

#include <iostream>
#include <cmath>

#include <opencv2\imgproc\imgproc.hpp>
#include <opencv2\core\core.hpp>

// -- Windows.h needs to be included before NuiApi.h for the NUI types
#include <Windows.h>
#include <NuiApi.h>

using namespace std;
using namespace cv;

class DepthSensor
{
private:
	static const int WIDTH = 320;
	static const int HEIGHT = 240;
	static const int BYTES_PER_PIXEL = 4;
	// -- maximum depth (in mm) used to scale the grey scale depth image;
	// -- this constant was missing from the post, 4000 is an assumption
	static const int DEPTH_THRESH = 4000;

	INuiSensor*		mNuiSensor;
	HANDLE			mNextDepthFrameEvent;
	HANDLE			mNextColorFrameEvent;
	HANDLE			mDepthStreamHandle;
	HANDLE			mColorStreamHandle;
	// -- this is the grey scale depth image
	Mat				mCVImageDepth;
	// -- this is the color image
	Mat				mCVImageColor;
	// -- this is the color image mapped to the depth image
	Mat				mCVImageColorDepth;
	// -- color pixel to depth pixel mapping
	RGBQUAD			mRGB[WIDTH * HEIGHT];

public:
	bool init() {
		int sensorCount = 0;
		
		HRESULT hr = NuiGetSensorCount(&sensorCount);
		if(FAILED(hr)) return false;

		hr = NuiCreateSensorByIndex(0, &mNuiSensor);
		if(FAILED(hr)) return false;

		hr = mNuiSensor->NuiStatus();
		if(hr != S_OK) {
			mNuiSensor->Release();
			return false;
		}

		if(mNuiSensor == NULL) return false;
		
		hr = mNuiSensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH | NUI_INITIALIZE_FLAG_USES_COLOR);
		if(!SUCCEEDED(hr)) return false;

		mNextDepthFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
		mNextColorFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

		hr = mNuiSensor->NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH, NUI_IMAGE_RESOLUTION_640x480, 0, 2, mNextDepthFrameEvent, &mDepthStreamHandle);
		hr = mNuiSensor->NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480, 0, 2, mNextColorFrameEvent, &mColorStreamHandle);

		mNuiSensor->NuiImageStreamSetImageFrameFlags(mDepthStreamHandle, NUI_IMAGE_STREAM_FLAG_ENABLE_NEAR_MODE);

		if(mNuiSensor == NULL || FAILED(hr))
			return false;

		mCVImageDepth = Mat(Size(WIDTH, HEIGHT), CV_8UC1);
		// -- the color Mat needs to be allocated too; processDepth() writes to it
		mCVImageColor = Mat(Size(WIDTH, HEIGHT), CV_8UC3);
		mCVImageColorDepth = Mat(Size(WIDTH, HEIGHT), CV_8UC3);

		return true;
	}

	DepthSensor() : mNuiSensor(NULL)
	{
	}

	Mat& getDepthMat()
	{
		return mCVImageDepth;
	}

	Mat& getColorDepthMat()
	{
		return mCVImageColorDepth;
	}

	Mat& getColorMat()
	{
		return mCVImageColor;
	}

	void processDepth() {
		HRESULT hr;
		NUI_IMAGE_FRAME depthFrame, colorFrame;

		hr = mNuiSensor->NuiImageStreamGetNextFrame(mDepthStreamHandle, 0, &depthFrame);
		if(FAILED(hr)) return;
		hr = mNuiSensor->NuiImageStreamGetNextFrame(mColorStreamHandle, 20, &colorFrame);
		if(FAILED(hr)) return;

		INuiFrameTexture* depthTex = depthFrame.pFrameTexture;
		INuiFrameTexture* colorTex = colorFrame.pFrameTexture;
		NUI_LOCKED_RECT lockedRectDepth;
		NUI_LOCKED_RECT lockedRectColor;

		depthTex->LockRect(0, &lockedRectDepth, NULL, 0);
		colorTex->LockRect(0, &lockedRectColor, NULL, 0);

		if(lockedRectDepth.Pitch != 0 && lockedRectColor.Pitch != 0) {
			const USHORT *depthBufferRun = (const USHORT*)lockedRectDepth.pBits;
			const USHORT *depthBufferEnd = (const USHORT*)depthBufferRun + (WIDTH * HEIGHT);
			const BYTE *colorBufferRun = (const BYTE*)lockedRectColor.pBits;
			const BYTE *colorBufferEnd = (const BYTE*)colorBufferRun + (WIDTH * HEIGHT * 4);

			memcpy(mRGB, colorBufferRun, WIDTH * HEIGHT * sizeof(RGBQUAD));

			int count = 0;
			int x, y;

			while(depthBufferRun < depthBufferEnd) 
			{
				USHORT depth = NuiDepthPixelToDepth(*depthBufferRun);
				
				BYTE intensity = 256 - static_cast<BYTE>(((float)depth / DEPTH_THRESH) * 256.);

				x = count % WIDTH;
				y = floor((float)count / (float)WIDTH);
				
				mCVImageDepth.at<uchar>(y, x) = intensity;

				LONG colorInDepthX;
				LONG colorInDepthY;

				mNuiSensor->NuiImageGetColorPixelCoordinatesFromDepthPixel(NUI_IMAGE_RESOLUTION_640x480, NULL, x/2, y/2, *depthBufferRun, &colorInDepthX, &colorInDepthY);

				if(colorInDepthX >=0 && colorInDepthX < WIDTH && colorInDepthY >=0 && colorInDepthY < HEIGHT)
				{
					RGBQUAD &color = mRGB[colorInDepthX + colorInDepthY * WIDTH];
					mCVImageColorDepth.at<Vec3b>(y, x) = Vec3b(color.rgbBlue, color.rgbGreen, color.rgbRed);
				} else {
					mCVImageColorDepth.at<Vec3b>(y, x) = Vec3b(0, 0, 0);
				}

				RGBQUAD &color = mRGB[count];
				mCVImageColor.at<Vec3b>(y, x) = Vec3b(color.rgbBlue, color.rgbGreen, color.rgbRed);

				count++;
				depthBufferRun++;
			}
		}

		depthTex->UnlockRect(0);
		colorTex->UnlockRect(0);
		mNuiSensor->NuiImageStreamReleaseFrame(mDepthStreamHandle, &depthFrame);
		mNuiSensor->NuiImageStreamReleaseFrame(mColorStreamHandle, &colorFrame);
	}

	~DepthSensor()
	{
		if(mNuiSensor)
			mNuiSensor->NuiShutdown();
	}
};
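
The class above doesn’t depend on any particular rendering loop. A minimal usage sketch (assuming the class lives in a header called DepthSensor.h, which is my naming, and that OpenCV’s highgui module is linked) could look like this:

#include <opencv2\highgui\highgui.hpp>

#include "DepthSensor.h" // -- hypothetical header containing the class above

int main()
{
	DepthSensor sensor;
	if(!sensor.init()) return -1;

	while(true)
	{
		// -- grab the next depth & color frames and update the OpenCV matrices
		sensor.processDepth();

		// -- show the grey scale depth image and the color image mapped to it
		cv::imshow("Depth", sensor.getDepthMat());
		cv::imshow("Color mapped to depth", sensor.getColorDepthMat());

		if(cv::waitKey(30) == 27) break; // -- ESC to quit
	}

	return 0;
}
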
Unity Kinect SDK

Microsoft Kinect SDK Wrapper For Unity Crash Bug Fix

There’s a great free Kinect SDK wrapper available for Unity. It’s open source, but there are still a few problems getting it to run with the 1.0 SDK (as opposed to the beta).

The first problem is that it is pointing to the wrong dll file. When you get this exception:

DllNotFoundException: C:\Program Files (x86)\Microsoft Research KinectSDK\MSRKINECTNUI.DLL

You should open the file KinectInterop.cs and change all dll paths to:

C:\Windows\System32\Kinect10.dll

This will fix all compiler errors and it should run without problems.

However, it will only run once. When you run it the second time Unity will freeze and you will have to kill the process. Not very convenient.

This is caused by a bug in the Microsoft SDK. According to this page the problem is:

If C++ code is executing NuiInitialize/NuiShutdown multiple times through
the application's lifetime, SetDeviceStatusCallback should be called once,
before invoking those calls.

So apparently a single call to SetDeviceStatusCallback() should fix the problem. To be able to call this method we need to add some code to the KinectInterop.cs file. First of all we need to add an empty struct:

public struct NuiStatusProc
{
}

Then we need to link the native method. In the NativeMethods class add:


[DllImportAttribute(@"C:\Windows\System32\Kinect10.dll", EntryPoint = "NuiSetDeviceStatusCallback")]
public static extern void NuiSetDeviceStatusCallback(NuiStatusProc callback);

Now open the file KinectSensor.cs and add this line to the void Awake() method (just before the line “catch (Exception e)”):


NativeMethods.NuiSetDeviceStatusCallback(new NuiStatusProc());

Now everything should run fine. If it doesn’t, let me know!