Ogre 3D and the Intel Perceptual Computing SDK / Creative Gesture Camera

Posted on Jan 30, 2013 in 3D

2013 will be the year of the 3D sensor. I just finished working on a Kinect project for one of our clients and it has been one of the most fun (and demanding) projects I've worked on in a while. I learned a lot of new things: the Microsoft Kinect SDK (I used OpenKinect and OpenNI in the past), C#, WPF and XNA. Apart from the technology I also learned a load of new math.

I recently posted about the fantastic Leap Motion. At CES 2013 PrimeSense (the company behind the Kinect hardware) announced Capri, “The World’s Smallest 3D Sensing Device”.

Another interesting product comes from Intel and Creative. The latter is manufacturing a 3D sensor called “Creative Interactive Gesture Camera” and the former provides an SDK called “Perceptual Computing SDK“.

For the project I was working on I actually considered using the Creative camera & Intel SDK. Unfortunately the one thing I needed from the SDK (Face Pose) hasn’t been implemented yet. It’s a beta SDK after all.

Creative Gesture Camera

Despite that, the SDK is very promising. It has features like speech recognition, close-range tracking, gesture recognition, hand pose detection, finger tracking, facial analysis and a lot more. They also provide wrappers for C#, Unity and Processing.

However, my 3D engine of choice is Ogre 3D. There isn’t much documentation to be found about the Perceptual Computing SDK yet, so I’m posting my basic Perceptual Computing SDK – Ogre3D integration code here. It will save you some time if you plan on using the two together.

I’ve used the Ogre App Wizard to generate my Visual Studio project and base class (BaseApplication.cpp & BaseApplication.h).

Here’s a brief explanation of the code. You can find all the source files on BitBucket: https://bitbucket.org/MasDennis/ogre3d-intel-perceptual-computing-sdk/src.

The main Ogre class (Perceptual.cpp) sets up the scene and creates two Rectangle2D instances with textures that will be updated on every frame. It also creates an instance of the class (CreativeSensor) that will connect to the camera:


void PerceptualDemo::createScene(void)
{
	//
	// -- Set up the camera
	//
	mCamera->setPosition(0, 0, 0);
	mCamera->setNearClipDistance(0.01);

	//
	// -- Create color rectangle and texture
	//
	mColorTexture = TextureManager::getSingleton().createManual("ColorTex", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, TEX_TYPE_2D, 1280, 720, 0, PF_R8G8B8, TU_DYNAMIC);

	MaterialPtr material = MaterialManager::getSingleton().create("ColorMat", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
	material->getTechnique(0)->getPass(0)->createTextureUnitState("ColorTex");

	Rectangle2D* rect = new Rectangle2D(true);
	rect->setCorners(-1.0, 1.0, 1.0, -1.0);
	rect->setMaterial("ColorMat");
	rect->setRenderQueueGroup(RENDER_QUEUE_BACKGROUND);

	AxisAlignedBox aabInf;
	aabInf.setInfinite();
	rect->setBoundingBox(aabInf);

	SceneNode* node = mSceneMgr->getRootSceneNode()->createChildSceneNode("ColorRect");
	node->attachObject(rect);

	//
	// -- Create depth rectangle and texture
	//
	mDepthTexture = TextureManager::getSingleton().createManual("DepthTex", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, TEX_TYPE_2D, 320, 240, 0, PF_R8G8B8, TU_DYNAMIC);

	material = MaterialManager::getSingleton().create("DepthMat", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
	material->getTechnique(0)->getPass(0)->createTextureUnitState("DepthTex");

	rect = new Rectangle2D(true);
	rect->setCorners(0, 0, 1.0, -1.0);
	rect->setMaterial("DepthMat");
	rect->setRenderQueueGroup(RENDER_QUEUE_BACKGROUND);
	rect->setBoundingBox(aabInf);

	node = mSceneMgr->getRootSceneNode()->createChildSceneNode("DepthRect");
	node->attachObject(rect);
	node->setVisible(true);

	//
	// -- Create a new sensor instance
	//
	mSensor = new CreativeSensor(this);
	mSensor->connect();
}

On each frame the sensor is updated:


bool PerceptualDemo::frameStarted(const FrameEvent& evt)
{
	if(!mSensor->updateFrame()) return false;
	return true;
}

Two methods, defined in an interface called IImageTarget, are implemented in PerceptualDemo.cpp. They convert the raw data from the sensor into Ogre textures.
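The interface itself doesn't appear in the snippets here, so for reference, a minimal sketch of what it might look like; the method signatures are taken from the implementations below, everything else is assumed:

```cpp
// Hypothetical sketch of the IImageTarget interface: two pure virtual
// callbacks that receive the raw color and depth buffers from the sensor.
class IImageTarget
{
public:
	virtual ~IImageTarget() {}
	virtual void setColorBitmapData(char* data) = 0;
	virtual void setDepthBitmapData(short* data) = 0;
};
```

PerceptualDemo would then derive from this interface, which lets CreativeSensor push frames to it without knowing anything about Ogre.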


void PerceptualDemo::setColorBitmapData(char* data)
{
	HardwarePixelBufferSharedPtr pixelBuffer = mColorTexture->getBuffer();
	pixelBuffer->lock(HardwareBuffer::HBL_DISCARD);
	const PixelBox& pixelBox = pixelBuffer->getCurrentLock();
	uint8* pDest = static_cast&lt;uint8*&gt;(pixelBox.data);

	for(size_t i=0; i<720; i++)
	{
		for(size_t j=0; j<1280; j++)
		{
			*pDest++ = *data++;
			*pDest++ = *data++;
			*pDest++ = *data++;
			*pDest++ = 255;
		}
	}

	pixelBuffer->unlock();
}
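The color loop above expands the sensor's packed 3-byte pixels into 4-byte texels with a fully opaque alpha channel. Here's that copy in miniature, as a self-contained sketch (the function name and buffer contents are made up for illustration):

```cpp
#include <cstddef>
#include <vector>

// Expand packed 3-byte pixels into 4-byte texels with alpha forced to 255,
// mirroring the copy loop in setColorBitmapData.
std::vector<unsigned char> expandToFourBytes(const unsigned char* src, std::size_t pixelCount)
{
	std::vector<unsigned char> dest;
	dest.reserve(pixelCount * 4);
	for(std::size_t i = 0; i < pixelCount; i++)
	{
		dest.push_back(*src++); // first channel
		dest.push_back(*src++); // second channel
		dest.push_back(*src++); // third channel
		dest.push_back(255);    // alpha
	}
	return dest;
}
```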

void PerceptualDemo::setDepthBitmapData(short* data)
{
	HardwarePixelBufferSharedPtr pixelBuffer = mDepthTexture->getBuffer();
	pixelBuffer->lock(HardwareBuffer::HBL_DISCARD);
	const PixelBox& pixelBox = pixelBuffer->getCurrentLock();
	uint8* pDest = static_cast<uint8*>(pixelBox.data);

	for(size_t i=0; i<240; i++)
	{
		for(size_t j=0; j<320; j++)
		{
			// Scale the 16-bit depth sample down to an 8-bit grayscale value
			short depthVal = *data++ / 16;
			*pDest++ = depthVal;
			*pDest++ = depthVal;
			*pDest++ = depthVal;
			*pDest++ = 255;
		}
	}

	pixelBuffer->unlock();
}
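The depth loop divides each 16-bit sample by 16 before writing it to all three color channels. The camera reports depth in millimeters (as far as I can tell), so this squeezes roughly 0–4080 mm into the 0–255 grayscale range. In isolation, assuming that interpretation:

```cpp
// Map a 16-bit depth sample to an 8-bit grayscale value the same way
// setDepthBitmapData does: integer division by 16.
unsigned char depthToGray(short depth)
{
	return (unsigned char)(depth / 16);
}
```

Note that samples above 4080 (including the SDK's special low-confidence values) will wrap around after the cast; for a quick visualization that's acceptable, but a real application would want to clamp.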

The connection with the camera is set up in the CreativeSensor class. This class also sets up an SDK session:


bool CreativeSensor::connect()
{
	//
	// -- Set up an SDK session
	//
	if(PXCSession_Create(&mSession) < PXC_STATUS_NO_ERROR)
	{
		OutputDebugString("Failed to create a session");
		return false;
	}

	//
	// -- Configure the video streams
	//
	PXCCapture::VideoStream::DataDesc request;
	memset(&request, 0, sizeof(request));
	request.streams[0].format = PXCImage::COLOR_FORMAT_RGB24;
	request.streams[0].sizeMin.width = request.streams[0].sizeMax.width = 1280;
	request.streams[0].sizeMin.height = request.streams[0].sizeMax.height = 720;
	request.streams[1].format = PXCImage::COLOR_FORMAT_DEPTH;

	//
	// -- Create the streams
	//
	mCapture = new UtilCapture(mSession);
	mCapture->LocateStreams(&request);

	//
	// -- Get the profiles to verify that we got the desired streams
	//
	PXCCapture::VideoStream::ProfileInfo colorProfile;
	mCapture->QueryVideoStream(0)->QueryProfile(&colorProfile);
	PXCCapture::VideoStream::ProfileInfo depthProfile;
	mCapture->QueryVideoStream(1)->QueryProfile(&depthProfile);

	//
	// -- Output to console
	//
	char line[64];
	sprintf(line, "Depth %d x %d\n", depthProfile.imageInfo.width, depthProfile.imageInfo.height);
	OutputDebugString(line);
	sprintf(line, "Color %d x %d\n", colorProfile.imageInfo.width, colorProfile.imageInfo.height);
	OutputDebugString(line);

	return true;
}
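One pattern worth calling out: throughout this code a call is treated as failed when its pxcStatus is less than PXC_STATUS_NO_ERROR. If I read the SDK headers right, PXC_STATUS_NO_ERROR is zero and error codes are negative, so a tiny helper makes the checks read more clearly (the typedef and constant below are stand-ins for the SDK's own definitions):

```cpp
// Stand-ins for the SDK's status type and success constant (assumed values).
typedef int pxcStatus;
const pxcStatus PXC_STATUS_NO_ERROR = 0;

// Returns true when a status indicates failure, matching the
// "status < PXC_STATUS_NO_ERROR" checks used throughout connect().
bool pxcFailed(pxcStatus status)
{
	return status < PXC_STATUS_NO_ERROR;
}
```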

Once this is set up we can add the code that will execute on every frame. The color and depth images are retrieved and sent to the CreativeSensor class.


bool CreativeSensor::updateFrame()
{
	PXCSmartArray<PXCImage> images;
	PXCSmartSPArray syncPoints(1);

	pxcStatus status = mCapture->ReadStreamAsync(images, &syncPoints[0]);
	if(status < PXC_STATUS_NO_ERROR) return false;

	status = syncPoints.SynchronizeEx();
	if(status < PXC_STATUS_NO_ERROR) return false;
	if(syncPoints[0]->Synchronize(0) < PXC_STATUS_NO_ERROR) return false;

	//
	// -- get the color image
	//
	PXCImage *colorImage = mCapture->QueryImage(images, PXCImage::IMAGE_TYPE_COLOR);
	PXCImage::ImageData colorImageData;
	if(colorImage->AcquireAccess(PXCImage::ACCESS_READ, &colorImageData) < PXC_STATUS_NO_ERROR)
		return false;
	mPerceptualDemo->setColorBitmapData((char*)colorImageData.planes[0]);
	colorImage->ReleaseAccess(&colorImageData);

	//
	// -- get the depth image
	//
	PXCImage *depthImage = mCapture->QueryImage(images, PXCImage::IMAGE_TYPE_DEPTH);
	PXCImage::ImageData depthImageData;
	if(depthImage->AcquireAccess(PXCImage::ACCESS_READ, &depthImageData) < PXC_STATUS_NO_ERROR)
		return false;
	mPerceptualDemo->setDepthBitmapData((short*)depthImageData.planes[0]);
	depthImage->ReleaseAccess(&depthImageData);

	return true;
}

... and that's all there is to it.

Here's what it looks like:

Ogre3D and Intel Perceptual Computing SDK Integration

The complete source code can be found on BitBucket: https://bitbucket.org/MasDennis/ogre3d-intel-perceptual-computing-sdk/src

Have fun!




