Using OpenCV 2 with Kinect SDK 1.5

OpenCV always makes a great combination with the Kinect SDK. I’m currently working on a project where I need to combine the two. Optical flow and face recognition are just two of the many powerful features that complement the Kinect SDK.
I thought this bit of code might be useful for anybody who wants to use these two frameworks together.
I’m just posting the whole class (C++) here (with links to the SDKs). Please feel free to ask any questions in the comments.

#include <iostream>

#include <Windows.h>

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/core.hpp>

#include <NuiApi.h>

using namespace std;
using namespace cv;

class DepthSensor {
	static const int WIDTH = 320;
	static const int HEIGHT = 240;
	static const int COLOR_WIDTH = 640;
	static const int COLOR_HEIGHT = 480;
	static const int BYTES_PER_PIXEL = 4;
	// -- maximum depth (in mm) mapped onto the grey scale range
	static const int DEPTH_THRESH = 4000;

	INuiSensor*		mNuiSensor;
	HANDLE			mNextDepthFrameEvent;
	HANDLE			mNextColorFrameEvent;
	HANDLE			mDepthStreamHandle;
	HANDLE			mColorStreamHandle;
	// -- this is the grey scale depth image
	Mat				mCVImageDepth;
	// -- this is the color image
	Mat				mCVImageColor;
	// -- this is the color image mapped to the depth image
	Mat				mCVImageColorDepth;
	// -- local copy of the latest color frame, used for the
	// -- color pixel to depth pixel mapping
	RGBQUAD			mRGB[COLOR_WIDTH * COLOR_HEIGHT];

public:

	bool init() {
		int sensorCount = 0;
		HRESULT hr = NuiGetSensorCount(&sensorCount);
		if(FAILED(hr) || sensorCount == 0) return false;

		hr = NuiCreateSensorByIndex(0, &mNuiSensor);
		if(FAILED(hr) || mNuiSensor == NULL) return false;

		hr = mNuiSensor->NuiStatus();
		if(hr != S_OK) return false;

		hr = mNuiSensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH | NUI_INITIALIZE_FLAG_USES_COLOR);
		if(FAILED(hr)) return false;

		mNextDepthFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
		mNextColorFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

		// -- the depth stream is opened at 320x240 to match WIDTH and HEIGHT,
		// -- the color stream at 640x480
		hr = mNuiSensor->NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH, NUI_IMAGE_RESOLUTION_320x240, 0, 2, mNextDepthFrameEvent, &mDepthStreamHandle);
		if(FAILED(hr)) return false;
		hr = mNuiSensor->NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480, 0, 2, mNextColorFrameEvent, &mColorStreamHandle);
		if(FAILED(hr)) return false;

		mNuiSensor->NuiImageStreamSetImageFrameFlags(mDepthStreamHandle, NUI_IMAGE_STREAM_FLAG_ENABLE_NEAR_MODE);

		mCVImageDepth = Mat(Size(WIDTH, HEIGHT), CV_8UC1);
		mCVImageColor = Mat(Size(WIDTH, HEIGHT), CV_8UC3);
		mCVImageColorDepth = Mat(Size(WIDTH, HEIGHT), CV_8UC3);

		return true;
	}

	Mat& getDepthMat() { return mCVImageDepth; }

	Mat& getColorDepthMat() { return mCVImageColorDepth; }

	Mat& getColorMat() { return mCVImageColor; }

	void processDepth() {
		NUI_IMAGE_FRAME depthFrame, colorFrame;

		HRESULT hr = mNuiSensor->NuiImageStreamGetNextFrame(mDepthStreamHandle, 0, &depthFrame);
		if(FAILED(hr)) return;
		hr = mNuiSensor->NuiImageStreamGetNextFrame(mColorStreamHandle, 20, &colorFrame);
		if(FAILED(hr)) {
			mNuiSensor->NuiImageStreamReleaseFrame(mDepthStreamHandle, &depthFrame);
			return;
		}

		INuiFrameTexture* depthTex = depthFrame.pFrameTexture;
		INuiFrameTexture* colorTex = colorFrame.pFrameTexture;
		NUI_LOCKED_RECT lockedRectDepth;
		NUI_LOCKED_RECT lockedRectColor;

		depthTex->LockRect(0, &lockedRectDepth, NULL, 0);
		colorTex->LockRect(0, &lockedRectColor, NULL, 0);

		if(lockedRectDepth.Pitch != 0 && lockedRectColor.Pitch != 0) {
			const USHORT *depthBufferRun = (const USHORT*)lockedRectDepth.pBits;
			const USHORT *depthBufferEnd = depthBufferRun + (WIDTH * HEIGHT);

			// -- keep a local copy of the 640x480 color frame
			memcpy(mRGB, lockedRectColor.pBits, COLOR_WIDTH * COLOR_HEIGHT * sizeof(RGBQUAD));

			int count = 0;
			int x, y;

			while(depthBufferRun < depthBufferEnd) {
				USHORT depth = NuiDepthPixelToDepth(*depthBufferRun);
				// -- scale the depth value to the 0..255 grey scale range;
				// -- near pixels come out bright, far pixels dark
				int scaled = (int)(((float)depth / DEPTH_THRESH) * 255.f);
				if(scaled > 255) scaled = 255;
				BYTE intensity = (BYTE)(255 - scaled);

				// -- unfold the running counter into depth image coordinates
				x = count % WIDTH;
				y = count / WIDTH;
				mCVImageDepth.at<uchar>(y, x) = intensity;

				LONG colorInDepthX;
				LONG colorInDepthY;

				// -- map this depth pixel to a coordinate in the 640x480 color image
				mNuiSensor->NuiImageGetColorPixelCoordinatesFromDepthPixel(NUI_IMAGE_RESOLUTION_640x480, NULL, x, y, *depthBufferRun, &colorInDepthX, &colorInDepthY);

				if(colorInDepthX >= 0 && colorInDepthX < COLOR_WIDTH && colorInDepthY >= 0 && colorInDepthY < COLOR_HEIGHT) {
					RGBQUAD &color = mRGB[colorInDepthX + colorInDepthY * COLOR_WIDTH];
					mCVImageColorDepth.at<Vec3b>(y, x) = Vec3b(color.rgbBlue, color.rgbGreen, color.rgbRed);
				} else {
					mCVImageColorDepth.at<Vec3b>(y, x) = Vec3b(0, 0, 0);
				}

				// -- downsample the 640x480 color frame into the 320x240 color image
				RGBQUAD &color = mRGB[(y * 2) * COLOR_WIDTH + (x * 2)];
				mCVImageColor.at<Vec3b>(y, x) = Vec3b(color.rgbBlue, color.rgbGreen, color.rgbRed);

				count++;
				depthBufferRun++;
			}
		}

		depthTex->UnlockRect(0);
		colorTex->UnlockRect(0);

		mNuiSensor->NuiImageStreamReleaseFrame(mDepthStreamHandle, &depthFrame);
		mNuiSensor->NuiImageStreamReleaseFrame(mColorStreamHandle, &colorFrame);
	}
};
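The depth-to-grey-scale conversion inside the processing loop can be tried in isolation, without a Kinect attached. This is a minimal sketch of the same idea; the `depthToIntensity` name, and the 4000 mm threshold, are illustrative assumptions, not part of the Kinect SDK:

```cpp
#include <cassert>
#include <cstdint>

// Assumed maximum depth (in mm) mapped onto the grey scale range.
static const int DEPTH_THRESH = 4000;

// Convert a raw depth reading in millimeters to a grey scale intensity:
// near pixels come out bright, far pixels dark, and anything at or
// beyond the threshold is clamped to black.
uint8_t depthToIntensity(int depthMillimeters) {
	if (depthMillimeters < 0) depthMillimeters = 0;
	if (depthMillimeters > DEPTH_THRESH) depthMillimeters = DEPTH_THRESH;
	int scaled = (int)(((float)depthMillimeters / DEPTH_THRESH) * 255.0f);
	return (uint8_t)(255 - scaled);
}
```

Clamping before scaling matters: raw readings past the threshold would otherwise overflow the single byte used for the grey scale pixel.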


7 thoughts on “Using OpenCV 2 with Kinect SDK 1.5”

  1. rchristo says:

    Wow! Thank you so much! While I won’t be using this verbatim, it’s great to see some OpenCV and Kinect SDK playing well together. I frankly have been upset to find most people are using the Kinect with C# doing a lot of the work for them. Thank you Dennis!

  2. Niket says:

    I’m using the Microsoft Kinect device for my implementation. There are samples in the Kinect SDK for face tracking, but they use both the color stream (from the RGB camera) and the depth stream (from the IR camera).
    I want to use only the RGB camera for this task, so I’m looking for OpenCV C++ code that uses the Kinect APIs, which I can use with my Kinect device in Visual Studio 2010 Express.

  3. Van Chinh says:

    Thank you so much. Please explain this code for me:
    x = count % WIDTH;
    y = floor((float)count / (float)WIDTH);
    Also, the DepthWithColor-D3D example from the Microsoft SDK has code like this: // retrieve the depth to color mapping for the current depth pixel
    LONG colorInDepthX = m_colorCoordinates[depthIndex * 2];
    LONG colorInDepthY = m_colorCoordinates[depthIndex * 2 + 1];
    I don’t understand colorInDepthY. Can you explain it for me?
    Best regards!
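The two lines asked about here turn a running pixel counter into (x, y) image coordinates: the remainder after dividing by the image width is the column, and the number of complete rows already consumed is the row. A standalone sketch of that arithmetic (the function names are illustrative; WIDTH matches the 320-pixel depth image, and integer division replaces the floor call):

```cpp
// Width of the depth image; depth pixels arrive in a flat buffer,
// so a running counter has to be unfolded into 2D coordinates.
static const int WIDTH = 320;

// Column index: position of this pixel within its row.
int indexToX(int count) { return count % WIDTH; }

// Row index: number of complete rows before this pixel
// (integer division already does the flooring).
int indexToY(int count) { return count / WIDTH; }
```

Pixel 0 is the top-left corner (0, 0); pixel 320 wraps to the start of the second row (0, 1).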
