Using Ogre’s OpenGL renderer with the OpenVR API (SteamVR, HTC-Vive SDK)

Note to the reader: If some things in this article are unclear and/or omitted, it’s probably because I’ve already explained them in the previous article about the Oculus Rift SDK here

The OpenVR API, and the whole “SteamVR” software stack, is really interesting to target because it’s compatible with many VR systems from the get-go: you write your code once and it runs on all of them.

In practice, the OpenVR API is a bit simpler to code with than the Oculus SDK. Its naming conventions are from the 90’s (as with all of Valve’s SDKs, but at least they are consistent with themselves!)

Also, the resulting code is less verbose than with the Oculus SDK. It’s almost like Oculus wanted to make their code fit the Windows/DirectX style (setting up structures with a lot of parameters and passing pointers to them to functions) while Valve took a more OpenGL approach (functions that take fixed types and flags telling them what to do). They even named their library OpenVR.

You could say that the OpenVR API is a bit lower level than the Oculus Rift one, and you’d be right. Contrary to the Oculus OVR library, you do not have the following things:

  • No automatic allocation of render textures
  • No C++ mathematical classes for matrices, vectors and quaternions
  • No convenient “orientation” quaternion and “position” vector to express a tracked pose. You’ll get a plain old float array representing a transformation matrix.

Fortunately, we have Ogre on hand. Ogre has all the mathematical objects we need, and since we’re on a PC platform, Ogre implements the underlying mathematical operations in FPU assembly, so it’s probably better than anything I would write myself…

However, we still need to “translate” the data retrieved from the OpenVR API to the formats we want.

SteamVR will return transformation matrices representing a translation and a rotation from the room’s origin point. This reference point is at the center of the player’s space, on the ground. These matrices don’t represent a scale transform, because you cannot “scale” objects from the real world. You can move them with 6 degrees of freedom (3 axis translations and 3 axis rotations), not 9.

Ogre’s Matrix4 class makes it easy to extract the translation and rotation, and you can feed a list of floats to its constructor. We will use this little helper function to deal with that:

inline Ogre::Matrix4 getMatrix4FromSteamVRMatrix34(const vr::HmdMatrix34_t& mat)
{
	return Ogre::Matrix4
	{
		mat.m[0][0], mat.m[0][1], mat.m[0][2], mat.m[0][3],
		mat.m[1][0], mat.m[1][1], mat.m[1][2], mat.m[1][3],
		mat.m[2][0], mat.m[2][1], mat.m[2][2], mat.m[2][3],
		0.0f,		 0.0f,		  0.0f,		   1.0f
	};
}

You can easily retrieve the translation by calling Ogre::Matrix4::getTrans() and the orientation by calling Ogre::Matrix4::extractQuaternion().

I keep the HMD pose as a global Ogre::Matrix4, and I retrieve the position/orientation with these functions:

inline Ogre::Vector3 getTrackedHMDTranslation()
{
	//Extract translation vector from the matrix
	return hmdAbsoluteTransform.getTrans();
}

inline Ogre::Quaternion getTrackedHMDOrientation()
{
	//Orientation as a quaternion (the matrix transform has no scale component)
	return hmdAbsoluteTransform.extractQuaternion();
}

We can also extract the actual device name when we do device enumeration. Valve’s example provides a function for that. We’re going to steal it too:

//This function is from VALVe
std::string GetTrackedDeviceString(vr::IVRSystem *pHmd, vr::TrackedDeviceIndex_t unDevice, vr::TrackedDeviceProperty prop, vr::TrackedPropertyError *peError = NULL)
{
	uint32_t unRequiredBufferLen = pHmd->GetStringTrackedDeviceProperty(unDevice, prop, NULL, 0, peError);
	if (unRequiredBufferLen == 0)
		return "";

	char *pchBuffer = new char[unRequiredBufferLen];
	unRequiredBufferLen = pHmd->GetStringTrackedDeviceProperty(unDevice, prop, pchBuffer, unRequiredBufferLen, peError);
	std::string sResult = pchBuffer;
	delete[] pchBuffer;
	return sResult;
}

Also, I don’t really like the way the MessageBox() function of the Windows API looks, it’s a bit confusing, so I have a small wrapper function just for displaying error dialogs that looks like this:

inline void displayWin32ErrorMessage(LPCWSTR title, LPCWSTR content)
{
	MessageBox(NULL, content, title, MB_ICONERROR);
}

And I’ve set up an enum for “left” and “right”. Sometimes we will need to convert that to another type, so here’s a small function for that too:

enum oovrEyeType{left, right};
inline vr::EVREye getEye(oovrEyeType eye)
{
	if (eye == left) return vr::Eye_Left;
	return vr::Eye_Right;
}

 

Integration outline

We basically need to perform the same operation as with the Oculus SDK.

  • Initialize OpenVR and check that we don’t have any errors (no headset, no runtime installed, things like that)
  • Initialize a couple of render textures at the required render target size
  • Give the textures’ OpenGL IDs to a couple of “texture descriptors” for OpenVR that will tell it how to handle the textures and whether we are using OpenGL or DirectX (so, OpenGL, right? :D)
  • Create a couple of cameras in the scene and place them where the user’s eyes should be (taking care to use the IPD from the library)
  • Apply the correct projection matrices to both cameras, to render with the correct perspective
  • In the render loop, get the tracking of the headset and apply it to our camera system
  • Render and submit both eyes’ images to the VR Compositor

As in the previous article about the Oculus OVR library, I will only give you snippets of code and not a “complete” class, because I don’t know how you’ll want to integrate all of this. You can either adapt all of this to the Ogre application framework, or set things up on your own.

I have an example of this implementation inside the code of my open-source VR game engine, you can browse that code here : OgreOpenVRRender.cpp and here : OgreOpenVRRender.hpp.

OpenVR setup, VR System initialization

The OpenVR SDK can be found on ValveSoftware’s GitHub here: https://github.com/valvesoftware/openvr

You’ll need to add the path to the “headers” directory to your include list, and add the directory containing the library you’ll use to your linker search path (<repo>/lib/win64 in my case).

You’ll need to link against openvr_api.lib on Windows, and your executable has to be able to find openvr_api.dll. You’ll find this file in the bin folder of Valve’s repository.

Every component of OpenVR is part of the “vr” namespace. You can use the directive “using namespace vr;” or, just like I’ll do here, type “vr::” before any OpenVR member.

Here are the includes you’ll need:

//OpenVR (HTC Vive; SteamVR) SDK
#include <openvr.h>
#include <openvr_capi.h>

#ifdef _WIN32
#include <Windows.h>
#include <glew.h>
#endif

//We need to get low level access to Ogre's RenderSystem_GL 
#include <RenderSystems/GL/OgreGLTextureManager.h>
#include <RenderSystems/GL/OgreGLRenderSystem.h>
#include <RenderSystems/GL/OgreGLTexture.h>

#include <Ogre.h>

We will also need a few global objects:

	///main OpenVR object
	vr::IVRSystem* vrSystem;

	///Error handling variable
	vr::HmdError hmdError;

	///window size
	unsigned int windowWidth, windowHeight;

	///EyeCamera render textures
	Ogre::TexturePtr rttTexture[2];
	
	///OpenGL "id" of the render textures
	GLuint rttTextureGLID[2];

	///EyeCameraViewport
	Ogre::Viewport* rttViewports[2];

	///Use hardware gamma correction
	bool gamma;

	///API handler, should be initialized to "OpenGL"
	vr::GraphicsAPIConvention API;

	///OpenVR texture handlers
	vr::Texture_t vrTextures[2];

	///Cameras for rendering each eye’s view
	Ogre::Camera* eyeCameras[2];

	///Monoscopic camera
	Ogre::Camera* monoCam;

	///Viewport located on a window
	Ogre::Viewport* windowViewport;

	///OpenVR device strings
	std::string strDriver, strDisplay;

	///Geometry of an OpenGL texture
	vr::VRTextureBounds_t GLBounds;
	
	///Timing marker
	double then, now;

	///Array of tracked poses
	vr::TrackedDevicePose_t trackedPoses[vr::k_unMaxTrackedDeviceCount];

	///Transform that corresponds to the HMD tracking
	Ogre::Matrix4 hmdAbsoluteTransform;
	
	///Camera Rig that holds the 2 cameras on the same plane 
	Ogre::SceneNode* eyeRig;

	///State of the "should quit" marker. If it goes to true, the game loop should stop 
	bool shouldQuitState;

As you can see, there are a number of booleans, pointers and flag holders that we need to initialize. Before doing anything else, here’s the expected initial state of everything:

	vrSystem = nullptr;
	hmdError = vr::EVRInitError::VRInitError_None;
	windowWidth = 1280;
	windowHeight = 720;
	gamma = false;
	API = vr::API_OpenGL;
	monoCam = nullptr;
	windowViewport = nullptr;
	then = 0;
	now = 0;
	hmdAbsoluteTransform = {};
	eyeRig = 0;
	shouldQuitState = false;

	rttTexture[left].setNull();
	rttTexture[right].setNull();

	rttTextureGLID[left] = NULL;
	rttTextureGLID[right] = NULL;

	rttViewports[left] = nullptr;
	rttViewports[right] = nullptr;

	vrTextures[left] = {};
	vrTextures[right] = {};
	GLBounds = {};

	handControllers[left] = nullptr;
	handControllers[right] = nullptr;

This should spare us from some trouble down the line.

Now that we’re on common ground, we can initialize OpenVR by calling vr::VR_Init(). This function takes a pointer to a variable for writing an eventual error status, and a vr::EVRApplicationType. This enumeration defines what kind of OpenVR application we’re making. The one we are interested in is VRApplication_Scene.

//Initialize OpenVR
	vrSystem = vr::VR_Init(&hmdError, vr::EVRApplicationType::VRApplication_Scene);

Now, we can check for errors

	if (hmdError != vr::VRInitError_None) //Check for errors
		switch (hmdError)
		{
			default:
				displayWin32ErrorMessage(L"Error: failed OpenVR VR_Init",
										 L"Undescribed error when initalizing the OpenVR Render object");
				exit(-1);

			case vr::VRInitError_Init_HmdNotFound:
			case vr::VRInitError_Init_HmdNotFoundPresenceFailed:
				displayWin32ErrorMessage(L"Error: cannot find HMD",
										 L"OpenVR cannot find HMD.\n"
										 L"Please install SteamVR and launchj it, and verrify HMD USB and HDMI connection");
				exit(-2);
		}

There are more error codes, and there’s a function to get a string describing the error, but this is enough for now.
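
If you’d rather get a human-readable description of the error than enumerate the cases yourself, OpenVR can produce one for you. Here’s a minimal sketch (MessageBoxA is used because the returned string is a plain char array, not a wide string):

	if (hmdError != vr::VRInitError_None)
	{
		//Ask OpenVR itself for an English description of the error
		const char* errorDescription = vr::VR_GetVRInitErrorAsEnglishDescription(hmdError);
		MessageBoxA(NULL, errorDescription, "Error: failed OpenVR VR_Init", MB_ICONERROR);
		exit(-1);
	}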

There’s one last check we have to do: we have to test whether the VRCompositor is initialized and available. vr::VRCompositor() is a static function returning the compositor’s address (a pointer). It will return nullptr if the compositor is not available.

    //Check if VRCompositor is present
    if (!vr::VRCompositor())
    {
        displayWin32ErrorMessage(L"Error: failed to init OpenVR VRCompositor",
                                 L"Failed to initialize the VR Compositor");
        exit(ANN_ERR_NOTINIT);
    }

Now we can retrieve the name of the driver and the display :

	//Get Driver and Display information
	strDriver = GetTrackedDeviceString(vrSystem, vr::k_unTrackedDeviceIndex_Hmd, vr::Prop_TrackingSystemName_String);
	strDisplay = GetTrackedDeviceString(vrSystem, vr::k_unTrackedDeviceIndex_Hmd, vr::Prop_SerialNumber_String);
	std::cerr << "Driver : " << strDriver;
	std::cerr << "Display : " << strDisplay

Now you can initialize your Ogre::Root as usual, load the RenderSystem_GL render system, and create a manual window. I’ll use the windowWidth and windowHeight variables declared above, and store the window’s address in the “window” pointer.
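
Here’s a minimal sketch of that initialization, assuming Ogre 1.x, no plugins.cfg/ogre.cfg files, and the RenderSystem_GL plugin sitting next to the executable. The window title is arbitrary; “root”, “window” and “smgr” are simply the names used in the rest of this article:

	//Assuming Ogre::Root* root, Ogre::RenderWindow* window and Ogre::SceneManager* smgr
	//are globals like the members listed earlier
	root = new Ogre::Root("", "");                               //no plugins.cfg / ogre.cfg
	root->loadPlugin("RenderSystem_GL");                         //load the OpenGL render system
	root->setRenderSystem(root->getRenderSystemByName("OpenGL Rendering Subsystem"));
	root->initialise(false);                                     //don't let Ogre auto-create a window

	//Create the "flat screen" window manually, at the size declared earlier
	window = root->createRenderWindow("Ogre OpenVR mirror", windowWidth, windowHeight, false);

	//Create the scene manager the rest of the article refers to as "smgr"
	smgr = root->createSceneManager(Ogre::ST_GENERIC);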

Create the camera, render textures and texture descriptors for OpenVR

This is part of any application using Ogre, so let me move on to more interesting things.

Even if we’re not rendering directly to the window, creating it is mandatory for the render system initialization. We will use that window to display a “flat screen” version of the game, which helps with demoing and debugging. The “naive” solution is to add a 3rd camera to the scene and use it to render to the window. This works and is easy to implement, but it’s not ideal since you’ll do 3 renders of the scene per frame instead of two. The better solution is to add an additional scene manager, create a plane and a camera in orthographic projection, ignore any lighting and map one of the eye textures to that plane; this has the advantage of reducing the 3rd render to 2 polygons and some texture mapping. I do that in my Oculus example. You could also just let the window display nothing.

Since we are on Windows, we will have to process the message queue of the application, or the system will tell the user that “this program is not responsive”. Ogre provides a static function to pump the messages out of the queue, so it’s not a real problem.
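
For the “naive” approach (and to show where the monoCam and windowViewport globals declared earlier could come from), a sketch could look like this; the clip distances are arbitrary, and you can attach the camera to the eye rig created below so the mirror view follows the headset:

	//Naive flat-screen mirror: a 3rd, monoscopic camera rendering to the window
	monoCam = smgr->createCamera("monoCam");
	monoCam->setAutoAspectRatio(true);
	monoCam->setNearClipDistance(0.1f);
	monoCam->setFarClipDistance(1000.0f);

	//This is the windowViewport that gets updated later in the render loop
	windowViewport = window->addViewport(monoCam);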

Now, with a scene manager created, we can create the cameras:

	//VR Eye cameras
	eyeRig = smgr->getRootSceneNode()->createChildSceneNode();

	//Camera for  each eye
	eyeCameras[left] = smgr->createCamera("lcam");
	eyeCameras[left]->setAutoAspectRatio(true);
	eyeRig->attachObject(eyeCameras[left]);

	eyeCameras[right] = smgr->createCamera("rcam");
	eyeCameras[right]->setAutoAspectRatio(true);
	eyeRig->attachObject(eyeCameras[right]);

Now, the 2 cameras are attached to the “camera rig” node. However, they are both at position (0, 0, 0) relative to the rig. We need to translate them to adjust the view for the user’s IPD:

    //Get the eyeToHeadTransform (they contain the IPD translation)
    for (char i(0); i < 2; i++)
        eyeCameras[i]->setPosition(getMatrix4FromSteamVRMatrix34(
            vrSystem->GetEyeToHeadTransform(getEye(oovrEyeType(i)))).getTrans());

(When I started making Annwvyn, my game engine, I was only testing programs in the debugger. The debugger suppresses the “not responsive” thing, so I was a bit shocked to see my program “not work” outside of Visual Studio, and lost 2 days understanding what was going on. All for one line of code.)

Anyway, with Ogre initialized and a window created, we can set up the render textures.

OpenVR uses the same size for both eyes. You can get the required (well, “recommended”) texture size like this:

	//Get the render texture size recommended by the OpenVR API for the current Driver/Display
	unsigned int w, h;
	vrSystem->GetRecommendedRenderTargetSize(&w, &h);
	Annwvyn::AnnDebug() << "Recommended Render Target Size : " << w << "x" << h;

Now, we want to create a couple of textures and get their OpenGL IDs. I want to make sure I’m using the RenderSystem_GL, so I’m going to do some casts that WILL crash the program if anything else is loaded… 😀

	//Create these textures in OpenGL and get their OpenGL IDs
	Ogre::GLTextureManager* textureManager = static_cast<Ogre::GLTextureManager*>(Ogre::TextureManager::getSingletonPtr());
	//Left eye texture
	rttTexture[left] = textureManager->createManual("RTT_TEX_L", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
													Ogre::TEX_TYPE_2D, w, h, 0, Ogre::PF_R8G8B8, Ogre::TU_RENDERTARGET, nullptr, gamma);
	rttTextureGLID[left] = static_cast<Ogre::GLTexture*>(textureManager->getByName("RTT_TEX_L").getPointer())->getGLID();

	//Right eye texture
	rttTexture[right] = textureManager->createManual("RTT_TEX_R", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
													 Ogre::TEX_TYPE_2D, w, h, 0, Ogre::PF_R8G8B8, Ogre::TU_RENDERTARGET, nullptr, gamma);
	rttTextureGLID[right] = static_cast<Ogre::GLTexture*>(textureManager->getByName("RTT_TEX_R").getPointer())->getGLID();

Now we can create camera viewports on the two textures:

	//Create viewport for each cameras in each render texture
	rttViewports[left] = rttTexture[left]->getBuffer()->getRenderTarget()->addViewport(eyeCameras[left]);
	rttViewports[right] = rttTexture[right]->getBuffer()->getRenderTarget()->addViewport(eyeCameras[right]);

We also need to set up the projection matrix of each camera:

	//Get the couple of matrices
	vr::HmdMatrix44_t prj[2] = { 
		vrSystem->GetProjectionMatrix(getEye(left), nearClippingDistance, farClippingDistance, API),
		vrSystem->GetProjectionMatrix(getEye(right), nearClippingDistance, farClippingDistance, API) };

	//Apply them to the camera
	for (char eye(0); eye < 2; eye++)
	{
		//Need to convert them to Ogre's object
		Ogre::Matrix4 m;
		for (char i(0); i < 4; i++) for (char j(0); j < 4; j++)
			m[i][j] = prj[eye].m[i][j];

		//Apply projection matrix
		eyeCameras[eye]->setCustomProjectionMatrix(true, m);
	}

Now we can set up the data structures used to submit our frames to the SteamVR compositor:

	//Declare the textures for SteamVR
	vrTextures[left] = { (void*)rttTextureGLID[left], API, vr::ColorSpace_Gamma };
	vrTextures[right] = { (void*)rttTextureGLID[right], API, vr::ColorSpace_Gamma };

	//Set the OpenGL texture geometry
	GLBounds = {};
	GLBounds.uMin = 0;
	GLBounds.uMax = 1;
	GLBounds.vMin = 1;
	GLBounds.vMax = 0;

Device tracking and frame rendering

The way OpenVR exposes tracking is by having an array of device descriptors, each of which contains a device that is possibly connected and tracked by the system. The only fixed device is the HMD, which has a static index in this array: vr::k_unTrackedDeviceIndex_Hmd.

We have created the array to hold this data, called trackedPoses. The vr::VRCompositor()->WaitGetPoses() function will pause the execution for some time (actually handling frame timing for you) and will populate the given “pose” array with the relevant data.

In your update loop, you will have to do this :

	vr::VRCompositor()->WaitGetPoses(trackedPoses, vr::k_unMaxTrackedDeviceCount, nullptr, 0);

You can get the HMD pose this way :

	vr::TrackedDevicePose_t hmdPose;
	if ((hmdPose = trackedPoses[vr::k_unTrackedDeviceIndex_Hmd]).bPoseIsValid)
		hmdAbsoluteTransform = getMatrix4FromSteamVRMatrix34(hmdPose.mDeviceToAbsoluteTracking);

Now my two functions from the start, getTrackedHMDTranslation() and getTrackedHMDOrientation(), will return the correct values.

I have a “feetPosition” vector and a “bodyOrientation” quaternion that are used as the reference for first-person world positioning. I need them to put the camera at the wanted position:

    //Update the eye rig tracking to make the eyes match yours
    eyeRig->setPosition(feetPosition
                        + bodyOrientation * getTrackedHMDTranslation());
    eyeRig->setOrientation(bodyOrientation * getTrackedHMDOrientation());

Now the cameras are in the right spot, and looking at the right thing. We can now actually render the 2 eye views, and submit them to the VR compositor.

We have to do the usual Ogre stuff before that: pumping the window messages and notifying the engine that frame rendering will happen.

	//Make Windows happy by pumping/clearing its event queue
	Ogre::WindowEventUtilities::messagePump();

	//Tell Ogre that frame rendering is about to happen (this unlocks the animation state updates, for example)
	root->_fireFrameRenderingQueued();

Now we can update all the viewports of our program, and the actual window :

	//Update each viewport
	rttViewports[left]->update();
	rttViewports[right]->update();
	windowViewport->update();
	window->update();

And, since we already have populated our vrTextures and GLBounds objects, we can submit each eye render to the compositor, that will apply distortion and optical correction for us :

	//Submit the textures to the SteamVR compositor
	vr::VRCompositor()->Submit(getEye(left), &vrTextures[left], &GLBounds);
	vr::VRCompositor()->Submit(getEye(right), &vrTextures[right], &GLBounds);

And it’s done, you submitted a frame, and hopefully on time 😉

There are more things to take care of with OpenVR. There’s an event system that can tell you, for example, when the user pressed the “exit” button of your app inside the Steam overlay, or when they physically adjusted the IPD on the HMD. Here’s how to poll the events and process them:

	vr::VREvent_t event;
	//Pump the events, and for each event, switch on it type
	while (vrSystem->PollNextEvent(&event, sizeof(event))) switch (event.eventType)
	{
		//Handle quiting the app from Steam
		case vr::VREvent_DriverRequestedQuit:
		case vr::VREvent_Quit:
			shouldQuitState = true;
			break;

		//Handle user IPD adjustment
		case vr::VREvent_IpdChanged:
			handleIPDChange();
			break;
	}
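
handleIPDChange() is a function from my engine. A minimal sketch of it, assuming that re-reading the eye-to-head transforms is all you need, would just re-apply the IPD offset to the eye cameras:

	//Possible implementation sketch: re-read the eye-to-head transforms
	//and re-apply the IPD offset to both eye cameras
	void handleIPDChange()
	{
		for (char i(0); i < 2; i++)
			eyeCameras[i]->setPosition(getMatrix4FromSteamVRMatrix34(
				vrSystem->GetEyeToHeadTransform(getEye(oovrEyeType(i)))).getTrans());
	}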

There are also more tracked devices in the pose array than the HMD; you’re probably interested in getting the status of the hand controllers with a Vive. You’ll have to iterate through the available devices and test whether they are connected, whether they are the controllers you’re interested in, and whether they have a valid pose (= they are being tracked by the system).

	//Iterate through the possible trackedDeviceIndexes
	for (vr::TrackedDeviceIndex_t trackedDevice = vr::k_unTrackedDeviceIndex_Hmd + 1;
		 trackedDevice < vr::k_unMaxTrackedDeviceCount;
		 trackedDevice++)
	{
		//If the device is not connected, pass.
		if (!vrSystem->IsTrackedDeviceConnected(trackedDevice))
			continue;
		//If the device is not recognized as a controller, pass
		if (vrSystem->GetTrackedDeviceClass(trackedDevice) != vr::TrackedDeviceClass_Controller)
			continue;
		//If we don't have a valid pose of the controller, pass
		if (!trackedPoses[trackedDevice].bPoseIsValid)
			continue;

		//Now you can use the "trackedDevice" id to retrieve the absolute pose, and do more stuff with it
		//...
	}
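
For illustration, here’s a sketch of what the body of that loop could do; the pose extraction reuses the matrix helper from earlier, and the left/right hand query (GetControllerRoleForTrackedDeviceIndex()) is a real OpenVR call, assuming a recent enough SDK:

	//Sketch: get the controller's absolute pose with the same helper as before,
	//and ask OpenVR which hand it is assigned to
	Ogre::Matrix4 controllerTransform = getMatrix4FromSteamVRMatrix34(
		trackedPoses[trackedDevice].mDeviceToAbsoluteTracking);

	switch (vrSystem->GetControllerRoleForTrackedDeviceIndex(trackedDevice))
	{
		case vr::TrackedControllerRole_LeftHand:
			//Left hand controller; controllerTransform.getTrans() gives its position
			break;
		case vr::TrackedControllerRole_RightHand:
			//Right hand controller
			break;
		default:
			break;
	}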

That’s about it. Contrary to the Oculus VR SDK, you can provide the textures created by Ogre directly to the compositor. The Oculus SDK, on the other hand, allocates textures and gives them to you. This is problematic with Ogre, because you can’t access an external texture directly without a big hack. The best solution I’ve found is to allocate the texture twice, once in Ogre and once by the Oculus SDK, and to take the time to copy from one to the other with glCopyImageSubData.
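
For reference, that copy is a single GL call. A minimal sketch, where oculusTextureGLID is a hypothetical OpenGL texture ID obtained from the Oculus swap chain (not something defined in this article):

	//Copy the Ogre-rendered left eye texture into the Oculus-owned texture.
	//Requires OpenGL 4.3 or the ARB_copy_image extension (GLEW is already included above).
	glCopyImageSubData(rttTextureGLID[left], GL_TEXTURE_2D, 0, 0, 0, 0,
					   oculusTextureGLID, GL_TEXTURE_2D, 0, 0, 0, 0,
					   w, h, 1);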

The implementation for the Vive with Ogre is pretty straightforward. I mostly glossed over the Ogre initialization here because it’s exactly the same as in the Rift case (and as in any Ogre application :-p. Part of it is hidden from you if you use Ogre’s framework…).

I will discuss hand controllers in a later article; I’m still actively working on them on the SteamVR side, and I would like to have something equivalent with Oculus Touch. My plan for Annwvyn is to have an abstract class with the basic information, with a test to know whether it’s a Touch or a Vive controller “user side” to get access to special features, like the Touch’s fake finger tracking…
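
Purely as an illustration of that plan (the class and all its members here are hypothetical, not actual Annwvyn code), such an abstraction could look something like this:

	//Hypothetical sketch of the planned abstraction, not actual Annwvyn code
	class HandController
	{
	public:
		virtual ~HandController() = default;

		///Basic information available for every controller
		Ogre::Vector3 getPosition() const { return position; }
		Ogre::Quaternion getOrientation() const { return orientation; }

		///"User side" test to know which kind of controller this is
		enum class Type { ViveController, OculusTouch };
		virtual Type getType() const = 0;

	protected:
		Ogre::Vector3 position;
		Ogre::Quaternion orientation;
	};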

I hope somebody made it through; it was yet another long, boring, code-filled article. Writing this was probably more helpful for me than for any eventual reader, because it was an excuse to review my own code. That’s the problem with working solo on a piece of software: you can’t really look at it with fresh eyes.

Comments

  1. Hey,

    Just wanted to say, you saved me a world of pain. I’ve been wrestling with trying to get Oculus to work with Autodesk Motionbuilder, which has a similar OpenGL engine (I think) to Ogre.

    Trying everything to get the texture buffers to marry between Motionbuilder and the Oculus compositor. I’m going to try and use OpenVR this weekend. Seems like a much better bet:)

    Thanks for taking the time to walk through this so well.

    • Hi!

      The main advantage that OpenVR has when you use OpenGL, is that you can give the ID of your textures and it basically “just works”. The Oculus SDK wants to “own” the textures used. It’s mainly because of the way it’s implemented (notably, their weird custom post-processing steps they developed with GPU manufacturers, and the fact that they use interops between Direct3D and OpenGL so that their own compositor is only implemented using DirectX, even if the client app is rendering via OpenGL)

      The only “simple” way I found to get the Oculus SDK working painlessly is to do what is suggested in the developer’s documentation: copy the buffer you render into the input buffer provided by the Oculus service. It’s not ideal, but I found that the overhead of doing this copy via glCopyImageSubData isn’t that much. Still, it annoys me.

      BTW, I’m super happy that something I’ve written on this blog has helped somebody. I don’t have a lot of traffic, nor a lot of content, so I really appreciate your comment. 😀

  2. thanks for your blog~
