Using Ogre’s OpenGL renderer with the OpenVR API (SteamVR, HTC-Vive SDK)

Note to the reader: if some things in this article are unclear and/or omitted, it’s probably because I’ve already explained them in the previous article about the Oculus Rift SDK here

The OpenVR API, and the whole “SteamVR” software stack, is really interesting to target because it’s compatible with many VR systems from the get-go: you code something once, and it runs on all of them.

In practice, the OpenVR API is a bit simpler to code with than the Oculus SDK. Its naming conventions are from the 90’s (as with all of Valve’s SDKs, but at least they are consistent with themselves!)

Also, the resulting code is less verbose than with the Oculus SDK. It’s almost like Oculus wanted to make their code fit the Windows/DirectX style (setting up structures with a lot of parameters and passing pointers to them to functions) while Valve took a more OpenGL approach (functions that take fixed types and flags telling them what to do); they even named their library OpenVR.

You could say that the OpenVR API is a bit lower level than the Oculus Rift one, and you’d be right. Contrary to the Oculus OVR library, you do not get the following things:

  • No automatic allocation of render textures
  • No C++ mathematical classes for matrices, vectors and quaternions
  • No convenient “orientation” quaternion and “position” vector to express a tracked pose. You’ll get a plain old float array representing a transformation matrix.

Fortunately, we have Ogre on hand. Ogre has every mathematical object we need, and since we’re on a PC platform, Ogre defines all the underlying mathematical operations in FPU assembly, so it’s probably better than anything I would write myself…

However, we still need to “translate” the data retrieved from the OpenVR API to the formats we want.

SteamVR will return transformation matrices representing a translation and a rotation from the room’s origin point. This reference point is at the center of the player’s space, on the ground. These matrices don’t contain any scale transform, because you cannot “scale” objects from the real world. You can move them with 6 degrees of freedom (3 axis translations and 3 axis rotations), not 9.

Ogre’s Matrix4 class makes it easy to extract the translation and rotation, and you can feed a list of floats to its constructor. We will use this little helper function to deal with that:
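
A minimal sketch of such a helper (name it however you like; it’s basically the ConvertSteamVRMatrixToMatrix4 function from Valve’s hellovr sample):

```cpp
// OpenVR hands out 3x4 row-major matrices; the missing 4th row of the
// homogeneous matrix is always (0, 0, 0, 1)
Ogre::Matrix4 getMatrix4FromSteamVRMatrix34(const vr::HmdMatrix34_t& mat)
{
    return Ogre::Matrix4{
        mat.m[0][0], mat.m[0][1], mat.m[0][2], mat.m[0][3],
        mat.m[1][0], mat.m[1][1], mat.m[1][2], mat.m[1][3],
        mat.m[2][0], mat.m[2][1], mat.m[2][2], mat.m[2][3],
        0.0f,        0.0f,        0.0f,        1.0f };
}
```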

You can easily retrieve the translation by calling Ogre::Matrix4::getTrans() and the orientation by calling Ogre::Matrix4::extractQuaternion().

I keep the HMD pose as a global Ogre::Matrix4, and I retrieve the position/orientation with these functions:
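
A sketch of those two getters, assuming the pose is kept in a global matrix called hmdAbsoluteTransform:

```cpp
// Last HMD pose received from OpenVR, converted to an Ogre matrix
Ogre::Matrix4 hmdAbsoluteTransform{ Ogre::Matrix4::IDENTITY };

Ogre::Vector3 getTrackedHMDTranslation()
{
    // Translation component of the tracked pose
    return hmdAbsoluteTransform.getTrans();
}

Ogre::Quaternion getTrackedHMDOrientation()
{
    // Rotation component of the tracked pose
    return hmdAbsoluteTransform.extractQuaternion();
}
```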

We can also extract the actual device name when we do device enumeration. Valve’s example provides a function for that. We’re going to steal it too:
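
It goes like this (copied almost verbatim from Valve’s hellovr_opengl sample):

```cpp
// Ask for the length of the string property, allocate a buffer, then fetch it for real
std::string GetTrackedDeviceString(vr::IVRSystem* pHmd, vr::TrackedDeviceIndex_t unDevice,
                                   vr::ETrackedDeviceProperty prop,
                                   vr::ETrackedPropertyError* peError = nullptr)
{
    uint32_t unRequiredBufferLen =
        pHmd->GetStringTrackedDeviceProperty(unDevice, prop, nullptr, 0, peError);
    if (unRequiredBufferLen == 0) return "";

    char* pchBuffer = new char[unRequiredBufferLen];
    unRequiredBufferLen =
        pHmd->GetStringTrackedDeviceProperty(unDevice, prop, pchBuffer, unRequiredBufferLen, peError);
    std::string sResult = pchBuffer;
    delete[] pchBuffer;
    return sResult;
}
```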

Also, I don’t really like the way the MessageBox() function of the Windows API looks; it’s a bit confusing, so I have a small wrapper function just for displaying error dialogs, which looks like this:
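
A minimal version of such a wrapper (displayWin32ErrorMessage is just the name I’ll use in the snippets below):

```cpp
void displayWin32ErrorMessage(const std::string& title, const std::string& content)
{
    // No owner window, just an "error" icon and an OK button
    MessageBoxA(nullptr, content.c_str(), title.c_str(), MB_ICONERROR | MB_OK);
}
```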

And I’ve set up an enum for “left” and “right”. Sometimes we will need to convert that to another type, so here’s a stupid function for that too:
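
For example (the enum values double as array indices in the snippets below):

```cpp
enum oovrEyeType { left, right };

// "Stupid" conversion to OpenVR's own eye enum
vr::EVREye getEye(oovrEyeType eye)
{
    return (eye == left) ? vr::Eye_Left : vr::Eye_Right;
}
```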

 

Integration outline

We basically need to perform the same operations as with the Oculus SDK.

  • Initialize OpenVR and check that we don’t have any errors (no headset, no runtime installed, things like that)
  • Initialize a couple of render textures at the required render target size
  • Give the textures’ OpenGL IDs to a couple of “texture descriptors” for OpenVR that tell it how to handle the textures and whether we are using OpenGL or DirectX (so, OpenGL, right? :D)
  • Create a couple of cameras in the scene and place them where the user’s eyes should be (taking care of using the IPD from the library)
  • Apply the correct transformation matrices to both cameras, to render with the correct perspective
  • In the render loop, get the tracking of the headset and apply it to our camera system
  • Render and submit both eyes’ images to the VR compositor

As in the previous article about the Oculus OVR library, I will only give you snippets of code and not a “complete” class, because I don’t know how you’ll want to integrate all of this. You can either adapt all of this to the Ogre application framework, or you can set things up on your own.

I have an example of this implementation inside the code of my open-source VR game engine; you can browse that code here: OgreOpenVRRender.cpp and here: OgreOpenVRRender.hpp.

OpenVR setup, VR System initialization

The OpenVR SDK can be found on ValveSoftware’s GitHub here: https://github.com/valvesoftware/openvr

You’ll need to add the path to the “headers” directory to your include list, and the path to the library you’ll use (<repo>/lib/win64 in my case) to your linker search directories.

You’ll need to link against openvr_api.lib on Windows, and your executable has to be able to find openvr_api.dll. You’ll find this file in the bin folder of Valve’s repository.

Every component of OpenVR is part of the “vr” namespace. You can use the directive “using namespace vr;” or, just like I’ll do here, type “vr::” before any OpenVR member.

Here are the includes you’ll need:
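
Something like this (the exact paths may vary depending on how your Ogre and OpenVR SDKs are laid out):

```cpp
#include <string>
#include <cstdlib>

// Win32, for MessageBox(). NOMINMAX avoids the min/max macros clashing with Ogre
#define NOMINMAX
#include <windows.h>

// Ogre itself, plus the OpenGL render system texture header (needed to get the GL texture IDs)
#include <Ogre.h>
#include <OgreGLTexture.h>

// The OpenVR API
#include <openvr.h>
```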

We will also need a few global objects:
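
Here’s a plausible set, using the names the rest of this article refers to (windowWidth, trackedPoses, vrTextures, GLBounds, feetPosition…); the other names are just my choices, adjust to taste:

```cpp
// The "brain" of OpenVR, and the error code holder that goes with it
vr::IVRSystem* vrSystem{ nullptr };
vr::EVRInitError hmdError{ vr::VRInitError_None };

// Ogre basics: root, mirror window (size is up to you), scene manager, camera rig
Ogre::Root* root{ nullptr };
Ogre::RenderWindow* window{ nullptr };
unsigned int windowWidth{ 1280 }, windowHeight{ 720 };
Ogre::SceneManager* smgr{ nullptr };
Ogre::SceneNode* cameraRig{ nullptr };
Ogre::Camera* eyeCameras[2]{ nullptr, nullptr };

// Per-eye render textures: the Ogre objects, and the raw OpenGL IDs OpenVR wants
Ogre::TexturePtr rttTexture[2];
GLuint rttTextureGLID[2]{ 0, 0 };

// What OpenVR needs to know about our textures at Submit() time
vr::Texture_t vrTextures[2]{};
vr::VRTextureBounds_t GLBounds{};

// Tracking data for every device (plus the hmdAbsoluteTransform matrix shown earlier)
vr::TrackedDevicePose_t trackedPoses[vr::k_unMaxTrackedDeviceCount]{};

// 1st person "reference" position/orientation in the virtual world
Ogre::Vector3 feetPosition{ 0, 0, 0 };
Ogre::Quaternion bodyOrientation{ Ogre::Quaternion::IDENTITY };

// Main loop control
bool running{ true };
```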

As you can see, there are a number of booleans, pointers and flag holders that we need to initialize. Before doing anything else, the brace initializers above put everything in a known, “empty” state.

This should spare us from some trouble down the line.

Now that we’re on common ground, we can initialize OpenVR by calling vr::VR_Init(). This function takes a pointer to a variable where it writes a potential error status, and a vr::EVRApplicationType. This enumeration defines what kind of OpenVR application we’re making. The one we are interested in is VRApplication_Scene:
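
That gives us:

```cpp
// Ask for a "Scene" application (a 3D app that submits frames to the compositor).
// If anything goes wrong, hmdError will tell us about it.
vrSystem = vr::VR_Init(&hmdError, vr::VRApplication_Scene);
```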

Now we can check for errors:
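
A sketch of that check, only special-casing the “headset not found” error:

```cpp
if (hmdError != vr::VRInitError_None)
{
    switch (hmdError)
    {
    case vr::VRInitError_Init_HmdNotFound:
        displayWin32ErrorMessage("Error: headset not found",
                                 "Can't find a VR headset. Is it plugged in and is SteamVR running?");
        break;
    default:
        displayWin32ErrorMessage("Error: OpenVR",
                                 "Can't initialize the OpenVR subsystem.");
    }
    exit(-1);
}
```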

There are more error codes, and there’s a function to get a string describing the error (vr::VR_GetVRInitErrorAsEnglishDescription()), but this is enough for now.

There’s one last check we have to do: we have to test whether the VR compositor is initialized and available. vr::VRCompositor() is a static function returning the compositor’s address (a pointer). It will return nullptr if it’s not available:
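
For example:

```cpp
// VRCompositor() lazily initializes the compositor and returns a pointer to it.
// If we get nullptr back, there's no point in going further.
if (vr::VRCompositor() == nullptr)
{
    displayWin32ErrorMessage("Error: OpenVR",
                             "Can't initialize the SteamVR compositor.");
    exit(-1);
}
```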

Now we can retrieve the name of the driver and the display:
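
Using the GetTrackedDeviceString() helper we stole from Valve earlier:

```cpp
// "Driver" is the name of the tracking system, "display" is the serial number of the headset
const std::string driverName = GetTrackedDeviceString(vrSystem, vr::k_unTrackedDeviceIndex_Hmd,
                                                      vr::Prop_TrackingSystemName_String);
const std::string displayName = GetTrackedDeviceString(vrSystem, vr::k_unTrackedDeviceIndex_Hmd,
                                                       vr::Prop_SerialNumber_String);
```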

Now you can initialize your Ogre::Root as usual, load the RenderSystem_GL render system, and create a manual window. I’ll use the windowWidth and windowHeight variables declared above, and store the window’s address in the “window” pointer.

Create the cameras, render textures and texture descriptors for OpenVR

This is part of any application using Ogre, so let me move on to more interesting things.

Even if we’re not rendering directly to the window, creating it is mandatory for the render system initialization. We will use that window to display a “flat screen” version of the game, which helps with demoing and debugging. The “naive” solution is to add a 3rd camera to the scene and use it to render to the window. This works and is easy to implement, but it’s not ideal since you’ll do 3 renders of the scene per frame instead of two. The better solution is to add an additional scene manager containing a plane and a camera in orthographic projection, ignore any lighting, and map the texture onto that plane; this reduces the 3rd render to 2 polygons and some texture mapping. I do that in my Oculus example. You could also just let the window display nothing.

Since we are on Windows, we will have to process the message queue of the application, or the system will tell the user that “this program is not responsive”. Ogre provides a static function to pump the messages out of the queue, so it’s not a real problem.

Now, with a scene manager created, we can create some cameras:
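
A sketch of that, with both cameras attached to a “camera rig” scene node (the node will receive the tracked head pose later):

```cpp
// The rig node carries both eye cameras; smgr is the scene manager created during Ogre init
cameraRig = smgr->getRootSceneNode()->createChildSceneNode();

const char* eyeCameraNames[]{ "leftEyeCamera", "rightEyeCamera" };
for (auto eye : { left, right })
{
    eyeCameras[eye] = smgr->createCamera(eyeCameraNames[eye]);
    eyeCameras[eye]->setNearClipDistance(0.05f);   // 5 cm
    eyeCameras[eye]->setFarClipDistance(4000.0f);  // 4 km
    cameraRig->attachObject(eyeCameras[eye]);
}
```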

Now, the 2 cameras are attached to the “camera rig” node. However, they are both at position (0, 0, 0) relative to the rig. We need to translate them to adjust the view for the user’s IPD:
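
GetEyeToHeadTransform() gives us exactly that offset (its translation is basically ±IPD/2 on the X axis), so a sketch can look like this:

```cpp
for (auto eye : { left, right })
{
    // Transform from eye space to head space, converted to an Ogre matrix
    const Ogre::Matrix4 eyeToHead =
        getMatrix4FromSteamVRMatrix34(vrSystem->GetEyeToHeadTransform(getEye(eye)));

    // Offset each camera from the rig by that translation
    eyeCameras[eye]->setPosition(eyeToHead.getTrans());
}
```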

(When I started making Annwvyn, my game engine, I was only testing programs in the debugger. The debugger will ignore the “not responsive” thing. I was a bit shocked to see my program “not work” outside of Visual Studio, and lost 2 days understanding what was going on. For one line of code.)

Anyway: with Ogre initialized and a window created, we can set up the render textures.

OpenVR uses the same size for both eyes. You can get the required (well, “recommended”) texture size like this:
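
Like this:

```cpp
// Size (in pixels) each eye buffer should be, according to the runtime
uint32_t texWidth{ 0 }, texHeight{ 0 };
vrSystem->GetRecommendedRenderTargetSize(&texWidth, &texHeight);
```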

Now, we want to create a couple of textures and get their OpenGL IDs. I want to make sure I’m using RenderSystem_GL, so I’m going to make a cast that WILL crash the program if it’s anything else… 😀
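
A sketch of that (the texture names are arbitrary):

```cpp
const char* rttNames[]{ "RTT_leftEye", "RTT_rightEye" };
for (auto eye : { left, right })
{
    // Create a render-to-texture target at the recommended size
    rttTexture[eye] = Ogre::TextureManager::getSingleton().createManual(
        rttNames[eye], Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
        Ogre::TEX_TYPE_2D, texWidth, texHeight, 0, Ogre::PF_R8G8B8A8, Ogre::TU_RENDERTARGET);

    // This cast returns nullptr (and the next line crashes) if the render system isn't RenderSystem_GL
    auto glTexture = dynamic_cast<Ogre::GLTexture*>(rttTexture[eye].get());
    rttTextureGLID[eye] = glTexture->getGLID();
}
```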

Now we can create camera viewports on the two textures:
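
For example:

```cpp
for (auto eye : { left, right })
{
    // Get the render target behind each texture and attach a full-size viewport to it
    Ogre::RenderTexture* rtt = rttTexture[eye]->getBuffer()->getRenderTarget();
    Ogre::Viewport* vp = rtt->addViewport(eyeCameras[eye]);
    vp->setBackgroundColour(Ogre::ColourValue::Black);
    vp->setOverlaysEnabled(false);
}
```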

One thing we also need is to set up the projection matrix of each camera:
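
A sketch, feeding OpenVR’s per-eye projection matrix straight to Ogre (note that older versions of the API took an extra “graphics API convention” parameter here):

```cpp
for (auto eye : { left, right })
{
    // Ask OpenVR for the (asymmetric) projection matrix of this eye
    const vr::HmdMatrix44_t proj = vrSystem->GetProjectionMatrix(
        getEye(eye),
        eyeCameras[eye]->getNearClipDistance(),
        eyeCameras[eye]->getFarClipDistance());

    // Both OpenVR and Ogre store these row by row, so we can copy it directly
    eyeCameras[eye]->setCustomProjectionMatrix(true, Ogre::Matrix4{
        proj.m[0][0], proj.m[0][1], proj.m[0][2], proj.m[0][3],
        proj.m[1][0], proj.m[1][1], proj.m[1][2], proj.m[1][3],
        proj.m[2][0], proj.m[2][1], proj.m[2][2], proj.m[2][3],
        proj.m[3][0], proj.m[3][1], proj.m[3][2], proj.m[3][3] });
}
```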

Now we can set up the data structures used to submit our frames to the SteamVR compositor:
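
A sketch of those descriptors, filling the vrTextures and GLBounds globals from earlier (depending on your OpenVR version, the “API” enum is either TextureType_OpenGL or API_OpenGL):

```cpp
for (auto eye : { left, right })
{
    // The "handle" of an OpenGL texture is just its GLuint name, crammed into a void*
    vrTextures[eye].handle = reinterpret_cast<void*>(static_cast<size_t>(rttTextureGLID[eye]));
    vrTextures[eye].eType = vr::TextureType_OpenGL;
    vrTextures[eye].eColorSpace = vr::ColorSpace_Gamma;
}

// Use the whole texture. If the image ends up upside down (OpenGL's origin is the
// bottom-left corner), swap vMin and vMax.
GLBounds.uMin = 0;
GLBounds.uMax = 1;
GLBounds.vMin = 0;
GLBounds.vMax = 1;
```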

Device tracking and frame rendering

The way OpenVR exposes tracking is by having an array of device descriptors; each of them contains a device that is possibly connected and tracked by the system. The only fixed device you have is the HMD. The HMD has a static index in this array: vr::k_unTrackedDeviceIndex_Hmd.

We have created the array to hold the data, called trackedPoses. The vr::VRCompositor()->WaitGetPoses() function will pause the execution for some time (actually handling frame timing for you) and will populate the given “pose” array with the relevant data.

In your update loop, you will have to do this:
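
Something like:

```cpp
// Blocks until a few milliseconds before "vsync", then fills trackedPoses with the
// predicted poses of every device for the frame we're about to render
vr::VRCompositor()->WaitGetPoses(trackedPoses, vr::k_unMaxTrackedDeviceCount, nullptr, 0);
```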

You can get the HMD pose this way:
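
For example:

```cpp
const auto& hmdPose = trackedPoses[vr::k_unTrackedDeviceIndex_Hmd];
if (hmdPose.bPoseIsValid)
    hmdAbsoluteTransform = getMatrix4FromSteamVRMatrix34(hmdPose.mDeviceToAbsoluteTracking);
```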

Now my two functions from the start, getTrackedHMDTranslation() and getTrackedHMDOrientation(), will return the correct values.

I have a “feetPosition” vector and a “bodyOrientation” quaternion that are used as the reference for 1st person world positioning. I need them to put the camera at the wanted position:
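
A sketch of that, applying everything to the camera rig node:

```cpp
// Put the rig at the player's feet, rotated the way the body faces,
// then add the tracked head pose on top of it
cameraRig->setPosition(feetPosition + bodyOrientation * getTrackedHMDTranslation());
cameraRig->setOrientation(bodyOrientation * getTrackedHMDOrientation());
```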

Now the cameras are in the right spot, and looking at the right thing. We can now actually render the 2 eye views, and submit them to the VR compositor.

Before that, we have to do the usual Ogre stuff: pumping the window messages and notifying the engine that frame rendering will happen:
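
Something like:

```cpp
// Keep Windows happy by pumping the message queue, then tell Ogre a frame is starting
Ogre::WindowEventUtilities::messagePump();
root->_fireFrameStarted();
```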

Now we can update all the viewports of our program, and the actual window:
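
For example:

```cpp
// Render both eyes into their textures, then refresh the (mirror) window
for (auto eye : { left, right })
    rttTexture[eye]->getBuffer()->getRenderTarget()->update();
window->update();
```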

And, since we have already populated our vrTextures and GLBounds objects, we can submit each eye’s render to the compositor, which will apply distortion and optical correction for us:
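
Like this:

```cpp
// Hand both eye textures to the compositor: it applies the lens distortion
// correction and displays the result inside the headset
vr::VRCompositor()->Submit(vr::Eye_Left,  &vrTextures[left],  &GLBounds);
vr::VRCompositor()->Submit(vr::Eye_Right, &vrTextures[right], &GLBounds);

// And the frame is done as far as Ogre is concerned
root->_fireFrameEnded();
```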

And it’s done, you submitted a frame, hopefully on time 😉

There are more things to take care of with OpenVR. There’s an event system that can tell you when the user pressed the “exit” button of your app inside the Steam overlay, or when a tracked device was physically plugged in or removed. Here’s how to poll the events and process them:
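
A sketch of an event-polling loop (here I only care about quit requests and devices being plugged or unplugged):

```cpp
vr::VREvent_t event;
while (vrSystem->PollNextEvent(&event, sizeof(event)))
{
    switch (event.eventType)
    {
    case vr::VREvent_Quit:
    case vr::VREvent_DriverRequestedQuit:
        running = false; // cleanly exit the main loop
        break;

    case vr::VREvent_TrackedDeviceActivated:
    case vr::VREvent_TrackedDeviceDeactivated:
        // A device was plugged in or removed: refresh your device list here
        break;

    default:
        break;
    }
}
```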

There are also more tracked devices in the array we got the poses from; you’re probably interested in getting the status of the hand controllers of a Vive. You’ll have to iterate through the available devices and test whether they are connected, whether they are the controllers you’re interested in, and whether they have a valid pose (= are being tracked by the system).
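
A sketch of what that enumeration can look like:

```cpp
for (vr::TrackedDeviceIndex_t i = 0; i < vr::k_unMaxTrackedDeviceCount; i++)
{
    if (!vrSystem->IsTrackedDeviceConnected(i)) continue;
    if (vrSystem->GetTrackedDeviceClass(i) != vr::TrackedDeviceClass_Controller) continue;
    if (!trackedPoses[i].bPoseIsValid) continue;

    // "i" is a usable hand controller: its pose is in trackedPoses[i].mDeviceToAbsoluteTracking,
    // and its buttons/axes can be read with vrSystem->GetControllerState()
}
```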

That’s about it. Contrary to the Oculus VR SDK, you can provide the textures created by Ogre directly to the compositor. The Oculus SDK allocates textures and gives them to you, which is problematic with Ogre because you can’t access an external texture directly without a big hack. The best solution I’ve found is to allocate the texture twice, once in Ogre and once by the Oculus SDK, and take the time to copy from one texture to the Oculus one with glCopyImageSubData.

The implementation of the Vive with Ogre is pretty straightforward. I omitted the Ogre initialization here because it’s exactly the same as in the Rift case (and any Ogre application :-p — part of it is hidden from you if you use Ogre’s framework…).

I will discuss hand controllers in a later article; I’m still actively working on them on the SteamVR side, and I would like to have something equivalent with Oculus Touch. My plan for Annwvyn is to have an abstract class with the basic information, with a test to know whether it’s a Touch or a Vive controller “user side” to get access to special features, like the Touch’s fake finger tracking…

I hope somebody made it through; it was yet another long, boring, code-filled article. Writing this was probably more helpful for me than for any eventual reader, because it was an excuse to review my own code. That’s the problem with working solo on a piece of software: you can’t really look at it with fresh eyes.

Comments

  1. Hey,

    Just wanted to say, you saved me a world of pain. I’ve been wrestling with trying to get Oculus to work with Autodesk Motionbuilder, which has a similar OpenGL engine (I think) to Ogre.

    Trying everything to get the texture buffers to marry between Motionbuilder and the Oculus compositor. I’m going to try and use OpenVR this weekend. Seems like a much better bet:)

    Thanks for taking the time to walk through this so well.

    • Hi!

      The main advantage that OpenVR has when you use OpenGL is that you can give it the IDs of your textures and it basically “just works”. The Oculus SDK wants to “own” the textures used. It’s mainly because of the way it’s implemented (notably, the weird custom post-processing steps they developed with GPU manufacturers, and the fact that they use interop between Direct3D and OpenGL so that their compositor is only implemented using DirectX, even if the client app is rendering via OpenGL).

      The only “simple” way I found to get the Oculus SDK working painlessly is to do what is suggested in the developer’s documentation: copy the buffer you render into the input buffer provided by the Oculus service. It’s not ideal, but I found that the overhead of doing this copy via glCopyImageSubData isn’t that much. Still, it annoys me.

      BTW, I’m super happy that something I’ve written on this blog has helped somebody. I don’t have a lot of traffic, nor a lot of content, so I really appreciate your comment. 😀
