
SDL: not (just) a 2D graphics library

3 September 2019 | C++, cross platform, SDL


My first real introduction to programming was the C programming language, on a French website that was known, at the time, as “Le Site Du Zéro” (think “The Newbie’s Website”).

At the time, I was in middle school, and I was pretty bored by it. It’s around then that I started to have real access to the internet at home, and started spending a lot of time on it.

There were a few things behind this: mostly, I was trying to understand how video games work and how they are made. I was also trying to learn more about how computers work, without really knowing what to look for, and I started dabbling in Linux around then. This was around 2007.

This website had a well written, comprehensive, and actually fun “Introduction to C programming” tutorial. (They called them tutorials, but think of them more like courses…) Today, this website has become “OpenClassrooms”, a popular French MOOC platform, but the original “tutos” are still up; some of them were improved, and some were turned into video courses.

This one was divided into 3 parts. The first is your “C programming 101” type of stuff, where they show you how to create variables, functions, if statements, loops, structures… this kind of thing. The second covers more advanced material: pointers, memory allocation, arrays and whatnot. The third shows you how to use a library, taking the SDL as its example to create a window, display things, and make a little sokoban game (with stolen Super Mario World sprites 😉).

In this course, the SDL is described as a 2D graphics library for games, and for years I regarded it as just that. I have no clue whether other people shared this misconception, but when I look for material about the SDL, I mostly see things about making 2D games with it, or treating it merely as a way to get a window and a screen and not much else.

But, oh man. It’s so much more than that.

The SDL name made little sense to me at the time, but I never really thought about it. SDL is the Simple DirectMedia Layer. A better way of describing the SDL would be “a cross-platform API for doing most of the things you would want to do with an OS”: drawing windows, creating threads, loading DLLs, and pretty much anything else.
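
To make this concrete, here’s a minimal sketch (my own illustration, not from any tutorial) of SDL2 doing three very different OS-level jobs through a single API. The “somelib” library name is hypothetical:

// Window creation, threading, and dynamic library loading, all through SDL.
#include <SDL.h>

static int worker(void*)
{
    SDL_Log("Hello from an SDL thread!");
    return 0;
}

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);

    // Create a window, whatever the underlying platform is.
    SDL_Window* window = SDL_CreateWindow("SDL demo",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 800, 600, 0);

    // Spawn a thread through SDL instead of the OS-specific API.
    SDL_Thread* thread = SDL_CreateThread(worker, "worker", nullptr);
    SDL_WaitThread(thread, nullptr);

    // Load a shared library (.dll/.so/.dylib) portably. "somelib" is made up.
    if (void* lib = SDL_LoadObject("somelib"))
        SDL_UnloadObject(lib);

    SDL_Delay(2000);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}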

I’ve recently learned that it was developed as an aid for porting software from Windows to other platforms. The main idea was to provide, in one convenient package, functionality analogous to what was available with Microsoft DirectX. It was developed by Sam Lantinga, a former Loki Software employee. The business model of that company was porting video games from Windows to other platforms, mostly Linux.

The SDL works on pretty much anything you would want to use. Today, to me it looks more like a fundamental building block for multimedia programs (video games, and other interactive things with custom graphics), one that frees you from worrying too much about the platform you are running on.

If you use the SDL, anything you do through it will work on a computer running Windows, macOS, Linux, flavors of BSD, obscure little things like Haiku (Haiku is awesome. It is a really interesting operating system project), any kind of smartphone, be it Android or iOS, and probably many more things like video game consoles. All these platforms are available to you without having to change your code much (not at all, most of the time).

Since version 2.0 came out, the SDL is licensed under the zlib license. Before that, SDL was LGPL. The LGPL is not a problem for commercial, closed-source software (like video games usually are) when the library is dynamically linked to the program (for example, on Windows, the library is a .DLL file distributed alongside the binaries of the game). It permits you to recompile and change the library, and the developers of the game need to make the source code of the version used with the game available to you, including any changes they may have made to it, under the same rights. This can make some companies’ lawyers uneasy. The zlib license is a much shorter, much more straightforward license that pretty much says “do whatever you want with this code; credit is appreciated”.

Also since version 2.0, the huge feature list of the library was expanded further. The impressive feature list can be read here.

I spent quite some time misrepresenting what the SDL was, thinking it was just a lousy 2D graphics library with a software renderer, and not the impressive tool that it actually is.

A few years ago, while searching for content from Valve about their SteamVR system, I stumbled across this talk by Ryan C. Gordon, a prolific freelance programmer who, if you ever ran a video game natively on Linux, very probably helped make that a reality. I really invite you to watch it, and appreciate all the hard work this simple C library can do for you in just a few lines…

As a side note, I really should plug a little project a good friend of mine started some time ago (and which I contribute to, because it’s awesome) to make a safe, C++17, RAII wrapper around most of the useful bits of SDL for 2D/OpenGL/Vulkan programming. Check it out here: https://github.com/Edhebi/cpp-sdl2

Ogre_glTF: A glTF loader plugin for Ogre 2.x

18 February 2019 | C++, glTF, Ogre

If there’s one open source library that I really like and think has a great level of usefulness for both myself and a whole community, it’s Ogre.

Before going into the story of why I felt loading glTF files into Ogre was a necessary thing to do, and why I decided to actually write a loader myself, I need to tell you a bit about Ogre:

Good old Ogre3D

I prefer to write it “Ogre”, but really, its name is OGRE. This is an acronym that stands for “Object-oriented Graphics Rendering Engine”.

Ogre has existed for ages, and is a well regarded 3D rendering engine. The main advantages of Ogre are:

  • Its code base is stable,
  • It’s full of features,
  • It’s cross-platform,
  • It only exposes a simple scene graph to you, abstracting away whatever graphics API the system is using,
  • Has a simple “resource manager” to get your 3D objects and textures,
  • And is graphics-API agnostic. Through Ogre’s public API you can choose the backend (called a “RenderSystem”) to use: OpenGL, Direct3D, Metal…

This last point means that when using Ogre’s public API, you don’t need to know about the way to talk to the GPU, and your code should work the same whether the rendering is happening through OpenGL, Direct3D, Metal, or whatever else it may support.

Recently (a few years ago now), Ogre development has been pushed in a new direction in a parallel branch called Ogre 2.x. As far as I can tell, this is mostly the work of Matías N. Goldberg.

This new branch is focused on modernizing Ogre by removing a lot of overhead from the original design. (Ogre was really “design pattern” focused, exposing listeners for pretty much everything, with lots of virtual calls…) This is done in several ways: adopting new best practices for performance like data-oriented design (DOD), using half-precision floating point numbers where it doesn’t hurt, improving cache usage, using AZDO (Approaching Zero Driver Overhead) techniques in the renderers, removing pointless preprocessing now that, on modern hardware, simpler and more direct approaches can be faster… And a lot of things I’m less familiar with, to be honest.

But the performance boost can be really significant, notably in CPU-bound situations; Ogre now makes much better use of your CPU time in large scenes with instantiated objects and lots of nodes in the scene graph to go through.

What I was really interested in when upgrading to Ogre 2.1 was the introduction of a new “material” system in Ogre 2. It’s called HLMS, which stands for “High Level Material System”. It is a shader preprocessor/template system where the definition of a material generates the shader code needed to render it, using a set of fixed parameters.

This replaces the older “dynamically generated” shader system Ogre used to have, called RTSS (Run-Time Shader System). One of the main advantages of HLMS is a really simplified syntax for the end user, and the fact that its main goal is not to emulate an old fixed-function pipeline, which is what RTSS was originally conceived for.

Speaking of modernity, HLMS comes with two standard implementations (you can modify them or write your own if you know shader programming): Unlit and PBS (with a “mobile” version of each).

Unlit, like its name suggests, is the simplest thing a material can do. It only defines the color values of pixels, and doesn’t do any shading. This is intended to be used for things like UI, HUDs, overlays, things along those lines.

PBS stands for Physically Based Shading. This is what most of the industry would refer to as “PBR” (R for “Rendering”; there’s a distinction between them: shading is one part of “rendering” in general). The HLMS PBS implementation in Ogre 2.x is comparable to the standard PBR material you’ll find in Unreal Engine and many others. It uses standard physical parameters to describe how light interacts with the surface.

This is all fine and dandy, but, at least at the time I started working on the glTF plugin (about a year ago now), there was no way to export from Blender, for example, an Ogre 2.x “.mesh” file with an accompanying “.material” that would be compatible with the new HLMS system.

This would not have been a problem if Ogre could just load glTF assets. As explained in my previous article, glTF is a 3D file format that represents 3D assets (and even full scenes if you want!) including meshes, animations, textures and PBR materials. The material in question uses the standard “metal-roughness” workflow (more explanations in the excellent guide from Allegorithmic, the makers of Substance).

glTF being an industry-standard file format, we don’t need to rely on up-to-date exporters for multiple 3D modeling programs to produce files compatible with the current version of Ogre, nor on their ability to correctly translate material definitions and animation tracks to Ogre.

This is what I thought, and this is the reason why I started working on a glTF loader for Ogre.

Loading glTF in practice

A glTF asset is one or more files composed of 3 parts:

  • A JSON object containing metadata about the scenes, nodes, meshes, materials, animations and everything else inside the asset
  • Binary buffers containing raw vertex/index/animation data
  • Image assets containing textures

A .gltf file is a simple JSON file, containing either base64 data URIs or URLs pointing to binary buffers (generally .bin files) and images.
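
To give you a feel for it, here is a hand-abridged .gltf describing a single triangle, with its vertex data stored in an external triangle.bin file (illustrative, not copied from a real asset):

{
  "asset": { "version": "2.0" },
  "scene": 0,
  "scenes": [ { "nodes": [ 0 ] } ],
  "nodes": [ { "mesh": 0 } ],
  "meshes": [ { "primitives": [ { "attributes": { "POSITION": 0 } } ] } ],
  "accessors": [ {
    "bufferView": 0,
    "componentType": 5126,
    "count": 3,
    "type": "VEC3",
    "min": [ 0, 0, 0 ],
    "max": [ 1, 1, 0 ]
  } ],
  "bufferViews": [ { "buffer": 0, "byteOffset": 0, "byteLength": 36 } ],
  "buffers": [ { "uri": "triangle.bin", "byteLength": 36 } ]
}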

A .glb file is a binary file containing a standard header, followed by the JSON metadata in ASCII text, followed by all the binary data required (including images).
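
Reading that header layout is straightforward. Here’s a minimal, illustrative C++ sketch that checks the magic number and pulls out the JSON chunk (the file name is made up, and error handling is omitted):

#include <cstdint>
#include <cstdio>
#include <fstream>
#include <vector>

int main()
{
    std::ifstream file("asset.glb", std::ios::binary);

    // 12-byte header: magic "glTF", container version, total length
    uint32_t magic = 0, version = 0, length = 0;
    file.read(reinterpret_cast<char*>(&magic), 4);
    file.read(reinterpret_cast<char*>(&version), 4);
    file.read(reinterpret_cast<char*>(&length), 4);
    if (magic != 0x46546C67) // "glTF" read as a little-endian uint32
        return 1;

    // Each chunk starts with its own length and a type tag.
    // The first chunk is always the JSON metadata.
    uint32_t chunkLength = 0, chunkType = 0;
    file.read(reinterpret_cast<char*>(&chunkLength), 4);
    file.read(reinterpret_cast<char*>(&chunkType), 4);

    std::vector<char> json(chunkLength);
    file.read(json.data(), chunkLength);

    std::printf("glb v%u, %u bytes total, JSON chunk: %u bytes\n",
                version, length, chunkLength);
    return 0;
}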


Poster explaining how a glTF asset is structured

In my Ogre_glTF plugin, I chose to rely on TinyGLTF to handle the parsing and loading of the images and binary data. TinyGLTF is header-only and itself uses the STB libraries for image loading and (currently; this is subject to change relatively soon) Niels Lohmann’s JSON library for modern C++.

To avoid any problems with the fact that TinyGLTF includes a JSON parser and other libraries header-only, and that the plugin needs to enclose the implementation of these things, the plugin follows the pimpl idiom. The only headers required are the ones shipped with the built plugin, and all the internal implementation is hidden inside the dynamic library.
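
For illustration, here is roughly what the pimpl idiom looks like in this situation. Class and member names here are made up for the example; they are not the plugin’s actual API:

// --- public header, shipped with the plugin ---
#include <memory>

class GltfLoader
{
public:
    GltfLoader();
    ~GltfLoader(); // must be defined where `impl` is a complete type
    bool loadFromFile(const char* path);

private:
    struct impl;                 // only defined inside the library
    std::unique_ptr<impl> pimpl; // users never see TinyGLTF through this
};

// --- implementation file, compiled into the dynamic library ---
// #include <tiny_gltf.h> would go here, invisible to the plugin's users
struct GltfLoader::impl
{
    // tinygltf::TinyGLTF parser; tinygltf::Model model; ...
    bool load(const char*) { return true; }
};

GltfLoader::GltfLoader() : pimpl(std::make_unique<impl>()) {}
GltfLoader::~GltfLoader() = default;
bool GltfLoader::loadFromFile(const char* path) { return pimpl->load(path); }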

The plugin itself is built as a standard dynamic library that is then “pluginized” for Ogre. This permits you to either dynamically load it as an Ogre plugin (and maybe have conditional support for it) or link it as a regular dynamic library and bypass the Ogre plugin system altogether.

The repository for the Ogre plugin is hosted at https://github.com/Ybalrid/Ogre_glTF

Code is hosted on GitHub, under the MIT license, and at the time of writing this, we are at this commit.

I expect most people to use it as an Ogre plugin (as it is way simpler: you will probably just need to copy a few header files around and add a line to “plugins.cfg” if you are using the regular Ogre framework for configuration). There are two example programs that load random files chosen from the Khronos sample collection: one that uses the library by itself, and another that loads it as a plugin.

The plugin is currently alpha-quality software, but promising. It is available as version 0.2.1 at the time of writing, and is compatible with the head of the v2-1 branch of Ogre.

The plan for the future is to rewrite the texture and material loading code to make it compatible with Ogre v2-2, but I cannot give an ETA on this.

The plugin is for now limited to being a “model” loader, intended to let you use .glb files as a replacement for Ogre’s classic “.mesh + .skeleton + .material” resource format.

The plugin’s functionality includes:

  • Loading glTF meshes and creating v2 “mesh” objects in Ogre’s MeshManager (including Draco-compressed meshes).
  • Loading glTF skin metadata and creating Ogre skeleton objects.
  • Loading glTF animation tracks for a given skeleton and adding them as skeleton animations into Ogre.
  • Loading glTF materials (standard metal-roughness workflow) and creating HlmsPbs datablocks for Ogre v2. The plugin doesn’t support the Specular-Glossiness glTF extension.
  • Loading images in glTF assets and creating textures out of them.
  • “Remuxing” textures from glTF into textures usable by HlmsPbs, including separating the metalness and roughness maps into gray-scale images (see the sketch after this list).
  • Adding a ResourceManager to Ogre’s ResourceGroupManager that retrieves .glb files from Ogre’s resource groups and puts them through the “load mesh – load textures – load materials” pipeline to produce Ogre Meshes and Items.
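
About that “remuxing”: glTF packs both values into a single texture (roughness in the green channel, metalness in the blue channel), while HlmsPbs wants them as separate gray-scale images. A minimal sketch of that separation, assuming tightly packed RGBA8 pixel data (an illustration, not the plugin’s actual code):

#include <cstddef>
#include <cstdint>
#include <vector>

void splitMetalRoughness(const uint8_t* rgba, std::size_t pixelCount,
                         std::vector<uint8_t>& roughness,
                         std::vector<uint8_t>& metalness)
{
    roughness.resize(pixelCount);
    metalness.resize(pixelCount);
    for (std::size_t i = 0; i < pixelCount; ++i)
    {
        roughness[i] = rgba[i * 4 + 1]; // glTF stores roughness in the G channel
        metalness[i] = rgba[i * 4 + 2]; // and metalness in the B channel
    }
}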

This is the most useful open-source project I have ever started, and the one that has received the most contributions and interest from the community. I am really pleased about that.

As for the project’s future development, there are a number of features that would be really valuable to have in the plugin, notably support for a number of glTF extensions that may be used often in future assets, especially the ones directly developed by Khronos:

  • KHR_materials_pbrSpecularGlossiness to support the other common PBR workflow using specular and gloss values
  • KHR_materials_unlit to translate unlit materials into HlmsUnlit datablocks
  • KHR_texture_transform to be able to factor a scale and translation when reading texture coordinates, to load some types of texture atlases

I may also extend the scope of this plugin to make it a scene loader, not just an “object” loader. This would allow using a 3D modeling program directly to create scenes and environments, but I haven’t decided on that yet.

And the only real missing feature is the loading of morph target animations. Currently Ogre 2.x does not support morph target animations in the v2-1 or v2-2 branches. I know somebody was working on this some time ago, and had support for it in OpenGL and Metal, but the last time I checked, that person had closed their pull request.

Sometimes I wonder why some things are inside the C and C++ standard libraries, and some aren’t.

As far as I can be bothered to read the actual “standards” documents (which are mostly written in legalese, not in understandable English for you and me), these languages are defined against an “abstract machine”, and real-world implementations of them, on computers that actually exist, should follow the behavior described for that machine, modulo some implementation details.

Besides the specific case of having these languages in a “freestanding environment” (meaning that the code isn’t relying on being executed inside an operating system, but is running directly on the bare hardware), it seems that some notions, like the OS having a “filesystem” where textual paths point to files that can be opened, read and modified, are pretty standard.

What is strange is that, while the notion of files and file systems is part of the standard library of both C and C++ (and has been since the beginning), networking sockets don’t exist in either language. And C++ only gained the notion of creating and manipulating multiple threads of execution in its standard library in 2011.

All of these concepts (files, threads, and sockets) are operating-system-specific constructs. Opening a file on Linux is fairly different from opening a file on Windows, for example. But the standard library offers a single, unique, portable way of doing so.

These three things have been present in every operating system in use for decades now (since the 70’s at least?). I find it strange that the C standard library only includes files. Since C++ now also has threads, I will consider that a non-issue. So let’s talk about the other one…

A song of files and sockets

I would like to take some time to discuss writing low-level network code in C++. The current interfaces we use to get data to and from a network are built on a notion called sockets.

For lack of a better analogy, a socket can be thought of as a kind of “magical file” that, when written into, sends the written bytes over the network, and when bytes are received, makes them accessible by reading said file. This notion comes from the UNIX world, where everything is effectively a file. And nothing is wrong with that; it’s actually a really simple, straightforward way of doing things.

Most of the operating systems that matter today use this analogy. When I say OSes that matter today, I’m thinking of both the modern UNIX derivatives (Linux, macOS, and the rest of the BSD family), and Microsoft Windows.

The Windows socket API was mostly borrowed from BSD anyway; if you set aside some oddities like a few renamed functions, a few changed data types, and the added initialization procedure, Windows sockets are equivalent to the sockets you have on Linux.

100% non-standard code

But none of this exists inside the standard library of these languages. When you are doing socket programming on Linux, you are not calling functions of the library; you are performing Linux kernel system calls, and you are dealing with file descriptors and bytes.

On Windows, you are calling part of the Win32 API (called WSA, for Windows Sockets API). This situation is unfortunate, because it means that I, as a C++ programmer, need to make sure that my code works both under Linux and under Windows. There’s no single networking API that I can use everywhere without thinking about it. Sure, it’s 80% similar, maybe 90%, but still: if I need to put #ifdef WIN32 in my code for something as fundamental as sending bytes to another computer on a network in 2018, we are doing something wrong.
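
To show what I mean, here’s a small sketch of that dance, just to open and close one TCP socket portably (simplified, error handling omitted):

#ifdef _WIN32
#include <winsock2.h>
using socket_handle = SOCKET;
#else
#include <sys/socket.h>
#include <unistd.h>
using socket_handle = int;
#endif

socket_handle open_tcp_socket()
{
#ifdef _WIN32
    WSADATA wsa; // Windows-only initialization ritual
    WSAStartup(MAKEWORD(2, 2), &wsa);
#endif
    return socket(AF_INET, SOCK_STREAM, 0); // this call, at least, is the same
}

void close_socket(socket_handle s)
{
#ifdef _WIN32
    closesocket(s);
#else
    close(s);
#endif
}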

Moreover, all these OS-level APIs are implemented in C, not C++. This means that everything you have is functions and structures. When describing a socket configuration, you fill structures and pass them to functions. When you need to reference a specific socket, you keep a little token around and give it to functions. When you need to read data, you need a buffer of the correct size ready, and you give a pointer to it, alongside a variable containing the size of said buffer, to a function, making sure you don’t mix them up.

Basically, you are doing 1970s-level computer science. This is fine for the lowest-level code out there, but not for the code of an application.

Unnecessary added complexity

There are solutions to this, some of which are even well advanced on the path to getting standardized inside the C++ language, like Boost.Asio. But, in my humble opinion, there’s a fundamental problem with Asio itself: it is Asio.

For those who aren’t familiar with Asio, its name stands for “asynchronous input and output”. It’s a library, a good library, for what its name stands for: doing input and output in an asynchronous manner. To do this, Asio has multiple contexts and constructs to deal with multiple threads and non-blocking calls, who executes them, and when they are executed.

The problem is: if I just want to connect to the network and exchange data, do I need to worry about io_contexts, handlers, and executors? Probably not.

Keep It Simple, Stupid.

In the C++ world, we struggle to keep simple things simple. The collection of libraries from the Boost project is one good example. Don’t get me wrong: these are high-quality, peer-reviewed C++ libraries. They are good code written by smart people, with the seal of approval of other smart people.

They are a demonstration of what you can do when you want to push the language forward. They contain a lot of useful pieces of generic code that you can reuse. A lot of really important and useful things from Boost have landed in the C++ standard library since 2011, like smart pointers, chrono, array, and lambdas, and probably a lot more are going to jump from living in Boost to being implemented in the standard.

And if today you ask for a recommendation of something to use for networking code in C++, I’m almost sure you’ll get pointed to either Asio, the version of Asio inside Boost, or the Networking TS (an addition to the standard that will probably land in a future version of the C++ language), which is… basically based on Asio.

As you can guess… it’s not that I don’t like Asio; I find it genuinely interesting, and potentially useful. But I’m unsure it is the thing that I want to see standardized.

As stated earlier, if you’re just going to do some TCP/UDP communication, Asio comes wrapped in unnecessary complexity.

Moreover, your OS comes with a socket API, but it isn’t super convenient to use in C++, and it’s not portable without doing the ugly #ifdef preprocessor dance.

A few months ago, I was thinking about this situation, and thought: why not just wrap the OS library in a nice C++17 interface?

Introducing kissnet

Kissnet is a little personal project I started during the summer, and that I tweak a little from time to time. It’s mostly for fun, but I feel like some people could have some use for something like this.

The design goals of kissnet are pretty straightforward:

  • Be a single-header library, with nothing to link against
  • Be a single API for all supported operating systems
  • Use C++17 std::byte as a “safe” way of representing non-arithmetic binary data
  • Be the lightest/thinnest possible layer on top of the operating system
  • Handle all (or most) of what TCP and UDP sockets can do in IPv4/v6
  • Not require the user to worry about the endianness of the network layer
  • Just transport the bytes, and do nothing else
  • Hide OS-specific weirdness
  • Optional exception support (you can choose to replace throwing exceptions with the program either aborting or calling your own error handler)
  • Stay simple

Kissnet only implements 3 kinds of objects:

  • A socket class. The behavior of the socket is templated on the protocol used (TCP vs UDP) and the version of IP used (IPv4 vs IPv6)
  • An endpoint class that permits you to specify a host and port as just a string and a number, or a “hostname:port” string
  • A “buffer<size>” class that is just syntactic sugar around an std::array<std::byte, size>

Buffers are for holding received data; buffers know their own size, and can read the correct amount of bytes for you. Kissnet doesn’t care what data is sent; that’s not kissnet’s job.

Sockets are non-copyable (but movable) objects, and they implement the typical operations you can apply to a socket (bind, listen, accept, connect, send, receive).

Kissnet automatically manages the initialization/deinitialization of the underlying socket API if needed (like on Windows). This is done by exploiting the reference counting of std::shared_ptr<>. This is the only overhead on top of holding a socket “file descriptor” (= a simple integer variable).
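
To give an idea of what using these three objects looks like, here is a sketch of a tiny TCP client. I’m writing this from memory of the library’s README, so treat the exact signatures as approximate rather than authoritative:

#include <kissnet.hpp>
#include <cstddef>
#include <string>
namespace kn = kissnet;

int main()
{
    // An endpoint is just a host and a port.
    kn::tcp_socket a_socket(kn::endpoint("example.com", 80));
    a_socket.connect();

    // Kissnet just transports bytes; formatting the request is our job.
    const std::string request = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n";
    a_socket.send(reinterpret_cast<const std::byte*>(request.data()), request.size());

    // A buffer knows its own size and receives the right amount of bytes.
    kn::buffer<4096> buff;
    const auto [received_bytes, status] = a_socket.recv(buff);
    if (status)
    {
        // buff now holds `received_bytes` valid bytes
    }
    return 0;
}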

I’ve only used kissnet in a couple of toy programs and not in a real project; however, I already think that I prefer this simple, close-to-the-metal, yet type-safe and cross-platform library to using something like Asio. Asio feels like using a bazooka to kill a fly. I’ve heard a few opinions going the same way as mine. This is why I’ve put this little experimental project on GitHub, under the permissive MIT license.

Why glTF 2.0 is awesome!

16 July 2018 | 3D programming stuff

There’s one single thing that I find truly frustrating when dealing with multiple 3D-related programs: making them exchange 3D assets.

You don’t have much guarantee that what comes out of one program will look the same in something else (e.g. a game engine: you may work in meters, and find out that Unreal works in centimeters; they could use different conventions for texturing; material definitions may just “not work”…)

There are scale issues, animation issues, texture binding issues, material problems in general.

All of this is generally dealt with through a huge and horrible mess of import/export scripts. And the more of these you need to run in your toolchain, the worse it gets.

Still, there are a number of file formats that are more or less standardized in this industry, but none of them really fits all the use cases of real-time rendering/video games (and by extension, VR).

Most of them are much better suited as “authoring” formats: files for 3D modeling programs, like Autodesk’s FBX or Collada from the Khronos Group. They are complex, generally too flexible in how they can be implemented and how they can represent a specific thing, and they just contain “too much data” for what you would want to use from a programming standpoint.

FBX seems to be a de-facto standard, especially in the video game industry, because of the predominance of Autodesk software as modeling and animation tools. When they aren’t using 3ds Max, they are using Maya… 😉

Generally, what you want to do is transform these assets into serialized binary files that can be loaded quickly into a game engine. This is generally done on a per-engine basis. From what I can tell, the general workflow of “putting things inside Unity” involves putting (for example) .FBX files into the project directory; when Unity builds your game, it converts every resource into formats optimized for the target platform.

Still, this approach of reinventing the wheel, one “transmission format” for each and every target application of 3D assets, doesn’t seem to be the right thing.

In the 2D world, this problem was fixed when the whole industry basically standardized around the JPEG file format for 2D pictures. Every single camera out there is (special cases or advanced users aside) outputting JPEG encoded and compressed image files.

Obviously, a JPEG file doesn’t have all the details that a Photoshop “project” (a PSD) file would contain, but JPEG is not intended for authoring images, it’s intended for sharing them.

What we need is a “JPEG for 3D”, and that’s exactly what the Khronos group, with an open community effort, developed, and released as the “OpenGL Transmission Format”, and version 2.0 of the specification was released in June 2017.

This format permits describing 3D objects (and even whole scenes with lights and cameras, and animations) in a standard way. Contrary to Collada, for example, there is no room for interpretation, and no two ways of describing the same thing in glTF.

glTF represents model data as simple binary buffers. “Accessors” permit interpreting that data contextually. This is particularly well adapted to loading these files in OpenGL, where you can load each buffer into the GPU with glBufferData, then walk each accessor and call glVertexAttribPointer to bind it to the location of each vertex element you have in a buffer.
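
A sketch of that mapping, with the parameters standing in for fields you would read from the glTF JSON (the loader header is just one possibility):

#include <GL/glew.h> // or whatever OpenGL loader you use
#include <cstddef>

// Upload a raw glTF binary buffer to the GPU as-is.
GLuint uploadGltfBuffer(const void* data, GLsizeiptr byteLength)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, byteLength, data, GL_STATIC_DRAW);
    return vbo;
}

// One call per accessor: e.g. a "VEC3" of componentType GL_FLOAT for vertex
// positions, with the stride and offset taken from the bufferView.
void bindAccessor(GLuint location, GLint componentCount, GLenum componentType,
                  GLsizei byteStride, std::size_t byteOffset)
{
    glVertexAttribPointer(location, componentCount, componentType, GL_FALSE,
                          byteStride, reinterpret_cast<const void*>(byteOffset));
    glEnableVertexAttribArray(location);
}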

glTF is based on a simple scene graph description, written in JSON, and a number of raw resources: the binary buffers and the texture data as images.

The standard is generally easy to understand in its plain text, but some of the notions used are simpler to understand with some drawings. Thankfully, there’s this awesome “What the Duck is glTF?” poster on the official repository:

glTF overview poster
This poster describes most of what you need to understand about the glTF file format

There is currently a ton of software, tools and libraries that support glTF, and I hope there will be even more in the future.

This format is developed by the Khronos Group, alongside an official implementation of an exporter for Blender, and it already has a lot of support from the industry.

The glTF NASCAR jacket has some really important sponsors!

I don’t think I really need to elaborate further on why I think every effort going towards more glTF support will be beneficial for the whole industry/community; supporting glTF today should be your go-to option.

Juan Linietsky, Godot’s main developer, actually wrote a lengthy article comparing multiple formats, concluding that we should all support glTF 2, and I cannot agree more!

In my next post, I will relate what it took to write a glTF import plugin in C++ for the popular open-source Ogre rendering engine. It is currently a work-in-progress project, but it’s starting to become usable, and can be found here.

So, I recently had the chance to try out an HTC Vive on a Linux machine. Naturally, I installed Arch on it 😉

The installation is pretty straightforward, but there are some little catches if you want to do OpenGL development with OpenVR on Linux (OpenVR is the API you use to talk to the SteamVR runtime).

SteamVR has had a Linux beta since February 2017. They also announced that the SteamVR runtime itself is implemented with Vulkan only.

First things first: I am unaware whether it’s currently possible to update the firmware of the base stations on Linux. The setup I’m using also runs on Windows, and the firmwares were already updated on that platform. That’s one thing I can’t tell you about.

So, to get started, you will need to have Steam running on your machine. Arch Linux is now a 64-bit-only distribution, and Steam is a 32-bit-only program. So, if it’s not already the case, you need to activate the [multilib] repository for pacman (just uncomment its lines in /etc/pacman.conf).
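
Once uncommented, those lines in /etc/pacman.conf should look like this:

[multilib]
Include = /etc/pacman.d/mirrorlist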

Also, to do VR, you do need a good GPU. Here I’m running an Nvidia GTX 1070 with the proprietary drivers. Apparently you can use an AMD one with the latest version of the Mesa drivers; I don’t have access to any modern AMD graphics card, so I can’t tell.

Then, you will need to install the following packages:

  • steam
  • lsb-release
  • your graphics driver packages in 32-bit (lib32-nvidia-utils, lib32-libvdpau)

Once you have installed Steam, launch it, connect to or create an account, and install the SteamVR package; then you want to turn on the beta:

Once you have SteamVR on your machine, and before you plug the Vive into the computer, you will need to install some udev rules to permit the app to access the device directly.

Create a file /lib/udev/rules.d/60-HTC-Vive-perms.rules and write the following content in it:

# HTC Vive HID Sensor naming and permissioning
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="0bb4", ATTRS{idProduct}=="2c87", TAG+="uaccess"
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="28de", ATTRS{idProduct}=="2101", TAG+="uaccess"
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="28de", ATTRS{idProduct}=="2000", TAG+="uaccess"
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="28de", ATTRS{idProduct}=="1043", TAG+="uaccess"
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="28de", ATTRS{idProduct}=="2050", TAG+="uaccess"
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="28de", ATTRS{idProduct}=="2011", TAG+="uaccess"
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="28de", ATTRS{idProduct}=="2012", TAG+="uaccess"
SUBSYSTEM=="usb", ATTRS{idVendor}=="0bb4", ATTRS{idProduct}=="2c87", TAG+="uaccess"
# HTC Camera USB Node
SUBSYSTEM=="usb", ATTRS{idVendor}=="114d", ATTRS{idProduct}=="8328", TAG+="uaccess"
# HTC Mass Storage Node
SUBSYSTEM=="usb", ATTRS{idVendor}=="114d", ATTRS{idProduct}=="8200", TAG+="uaccess"
SUBSYSTEM=="usb", ATTRS{idVendor}=="114d", ATTRS{idProduct}=="8a12", TAG+="uaccess"

You can now open SteamVR, and you should be prompted to run the room setup program to configure your “play space”.

Then, if you want to use OpenVR to render to the Vive with OpenGL, you will need a Vulkan runtime and development packages. For this, you need to install the vulkan-devel package.

You may also need to launch your programs under the Steam runtime; this is done using this script:

~/.steam/steam/ubuntu12_32/steam-runtime/run.sh ./my_steamvr_app

And that should be about it. Programs that link against the OpenVR API will “just work” now.
The info in this article comes from my own experience, and from this document from Valve: https://github.com/ValveSoftware/SteamVR-for-Linux

Annwvyn, my game engine, is now “officially” compatible with Linux with OpenVR. 😉

I would be really happy if it were possible to use the Oculus Rift on Linux, yet there isn’t any good solution right now. Oculus froze its Linux development efforts years ago, so we aren’t going to see an official SDK any time soon. The OpenHMD project has made “some” progress, but nothing really usable for now.

I don’t post regularly on this blog, but I really should post more… ^^”

If you have ever read me here before, you probably know that one of my pet projects is a game engine called Annwvyn.

Where I started from

Annwvyn started as just “a few classes to act as glue code around a few free software libraries”. I really thought that in 2 months I had some piece of software worthy of bearing the name “game engine”. Obviously, I was just a foolish little nerd playing around with an Oculus DK1 in his room, but still, I did actually manage to have something render in real time on the Rift, with some physics and sound inside! That was cool!

Everything started as just a test project; then I decided to remove the int main(void) function I had and stash everything else inside a DLL file. That was quickly done (after banging my head against the MSDN website and Visual Studio 2010’s project settings, and writing a macro to insert __declspec(dllexport) or __declspec(dllimport) everywhere).
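
For the record, that macro is the classic export/import pattern. A sketch with illustrative names (not Annwvyn’s actual ones):

// When building the DLL itself, the build defines MYENGINE_BUILD_DLL,
// so classes get exported; client code including the header imports them.
#ifdef _WIN32
    #ifdef MYENGINE_BUILD_DLL
        #define MYENGINE_API __declspec(dllexport)
    #else
        #define MYENGINE_API __declspec(dllimport)
    #endif
#else
    #define MYENGINE_API // not needed outside of Windows
#endif

class MYENGINE_API GameEngine { /* ... */ };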

The need for testability and the difficulties of retrofitting tests

So let’s be clear: I know about good development practices, about automated testing, about TDD, about software architecture, about UML class diagrams and all that jazz. Heck, I’m a student in those things. But this little hobby project wasn’t intended to grow into 17,000 lines of C++ with a lot of modules, bindings to a scripting language, an event dispatch system, and a lot of interconnected components that abstract writing data to the file system (well, it’s for video game save files), or rendering to multiple different kinds of VR hardware, or extending Ogre’s resource manager. Hell, I did not know that Ogre had such a complex resource management system. I thought Ogre was a C++ thing that drew polygons on the screen without me having to learn OpenGL. (I still had to learn quite a lot about OpenGL because I needed to hack into its guts, but I blogged about that already.)

Let’s just say that things were really getting out of hand, and that I seriously needed to start thinking about making the code saner, and about being able to detect when I break stuff.

(more…)

So, the other day I was working on some Ogre + Qt5 code.

I haven’t really worked with Qt much since Qt4 was the hot new thing, so I was a bit rusty, but I definitely like the new things I’ve seen in version 5. But I’m not here to discuss Qt 5 today. ^^

There are a few weird things Qt does that I can’t really wrap my head around. One is the incompatibility between QString and std::string (there’s probably a nasty problem called “Unicode” behind this), but another is that QDebug is not an std::ostream-derived object.

If you don’t know, in the Qt world, a QApplication is expected to write its debugging output into a QDebug object, with an output stream operator (“<<”). A QDebug object is easily accessible by calling qDebug(), which makes this code fairly common:

qDebug() << "Some stuff to log";

This is a pretty standard thing to do in C++; for instance, the standard library itself makes heavy use of the stream operator for I/O (hence the main header being called iostream), and on a personal note: they are, IMO, the cleanest way to represent in code how to push stuff into and out of a program.

Qt chose not to use the standard output stream object as the base for their streams, but to rebuild it from scratch. That’s fine, except when you are trying to interact with something non-Qt.

Every object in Ogre that contains relevant data (the vectors, matrices, colors and other stuff) that can be logged has an operator<<() defined to write to standard streams, but obviously, that will not work with Qt.

If you are lazy like me, and consider that it’s code “for development” that you intend to remove/switch out in production, here’s a snippet you can paste into a header somewhere to redirect these streams to QDebug’s output:

#include <QDebug>
#include <sstream>

template <class T>
static QDebug operator<<(QDebug dbg, const T& obj)
{
    std::stringstream ss;
    ss << obj; // obj has to have an operator<< overload itself to go into the stringstream
    dbg << ss.str().c_str(); // get the string, then the C string; should be ASCII
    return dbg; // return the debug stream by value to allow chaining QDebug objects. Ask Qt's developers, not me
}

This is absolutely not ideal; for example, that stringstream object will be instantiated on each call of this templated function. If you attempt to, for example, stream an Ogre::Vector3 into a QDebug, the compiler will stamp out an operator<< that writes the text into a stream, extracts the string, and calls the operator<< of QDebug that takes C-style char* strings.

Also, I have no idea why they chose to pass the QDebug object by value in this function. I did not take the time to dig much under the hood, but it seems to be the way Qt deals with this.

It works well enough for me to check the content of some Ogre::Vector3 objects in the debug panel of Qt Creator, or in the terminal output on Linux. ^^”

So, while working on my game engine, I was curious about the technical requirements for submitting an application to the Oculus Store.

One of the things required is that you target the audio output (and input) devices selected by the user in the Oculus app.

So, how does the Oculus SDK tell you which device is selected?

(more…)

(Seriously, I hesitated some time between this version and the original, but that’s not the point of this article, and I kinda like the 80’s vibe anyway…)

I think we can all agree here: Virtual Reality (VR) is now, and not science fiction anymore. “Accessible” (not cheap by any stretch of the imagination) hardware is available for consumers to buy and enjoy. Now you can experience being immersed in virtual worlds generated in real time by a gaming computer, and feel presence in them.

The subject I’m about to address doesn’t really apply to mobile (smartphone-powered) VR, since these experiences tend to be static ones. Mobile VR will need reliable positional tracking of the user’s head before hitting this issue… We will limit the discussion to actual computer-based VR.

One problem that still bothers me, and the whole VR community as well, is: in order to explore a virtual world, you have to, well, walk inside the virtual world. And doing this comfortably for the user is, interestingly, more complex than you might think.

You will always have a limited space for your VR play room. You can’t physically walk from one town to another in Skyrim inside your living room; the open world of that game is a bit bigger than a few square meters.

The case of cockpit games like Elite:Dangerous aside, simulating locomotion is tricky. Any situation where you’re moving can induce nausea.

Cockpit-based games ground you in the fact that you’re seated somewhere and “not moving”, because most of the objects around you don’t move (the inside of the spaceship/car/plane). This makes it mostly a non-problem: you can do barrel rolls and loopings all day long and keep your meal inside your stomach. And you have less chance of killing yourself than inside an actual fighter jet 😉

Simulator (VR) sickness is induced by a disparity between the visual cues of acceleration you get from your visual system, and what your vestibular system senses. The vestibular system is your equilibrium center; it’s a bit like a natural accelerometer located inside your inner ears.

(more…)

If you know me, you probably also know that I’m developing a small C++ game engine called Annwvyn, aimed at simplifying the creation of VR games and experiences for “consumer grade” VR systems (mainly the Oculus Rift, and more recently the Vive too).

The funny question is: with the existence of tools like Unreal Engine 4 or Unity 5, which are free (or almost free) to use, why bother?

There are multiple reasons, but to understand why, I should add some context. This story started in 2013, at a time when you actually had to pay to use Unity with the first Oculus Rift Development Kit (aka DK1), and when UDK (the version of Unreal Engine 3 you were able to use) was such a mess I wouldn’t want to touch it…

(more…)