No-nonsense networking for C++: Introducing kissnet, a K.I.S.S. socket library!

Sometimes I wonder why some things are inside the C and C++ standard libraries, and some aren’t.

As far as I can be bothered to read the actual “standards” documents (which are mostly written in legalese, not in understandable English for you and me), these languages are defined against an “abstract machine”, and the actual real-world implementations of them, on computers that actually exist, should follow the behavior described for that machine, modulo some implementation details.

Aside from the specific case of using these languages in a “freestanding environment” (meaning that the code isn’t relying on being executed inside an operating system, but runs directly on the bare hardware), some notions seem pretty standard: for example, the OS having a “filesystem” where textual paths point to files that can be opened, read, and modified.

What is strange is that, while the notion of files and file systems has been part of the standard library for both C and C++ since the beginning, networking sockets exist in neither language. And C++ only gained the notion of creating and manipulating multiple threads of execution in its standard library in 2011.

All of these concepts (files, threads, and sockets) are operating-system-specific constructs. Opening a file on Linux is fairly different from opening a file on Windows, for example, yet the standard library offers a single, unique, and portable way of doing so.

These three things have been present in all operating systems in use for decades now (since the ’70s at least?). I find it strange that the C standard library only includes files. Since C++ now also has threads, I’ll consider that one a non-issue. So let’s talk about the other one…

A song of files and sockets

I would like to take some time to discuss writing some low-level network code in C++. The current interfaces for getting data to and from a network are built around a notion called sockets.

For lack of a better analogy, a socket can be thought of as a kind of “magical file” that, when written to, sends those bytes out on the network, and when bytes are received, makes them accessible by reading said file. This notion comes from the UNIX world, where everything is effectively a file. And nothing is wrong with that; it’s actually a really simple, straightforward way of doing things.

Most of the operating systems that matter today use this analogy. By OSes that matter, I mean the modern UNIX derivatives (Linux, macOS, and the rest of the BSD family) and Microsoft Windows.

The Windows socket API was mostly borrowed from BSD anyway; if you set aside a few oddities like renamed functions, changed data types, and the added initialization procedure, Windows sockets are equivalent to the sockets you have on Linux.

100% non-standard code

But none of this exists inside the standard library of either language. When you are doing socket programming on Linux, you are not calling standard library functions, you are performing Linux kernel system calls, and you are dealing with file descriptors and bytes.

On Windows, you are calling a part of the Win32 API called WSA (Windows Sockets API). This situation is unfortunate, because it means that I, as a C++ programmer, need to make sure that my code works both under Linux and under Windows. There’s no single networking API that I can use everywhere without thinking about it. Sure, it’s 80% or maybe 90% similar, but still: if I need to put #ifdef WIN32 in my code for something as fundamental as sending bytes to another computer on a network in 2018, we are doing something wrong.

Moreover, all these OS-level APIs are implemented in C, not C++. This means that everything you have is functions and structures. To describe a socket’s configuration, you fill in structures and pass them to functions. To reference a specific socket, you keep a little token around and hand it to functions. To read data, you need a buffer of the correct size ready, and you pass a pointer to it, alongside a variable containing the size of said buffer, to a function, making sure you don’t mix them up.

Basically, you are doing 1970s-level computer science. This is fine for the lowest-level code out there, but not for the code of an application.
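To make this concrete, here is a simplified sketch of the boilerplate the raw APIs impose before you can even create a socket (error handling and cleanup omitted):

#ifdef _WIN32
#include <winsock2.h>
using socket_handle = SOCKET;
#else
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
using socket_handle = int;
#endif

socket_handle open_tcp_socket()
{
#ifdef _WIN32
    // Windows demands an explicit startup call before any socket can be used
    WSADATA wsa_data;
    WSAStartup(MAKEWORD(2, 2), &wsa_data);
#endif
    // The call itself is, thankfully, identical on both platforms
    return socket(AF_INET, SOCK_STREAM, 0);
}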

Unnecessary added complexity

There are solutions to this, some of which are well advanced on the path to getting standardized into the C++ language, like Boost.Asio. But in my humble opinion, there’s a fundamental problem with Asio itself: it is Asio.

For those who aren’t familiar with Asio, its name stands for “asynchronous input and output”. It’s a good library for exactly what its name stands for: doing input and output in an asynchronous manner. To do this, Asio has multiple contexts and constructs to deal with multiple threads, non-blocking calls, who executes them, and when they are executed.

The problem is: if I just want to connect to the network and exchange data, do I need to worry about io_contexts, handlers, and executors? Probably not.
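To be fair, Asio does support plain blocking calls, but even the simplest “just send some bytes” program drags an io_context along. A hedged sketch with standalone Asio (API names as I remember them):

#include <asio.hpp>
#include <string>

int main()
{
    // Even a fully blocking "connect, then send" needs an io_context...
    asio::io_context io;

    asio::ip::tcp::resolver resolver(io);
    asio::ip::tcp::socket socket(io);
    asio::connect(socket, resolver.resolve("example.com", "80"));

    const std::string request = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n";
    asio::write(socket, asio::buffer(request));

    return 0;
}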

Keep It Simple, Stupid.

In the C++ world, we struggle to keep simple things simple. The collection of libraries from the Boost project is one good example. Don’t get me wrong: these are high-quality, peer-reviewed C++ libraries. They are good code written by smart people, with a seal of approval from other smart people.

They are a demonstration of what you can do when you want to push the language forward, and they contain a lot of useful pieces of generic code that you can reuse. A lot of really important and useful things from Boost have landed in the C++ standard library since 2011, like smart pointers, chrono, array, and lambdas, and probably a lot more things like that are going to jump from existing in Boost to being implemented inside the standard.

And if today you ask for a recommendation for doing networking code in C++, I’m almost sure you’ll get pointed to either Asio, the version of Asio inside Boost, or the Networking TS (an addition to the standard that will probably land in a future version of the C++ language) which is… basically based on Asio.

As you can guess, it’s not that I don’t like Asio; I find it genuinely interesting and potentially useful. But I’m unsure it’s the thing I want to see standardized.

As stated earlier, if you’re just going to do some TCP/UDP communication, Asio comes wrapped in unnecessary complexity.

Moreover, your OS comes with a socket API, but it isn’t super convenient to use in C++, and it’s not portable without doing the ugly #ifdef preprocessor dance.

A few months ago, I was thinking about this situation and thought: why not just wrap the OS socket API in a nice C++17 interface?

Introducing kissnet

Kissnet is a little personal project I started during the summer and tweak a little from time to time. It’s mostly for fun, but I feel like some people could have a use for something like this.

The design goals of kissnet are pretty straightforward:

  • Be a single-header library, nothing to link against
  • Be a single API for all supported operating systems
  • Use C++17 std::byte as a “safe” way of representing non-arithmetic binary data
  • Be the lightest/thinnest possible layer on top of the operating system
  • Handle all (or most) of what TCP and UDP sockets can do in IPv4/IPv6
  • Not require the user to worry about the endianness of the network layer
  • Just transport the bytes, and do nothing else
  • Hide OS-specific weirdness
  • Offer optional exception support (you can choose to replace throwing exceptions with either aborting or calling your own error handler)
  • Stay simple

Kissnet only implements three kinds of objects:

  • A socket class, whose behavior is templated on the protocol used (TCP vs. UDP) and the version of IP used (IPv4 vs. IPv6)
  • An endpoint class that lets you specify a host and port as a string and a number, or as a single “hostname:port” string
  • A buffer<size> class that is just syntactic sugar around a std::array<std::byte, size>

Buffers are for holding received data; they know their own size and can read the correct number of bytes for you. Kissnet doesn’t care what data is sent; that’s not kissnet’s job.

Sockets are non-copyable (but movable) objects that implement the typical operations you can apply to a socket (bind, listen, accept, connect, send, receive).

Kissnet automatically manages the initialization/deinitialization of the underlying socket API where needed (like on Windows) by exploiting the reference counting of std::shared_ptr<>. This is the only overhead on top of holding a socket “file descriptor” (= a simple integer variable).
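To give an idea of the resulting ergonomics, here is a minimal sketch of a TCP client (written from memory and hedged accordingly; check the GitHub repository for the exact, current API):

#include <kissnet.hpp>
#include <cstddef>
#include <cstring>

namespace kn = kissnet;

int main()
{
    // A TCP socket (IPv4 by default) aimed at a "hostname:port" endpoint
    kn::tcp_socket socket(kn::endpoint("example.com:80"));
    socket.connect();

    // Just transport the bytes: a raw HTTP request, sent as-is
    const char request[] = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n";
    socket.send(reinterpret_cast<const std::byte*>(request), std::strlen(request));

    // A buffer knows its own size, so recv() reads at most that many bytes
    kn::buffer<4096> static_buffer;
    const auto [bytes_received, status] = socket.recv(static_buffer);

    return 0;
}

No io_context, no executor: a socket, an endpoint, and a buffer.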

I’ve only used kissnet in a couple of toy programs, not in a real project. However, I already think I prefer this simple, down-to-the-metal, yet type-safe and cross-platform library to something like Asio, which feels like using a bazooka to kill a fly. I’ve heard a few opinions going the same way as mine, so I’ve put this little experimental project on GitHub, under a permissive MIT license.

Why glTF 2.0 is awesome!

There’s one single thing that I find truly frustrating when dealing with multiple pieces of 3D-related software: making them exchange 3D assets.

You don’t have much guarantee that what comes out of one program will look the same in another (e.g. a game engine: you may work in meters and find out that Unreal works in centimeters; they could use different conventions for texturing; material definitions may just “not work”…).

There are scale issues, animation issues, texture binding issues, and material problems in general.

All of this is generally dealt with through a huge and horrible mess of import/export scripts. And the more of these you need to run in your toolchain, the worse it gets.

Still, there are a number of file formats that are more or less standardized in this industry, but none of them really fits all the use cases for real-time rendering/video games (and by extension, VR).

Most of them are way better suited as “authoring” formats: files for 3D modeling programs, like Autodesk’s FBX or Collada from the Khronos Group. They are complex, generally too flexible in how they can be implemented and how they can represent a specific thing, and just contain “too much data” for what you would want from a programming standpoint.

FBX seems to be a de facto standard, especially in the video game industry, because of the predominance of Autodesk software as modeling and animation tools. When they aren’t using 3ds Max, they are using Maya… 😉

Generally, what you want to do is transform these assets into serialized binary files that can be loaded quickly into a game engine. This is usually done on a per-engine basis. From what I can tell, the workflow of “putting things inside Unity” generally involves putting (for example) .FBX files into the project directory; when Unity builds your game, it converts every resource into formats optimized for the target platform.

Still, this approach of reinventing the wheel, with one “transmission format” for each and every target application of 3D assets, doesn’t seem to be the right thing.

In the 2D world, this problem was solved when the whole industry basically standardized on the JPEG file format for 2D pictures. Every single camera out there (special cases and advanced users aside) outputs JPEG-encoded, compressed image files.

Obviously, a JPEG file doesn’t have all the details that a Photoshop “project” (a PSD file) would contain, but JPEG is not intended for authoring images; it’s intended for sharing them.

What we need is a “JPEG for 3D”, and that’s exactly what the Khronos Group, through an open community effort, developed and released as the “GL Transmission Format” (glTF); version 2.0 of the specification was released in June 2017.

This format describes 3D objects (and even whole scenes, with lights, cameras, and animations) in a standard way. Contrary to Collada, for example, there is no room for interpretation, and no two ways of describing the same thing in glTF.

glTF represents model data as simple binary buffers, and “accessors” describe how to interpret that data contextually. This is particularly well adapted to loading these files in OpenGL, where you can load each buffer into the GPU with glBufferData, then walk each accessor and call glVertexAttribPointer to bind it to the location of each vertex element you have in a buffer.
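As an illustration, here is a hedged sketch of that mapping; buffer_bytes and the accessor fields (position_location, byte_stride, byte_offset) are hypothetical stand-ins for values a real loader would parse out of the glTF JSON:

#include <cstddef>
#include <vector>
#include <GL/glew.h> // or whatever OpenGL loader you use

// Upload one glTF binary buffer, then bind one accessor from it
GLuint upload_and_bind_position(const std::vector<std::byte>& buffer_bytes,
                                GLuint position_location,
                                GLsizei byte_stride,
                                std::size_t byte_offset)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, buffer_bytes.size(), buffer_bytes.data(), GL_STATIC_DRAW);

    // The accessor tells us how to read a slice of that buffer: here,
    // 3-component floats (a POSITION attribute) at byte_offset, with byte_stride
    glEnableVertexAttribArray(position_location);
    glVertexAttribPointer(position_location, 3, GL_FLOAT, GL_FALSE, byte_stride,
                          reinterpret_cast<const void*>(byte_offset));
    return vbo;
}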

glTF is based on a simple scene-graph description written in JSON, plus a number of raw resources: the binary buffers and the texture data as images.

The standard is generally easy to understand in its plain text, but some of the notions used are easier to grasp with drawings. Thankfully, there’s this awesome “What the Duck is glTF?” poster on the official repository:

glTF overview poster

This poster describes most of what you need to understand about the glTF file format.

There is currently a ton of software, tools, and libraries that support glTF, and I hope there will be even more in the future.

The format is developed by the Khronos Group, alongside an official exporter implementation for Blender, and it already has a lot of support from the industry.

The glTF NASCAR jacket has some really important sponsors!

I don’t think I really need to elaborate on why I believe every effort towards broader glTF support will benefit the whole industry/community; supporting glTF today should be your go-to option.

Juan Linietsky, the main Godot developer, actually wrote a lengthy article comparing multiple formats, concluding that we should all support glTF 2, and I cannot agree more!

In my next post, I will relate what it took to write a glTF import plugin in C++ for the popular open-source Ogre rendering engine. It is currently a work-in-progress project, but it is starting to become usable, and it can be found here.

Install and run SteamVR on ArchLinux (for using an HTC-Vive) and do OpenGL/OpenVR development

So, I recently had the chance to try out an HTC-Vive on a Linux machine. Naturally, I installed Arch on it 😉

The installation is pretty straightforward, but there are a few little catches if you want to do OpenGL development on Linux with OpenVR (OpenVR is the API you use to talk to the SteamVR runtime).

SteamVR has had a Linux beta since February 2017. Valve also announced that the SteamVR runtime itself is implemented with Vulkan only.

First things first: I don’t know whether it’s currently possible to update the base stations’ firmware on Linux. The setup I’m using also runs on Windows, and the firmware was already updated on that platform, so that’s one thing I can’t tell you about.

So, to get started, you need Steam running on your machine. ArchLinux is now a 64-bit-only distribution, and Steam is a 32-bit-only program, so if it’s not already the case, you need to activate the [multilib] repository for pacman (just uncomment the lines for it in /etc/pacman.conf).

Also, to do VR, you do need a good GPU. Here I’m running an Nvidia GTX 1070 with the proprietary drivers. Apparently you can use an AMD card with the latest version of the Mesa drivers, but I don’t have access to any modern AMD graphics card, so I can’t tell.

Then, you will need to install the following packages:

  • steam
  • lsb-release
  • your graphics driver packages in 32-bit (lib32-nvidia-utils, lib32-libvdpau)

Once you have installed Steam, launch it, log in or create an account, and install the SteamVR package; then you’ll want to turn on the beta:

Once you have SteamVR on your machine, and before you plug the Vive into the computer, you will need to install some udev rules to permit the app to access the device directly.

Create a file /lib/udev/rules.d/60-HTC-Vive-perms.rules and write the following content in it:

# HTC Vive HID Sensor naming and permissioning
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="0bb4", ATTRS{idProduct}=="2c87", TAG+="uaccess"
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="28de", ATTRS{idProduct}=="2101", TAG+="uaccess"
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="28de", ATTRS{idProduct}=="2000", TAG+="uaccess"
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="28de", ATTRS{idProduct}=="1043", TAG+="uaccess"
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="28de", ATTRS{idProduct}=="2050", TAG+="uaccess"
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="28de", ATTRS{idProduct}=="2011", TAG+="uaccess"
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="28de", ATTRS{idProduct}=="2012", TAG+="uaccess"
SUBSYSTEM=="usb", ATTRS{idVendor}=="0bb4", ATTRS{idProduct}=="2c87", TAG+="uaccess"
# HTC Camera USB Node
SUBSYSTEM=="usb", ATTRS{idVendor}=="114d", ATTRS{idProduct}=="8328", TAG+="uaccess"
# HTC Mass Storage Node
SUBSYSTEM=="usb", ATTRS{idVendor}=="114d", ATTRS{idProduct}=="8200", TAG+="uaccess"
SUBSYSTEM=="usb", ATTRS{idVendor}=="114d", ATTRS{idProduct}=="8a12", TAG+="uaccess"

You can now open SteamVR, and you should be prompted to run the room setup program to configure your “play space”.

Then, if you want to use OpenVR to render to the Vive with OpenGL, you will need a Vulkan runtime and development package. For this, you need to install the vulkan-devel package.

You may also need to launch your programs under the Steam runtime; this is done using this script:

~/.steam/steam/ubuntu12_32/steam-runtime/run.sh ./my_steamvr_app

And that should be about it. Programs that link against the OpenVR API will “just work” now.
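If you want a quick sanity check that your build environment and the runtime can talk to each other, here is a minimal, hedged OpenVR smoke test (only core OpenVR calls; this is not a full rendering setup):

#include <openvr.h>
#include <cstdio>

int main()
{
    // Try to start an OpenVR "scene" application against the running SteamVR runtime
    vr::EVRInitError error = vr::VRInitError_None;
    vr::IVRSystem* hmd = vr::VR_Init(&error, vr::VRApplication_Scene);

    if (error != vr::VRInitError_None || !hmd)
    {
        std::printf("OpenVR init failed: %s\n", vr::VR_GetVRInitErrorAsEnglishDescription(error));
        return 1;
    }

    std::printf("SteamVR runtime is up, HMD detected!\n");
    vr::VR_Shutdown();
    return 0;
}
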
The info in this article comes from my own experience, and this document from Valve: https://github.com/ValveSoftware/SteamVR-for-Linux

Annwvyn, my game engine, is now “officially” compatible with Linux via OpenVR. 😉

I would be really happy if it were possible to use the Oculus Rift on Linux, but there isn’t any good solution right now. Oculus froze its Linux development effort years ago, so we aren’t going to see an official SDK any time soon. The OpenHMD project has made “some” progress, but nothing really usable for now.

“Scenario Testing” a game engine by misusing a unit test framework.

I don’t post regularly on this blog, but I really should post more… ^^”

If you have ever read me here before, you probably know that one of my pet projects is a game engine called Annwvyn.

Where I’m coming from

Annwvyn started as just “a few classes to act as glue code around a few free software libraries”. I really thought that in two months I had a piece of software worthy of bearing the name “game engine”. Obviously, I was just a foolish little nerd playing around with an Oculus DK1 in his room, but still, I did actually manage to get something rendering in real time on the Rift, with some physics and sound inside! That was cool!

Everything started as just a test project; then I decided to remove the int main(void) function I had and stash everything else inside a DLL file. That was quickly done (after banging my head against the MSDN website and Visual Studio 2010’s project settings, and writing a macro to insert __declspec(dllexport) or __declspec(dllimport) everywhere).
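For the curious, that macro is the classic export/import dance. This is a hedged reconstruction; the actual macro names used in Annwvyn (ANN_API, BUILDING_ANNWVYN_DLL here) may differ:

// Hypothetical reconstruction of the usual DLL export/import macro
#ifdef _WIN32
    #ifdef BUILDING_ANNWVYN_DLL // defined only when building the engine itself
        #define ANN_API __declspec(dllexport)
    #else
        #define ANN_API __declspec(dllimport)
    #endif
#else
    #define ANN_API // not needed outside of Windows
#endif

class ANN_API AnnEngine
{
    // engine code...
};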

The need for testability and the difficulties of retrofitting tests

So let’s be clear: I know about good development practices, automated testing, TDD, software architecture, UML class diagrams, and all that jazz. Heck, I’m a student in those things. But this little hobby project wasn’t intended to grow into 17,000 lines of C++ with a lot of modules, bindings to a scripting language, an event dispatch system, and a lot of interconnected components that abstract writing data to the file system (well, it’s for video game save files), render to multiple different kinds of VR hardware, or extend Ogre’s resource manager. Hell, I did not know that Ogre had such a complex resource management system; I thought that Ogre was a C++ thing that drew polygons on the screen without me having to learn OpenGL. (I still had to learn quite a lot about OpenGL because I needed to hack into its guts, but I blogged about that already.)

Let’s just say that things were really getting out of hand, and that I seriously needed to start thinking about making the code saner, and about being able to detect when I break stuff.

(more…)

Shoehorning anything (with `operator<<()`) into `qDebug()` the quick and dirty templated way

So, the other day I was working on some Ogre + Qt5 code.

I haven’t really worked with Qt that much since Qt 4 was the hot new thing, so I was a bit rusty, but I definitely like the new things I’ve seen in version 5. But I’m not here to discuss Qt 5 today. ^^

There are a few weird things Qt does that I can’t really wrap my head around. One is the incompatibility between QString and std::string (there’s probably a nasty problem called “Unicode” behind this), but another one is that QDebug is not an std::ostream-derived object.

If you don’t know, in the Qt world, a QApplication is expected to write its debugging output into a QDebug object, via an output stream operator (“<<”). A QDebug object is easily accessible by calling qDebug(), which makes this code fairly common:

qDebug() << "Some stuff to log";

This is a pretty standard thing to do in C++; for instance, the standard library itself makes heavy use of the stream operators for I/O (hence the main header being called iostream), and on a personal note, they are the cleanest way to represent in code how to push stuff in and out of a program, IMO.

Qt chose not to use the standard stream objects as the base for its streams, but to rebuild them from scratch. That’s fine, except when you are trying to interact with something non-Qt.

Every object in Ogre that contains loggable data (vectors, matrices, colors, and other stuff) has an operator<<() defined for writing to standard streams, but obviously, that will not work with Qt.

If you are lazy like me, and consider that this is code “for development” that you intend to remove or switch out in production, here’s a snippet you can paste into a header somewhere to redirect these streams to QDebug’s output:

#include <sstream>
#include <QDebug>

template <class T>
static QDebug operator<<(QDebug dbg, const T& obj)
{
    std::stringstream ss;
    ss << obj;               // obj has to have an operator<< overload itself to go into the stringstream
    dbg << ss.str().c_str(); // get the string, then the C string; should be ASCII
    return dbg;              // return the QDebug by value to allow chaining. Ask Qt's developers, not me
}

This is absolutely not ideal; for example, a stringstream object will be instantiated at each call of this templated function. If you attempt to, say, stream an Ogre::Vector3 into a QDebug, the compiler will stamp out an operator<< that writes the text into a stream, extracts the string, and calls the QDebug operator<< that takes C-style char* strings.
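For instance, assuming the snippet above is in an included header, usage looks like this (hypothetical example):

#include <OgreVector3.h>
#include <QDebug>

void log_vector(const Ogre::Vector3& v)
{
    // Ogre::Vector3 already has an operator<< for std::ostream, so this
    // goes through the std::stringstream bridge defined above
    qDebug() << v;
}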

Also, I have no idea why the QDebug object is passed by value in this function; I didn’t take the time to dig much under the hood, but it seems to be the way Qt deals with this.

It works well enough for me to check the contents of some Ogre::Vector3 objects in the debug panel of Qt Creator, or in the terminal output on Linux. ^^”

The locomotion problem in Virtual Reality

(Seriously, I hesitated some time between this version and the original, but that’s not the point of this article, and I kinda like the 80’s vibe anyway…)

I think we can all agree here: Virtual Reality (VR) is now, and not science fiction anymore. “Accessible” (though not cheap by any stretch of the imagination) hardware is available for consumers to buy and enjoy. You can now experience being immersed in virtual worlds generated in real time by a gaming computer, and feel presence in them.

The subject I’m about to address doesn’t really apply to mobile (smartphone-powered) VR, since those experiences tend to be static ones. Mobile VR will need reliable positional tracking of the user’s head before hitting this issue… We will limit the discussion to actual computer-based VR.

One problem still bothers me, and the whole VR community as well: in order to explore a virtual world, you have to, well, walk inside the virtual world. And doing this comfortably for the user is, interestingly, more complex than you might think.

You will always have limited space in your VR play room. You can’t physically walk from one town to another in Skyrim inside your living room; the open world of that game is a bit bigger than a few square meters.

The case of cockpit games like Elite:Dangerous aside, simulating locomotion is tricky. Any situation where you’re moving can induce nausea.

Cockpit-based games ground you in the fact that you’re seated somewhere and “not moving”, because most of the objects around you don’t move (the inside of the spaceship/car/plane). This makes it mostly a non-problem: you can do barrel rolls and loopings all day long and keep your meal inside your stomach. And you have less chance of killing yourself than inside an actual fighter jet 😉

Simulator (VR) sickness is induced by a disparity between the cues of acceleration you get from your visual system and what your vestibular system senses. The vestibular system is your equilibrium center; it’s a bit like a natural accelerometer located inside your inner ear.

(more…)

The Annwvyn Game Engine, and how I started doing VR

If you know me, you probably also know that I’m developing a small C++ game engine called Annwvyn, aimed at simplifying the creation of VR games and experiences for “consumer grade” VR systems (mainly the Oculus Rift, and more recently the Vive too).

The funny question is: with tools like Unreal Engine 4 or Unity 5 being free (or almost free) to use, why bother?

There are multiple reasons, but to understand why, I should add some context. This story started in 2013, at a time when you actually had to pay to use Unity with the first Oculus Rift Development Kit (aka the DK1), and when UDK (the version of Unreal Engine 3 you were able to use) was such a mess I wouldn’t want to touch it…

(more…)

My Linux handheld game console. Part 1

A custom-made, Linux-powered handheld. Sounds interesting, don’t you think?

I’ve had this project in mind for quite some time: I have some hardware collecting dust, and I want to make use of it. I only recently had the idea to blog about it, and to release everything on GitHub afterwards, once it’s actually something useful.
Also, this thing will need a name… but that’s not important right now ^^”

I’ve been a fan of the Ben Heck Show for quite some time. In multiple episodes of the show, Ben builds portable consoles from scratch as electronics projects. With the existence of small single-board computers like the Raspberry Pi, this kind of thing is actually surprisingly easy to build.

(more…)

Using Ogre’s OpenGL renderer with the OpenVR API (SteamVR, HTC-Vive SDK)

Note to the reader: if some things in this article are unclear and/or omitted, it’s probably because I’ve already explained them in the previous article about the Oculus Rift SDK, here.

The OpenVR API, and the whole “SteamVR” software stack, is really interesting to target because it’s compatible with many VR systems from the get-go: you code something once and run it on all of them.

In practice, the OpenVR API is a bit simpler to code against than the Oculus SDK. Its naming conventions are from the ’90s (as with all of Valve’s SDKs, but at least they are internally consistent!).

Also, the resulting code is less verbose than with the Oculus SDK. It’s almost like Oculus wanted their code to fit the Windows/DirectX style (setting up structures with a lot of parameters and passing pointers to them to functions), while Valve took a more OpenGL-like approach (functions that take fixed types and flags telling them what to do); they even named their library OpenVR.

(more…)