Quake for Oculus DK2, Direct Mode

Almost exactly two years ago, I published my first release of Quake with support for the Oculus Rift. Since then, an amazing community has fixed a number of bugs and added some nice features. In particular Luke Groeninger did a lot of work to get the kinks out.

However, Quake was still stuck in the Oculus DK1 world, with no positional tracking and no support for the DK2's Direct Mode. Today, I finally fixed that!


The game looks absolutely gorgeous in the DK2, even after all those years. The run speed is still ridiculous, but somehow I don't get sick playing it in the Rift. Your mileage may vary though, so take it slow.

The game now starts in VR Mode by default, disables view bobbing and sets the texture mode to NEAREST for the proper, pixelated oldschool vibe.

Additional vars you can use in the in-game console (bring it up with the ~ key):

Download and Source

Wednesday, July 29th 2015 / Comments (2)

Play GTA V In Your Browser – Sort Of

Inspired by a blog post about running your own cloud gaming service using a VPN and Steam's In-Home Streaming, I thought I could do this, too, but in the browser.

With the tech I developed for Instant Webcam I had all the major building blocks ready and just needed to glue them together into a nice, usable Windows app.

Demo Video for jsmpeg-vnc

The result is jsmpeg-vnc – a tiny Windows application written in C. It captures the screen at 60 frames per second, encodes it into MPEG1 using FFmpeg and sends it over WebSockets to the browser, where it is finally decoded again in JavaScript using jsmpeg. All mouse and keyboard input in the browser is captured as well and sent back to the server.
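For illustration, here's roughly what the browser side of such a setup looks like, assuming the 2015-era jsmpeg API (which accepted a WebSocket as its source) and a made-up JSON input protocol – the real jsmpeg-vnc uses its own message format:

```js
// Browser side: feed the WebSocket stream into jsmpeg, render into a
// canvas, and send input events back over the same socket
var canvas = document.getElementById('screen');
var socket = new WebSocket('ws://' + window.location.host + '/');
var player = new jsmpeg(socket, {canvas: canvas});

canvas.addEventListener('mousemove', function(ev) {
	socket.send(JSON.stringify({type: 'mouse', x: ev.offsetX, y: ev.offsetY}));
});
document.addEventListener('keydown', function(ev) {
	socket.send(JSON.stringify({type: 'key', code: ev.keyCode, down: true}));
});
```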

This works way better than it should. Latency is minimal and only affected by the network conditions. It's certainly good enough to play most types of games.

Update: I did some latency measurements. For my system it hovers around 50-70ms (video, photo). Tested with an 800x600 window at ~7% CPU utilization on a Core i5. My desktop monitor is a Dell U2711, which seems to add about 15ms of latency itself.

Full source code and binary releases are on github: jsmpeg-vnc

Monday, July 27th 2015 / Comments (46)

What makes an On-Screen Keyboard Fun?

We recently released our typing game ZType for the iPhone and learned a whole lot in the process.


The original Web Version of ZType was hastily created for Mozilla's Game On competition a few years ago and won the Community Choice Award back then. For that version, it was a conscious decision not to cater to mobile devices. After all, what fun would a typing game be if you don't have a real keyboard?

The Web Version of ZType remains quite popular, and we got more and more requests for a mobile version of the game. So a few months back we started investigating the possibilities. There are already a few typing games for iOS out there. Many of them can hardly be called a "game" (to be fair, some don't aim to be one), most use the default iOS keyboard, and gameplay is cumbersome – sort of what we expected from typing games with an on-screen keyboard.

The default iOS on-screen keyboard needs a lot of screen real estate for things we don't care about: language settings, microphone input, a space bar and word suggestions. Most modern on-screen keyboards are predictive in that they try to figure out what it is you want to write. This works okay-ish for writing a quick SMS, but quickly becomes obstructive for a typing game. It was clear that if we wanted to bring ZType to mobile devices and make it fun to play, we'd have to come up with a custom keyboard solution.

In ZType we carefully control which words appear on the screen. We prevent having two words with the same first letter in the game at the same time, so that targeting words remains unambiguous. With a layout map of a QWERTY keyboard, we can further ensure that these letters spread out over the keyboard, instead of clustering to nearby keys.
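As a sketch of the idea (not ZType's actual code), word selection could look like this, with a simple QWERTY layout map and a distance heuristic:

```js
// Reject candidates whose first letter is already in play, and prefer
// first letters that lie far from the active ones on the keyboard
var ROWS = ['qwertyuiop', 'asdfghjkl', 'zxcvbnm'];

function keyPos(letter) {
	for (var r = 0; r < ROWS.length; r++) {
		var c = ROWS[r].indexOf(letter);
		if (c !== -1) { return {x: c + r * 0.5, y: r}; } // rows are staggered
	}
	return {x: 0, y: 0};
}

function keyDistance(a, b) {
	var pa = keyPos(a), pb = keyPos(b);
	return Math.sqrt((pa.x - pb.x) * (pa.x - pb.x) + (pa.y - pb.y) * (pa.y - pb.y));
}

function pickWord(candidates, activeWords) {
	var activeLetters = activeWords.map(function(w) { return w[0]; });
	var best = null, bestDist = -1;
	candidates.forEach(function(word) {
		if (activeLetters.indexOf(word[0]) !== -1) { return; } // ambiguous
		// Distance to the nearest already-active first letter
		var dist = Math.min.apply(Math, activeLetters.map(function(l) {
			return keyDistance(word[0], l);
		}));
		if (dist > bestDist) { bestDist = dist; best = word; }
	});
	return best;
}
```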

Now, here's the crux: if we control which words are on the screen and we ensure keys are spread out over the keyboard, we can enlarge the touch area for each valid key, making it easier to hit the right key. After having targeted a word, there's only one valid key to hit and we can similarly enlarge the touch area for it as well.

We carefully designed the keyboard so that even with those enlarged touch areas, it's still possible to hit all keys. For example, if S is a valid key and thus has a larger touch area, the touch areas for all surrounding keys (W, A, D, X and Y) simply get a bit smaller. This took some fiddling around and hours of play testing to get right, but in the end it was worth it.
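A minimal sketch of how such weighted hit-testing could work – the key objects ({letter, cx, cy}) and the weight value are made up here:

```js
// Hit-test by weighted distance to each key's center: valid keys get a
// weight > 1, which makes them effectively "closer", i.e. larger, and
// automatically shrinks their neighbours
function keyForTouch(x, y, keys, validKeys) {
	var best = null, bestScore = Infinity;
	keys.forEach(function(key) {
		var dx = x - key.cx, dy = y - key.cy;
		var dist = Math.sqrt(dx * dx + dy * dy);
		var weight = (validKeys.indexOf(key.letter) !== -1) ? 1.5 : 1.0;
		var score = dist / weight;
		if (score < bestScore) { bestScore = score; best = key; }
	});
	return best;
}
```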

Another thing we noticed is that sometimes you hit a key while performing a downward motion with your finger. Typically an on-screen keyboard only emits a letter when you lift your finger from a key, not when you touch it. This allows you to swipe over the keyboard until you see the right key highlighted, but it also creates another problem: Say you want to type the letter T – there's one smooth motion by your thumb, starting from somewhat above the keyboard, then hitting the surface of the touch screen at that T key and continuing to move your thumb further down on the glass before lifting your thumb again. That moment, when you lift your thumb, it may well be over the G key.

The iOS default keyboard seems to recognize this motion and correctly determines that the letter you meant to type was the T, even though you ended up lifting your thumb over the G. In our tests this happened often enough that we had to take care of it too. We ended up detecting if the key where you lifted your finger was held for a period shorter than 30 msec and the key where you started the touch was one row above. If both conditions are met, we determine that the key you actually wanted to hit was the key where you started the touch.
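In code, the correction boils down to something like this – the shape of the touch object (enter time, row indices) is assumed for illustration:

```js
// Swipe-down correction: if the final key was only brushed for a moment
// and the touch started one key row above, report the starting key
function resolveKey(touch) {
	var heldMs = touch.endTime - touch.enterTime; // time spent on the final key
	var startedOneRowAbove = (touch.startKey.row === touch.endKey.row - 1);
	if (heldMs < 30 && startedOneRowAbove) {
		return touch.startKey; // the key the player actually aimed for
	}
	return touch.endKey;
}
```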

The end result is a game where you can almost blindly type on an on-screen keyboard, without being patronised. I'm proud to say that it's actually a fun typing game.

Try it yourself: Download ZType from the AppStore

Thursday, July 9th 2015 / Comments (6)

Reverse Engineering WipEout (PSX)

In 1995 one of my all time favorite video games was released: the original WipEout for PlayStation. The brand new PlayStation produced 3D graphics previously unseen on living room TVs and WipEout exploited its capabilities like no other game at the time. It was one of the pioneering titles of the fifth generation console era.

WipEout's art style was distinctively different from other games too. With the help of the UK based design studio The Designers Republic the game achieved a mature look that was in stark contrast to the comic style found in most other games.

I remember poking around on the CD of the PC Version of WipEout back in the day, looking for ways to modify the game. I was thrilled to find .pcx images of all textures and tried to change one of the in-game billboard graphics to show my name. I wasn't able to get it working.

Now, almost 20 years later, I thought I'd give it another shot.

WipEout Model Viewer – A WebGL Experiment

Extracting 3D Models

I didn't have much experience with reverse engineering any data formats, but given the game's age, I expected the 3D format to be quite simple and straightforward. What surprised me were the many different nuances in which scene models were stored. The track itself was stored separately, in yet another format spanning multiple files.

Knowing the PlayStation didn't have a Floating Point Unit (FPU), I assumed vertex data had to be stored as Integers. Using JavaScript and the three.js 3D Library I quickly whipped up a page that would load one of the game's .PRM files that I thought would contain the 3D objects. I loaded all data in the file as coordinates of 3 Integers (x, y, z) and rendered them as a point cloud. This allowed me to quickly shift the stride (the spacing between distinct coordinates in the file) and look for patterns.
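A sketch of that brute-force approach – the integer size, endianness, start offset and stride are all guesses you iterate on until patterns emerge:

```js
// Read (x, y, z) integer triplets from the raw file bytes at a given
// start offset and stride and collect them as points for the point cloud
function extractPoints(buffer, start, stride, count, littleEndian) {
	var data = new DataView(buffer);
	var points = [];
	for (var i = 0; i < count; i++) {
		var o = start + i * stride;
		points.push({
			x: data.getInt32(o + 0, littleEndian),
			y: data.getInt32(o + 4, littleEndian),
			z: data.getInt32(o + 8, littleEndian)
		});
	}
	return points;
}
```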

What I found was indeed chunks of raw vertex data, along with an object header specifying the object's internal name and the number of vertices and polygons in that object. Each polygon is preceded by a polygon header and a chunk of index pointers into the object's vertices. These polygon headers are where I probably spent most of my time. I identified 11 different polygon types used by the game – triangles, quads, sprites, textured or flat, with vertex colors or face colors etc. Two types are still ignored because I couldn't figure out what they were.

The track itself is stored in several distinct files. The most interesting of these are TRACK.TRV, containing raw vertices, and TRACK.TRF, containing the track faces as quads of 4 index pointers into the vertices. Pretty straightforward.

Reading textures

Most textures in early PSX games were stored in the TIM image format, containing data straight in the PSX' native frame buffer format. A TIM file stores pixels either directly as 16 BPP colors, or as 8 BPP or 4 BPP indices into a color palette. And indeed, you can find a number of TIM files on the original WipEout CD: the start splash screen, background images, loading images etc. However, the images I was most interested in – the textures used for 3D models - were missing.

Some googling revealed that textures are stored in the compressed SCENE.CMP files. These files contain a very simple header that specifies the number of TIM images in the file, along with the uncompressed size of each image. The data itself is compressed using LZ77. I even found some C code in a reverse engineering wiki that would uncompress these files. With this code quickly ported to JavaScript and some poking around in the polygon data to find the texture index and coordinates, I was finally able to draw textured models.
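For reference, the core of any LZ77-style scheme is copying back-references from the already-decompressed output. A generic sketch – the actual .CMP bit layout differs, and the token parsing (the part that varies between LZ77 flavors) is assumed to have happened already:

```js
// Every token is either a literal byte or an (offset, length)
// back-reference into the output produced so far
function lz77Decompress(tokens) {
	var out = [];
	tokens.forEach(function(t) {
		if (t.literal !== undefined) {
			out.push(t.literal);
		}
		else {
			var from = out.length - t.offset; // copy from earlier output
			for (var i = 0; i < t.length; i++) {
				out.push(out[from + i]);
			}
		}
	});
	return new Uint8Array(out);
}
```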

The textures for the track however were an entirely different beast. Along with the vertex and face data (TRACK.TRV and TRACK.TRF) each track comes with an additional TRACK.TRS, containing Track Sections of some sort, and a TRACK.VEW, containing visibility lists (I think). I spent a lot of time trying to find the texture index for each track face in any of these files.

In fact, the unhelpfully named TRACK.CMP did not contain the track textures. Instead, track textures were stored in a file called LIBRARY.CMP, which seemed to be compiled from a set of source images.

As I painstakingly found out, the tracks in WipEout had a Level of Detail (LOD) system, subdividing each track face into up to 4x4 quads when it's near the camera. This was probably done to lessen the impact of the PSX' missing perspective correction when drawing textures. These subdivided faces also used higher resolution textures.

Now, here's the kicker: the LIBRARY.CMP file contains about 300 images, but only 19 distinct textures – each in 3 different LOD levels: 4x4, 2x2 and 1x1 tiles, with each tile being 32x32 pixels in size. And, as if that wasn't complicated enough, these 4x4 and 2x2 tiles are not stored in the right order. Instead, yet another file, LIBRARY.TTF, stores indices into the LIBRARY.CMP to compose these 4x4 and 2x2 versions.
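Composing one of those larger textures is then a matter of blitting the tiles in the order given by the index list. A sketch using a 2D canvas – the names are illustrative, and `tiles` is assumed to be an array of canvases or images, one per decoded tile:

```js
// Compose a 4x4 (or 2x2) LOD texture from 32x32 pixel tiles; the index
// list selects tiles decoded from LIBRARY.CMP
function composeTexture(tileIndices, tiles, tilesPerSide) {
	var canvas = document.createElement('canvas');
	canvas.width = canvas.height = tilesPerSide * 32;
	var ctx = canvas.getContext('2d');
	for (var i = 0; i < tileIndices.length; i++) {
		var x = (i % tilesPerSide) * 32;
		var y = Math.floor(i / tilesPerSide) * 32;
		ctx.drawImage(tiles[tileIndices[i]], x, y);
	}
	return canvas;
}
```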

Knowing that the texture index has to be between 0 and 18, I found it stored as a single byte along with each track face. I also found that another single byte for each track face specifies a few flags for that face. The most important one for drawing: whether to draw with flipped X texture coordinates.

Wrapping up

Finally, I had a complete, textured, vertex-lit scene and track. Using three.js, drawing everything was a no-brainer, as it happily lets you create models from all kinds of different polygon data. I also wanted to have a simple fly-through animation for each track, and three.js was tremendously helpful in creating a smooth camera spline along the track. The optional orbit controls for three.js are top notch, too. They just work without any modifications on mobile and touch devices.
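A sketch of such a fly-through with a current three.js (the original viewer used the spline classes of the time; trackPoints would come from the parsed TRACK.TRV data, and camera is the scene's camera):

```js
// Fit a closed spline through points along the track, then move the
// camera along it, looking slightly ahead
var curve = new THREE.CatmullRomCurve3(trackPoints, true /* closed */);

function updateCamera(time) {
	var t = (time % 60000) / 60000; // one lap per minute
	camera.position.copy(curve.getPointAt(t));
	camera.lookAt(curve.getPointAt((t + 0.01) % 1));
}
```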

With all the work that went into this project, handling the WebGL drawing with three.js turned out to be one of the simplest parts.

The finished WipEout Model Viewer loads all the original data files and does all the binary file reading, unpacking and scene creation in JavaScript.

All in all, I wrote about 800 lines of JavaScript to load and draw 3D scenes for a 20 year old game. I wonder how big the original WipEout source is. It's quite sad that probably nobody will ever see it again, considering that it now belongs to Sony.

Full source on github:

Tuesday, April 14th 2015 / Comments (76)

Xibalba & WebVR

WebVR is an effort by Mozilla and Google to enable Virtual Reality content on the web. Experimental builds of Chromium and Firefox that provide a JavaScript API for WebVR have been available for a while now.

My game Xibalba already provided a stereo rendering mode and supported an external tracking server to get orientation updates from the VR headset, but it was clumsy and only worked for the old Oculus Rift DK1. So I decided to ditch the old stereo rendering mode and implement WebVR proper.

Xibalba VR

WebVR handles the distortion for the Headset by itself. All you have to do is render your game into a side-by-side stereo format and request fullscreen mode on your Headset. Orientation updates are provided by a nice, clean API that's easily implemented in the game.
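As a rough sketch, the experimental API of those builds looked about like this (it has changed several times since; applyOrientation() stands in for the game's own camera code, canvas is the game's canvas):

```js
// Find the HMD and its matching position sensor, go fullscreen on the
// headset and poll the orientation every frame
navigator.getVRDevices().then(function(devices) {
	var hmd = devices.filter(function(d) {
		return d instanceof HMDVRDevice;
	})[0];
	var sensor = devices.filter(function(d) {
		return d instanceof PositionSensorVRDevice &&
			d.hardwareUnitId === hmd.hardwareUnitId;
	})[0];

	// Vendor-prefixed in these builds; Firefox used mozRequestFullScreen()
	canvas.webkitRequestFullscreen({vrDisplay: hmd});

	(function update() {
		var state = sensor.getState();
		applyOrientation(state.orientation); // a quaternion {x, y, z, w}
		requestAnimationFrame(update);
	})();
});
```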

I also added support for the JavaScript Gamepad API with a Plugin for Impact – the game plays great with an Xbox 360 Gamepad, even if you don't have an Oculus Rift.
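Polling gamepads with that API is pleasantly simple; a minimal sketch using the standard button/axis mapping:

```js
// Gamepads have to be re-queried every frame; indices below follow the
// standard mapping (left stick on axes 0/1, A button at index 0)
(function pollGamepad() {
	var pad = navigator.getGamepads()[0];
	if (pad) {
		var moveX = pad.axes[0]; // -1..1
		var moveY = pad.axes[1];
		var shoot = pad.buttons[0].pressed;
		// …feed these into the game's input handling
	}
	requestAnimationFrame(pollGamepad);
})();
```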

Play the game here:

If you have a DK2 but no WebVR enabled build of Chrome or Firefox, you can also try a standalone Windows version of the game. It's essentially a Chromium build bundled with the game and a batch file to start it. Makes it a bit easier to get going.

Download: ~80mb

Monday, February 2nd 2015 / Comments (8)

XType Plus – an HTML5 Game for the Nintendo Wii U

For many indie game developers, getting a game onto a real gaming console is something like the holy grail. At least for me it was. The entry barrier for any gaming console seems disproportionately high, compared to PC or mobile development.

Nintendo is trying to change that with their Nintendo Web Framework – it's essentially a WebKit browser for the Wii U but offers some custom APIs to interface with various controllers and Nintendo's online services. Nintendo also did a tremendous job of speeding up Canvas2D rendering on their console and providing excellent HTML5 Audio and WebAudio support. The result is a game development framework suitable for about any 2D game you can think of, but with an extremely low entry barrier.

Gameplay Trailer for XType Plus

A few years ago I made an HTML5 game called X-Type, built on top of the Impact Game Engine. It was mostly meant to test the rendering speed of Browsers at the time, but turned out to be quite fun to play as well.

In 2013 Nintendo invited me to GDC Europe, where I presented a quick port of the game for the Wii U. The feedback I got there was extremely positive, so I decided to push the prototype as far as I could, polish all the rough edges and make a real game out of it.

A prototype of XType Plus at GDC Europe

Today, XType Plus launches in the Nintendo eShop in North America, Europe, the UK, Australia and New Zealand. I'm very excited to see this come to life!

For the most part, porting the game to the Nintendo Web Framework was very straightforward. My Game Engine Impact worked without any modifications and now officially supports the NWF as well, to tightly integrate the Wii U controllers and other platform features.

There were some rough edges, particularly with Nintendo's online services, but these have mostly been smoothed out in current NWF releases. Interfacing with the GamePad screen and various controllers was a breeze, and most features supported by a "real" Browser work as expected in NWF. Canvas and Audio work exactly as in the Browser, and the settings in XType Plus, for instance, are simply saved to localStorage.
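That really is as plain as it sounds; a minimal sketch (the key name and settings shape are illustrative):

```js
// Read, modify, write – works the same in NWF as in a desktop browser
var settings = JSON.parse(localStorage.getItem('settings') || '{}');
settings.musicVolume = 0.8;
localStorage.setItem('settings', JSON.stringify(settings));
```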

Even with all the additions for the Wii U, the game still runs in a Desktop Browser as well. This made debugging core gameplay features very simple.

Screenshot of XType Plus from the all new Plus Mode

If you want to know more about the creation process of XType Plus and the Nintendo Web Framework, please have a look at the interview I did back in February.

I hope to see your names pop up in the Highscores of XType Plus!

Thursday, July 31st 2014 / Comments (10)

Xibalba – A WebGL First Person Shooter

More than a year ago I started to work on a project that was supposed to be released for 7DFPS – a game dev competition about making a First Person Shooter in 7 Days. I didn't meet the deadline, but the game I started back then is now finally finished.

Please Enjoy:

Xibalba – A WebGL First Person Shooter

Xibalba is also available in the iOS AppStore

The game was built on top of my HTML5 Game Engine Impact and is entirely written in JavaScript. While Impact is intended for 2D games, this first person shooter fit very naturally with the engine.

Xibalba, at its heart, is just a 2D game with a 3D viewport. So all the necessary elements for the gameplay were already provided by the game engine: collision detection and response, entity management, sound playback, a versatile level editor and much more. The only thing missing was a 3D view.
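The idea, sketched here with three.js for brevity (TwoPointFive ships its own slim WebGL renderer): walk the 2D collision map and emit upright quads for solid tiles, so the 2D game logic stays untouched and only the presentation becomes 3D:

```js
// A real implementation would emit one correctly rotated face per open
// neighbour; a single plane per tile is a simplification
var TILE = 64, WALL = 1;
var map = [
	[1, 1, 1, 1],
	[1, 0, 0, 1],
	[1, 0, 0, 1],
	[1, 1, 1, 1]
];
var scene = new THREE.Scene();
var wallMaterial = new THREE.MeshBasicMaterial({color: 0x888888});

for (var ty = 0; ty < map.length; ty++) {
	for (var tx = 0; tx < map[ty].length; tx++) {
		if (map[ty][tx] !== WALL) { continue; }
		var quad = new THREE.Mesh(new THREE.PlaneGeometry(TILE, TILE), wallMaterial);
		quad.position.set(tx * TILE, TILE / 2, ty * TILE);
		scene.add(quad);
	}
}
```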

I created a 10 minute Making Of screencast that explains the ideas and the design behind the game in a bit more detail.

The Making Of Xibalba on Youtube

The iPhone and iPad version of the game was made with Ejecta. The game also runs just fine in Mobile Safari on the current iOS 8 beta, but the browser's UI, especially in landscape mode, unfortunately hinders the gameplay quite a bit. It's much more playable in Chrome for Android.

I also published the 3D Viewport part of Xibalba, along with another tiny demo game, under the name TwoPointFive – a tribute to the excellent ThreeJS library.

The TwoPointFive Plugin is available on Github:

Monday, July 28th 2014 / Comments (23)

Fast Image Filters with WebGL

WebGLImageFilter is a small JavaScript library for applying a chain of filters to an image.

Sergey Brin in his badass Tesla, sporting Chrome wheels

But to quote myself from twitter:

That awkward moment when you realize the lib you've been polishing for the past 6 hours already exists. With more features and a better API.

~ @phoboslab, Nov 3.

So, yes, there's already a library called glfx.js which basically does the same thing. And I'm almost saddened to say that it's excellent.

It's not all in vain, however. My implementation features raw convolution and color matrix filters. The latter should be particularly interesting for those coming from the Flash world: it's the exact same as Flash's ColorMatrixFilter and allows for some nice "Instagrammy" photo effects among others. There are some JavaScript libraries around that implement this ColorMatrixFilter, but they all do it in JavaScript directly, which is quite slow. Mine can be used in realtime, even on iOS with Ejecta.
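Usage looks roughly like this – chaining a color matrix (here a standard sepia matrix) and applying it to an image; see the README for the exact filter names:

```js
// A color matrix is 4 rows of 5 values each: R, G, B, A factors plus an
// offset, applied to every pixel
var filter = new WebGLImageFilter();
filter.addFilter('colorMatrix', [
	0.393, 0.769, 0.189, 0, 0,
	0.349, 0.686, 0.168, 0, 0,
	0.272, 0.534, 0.131, 0, 0,
	0,     0,     0,     1, 0
]);
var filtered = filter.apply(sourceImage); // the filtered result
document.body.appendChild(filtered);
```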

I also learned some things about WebGL and Shaders with all this. One interesting aspect is that you can't draw a framebuffer texture onto itself. So in order to apply more than one effect for the same image, you need to juggle around 2 textures - use one as the source, the other one as the target and then switch. And if you don't want to waste any draw calls, you need to know which draw call will be the last, so that you can draw directly onto the target Canvas instead of on a framebuffer texture.
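A sketch of that ping-pong pattern in raw WebGL – createFramebufferTexture() and drawPass() are hypothetical helpers standing in for the usual framebuffer setup and draw code:

```js
// Two framebuffer/texture pairs alternate as source and target; the last
// pass draws to the canvas (framebuffer null) instead of a texture
var fbos = [createFramebufferTexture(gl), createFramebufferTexture(gl)];

function applyFilters(sourceTexture, passCount) {
	var source = sourceTexture;
	for (var i = 0; i < passCount; i++) {
		var last = (i === passCount - 1);
		var target = fbos[i % 2];
		gl.bindFramebuffer(gl.FRAMEBUFFER, last ? null : target.fbo);
		drawPass(source); // bind shader, draw a fullscreen quad
		source = target.texture;
	}
}
```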

WebGL's coordinate system also complicates things a bit. It has the origin in the bottom left corner, instead of in the top left like most 2D APIs, essentially flipping everything on the Y axis. This has caused me a lot of pain with Ejecta already. The solution sounds trivial: just scale your drawing coordinates by -1, but this only gets you so far. If you need to get the pixel data of your image or draw one canvas into another, you suddenly have to invert everything again.

Lesson learned: Always google before you start working.

Download: WebGLImageFilter on github.

Sunday, November 3rd 2013 / Comments (9)