PHOBOSLAB

Xibalba – A WebGL First Person Shooter

More than a year ago I started to work on a project that was supposed to be released for 7DFPS – a game dev competition about making a First Person Shooter in 7 Days. I didn't meet the deadline, but the game I started back then is now finally finished.

Please Enjoy:

Xibalba – A WebGL First Person Shooter

Xibalba is also available in the iOS AppStore

The game was built on top of my HTML5 Game Engine Impact and is entirely written in JavaScript. While Impact is intended for 2D games, this first person shooter fit very naturally with the engine.

Xibalba, at its heart, is just a 2D game with a 3D viewport. So all the necessary elements for the gameplay were already provided by the game engine: collision detection and response, entity management, sound playback, a versatile level editor and much more. The only thing missing was a 3D view.
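
To give a rough idea of how little was missing: the 3D view mostly just reinterprets the 2D level data. A hypothetical sketch of that idea (this is not TwoPointFive's actual API, just an illustration) turns every exposed edge of a solid tile in the 2D collision map into an upright wall quad:

// The 2D map coordinates simply become the XZ ground plane of the 3D world.
function wallQuadsFromMap(map, tileSize, wallHeight) {
    var quads = [];
    for (var y = 0; y < map.length; y++) {
        for (var x = 0; x < map[y].length; x++) {
            if (!map[y][x]) { continue; }            // 0 = floor, 1 = solid wall tile
            if (y > 0 && !map[y - 1][x]) {           // the tile above is empty, so the
                quads.push({                         // north edge of this wall is visible
                    x1: x * tileSize,       z1: y * tileSize,
                    x2: (x + 1) * tileSize, z2: y * tileSize,
                    height: wallHeight
                });
            }
            // ...the same visibility test is repeated for the south, east and west edges
        }
    }
    return quads;
}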

I created a 10 minute Making Of screencast that explains the ideas and the design behind the game in a bit more detail.

The Making Of Xibalba on Youtube

The iPhone and iPad version of the game was made with Ejecta. The game also runs just fine in Mobile Safari on the current iOS 8 beta, but the browser's UI, especially in landscape mode, unfortunately hinders the gameplay quite a bit. It's much more playable in Chrome for Android.

I also published the 3D Viewport part of Xibalba, along with another tiny demo game, under the name TwoPointFive – a tribute to the excellent ThreeJS library.

The TwoPointFive Plugin is available on Github: github.com/phoboslab/twopointfive

Monday, July 28th 2014 / Comments (2)

Fast Image Filters with WebGL

WebGLImageFilter is a small JavaScript library for applying a chain of filters to an image.

Demo: a chain of filters applied live to an image of Sergey Brin in his badass Tesla, sporting Chrome wheels.

But to quote myself from twitter:

That awkward moment when you realize the lib you've been polishing for the past 6 hours already exists. With more features and a better API.

~ @phoboslab, Nov 3.

So, yes, there's already a library called glfx.js which basically does the same thing. And I'm almost saddened to say that it's excellent.

It's not all in vain, however. My implementation features raw convolution and color matrix filters. The latter should be particularly interesting for those coming from the Flash world: it's exactly the same as Flash's ColorMatrixFilter and allows for some nice "Instagrammy" photo effects among others. There are some JavaScript libraries around that implement this ColorMatrixFilter, but they all do the per-pixel math in JavaScript directly, which is quite slow. Mine can be used in realtime, even on iOS with Ejecta.
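
For illustration, such a color matrix boils down to a tiny fragment shader along these lines (the names here are made up for the example, not necessarily what WebGLImageFilter uses internally; unlike Flash, the offsets are in the 0..1 range):

// 4x5 color matrix: every output channel is a weighted sum of all input
// channels plus a constant offset (m[4], m[9], m[14], m[19]).
var colorMatrixShader = [
    'precision highp float;',
    'varying vec2 vUv;',
    'uniform sampler2D texture;',
    'uniform float m[20];',
    'void main() {',
    '    vec4 c = texture2D(texture, vUv);',
    '    gl_FragColor.r = m[0]*c.r + m[1]*c.g + m[2]*c.b + m[3]*c.a + m[4];',
    '    gl_FragColor.g = m[5]*c.r + m[6]*c.g + m[7]*c.b + m[8]*c.a + m[9];',
    '    gl_FragColor.b = m[10]*c.r + m[11]*c.g + m[12]*c.b + m[13]*c.a + m[14];',
    '    gl_FragColor.a = m[15]*c.r + m[16]*c.g + m[17]*c.b + m[18]*c.a + m[19];',
    '}'
].join('\n');

An identity matrix leaves the image untouched; a sepia or faded-film look is just a different set of 20 numbers.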

I also learned some things about WebGL and Shaders with all this. One interesting aspect is that you can't draw a framebuffer texture onto itself. So in order to apply more than one effect for the same image, you need to juggle around 2 textures - use one as the source, the other one as the target and then switch. And if you don't want to waste any draw calls, you need to know which draw call will be the last, so that you can draw directly onto the target Canvas instead of on a framebuffer texture.
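
A minimal sketch of that juggling - createFramebufferTexture() and drawFullscreenQuad() are assumed helpers here, not part of the library's actual API:

function runFilterChain(gl, filters, sourceTexture) {
    // Two spare textures to ping-pong between; a shader can never read from
    // the texture it is currently rendering into.
    var targets = [createFramebufferTexture(gl), createFramebufferTexture(gl)];
    var source = sourceTexture;
    filters.forEach(function (filter, i) {
        var isLast = (i === filters.length - 1);
        // The last filter renders straight to the canvas (framebuffer null),
        // saving one draw call; everything else renders into a texture.
        var target = isLast ? null : targets[i % 2];
        gl.bindFramebuffer(gl.FRAMEBUFFER, target ? target.framebuffer : null);
        gl.useProgram(filter.program);
        gl.bindTexture(gl.TEXTURE_2D, source);
        drawFullscreenQuad(gl);
        source = target ? target.texture : null;  // the next filter reads what we just wrote
    });
}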

WebGL's coordinate system also complicates things a bit. It has the origin in the bottom left corner, instead of in the top left like most 2D APIs, essentially flipping everything on the Y axis. This has caused me a lot of pain with Ejecta already. The solution sounds trivial: just scale your drawing coordinates by -1, but this only gets you so far. If you need to get the pixel data of your image or draw one canvas into another, you suddenly have to invert everything again.
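
One way to keep that manageable is to flip only on the very last pass, so intermediate framebuffer passes stay "upside down" and the flips cancel out. A sketch of such a vertex shader (again illustrative, not the library's exact source):

var vertexShader = [
    'attribute vec2 pos;',
    'attribute vec2 uv;',
    'varying vec2 vUv;',
    'uniform float flipY;   // +1.0 when rendering into a framebuffer, -1.0 for the canvas',
    'void main() {',
    '    vUv = uv;',
    '    gl_Position = vec4(pos.x, pos.y * flipY, 0.0, 1.0);',
    '}'
].join('\n');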

Lesson learned: Always google before you start working.

Download: WebGLImageFilter on github.

Sunday, November 3rd 2013 / Comments (8)

A Tale of Bad UX

A few weeks ago, I completed my iOS App Instant Webcam and eagerly submitted it to Apple for review. They approved it about a week later and I was free to sell it in the AppStore. About 30 minutes later my App showed up when you searched for it. All ready to launch, I thought.

Before publishing the accompanying blog post and tweeting about it, I tested the App one last time: I downloaded it directly from the AppStore on my iPhone and made sure it worked. Which it did. Then, just to see what it looks like, I searched for the App on my iPad 1, which as you know doesn't have a camera. You shouldn't be able to download the App on a camera-less device at all. The AppStore should indicate this, but it didn't.

You could still download the App just fine on the iPad 1, despite the device having no camera to use it. When you start the App, it instantly crashes.

Shit.

Of course this was my fault. I forgot to set the appropriate entry for UIRequiredDeviceCapabilities in the App's Info.plist file - namely the video-camera capability that tells the AppStore that this App indeed requires a camera.
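
For reference, the missing entry amounts to a few lines in Info.plist:

<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>video-camera</string>
</array>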

You can't test this at all before submitting your App for review. There's no "staging area" where you can see what your App will look like in the store. This bug only presents itself after release. You can't test for it beforehand. Apple's review process should probably have caught this bug, but didn't.

The UIRequiredDeviceCapabilities property only affects the presentation in the AppStore, yet it has to be set in the App directly. Which means in order to change this you have to recompile your App, submit it for review again and wait a week or two till it's approved. Not fun.

Worse still, you can't release an update for your App that requires features that your original version did not require. So this bug that only presents itself after release is essentially unfixable.

The official way to "mitigate" this, confirmed via iTunes Connect support, is the following.

Suffice to say, you can still download Instant Webcam on your camera-less iPad or iPod and have it crash immediately.

Tuesday, October 29th 2013 / Comments (4)

HTML5 Live Video Streaming via WebSockets

When I built my Instant Webcam App, I was searching for solutions to stream live video from the iPhone's Camera to browsers. There were none.

When it comes to (live) streaming video with HTML5, the situation is pretty dire. HTML5 Video currently has no formalized support for streaming whatsoever. Safari supports the awkward HTTP Live Streaming and there's an upcoming Media Source Extensions standard as well as MPEG-DASH. But all these solutions divide the video into shorter segments, each of which can be downloaded by the browser individually. This introduces a minimum lag of 5 seconds.

So here's a totally different solution that works in any modern browser: Firefox, Chrome, Safari, Mobile Safari, Chrome for Android and even Internet Explorer 10.

Read complete post »

Wednesday, September 11th 2013 / Comments (71)

Quake for Oculus Rift

With the many "modern" games like Half-Life 2 and Doom 3 already modified to support the Oculus Rift, I decided to give one of my favorite classic games a shot: the original Quake.

Id Software long ago released the full Source Code of Quake under the GPL license. It has been ported to virtually every hardware platform in existence by the community and is still maintained and kept up to date by several Open Source projects. One of them is the excellent Quakespasm - it aims to be faithful to the original with no changes to the gameplay but tons of smaller improvements and bugfixes. It's built on top of SDL, so it runs on Windows, OSX and Linux.

Quakespasm provided a nice and clean starting point to implement support for the Rift. It's evident that Quake's C source code has a radically different style than John Carmack's later games, with almost all data stored in global variables and tons of functions with no arguments. Still, the source is very straightforward and easily understandable. I had almost no problems implementing the Rift support, but in the end took quite a few shortcuts with some dirty hacks.

All in all, it works really well with the Rift. Quake feels even grittier and darker than it ever did before. The textures are pretty coarse and you can count the polygons on the enemies by hand, but it all comes together so nicely when you run through the corridors, completely immersed in the world with the original Nine Inch Nails soundtrack blasting through your headphones. For me personally, this ended up being one of the most enjoyable Rift experiences.

There are still various problems. Making the UI readable in Rift mode is a single dirty hack, some values for the eye offsets are elaborate guesses and in general this thing could have been implemented a lot cleaner. But as far as I can tell, everything works like it should. I only tried this on Windows; my guess is that it works on OSX as well, but I haven't changed the project files to incorporate the new code.

To enable Rift support, bring up the console (~ key) and type the following:

vr_enabled 1

If your Rift is connected, this should be all that's needed to get started. On my system, turning on VSync was a good idea - updates are a bit slower, but less stuttery. There's no tearing without VSync, so I assume that SDL is doing something funky here behind the scenes.

You can still buy the full game at idsoftware.com or on Steam.

Edit July 6th: I fixed some rendering issues that warped the view. Please re-download.

Edit July 7th: Switched to predicted view orientation and added vr_supersample option.

Edit August 22nd: renamed all cvars to have a vr_ prefix. See the Readme on Github for a list of available cvars.

Friday, July 5th 2013 / Comments (52)

MPEG1 Video Decoder in JavaScript

With still no common video format for HTML5 in sight, I decided to implement an MPEG1 decoder in JavaScript. I know there's already an h264 decoder for JavaScript around, but it's huge, compiled with emscripten and quite complicated.

An MPEG1 decoder sounded like a relatively simple and fun weekend project. While the real world use cases for this are of course a bit limited, I still learned a whole lot about video codecs in the process. The size of the source is just around 15kb gzipped and the performance is quite okay-ish - a 320x240 video easily plays at 30fps on an iPhone 5.

Read complete post »

Tuesday, May 7th 2013 / Comments (17)

How much Traffic is too much Traffic for CloudFlare?

Evidence suggests it's 100TB per month.

Before I go into the details I want to state two things first:

So the reason I'm writing this is not because we were kicked (after all, CloudFlare was in the right to do so), but because of how shitty it went down.

Read complete post »

Wednesday, February 13th 2013 / Comments (47)

Ejecta

Ejecta is a fast JavaScript, Canvas & Audio implementation for iOS. Today, I'm releasing it under the MIT Open Source license.

Visit the Ejecta website for more info on what it is and how to use it. I will talk a bit more about some implementation details for the Canvas API here.

Implementing a general purpose drawing API, such as the HTML5 Canvas API, on top of OpenGL is by no means an easy endeavor. Before I decided to roll my own solution (you know, I have this problem), I looked at a number of graphics libraries including Google's Skia and OpenVG.

I discovered exactly what I feared beforehand: these libraries do way too much, are too large and too hard to integrate. You can't just use them here and there to draw – instead they replace your whole drawing stack. Getting them to compile alone is a huge pain; getting them to compile on the iPhone and then do what you want seemed close to impossible.

So I began working on my own solution. Implementing the path methods for moveTo(), lineTo(), bezierCurveTo(), etc. was fairly straightforward: have an array of subpaths, where each subpath is an array of points (x,y). Each call to one of these API methods pushes one or more points to the current subpath or closes it.
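
In JavaScript-flavored pseudocode the bookkeeping looks roughly like this (Ejecta's actual implementation is Objective-C/C++; the names here are only illustrative):

var path = {
    subPaths: [],      // each subpath is an array of {x, y} points
    current: null,
    moveTo: function (x, y) {
        this.current = [{x: x, y: y}];          // start a new subpath
        this.subPaths.push(this.current);
    },
    lineTo: function (x, y) {
        if (!this.current) { this.moveTo(x, y); return; }
        this.current.push({x: x, y: y});        // a straight segment adds a single point
    },
    closePath: function () {
        if (this.current && this.current.length) {
            var first = this.current[0];
            this.current.push({x: first.x, y: first.y});  // close by repeating the first point
        }
    }
};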

However, I struggled a bit with getting bezier curves to behave in a manner that makes sense for the current scale; i.e. push more points for large bezier curves and at sharp corners, fewer points for smaller curves and nearly straight sections. After a few days of reading and experimenting, I found this excellent article on adaptive bezier curves and adopted its solution.
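
The core of that approach is recursive subdivision with a flatness test. A much-simplified sketch of the idea - the article's real version handles angle tolerances and degenerate cases far more carefully:

function mid(a, b) { return { x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 }; }
// Recursively split a cubic bezier until it is "flat enough", pushing only as
// many points as the current tolerance actually requires.
function bezierPoints(p1, c1, c2, p2, tolerance, out) {
    // Flatness: how far the control points stray from the p1-p2 chord,
    // relative to the chord length.
    var dx = p2.x - p1.x, dy = p2.y - p1.y;
    var d1 = Math.abs((c1.x - p2.x) * dy - (c1.y - p2.y) * dx);
    var d2 = Math.abs((c2.x - p2.x) * dy - (c2.y - p2.y) * dx);
    if ((d1 + d2) * (d1 + d2) <= tolerance * (dx * dx + dy * dy)) {
        out.push({ x: p2.x, y: p2.y });          // flat enough - one point will do
        return;
    }
    // De Casteljau split at t = 0.5, then recurse into both halves
    var p12 = mid(p1, c1), p23 = mid(c1, c2), p34 = mid(c2, p2);
    var p123 = mid(p12, p23), p234 = mid(p23, p34), p1234 = mid(p123, p234);
    bezierPoints(p1, p12, p123, p1234, tolerance, out);
    bezierPoints(p1234, p234, p34, p2, tolerance, out);
}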

The hard part was getting that array of points onto the screen. For drawing lines (.stroke()) I didn't want to go with the obvious solution of just using GL_LINES, because it has a number of drawbacks, especially on iOS: no anti-aliasing, limited line width and no miters or line caps.

So instead of using GL_LINES to draw, I ended up creating 2 triangles for each line segment and calculating the miter values myself. This correctly honors the API's .miterLimit property, though the bevel it then draws is still a bit off. The code I ended up with is a bit on the ugly side, because it handles a lot of edge cases, but all in all this solution worked very well and is extremely fast.
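
The miter math itself is compact. A toy version of what happens at each joint (not Ejecta's actual code), where n1 and n2 are the unit normals of the two segments that meet there:

// Returns the offset from the joint to the outer miter corner, or null if the
// Canvas spec says a bevel should be drawn instead. halfWidth is lineWidth / 2.
function miterOffset(n1, n2, halfWidth, miterLimit) {
    var mx = n1.x + n2.x, my = n1.y + n2.y;
    var len = Math.sqrt(mx * mx + my * my);
    if (len === 0) { return null; }              // the segments fold back on themselves
    mx /= len; my /= len;
    // The miter tip sits at halfWidth / cos(half turn angle) from the joint; the
    // ratio of miter length to half width is what .miterLimit is compared against.
    var scale = 1 / (mx * n1.x + my * n1.y);
    if (scale > miterLimit) { return null; }
    return { x: mx * halfWidth * scale, y: my * halfWidth * scale };
}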

Implementing .fill() proved to be yet another challenge. With OpenGL, before you can draw a primitive to the screen, you have to break it down into triangles first. This is quite easy to do for convex polygons, but not so much for concave ones that potentially have holes in them.

I spent a few days looking for a triangulation library and soon realized that this is serious business. Triangle, for instance, sports 16k lines of code – I'm quite allergic to libraries that need that much code to solve seemingly simple problems. Poly2Tri looked much more sane, but apparently has some stability problems.

After a bit of searching, I found libtess2, which is based on OpenGL's libtess and is supposed to be extremely robust and quite fast. The code base is excellent and I had no problem integrating it into Ejecta.

However, some tests showed that it's much slower than I hoped it would be. Realtime triangulation of complex polygons isn't very feasible on the iPhone.

In the end, I found a trick that lets you draw polygons in OpenGL without triangulating them first. It is so simple and elegant to implement, yet so ingenious: You can draw polygons with a simple triangle fan and mark those areas that you overdraw in the stencil buffer. See Drawing Filled, Concave Polygons Using the Stencil Buffer. It's a hacker's solution – thinking outside the box – and it fills me with joy.
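
Expressed with WebGL calls (Ejecta does the same with OpenGL ES from Objective-C; drawTriangleFan() and drawBoundingQuad() are assumed helpers), the whole trick is just two passes:

// Requires a stencil buffer, e.g. getContext('webgl', {stencil: true}).
gl.enable(gl.STENCIL_TEST);

// Pass 1: build the coverage mask. INVERT flips the stencil value for every
// fragment, so pixels covered an odd number of times end up non-zero - exactly
// the interior of the polygon under the even-odd rule, holes included.
gl.colorMask(false, false, false, false);
gl.stencilFunc(gl.ALWAYS, 0, 0xff);
gl.stencilOp(gl.KEEP, gl.KEEP, gl.INVERT);
drawTriangleFan(gl, polygonPoints);

// Pass 2: fill the color buffer wherever the mask is set, resetting the
// stencil back to zero in the same draw call.
gl.colorMask(true, true, true, true);
gl.stencilFunc(gl.NOTEQUAL, 0, 0xff);
gl.stencilOp(gl.ZERO, gl.ZERO, gl.ZERO);
drawBoundingQuad(gl, polygonBounds);

gl.disable(gl.STENCIL_TEST);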

There are still some parts missing in my Canvas implementation, namely gradients, shadows and most notably: text. I believe the best solution for drawing text in OpenGL, while honoring the Canvas spec, would be drawing to a texture using the iPhone's CG methods. This will make it quite slow, but should be good enough for a few paragraphs of text.

If you want to help out with anything, grab the Ejecta source code on GitHub – I'd be honored.

Wednesday, September 26th 2012 / Comments (13)