Ejecta

Ejecta is a fast JavaScript, Canvas & Audio implementation for iOS. Today, I'm releasing it under the MIT Open Source license.

Visit the Ejecta website for more info on what it is and how to use it. I will talk a bit more about some implementation details for the Canvas API here.

Implementing a general purpose drawing API, such as the HTML5 Canvas API, on top of OpenGL is by no means an easy endeavor. Before I decided to roll my own solution (you know, I have this problem), I looked at a number of graphics libraries, including Google's Skia and OpenVG.

I discovered exactly what I had feared: these libraries do way too much, are too large and too hard to integrate. You can't just use them here and there to draw – instead, they replace your whole drawing stack. Getting them to compile alone is a huge pain; getting them to compile on the iPhone and then do what you want seemed close to impossible.

So I began working on my own solution. Implementing the path methods moveTo(), lineTo(), bezierCurveTo(), etc. was fairly straightforward: keep an array of subpaths, where each subpath is an array of (x, y) points. Each call to one of these API methods pushes one or more points to the current subpath or closes it.
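
In essence, the bookkeeping looks something like this – a simplified JavaScript sketch; Ejecta itself does this in native code and these names are made up for illustration:

// A path is an array of subpaths; each subpath is an array of points
var path = [];
var subPath = null;

var moveTo = function( x, y ) {
    // Start a new subpath at the given point
    subPath = [ {x: x, y: y} ];
    path.push( subPath );
};

var lineTo = function( x, y ) {
    if( !subPath ) { moveTo( x, y ); return; }
    subPath.push( {x: x, y: y} );
};

var closePath = function() {
    // Connect the subpath back to its first point and end it
    if( subPath && subPath.length ) {
        subPath.push( {x: subPath[0].x, y: subPath[0].y} );
        subPath = null;
    }
};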

However, I struggled a bit with getting bezier curves to behave in a manner that makes sense for the current scale; i.e. push more points for large bezier curves and at sharp corners, fewer points for smaller ones and straight lines. After a few days of reading and experimenting, I found this excellent article on adaptive bezier curves and adopted its solution.
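
The core of its approach, condensed into a JavaScript sketch (simplified – the real thing also needs an angle tolerance for sharp corners, and this is not Ejecta's actual code):

// Recursively subdivide a cubic bezier curve and only emit a point once
// the current segment is flat enough. A smaller tolerance yields more
// points, so it can be tied to the current scale.
var bezierPoints = function( x1,y1, x2,y2, x3,y3, x4,y4, tol, points, depth ) {
    if( depth > 16 ) { return; } // guard against degenerate input

    // Midpoints of the control polygon (de Casteljau subdivision)
    var x12 = (x1+x2)/2, y12 = (y1+y2)/2;
    var x23 = (x2+x3)/2, y23 = (y2+y3)/2;
    var x34 = (x3+x4)/2, y34 = (y3+y4)/2;
    var x123 = (x12+x23)/2, y123 = (y12+y23)/2;
    var x234 = (x23+x34)/2, y234 = (y23+y34)/2;
    var x1234 = (x123+x234)/2, y1234 = (y123+y234)/2;

    // Flatness test: how far do the control points stray from the
    // straight line between the endpoints?
    var dx = x4-x1, dy = y4-y1;
    var d2 = Math.abs( (x2-x4)*dy - (y2-y4)*dx );
    var d3 = Math.abs( (x3-x4)*dy - (y3-y4)*dx );

    if( (d2+d3)*(d2+d3) < tol * (dx*dx + dy*dy) ) {
        points.push( {x: x1234, y: y1234} ); // flat enough – emit and stop
        return;
    }

    // Not flat enough – split the curve in half and recurse
    bezierPoints( x1,y1, x12,y12, x123,y123, x1234,y1234, tol, points, depth+1 );
    bezierPoints( x1234,y1234, x234,y234, x34,y34, x4,y4, tol, points, depth+1 );
};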

The hard part was getting that array of points onto the screen. For drawing lines (.stroke()) I didn't want to go with the obvious solution of just using GL_LINES, because it has a number of drawbacks, especially on iOS: no anti-aliasing, limited line width and no miters or line caps.

So instead of using GL_LINES to draw, I ended up creating two triangles for each line segment and calculating the miter values myself. This correctly honors the API's .miterLimit property, though the bevel it then draws is still a bit off. The code I ended up with is a bit on the ugly side, because it handles a lot of edge cases, but all in all this solution works very well and is extremely fast.
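
The heart of it is the miter computation at each joint. Stripped of all the edge case handling (and with hypothetical names – this is a sketch, not the actual Ejecta code), it boils down to this:

// Compute the outer miter point where segments a→p and p→b join
var miterJoin = function( a, p, b, halfWidth, miterLimit ) {
    // Unit direction vectors of the two segments meeting at p
    var d1x = p.x-a.x, d1y = p.y-a.y, l1 = Math.sqrt(d1x*d1x + d1y*d1y);
    var d2x = b.x-p.x, d2y = b.y-p.y, l2 = Math.sqrt(d2x*d2x + d2y*d2y);
    d1x /= l1; d1y /= l1; d2x /= l2; d2y /= l2;

    // The miter points along the half-vector of the two edge normals
    var n1x = -d1y, n1y = d1x, n2x = -d2y, n2y = d2x;
    var mx = n1x+n2x, my = n1y+n2y;
    var lm = Math.sqrt(mx*mx + my*my);
    mx /= lm; my /= lm;

    // The sharper the corner, the longer the miter: halfWidth/cos(θ/2)
    var miterLength = halfWidth / (mx*n1x + my*n1y);

    // This is exactly what .miterLimit means: the ratio of the miter
    // length to half the line width
    if( miterLength / halfWidth > miterLimit ) {
        return null; // corner too sharp – fall back to a bevel join
    }
    return { x: p.x + mx*miterLength, y: p.y + my*miterLength };
};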

Implementing .fill() proved to be yet another challenge. With OpenGL, before you can draw a primitive to the screen, you have to break it down into triangles first. This is quite easy to do for convex polygons, but not so much for concave ones that potentially have holes in them.

I spent a few days looking for a triangulation library and soon realized that this is serious business. Triangle, for instance, sports 16k lines of code – I'm quite allergic to libraries that need that much code to solve seemingly simple problems. Poly2Tri looked much more sane, but apparently has some stability problems.

After a bit of searching, I found libtess2, which is based on OpenGL's libtess and is supposed to be extremely robust and quite fast. The code base is excellent and I had no problems integrating it into Ejecta.

However, some tests showed that it's much slower than I hoped it would be. Realtime triangulation of complex polygons isn't very feasible on the iPhone.

In the end, I found a trick that lets you draw polygons in OpenGL without triangulating them first. It is so simple and elegant to implement, yet so ingenious: You can draw polygons with a simple triangle fan and mark those areas that you overdraw in the stencil buffer. See Drawing Filled, Concave Polygons Using the Stencil Buffer. It's a hacker's solution – thinking outside the box – and it fills me with joy.
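
In WebGL terms the technique looks like this (Ejecta does the same with OpenGL ES in native code; assume a context created with a stencil buffer, a bound shader, and a vertex buffer where the polygon's points form a fan around the first vertex, followed by a quad covering its bounding box – vertexCount and quadOffset are placeholders):

gl.enable( gl.STENCIL_TEST );

// Pass 1: render the triangle fan into the stencil buffer only. Every
// fragment inverts its stencil bit, so areas covered an odd number of
// times end up marked – the filled areas under the even-odd rule.
gl.colorMask( false, false, false, false );
gl.stencilFunc( gl.ALWAYS, 0, 1 );
gl.stencilOp( gl.KEEP, gl.KEEP, gl.INVERT );
gl.drawArrays( gl.TRIANGLE_FAN, 0, vertexCount );

// Pass 2: draw the bounding quad with the fill color, but only where
// the stencil bit is set; clear the bit along the way.
gl.colorMask( true, true, true, true );
gl.stencilFunc( gl.EQUAL, 1, 1 );
gl.stencilOp( gl.KEEP, gl.KEEP, gl.ZERO );
gl.drawArrays( gl.TRIANGLE_STRIP, quadOffset, 4 );

gl.disable( gl.STENCIL_TEST );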

There are still some parts missing in my Canvas implementation, namely gradients, shadows and, most notably, text. I believe the best solution for drawing text in OpenGL, while honoring the Canvas spec, would be drawing to a texture using the iPhone's CoreGraphics (CG) methods. This will make it quite slow, but should be good enough for a few paragraphs of text.

If you want to help out with anything, grab the Ejecta source code on GitHub – I'd be honored.

Drawing Pixels is Hard

Way harder than it should be.

Back in 2009, when I first started to work on what would become my HTML5 game engine Impact, I was immediately presented with the challenge of scaling the game screen while maintaining crisp, clean pixels. This sounds like an easy problem to solve – after all, Flash did this from day one, and "retro" games are a big chunk of the market, especially for browser games, so it really should be supported – but it's not.

Let's say I have a game with an internal resolution of 320×240 and I want to scale it up 2x to 640×480 when presented on a website. With the HTML5 Canvas element, there are essentially two different ways to do this.

a) Creating the Canvas element in the scaled-up resolution (640×480) and drawing all images at twice the size:

var canvas = document.createElement('canvas');
canvas.width = 640;
canvas.height = 480;

var ctx = canvas.getContext('2d');
ctx.scale( 2, 2 );
ctx.drawImage( img, 0, 0 );

b) Using CSS to scale the Canvas – in my opinion, this is the cleaner way to do it. It nicely decouples the internal canvas size from the size at which it is presented:

var canvas = document.createElement('canvas');
canvas.width = 320;
canvas.height = 240;
canvas.style.width = '640px';
canvas.style.height = '480px';

var ctx = canvas.getContext('2d');
ctx.drawImage( img, 0, 0 );

Both methods have a problem though – they use bilinear (blurry) filtering instead of nearest-neighbor (pixel repetition) when scaling.

For the internal scaling approach (method a), you can set the context's imageSmoothingEnabled property to false in order to get crisp, nearest-neighbor scaling. This has been supported in Firefox for a few years now, but Chrome only recently implemented it, and it is currently unsupported in Safari (including Mobile Safari) and Internet Explorer (test case).

When doing the scaling in CSS (method b), you can use the image-rendering CSS property to specify the scaling algorithm the browser should use. This works well in Firefox and Safari, but all other browsers simply ignore it for the Canvas element (test case).
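
In practice, opting out of the smoothing looks something like this – with the vendor-prefixed property names as they exist at the time of writing:

// Method a: turn off smoothing on the context; set the prefixed
// variants too, since most browsers only support one of them
ctx.imageSmoothingEnabled = false;
ctx.mozImageSmoothingEnabled = false;
ctx.webkitImageSmoothingEnabled = false;

// Method b: request nearest-neighbor scaling via CSS. Assigning an
// unsupported value is simply ignored, so each browser keeps the one
// it understands.
canvas.style.imageRendering = '-moz-crisp-edges';          // Firefox
canvas.style.imageRendering = '-webkit-optimize-contrast'; // Safari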

Of course Internet Explorer is the only browser that currently doesn't support any of these methods.

Not having crisp scaling really bothered me when I initially started to work on Impact. Keep in mind that at the time, no browser supported either of the two methods described above. So I experimented a lot to find a solution.

And I found one. It's incredibly backwards and really quite sad: I do the scaling in JavaScript. Load the pixel data of each image, loop through all pixels, copy and scale the image, pixel by pixel, into a larger canvas, then throw away the original image and use this larger canvas as the source for drawing instead.

var resize = function( img, scale ) {
    // Takes an image and a scaling factor and returns the scaled image
    
    // The original image is drawn into an offscreen canvas of the same size
    // and copied, pixel by pixel into another offscreen canvas with the 
    // new size.
    
    var widthScaled = img.width * scale;
    var heightScaled = img.height * scale;
    
    var orig = document.createElement('canvas');
    orig.width = img.width;
    orig.height = img.height;
    var origCtx = orig.getContext('2d');
    origCtx.drawImage(img, 0, 0);
    var origPixels = origCtx.getImageData(0, 0, img.width, img.height);
    
    var scaled = document.createElement('canvas');
    scaled.width = widthScaled;
    scaled.height = heightScaled;
    var scaledCtx = scaled.getContext('2d');
    var scaledPixels = scaledCtx.getImageData( 0, 0, widthScaled, heightScaled );
    
    for( var y = 0; y < heightScaled; y++ ) {
        for( var x = 0; x < widthScaled; x++ ) {
            var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4;
            var indexScaled = (y * widthScaled + x) * 4;
            scaledPixels.data[ indexScaled ] = origPixels.data[ index ];
            scaledPixels.data[ indexScaled+1 ] = origPixels.data[ index+1 ];
            scaledPixels.data[ indexScaled+2 ] = origPixels.data[ index+2 ];
            scaledPixels.data[ indexScaled+3 ] = origPixels.data[ index+3 ];
        }
    }
    scaledCtx.putImageData( scaledPixels, 0, 0 );
    return scaled;
};
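
Using it is then just a matter of swapping each image for its pre-scaled copy at load time:

// Scale once at load time, draw the pre-scaled copy every frame
var scaled = resize( img, 2 );
ctx.drawImage( scaled, 0, 0 );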

This worked surprisingly well and has been the easiest way to scale up pixel-style games in Impact from day one. The scaling is only done once when the game first loads, so the performance hit isn't that bad, but you still notice the longer load times on mobile devices or when loading big images. After all, it's a stupidly costly operation to do, even in native code. We usually use GPUs for stuff like that.

All in all, doing the scaling in JavaScript is not the "right" solution, but the one that works for all browsers.

Or rather worked for all browsers.

Meet the retina iPhone

When Apple introduced the iPhone 4, it was the first device with a retina display. The pixels on the screen are so small that you can't discern them. This also means that, in order to read anything on a website at all, the website has to be scaled up 2x.

So Apple introduced the devicePixelRatio. It's the ratio of real hardware pixels to CSS pixels. The iPhone 4 has a device pixel ratio of 2, i.e. each CSS pixel is displayed with 2×2 hardware pixels on the screen.

This also means that the following canvas element will be automatically scaled up to 640×480 hardware pixels on a retina device, when drawn on a website. Its internal resolution, however, still is 320×240.

<canvas width="320" height="240"></canvas>

This automatic scaling again happens with the bilinear (blurry) filtering by default.

So, in order to draw at the native hardware resolution, you'd have to do your image scaling in JavaScript as usual, but with twice the scaling factor, create the canvas with twice the internal size and then scale it down again using CSS.
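
Putting that together – a sketch using the resize() function from above:

var ratio = window.devicePixelRatio || 1;

// The game runs at 320×240 and is displayed at 640×480 CSS pixels. On
// a retina device (ratio 2), the canvas needs 1280×960 internal pixels
// to map 1:1 onto the hardware.
var canvas = document.createElement('canvas');
canvas.width = 640 * ratio;
canvas.height = 480 * ratio;
canvas.style.width = '640px';   // scale back down in CSS
canvas.style.height = '480px';

// Fold the device pixel ratio into the pre-scaling factor
var scaled = resize( img, 2 * ratio );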

Or, in recent versions of Safari, use the image-rendering: -webkit-optimize-contrast; CSS property. Nice!

This certainly makes things a bit more complicated, but devicePixelRatio was a sane idea. It makes sense.

Meet the retina MacBook Pro

For the new retina MacBook Pro (MBP), Apple had another idea. Instead of behaving in the same way as Mobile Safari on the iPhone, Safari on the retina MBP will automatically create a canvas element with twice the internal resolution you requested. In theory, this is quite nice if you only want to draw shapes onto your canvas – they will automatically be in retina resolution. However, it significantly breaks drawing images.

Consider this Canvas element:

<canvas width="320" height="240"></canvas>

On the retina MBP, this will actually create a Canvas element with an internal resolution of 640×480. It will still behave as if it had an internal resolution of 320×240, though. Sort of.

This ingenious idea is called backingStorePixelRatio and, you guessed it, for the retina MBP it is 2. It's still 1 for the retina iPhone. Because… yeah…

(Paul Lewis recently wrote a nice article about High DPI Canvas Drawing, including a handy function that mediates between the retina iPhone and MBP and always draws in the native resolution)

Ok, so what happens if you now draw a 320×240 image to this 320×240 Canvas that is really a 640×480 Canvas? Yep, the image will get scaled using bilinear (blurry) filtering. Granted, if it didn't use bilinear filtering, this whole pixel ratio dance wouldn't make much sense. The problem is, there's no opt-out.

Let's say I want to analyze the colors of an image. I'd normally just draw the image to a canvas element, retrieve an array of pixels from the canvas and then do whatever I want to do with them. Like this:

ctx.drawImage( img, 0, 0 );
var pixels = ctx.getImageData( 0, 0, img.width, img.height );
// do something with pixels.data...

On the retina MBP you can't do that anymore. The pixels that getImageData() returns are interpolated pixels, not the original pixels of the image. The image you have drawn to the canvas was first scaled up to fill the bigger backing store, and then scaled down again when retrieved through getImageData(), because getImageData() still acts as if the canvas were 320×240.

Fortunately, Apple also introduced a new getImageDataHD() method to retrieve the real pixel data from the backing store. So all you'd have to do is draw your image to the canvas with half the size, in order to draw it at the real size. Confused yet?

var ratio = ctx.webkitBackingStorePixelRatio || 1;
ctx.drawImage( img, 0, 0, img.width/ratio, img.height/ratio );

var pixels = null;
if( ratio != 1 ) {
    pixels = ctx.webkitGetImageDataHD( 0, 0, img.width, img.height );
}
else {
    pixels = ctx.getImageData( 0, 0, img.width, img.height );
}

(Did I say it's called getImageDataHD()? I lied. You gotta love those vendor prefixes. Imagine how nice it would be if there also was a moz, ms, o and a plain variant!)

The "Good" News

Ok, take a deep breath, there are only 3 different paths you have to consider when drawing sharp pixels on a scaled canvas:

1. The browser supports the context's imageSmoothingEnabled property: create the canvas at the scaled-up size and draw with smoothing disabled (method a).
2. The browser honors the image-rendering CSS property for Canvas elements: keep the internal resolution and let CSS do the scaling (method b).
3. Neither is supported: pre-scale all your images in JavaScript, as described above.

The CSS image-rendering property and the Canvas' imageSmoothingEnabled really do make things a bit easier, but it would be nice if they were universally supported. Safari especially is in desperate need of imageSmoothingEnabled support, with all the crazy retina stuff they have going on.

Let me also go on record saying that backingStorePixelRatio was a bad idea. It would have been a nice opt-in feature, but it's not a good default. A comment from Jake Archibald on Paul Lewis' article tells us why:

<canvas> 2D is a bitmap API, it's pixel dependent. An api that lets you query individual pixels shouldn't be creating pixels you don't ask for.

Apple's backingStorePixelRatio completely breaks the font rendering in Impact, makes games look blurry and breaks a whole bunch of other apps that use direct pixel manipulation. But at least Apple didn't have to update all their dashboard widgets for retina resolution. How convenient!

Update September 18th 2012: To demonstrate the bug in Safari, I built another test case and filed a report with Apple.