PhobosLab http://www.phoboslab.org/log Latest news en Fast Image Filters with WebGL http://www.phoboslab.org/log/2013/11/fast-image-filters-with-webgl <p>WebGLImageFilter is a small JavaScript library for applying a chain of filters to an image.</p> <div> Filters: <div style="float: right"> <select id="webgl-filter-stage-1" style="width: 120px"></select> <select id="webgl-filter-stage-2" style="width: 120px"></select> <select id="webgl-filter-stage-3" style="width: 120px"></select> </div> </div> <canvas id="image-filter-canvas" width="500" height="375"></canvas> <div><em>Sergey Brin in his badass Tesla, sporting Chrome wheels</em></div> <script type="text/javascript" src="http://www.phoboslab.org/files/webgl-image-filter/webgl-image-filter.js?v3"></script> <script type="text/javascript"> (function(){ var presets = [ {name: 'none'}, {name: 'negative'}, {name: 'brightness', args:[1.5]}, {name: 'saturation', args:[1.5]}, {name: 'contrast', args:[1.5]}, {name: 'hue', args:[180]}, {name: 'desaturate'}, {name: 'desaturateLuminance'}, {name: 'brownie'}, {name: 'sepia'}, {name: 'vintagePinhole'}, {name: 'kodachrome'}, {name: 'technicolor'}, {name: 'detectEdges'}, {name: 'sharpen'}, {name: 'emboss'}, {name: 'blur', args:[7]} ]; var fillSelectBox = function( id, onchange ) { var select = document.getElementById(id); select.onchange = onchange; for( var i = 0; i < presets.length; i++ ) { var name = presets[i].name; var opt = document.createElement('option'); opt.value = i; opt.innerHTML = name; select.appendChild(opt); } }; var addFilterFromSelectBox = function( filter, id ) { var index = parseInt(document.getElementById(id).value); var preset = presets[index]; if( preset.name == 'none' ) { return; } filter.addFilter( preset.name, preset.args ); }; // Get the 2d context from the canvas and load an image var canvas = document.getElementById('image-filter-canvas'); var ctx = canvas.getContext('2d'); ctx.textAlign = 'center'; ctx.fillStyle = '#000'; ctx.fillRect(0,0, 
canvas.width, canvas.height); ctx.fillStyle = '#fff'; ctx.fillText("Loading...", canvas.width/2, canvas.height/2); // Create the filter try { var filter = new WebGLImageFilter(); } catch( err ) { ctx.fillStyle = '#000'; ctx.fillRect(0,0, canvas.width, canvas.height); ctx.fillStyle = '#fff'; ctx.fillText("This browser doesn't support WebGL", canvas.width/2, canvas.height/2); return; } var img = new Image(); img.src = '/files/webgl-image-filter/sergey-brin.jpg'; img.onload = function() { canvas.width = img.width; canvas.height = img.height; // When a select box changes its value, run the filter again var onchange = function( ev ) { filter.reset(); addFilterFromSelectBox(filter, 'webgl-filter-stage-1'); addFilterFromSelectBox(filter, 'webgl-filter-stage-2'); addFilterFromSelectBox(filter, 'webgl-filter-stage-3'); var filteredImage = filter.apply(img); // Draw the filtered image into our 2D Canvas ctx.drawImage(filteredImage,0,0); }; // Fill the Select Box and attach the onchange listener fillSelectBox('webgl-filter-stage-1', onchange); fillSelectBox('webgl-filter-stage-2', onchange); fillSelectBox('webgl-filter-stage-3', onchange); document.getElementById('webgl-filter-stage-1').selectedIndex = 8; document.getElementById('webgl-filter-stage-1').onchange(); }; })(); </script> <p>But to quote myself from twitter:</p> <blockquote> <p>That awkward moment when you realize the lib you've been polishing for the past 6 hours already exists. With more features and a better API.</p> </blockquote> <p>~ <a href="https://twitter.com/phoboslab/status/397137160341426176">@phoboslab, Nov 3.</a></p> <p>So, yes, there's already a library called <a href="https://github.com/evanw/glfx.js">glfx.js</a> which basically does the same thing. And I'm almost saddened to say that it's excellent.</p> <p>It's not all in vain, however. My implementation features <em>raw</em> convolution and color matrix filters. 
The latter should be particularly interesting for those coming from the Flash world: it's exactly the same as Flash's <a href="http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/filters/ColorMatrixFilter.html">ColorMatrixFilter</a> and allows for some nice &quot;Instagrammy&quot; photo effects, among others. There are some JavaScript libraries around that implement this ColorMatrixFilter, but they all do it in JavaScript directly, which is quite slow. Mine can be used in real time, even on iOS with <a href="https://github.com/phoboslab/Ejecta">Ejecta</a>.</p> <p>I also learned some things about WebGL and shaders with all this. One interesting aspect is that you can't draw a framebuffer texture onto itself. So in order to apply more than one effect to the same image, you need to juggle 2 textures - use one as the source, the other one as the target and then switch. And if you don't want to waste any draw calls, you need to know which draw call will be the last, so that you can draw directly onto the target Canvas instead of onto a framebuffer texture.</p> <p>WebGL's coordinate system also complicates things a bit. It has its origin in the bottom left corner, instead of in the top left like most 2D APIs, essentially flipping everything on the Y axis. This has caused me a lot of pain with <a href="http://impactjs.com/ejecta">Ejecta</a> already. The solution sounds trivial: just scale your drawing coordinates by -1, but this only gets you so far.
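The two points above (ping-ponging between two framebuffer textures, and flipping Y only on the final draw to the canvas) can be sketched as a small pass planner. This is illustrative pseudologic, not WebGLImageFilter's actual API; all names are made up:

```javascript
// Plan the draw passes for a chain of filters. Two framebuffer textures
// alternate between source and target, because a texture can't be
// rendered onto itself. The last pass targets the visible canvas, which
// saves one draw call, and is the only pass that flips the Y axis to
// compensate for WebGL's bottom-left origin.
function planPasses(filterCount) {
  var passes = [];
  var source = 'image';
  var textures = ['texA', 'texB'];
  for (var i = 0; i < filterCount; i++) {
    var isLast = (i === filterCount - 1);
    var pass = {
      source: source,
      target: isLast ? 'canvas' : textures[i % 2],
      flipY: isLast
    };
    passes.push(pass);
    source = pass.target;
  }
  return passes;
}
```

For three filters this plans image → texA, texA → texB, texB → canvas, with the flip applied exactly once at the end.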
If you need to get the pixel data of your image or draw one canvas into another, you suddenly have to invert everything again.</p> <p>Lesson learned: Always google before you start working.</p> <p>Download: <a href="https://github.com/phoboslab/WebGLImageFilter">WebGLImageFilter on github</a>.</p> Sun, 03 Nov 2013 23:02:43 +0100 http://www.phoboslab.org/log/2013/11/fast-image-filters-with-webgl A Tale of Bad UX http://www.phoboslab.org/log/2013/10/a-tale-of-bad-ux <p>A few weeks ago, I completed my iOS App <a href="http://instant-webcam.com/">Instant Webcam</a> and eagerly submitted it to Apple for review. They approved it about a week later and I was free to sell it in the AppStore. About 30 minutes later my App showed up when you searched for it. All ready to launch, I thought.</p> <p>Before publishing the accompanying blog post and tweeting about it, I tested the App one last time: I downloaded it directly from the AppStore on my iPhone and made sure it works. Which it did. Then, just to see what it looks like, I searched for the App on my iPad 1, which as you know doesn't have a camera. You shouldn't be able to download the App on a camera-less device at all. The AppStore should indicate this, but it didn't.<br/></p> <p>You could still download the App just fine on the iPad 1, despite the device having no camera to use it with. When you start the App, it instantly crashes.</p> <p>Shit.</p> <p>Of course this was my fault. I forgot to set the appropriate key for <code>UIRequiredDeviceCapabilities</code> in the App's <code>Info.plist</code> file - namely the <code>video-camera</code> entry that tells the AppStore that this App indeed requires camera capabilities.</p> <p>You can't test this at all before submitting your App for review. There's no &quot;staging area&quot; where you can see what your App will look like in the store. This bug only presents itself after release. You can't test for it beforehand.
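For the record, the missing <code>Info.plist</code> entry is tiny. The key and the <code>video-camera</code> value are as documented by Apple; treat the exact formatting as a sketch:

```xml
<key>UIRequiredDeviceCapabilities</key>
<array>
	<string>video-camera</string>
</array>
```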
Apple's review process should probably have caught this bug, but didn't.</p> <p>The <code>UIRequiredDeviceCapabilities</code> property only affects the presentation in the AppStore, yet it has to be set in the App directly. Which means that in order to change it, you have to recompile your App, submit it for review again and wait a week or two till it's approved. Not fun.</p> <p>Worse still, you can't release an update for your App that requires features that your original version did not require. So this bug that only presents itself after release is essentially unfixable.</p> <p>The official way to &quot;mitigate&quot; this, confirmed via iTunes Connect support, is the following:</p> <ul> <li>recompile your App with a new version number and name</li> <li>submit this update to Apple for review</li> <li>wait till the update is approved. This will free up the original App name</li> <li>delete your App from the store</li> <li>add the desired device capabilities to your App's <code>Info.plist</code></li> <li>recompile your App with the old version number and original name</li> <li>submit this as a new App to Apple for review</li> <li>enter all your metadata, descriptions and screenshots again</li> <li>wait till your App is approved</li> <li>publish your App</li> </ul> <p>Suffice it to say, you can still download <a href="http://instant-webcam.com/">Instant Webcam</a> on your camera-less iPad or iPod and have it crash immediately.</p> Tue, 29 Oct 2013 16:06:19 +0100 http://www.phoboslab.org/log/2013/10/a-tale-of-bad-ux HTML5 Live Video Streaming via WebSockets http://www.phoboslab.org/log/2013/09/html5-live-video-streaming-via-websockets <p>When I built my <a href="http://instant-webcam.com/">Instant Webcam</a> App, I was searching for solutions to stream live video from the iPhone's camera to browsers. There were none.</p> <p>When it comes to (live) streaming video with HTML5, the situation is pretty dire. HTML5 Video currently has no formalized support for streaming whatsoever.
Safari supports the awkward <a href="http://en.wikipedia.org/wiki/HTTP_Live_Streaming">HTTP Live Streaming</a> and there's an upcoming <a href="http://www.w3.org/TR/2013/WD-media-source-20130905/">Media Source Extensions</a> standard as well as <a href="http://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP">MPEG-DASH</a>. But all these solutions divide the video into shorter segments, each of which can be downloaded by the browser individually. This introduces a <em>minimum</em> lag of 5 seconds.</p> <p>So here's a totally different solution that works in any modern browser: Firefox, Chrome, Safari, Mobile Safari, Chrome for Android and even Internet Explorer 10.</p> <p><a href="http://www.phoboslab.org/log/2013/09/html5-live-video-streaming-via-websockets">Read complete post &raquo;</a></p> Wed, 11 Sep 2013 15:22:36 +0200 http://www.phoboslab.org/log/2013/09/html5-live-video-streaming-via-websockets Quake for Oculus Rift http://www.phoboslab.org/log/2013/07/quake-for-oculus-rift <p>With many &quot;modern&quot; games like Half-Life 2 and Doom 3 already modified to support the <a href="http://www.oculusvr.com/">Oculus Rift</a>, I decided to give one of my favorite classic games a shot: the original Quake.</p> <p><img class="center" src="http://www.phoboslab.org/files/images/quake-rift.jpg" alt=""/></p> <p>Id Software long ago <a href="https://github.com/id-Software/Quake">released the full source code of Quake</a> under the GPL license. It has been ported to virtually every hardware platform in existence by the community and is still maintained and kept up to date by several Open Source projects. One of them is the excellent <a href="http://quakespasm.sourceforge.net/">Quakespasm</a> - it aims to be faithful to the original with no changes to the gameplay, but tons of smaller improvements and bugfixes.
It's built on top of SDL, so it runs on Windows, OSX and Linux.</p> <p>Quakespasm provided a nice and clean starting point to implement support for the Rift. It's evident that Quake's C source code has a radically different style than John Carmack's later games, with almost all data stored in global variables and tons of functions with no arguments. Still, the source is very straightforward and easily understandable. I had almost no problems implementing the Rift support, but in the end took quite a few shortcuts with some dirty hacks.</p> <p>All in all, it works really well with the Rift. Quake feels even grittier and darker than it ever did before. The textures are pretty coarse and you can count the polygons on the enemies by hand, but it all comes together so nicely when you run through the corridors, completely immersed in the world, with the original Nine Inch Nails soundtrack blasting through your headphones. For me personally, this ended up being one of the most enjoyable Rift experiences.</p> <p><img class="center" src="http://www.phoboslab.org/files/images/quake-rift-screenshot.jpg" alt=""/></p> <p>There are still various problems. Making the UI readable in Rift mode is a single dirty hack, some values for the eye offsets are elaborate guesses and in general this thing could have been implemented a lot more cleanly. But as far as I can tell, everything works like it should. I only tried this on Windows; my guess is that it works on OSX as well, but I haven't changed the project files to incorporate the new code.</p> <p>To enable Rift support, bring up the console (~ key) and type the following:</p> <pre> vr_enabled 1 </pre> <p>If your Rift is connected, this should be all that's needed to get started. On my system, turning on VSync was a good idea - updates are a bit slower, but less stuttery.
There's no tearing without VSync, so I assume that SDL is doing something funky here behind the scenes.</p> <ul> <li><a href="http://www.phoboslab.org/files/quakespasm-rift-win.zip">Quakespasm Rift for Windows</a> - includes Shareware pak0.</li> <li><a href="http://www.phoboslab.org/files/quakespasm-rift-mac.zip">Quakespasm Rift for OSX</a> - includes Shareware pak0.</li> <li><a href="http://github.com/phoboslab/quakespasm-rift">Source code on Github</a></li> </ul> <p>You can still buy the full game at <a href="http://www.idsoftware.com/games/quake/quake">idsoftware.com</a> or on Steam.</p> <p><em>Edit July 6th:</em> I fixed some rendering issues that warped the view. Please re-download.</p> <p><em>Edit July 7th:</em> Switched to predicted view orientation and added <code>vr_supersample</code> option.</p> <p><em>Edit August 22nd:</em> renamed all cvars to have a <code>vr_</code> prefix. See the <a href="https://github.com/phoboslab/Quakespasm-Rift/blob/master/README.md">Readme on Github</a> for a list of available cvars.</p> Fri, 05 Jul 2013 14:22:37 +0200 http://www.phoboslab.org/log/2013/07/quake-for-oculus-rift MPEG1 Video Decoder in JavaScript http://www.phoboslab.org/log/2013/05/mpeg1-video-decoder-in-javascript <p>With still no common video format for HTML5 in sight, I decided to implement an MPEG1 decoder in JavaScript. I know there's already an <a href="https://github.com/mbebenita/Broadway">h264 decoder for JavaScript</a> around, but it's huge, compiled with <a href="https://github.com/kripken/emscripten">emscripten</a> and quite complicated.</p> <p>An MPEG1 decoder sounded like a relatively simple and fun weekend project. While the real world use cases for this are of course a bit limited, I still learned a whole lot about video codecs in the process. 
The size of the source is just around 15kb gzipped and the performance is quite <em>okay-ish</em> - a 320x240 video easily plays at 30fps on the iPhone 5.</p> <p><a href="http://www.phoboslab.org/log/2013/05/mpeg1-video-decoder-in-javascript">Read complete post &raquo;</a></p> Tue, 07 May 2013 18:09:53 +0200 http://www.phoboslab.org/log/2013/05/mpeg1-video-decoder-in-javascript How much Traffic is too much Traffic for CloudFlare? http://www.phoboslab.org/log/2013/02/how-much-traffic-is-too-much-traffic-for-cloudflare <p>Evidence suggests it's 100TB per month.</p> <p>Before I go into the details, I want to state two things first:</p> <ul> <li>CloudFlare generously provided most of the bandwidth for our site for a year, without any hiccups.</li> <li>We (unknowingly) violated their TOS. However, I was assured that was not the reason we were kicked.</li> </ul> <p>So the reason I'm writing this is not because we were kicked (after all, CloudFlare was within their rights to do so), but because of how shitty it went down.</p> <p><a href="http://www.phoboslab.org/log/2013/02/how-much-traffic-is-too-much-traffic-for-cloudflare">Read complete post &raquo;</a></p> Wed, 13 Feb 2013 18:39:03 +0100 http://www.phoboslab.org/log/2013/02/how-much-traffic-is-too-much-traffic-for-cloudflare Ejecta http://www.phoboslab.org/log/2012/09/ejecta <p>Ejecta is a fast JavaScript, Canvas &amp; Audio implementation for iOS. Today, I'm releasing it under the MIT Open Source license.</p> <iframe src="http://player.vimeo.com/video/50138422" width="500" height="281" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe> <p>Visit the <a href="http://impactjs.com/ejecta">Ejecta website</a> for more info on what it is and how to use it. I will talk a bit more about some implementation details for the Canvas API here.</p> <p>Implementing a general purpose drawing API, such as the HTML5 Canvas API, on top of OpenGL is by no means an easy endeavor.
Before I decided to roll my own solution (you know, <a href="http://www.phoboslab.org/log/2010/10/i-have-a-problem">I have this problem</a>), I looked at a number of graphics libraries, including Google's <a href="http://code.google.com/p/skia/">skia</a> and <a href="http://www.khronos.org/openvg/">OpenVG</a>.</p> <p>I discovered exactly what I feared beforehand: these libraries do way too much, are too large and too hard to integrate. You can't just use them here and there to draw – instead, they replace your whole drawing stack. Getting them to compile alone is a huge pain; getting them to compile on the iPhone and then do what you wanted seemed close to impossible.</p> <p>So I began working on my own solution. Implementing the path methods <code>moveTo()</code>, <code>lineTo()</code>, <code>bezierCurveTo()</code>, etc. was fairly straightforward: have an array of subpaths, where each subpath is an array of points (x,y). Each call to the API methods pushes one or more points to the subpath or closes it.<br/></p> <p>However, I struggled a bit with getting bezier curves to behave in a manner that makes sense for the current scale; i.e. push more points for large bezier curves and at sharp corners, fewer points for smaller ones and straight lines. After a few days of reading and experimenting, I found this <a href="http://www.antigrain.com/research/adaptive_bezier/index.html">excellent article on adaptive bezier curves</a> and adopted its solution.</p> <p>The hard part was getting that array of points onto the screen. For drawing lines (<code>.stroke()</code>), I didn't want to go with the obvious solution of just using GL_LINES, because it has a number of drawbacks, especially on iOS: no anti-aliasing, limited line width and no miters or line caps.</p> <p>So instead of using GL_LINES to draw, I ended up creating 2 triangles for each line segment and calculating the miter values myself.
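The miter math this refers to can be sketched like this. This is not Ejecta's actual code, just the standard construction (assumed vector format, and the degenerate 180° turn is deliberately not handled):

```javascript
// Given the unit direction vectors of two joined line segments, compute
// the unit miter vector and the miter length. The miter length is how
// far the join extends, in units of the half line width; comparing it
// against the context's .miterLimit decides between a miter and a bevel.
function miter(d1, d2) {
  var n1 = { x: -d1.y, y: d1.x }; // normal of the first segment
  var n2 = { x: -d2.y, y: d2.x }; // normal of the second segment
  var mx = n1.x + n2.x, my = n1.y + n2.y;
  var len = Math.sqrt(mx * mx + my * my); // 0 for a 180° turn, not handled
  var m = { x: mx / len, y: my / len };   // unit miter direction
  var miterLength = 1 / (m.x * n1.x + m.y * n1.y);
  return { direction: m, length: miterLength };
}
```

Each segment then contributes two triangles whose outer vertices are offset along these normals, with the shared join vertices offset along the miter vector. A straight continuation gives a miter length of 1; a 90° corner gives √2.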
This correctly honors the API's <code>.miterLimit</code> property, though the bevel it then draws is still a bit off. The code I ended up with is a bit on the ugly side, because it handles a lot of edge cases, but all in all this solution worked very well and is extremely fast.</p> <p>Implementing <code>.fill()</code> proved to be yet another challenge. With OpenGL, before you can draw a primitive to the screen, you have to break it down into triangles first. This is quite easy to do for convex polygons, but not so much for concave ones that potentially have holes in them.</p> <p>I spent a few days looking for a triangulation library and soon realized that this is serious business. <a href="http://www.cs.cmu.edu/~quake/triangle.html">Triangle</a>, for instance, sports 16k loc – I'm quite allergic to libraries that need that much code to solve seemingly simple problems. <a href="http://code.google.com/p/poly2tri/">Poly2Tri</a> looked much more sane, but apparently has some stability problems.<br/></p> <p>After a bit of searching, I found <a href="http://digestingduck.blogspot.de/2009/07/libtess2.html">libtess2</a>, which is based on OpenGL's libtess and is supposed to be extremely robust and quite fast. The code base is excellent and I had no problem integrating it with Ejecta.</p> <p>However, some tests showed that it's much slower than I hoped it would be. Realtime triangulation of complex polygons isn't very feasible on the iPhone.</p> <p>In the end, I found a trick that lets you draw polygons in OpenGL without triangulating them first. It is so simple and elegant to implement, yet so ingenious: you can draw polygons with a simple triangle fan and mark the areas that you overdraw in the stencil buffer. See <a href="http://fly.cc.fer.hr/~unreal/theredbook/chapter13.html">Drawing Filled, Concave Polygons Using the Stencil Buffer</a>.
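Why the trick works can be demonstrated in plain JavaScript (assumed helper names, nothing here is Ejecta's code): walking the triangle fan and toggling one bit per covered point is exactly what GL_INVERT does in the stencil buffer, and a point ends up filled iff an odd number of fan triangles cover it, which is the even-odd fill rule.

```javascript
// Signed area test: which side of edge a-b the point p lies on.
function edgeSign(p, a, b) {
  return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1]);
}

// Point-in-triangle via consistent edge signs (boundary counts as inside).
function inTriangle(p, a, b, c) {
  var d1 = edgeSign(p, a, b), d2 = edgeSign(p, b, c), d3 = edgeSign(p, c, a);
  var hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
  var hasPos = d1 > 0 || d2 > 0 || d3 > 0;
  return !(hasNeg && hasPos);
}

// Simulated stencil pass: draw the polygon as a triangle fan from the
// first vertex and invert the "stencil bit" for every covered point.
// The cover pass would then fill only where the bit is still set.
function insidePolygon(p, poly) {
  var stencil = false;
  for (var i = 1; i < poly.length - 1; i++) {
    if (inTriangle(p, poly[0], poly[i], poly[i + 1])) stencil = !stencil;
  }
  return stencil;
}
```

In GL terms: the first pass renders the fan with color writes disabled and the stencil op set to GL_INVERT, and the second pass renders a bounding quad wherever the stencil bit is set.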
It's a hacker's solution – thinking outside the box – and it fills me with joy.</p> <p>There are still some parts missing in my Canvas implementation, namely gradients, shadows and, most notably, text. I believe the best solution for drawing text in OpenGL, while honoring the Canvas spec, would be drawing to a texture using the iPhone's CG methods. This will make it quite slow, but it should be good enough for a few paragraphs of text.</p> <p>If you want to help out with anything, grab the <a href="https://github.com/phoboslab/Ejecta">Ejecta source code on github</a> – I'd be honored.</p> Wed, 26 Sep 2012 15:58:15 +0200 http://www.phoboslab.org/log/2012/09/ejecta Drawing Pixels is Hard http://www.phoboslab.org/log/2012/09/drawing-pixels-is-hard <p>Way harder than it should be.</p> <p>Back in 2009, when I first started to work on what would become my HTML5 game engine <a href="http://impactjs.com/">Impact</a>, I was immediately presented with the challenge of scaling the game screen while maintaining crisp, clean pixels. This sounds like an easy problem to solve – after all, Flash did this from day one, and &quot;retro&quot; games are a big chunk of the market, especially for browser games, so it really should be supported – but it's not.</p> <p>Let's say I have a game with an internal resolution of 320×240 and I want to scale it up 2x to 640×480 when presented on a website.
With the HTML5 Canvas element, there are essentially two different ways to do this.</p> <p><em>a)</em> Creating the Canvas element in the scaled up resolution (640×480) and drawing all images at twice the size:</p> <pre> <span class="K">var</span> canvas = document.createElement(<span class="S">'canvas'</span>); canvas.width = <span class="N">640</span>; canvas.height = <span class="N">480</span>; <span class="K">var</span> ctx = canvas.getContext(<span class="S">'2d'</span>); ctx.scale( <span class="N">2</span>, <span class="N">2</span> ); ctx.drawImage( img, <span class="N">0</span>, <span class="N">0</span> ); </pre> <p><em>b)</em> Using CSS to scale the Canvas – in my opinion this is the cleaner way to do it. It nicely decouples the internal canvas size from the size at which it is presented:</p> <pre> <span class="K">var</span> canvas = document.createElement(<span class="S">'canvas'</span>); canvas.width = <span class="N">320</span>; canvas.height = <span class="N">240</span>; canvas.style.width = <span class="S">'640px'</span>; canvas.style.height = <span class="S">'480px'</span>; <span class="K">var</span> ctx = canvas.getContext(<span class="S">'2d'</span>); ctx.drawImage( img, <span class="N">0</span>, <span class="N">0</span> ); </pre> <p>Both methods have a problem, though – they use bilinear (blurry) filtering instead of nearest-neighbor (pixel repetition) when scaling.</p> <p><img class="center" src="http://www.phoboslab.org/files/images/biolab-scaled.png" alt=""/> <em>What I wanted (left) vs. what I got (right)</em></p> <p>For the internal scaling approach (method <em>a</em>), you can set the context's <code>imageSmoothingEnabled</code> property to <code>false</code> in order to get crisp, nearest-neighbor scaling.
This has been supported in Firefox for a few years now, but Chrome only just recently implemented it, and it is currently unsupported in Safari (including Mobile Safari) and Internet Explorer (<a href="http://jsfiddle.net/VAXrL/190/">test case</a>).</p> <p>When doing the scaling in CSS (method <em>b</em>), you can use the <a href="https://developer.mozilla.org/en-US/docs/CSS/Image-rendering">image-rendering</a> CSS property to specify the scaling algorithm the browser should use. This works well in Firefox and Safari, but all other browsers simply ignore it for the Canvas element (<a href="http://jsfiddle.net/VAXrL/21/">test case</a>).</p> <p>Of course, Internet Explorer is the only browser that currently doesn't support either of these methods.</p> <p>Not having crisp scaling really bothered me when I initially started to work on Impact. Keep in mind that at the time no browser supported either of the two methods described above. So I experimented a lot to find a solution.<br/></p> <p>And I found one. It's incredibly backwards and really quite sad: I do the scaling in JavaScript. Load the pixel data of each image, loop through all pixels and copy and scale the image, pixel by pixel, into a larger canvas, then throw away the original image and use this larger canvas as the source for drawing instead.</p> <pre> <span class="K">var</span> resize = <span class="K">function</span>( img, scale ) { <span class="C">// Takes an image and a scaling factor and returns the scaled image </span> <span class="C">// The original image is drawn into an offscreen canvas of the same size </span> <span class="C">// and copied, pixel by pixel into another offscreen canvas with the </span> <span class="C">// new size.
</span> <span class="K">var</span> widthScaled = img.width * scale; <span class="K">var</span> heightScaled = img.height * scale; <span class="K">var</span> orig = document.createElement(<span class="S">'canvas'</span>); orig.width = img.width; orig.height = img.height; <span class="K">var</span> origCtx = orig.getContext(<span class="S">'2d'</span>); origCtx.drawImage(img, <span class="N">0</span>, <span class="N">0</span>); <span class="K">var</span> origPixels = origCtx.getImageData(<span class="N">0</span>, <span class="N">0</span>, img.width, img.height); <span class="K">var</span> scaled = document.createElement(<span class="S">'canvas'</span>); scaled.width = widthScaled; scaled.height = heightScaled; <span class="K">var</span> scaledCtx = scaled.getContext(<span class="S">'2d'</span>); <span class="K">var</span> scaledPixels = scaledCtx.getImageData( <span class="N">0</span>, <span class="N">0</span>, widthScaled, heightScaled ); <span class="K">for</span>( <span class="K">var</span> y = <span class="N">0</span>; y &lt; heightScaled; y++ ) { <span class="K">for</span>( <span class="K">var</span> x = <span class="N">0</span>; x &lt; widthScaled; x++ ) { <span class="K">var</span> index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * <span class="N">4</span>; <span class="K">var</span> indexScaled = (y * widthScaled + x) * <span class="N">4</span>; scaledPixels.data[ indexScaled ] = origPixels.data[ index ]; scaledPixels.data[ indexScaled+<span class="N">1</span> ] = origPixels.data[ index+<span class="N">1</span> ]; scaledPixels.data[ indexScaled+<span class="N">2</span> ] = origPixels.data[ index+<span class="N">2</span> ]; scaledPixels.data[ indexScaled+<span class="N">3</span> ] = origPixels.data[ index+<span class="N">3</span> ]; } } scaledCtx.putImageData( scaledPixels, <span class="N">0</span>, <span class="N">0</span> ); <span class="K">return</span> scaled; } </pre> <p>This worked surprisingly well and has been the easiest way to 
scale up pixel-style games in Impact from day one. The scaling is only done once when the game first loads, so the performance hit isn't that bad, but you still notice the longer load times on mobile devices or when loading big images. After all, it's a stupidly costly operation to do, even in native code. We usually use GPUs for stuff like that.</p> <p>All in all, doing the scaling in JavaScript is not the &quot;right&quot; solution, but the one that works in all browsers.<br/></p> <p>Or rather <em>worked</em> in all browsers.</p> <h2>Meet the retina iPhone</h2> <p>When Apple introduced the iPhone 4, it was the first device with a <em>retina</em> display. The pixels on the screen are so small that you can't discern them. This also means that, in order to read anything on a website at all, the website has to be scaled up 2x.</p> <p>So Apple introduced the <code>devicePixelRatio</code>. It's the ratio of real hardware pixels to CSS pixels. The iPhone 4 has a device pixel ratio of 2, i.e. one CSS pixel is displayed with 2 hardware pixels on the screen.</p> <p>This also means that the following canvas element will be automatically scaled up to 640×480 hardware pixels on a retina device when drawn on a website. Its internal resolution, however, is still 320×240.</p> <pre> &lt;canvas width=<span class="S">&quot;320&quot;</span> height=<span class="S">&quot;240&quot;</span>&gt; </pre> <p>This automatic scaling again happens with bilinear (blurry) filtering by default.</p> <p>So, in order to draw at the native hardware resolution, you'd have to do your image scaling in JavaScript as usual, but with twice the scaling factor, create the canvas with twice the internal size and then scale it <em>down</em> again using CSS.</p> <p>Or, in recent Safaris, use the <code>image-rendering: -webkit-optimize-contrast;</code> CSS property. Nice!</p> <p>This certainly makes things a bit more complicated, but <code>devicePixelRatio</code> was a sane idea.
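In code, the usual bookkeeping boils down to a few multiplications. A pure sketch with assumed names (applying the numbers to a real canvas element and its CSS size is the obvious next step):

```javascript
// Hypothetical helper, not from the article: compute the internal
// (backing store) size and the CSS display size for a canvas on a
// device with the given pixel ratio.
function retinaSizes(cssWidth, cssHeight, devicePixelRatio) {
  return {
    canvasWidth: cssWidth * devicePixelRatio,   // internal hardware pixels
    canvasHeight: cssHeight * devicePixelRatio,
    styleWidth: cssWidth + 'px',                // CSS pixels, via element style
    styleHeight: cssHeight + 'px'
  };
}
```

For the 320×240 canvas from the example above on an iPhone 4 (ratio 2), this yields a 640×480 backing store displayed at 320×240 CSS pixels.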
It makes sense.</p> <h2>Meet the retina MacBook Pro</h2> <p>For the new retina MacBook Pro (MBP), Apple had another idea. Instead of behaving in the same way as Mobile Safari on the iPhone, Safari on the retina MBP will automatically create a canvas element with twice the internal resolution you requested. In theory, this is quite nice if you only want to draw shapes onto your canvas - they will automatically be in retina resolution. However, it significantly breaks drawing images.</p> <p>Consider this Canvas element:</p> <pre> &lt;canvas width=<span class="S">&quot;320&quot;</span> height=<span class="S">&quot;240&quot;</span>&gt;&lt;/canvas&gt; </pre> <p>On the retina MBP, this will actually create a Canvas element with an internal resolution of 640×480. It will still behave as if it had an internal resolution of 320×240, though. Sort of.<br/></p> <p>This ingenious idea is called <code>backingStorePixelRatio</code> and, you guessed it, for the retina MBP it is <code>2</code>. It's still <code>1</code> for the retina iPhone. Because… yeah…</p> <p>(Paul Lewis recently wrote a nice article about <a href="http://www.html5rocks.com/en/tutorials/canvas/hidpi/">High DPI Canvas Drawing</a>, including a handy function that mediates between the retina iPhone and MBP and always draws in the native resolution.)</p> <p>Ok, so what happens if you now draw a 320×240 image to this 320×240 Canvas that in reality is a 640×480 Canvas? Yep, the image will get scaled using bilinear (blurry) filtering. Granted, if it didn't use bilinear filtering, this whole pixel ratio dance wouldn't make much sense. The problem is, there's no <em>opt-out</em>.</p> <p>Let's say I want to analyze the colors of an image. I'd normally just draw the image to a canvas element, retrieve an array of pixels from the canvas, and then do whatever I want with them.
Like this:</p> <pre> ctx.drawImage( img, <span class="N">0</span>, <span class="N">0</span> ); <span class="K">var</span> pixels = ctx.getImageData( <span class="N">0</span>, <span class="N">0</span>, img.width, img.height ); <span class="C">// do something with pixels.data... </span></pre> <p>On the retina MBP you can't do that anymore. The pixels that <code>getImageData()</code> returns are interpolated pixels, not the original pixels of the image. The image you have drawn to the canvas was first scaled up, to meet the bigger backing store and then scaled down again when retrieved through <code>getImageData()</code>, because <code>getImageData()</code> still acts as if the canvas was 320×240.</p> <p>Fortunately, Apple also introduced a new <code>getImageDataHD()</code> method to retrieve the real pixel data from the backing store. So all you'd have to do is draw your image to the canvas with half the size, in order to draw it at the real size. Confused yet?</p> <pre> <span class="K">var</span> ratio = ctx.webkitBackingStorePixelRatio || <span class="N">1</span>; ctx.drawImage( img, <span class="N">0</span>, <span class="N">0</span>, img.width/ratio, img.height/ratio ); <span class="K">var</span> pixels = <span class="K">null</span>; <span class="K">if</span>( ratio != <span class="N">1</span> ) { pixels = ctx.webkitGetImageDataHD( <span class="N">0</span>, <span class="N">0</span>, img.width, img.height ); } <span class="K">else</span> { pixels = ctx.getImageData( <span class="N">0</span>, <span class="N">0</span>, img.width, img.height ); } </pre> <p>(Did I say it's called <code>getImageDataHD()</code>? I lied. You gotta love those vendor prefixes. 
Imagine how nice it would be if there were also a <code>moz</code>, <code>ms</code>, <code>o</code> and a plain variant!)</p> <h2>The &quot;Good&quot; News</h2> <p>Ok, take a deep breath: there are <em>only</em> 3 different paths you have to consider when drawing sharp pixels on a scaled canvas.</p> <ul> <li>Check the <code>backingStorePixelRatio</code>. If it's not <code>1</code>, divide your canvas size and the destination size of all image draw calls by it, then scale the canvas element up using CSS and the <code>image-rendering</code> property. (Safari)</li> <li>Check if the <code>imageSmoothingEnabled</code> property is available and, if so, set it to false. Create your Canvas in the final, scaled size and draw all images with your scaling factor. Don't use CSS to scale the Canvas. (Chrome, Firefox)</li> <li>Use JavaScript to scale up all images at load time. (Internet Explorer)</li> </ul> <p><br/> The CSS <code>image-rendering</code> property and the Canvas' <code>imageSmoothingEnabled</code> really make things a bit easier, but it would be nice if they were universally supported. Safari especially is in desperate need of <code>imageSmoothingEnabled</code> support, with all the crazy retina stuff they have going on.</p> <p>Let me also go on record saying that <code>backingStorePixelRatio</code> was a bad idea. It would have been a nice <em>opt-in</em> feature, but it's not a good default. A <a href="http://disq.us/8bint4">comment from Jake Archibald</a> on Paul Lewis' article tells us why:</p> <blockquote> <p>&lt;canvas&gt; 2D is a bitmap API, it's pixel dependent. An api that lets you query individual pixels shouldn't be creating pixels you don't ask for.</p> </blockquote> <p>Apple's <code>backingStorePixelRatio</code> completely breaks the font rendering in Impact, makes games look blurry and breaks a whole bunch of other apps that use direct pixel manipulation. But at least Apple didn't have to update all their dashboard widgets for retina resolution.
How convenient!</p> <p><em>Update September 18th 2012:</em> To demonstrate the bug in Safari, I built another <a href="http://www.phoboslab.org/crap/backingstore/">test case</a> and filed a report with Apple.</p> Fri, 14 Sep 2012 01:01:43 +0200 http://www.phoboslab.org/log/2012/09/drawing-pixels-is-hard