Date: Monday, 08 Aug 2011 02:22
This time around, the now traditional making-of blog post is a video instead...

I didn't go much into detail but hopefully there are some interesting bits:



Thanks to Mozilla for sponsoring!

Clean up
Date: Friday, 08 Apr 2011 12:30
Don't include in your portfolio projects you wouldn't enjoy doing again.

That's something I learnt right when I started this site. The aim here was just to share some experimentation and have fun; it wasn't meant to be my portfolio. But people started contacting me proposing interesting projects, and at that point I took down my old (and boring) portfolio and kept doing experiments.

So it's now time to apply that concept again... At this point I've no interest in doing Flash projects any more, hence I've removed all the Flash projects and experiments (the files are still there, just not linked).
Date: Tuesday, 04 Jan 2011 04:53
One of the things WebGL will bring us is Fragment Shaders. If you're familiar with Pixel Bender you know how Fragment Shaders work.

Although they usually go together with Vertex Shaders, you can set things up to do just 2D effects with them. Iq did exactly this some months ago with Shader Toy — a browser-based Fragment Shader editor.

This week I started experimenting with this. The first thing I needed was a sandbox with the basic WebGL initialisation code. With that done, it's just a matter of testing values and refreshing the browser. I tried to have a compile button right in the page so I wouldn't even need to refresh, but it was overcomplicating the code...
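
This isn't the actual sandbox code, just a rough sketch of what such a sandbox boils down to: a full-screen quad whose pixels are computed by a fragment shader, redrawn on an interval.

var canvas = document.createElement( 'canvas' );
canvas.width = canvas.height = 512;
document.body.appendChild( canvas );

// 'experimental-webgl' was the context name browsers used at the time
var gl = canvas.getContext( 'experimental-webgl' );

function compile( type, source ) {

	var shader = gl.createShader( type );
	gl.shaderSource( shader, source );
	gl.compileShader( shader );
	return shader;

}

var program = gl.createProgram();
gl.attachShader( program, compile( gl.VERTEX_SHADER,
	'attribute vec2 position; void main() { gl_Position = vec4( position, 0.0, 1.0 ); }' ) );
gl.attachShader( program, compile( gl.FRAGMENT_SHADER,
	'precision mediump float; uniform float time;' +
	'void main() { vec2 p = gl_FragCoord.xy / 512.0;' +
	'gl_FragColor = vec4( p, sin( time ) * 0.5 + 0.5, 1.0 ); }' ) );
gl.linkProgram( program );
gl.useProgram( program );

// two triangles covering the whole viewport
var buffer = gl.createBuffer();
gl.bindBuffer( gl.ARRAY_BUFFER, buffer );
gl.bufferData( gl.ARRAY_BUFFER, new Float32Array( [ - 1, - 1, 1, - 1, - 1, 1, 1, - 1, 1, 1, - 1, 1 ] ), gl.STATIC_DRAW );

var position = gl.getAttribLocation( program, 'position' );
gl.enableVertexAttribArray( position );
gl.vertexAttribPointer( position, 2, gl.FLOAT, false, 0, 0 );

var time = gl.getUniformLocation( program, 'time' );

setInterval( function () {

	// keep the value small so mediump sin() doesn't lose precision
	gl.uniform1f( time, ( new Date().getTime() % 100000 ) / 1000 );
	gl.drawArrays( gl.TRIANGLES, 0, 6 );

}, 1000 / 60 );

From there, all the tinkering happens in the fragment shader string.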

These are the tests I've done so far:

Next thing on the list is to implement render to texture so I can use the output of the fragment shader as input. That'll allow crazy feedback effects and even crazier drawing tools!
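
For the curious, the WebGL side of that is a framebuffer with a texture attached to it. A rough sketch of the setup, assuming the same gl context as the sandbox sketch above:

var texture = gl.createTexture();
gl.bindTexture( gl.TEXTURE_2D, texture );
gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, 512, 512, 0, gl.RGBA, gl.UNSIGNED_BYTE, null );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE );

var framebuffer = gl.createFramebuffer();
gl.bindFramebuffer( gl.FRAMEBUFFER, framebuffer );
gl.framebufferTexture2D( gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0 );

// pass 1: render into the texture
gl.bindFramebuffer( gl.FRAMEBUFFER, framebuffer );
gl.drawArrays( gl.TRIANGLES, 0, 6 );

// pass 2: render to the screen, with the previous output bound as input
gl.bindFramebuffer( gl.FRAMEBUFFER, null );
gl.bindTexture( gl.TEXTURE_2D, texture );
gl.drawArrays( gl.TRIANGLES, 0, 6 );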

Feel free to use the sandbox code for your own experimentation.

WebGL
Date: Tuesday, 04 Jan 2011 04:11
Seems like WebGL is about to land. Chrome 9 beta, Firefox 4 beta 8 and the Safari Nightly Build already have it enabled by default. It's just a matter of weeks until the final versions are released.

I think we're going to face a funny situation... For web developers OpenGL may be a bit intimidating — at least it was for me! For game developers it's a piece of cake, but because some features have been disabled, to some of them it feels like a step back in their careers. So it's an area where there won't be many people for a while.

You may know that, for the last few months, I've been busy developing a library that aims to make this as painless and fun as possible. Hopefully there will be more libraries like this in the near future.

If you're a web developer I would suggest you start tinkering with all this. I have the feeling things are going to move quite fast as soon as it lands. It won't be long until Mobile Safari on iOS and the Android browser also enable this.

Exciting times! :)
Date: Thursday, 18 Nov 2010 00:09
Continuing with the three.js development: after implementing support for multiple lights, flat shading was starting to be quite a limitation.



In order to get smooth faces I needed to figure out a way to create what's called a 3-point gradient. I took a look at the usual Flash 3D engines and, to my surprise, found out that they tend to be limited to just 1 light (correct me if I'm wrong). This is probably because, by supporting just 1 light, they only need to create a light map per material. Something like this:


That's indeed a fast approach, but it limits you to just 1 light, and then you need to update the map whenever the ambient light or the color of the material changes... not really up my street.

The old way of doing this with OpenGL is by using Vertex Colors. Basically, I was after this:



I wondered if there was a way to create this kind of gradient with the Canvas API. I googled a bit to see what people had come up with, but all the approaches seemed quite CPU intensive. Like this one from nicoptere.

Then a crazy idea popped into my mind. What about having a 2x2 image, changing the color of each pixel depending on the vertex colors, and doing that at render time per polygon? The browser would then stretch that image and create the whole gradient in between those 3-4 pixels.

This is the 2x2 image:


Can't you see it? Ok, this is the same image scaled to 256x256 with no filtering (using Gimp):


However, by default (whether you like it or not) browsers filter scaled images. This is what you get within the browsers:


Each browser gives slightly different results, but that's pretty much what you get. It wasn't exactly what I was after; there is too much color in the corners. However, by looking at the results from all the browsers I realised that the center part of the image was the only part that was always similar.


Then I realised that that's the actual gradient I was after!


Here's the code:

var QUALITY = 256;

// 2x2 canvas that holds one color per vertex
var canvas_colors = document.createElement( 'canvas' );
canvas_colors.width = 2;
canvas_colors.height = 2;

var context_colors = canvas_colors.getContext( '2d' );
context_colors.fillStyle = 'rgba(0,0,0,1)';
context_colors.fillRect( 0, 0, 2, 2 );

var image_colors = context_colors.getImageData( 0, 0, 2, 2 );
var data = image_colors.data;

// canvas where the 2x2 image gets stretched into the gradient
var canvas_render = document.createElement( 'canvas' );
canvas_render.width = QUALITY;
canvas_render.height = QUALITY;
document.body.appendChild( canvas_render );

var context_render = canvas_render.getContext( '2d' );

// scale up and shift so only the center part of the stretched
// image (the gradient we're after) stays inside the canvas
context_render.translate( - QUALITY / 2, - QUALITY / 2 );
context_render.scale( QUALITY, QUALITY );

data[ 0 ] = 255; // Top-left, red component
data[ 5 ] = 255; // Top-right, green component
data[ 10 ] = 255; // Bottom-left, blue component

context_colors.putImageData( image_colors, 0, 0 );
context_render.drawImage( canvas_colors, 0, 0 ); // the browser does the filtering

So it's just a matter of changing 3-4 pixels, scaling up and cropping, and then using the result as a texture for each polygon. Crazy? Yes. But it works! And fast enough! What's more, it's not limited to 3 points; a 4th point comes for free (quads).
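
For the record, this is roughly how the resulting canvas can then be mapped onto a polygon. It's just a sketch of the idea, not the actual three.js CanvasRenderer code; it assumes canvas_render and QUALITY from the snippet above and a separate 2d context for the scene:

function drawGradientTriangle( scene, p0, p1, p2 ) {

	scene.save();

	// clip to the triangle so the stretched texture doesn't bleed out
	scene.beginPath();
	scene.moveTo( p0.x, p0.y );
	scene.lineTo( p1.x, p1.y );
	scene.lineTo( p2.x, p2.y );
	scene.closePath();
	scene.clip();

	// affine transform mapping the texture corners (0,0), (QUALITY,0)
	// and (0,QUALITY) to the triangle vertices p0, p1 and p2
	scene.setTransform(
		( p1.x - p0.x ) / QUALITY, ( p1.y - p0.y ) / QUALITY,
		( p2.x - p0.x ) / QUALITY, ( p2.y - p0.y ) / QUALITY,
		p0.x, p0.y
	);

	scene.drawImage( canvas_render, 0, 0 );
	scene.restore();

}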


Here's an example of the first material I've applied the technique to (use the arrow keys and ASWD to navigate).

At this point I was surprised that I hadn't seen this before, and that I hadn't seen it being done in any of the usual 3d engines. I did another search and found this snippet from Pixelero that uses the same concept. Good to know I'm not the only one with crazy ideas! :)

Thanks to this, I'm now able to do smooth materials (Gouraud, Phong) that support multiple lights, fog, even SSAO :) We just need the browsers to become faster (which they seem to be working on already).

Of course, if your browser supports WebGL you should be using that instead, but if it doesn't, at least you'll have something better than a text message.
Date: Thursday, 04 Nov 2010 23:55
For the last few weeks I've been quite focused on developing the engine and I think the API is starting to get quite stable. I'm still unsure about which parts of the API need to be included in the actual build and which parts should be kept outside.

For instance, primitives are something you don't want to have in the compiled .js file. If you need a Cube, a Sphere, ... I think it's easier to have them in /js/geometry/primitives/*. Otherwise we end up with a 100 kbyte file just for drawing a bunch of particles. I really want to avoid that; right now it's 60 kbytes. But that includes all the renderers: if you're only going to use CanvasRenderer, you can save 20-30 kb by removing the SVGRenderer, WebGLRenderer, ... logic. These scripts do that automatically.

Mr. AlteredQualia has been doing an awesome job with the WebGLRenderer these past weeks. If you have WebGL enabled, these 400k polys are waiting for you.

There is still quite a bit of work to do here and there, especially on the materials side (mapping types, blending, Gouraud, Phong...) but it'll all come in good time :)
Date: Monday, 11 Oct 2010 13:46
Joe Parry suggested by email that I write a follow-up to the post about JavaScript IDEs. That's also one of the common questions, and I intended to include it in the first post but I forgot :S

I've heard many people refer to Firebug as the best way to debug JavaScript, but I never got to try it out properly as I started coding directly with/for Chrome. So in my case I mainly use the WebKit Inspector/Developer Tools panel.

However, rather than debugging the code I usually log stuff instead, for which console.log(), console.error(), console.warn() and console.info() give me way more than I need. Especially coming from Flash, where the console is so basic that everyone keeps building their own logging library.
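
Trivial, but for anyone coming from Flash, each level gets its own styling and filtering in the console:

console.log( 'plain message', { fps: 60, ms: 16 } ); // objects become inspectable trees
console.info( 'something informative' );
console.warn( 'something suspicious' );
console.error( 'something broken' ); // comes with a stack trace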

At this last Google I/O I saw this presentation about the Developer Tools that left me quite impressed by how much stuff I was missing.

Date: Monday, 11 Oct 2010 03:12
Last week Matthew Lein shared a very interesting tip over Twitter.

Apart from FFFOUND!'s bookmarklet I haven't found myself using many of these, as I never felt they were that useful. However, this case is different and, once again, the possibilities of JavaScript amaze me.

Just drag and drop this link to your bookmarks toolbar:

Display Stats

By clicking the saved bookmark you'll be able to insert the Stats.js widget into any website and monitor its FPS/MS/MEM \:D/.
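
In case the link doesn't survive, this is roughly what such a bookmarklet looks like (spread over several lines for readability; in practice it's a single line). The hosting URL is a placeholder, and it assumes the stats.js build exposes a global Stats with a domElement and an update() method:

javascript:(function(){
	var script = document.createElement( 'script' );
	script.onload = function () {
		var stats = new Stats();
		stats.domElement.style.position = 'fixed'; // keep it visible while scrolling
		stats.domElement.style.left = '0px';
		stats.domElement.style.top = '0px';
		document.body.appendChild( stats.domElement );
		setInterval( function () { stats.update(); }, 1000 / 60 );
	};
	script.src = 'http://example.com/stats.min.js'; // placeholder URL
	document.body.appendChild( script );
})();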
Date: Wednesday, 06 Oct 2010 22:18
I've been asked this question a few times over the past week, so I thought I would write a "public" answer.

At first I tested some IDEs like NetBeans and Aptana, but somehow I didn't get to like them. Especially because they tend to leave "hidden" folders around.

Believe it or not, I simply use Ubuntu's default text editor, gedit. Sometimes the syntax highlighting is not correct, and editing one-line files is incredibly slow, but apart from that it doesn't get in the way. Simple and fast.

However, auto-completion is something I was missing from my FDT/AS3 days. Turns out auto-completion based on the text already written in the loaded files is all I need.
Date: Friday, 17 Sep 2010 12:25


I think it was about one year ago, right after the Google Sphere experiment, when Aaron started talking about the idea of a music video in the browser and asking if I would be up for it. If you consider my background, you'll understand how excited I was by such an opportunity. Then they (the Google Creative Lab guys) started looking for a band to work with and a director. Months later we got busy with The Johnny Cash Project and, seeing how well it worked out, it became clear that Chris could do a great job directing this project too. Chris happened to be friends with Arcade Fire, who also seemed interested in joining the party.

As the project was considerably big we also needed a production company to handle the design and development process, and B-Reel seemed a good fit. Although they had some in-house developers, we really needed people with HTML5/JavaScript experience. Finding people with these skills turned out to be a hard task, as it seemed most of the people we knew were stuck with either jQuery-kind-of-JavaScript or AS3, or were fully employed/unavailable... As an act of desperation I tweeted this. An old friend of mine replied to the call. I knew he had already played around with <canvas> and that his know-how would make him invaluable for the team. At this point we had a band, a track, Chris+Aaron had the script ready, and the team was all set.

At first I didn't realise the cleverness of the idea of using the Google Maps/Street View data set. It wasn't until we had the first test of the kid running around the neighbourhood that it made an impact on me and made me remember old times. Kudos to Chris and Aaron for envisioning that :)

Production time

Reading the script made me realise how valuable the JavaScript libraries I had been developing for the past year were going to be. We had a sequencer ready to add and remove effects in sync with a tune, a 3D renderer, and Harmony (as it was referred to in the script).

I'm sure most people will think that the drawing tool is basically what I did... not really. Although it used Harmony as the base code, George was in charge of that part. He did a great job creating a new brush out of it, a recorder and repainter, and, at the last minute, some keyboard-based input with the letters being drawn using that brush.

Eduard was responsible for the whole main framework and for making sure everyone was producing compatible code. The sequencer already provided a basic template for the show/hide behaviour of effects, but we also needed pre-loading, sharing data between effects, and more things I'm probably not aware of. If that wasn't enough, he created a sequencing tool (in JavaScript) so the Director and Art Director could easily set when each window and effect would appear, and where on the screen.

Jaime took on the maps beast and the geocoding utils. We couldn't just use the embeddable Google Maps directly: the maps were supposed to have some tilting on the camera, so a new Maps Data viewer was required. We also had to figure out possible routes for the runner to get to the user's home in sync with the music; the Direction Route API provided that, but we had to implement it properly. All this takes a lot of time, research and testing.

I was going to work on all the Street View scenes and also the CGI version of the runner. However, I didn't have skinning nor any animation code for three.js yet, and as these scenes weren't interactive nor customisable it was quickly decided that B-Reel would create videos for these parts — which ended up looking really great too! In the end I also added the birds flocking and the birds landing on the drawing to my plate.

Now... I can't speak much about the challenges other people faced, but you can get the idea from the ones I did.

(Fast) Street View

At first we intended to simply use Google's Street View. I did a test integrating three.js with it and it seemed to run all fluid. However, what I didn't know was that, with WebGL enabled, Google's Street View would already use it. So what other people were seeing was considerably slower than what I was seeing. If WebGL is not enabled, Street View uses a three.js-like renderer. That was fine on Windows and Linux, but not so much on MacOS. Turns out Google Chrome internally uses a different graphics library on MacOS than on Windows and Linux: CoreGraphics for MacOS, Skia for Windows and Linux. Each library has its own pros and cons, but CoreGraphics is especially slow when transforming and clipping big images. Street View would run at 30fps on Windows/Linux while getting 1fps on MacOS.

Like with the maps, we had to build a custom Street View Data viewer. Jaime had encountered the same problem while doing the 3d maps with three.js, so he started researching other ways of drawing the Maps Data that would create the same effect. An additional challenge was that with <canvas> you don't have access to the pixel data of images loaded from another domain. Otherwise we could just use this technique and call it a day. However, although pixel access is forbidden, context.drawImage() is allowed for copying areas from images hosted on other domains.

By stitching all the tiles the API provides for each panorama we get this image:



After zooming in to a part of the image we get this:



Somehow we need to apply this distortion:



We can do that by cropping columns from the original image and positioning them one after the other horizontally, each one with some vertical scaling depending on its proximity to the center. We get this:



Now we just need to make the columns a bit wider to hide the gaps:



The distortion isn't perfect but it's close enough. This approach seemed to be quite fast on all the platforms, and all that was left was to apply the good old Perlin Noise to the movement to give it a human feeling.
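
A sketch of that column trick, assuming pano is the stitched panorama image and context is the destination 2d context (the sizes and the amount of vertical scaling are illustrative):

var COLUMNS = 64, SRC_WIDTH = 1024, SRC_HEIGHT = 512;

for ( var i = 0; i < COLUMNS; i ++ ) {

	var x = ( i / COLUMNS ) * SRC_WIDTH,
	width = SRC_WIDTH / COLUMNS + 1, // slightly wider than needed, to hide the gaps

	// vertical scaling grows towards the center column
	scale = 1 + 0.5 * Math.sin( ( i / COLUMNS ) * Math.PI ),
	height = SRC_HEIGHT * scale,
	y = ( SRC_HEIGHT - height ) / 2; // keep each column vertically centered

	context.drawImage( pano, x, 0, width, SRC_HEIGHT, x, y, width, height );

}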

The right heading

Or so I thought. We were missing an important bit. For each Street View we had to place the camera target at specific positions. For instance, the Street View that does the 360 right in front of your house had to start spinning right from your house. But how do you know where to look? Where is the user's house? The Street View service doesn't give any information about that. After studying all the data the API provided, and directly debugging Google Maps, I noticed that each panorama has lat/lng information, plus I also had the lat/lng of the location of the house. On top of that, the panorama does provide the angle at which north points.

Very long story short... subtract the lat/lng position of the panorama from the lat/lng position of the house, get the angle of that vector, and mix it with the angle of where north is in the panorama. Voila! :)
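
Expressed in code, it's something like this (names are illustrative, angles in radians, and it ignores the cos(latitude) correction since the distances involved are tiny):

function headingToHouse( houseLat, houseLng, panoLat, panoLng, panoNorth ) {

	// angle of the house as seen from the panorama position
	var angle = Math.atan2( houseLat - panoLat, houseLng - panoLng );

	// shift it by where north points in the panorama
	return angle + panoNorth;

}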

Birds



Although Guille and Michael had made great progress with the birds, we felt we could do better. After considering the options, my approach was to use a 3-polygon mesh (one polygon for each wing and one for the body) and animate it by sinusoidally moving the vertices at the end of the wings up and down. Although it didn't look like a crow, it gave, once again, a close-enough effect. Especially when you have a bunch of them following a boid simulation.
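
The flapping itself is a one-liner per wing. A sketch in the style of the three.js geometry of the time, assuming vertices 0 and 2 are the wing tips:

var phase = 0;

function flapBird( bird ) {

	phase += 0.3;

	var y = Math.sin( phase ) * 5; // sinusoidal up/down motion

	bird.geometry.vertices[ 0 ].position.y = y; // one wing tip
	bird.geometry.vertices[ 2 ].position.y = y; // the other wing tip

}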

Tinting

This one is going to be controversial... Street View and Maps footage needed to be colour corrected because the action is supposed to take place in the morning, thus some yellowish tint was needed.



Again, we can't access the pixel data of images hosted on another domain, so the only option was to layer a colour on top of the image and play with blend modes. However, take a look at the blend modes available... only lighten could be of some use here. So we first tried drawing a rectangle on top with a yellow colour and the lighten blending mode enabled. That kind of worked, but it washed the footage out.



There is another blending mode though... darken. It was taken out of the specification but it still remains in WebKit, and I hope they put it back because this is a good example of why it's useful. By drawing that yellow colour with the darken blending mode on top of the image, and then drawing the original image on top with the lighten blending mode, we achieved a really nice yellow tint and contrast that worked quite well for simulating morning light.



Notice how the darks stay dark. Here's the actual snippet:

var context = texture_mod.getContext( '2d' );

// start from the original footage
context.drawImage( texture, 0, 0 );

// darken pass: a translucent yellow that keeps the darks dark
context.globalAlpha = 0.5;
context.globalCompositeOperation = 'darker';
context.fillStyle = '#704214';
context.fillRect( 0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT );

// lighten pass: the original image on top restores the highlights
context.globalCompositeOperation = 'lighter';
context.drawImage( texture, 0, 0 );

Tween and Manual Tween

Most of the movements in the video use the well-known Penner easing equations.

Sole had been working on a simplified JavaScript tweening library for some months before this project and it proved really useful. You probably know how tweening libraries work... you define an animation with the properties to be animated, its delay and so on... then you start it. However, we needed to be able to go backwards to any point of the video (at least I needed it ;P). So with the Manual Tween alternative we were able to move freely to any point of the virtual timeline.
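
The difference is easier to see in code. A regular tween steps itself forward with a timer; the manual variant just exposes an update( time ) you can call with any point of the virtual timeline, forwards or backwards. A sketch (not Sole's actual API):

function ManualTween( object, property, from, to, start, duration, easing ) {

	this.update = function ( time ) {

		var t = ( time - start ) / duration;
		t = Math.max( 0, Math.min( 1, t ) ); // clamp to the tween's own range

		object[ property ] = from + ( to - from ) * easing( t );

	};

}

// scrub to any point of the timeline, in any order
var box = { x: 0 };
var tween = new ManualTween( box, 'x', 0, 100, 2000, 500, function ( t ) { return t * t; } );
tween.update( 2500 ); // end
tween.update( 2250 ); // back to the middle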

Now that I look back, instead of having 2 different libraries for tweening, the manual one should have been part of the sequencer code... hmmm... something to consider...

Optimising

Launch day was approaching and the server guys were giving us recommendations on changes we could make to keep the server happy. One of them concerned the tree animation I was using in the last Street View part. It was something I had intended to do but hadn't had time for just yet: instead of having 63 separate images for a growing tree animation, it's better to pack them into a single image.



We have seen this in previous projects haven't we? ;)
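
Drawing one frame out of the packed image is then a single drawImage() call with a source rectangle. A sketch with illustrative frame sizes:

var FRAME_WIDTH = 64, FRAME_HEIGHT = 128;

function drawTreeFrame( context, sheet, frame, x, y ) {

	context.drawImage( sheet,
		frame * FRAME_WIDTH, 0, FRAME_WIDTH, FRAME_HEIGHT, // source rect inside the sheet
		x, y, FRAME_WIDTH, FRAME_HEIGHT ); // destination rect on screen

}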

The second one was about JavaScript files... we had 40+ of them. The more files you have, the more server resources each user consumes. A server has a limited number of connections available; if that number is, for example, 40, and each user needs to open 40 files to see the website, then only 1 user at a time can see it. Combine those 40 files into 1 file and 40 users will be able to visit the website at a time. We weren't using the combined/compiled index at launch time, and I believe that was one of the main reasons we suffered some downtime.

This script shows pretty much how to combine and minify easily.
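
The combine step itself is simple enough to sketch with Node (file names are illustrative):

var fs = require( 'fs' );

var files = [ 'js/Three.js', 'js/Sequencer.js', 'js/Main.js' ];

var combined = files.map( function ( file ) {

	return fs.readFileSync( file, 'utf8' );

} ).join( '\n' );

fs.writeFileSync( 'index.js', combined );

// then run the result through a minifier, for instance Closure Compiler:
// java -jar compiler.jar --js index.js --js_output_file index.min.js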

Ok, that's enough

I know, that was quite long, wasn't it? I hope this is of some use to someone, and I hope you liked the actual piece too. Now, let's move on to WebGL ;)

PS: If you wonder about any other technical details, feel free to use the comments and I'll try to address them.
Date: Monday, 02 Aug 2010 20:55
Last Friday, MIX Online and An Event Apart launched an unusual JavaScript contest. 10K Apart pushes developers to reduce their code so it fits in 10,000 bytes. This sounds like a nice challenge, but strangely they allow specific external libraries which, in my opinion, overcomplicate things.

However, just to see what could be done, I quickly checked the amount of bytes I would need for a simple and easy-to-use 3d engine. That turned out to be about 1,000 bytes, which left me with 9,000. Now, this may sound great, but it's kind of discouraging. Now I understand why the 64 kbyte coders always say that 64 kbytes are way harder than 4 kbytes (with fewer bytes, the number of possibilities is lower). Anyway, if I come up with a good idea I may try doing something for the contest.

Today I found out about JS1k, which seems to be Peter van der Zee's response to the former contest (no external libs, just 1,024 bytes of plain JavaScript), and it already had a few submissions.

Once I got the 3d engine working I got a bit code-addicted and ended up doing a plasma in 3D in 1,464 bytes. It looked nice already, so it was just a matter of shaving off those extra 440 bytes. After learning some tricks and testing things here and there, it got down to 996 bytes. Here's the result:



Looking forward to seeing what p01 has to "say" about all this... :)

EDIT: After reading Diego's post and finding it interesting to see which tricks he used, I thought I should also share the non-obfuscated code of my entry.

( function () {

	var res = 25, res3 = res * res * res,
	i = 0, x = 0, y = 0, z = 0, s, size, sizeHalf,
	vx, vy, vz, rsx, rcx, rsy, rcy, rsz, rcz,
	xy, xz, yx, yz, zx, zy,
	cx = 0, cy = 0, cz = 1, rx = 1, ry = 1, rz = 0,
	t, t1, t2, t3,
	sin = Math.sin, cos = Math.cos, pi = Math.PI * 3,
	mouseX = 0, mouseY = 0, color,
	doc = document, body = doc.body,
	canvas, context, mesh = [],
	width = innerWidth,
	height = innerHeight,
	widthHalf = width / 2,
	heightHalf = height / 2;

	body.style.margin = '0px';
	body.style.overflow = 'hidden';

	canvas = doc.body.children[0];
	canvas.width = width;
	canvas.height = height;

	context = canvas.getContext( '2d' );
	context.translate( widthHalf, heightHalf );

	doc.onmousemove = function ( event ) {

		mouseX = ( event.clientX - widthHalf ) / 1000;
		mouseY = ( event.clientY + heightHalf ) / 1000;

	};

	// fill the mesh with a res x res x res grid of points centred on the origin
	while ( i++ < res3 ) {

		mesh.push( x / res - 0.5 );
		mesh.push( y / res - 0.5 );
		mesh.push( z / res - 0.5 );

		z = i % res;
		y = !z ? ++y % res : y;
		x = !z && !y ? ++x : x;

	}

	setInterval( function () {

		context.clearRect( - widthHalf, - heightHalf, width, height );

		cx += ( mouseX - cx ) / 10;
		cz += ( mouseY - cz ) / 10;

		t = new Date().getTime();
		t1 = sin( t / 20000 ) * pi;
		t2 = sin( t / 10000 ) * pi;
		t3 = sin( t / 15000 ) * pi;

		rx = t / 10000;

		rsx = sin( rx ); rcx = cos( rx );
		rsy = sin( ry ); rcy = cos( ry );
		rsz = sin( rz ); rcz = cos( rz );

		i = 0;

		while ( ( i += 3 ) < res3 * 3 ) {

			x = mesh[ i ];
			y = mesh[ i + 1 ];
			z = mesh[ i + 2 ];
			s = sin( t1 + x * t1 ) + sin( t2 + y * t2 ) + sin( t3 + z * t3 );

			if ( s >= 0 ) {

				xy = rcx * y - rsx * z;
				xz = rsx * y + rcx * z;

				yz = rcy * xz - rsy * x;
				yx = rsy * xz + rcy * x;

				zx = rcz * yx - rsz * xy;
				zy = rsz * yx + rcz * xy;

				vx = zx - cx;
				vy = zy - cy;
				vz = yz + cz;

				if ( vz > 0 ) {

					color = ( 64 / vz ) >> 0;
					context.fillStyle = 'rgb('+ ( color - 16 ) + ','+ ( color * 2 - 128 ) + ','+ ( color + 64 ) + ')';

					size = s * 30 / vz;
					sizeHalf = size / 2;

					context.fillRect( ( vx / vz ) * widthHalf - sizeHalf, ( vy / vz ) * widthHalf - sizeHalf, size, size );

				}

			}
		}

	}, 16 );

} )();

Which, after compression, ends up like this:

var O=24,d=O*O*O,X=0,U=0,T=0,S=0,W,j,L,o,m,k,b,q,ac,n,ab,l,K,I,r,p,Z,Y,C=0,A=0,w=1,G=1,F=1,E=0,V,P,N,M,u=Math.sin,f=Math.cos,v=Math.PI*3,R=0,Q=0,H,g=document,D=g.body,h,aa,B=[],a=innerWidth,e=innerHeight,J=a/2,c=e/2;D.style.margin="0px";D.style.overflow="hidden";h=g.body.children[0];h.width=a;h.height=e;aa=h.getContext("2d");aa.translate(J,c);g.onmousemove=function(i){R=(i.clientX-J)/1e3;Q=(i.clientY+c)/1e3};while(X++=0){K=q*T-b*S;I=b*T+q*S;p=n*I-ac*U;r=ac*I+n*U;Z=l*r-ab*K;Y=ab*r+l*K;o=Z-C;m=Y-A;k=p+w;if(k>0){H=(64/k)>>0;aa.fillStyle="rgb("+(H-16)+","+(H*2-128)+","+(H+64)+")";j=W*30/k;L=j/2;aa.fillRect((o/k)*J-L,(m/k)*J-L,j,j)}}}},16);
Date: Sunday, 25 Jul 2010 16:45
This weekend Euskal took place, and for quite a while I had been considering doing a demo for it, but it wasn't until Friday night that I finally got some free time to do it (and the deadline was Saturday evening...).

Now that three.js was starting to get stable, and also thanks to some sequencing code I had written some months ago, I had no excuse not to get working on it. So, from Friday midnight until Saturday afternoon, I managed to get this:



That's just a video of the demo; watch the actual demo.

Quite amazing how quickly you can get stuff done with JavaScript once you have the basic libs ready ;)

I need to check what's wrong with Opera; for some reason it's not clearing the screen. Apart from that it should work in all browsers (albeit quite slowly in some, especially the MacOS ones). Oh, and I think it won't work in Safari either, as it seems to be the only modern browser that doesn't support .ogg files.

In case anyone is interested, I've also shared the source code.

EDIT: Found the problem with Opera. Turns out context.clearRect() doesn't work if the context has been transformed with negative values. I'll let them fix their bug ;)
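
For reference, a minimal setup along these lines should trigger the behaviour described:

var context = document.createElement( 'canvas' ).getContext( '2d' );

context.translate( - 100, - 100 ); // transform with negative values

context.fillRect( 100, 100, 50, 50 );
context.clearRect( 100, 100, 50, 50 ); // in Opera, this left the pixels untouched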