

Date: Sunday, 17 Jan 2010 12:57

At the day-job we do a lot of Widget work. We have Container widgets that hold Panel widgets, which hold Box widgets, which hold other widgets. The widgets themselves create other widgets and take ownership of them. Our full-page/no-refresh/Ajax app with long-lived page views creates and destroys all these widgets based on the various events published around the page, but we ran into a problem along the way: a lot of the widgets weren't being destroyed. Ever.

The whole system is quite sound, though it was lacking in this one regard. To be fair, Dijit cleans up after itself. Everything that is created when a Dijit widget instance is new'd up is removed when that instance is destroyed. Everything that is created declaratively in a template is cleaned up automatically for you when a widget is destroyed. The problem only exists when one widget creates another widget programmatically, either for permanent placement or temporary use. Let me illustrate with a basic Widget example:

 
// file is my/Thinger.js
dojo.provide("my.Thinger");
dojo.require("dijit._Widget");
dojo.require("dijit._Templated");
dojo.declare("my.Thinger", [dijit._Widget, dijit._Templated], {

    // foo: String
    //      value substituted into the template below
    foo: "thing",

    // the ListItem in the template is "tracked" and cleaned up automatically
    widgetsInTemplate: true,
    templateString: "<div>" +
        "<ul><li dojoType='my.ListItem' dojoAttachPoint='li'>${foo}</li></ul>" +
        "</div>",

    postCreate: function(){
        // some self-cleaning event connections. These are disconnected upon destruction.
        this.connect(this.domNode, "onclick", "_clicker");
        // this item is leaked when Thinger is destroyed.
        new my.ListItem().placeAt(this.li, "after");
    }

});
 

Here, we're creating new my.ListItem instances and adding them to our own DOM, but not keeping track of them in any way. This is an exceptionally common pattern of code compartmentalization. The my.Thinger scopes all DOM references and event connections to itself. It contains other widgets which handle some part of the overall functionality. By making a whole ListItem Class, we compartmentalize that bit of logic into little pieces.
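
For reference, here is a minimal sketch of what a my.ListItem class might look like. The original post never shows it, so treat this as a hypothetical stand-in rather than the real thing:

 
// file is my/ListItem.js (hypothetical stand-in; not shown in the original post)
dojo.provide("my.ListItem");
dojo.require("dijit._Widget");
dojo.require("dijit._Templated");
dojo.declare("my.ListItem", [dijit._Widget, dijit._Templated], {
    // foo: String
    //      text displayed inside the list item
    foo: "item",
    templateString: "<li>${foo}</li>"
});
 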

We can see the leak pretty easily:

 
console.log(dijit.registry.length); // 0
var x = new my.Thinger().placeAt(dojo.body());
console.log(dijit.registry.length); // 3: a Thinger and two ListItem instances
x.destroy();
console.log(dijit.registry.length); // 1: the programmatically created ListItem leaked
 

So I'll show my solution to this little dilemma in hopes you can adopt a similar method to suit your own needs. My solution requires imposing a style guideline on how new widgets are bound and added to other widgets, but I feel it is well worth the extra effort.

The first thing we do, because all Dijit widgets clean up properly after themselves already, is make our own Base Widget class, extending dijit._Widget. Here we'll add a few functions to add new APIs to the lifecycle so we can track our own widgets.

 
dojo.provide("my.Widget");
dojo.require("dijit._Widget");
dojo.delcare("my.Widget", [ dijit._Widget, dijit._Templated ], {
    // summary: Our custom dijit._Widget base class. All our widgets should inherit from this. 
 
    adopt: function(cls, props, srcNode){
        // summary: Take ownership of a new widget
        // returns: The added widget
    },
 
    orphan: function(widget, destroy){
        // summary: relinquish ownership of an owned widget. optionally destroy.
        // returns: nothing.
    },
 
    destroy: function(preserveDom){
        // summary: an overridden destroy method. call the parent method, but we'll add stuff here too.
        this.inherited(arguments);
    },
 
    _kill: function(widget){
        // summary: properly destroy a single widget instance
        // returns: nothing.
    },
 
    _addItem: function(/* Widget... */){
        // summary: Add any number of widgets to this instance
        // returns: nothing.
    }
 
});
 

So we've stubbed out the API with two new public functions, a single overridden destroy method, and two private helper functions. _addItem will do most of the work, so we'll show it first:

 
    _addItem: function(/* Widget... */){
        this._addedItems = this._addedItems || [];
        this._addedItems.push.apply(this._addedItems, arguments);
    },
 

The code will create or reuse an instance member (an array) called _addedItems. This contains the list of widgets we've created programmatically. It accepts any number of arguments, so you can add multiple items in the same line of code.

 
    this._addItem(
        new dijit.form.Button().placeAt(this.fooNode),
        new dijit.form.Button().placeAt(this.barNode)
    );
 

When the widget calling _addItem is destroyed, the two anonymously created Button instances will go with it. Well, not quite yet: we only have to add a line to our destroy method to handle that action. We'll also add the code for the _kill function, which we'll reuse in orphan in a moment. _kill is a simple function that destroys a single widget instance properly. In destroy we call _kill by passing it to the forEach call:

 
    destroy: function(){
        dojo.forEach(this._addedItems, this._kill);
        this.inherited(arguments);
    },
    _kill: function(widget){
        if(widget.destroyRecursive){
            widget.destroyRecursive();
        }else if(widget.destroy){
            widget.destroy();
        }
    }
 

So we now have a way to add tracked widgets to ourself and to destroy them cleanly when we go away. Admittedly, I don't like the API for _addItem ... Not only is it _private, I still have a lot of boilerplate stuff repeated, now being wrapped with _addItem(). This is where the adopt API comes in. It takes the repetition down one notch, and wraps the creation and adding into a single function call:

 
    adopt: function(cls, props, srcNode){
        var x = new cls(props, srcNode);
        this._addItem(x);
        return x;
    }
 

All Dijit _Widget derivatives accept two constructor arguments: an object hash of properties, and an optional source-node reference (from which to create the instance). We simply relay those two arguments into a single 'new' call, add the item to ourselves, and return the widget instance otherwise unmodified. Using it is simple:

 
    postCreate: function(){
        this.adopt(my.ListItem).placeAt(this.li, "before");
    }
 

The ListItem in this example doesn't need any constructor args, so they are omitted. The return from adopt is the widget instance, so you can chain placeAt calls on it, or save the return value to reuse in other scoped functions. Here's a small example of everything working "together":

 
    postCreate: function(){
        this.dialog = this.adopt(my.Dialog, { onClick: dojo.hitch(this, "_click" )});
        setTimeout(dojo.hitch(this.dialog, "open"), 2000);
        this.connect(this.destroyButton, "onclick", "_close");
    },
    _click: function(e){
        e && e.preventDefault();
        this._close();
    },
    _close: function(){
        this.dialog && this.dialog.close();
    }
 

The last function is wildly useful, but not as mind-blowing as having tracked programmatic instances. In the above example we tracked an instance of my.Dialog locally as this.dialog. We're checking for the presence of this.dialog before calling close() on it. Sometimes we only need a widget for a short while ... where we want to destroy it upon closing, or destroy a single reused variable holding a widget. That is what orphan is for:

 
    orphan: function(widget, destroy){
        this._addedItems = this._addedItems || [];
        var i = dojo.indexOf(this._addedItems, widget);
        if(i >= 0) this._addedItems.splice(i, 1);
        destroy && this._kill(widget);
    }
 

The code loads the local list of added children, searches for the passed reference, and removes it from the tracking list. Optionally, you can destroy the widget while it is being orphaned by passing any truthy value as the second argument. Now, the code pattern which looks like:

 
    _click: function(){
        this.dialog && this.dialog.destroy();
        this.dialog = new my.Dialog();
    }
 

can be reduced to:

 
    _click: function(){
        this.dialog && this.orphan(this.dialog, true);
        this.dialog = this.adopt(my.Dialog);
    }
 

The latter example is slightly more verbose, but it has the benefit of not leaking any widgets. Memory costs more than keystrokes methinks.
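
As a sanity check, assuming my.Thinger is reworked to extend my.Widget and create its programmatic ListItem through adopt() instead of a bare new (a rework the post doesn't show), the registry test from earlier should now come out clean:

 
console.log(dijit.registry.length); // 0
var x = new my.Thinger().placeAt(dojo.body());
console.log(dijit.registry.length); // 3: a Thinger and two ListItem instances
x.destroy();
console.log(dijit.registry.length); // 0 -- the adopted ListItem is destroyed along with its parent
 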

The full, documented code is as follows. Feel free to use it in your own app: the migration is painless:

 
dojo.declare("my.Widget", [ dijit._Widget, dijit._Templated ],
{
    // summary:
    //    The Foundation widget for our things. Includes _Widget and _Templated with some custom addin methods.
 
    adopt: function(/* Function */cls, /* Object? */props, /* DomNode */node){
        // summary: Instantiate some new item from a passed Class, with props with an optional srcNode (node)
        //  reference. Also tracks this widget as if it were a child to be destroyed when this parent widget
        //  is removed.
        //
        // cls: Function
        //      The class to instantiate. Cannot be a string. Use dojo.getObject to get a full class object if you
        //      must.
        //
        // props: Object?
        //      An optional object mixed into the constructor of said cls.
        //
        // node: DomNode?
        //      An optional srcNodeRef to use with dijit._Widget. This thinger will be instantiated using
        //      this passed node as the target if passed. Otherwise a new node is created and you must placeAt() your
        //      instance somewhere in the dom for it to be useful.
        //
        // example:
        //  |    this.adopt(my.ui.Button, { onClick: function(){} }).placeAt(this.domNode);
        //
        // example:
        //  |   var x = this.adopt(my.ui.Button).placeAt(this.domNode);
        //  |   x.connect(this.domNode, "onclick", "fooBar");
        //
        //  example:
        //  If you *must* new up a thinger and only want to adopt it once, use _addItem instead:
        //  |   var t;
        //  |   if(4 > 5){ t = new my.ui.Button(); }else{ t = new joost.ui.OtherButton() }
        //  |   this._addItem(t);
 
        var x = new cls(props, node);
        this._addItem(x);
        return x; // my.Widget
    },
 
    _addItem: function(/* dijit._Widget... */){
        // summary: Add any number of programatically created children to this instance for later cleanup.
        // private, use `adopt` directly.
        this._addedItems = this._addedItems || [];
        this._addedItems.push.apply(this._addedItems, arguments);
    },
 
    orphan: function(/* dijit._Widget */widget, /* Boolean? */destroy){
        // summary: relinquish ownership of a single child widget so it is no longer destroyed along
        // with this parent. Optionally destroy the child immediately.
        //
        // widget:
        //      A widget reference to remove from this parent.
        //
        // destroy:
        //      An optional boolean used to force immediate destruction of the child. Pass any truthy value here
        //      and the child will be orphaned and killed.
        //
        // example:
        //  Clear out all the children in an array, but do not destroy them.
        //  |   dojo.forEach(this._thumbs, this.orphan, this);
        //
        // example:
        //  Create and destroy a button cleanly:
        //  |   var x = this.adopt(my.ui.Button, {});
        //  |   this.orphan(x, true);
        //
        this._addedItems = this._addedItems || [];
        var i = dojo.indexOf(this._addedItems, widget);
        if(i >= 0) this._addedItems.splice(i, 1);
        destroy && this._kill(widget);
    },
 
    _kill: function(w){
        // summary: Private helper function to properly destroy a widget instance.
        if(w && w.destroyRecursive){
            w.destroyRecursive();
        }else if(w && w.destroy){
            w.destroy();
        }
    },
 
    destroy: function(){
        // summary: override the default destroy function to account for programatically added children.
        dojo.forEach(this._addedItems, this._kill);
        this.inherited(arguments);
    }
 
});
 

Does anyone have any alternate solutions they've found to combat this problem?

Author: "--"
Send by mail Print  Save  Delicious 
Date: Friday, 04 Dec 2009 11:26

A few days ago I tweeted a line of code and decided it was wonderful enough to warrant further explanation. While the code itself may have fit into 140 characters, the examples and use cases go on and on. Because of a few architectural decisions made by Dojo long ago, these 140 characters are powerful and flexible as well as mindlessly simple to use.

One thing a dojo.NodeList does not have by default in Base (dojo.js) is a way to fetch and inject content into a node (or nodes). This isn't necessarily an oversight; there are several ways this can be accomplished. In fact, it is such a simple operation with so many possible edge cases that it may not even be worth implementing. Here I present to you my own take on this super simple concept, built by using standard Base (dojo.js) functionality.

The function will be called grab(). It will grab a url, and inject it into some nodes. Here is the first iteration of this idea:

 
dojo.extend(dojo.NodeList, {
	grab: function(url){
		dojo.xhr("GET", { url:url })
			.addCallback(this, function(response){
				this.addContent(response, "only");
			});
		return this;
	}
});
 

We're simply mixing a new function into dojo.NodeList.prototype via dojo.extend, giving all instances of a NodeList this method. The plugin calls "return this;" to allow further chaining. Keep in mind that Ajax operations are asynchronous by default; we'll come back to why that matters.

We work with NodeLists via dojo.query, so to use this code now we simply query the dom and load some content via Ajax:

 
dojo.addOnLoad(function(){
	dojo.query("#header").grab("header.html");
});
 

That is all well and good and works brilliantly until you need to a) load non-HTML content, b) issue an xhrPost instead of a GET, or c) configure the xhr call in any way. No worries, with just a few more bytes we can add all of that functionality, piggybacking on the behavior of dojo.Deferred and objects.

First, we'll add support for mixing extra parameters into the object passed to the XHR call. Currently, we're only passing { url: url }, giving the Ajax call an endpoint to fetch. dojo.xhr accepts a lot of options in this magic object, so allowing a developer/user to mix in their own custom information here is as simple as calling dojo.mixin with an optional parameter. mixin() is safe like this, in that if extraArgs is undefined or null nothing happens. The url:url aspect is still retained, and any values passed in extraArgs are mixed over the defaults.

 
dojo.extend(dojo.NodeList, {
	grab: function(url, extraArgs){
		dojo.xhr("GET", dojo.mixin({ url:url }, extraArgs))
			.addCallback(this, function(response){
				this.addContent(response, "only");
			});
		return this;
	}
});
 

This now allows us to mix extra items into the Ajax call like timeout, sync, error handlers and so on. Back to the nature of Ajax calls being asynchronous: if we issue a grab() call on some node, the next items in the chain will not apply to the new content until an undetermined point in the future. We can overcome this limitation by passing { sync:true } to the Ajax call, making the operation synchronous so that the rest of the chain does not execute until the transfer is complete.

 
dojo.addOnLoad(function(){
	dojo.query("#content").grab("/article/12345", { sync: true })
		.find("p.title").onclick(function(e){ alert(this.innerHTML); });
});
 

If we had omitted the { sync:true } here, the find() call would be querying the dom of the node with id="content", though the content of the /article/12345 url would not yet be in the container. We don't have to use synchronous Ajax here; it is just a possibility. For instance, we could have connected the click handler to the #content NodeList and delegated from the bubbled event, or used plugd's connectLive plugin to simply connect to "p.title" and delegate that way.
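
For the curious, a hand-rolled version of that delegation idea might look something like the following. This is just a sketch; plugd's connectLive wraps this sort of thing up for you:

 
dojo.addOnLoad(function(){
	// attach one handler to the container and walk up from the click target,
	// so content grab()'d in later still triggers the behavior
	dojo.query("#content").onclick(function(e){
		var n = e.target;
		while(n && n !== this){
			if(n.tagName == "P" && dojo.hasClass(n, "title")){
				alert(n.innerHTML);
				break;
			}
			n = n.parentNode;
		}
	});
});
 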

We can use the extraArgs in so many ways it's ridiculous. What if you wanted to know if the loading errored out for some reason:

 
dojo.query("#content").grab("/foo.html", {
	error: function(e){
		console.warn("error fetching foo.html");
	}
});
 

Or, more importantly, what if you wanted to load content other than plain HTML? Perhaps you have a JSON object to load. This is entirely possible because of the way dojo.Deferred works. When we call addCallback() on the Deferred chain in the plugin, we're really registering the last callback in the chain. Anything we mix into the Ajax call directly is executed before the final callback which calls this.addContent(). The secret is: the return value from a previous callback is passed to the next callback in the chain. So if we register a load: callback in the extraArgs we are permitted a pre-processing step for whatever content comes in. We simply need to supply an alternate handleAs to process the response as JSON, and a load callback to process the JSON and return HTML so addContent works.

 
dojo.query("ul > li").grab("person.json", {
	handleAs:"json",
	load: function(data){
		// again, broken string for a broken wordpress highlight.
		return dojo.replace("<" + "p>{username} - {age} / {sex}</" + "p>", data);
	}
});
 

The dojo.replace function (new in Dojo 1.4) does simple template substitution. In the above example, data.username is substituted into {username}, and the full HTML is returned to the next callback for this Ajax call, then injected into the appropriate node(s). In versions prior to Dojo 1.4 you can use dojo.string.substitute to accomplish mostly the same goal (though the templating is done differently; ${username} would need to be substituted). The above will inject a paragraph with some data into each of the matched list items. If our data came back as an array, we could simply make up the list items as well.

 
dojo.query("ul#mainlist").grab("tweets.json", {
	handleAs:"json",
	load:function(data){
		var response = [];
		dojo.forEach(data, function(item){
			// string intentionally broken up because wordpress highlight breaks it.
			response.push(dojo.replace("<l" + "i><" + "p>{username} - {age}</" + "p></l" + "i>", data));
		});
		return response.join("");
	}
});
 

Here we're building up a string of list-items to be injected into the unordered list with id="mainlist". When the full response is joined and returned, that value is processed by addContent and injected appropriately.

The last little piece of the puzzle is to stop assuming you always want to issue an xhrGet call. There are several HTTP verbs, each with its own purpose: PUT, DELETE, and POST come to mind. With a simple adjustment to the grab function we can allow the developer to optionally override this as well:

 
dojo.extend(dojo.NodeList, {
	grab: function(url, extraArgs, method){
		dojo.xhr(method || "GET", dojo.mixin({ url:url }, extraArgs))
			.addCallback(this, function(response){
				this.addContent(response, "only");
			});
		return this;
	}
});
 

Now, if we pass a method parameter the default value of "GET" is ignored. Otherwise the defaults take over. Again utilizing the extraArgs "mixin", we can make an Ajax POST request and send along custom data:

 
dojo.query("#login").grab("/user/login", {
	content:{
		name: dojo.byId("username").value,
		pass: dojo.byId("password").value
	}
}, "POST");
 

One implementation detail I don't like about grab is the forced emptying of the target node. In the addCallback function, where we call addContent with a parameter of "only", we forcefully empty out the nodes before injecting the new content. Unfortunately, this is the only behavior we've permitted here. It would be trivial to omit the "only" (and save a few bytes, too) and require the developer to call empty() manually before injecting new content if that behavior was desired. By default we'd then be adding the content "last", which is the default addContent position. Let's rework the grab function to handle this:

 
dojo.extend(dojo.NodeList, {
	grab: function(url, extraArgs, method){
		dojo.xhr(method || "GET", dojo.mixin({ url:url }, extraArgs)).addCallback(this, "addContent");
		return this;
	}
});
 

It got even smaller! The "(scope, method)" pattern used in addCallback above is found throughout Dojo, and is super handy. That function pair (this.addContent) will still be passed whatever value is returned from the Ajax call, in the first argument position. The difference is that we can no longer pass a position (without an anonymous function). Now, to inject some content into a node and empty it first, we must do that manually:

 
dojo.query("#foo").empty().grab("bar.html");
 

Without the empty(), the "bar.html" content would be appended to the node rather than replacing its content. I have yet to decide which behavior I like better. Opinions? I'll need to decide before adding this functionality to Base Dojo proper. It currently lives in plugd, along with a slew of other new functionality for Dojo 1.4.
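
If the lost position argument ever matters to you, one possible variation (a sketch, not what ships in plugd) is to accept it as an optional parameter and fall back to addContent's default:

 
dojo.extend(dojo.NodeList, {
	grab: function(url, extraArgs, method, position){
		dojo.xhr(method || "GET", dojo.mixin({ url: url }, extraArgs))
			.addCallback(this, function(response){
				// "last" is addContent's default position; a caller can pass "first", "only", etc.
				this.addContent(response, position || "last");
			});
		return this;
	}
});
 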

One last thing to do to the grab function: there is no point in sending off the request if no nodes were found by the query. A simple check on the .length of the NodeList will prevent any unnecessary XHR calls from taking place:

 
dojo.extend(dojo.NodeList, {
	grab: function(url, extraArgs, method){
		this.length && dojo.xhr(method || "GET", dojo.mixin({ url:url }, extraArgs)).addCallback(this, "addContent");
		return this;
	}
});
 

And there you have it: 140 characters of awesome. (It may be a bit more with the longhand variable names, pre-shrinkage. The inclusion of the extend call and so on adds a few bytes too, but in plugd's base.js those were already there for the other plugins provided, so the point is moot.)

Author: "--"
Send by mail Print  Save  Delicious 
Date: Tuesday, 24 Nov 2009 21:56

Please note: the Thunderbird 3.0 Release Candidate is a public preview release intended for developer testing and community feedback. It includes many new features as well as improvements to performance, web compatibility, and speed. We recommend that you read the release notes and known issues before installing this release candidate.

The Thunderbird 3.0 Release Candidate is now available for download. This milestone is focused on providing a preview of the functionality provided by the new features and changes that will be included in Thunderbird 3.0.

New features in Thunderbird 3 that require feedback include:

Testers can download Thunderbird 3.0 Release Candidate builds for Windows, Mac OS X, and Linux in 49 different languages. Developers should also read the Thunderbird 3.0 for Developers article on the Mozilla Developer Center.

Note: Please do not link directly to the download site. Instead we strongly encourage you to link to this Thunderbird 3.0 Release Candidate milestone announcement so that everyone will know what this milestone is, what they should expect, and who should be downloading to participate in testing at this stage of development.

Author: "--"
Send by mail Print  Save  Delicious 
Date: Tuesday, 24 Nov 2009 06:35

Hi Dojo-ers,

Almost there on the 1.4 release... we just put out the first (and hopefully only) 1.4 release candidate. Please give it a final check, reviewing the release notes and filing any bugs (don't forget to attach a test case after filing the ticket).

Thanks!
Bill

Author: "--"
Send by mail Print  Save  Delicious 
Date: Monday, 23 Nov 2009 21:32

Last week at PDC, as we were about to start talking to people about IE9, I saw the following notification from my Facebook account:

From: Facebook [mailto:notification+mwm5axbx@facebookmail.com]
Sent: Tuesday, November 17, 2009 10:05 AM

Dina posted something on your Wall and wrote:

"funny vid of u, you see it? http://www.facebook.com/l/ca339;hTTP://www.N70.InFO/2d"

To see your Wall or to write on Dina's Wall, follow the link below:

<..>

Thanks,

The Facebook Team

The message was from someone I know pretty well, and I believed it. The address itself (http://www.n70.info/2d) wasn't that suspicious; there are a lot of URL shortening services, and the .info domain has many legitimate sites on it. So I clicked it:

[Screenshot: IE8 SmartScreen blocking page indicating that the requested URL is unsafe.]

and thought – whew. 

IE8’s SmartScreen now blocks malware sites over two million times a day. IE8 offers a lot of protection from real-world attacks: phishing protection, a cross-site scripting filter, and Protected Mode (I may run as an administrator, but my browser doesn’t). With attacks on the rise, using (or upgrading to) a browser with this much protection is more important than ever. IE8 also offers great reliability because of process-isolation, and offers users the ability to manage add-ons that affect performance and stability. InPrivate Browsing and InPrivate Filtering are also quite handy.

I wrote back to my friend, and she was surprised. You can read Facebook’s guidance about what to do if this happens to you or a friend.

Dean Hachamovitch

Author: "--"
Send by mail Print  Save  Delicious 
Date: Sunday, 22 Nov 2009 20:27

The simplest way to put it: you’re responsible for the amount of coverage for “Going Rogue”.

Not Fox News. Not the evangelical movement.

You.

Period.

Call Fox News and the evangelical movement "enablers" (I have not been through any kind of therapy, but the term comes up enough for me to pick up the general meaning). But even I have enough sense to tell that I've been bombarded with this woman's presence for far too long; to me, she's like Kate from John and Kate + Eight. People who have a need to be in the spotlight, and don't give a flying fcuk about the costs to anyone around them, and the children are a major part of that.

Palin doesn’t give a flying fuck about the state of the United States. She cares about Sarah Palin. Period. She cares about the sales, and the attraction of others that will create new sales, of anything that requires her presence. She cares about the funding to do what she might like to do later on.

Ordinarily, I wouldn’t have an issue with that…most of the time. I do have an issue with someone who is obviously not looking out for the common good when he or she does it.

Unfortunately, every time I hear either her or about her, all I can think of is the take-off on the Simpsons, based on the Music Man, of the Monorail. Watch the episode sometime, it’s instructive.

Author: "--"
Send by mail Print  Save  Delicious 
Date: Friday, 20 Nov 2009 20:04
Raindrop uses CouchDB for data storage. We are starting to hit some tough issues with how data is stored and queried. This is my attempt to explain them. I am probably not the best person to talk about these things; Mark Hammond, Raindrop's back-end lead, is a better candidate for it. I am hoping that by trying to write it out myself, I can get a better understanding of the issues and trade-offs. Also note that this is my opinion/view, and may not be the view of my employer and work colleagues, etc...

First, what are our requirements for the data?
  • Extensible Data: we want people to write extensions that extend the data.
  • Rollback: we want it easy for people to try extensions, but this means some may not work out. We need to roll back data created by an extension by easily removing the data they create.
  • Efficient Querying: We need to be able to efficiently query this data for UI purposes. This includes possibly filtering the data that comes back.
  • Copies: Having copies of the data helps with two things:
    • Replication: beneficial when we think about a user having a Raindrop CouchDB on the client as well as the server.
    • Backup: for recovering data if something bad happens.
How Raindrop tries to meet these goals today

Extensible Data: each back-end data extension writes a new "schema" for the type of data it wants to emit. A schema for our purposes is just a type of JSON object. It has a "rd_schema_id" on it that tells us the "type" of the schema. For instance a schema object with rd_schema_id == "rd.msg.body" means that we expect it to have properties like "from", "to" and "body" on it. Details on how schemas relate to extensions:
  • An extension specifies what input schema it wants to consume, and the extension is free to emit no schemas (if the input schema does not match some criteria), or one or more schemas.
  • Each schema written by an extension is stamped with a property rd_schema_provider = "extension name".
  • All of a message's schemas are tied together via an rd_key value, a unique per-message value. Schemas that have the same rd_key value all relate to the same message.
More info is on the Document Model page.
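
To make that concrete, here is a hypothetical rd.msg.body schema document. Only the rd_* field names come from the description above; the rest is invented for illustration:

 
// a made-up example document; only the rd_* property names are taken from the text above
var exampleSchemaDoc = {
    _id: "rd!msg!imap!12345!rd.msg.body",       // invented couch document id
    rd_key: ["email", "imap!12345"],            // unique per-message value tying schemas together
    rd_schema_id: "rd.msg.body",                // the "type" of this schema
    rd_schema_provider: "rd.ext.core.msg-body", // the extension that emitted this schema
    from: "alice@example.com",
    to: ["bob@example.com"],
    body: "Hi Bob, ..."
};
 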

Rollback: Right now each schema is stored as a couch document. To roll back an extension, we just select all documents with rd_schema_provider = "extension name" that we want to remove, and remove them. As part of that action, we can re-run extensions that depended on that data to have them recalculate their values, or to just remove the schemas generated by those extensions.

Having each schema as a separate document also helps with the way CouchDB stores data -- if you make a change to a document and save it back, then it appends the new document to the end of the storage. The previous version is still in storage, but can be removed via a compaction call.

If we store all the schemas for a message in one CouchDB document, then it results in more frequent writes of larger documents to storage, making compaction much more necessary.

Efficient Querying: Querying in CouchDB means writing Views. However, a view is like a query that is run as data is written, not when the UI may actually want to retrieve the information. The views can then be very efficient and fast when actually called.

However, the down side is that you must know the query (or a pretty good idea of it) ahead of time. This is hard since we want extensible data. There may be some interesting things that need to be queried later, but adding a view after there are thousands of documents is painful: you need to wait for couch to run all the documents through the view when you create the view.

Our solution to this, started by Andrew Sutherland and refined by Mark, was to create what we call "the megaview". It essentially tries to emit every piece of interesting data in a document as a row in the view. Then, using the filtering capabilities of CouchDB when calling the view (which are cheap), we can select the documents we want to get.
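
A rough sketch of the idea (this is not Raindrop's actual megaview code) is a map function that emits one row per property, so callers can later filter on any field with a cheap key lookup:

 
// map function for a hypothetical "megaview"-style CouchDB view
function(doc){
    for(var prop in doc){
        // skip couch-internal fields; emit everything else as its own row
        if(prop.charAt(0) !== "_"){
            emit([doc.rd_schema_id, prop, doc[prop]], null);
        }
    }
}
// a caller can then filter cheaply on a single property, e.g. (hypothetical view path):
//   GET /raindrop/_design/megaview/_view/megaview?key=["rd.msg.body","from","alice@example.com"]
 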

Copies: While we have not actively tested it, we planned on using CouchDB's built-in replication support. This was seen as particularly valuable for master-master use cases: when I have a Raindrop CouchDB on my laptop and one in the cloud.

Problems Today

It feels like the old saying, "Features, Quality or Time, pick two", except for us it is "Extensible, Rollback, Querying or Copies, pick three". What we have now is an extensible system with rollback and copies, but the querying is really cumbersome.

One of the problems with the megaview: no way to do joins. For instance, "give me all twitter messages that have not been seen by the user". Right now, knowledge of a message being from twitter is in a different schema document than the schema document that knows if it has been seen by the user. And the structure of the megaview means we can really only select one property at a time on a schema.

So it means doing multiple megaview calls and then doing the join in application code. We recently created a server-side API layer in python to do this. So the browser only makes one call to the server API and that API layer does multiple network calls to CouchDB to get the data, then does the join merging in memory.

Possible solutions

Save all schemas for a message in one document and more CouchDB views
Saving all schemas for a message in one document makes it possible to then at least consult one document for both the "type=twitter, seen=false" sort of data, but we still cannot query that with the megaview. It most likely means using more CouchDB views to get at the data. But views are expensive to generate after data has been written, so this approach does not seem to scale for our extensible platform.

This approach means taking a bit more care on rollbacks, but it is possible. It also increases the size of data stored on disk via Couch's append-only model, and will require compaction. With our existing system, we could consider just never compacting.

This is actually the approach we are starting to take. Mark is looking at creating "summary documents" of the data, but the summary documents are based on the API entry points and the kind of data the API wants to consume. These API entry points are very application-specific, so the summary document generation will likely operate like just another back-end extension. Mark has mentioned possibly going to one document to store all schemas for a message too.

However, what we have not sorted out how to do is an easier join model: "type=twitter and seen=false". What we really want is "type=twitter and seen=false, ordered by time with most recent first". Perhaps we can get away with a small set of CouchDB views that are very specific and that we can identify up-front. Searching on message type and being seen or unseen, ordered by time seems like a fairly generic need for a messaging system.

However, it means that the system as a whole is less extensible. Other applications on the Raindrop platform need to either use our server API model of using the megaview then doing joins in their app API code (may not be so easy to learn/perform), or tell the user to take the hit waiting for their custom views to get up to date with all the old messages.

Something that could help: Make CouchDB views less painful to create after the fact. Right now, creating a new view, then changing any document means waiting for that view to index all the documents in the couch, and it seems to take a lot of resources for this to happen. I think we would be fine with something that started with most recent documents first and worked backwards in time, using a bit more resources at first, but then tailing off and doing it in the background more, and allow the view to return data for things it has already seen.

Do not use CouchDB
It would be very hard for us to move away from CouchDB, and we would likely try to work with the CouchDB folks to make our system work best with couch and vice versa. It is helpful though to look at alternatives, and make sure we are not using a hammer for a screwdriver.

Schema-less storage is a requirement for our extensible platform. Something that handles ad-hoc queries better might be nice, since we basically are running ad-hoc queries with our API layer now, in that they have to do all the join work each time, for each request.

Dan Goldstein in the Raindrop chat mentioned MongoDB. Here is a comparison of MongoDB and CouchDB. Some things that might be useful:
  • Uses update-in-place, so the file-system impact and the need for compaction are lower; storing all of a message's schemas in one document is likely to work better here.
  • Queries are done at runtime. Some indexes are still helpful to set up ahead of time though.
  • Has a binary format for passing data around. One of the issues we have seen is the JSON encode/decode times as data passes around through couch and to our API layer. This may be improving though.
  • Uses language-specific drivers. While the simplicity of REST with CouchDB sounds nice, due to our data model, the megaview and now needing a server API layer means that querying the raw couch with REST calls is actually not that useful. The harder issue is trying to figure out the right queries to do and how to do the "joins" effectively in our API app code.
What we give up:
1) easy master-master replication. However, for me personally, this is not so important. In my mind, the primary use case for Raindrop is in the cloud, given that we want to support things like mobile devices and simplified systems like Chrome OS. In those cases it is not realistic to run a local couch server. So while we need backups, we probably are fine with master-slave. To support the sometimes-offline case, I think it is more likely that using HTML5 local storage is the path there. But again, that is just my opinion.

2) ad-hoc query cost may still be too high. It is nice to be able to pass back a JavaScript function to do the query work. However, it is not clear how expensive that really is. On the other hand, at least it is a formalized query language -- right now we are on the path to inventing our own with the server API with a "query language" made up of other API calls.

Persevere might be a possibility. Here is an older comparison with CouchDB. However, I have not looked in depth at it. I may ask Kris Zyp more about it and how it relates to the issues above. I have admired it from afar for a while. While it would be nice to get other features like built-in comet support, I am not sure it will address our fundamental issues any differently than say, MongoDB. It seems like an update-in-place model is used with queries run at runtime. But definitely worth more of a look.

Something else?

What did I miss? Bad formulation of the problem? Missing design solution with the tools we have now?
Author: "--"
Send by mail Print  Save  Delicious 
Date: Thursday, 19 Nov 2009 17:18
WebKit nightlies now support the HTML5 noreferrer link relation, a neat little feature that allows web developers to prevent browsers from sending the Referrer: header when navigating either <a> or <area> elements.  Just add noreferrer in the rel attribute of a link like so: <a href="www.example.com" rel="noreferrer">noreferrer!</a> When example.com receives the HTTP request generated by clicking this [...]
Author: "--"
Send by mail Print  Save  Delicious 
Date: Wednesday, 18 Nov 2009 17:23

We’re just about a month after the Windows 7 launch, and wanted to show an early look at some of the work underway on Internet Explorer 9. 

At the PDC today, in addition to demonstrating some of the progress on performance and interoperable standards, we showed how IE and Windows will make the power of PC hardware available to web developers in the browser. Specifically, we demonstrated hardware-accelerated rendering of all graphics and text in web pages, something that other browsers don’t do today. Web site developers will see performance gains and other benefits without having to re-write their sites.

Performance Progress. Browser performance involves many different sub-systems within the browser. Different sites – and different activities within the same site – place different loads and demands on the browser.

For example, two news sites might look similar to a user but have very different performance characteristics. Because of how the developers authored the sites, one site might spend most of its time in the Javascript engine and DOM, while the other site might spend most of its time in layout and rendering. A site that’s more of an “application” than a page (like web-based email, or the Office Web Apps) can exercise browser subsystems in completely different ways depending on the user’s actions.

The chart below shows how much time different sites spend in different subsystems of IE. For example, it shows that one major news site spends most of its time in the script engine and marshalling, while another spends most of its time in script and rendering, and the Excel Web App spends very little of its time running script at all.

[Chart: which IE subsystems different websites spend their time in; each site has a very different allocation across subsystems.]

Note that this chart shows the percentages of total time spent in each subsystem, not relative time between sites. It focuses on just the primary browsing sub-systems and doesn’t include “frame” functionality (like anti-phishing), or third-party software that’s running in the IE process (like toolbars, or controls like Flash). It also factors out networking since that’s dependent on the users network speed. Notice also that a site’s profile can change significantly across scenarios; for example, the Excel Web App profile for loading a file is quite different from the profile for selecting part of the sheet.

The script engine is just one of these browser subsystems. There are many benchmarks for script performance. One common test of script performance is from Apple’s Webkit team, the SunSpider test. The chart below shows the relative performance of different browsers on the same machine running the SunSpider test.

[Chart: IE, Firefox, Chrome, and Safari performance on the SunSpider test; the IE9 results are competitive with Firefox 3.6, Chrome 4, and the nightly WebKit build.]

In addition to IE7 and the current “final release” versions of major browsers, we’ve included the latest pre-release “under development” builds of the major browsers. We’re just about a month after IE8 was released as part of the Windows 7 launch, and the version of IE under development is no longer an outlier. 

It is worth noting that once the differences are this small, the other subsystems that contribute to performance become much more important, and perceiving the differences may be difficult on real-world sites. That said, we remain committed to improving script performance.

We’re looking at the performance characteristics of all the browser sub-systems as real-world sites use them. Our goal is to deliver better performance across the board for real-world sites, not just benchmarks.

Standards Progress. Our focus is providing rich capabilities – the ones that most developers want to use – in an interoperable way.  Developers want more capabilities in the browser to build great apps and experiences; they want them to work in an interoperable way so they don’t have to re-write and re-test their sites again and again. The standards process offers a good means to that end.

As engineers, when we want to assess progress, we develop a test suite that exercises the breadth and depth of functionality. With IE8, we delivered a highly-interoperable implementation of CSS 2.1 and contributed over 7,200 tests to the W3C. Standards that do not include validation tests are much more difficult to implement consistently, and more difficult for site developers to rely on.

Some standards tests – like Acid3 – have become widely used as shorthand for standards compliance, even with some shortcomings. Acid3 tests about 100 aspects of different technologies (many still in the “working draft” stage of standardization), including many edge cases and error conditions. Here’s the latest build of IE9 running Acid3: 

[Screenshot: the Acid3 test showing a score of 32.]

As we improve support in IE for technologies that site developers use, the score will continue to go up. A more meaningful (from the point of view of web developers) example of standards support involves rounded corners. Here’s IE9 drawing rounded corners, along with the underlying mark-up:

[Screenshot: a box with rounded corners; each corner is rounded differently.]

Another example of standards support that matters to web developers is CSS3 selectors. Here’s a test page that some people in the web development community put together at css3.info; it’s a good illustration of a more thorough test, and one that shows some of the progress we’ve made since releasing IE8:

[Screenshot: the css3.info test page showing many passing test cases.]

Community testing efforts like this one can be helpful. Ultimately, we want to work with the community and W3C and other members of the working groups to define true validation test suites, like the one that we’re all working on together for CSS 2.1, for the standards that matter to developers. For example, this link tests one of the HTML5 storage APIs; some browsers (including IE8) support it today, while others don’t.
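
As a quick way to see that divergence yourself, and assuming the storage API in question is window.localStorage, a feature check along these lines shows whether a given browser supports it:

 
// simple runtime check for the window.localStorage API mentioned above (assumed, not named in the post)
if(window.localStorage){
    window.localStorage.setItem("storage-test", "supported");
    alert("localStorage works: " + window.localStorage.getItem("storage-test"));
}else{
    alert("localStorage is not supported in this browser");
}
 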

The work we do here, both in the product and on test suites, is a means to an end: a rich interoperable platform that developers can rely on. 

Bringing the power of PC hardware and Windows to web developers in the browser. The PC platform and ecosystem around Windows deliver amazing hardware innovation. The browser should be a place where the benefits of that hardware innovation shine through for web developers.

We’re changing IE to use the DirectX family of Windows APIs to enable many advances for web developers. The starting point is moving all graphics and text rendering from the CPU to the graphics card using Direct2D and DirectWrite. Graphics hardware acceleration means that rich, graphically intensive sites can render faster while using less CPU. (This interview includes screen captures of a few examples.) Now, web developers can take advantage of the hardware ecosystem’s advances in graphics while they continue to author sites with the same interoperable standards patterns they’re used to.

In addition to better performance, this technology shift also increases font quality and readability with sub-pixel positioning:

96 point Gabriola on a Lenovo X61 ThinkPad at 100% Zoom using GDI (note jaggies):

[Image: GDI-rendered sample text]

96 point Gabriola on a Lenovo X61 ThinkPad at 100% Zoom: Direct2D (without jaggies):

[Image: Direct2D-rendered sample text]

Last week, Channel 9 interviewed several of the engineers on the team. You can find videos of the interviews here:

Introduction, and Interoperable Standards

Early look at the Script Engine

Hardware accelerated graphics and text in the browser via Direct2D

While we’re still early in the product cycle, we wanted to be clear to developers about our approach and the progress so far. We’re applying the feedback from the IE8 product cycle, and we’re committed to delivering on another version of IE.

Thanks,
Dean Hachamovitch
General Manager, Internet Explorer

Update 11/23/09 - The IE9 demo from PDC is now available.  The IE content starts around minute 48.

Author: "--"
Send by mail Print  Save  Delicious 
Date: Wednesday, 18 Nov 2009 12:59

Last night the Mozilla community released Firefox 3.6 Beta 3, making it available for free download and issuing an update for all Firefox 3.6 beta users. This update contains over 80 fixes from the last Firefox 3.6 beta, including many improvements for web developers, Add-on developers, and users. More than half of the thousands of Firefox Add-ons have now been upgraded by their authors to be compatible with Firefox 3.6 Beta. If your favorite Add-on isn’t yet compatible, you can also download and install the Add-on Compatibility Reporter – your favorite Add-on author will appreciate it!

The Mozilla community appreciates your feedback and assistance in testing this preview of the next version of Firefox. Your beta software will update itself periodically, and eventually will be updated to the final release itself.

The Beta of Firefox 3.6 / Gecko 1.9.2 introduces several new features for users to evaluate:

Web developers and Add-on developers should read more detail about the many new features in Firefox 3.6 for developers on the Mozilla Developer Center. For the full list of changes since the alpha release, see this list (it’s big).

Please use the following links to download Firefox 3.6 Beta, or visit the beta download page:

As always, the Mozilla community would appreciate hearing about any feedback you have about this release, or any bugs you may find.

Author: "--"
Send by mail Print  Save  Delicious 
Date: Monday, 16 Nov 2009 22:25

We hate crashes. When Firefox crashes, we try to get you back on your feet as quickly as possible, but we’d much rather you not crash in the first place. In Firefox 3.6, we are changing the way that some third party software hooks into Firefox which should eliminate a good chunk of those crashes without sacrificing our extensibility in any way. In the process, we’ll also be giving you greater control over the code that runs in your browser.

Background

Firefox is built around the idea of extensibility – it’s part of our soul. Users can install extensions that modify the way their browser looks, the way it works, or the things it’s capable of doing. Our add-ons community is an amazing part of the Mozilla ecosystem, one we work hard to grow and improve.

In addition to the standard mechanism for extending the browser via add-ons and plugins, though, there has historically been another way to do it. Third-party applications installed on your machine would sometimes try to extend Firefox by adding their own code directly to the “components” directory, where much of Firefox’s own code is stored.

There are no special abilities that come from doing things this way, but there are some significant disadvantages.  For one thing, components installed in this way aren’t user-visible, meaning that users can’t manage them through the add-ons manager, or disable them if they’re encountering difficulties. What’s worse, components dropped blindly into Firefox in this way don’t carry version information with them, which means that when users upgrade Firefox and these components become incompatible, there’s no way to tell Firefox to disable them. This can lead to all kinds of unfortunate behaviour: lost functionality, performance woes, and outright crashing – often immediately on startup.

In Firefox 3.6 (including upcoming beta refreshes), we’re closing this door. Third party applications can still extend Firefox via add-ons and plugins the way they always could, but the components directory will be for Firefox only.

What Does This Mean For Me?

If you’re a Firefox user, this should be 100% positive. You don’t have to change anything, your regular add-ons should continue to work properly – you just might notice fewer crashes or odd bugs. If you do notice that something has stopped working, particularly a third party addition to Firefox, you might want to contact the producer of that addition to ensure they know about the change.

If you’re a Firefox component developer, this shouldn’t be a big change, either. If you’re already packaging your additions as an XPI and installing them as an add-on, it’s business as usual. If you have been dropping components in directly, though, you’ll need to change to an XPI-based approach. Our migration document on the Mozilla Developer Connection outlines the changes you’ll need to make, and should be pretty straightforward. The good news is that once you’ve done this, your add-on will actually be visible to users and will support proper version information, so that our shared users are guaranteed a more positive experience.

If you haven’t downloaded the new Firefox beta yet, and want to give it a spin, you can find a copy here.

Johnathan Nightingale
Human Shield

Author: "--"
Send by mail Print  Save  Delicious 
Date: Monday, 16 Nov 2009 15:21

Many thanks to Tom Mahieu who organized the next dojo.beer("Antwerpen"), tomorrow November 17th in Antwerpen during the Devoxx.
If you are near the conference, go and join this event and enjoy an evening full of JavaScript, Dojo and much more.

The event starts at 9pm; make sure you bring some of your Dojo work so you can show what you have done.
More information about the event can be found at the LinkedIn events page.

Looking forward to the very first dojo.beer() in Belgium!! Yiha

Author: "--"
Send by mail Print  Save  Delicious 
Date: Wednesday, 23 Sep 2009 18:13
Chrome Frame came out yesterday. I like the conversation it is trying to start. It makes a big difference having working code to get the conversation to seriously happen. There are some mechanics of the specific approach that probably need tweaking, but the general idea is worth consideration.

The big point is about separating the innovation over organizing the user's experience of the web in general (the Browser) vs. innovation in the display of web sites (the Renderer).

The Chrome Frame approach allows browser makers to preserve their revenue models and their UI interaction models, at least in the ideal -- there are some specific issues to work out with the Chrome Frame model, like password/form data storage. But I think the direction is a good one.

It also solves the issue where a web application was developed a few years ago and is not going to be maintained any more. If the browser allows multiple rendering engines it makes it possible for those old web apps to continue to work.

Old enterprise/business applications in particular can continue to work, but we still get to move the new web experience forward.

From a browser market share perspective, this clean split of responsibility could allow a newer browser like Google Chrome to get more market share. It could use the reverse idea: embed the IE render engine in Google Chrome. Then, take that to all the IT administrators and say, here is a newer, safer, more secure browser for your company that will work on older machines, is free, and can still allow your old business web apps to work.

That may not work, since most people are attached to their browser chrome vs. the actual renderer. In those cases, the plain Chrome Frame approach of installing a plugin for another renderer works to allow better, faster web app experiences.

One concern: user choice or control. I think it actually leads to more user control: the user gets to keep the browser chrome, their organizing model for the whole web, intact, but gets to use more of the better parts of the web. And for areas where the user does not have control now (in business environments that need old apps to work), it gives a way out to use more of the web.

Of course, there should be user controls so they can set preferences and overrides for the renderers used. At a minimum, business IT groups will need them to configure the browsers to use old renderers for older in-house business web apps.

So, some tweaks to the basic mechanics I would like to see (realizing that I have no concept how hard this work would be):

1) Make sure the Renderer works well with the Browser: for instance make sure that saved passwords/form data works well no matter what Renderer is used. Make sure the split is clean.

2) Change the UA-Compatible thing to be more Renderer-based feature sets vs. browser version numbers. So, something where the developer mentions the capabilities that are desired:

<meta equiv="X-UA-Compatible" content="addEventListener=1,svg=1">

If the developer has a suggested browser renderer, it could place that at the end, sort of how CSS font names can start with generic names, then get more specific:

<meta equiv="X-UA-Compatible" content="addEventListener=1,svg=1;gecko=1.9.2">

It would be good to *not* use browser names in the tag, but rather the renderer engine names/versions. Ideally though, just list capabilities.

The above syntax is not exactly right, but just to demonstrate the idea: focus on telling the browser the capabilities the page wants, and use render engine names, not browser names.

3) Make sure the user can override the choices made by the browser. The pref control does not have to be obvious, but should be there, so the user has the final say.

In summary, as a conversation starter, I like that Chrome Frame has really tried to highlight the difference between upgrades in renderers vs. browser interface.


Author: "--"
Send by mail Print  Save  Delicious 
Date: Tuesday, 22 Sep 2009 22:47

Google today announced Chrome Frame, a plug-in that selectively upgrades Internet Explorer without breaking existing sites. Think of it as working like Flash, but for open web technologies, replacing Internet Explorer’s entire rendering engine for sites that include a single meta tag indicating that they would prefer to use Chrome Frame rather than IE.
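
For reference, the opt-in is a single tag in the page's head; if memory serves it looks like <meta http-equiv="X-UA-Compatible" content="chrome=1">.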

So why is this a good thing?

* Remaining IE users (especially IE 6) are unlikely to upgrade because they have apps that don’t work in other browsers. This allows them to get the best of both worlds when Chrome Frame is installed, with fast performance and better features for sites that take advantage of this plug-in.
* Open source and open standards, so it’s unlike Flash or Silverlight, yet still very convenient.
* For users that blindly click “yes”, this is a great way to upgrade them from Internet Explorer, finally, without taking away their “Big E” or having older sites break on them.

Again, this is targeted at people that cannot or refuse to upgrade from Internet Explorer. Obviously Google has a few hurdles to overcome to make this technology completely relevant:

* Supporting older versions of Windows, where older versions of Internet Explorer are more prevalent.
* Convincing IT departments that presently lock down the version of Internet Explorer that installing Chrome Frame is ok.

Even with these obstacles, this is very exciting news for the future of web application performance and ending support for Internet Explorer 6.

Congrats to the Chrome team and Alex Russell, SitePen alum and Dojo Toolkit co-founder, on this great announcement!

Author: "--"
Send by mail Print  Save  Delicious 
Date: Tuesday, 22 Sep 2009 22:42

If you’re attending the Future of Web Apps conference in London in early October, be sure to introduce yourself. I’m excited to learn the results of the 2009 Web Application survey.

After the conference, you can learn more about SitePen and the future of Dojo at these post-conference events:

Author: "--"
Send by mail Print  Save  Delicious 
Date: Monday, 21 Sep 2009 23:29
Previously I touched on what exactly the Page Cache does and outlined some of the improvements we’re working on. This post is geared towards web developers and is therefore even more technical than the last. In this article I’d like to talk more about unload event handlers, why they prevent pages from going into the Page Cache, [...]
Author: "--"
Send by mail Print  Save  Delicious 
Date: Saturday, 19 Sep 2009 18:29
  • Dynamic languages can’t be fast relative to static languages
  • Any language with a working lambda can be saved from itself, given a fast enough runtime. But you can’t save the other folks who use that language
  • You agree with me
  • RDFa is smart technology, and can be cleanly integrated into HTML
  • It was all invented in the 70’s
  • Java-style static typing prevents me from doing dumb things in the small. This makes it awesome.
  • You slow down as you get older, but it’s a learned response. You get there because you find caution useful. You stay there because you find caution comfortable
  • Java-style classes prevent me from doing smart things in the large, or at least makes smart things harder to communicate. This makes it terrible.
  • Dynamic languages can be more than fast enough.
  • Your language is probably better than my language
  • C++ made it all possible years ago, but nobody noticed because their compiler didn’t support it yet
  • RDFa is doomed to inevitable, painful failure
  • Making it common is more important than making it to start with
  • Forging agreement is hard, sometimes impossible
  • You violently disagree with most things I say
  • The more things change, the more they change
Author: "--"
Send by mail Print  Save  Delicious 
Date: Friday, 18 Sep 2009 08:31

The dojo.beer() circus is coming to London!  On October 3rd 2009 SitePen and Uxebu are hosting an all day dojo.beer() event, including talks on all things Dojo from the core committers, a bit of hacking and a lot of beer.  Not necessarily in that order ;-)   See http://dojobeerlondon.eventbrite.com for more details.

I’m hopping over the Irish sea to join in the festivities, hope to see you there

Author: "--"
Send by mail Print  Save  Delicious 
Date: Wednesday, 16 Sep 2009 23:47
This is the first of two posts that will center around a modern browser engine feature that doesn’t usually get a lot of press: The Page Cache. Today I’ll talk a bit about what this feature is, why it often doesn’t work, and what plans we have to improve it. Page Cache Overview Some of you might be [...]
Author: "--"
Send by mail Print  Save  Delicious 
Date: Wednesday, 16 Sep 2009 21:49

We wrote last week about a new project we’ve started, informing our users when they’re running out-of-date versions of popular plugins. We focused our initial efforts on the Adobe Flash Player and now, a week after launch, Mozilla’s Numerator, Ken Kovash, has a blog post up looking at the results.

Those results have been nothing short of awesome. In the first week that the project has been live, we’ve seen 10 million people click through from our page to Adobe’s update site. As Ken points out, this is not just a huge number, it’s also about 5x higher click through than that page typically sees.

We’re continuing to look for ways to help our users stay safe and up to date. We’re working to roll other plugins into our web-based checking, and the Firefox team is also building an integrated check that will let you know whenever a site you visit is trying to use an outdated plugin (more on that soon). This is just the beginning.

Author: "--"
Send by mail Print  Save  Delicious 