

Date: Tuesday, 26 Aug 2014 03:28

For years we’ve used CSS classes not only for styling but also as signifiers or “triggers” for JavaScript functionality.  For example, Mozilla uses a #tabzilla element (coupled with a JavaScript file that looks for it) to inject the Tabzilla widget.  Using #tabzilla is nice, and even though it’s an ID, it isn’t necessarily descriptive.

Now cut to cases where there isn’t an ID, and you’re triggering JavaScript functionality solely based on a CSS class.  You could add a CSS class signifier like highlight-me or login, but when you try to maintain the HTML, those classes are a relative mystery.  What I propose is a system by which we add a do- prefix to any CSS class that is implemented solely for the sake of JavaScript functionality.  A few examples:

<div class="do-launch-login">...</div>

<a class="do-close-modal">...</a>

<input class="do-autocomplete" />

So I’m not proposing anything groundbreaking, and hopefully you already have a system in place, but if you don’t, you should consider a do- or similar convention.  The days of using generic CSS classes are over, and a class prefix like this will help you quickly identify whether a class is relevant to styling or simply implemented to identify elements for JavaScript functionality.

Again, this probably seems like a simple idea and exercise, but it’s an important practice.  If you open an HTML template, look at an element, and have no idea if it’s used for JavaScript or not, you’re possibly in for a world of hurt.  And the word “possibly” should scare the hell out of you.

The specific prefix isn’t important as long as you use one consistently.  As long as you have a className that you can immediately identify as JavaScript-only, you’ll be in good shape!
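
As a sketch of how the convention pays off on the JavaScript side, here’s a hypothetical helper (not part of any library) that pulls the do- classes out of a className string, so you can tell behavior hooks apart from styling classes at a glance:

```javascript
// Hypothetical helper: given an element's className string, return
// only the behavior ("do-") classes, ignoring purely stylistic ones.
function behaviorClasses(className) {
	return className.split(/\s+/).filter(function(cls) {
		return cls.indexOf('do-') === 0;
	});
}

console.log( behaviorClasses('btn btn-primary do-launch-login') );
// [ 'do-launch-login' ]
```

The same filter could drive querySelectorAll-based wiring or a lint check that flags behavior classes referenced in stylesheets.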

Read the full article at: CSS `do-*`


Author: "David Walsh" Tags: "CSS"
Date: Monday, 25 Aug 2014 12:52

When I tell people that I’m a web programmer, they think I’m a genius.  When I tell them I work for the company that makes Firefox, they think I’m some sort of God.  I’d be willing to bet other developers out there get the same treatment.  And I don’t say this to talk myself and people in our profession up, I say it because people outside of our profession don’t have any idea how we do what we do.

Even developers look up to other developers in our industry that way.  I feel like every other developer around me at Mozilla is a legend, and in many cases, they are.  It’s the reason we have such a thing as imposter syndrome, and it’s the reason we’re intimidated to join OSS projects and approach popular developers at conferences.  The following comic seems apt:

We all make stupid mistakes.  We all stare at our code to find that one line that is somehow bricking our app.  We all swear our browser is broken before finding the one obvious mistake in our code.  Don’t sweat it and don’t feel like you aren’t good enough.  Everyone has these moments and no developer goes a day without struggling on something basic.  Now go out there and kick some ass!

Read the full article at: The Truth About Programming Perception


Author: "David Walsh" Tags: "Comics"
Date: Friday, 22 Aug 2014 01:21

If you have an Android device, you’ve gotta check out Firefox for Android.  It’s an outstanding mobile browser — it has been very well received and you can even install apps from the Firefox Marketplace from within this awesome browser.  One usability practice implemented by Firefox for Android is a gradient shade on all button elements.  While I appreciate the idea, I don’t necessarily want this added to elements that I want to look a very specific way.  Removing this gradient effect is simple:

/* Cancels out Firefox Mobile's gradient background */
button {
	background-image: none;
}

Before you jump on Mozilla for this practice, WebKit-based browsers do something very similar.  Preventing this effect is also very simple so if you want to remove this gradient, use the code above and you’re on your way!

Read the full article at: Remove Mobile Firefox Button Gradient


Author: "David Walsh" Tags: "CSS, Firefox OS, Mobile"
Date: Tuesday, 19 Aug 2014 14:08

In case you didn’t know:  the damn DOM is slow.  As we make our websites more dynamic and AJAX-based, we need to find ways of manipulating the DOM with as little impact on performance as possible.  A while back I mentioned DocumentFragments, a clever way of collecting child elements under a “pseudo-element” so that you could mass-inject them into a parent.  Another great element method is insertAdjacentHTML:  a way to inject HTML into an element without affecting any elements within the parent.

The JavaScript

If you have a chunk of HTML in string format, returned from an AJAX request (for example), the common way of adding those elements to a parent is via innerHTML:

function onSuccess(newHtml) {
	parentNode.innerHTML += newHtml;
}

The problem with the above is that setting innerHTML destroys any references to child elements, and any events connected to them, even if you’re only appending more HTML.  insertAdjacentHTML with beforeend fixes that issue:

function onSuccess(newHtml) {
	parentList.insertAdjacentHTML('beforeend', newHtml);
}

With the code sample above, the string of HTML is appended to the parent without affecting other elements under the same parent.  It’s an ingenious way of injecting HTML into a parent node without the dance of appending HTML or temporarily creating a parent node and placing the child HTML within it.

This API reeks of knowing a problem exists and fixing it — who would have thought?!  OK, that was a bit passive aggressive, but you know what I mean.  Keep insertAdjacentHTML handy — it’s a lesser-known API that more of us should be using!

Read the full article at: JavaScript insertAdjacentHTML and beforeend


Author: "David Walsh" Tags: "JavaScript"
Date: Monday, 18 Aug 2014 02:25

Every once in a while I’ll test if a given function is native code — it’s an important part of feature detection:  determining whether a function was provided by the browser or by a third-party shim that acts like the native feature.  The best way to detect this, of course, is evaluating the toString return value of the function.

The JavaScript

The code to accomplish this task is fairly basic:

function isNative(fn) {
	return (/\{\s*\[native code\]\s*\}/).test('' + fn);
}

Converting the function to its string representation and performing a regex match on it is how it’s done.  This quick check works in most cases, but a function’s own toString can be faked out.
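
For instance, running the check against a built-in versus a user-defined function (in a browser console or Node) gives:

```javascript
function isNative(fn) {
	return (/\{\s*\[native code\]\s*\}/).test('' + fn);
}

console.log( isNative( Array.prototype.push ) ); // true
console.log( isNative( function(){} ) );         // false
```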


Lodash creator John-David Dalton has provided a better solution:

;(function() {

  // Used to resolve the internal `[[Class]]` of values
  var toString = Object.prototype.toString;

  // Used to resolve the decompiled source of functions
  var fnToString = Function.prototype.toString;

  // Used to detect host constructors (Safari > 4; really typed array specific)
  var reHostCtor = /^\[object .+?Constructor\]$/;

  // Compile a regexp using a common native method as a template.
  // We chose `Object#toString` because there's a good chance it is not being mucked with.
  var reNative = RegExp('^' +
    // Coerce `Object#toString` to a string
    String(toString)
    // Escape any special regexp characters
    .replace(/[.*+?^${}()|[\]\/\\]/g, '\\$&')
    // Replace mentions of `toString` with `.*?` to keep the template generic.
    // Replace things like `for ...` to support environments like Rhino which add extra info
    // such as method arity.
    .replace(/toString|(function).*?(?=\\\()| for .+?(?=\\\])/g, '$1.*?') + '$'
  );

  function isNative(value) {
    var type = typeof value;
    return type == 'function'
      // Use `Function#toString` to bypass the value's own `toString` method
      // and avoid being faked out.
      ? reNative.test(fnToString.call(value))
      // Fallback to a host object check because some environments will represent
      // things like typed arrays as DOM methods which may not conform to the
      // normal native pattern.
      : (value && type == 'object' && reHostCtor.test(toString.call(value))) || false;
  }

  // export however you want
  module.exports = isNative;

}());

So there you have it — a better solution for detecting if a method is native. Of course you shouldn’t use this as a form of security — it’s only to hint toward native support!

Read the full article at: Detect if a Function is Native Code with JavaScript


Author: "David Walsh" Tags: "JavaScript, Quick Tips"
Date: Friday, 15 Aug 2014 14:32
Treehouse Coupon Code

Treehouse has been a mainstay of web learning for a few years now and I’m amazed at how they’ve grown.  You’d expect them to teach basic stuff like web design and JavaScript, but they’ve moved on to broader but important topics like jQuery, DNS, and entrepreneurship.  Treehouse doesn’t just get you started in web development, they try to encompass the entire web development and admin sphere.  The only thing missing is the fire drills you experience those first few years on the job!  I thought I’d put together a list of reasons to try Treehouse, even for the biggest skeptics among you!

They Flipped the Web Learning Process on its Head

I went the “low route” to a web development career, meaning I went to a small technical college and then went on to another two-year school to get my bachelor’s degree.  What did I learn from that?  A piece of paper costs a lot.  I was paying thousands of dollars for a piece of paper that no one has ever asked for.  Treehouse is much more direct and cost-effective.  Videos, quizzes, detailed documentation, and best of all:  it’s focused on the set of tasks important to the topic you’re looking to learn; imagine that.

Treehouse iPad App

They Have an Epic iPad App

I wrote about Treehouse’s iPad app a while back, but it bears repeating:  it’s awesome.  It’s usable, well designed, and provides all of the videos, quizzes, and code sample creation you’d find on the site.  Learning on the go!

Get One, Give One

For every gold account purchased, Treehouse gives one to a public school student.  That’s absolute class.  Knowledge of the web and how to create beautiful websites and apps is a gift that will last a long time, especially considering public schools generally don’t (in my experience) teach the topic very well, especially at early ages.  I encourage you to read Treehouse’s public school initiative, as it sheds light on why they do it.

You Can Start For Free

Treehouse lets you start for free, so you aren’t tied into paying a bunch before you get a chance to give their online education system a real go!  There’s no reason to not give it a try!

If only I had Treehouse when I was younger.  I enjoyed my struggles learning, but a more structured learning environment would have better taught me basics I learned later on by making mistakes, sometimes in production.  As a seasoned dev now, I still use Treehouse to learn about advanced DNS and web design principles, and occasionally even the entrepreneurship instruction.

Read the full article at: Try Treehouse!


Author: "David Walsh" Tags: "Blog"
Date: Wednesday, 13 Aug 2014 17:17

Word has it that X, which was released by Themeco (theme.co) at the end of last year, is a great asset for web designers and developers everywhere – and the last website theme you’ll have to buy. As more professionals hurry to get their hands on it, the wildly spreading enthusiasm is growing every day. X provides multiple designs, free updates, full BuddyPress and WooCommerce integration, excellent Shortcodes, and a fantastic live previewer. In this article, I’d like to introduce you to some of the greatest features in X plus take you behind the scenes to hear directly from their developers as well as showcase a dozen different sites built with this powerful theme. Let’s begin…


Case Study: Performance Optimizations

For the developers out there, we reached out to the Themeco team to share with us a little bit about how they go about developing X, what features are important to them, and most importantly how they go about creating a product that is both feature-rich as well as optimized for performance. Here is what they had to say on the topic of performance with a real-life example from one of their recent releases.

“Performance is a big deal to us and we are constantly on the lookout for even the tiniest detail that we can improve upon to ultimately make X perform at the highest levels possible. With our 2.2.0 release, we specifically wanted to take a closer look at how the Options API (http://codex.wordpress.org/Options_API) compared to the Theme Modification API (http://codex.wordpress.org/Theme_Modification_API) for storing and retrieving settings from the database. The Theme Modification API essentially acts as a wrapper for the Options API as theme modifications are stored as options, but this is done by serializing all stored settings into one row in the database instead of individual rows for each option. While utilizing the Theme Modification API is ostensibly a better choice as it stores all options within a serialized location in the database, we were curious to know if the unserialization process in retrieving each value might be affecting performance. We ran both methods through a PHP profiler to see where any potential bottleneck might be and through our research found some interesting data points:

On average, 67% of an entire page request happens within the get_theme_mod() function when it is utilized. Compare this to 29% on average when utilizing the get_option() function. This certainly narrows things down for us as we can see right away that one is twice as fast!

When using the Theme Modification API, PHP’s unserialize() function accounts for roughly 23% of the load time. So if a hypothetical page was taking around four seconds to load, nearly one second of that would simply be attributed to the converting of data from its storage format into something more usable at runtime. Again, the Options API does not need to take advantage of this as all settings are stored in the database without any modifications, so they can be fetched immediately.

The default method used in the Customizer when adding settings is the Theme Modification API, which is what we utilized in the theme up until our most recent release. We were curious to know what would happen if we were to switch to using the Options API instead, which stores each setting as an individual row in the database. Upon doing this, we instantly noticed the following improvements:

81% improvement in page load times in the Customizer.

31% improvement in front end load times.

X-Theme Performance

These are very meaningful findings that made a huge impact on frontend and backend performance with a very minor change. So should we forget about the Theme Modification API altogether? Not necessarily. It can be a bit subjective as it really depends on the application and the amount of data being accessed regularly. If you’re working with a small set of data the difference isn’t going to be noticeable, but if you’re working with a theme framework, or plan to store dozens of options, then the Options API seems to show much better performance.”

Completely Customizable Designs


I’ve never seen anything like this before. The Stacks from X are actually multiple, specific designs all for the price of one theme (with more in the development lab). The Integrity Stack is ideal for elegant, business-like sites, whereas Renew is the epitome of flat design in the instance where you don’t want any 3D elements. For a minimalistic and modern look, you may prefer to use Icon and so place your site content in the spotlight. Finally, the most recent addition to the X family is Ethos, a visually compelling design that is perfect for magazine and news websites. Remember that these are not individual themes (although they almost function like that) but rather different designs built into the one X theme!

Creating a way to build multiple unique designs into one performance-optimized theme was not enough for the Themeco folks. They decided to reinvent the way their WordPress theme was administered by doing away with the admin panel. 

Instead of forcing you to make changes on the backend of your site, Themeco provides their customers with quite literally the most fun and effortless way to manage the configuration of your site: with a live preview!

X-Theme Live Preview

However you choose to manage or customize your website, know that each and every move, and its overall effects on the site’s appearance and functionality, can be previewed live. It’s child’s play to establish pixel and percentage width, set background colors or images, as well as decide on the complexity, position and height of sidebars and navigation elements. Instead of it being a chore to manage the setting up of the admin parts of your site, it is actually an enjoyable experience with X.

Innovations Powered by Elite Marketers

If all that weren’t enough, the Themeco team set out to pick the brains of some of the world’s top specialists in email/video marketing, SEO and copywriting. This invaluable feedback was then taken and built into the theme and training material providing X customers with much, much more than just a “theme.” X users are all privy to more than the average WordPress Shortcodes. Through a simple plugin, they can take advantage of several brand new Shortcodes created by the engineers of Theme X. For instance, the Table of Contents Shortcode increases Google site visibility, as well as invites more pageviews. Furthermore, the Responsive Visibility Shortcode allows users to choose which things to exhibit or leave out on various devices.

One thing you will quickly see when using X is that extreme thought and planning has gone into every single detail. Creating a tool that is as extensible as X can be quite the challenge given the different skill sets and abilities of customers in the “web design” world, however the final product that Themeco has put together is simply superb. It’s no surprise they are the fastest selling theme of all time over at ThemeForest and have set several records with the theme in the process.

X-Theme Support

Support That Is Second To None

Perhaps one of the most complimented aspects of Themeco’s offering in X is their incredible customer support. With recent reports of average response times being just a few hours, it is no wonder that people are flocking to this WordPress theme en masse! In addition, their superb documentation and ever expanding video tutorials provide numerous self-serve resources for those wanting to get the most out of X. With customers ranging from complete beginners to the government, Themeco has established themselves as the preeminent provider of world-class support in the WordPress space…something you have to see to believe!

Demos & Customer Showcase

No other WordPress theme has ever been met with such raving success: over the course of a few months, more than 20,000 copies of Theme X were purchased and part of that is due to the incredible demos they have put together. To access over 30 unique demo sites be sure to head on over to their main page: http://theme.co/x/

Moreover, we wanted to take you behind the scenes to see actual customer sites built with this amazing theme. As we shared above, Stacks are the unique designs built into the X theme. They come with their own unique look and features, so we have picked out 3 sites for each Stack that are using X at the time of this writing. Be sure to check them out below to get an idea of just how extensible and versatile this theme truly is. Don’t forget that ALL of these were built with the same theme.

Integrity Stack Examples

Renew Stack Examples

Icon Stack Examples

Ethos Stack Examples

The Future Is Bright With X

It goes without saying, the future is quite bright with X. After learning more about the attention to detail on every aspect of the theme in addition to Themeco’s commitment to keeping X the best theme on the market both now and in the future, you can rest assured that when making the move to X you will have a team of committed professionals with you every step of the way (and have a lot of fun in the process).

Read the full article at: The X Theme: Inside Look, Customer Showcase & More (Sponsored)


Author: "David Walsh" Tags: "Sponsored"
Date: Tuesday, 12 Aug 2014 21:03

Velocity Conference

O’Reilly’s Velocity Conference is quickly approaching — it’s September 15–17 in beautiful New York.  As a follow-up to last month’s post, I wanted to make sure people knew I had 3 more tickets left to give away to this epic front-end performance conference!

In my last post, I asked for links to awesome performance-related articles.  I learned a ton and I hope you did too!  This time I’m looking for something a bit more interactive!  In the comments below, please post a link to an awesome demo.  Whether it’s a CSS animation or a canvas/WebGL masterpiece, I want to see something epic!

If you entered via the previous post, your entry will be put in the drawing for subsequent ticket giveaways. If you don’t want to chance it and want to get a 20% off discount to the conference, use code AFF20 after clicking this link!

Read the full article at: Velocity NY is Coming!


Author: "David Walsh" Tags: "Events"
Date: Tuesday, 12 Aug 2014 13:24

If you’ve read and digested part 1, part 2, and part 3 of this blog post series, you’re probably feeling pretty confident with ES6 generators at this point. Hopefully you’re inspired to really push the envelope and see what you can do with them.

Our final topic to explore is kinda bleeding edge stuff, and may twist your brain a bit (still twisting mine, TBH). Take your time working through and thinking about these concepts and examples. Definitely read other writings on the topic.

The investment you make here will really pay off in the long run. I’m totally convinced that the future of sophisticated async capability in JS is going to rise from these ideas.

Formal CSP (Communicating Sequential Processes)

First off, my inspiration on this topic comes almost entirely from the fantastic work of David Nolen @swannodette. Seriously, read whatever he writes on the topic. Here are some links to get you started:

OK, now to my exploration of the topic. I don’t come to JS from a formal background in Clojure, nor do I have any experience with Go or ClojureScript. I found myself quickly getting kinda lost in those readings, and I had to do a lot of experimentation and educated guessing to glean useful bits from it.

In the process, I think I’ve arrived at something that’s of the same spirit, and goes after the same goals, but comes at it from a much-less-formal way of thinking.

What I’ve tried to do is build up a simpler take on the Go-style CSP (and ClojureScript core.async) APIs, while preserving (I hope!) most of the underlying capabilities. It’s entirely possible that those smarter than me on this topic will quickly see things I’ve missed in my explorations thus far. If so, I hope my explorations will evolve and progress, and I’ll keep sharing such revelations with you readers!

Breaking CSP Theory Down (a bit)

What is CSP all about? What does it mean to say “communicating”? “Sequential”? What are these “processes”?

First and foremost, CSP comes from Tony Hoare’s book “Communicating Sequential Processes”. It’s heavy CS theory stuff, but if you’re interested in the academic side of things, that’s the best place to start. I am by no means going to tackle the topic in a heady, esoteric, computer sciency way. I’m going to come at it quite informally.

So, let’s start with “sequential”. This is the part you should already be familiar with. It’s another way of talking about single-threaded behavior and the sync-looking code that we get from ES6 generators.

Remember how generators have syntax like this:

function *main() {
    var x = yield 1;
    var y = yield x;
    var z = yield (y * 2);
}

Each of those statements is executed sequentially (in order), one at a time. The yield keyword annotates points in the code where a blocking pause (blocking only in the sense of the generator code itself, not the surrounding program!) may occur, but that doesn’t change anything about the top-down handling of the code inside *main(). Easy enough, right?
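
To make the sequential stepping concrete, here’s how an outside driver might walk *main() manually (the values passed to next(..) are arbitrary, just to show the flow):

```javascript
function *main() {
    var x = yield 1;
    var y = yield x;
    var z = yield (y * 2);
    return z;
}

var it = main();
console.log( it.next() );     // { value: 1, done: false }
console.log( it.next( 10 ) ); // x = 10, so { value: 10, done: false }
console.log( it.next( 3 ) );  // y = 3, so { value: 6, done: false }
console.log( it.next( 42 ) ); // z = 42, so { value: 42, done: true }
```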

Next, let’s talk about “processes”. What’s that all about?

Essentially, a generator sort of acts like a virtual “process”. It’s a self-contained piece of our program that could, if JavaScript allowed such things, run totally in parallel to the rest of the program.

Actually, that’s fudging things a little bit.  If the generator accesses shared memory (that is, if it accesses “free variables” besides its own internal local variables), it’s not quite so independent.  But let’s just assume for now we have a generator function that doesn’t access outside variables (so FP theory would call it a “combinator”).  So, it could in theory run in/as its own process.

But we said “processes” — plural — because the important part here is having two or more going at once. In other words, two or more generators that are paired together, generally to cooperate to complete some bigger task.

Why separate generators instead of just one? The most important reason: separation of capabilities/concerns. If you can look at task XYZ and break it down into constituent sub-tasks like X, Y, and Z, then implementing each in its own generator tends to lead to code that can be more easily reasoned about and maintained.

This is the same sort of reasoning you use when you take a function like function XYZ() and break it down into X(), Y(), and Z() functions, where X() calls Y(), and Y() calls Z(), etc. We break down functions into separate functions to get better separation of code, which makes code easier to maintain.

We can do the same thing with multiple generators.

Finally, “communicating”. What’s that all about? It flows from the above — cooperation — that if the generators are going to work together, they need a communication channel (not just access to the shared surrounding lexical scope, but a real shared communication channel they all are given exclusive access to).

What goes over this communication channel? Whatever you need to send (numbers, strings, etc). In fact, you don’t even need to actually send a message over the channel to communicate over the channel. “Communication” can be as simple as coordination — like transferring control from one to another.

Why transferring control? Primarily because JS is single-threaded and literally only one of them can be actively running at any given moment. The others then are in a running-paused state, which means they’re in the middle of their tasks, but are just suspended, waiting to be resumed when necessary.

It doesn’t seem to be realistic that arbitrary independent “processes” could magically cooperate and communicate. The goal of loose coupling is admirable but impractical.

Instead, it seems like any successful implementation of CSP is an intentional factorization of an existing, well-known set of logic for a problem domain, where each piece is designed specifically to work well with the other pieces.

Maybe I’m totally wrong on this, but I don’t see any pragmatic way yet that any two random generator functions could somehow easily be glued together into a CSP pairing. They would both need to be designed to work with the other, agree on the communication protocol, etc.


There are several interesting explorations in CSP theory applied to JS.

The aforementioned David Nolen has several interesting projects, including Om, as well as core.async. The Koa library (for node.js) has a very interesting take, primarily through its use(..) method. Another library that’s pretty faithful to the core.async/Go CSP API is js-csp.

You should definitely check out those great projects to see various approaches and examples of how CSP in JS is being explored.

asynquence’s runner(..): Designing CSP

Since I’ve been trying intensely to explore applying the CSP pattern of concurrency to my own JS code, it was a natural fit for me to extend my async flow-control lib asynquence with CSP capability.

I already had the runner(..) plugin utility which handles async running of generators (see “Part 3: Going Async With Generators”), so it occurred to me that it could be fairly easily extended to handle multiple generators at the same time in a CSP-like fashion.

The first design question I tackled: how do you know which generator gets control next?

It seemed overly cumbersome/clunky to have each one have some sort of ID that the others have to know about, so they can address their messages or control-transfer explicitly to another process. After various experiments, I settled on a simple round-robin scheduling approach. So if you pair three generators A, B, and C, A will get control first, then B takes over when A yields control, then C when B yields control, then A again, and so on.
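
A stripped-down, entirely hypothetical illustration of that round-robin hand-off (not asynquence’s actual implementation), with a bare yield standing in for “I’m done with my turn”:

```javascript
// Run each generator one step at a time, in round-robin order, until
// all are finished. Only one is ever active at a time, mirroring JS's
// single-threaded "running-paused" cooperation.
function roundRobin(/* ...generatorFns */) {
    var log = [];
    var iterators = Array.prototype.slice.call(arguments).map(function(g) {
        return g(log);
    });
    var anyRunning = true;
    while (anyRunning) {
        anyRunning = false;
        iterators.forEach(function(it) {
            if (!it.next().done) anyRunning = true;
        });
    }
    return log;
}

function *a(log) { log.push('a1'); yield; log.push('a2'); }
function *b(log) { log.push('b1'); yield; log.push('b2'); }

console.log( roundRobin(a, b) ); // [ 'a1', 'b1', 'a2', 'b2' ]
```

Note how the two generators’ steps interleave: each yield hands the turn to the next “process” in line.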

But how should we actually transfer control? Should there be an explicit API for it? Again, after many experiments, I settled on a more implicit approach, which seems to (completely accidentally) be similar to how Koa does it: each generator gets a reference to a shared “token” — yielding it will signal control-transfer.

Another issue is what the message channel should look like. On one end of the spectrum you have a pretty formalized communication API like that in core.async and js-csp (put(..) and take(..)). After my own experiments, I leaned toward the other end of the spectrum, where a much less formal approach (not even an API, just a shared data structure like an array) seemed appropriate and sufficient.

I decided on having an array (called messages) that you can arbitrarily decide how you want to fill/drain as necessary. You can push() messages onto the array, pop() messages off the array, designate by convention specific slots in the array for different messages, stuff more complex data structures in these slots, etc.

My suspicion is that some tasks will need really simple message passing, and some will be much more complex, so rather than forcing complexity on the simple cases, I chose not to formalize the message channel beyond it being an array (and thus no API except that of arrays themselves). It’s easy to layer on additional formalism to the message passing mechanism in the cases where you’ll find it useful (see the state machine example below).
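
For example, one could layer a minimal put/take pair over the bare array when a task wants something more formal (purely illustrative, not part of asynquence’s API):

```javascript
// Hypothetical wrapper: a tiny put/take facade over a plain array
// channel, for tasks that prefer explicit message-passing verbs.
function channel() {
    var messages = [];
    return {
        messages: messages,
        put: function(msg) { messages.push( msg ); },
        take: function() { return messages.shift(); }
    };
}

var ch = channel();
ch.put( 2 );
ch.put( 40 );
console.log( ch.take() );    // 2
console.log( ch.messages );  // [ 40 ]
```

Because the underlying structure is still just an array, simple cases can ignore the facade entirely and push/pop directly.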

Finally, I observed that these generator “processes” still benefit from the async capabilities that stand-alone generators can use. In other words, if instead of yielding out the control-token, you yield out a Promise (or asynquence sequence), the runner(..) mechanism will indeed pause to wait for that future value, but will not transfer control — instead, it will return the result value back to the current process (generator) so it retains control.

That last point might be (if I interpret things correctly) the most controversial or unlike the other libraries in this space. It seems that true CSP kind of turns its nose at such approaches. However, I’m finding having that option at my disposal to be very, very useful.

A Silly FooBar Example

Enough theory. Let’s just dive into some code:

// Note: omitting fictional `multBy20(..)` and
// `addTo2(..)` asynchronous-math functions, for brevity

function *foo(token) {
    // grab message off the top of the channel
    var value = token.messages.pop(); // 2

    // put another message onto the channel
    // `multBy20(..)` is a promise-generating function
    // that multiplies a value by `20` after some delay
    token.messages.push( yield multBy20( value ) );

    // transfer control
    yield token;

    // a final message from the CSP run
    yield "meaning of life: " + token.messages[0];
}

function *bar(token) {
    // grab message off the top of the channel
    var value = token.messages.pop(); // 40

    // put another message onto the channel
    // `addTo2(..)` is a promise-generating function
    // that adds value to `2` after some delay
    token.messages.push( yield addTo2( value ) );

    // transfer control
    yield token;
}

OK, so there’s our two generator “processes”, *foo() and *bar(). You’ll notice both of them are handed the token object (you could call it whatever you want, of course). The messages property on the token is our shared message channel. It starts out filled with the message(s) passed to it from the initialization of our CSP run (see below).

yield token explicitly transfers control to the “next” generator (round-robin order). However, yield multBy20(value) and yield addTo2(value) are both yielding promises (from these fictional delayed-math functions), which means that the generator is paused at that moment until the promise completes. Upon promise resolution, the currently-in-control generator picks back up and keeps going.

Whatever the final value yielded is (in this case, the yield "meaning of life: ..." expression statement), that’s the completion message of our CSP run (see below).

Now that we have our two CSP process generators, how do we run them? Using asynquence:

// start out a sequence with the initial message value of `2`
ASQ( 2 )

// run the two CSP processes paired together
.runner(
    foo,
    bar
)
// whatever message we get out, pass it onto the next
// step in our sequence
.val( function(msg){
    console.log( msg ); // "meaning of life: 42"
} );

Obviously, this is a trivial example. But I think it illustrates the concepts pretty well.

Now might be a good time to go try it yourself (try changing the values around!) to make sure these concepts make sense and that you can code it up yourself!
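If you’d like to poke at the mechanics without pulling in asynquence, here’s a stand-alone sketch of such a round-robin runner, with stand-ins for the fictional delayed-math functions. To be clear, this is an illustrative approximation of the idea, not asynquence’s actual runner(..) implementation:

```javascript
// Yielding the shared token transfers control to the next generator;
// yielding a promise pauses the CURRENT generator without giving up control.
function runProcesses(gens, initialMessages) {
    var token = { messages: initialMessages.slice() };
    var iterators = gens.map( function(gen){ return gen( token ); } );
    var turn = 0;

    return new Promise( function(resolve){
        (function step(sentValue){
            var res = iterators[turn].next( sentValue );

            if (res.value === token) {
                // control-token yielded: round-robin to the next process
                turn = (turn + 1) % iterators.length;
                step();
            }
            else if (res.value && typeof res.value.then === "function") {
                // promise yielded: wait, then resume the SAME process
                res.value.then( step );
            }
            else {
                // any other value: treat it as the completion message
                resolve( res.value );
            }
        })();
    } );
}

// stand-ins for the fictional async-math functions
function multBy20(v) {
    return new Promise( function(res){ setTimeout( function(){ res( v * 20 ); }, 50 ); } );
}
function addTo2(v) {
    return new Promise( function(res){ setTimeout( function(){ res( v + 2 ); }, 50 ); } );
}

function *foo(token) {
    var value = token.messages.pop();               // 2
    token.messages.push( yield multBy20( value ) ); // 40
    yield token;                                    // transfer control
    yield "meaning of life: " + token.messages[0];
}

function *bar(token) {
    var value = token.messages.pop();               // 40
    token.messages.push( yield addTo2( value ) );   // 42
    yield token;                                    // transfer control
}

runProcesses( [foo, bar], [2] ).then( function(msg){
    console.log( msg ); // "meaning of life: 42"
} );
```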

Another Toy Demo Example

Let’s now examine one of the classic CSP examples, but let’s come at it from the simple observations I’ve made thus far, rather than from the academic-purist perspective it’s usually derived from.

Ping-pong. What a fun game, huh!? It’s my favorite sport.

Let’s imagine you have implemented code that plays a ping-pong game. You have a loop that runs the game, and you have two pieces of code (for instance, branches in an if or switch statement) that each represent the respective player.

Your code works fine, and your game runs like a ping-pong champ!

But what did I observe above about why CSP is useful? Separation of concerns/capabilities. What are our separate capabilities in the ping-pong game? The two players!

So, we could, at a very high level, model our game with two “processes” (generators), one for each player. As we get into the details of it, we will realize that the “glue code” that’s shuffling control between the two players is a task in and of itself, and this code could be in a third generator, which we could model as the game referee.

We’re gonna skip over all kinds of domain-specific questions, like scoring, game mechanics, physics, game strategy, AI, controls, etc. The only part we care about here is really just simulating the back-and-forth pinging (which is actually our metaphor for CSP control-transfer).

Wanna see the demo? Run it now (note: use a very recent nightly of FF or Chrome, with ES6 JavaScript support, to see generators work)

Now, let’s look at the code piece by piece.

First, what does the asynquence sequence look like?

ASQ(
    ["ping","pong"], // player names
    { hits: 0 }      // the ball
)
// run the three CSP processes paired together
.runner(
    referee,
    player,
    player
)
.val( function(msg){
    message( "referee", msg );
} );

We set up our sequence with two initial messages: ["ping","pong"] and { hits: 0 }. We’ll get to those in a moment.

Then, we set up a CSP run of 3 processes (coroutines): the *referee() and two *player() instances.

The final message at the end of the game is passed along to the next step in our sequence, which we then output as a message from the referee.

The implementation of the referee:

function *referee(table){
    var alarm = false;

    // referee sets an alarm timer for the game on
    // his stopwatch (10 seconds)
    setTimeout( function(){ alarm = true; }, 10000 );

    // keep the game going until the stopwatch
    // alarm sounds
    while (!alarm) {
        // let the players keep playing
        yield table;
    }

    // signal to players that the game is over
    table.messages[2] = "CLOSED";

    // what does the referee say?
    yield "Time's up!";
}

I’ve called the control-token table to match the problem domain (a ping-pong game). It’s a nice semantic that a player “yields the table” to the other when he hits the ball back, isn’t it?

The while loop in *referee() just keeps yielding the table back to the players as long as his alarm on his stopwatch hasn’t gone off. When it does, he takes over and declares the game over with "Time's up!".

Now, let’s look at the *player() generator (which we use two instances of):

function *player(table) {
    var name = table.messages[0].shift();
    var ball = table.messages[1];

    while (table.messages[2] !== "CLOSED") {
        // hit the ball
        ball.hits++;
        message( name, ball.hits );

        // artificial delay as ball goes back to other player
        yield ASQ.after( 500 );

        // game still going?
        if (table.messages[2] !== "CLOSED") {
            // ball's now back in other player's court
            yield table;
        }
    }

    message( name, "Game over!" );
}

The first player takes his name off the first message’s array ("ping"), then the second player takes his name ("pong"), so they can both identify themselves properly. Both players also keep a reference to the shared ball object (with its hits counter).

While the players haven’t yet heard the closing message from the referee, they “hit” the ball by upping its hits counter (and outputting a message to announce it), then they wait for 500 ms (just to fake the ball not traveling at the speed of light!).

If the game is still going, they then “yield the table” back to the other player.

That’s it!

Take a look at the demo’s code to get a complete in-context code listing to see all the pieces working together.

State Machine: Generator Coroutines

One last example: defining a state machine as a set of generator coroutines that are driven by a simple helper.

Demo (note: use a very recent nightly of FF or Chrome, with ES6 JavaScript support, to see generators work)

First, let’s define a helper for controlling our finite state handlers:

function state(val,handler) {
    // make a coroutine handler (wrapper) for this state
    return function*(token) {
        // state transition handler
        function transition(to) {
            token.messages[0] = to;
        }

        // default initial state (if none set yet)
        if (token.messages.length < 1) {
            token.messages[0] = val;
        }

        // keep going until final state (false) is reached
        while (token.messages[0] !== false) {
            // current state matches this handler?
            if (token.messages[0] === val) {
                // delegate to state handler
                yield *handler( transition );
            }

            // transfer control to another state handler?
            if (token.messages[0] !== false) {
                yield token;
            }
        }
    };
}
This state(..) helper utility creates a delegating-generator wrapper for a specific state value, which automatically runs the state machine, and transfers control at each state transition.

Purely by convention, I’ve decided the shared token.messages[0] slot will hold the current state of our state machine. That means you can seed the initial state by passing in a message from the previous sequence step. But if no such initial message is passed along, we simply default to the first defined state as our initial state. Also, by convention, the final terminal state is assumed to be false. That’s easy to change as you see fit.

State values can be whatever sort of value you’d like: numbers, strings, etc. As long as the value can be strict-tested for equality with a ===, you can use it for your states.

In the following example, I show a state machine that transitions between four number value states, in this particular order: 1 -> 4 -> 3 -> 2. For demo purposes only, it also uses a counter so that it can perform the transition loop more than once. When our generator state machine finally reaches the terminal state (false), the asynquence sequence moves onto the next step, just as you’d expect.

// counter (for demo purposes only)
var counter = 0;

ASQ( /* optional: initial state value */ )

// run our state machine, transitions: 1 -> 4 -> 3 -> 2
.runner(
    // state `1` handler
    state( 1, function*(transition){
        console.log( "in state 1" );
        yield ASQ.after( 1000 ); // pause state for 1s
        yield transition( 4 ); // goto state `4`
    } ),

    // state `2` handler
    state( 2, function*(transition){
        console.log( "in state 2" );
        yield ASQ.after( 1000 ); // pause state for 1s

        // for demo purposes only, keep going in a
        // state loop?
        if (++counter < 2) {
            yield transition( 1 ); // goto state `1`
        }
        // all done!
        else {
            yield "That's all folks!";
            yield transition( false ); // goto terminal state
        }
    } ),

    // state `3` handler
    state( 3, function*(transition){
        console.log( "in state 3" );
        yield ASQ.after( 1000 ); // pause state for 1s
        yield transition( 2 ); // goto state `2`
    } ),

    // state `4` handler
    state( 4, function*(transition){
        console.log( "in state 4" );
        yield ASQ.after( 1000 ); // pause state for 1s
        yield transition( 3 ); // goto state `3`
    } )
)

// state machine complete, so move on
.val( function(msg){
    console.log( msg );
} );

Should be fairly easy to trace what’s going on here.

yield ASQ.after(1000) shows these generators can do any sort of promise/sequence based async work as necessary, as we’ve seen earlier. yield transition(..) is how we transition to a new state.

Our state(..) helper above actually does the hard work of handling the yield* delegation and transition juggling, leaving our state handlers to be expressed in a very simple and natural fashion.
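If yield* delegation is new to you, here it is in isolation: the outer generator transparently forwards control to the inner one until it returns, and the inner generator’s return value becomes the result of the yield* expression.

```javascript
function *inner() {
    var x = yield "from inner";   // receives whatever next(..) sends in
    return x * 2;                 // becomes the result of `yield *inner()`
}

function *outer() {
    var result = yield *inner();  // delegate: inner's yields surface here
    yield "inner gave back: " + result;
}

var it = outer();
console.log( it.next().value );     // "from inner"
console.log( it.next( 21 ).value ); // "inner gave back: 42"
```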


Summary

The key to CSP is joining two or more generator “processes” together, giving them a shared communication channel, and a way to transfer control between each other.

There are a number of libraries that have more-or-less taken a fairly formal approach in JS that matches Go and Clojure/ClojureScript APIs and/or semantics. All of these libraries have really smart developers behind them, and they all represent great resources for further investigation/exploration.

asynquence tries to take a somewhat less-formal approach while hopefully still preserving the main mechanics. If nothing else, asynquence’s runner(..) makes it pretty easy to start playing around with CSP-like generators as you experiment and learn.

The best part though is that asynquence CSP works inline with the rest of its other async capabilities (promises, generators, flow control, etc). That way, you get the best of all worlds, and you can use whichever tools are appropriate for the task at hand, all in one small lib.

Now that we’ve explored generators in quite a bit of detail over these last four posts, my hope is that you’re excited and inspired to explore how you can revolutionize your own async JS code! What will you build with generators?

Read the full article at: Getting Concurrent With ES6 Generators


Author: "Kyle Simpson" Tags: "Guest Blogger, JavaScript"
Date: Monday, 11 Aug 2014 13:40

When it comes to building web apps, there are a couple of tools available that help you develop faster. There’s GruntJS, GulpJS, Brunch and others which streamline your workflow by doing a series of build tasks:

  • Test the code
  • Clean the build directory
  • Copy source files to the build folder
  • Do some magic tricks on your copied files, such as replacing variable names.
  • Compile Less or Sass files
  • Dynamically generate script tags for your index.html
  • Run a webserver to test out your app locally
  • Watch for code changes and re-build


These tools do an outstanding job of helping you develop your web app faster. Huzzah!
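As a rough illustration, a minimal Gruntfile wiring up a few of the steps above might look like this. The plugin and task names are the common grunt-contrib ones; treat the specifics as an assumption, not a prescription:

```javascript
// Illustrative Gruntfile covering: clean, copy, local webserver, watch.
module.exports = function(grunt) {
    grunt.initConfig({
        // clean the build directory
        clean: { build: ["dist/"] },

        // copy source files to the build folder
        copy: { build: { src: ["src/**"], dest: "dist/" } },

        // run a webserver to test out your app locally
        connect: { dev: { options: { port: 8000, base: "dist/" } } },

        // watch for code changes and re-build
        watch: { all: { files: ["src/**"], tasks: ["build"] } }
    });

    grunt.loadNpmTasks("grunt-contrib-clean");
    grunt.loadNpmTasks("grunt-contrib-copy");
    grunt.loadNpmTasks("grunt-contrib-connect");
    grunt.loadNpmTasks("grunt-contrib-watch");

    grunt.registerTask("build", ["clean", "copy"]);
    grunt.registerTask("dev", ["build", "connect", "watch"]);
};
```

Every project ends up with some variation of a file like this, which is exactly the duplication problem discussed next.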

Let’s Build Another App!

Once you have finished your app and have started on a new project, you again would like to have a good build configuration. You have optimised your last app’s build config so it builds as efficiently as possible, and it’s got some cool gimmicks like that AWS S3 deploy task which you spent a couple of hours on last weekend.

Obviously, you want to reap the fruits of your hard labor and use those new and optimised build tasks in your new app as well. What to do now? There are a couple of ways.

Duplicating the Old App

You could just copy-paste your old app folder, rename it, and start working. The problem comes when you improve your build setup even further. By now, there are likely newer and faster build tasks available, so you eagerly start implementing those in your new app. And wow, now there’s a soft CSS refresh feature in the new app!

A few days later you need to bring an update to your old app. You painfully notice some cool features are missing in the old app’s build config. Like that soft CSS refresh and the numerous performance updates you’ve made. What now?


Yeoman

One solution to the problem is Yeoman, a scaffolding tool. It generates your build config by asking questions, every time you make a new app. On its website you can find plenty of generators which include web frameworks and build tasks that have been set up for you. These generators are maintained by many people and you will reap the benefits of their optimisations when you generate a new app.


Generators aren’t perfect however. When they are updated to include new tools and optimisations, you are stuck with your old build config. You can’t simply update without generating and answering those scaffolding questions again. In addition, it is likely that your ideal build config requires changing or adding tasks such as the AWS S3 deploy which you need for your particular client.


The problem is that at the end of the day, you are again duplicating logic. When you have several apps, it is very likely that the build steps are similar if not identical. If you want to change something there or add a cool new build feature to many apps, you’re out of luck.

Don’t Repeat Yourself

A build config is just like any other code. You should not repeat yourself and you want to re-use your build config across different apps. What if there was a way to use one build configuration for all your apps?

Introducing Angus

Amid growing frustration with the state of things, I decided to make a generic and pre-configured build framework called Angus.


Angus is a pre-configured build framework that you simply clone as a git repository. Inside this repo, you build your apps inside an apps/ folder which gets ignored by the Angus repo. For each app, you can define which libraries and build steps you would like to use. Every build task is already configured to work with other tasks.

The framework uses GruntJS to do all build steps. The cool thing is that you don’t need to configure anything; you just tell Angus which tasks you’d like to enable per app.

Project Structure

angus/   <-- angus repository
    grunt/   <-- generic build tasks
    apps/    <-- ignored by the angus repository
        my-first-app/     <-- app sub repository
        my-second-app/    <-- app sub repository

Apps Inside!

Unknown to many, Git repositories can actually exist within each other without using rocket science like submodules. Inside Angus, the apps/ folder gets ignored by git. You can safely create sub-folders inside apps/ which have their own repositories! To do so, simply create a folder inside the apps/ folder and run git init.

Given this structure, you can develop as many apps as you like without having to generate or adjust build configurations.

Configuring Each App

Every app inside Angus has its own configuration file, config.js. In this file, you can define Bower libraries and tell Angus which files from Bower you actually need. When including Bootstrap for instance, you may only really need a couple of .scss files.

Example config file (the package and file names below are illustrative):

module.exports = {
    // Bower packages this app depends on
    packages: [
        'angular',
        'bootstrap-sass-official'
    ],

    // A list of files which this app will actually use from the bower packages above.
    // Angus will look inside bower_components/ for these files.
    libIncludes: {
        js: [
            'angular/angular.js'
        ],
        scss: [
            // Core variables and mixins
            'bootstrap-sass-official/assets/stylesheets/_bootstrap.scss'
        ]
    }
};

Running the App

Simply run grunt dev, and Angus takes care of the rest. By default it will launch the hello-world application, but you can pass the --app=your-app parameter or change the config.json file in the root Angus folder.

Angus will automatically install Bower packages, auto-include libraries and serve your app. It comes with pushState support (see http://diveintohtml5.info/history.html), auto-refresh on code changes and soft CSS refresh on CSS changes.


Angus also includes a grunt prod command, which takes care of minification, uglifying and concatenating. The output of your files will be inside the dist/prod/ folder. You can even deploy directly to Amazon S3 with one command.

Additional Tasks

You can easily enable additional tasks you would like your app to execute. If you’re running AngularJS, chances are you’ll want to use common build tasks specific to AngularJS, such as template minifying, constants generation and the ngmin library.

The cool thing is, these tasks are already preconfigured! You just need to enable them as follows in your config.js file:

// In addition to the default task list (core/defaultTasks.js), also execute these
gruntTasksAdd: [
    // list the extra task names to enable for this app here
]

The Future

Angus is still a very fresh project, and I encourage you to help out by checking out the source code and sending pull requests. In the future, we may even switch to newer tools such as GulpJS or Brunch, but with the same philosophy. Don’t repeat yourself!

I hope I have given you fresh insights into the build process of web apps, and how Angus can increase your productivity. Give it a try and let me know what you think!

Read the full article at: Building Web Apps Faster Using Angus


Author: "Nick Janssen" Tags: "Guest Blogger, JavaScript"
Date: Thursday, 07 Aug 2014 22:16

Velocity Conference

A few months back, O’Reilly gave me two free tickets to give away for Velocity Conference in Santa Clara.  The chosen two reported back to me that the conference was incredible, as did a Mozilla colleague who quickly came back and implemented a bunch of speed updates for the Mozilla Marketplace.  Well, you’re all in luck: O’Reilly has given me four (!) tickets to give away over the weeks leading up to the next Velocity Conference in New York from September 15–17.

You know the deal — I can’t just, you know, give you a ticket for a lame comment.  In the interest of teaching something through this giveaway, I’d like you to post a link to an article you’ve read recently that taught you something brilliant for the front end.  CSS, JavaScript, HTML5, WebGL, Canvas, whatever — share a link to an article that others will learn from.

If you don’t want to risk not winning and simply want to save some cash on the conference, click here and use promo code AFF20.  Good luck!

As always, remember to provide a real email address in case you’re the lucky winner.

Read the full article at: Velocity New York: Ticket Giveaway


Author: "David Walsh" Tags: "Giveaways"
Date: Thursday, 07 Aug 2014 15:24

As a website designer or developer, you soon learn to appreciate how important it is to have a ready source to fill your needs for stock photography. For most large projects where images or illustrations are required, the use of stock photographs can be the only practical option.

There are numerous providers of these types of images, most of whom have a large selection from which to choose. Prices may vary but there is usually a large selection available in any given price range. Many images are quite affordable and some are free. You can hire a professional photographer or create your own images but you will usually save both time and expense by taking the stock image approach.

Perhaps you are already familiar with one or more stock photography merchants. There might be a particular merchant that you turn to time and again. I have my own favorites, each one capable of providing high quality images.


iStock

In terms of popularity and longevity, iStock leads the pack of stock photography providers. iStock is in fact the oldest merchant of its type. You not only have a huge selection of royalty-free images to choose among, but there is a very large selection of sound effects, videos, music and vectors to choose from as well, and the selections you purchase come with the appropriate legal guarantees.

There are usually a few special offers available. Currently, if you are a new client placing an order and type in the code SUMMER20 you will receive a 20 percent discount. The discount may be applied to any item in iStock’s product line. To receive the discount you must make a purchase using a minimum of 30 credits. Credits may be purchased ahead of time. Even if you are not yet considering making a purchase, it would serve you well to sign up anyway so you can take advantage of their huge inventory at some time in the future.


Alamy

Alamy is my second-favorite merchant due in large part to its truly massive collection of quality stock photos, images and more. You can choose among a mind-numbing 18 million items. The chances of your finding exactly what you need are therefore very good, although you may at times face a number of equally attractive products to choose from.

Working with such a large inventory will not pose a problem however since one of the attractive features of the Alamy website is its user-friendly search engine. As a new customer you could find the home page a little difficult to navigate, but once you click on the “For Buyers” button, you’re off and running.


Stockfresh

When you’re searching through a vast amount of data, having a friendly user interface to work with is important. This is the case with Stockfresh, where you will spend a minimal amount of time finding what you are looking for. While the number of options this merchant has is somewhat smaller than what others offer, each offering is of the very highest quality.

You don’t have to register to make purchases and the ease with which you can shop makes doing business with this merchant a pleasure. If there is a mediocre product to be found, you’ll have to look very hard for it, likely without success.

PhotoSpin, Inc.

PhotoSpin, Inc. has not been in business as long as some other merchants such as iStock, but this merchant is still one of the pioneers in the online stock photography business. PhotoSpin was the first of its kind to introduce an image subscription program. This program has changed over the years, usually improving somewhat with each change. As a subscriber, you can access up to 50 downloads at a time and up to 1,000 downloads every month. Such a subscription is a handy thing to have if you normally work with a large number of images.

PhotoSpin has an affiliate program and they will provide you with banners and contextual links to place on your website or blog. You can then refer visitors to PhotoSpin and receive a 15 percent commission on any purchases they may make. This is a higher commission than most stock image providers offer and is in fact higher than those given by most affiliate programs in almost any marketing sector.

Media Bakery

During its decade of doing business, Media Bakery has managed to build up a large and impressive collection of stock items, including photos, music files, videos and vectors. The total number of items in their collection currently exceeds 10 million.

Approximately 20 percent of these items have made their way into what Media Bakery calls their Microstock Image Collection, a collection that contains a large number of highly affordable items. This merchant’s sophisticated search feature makes browsing through this large collection a relatively easy task. Most of the items sell for under $50. Many sell for around $1. Media Bakery is one of my top five favorites, largely because of the ease with which I am able to navigate through their collection and the fact that shopping has proven to be a painless experience. 


Photofolio

One of the truly fine features of this merchant is the way it treats its providers. The company’s business model is such that the photographers, artists and designers that supply the products you will find in the Photofolio inventory are treated fairly and given commissions commensurate with the quality of the products they provide.

These commissions are often as high as 60 percent of the eventual sales price. Commissions of this magnitude are somewhat of a rarity in this highly competitive field. When a merchant places such a high value on the providers of the product it sells it should not come as a surprise that a high value is placed on customer satisfaction as well. I have found that in my dealings with Photofolio this respect for the customer is apparent and I am sure you will find that to be the case as well.


Mostphotos

With Mostphotos as your source, you’ll feel as if you’ve found nirvana, or have at least reached cloud nine on the way. This merchant has assembled a collection of over 7 million images, and with new material being added every day, that number is constantly growing.

Not only are the offerings royalty free, but they are crisp and professional looking in every detail. If you don’t see exactly what you want and can afford to wait, give it a day or two, as there will likely be several thousand new additions to the collection in that time. Mostphotos is an ideal source for the graphic designer, blog or website developer, or the art director.


Snapwire

Snapwire’s approach to the stock photography business is different than most business models in this area. When you’re looking for a special photo or illustration, especially the latter, Snapwire will forward your request to one or more freelance artists who will then create your custom image.

What this means is that you are able to get exactly what you want, rather than settle for “close enough”. What is happening is that you are hiring a professional through a second party, and at far less expense than would usually be the case.

Corbis Images

Shopping at Corbis Images can be a real revelation. Here, you’ll find one top-notch photographic image after another. All of these images are available royalty free and can be used in both commercial and editorial projects. Many of the illustrations offered are truly fantastic. The Corbis collection is arranged in a number of specialty categories that serve to make finding what you are after a less challenging task.


Shutterstock

Its collection of available stock photography and images will speak for itself, but what is truly impressive about Shutterstock, and why it appears on this list, is its global presence. This stock photography provider is based in New York and has offices in 150 locations around the globe. You’ll get a good sense of this merchant’s global reach if you visit the website, which functions in 20 different languages. Many professional artists and designers have a decided preference for Shutterstock as a source because of the company’s impressive business model.

In Conclusion

These are, at the moment, my ten favorite places to visit when looking for high-quality stock photos in an affordable price range. Each of these merchants subscribes to a more or less unique business model, but each one has its fair share of satisfied customers as well as satisfied suppliers. Perhaps one or more of these is a favorite of yours as well. If you do business with a stock photo merchant you feel should be included on this list, please let me know.


Read the full article at: The Best Summer Stock Image Sources are Here. Check Them Out. (Sponsored)


Author: "David Walsh" Tags: "Sponsored"
Date: Tuesday, 05 Aug 2014 13:49

The awesome part of RSS is that it lets you pull content wherever you want.  The bad part, as a publisher, is that the user may be missing out on important information that is on the site but doesn’t display in articles.  WordPress’ hook system to the rescue!


We’re going to hook onto the the_content and the_excerpt_rss filter hooks to append or prepend content to feed entries:

// Additional RSS Content
$rss_more_content = 'blah blah blah';
$rss_more_position = 'before'; // or "after"

// Function which adds content to RSS entries
function add_rss_content($content) {
	global $rss_more_position, $rss_more_content;

	if(is_feed()) {
		if($rss_more_position == 'before') {
			$content = $rss_more_content."\n".$content;
		}
		else {
			$content .= $rss_more_content."\n";
		}
	}
	return $content;
}

// Add hooks
add_filter('the_content', 'add_rss_content');
add_filter('the_excerpt_rss', 'add_rss_content');

The only conditional is whether you want the content added at the top or bottom of the content block.  This function should go into your functions.php file, but how you pull the additional content in is up to you!

Read the full article at: Append and Prepend to WordPress RSS Feed Content


Author: "David Walsh" Tags: "PHP, Quick Tips, WordPress"
Date: Monday, 04 Aug 2014 01:16

Now that you’ve seen ES6 generators and are more comfortable with them, it’s time to really put them to use for improving our real-world code.

The main strength of generators is that they provide a single-threaded, synchronous-looking code style, while allowing you to hide the asynchronicity away as an implementation detail. This lets us express in a very natural way what the flow of our program’s steps/statements is without simultaneously having to navigate asynchronous syntax and gotchas.

In other words, we achieve a nice separation of capabilities/concerns, by splitting up the consumption of values (our generator logic) from the implementation detail of asynchronicity fulfilling those values (the generator iterator).

The result? All the power of asynchronous code, with all the ease of reading and maintainability of synchronous(-looking) code.

So how do we accomplish this feat?

Simplest Async

At their simplest, generators don’t need anything extra to handle async capabilities that your program doesn’t already have.

For example, let’s imagine you have this code already:

function makeAjaxCall(url,cb) {
    // do some ajax fun
    // call `cb(result)` when complete
}

makeAjaxCall( "http://some.url.1", function(result1){
    var data = JSON.parse( result1 );

    makeAjaxCall( "http://some.url.2/?id=" + data.id, function(result2){
        var resp = JSON.parse( result2 );
        console.log( "The value you asked for: " + resp.value );
    } );
} );

To use a generator (without any additional decoration) to express this same program, here’s how you do it:

function request(url) {
    // this is where we're hiding the asynchronicity,
    // away from the main code of our generator
    // `it.next(..)` is the generator's iterator-resume
    // call
    makeAjaxCall( url, function(response){
        it.next( response );
    } );
    // Note: nothing returned here!

function *main() {
    var result1 = yield request( "http://some.url.1" );
    var data = JSON.parse( result1 );

    var result2 = yield request( "http://some.url.2?id=" + data.id );
    var resp = JSON.parse( result2 );
    console.log( "The value you asked for: " + resp.value );
}

var it = main();
it.next(); // get it all started

Let’s examine how this works.

The request(..) helper basically wraps our normal makeAjaxCall(..) utility to make sure its callback invokes the generator iterator’s next(..) method.

With the request("..") call, you’ll notice it has no return value (in other words, it’s undefined). This is no big deal, but it’s something important to contrast with how we approach things later in this article: we effectively yield undefined here.

So then we call yield .. (with that undefined value), which essentially does nothing but pause our generator at that point. It’s going to wait until the it.next(..) call is made to resume, which we’ve queued up (as the callback) to happen after our Ajax call finishes.

But what happens to the result of the yield .. expression? We assign that to the variable result1. How does that have the result of the first Ajax call in it?

Because when it.next(..) is called as the Ajax callback, it’s passing the Ajax response to it, which means that value is getting sent back into our generator at the point where it’s currently paused, which is in the middle of the result1 = yield .. statement!

That’s really cool and super powerful. In essence, result1 = yield request(..) is asking for the value, but it’s (almost!) completely hidden from us — at least us not needing to worry about it here — that the implementation under the covers causes this step to be asynchronous. It accomplishes that asynchronicity by hiding the pause capability in yield, and separating out the resume capability of the generator to another function, so that our main code is just making a synchronous(-looking) value request.
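That pause/resume value hand-off can be seen in miniature with a purely synchronous stand-in (no Ajax here — the generator and strings are made up just for illustration):

```javascript
// A value passed to `it.next(..)` becomes the result of the
// paused `yield ..` expression inside the generator.
function *demo() {
    var result = yield "request sent";
    return result.toUpperCase();
}

var it = demo();

var first = it.next();            // runs to the `yield`, then pauses
console.log( first.value );       // "request sent"

var done = it.next( "response" ); // "response" becomes `result`
console.log( done.value );        // "RESPONSE"
```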

The exact same goes for the second result2 = yield request(..) statement: it transparently pauses & resumes, and gives us the value we asked for, all without bothering us about any details of asynchronicity at that point in our coding.

Of course, yield is present, so there is a subtle hint that something magical (aka async) may occur at that point. But yield is a pretty minor syntactic signal/overhead compared to the hellish nightmares of nested callbacks (or even the API overhead of promise chains!).

Notice also that I said “may occur”. That’s a pretty powerful thing in and of itself. The program above always makes an async Ajax call, but what if it didn’t? What if we later changed our program to have an in-memory cache of previous (or prefetched) Ajax responses? Or some other complexity in our application’s URL router could in some cases fulfill an Ajax request right away, without needing to actually go fetch it from a server?

We could change the implementation of request(..) to something like this:

var cache = {};

function request(url) {
    if (cache[url]) {
        // "defer" cached response long enough for current
        // execution thread to complete
        setTimeout( function(){
            it.next( cache[url] );
        }, 0 );
    }
    else {
        makeAjaxCall( url, function(resp){
            cache[url] = resp;
            it.next( resp );
        } );
    }
}
Note: A subtle, tricky detail here is the need for the setTimeout(..0) deferral in the case where the cache has the result already. If we had just called it.next(..) right away, it would have created an error, because (and this is the tricky part) the generator is not technically in a paused state yet. Our function call request(..) is being fully evaluated first, and then the yield pauses. So, we can’t call it.next(..) again yet immediately inside request(..), because at that exact moment the generator is still running (yield hasn’t been processed). But we can call it.next(..) “later”, immediately after the current thread of execution is complete, which our setTimeout(..0) “hack” accomplishes. We’ll have a much nicer answer for this down below.
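Here's a small hedged sketch of that tricky detail (a hypothetical gen/start pair standing in for main/request): calling it.next(..) synchronously from inside the yielded expression throws, while deferring it works:

```javascript
function *gen() {
    var v = yield start();
    console.log( "resumed with: " + v ); // resumed with: 42
}

var threwEarly;
function start() {
    try {
        it.next( 42 ); // generator hasn't actually paused yet!
    }
    catch (err) {
        // TypeError: "Generator is already running"
        threwEarly = (err instanceof TypeError);
    }

    // deferred: by the time this runs, `yield` has paused the generator
    setTimeout( function(){ it.next( 42 ); }, 0 );
}

var it = gen();
it.next();
```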

Now, our main generator code still looks like:

var result1 = yield request( "http://some.url.1" );
var data = JSON.parse( result1 );

See!? Our generator logic (aka our flow control) didn’t have to change at all from the non-cache-enabled version above.

The code in *main() still just asks for a value, and pauses until it gets it back before moving on. In our current scenario, that “pause” could be relatively long (an actual server request, taking perhaps 300-800ms) or it could be almost immediate (the setTimeout(..0) deferral hack). But our flow control doesn’t care.

That’s the real power of abstracting away asynchronicity as an implementation detail.

Better Async

The above approach works fine for simple async generator scenarios. But it will quickly become limiting, so we’ll need a more powerful async mechanism to pair with our generators, one that’s capable of handling a lot more of the heavy lifting. That mechanism? Promises.

If you’re still a little fuzzy on ES6 Promises, I wrote an extensive 5-part blog post series all about them. Go take a read. I’ll wait for you to come back. <chuckle, chuckle>. Subtle, corny async jokes ftw!

The earlier Ajax code examples here suffer from all the same Inversion of Control issues (aka “callback hell”) as our initial nested-callback example. Some observations of where things are lacking for us so far:

  1. There’s no clear path for error handling. As we learned in the previous post, we could have detected an error with the Ajax call (somehow), passed it back to our generator with it.throw(..), and then used try..catch in our generator logic to handle it. But that’s just more manual work to wire up in the “back-end” (the code handling our generator iterator), and it may not be code we can re-use if we’re doing lots of generators in our program.
  2. If the makeAjaxCall(..) utility isn’t under our control, and it happens to call the callback multiple times, or signal both success and error simultaneously, etc, then our generator will go haywire (uncaught errors, unexpected values, etc). Handling and preventing such issues is lots of repetitive manual work, also possibly not portable.
  3. Often times we need to do more than one task “in parallel” (like two simultaneous Ajax calls, for instance). Since generator yield statements are each a single pause point, two or more cannot run at the same time — they have to run one-at-a-time, in order. So, it’s not very clear how to fire off multiple tasks at a single generator yield point, without wiring up lots of manual code under the covers.

As you can see, all of these problems are solvable, but who really wants to reinvent these solutions every time? We need a more powerful pattern that’s designed specifically as a trustable, reusable solution for our generator-based async coding.

That pattern? yielding out promises, and letting them resume the generator when they fulfill.

Recall above that we did yield request(..), and that the request(..) utility didn’t have any return value, so it was effectively just yield undefined?

Let’s switch that around a little bit. Let’s remake our request(..) utility to be promises-based, so that what we yield out is actually a promise. And then we’ll make a utility that receives those yielded promises and wires them up to resume the generator.

function request(url) {
    // Note: returning a promise now!
    return new Promise( function(resolve,reject){
        makeAjaxCall( url, resolve );
    } );
}

request(..) now constructs a promise that will be resolved when the Ajax call finishes, and we return that promise. So, that returned promise is going to be yielded out. What next?

Let’s make a wrapper/helper for our generator that can handle these details for us. I’ll call it runGenerator(..) for now:

// run (async) a generator to completion
// Note: simplified approach: no error handling here
function runGenerator(g) {
    var it = g(), ret;

    // asynchronously iterate over generator
    (function iterate(val){
        ret = it.next( val );

        if (!ret.done) {
            // poor man's "is it a promise?" test
            if ("then" in ret.value) {
                // wait on the promise
                ret.value.then( iterate );
            }
            // immediate value: just send right back in
            else {
                // avoid recursion
                setTimeout( function(){
                    iterate( ret.value );
                }, 0 );
            }
        }
    })();
}
Key things to notice:

  1. We automatically initialize the generator (creating its it iterator), and we asynchronously will run it to completion (done:true).
  2. We look for a promise to be yielded out (aka the return value from each it.next(..) call). If so, we wait for it to complete by registering then(..) on the promise.
  3. If any immediate (aka non-promise) value is yielded out, we simply send that value back into the generator so it keeps going immediately.

Now, how do we use it?

runGenerator( function *main(){
    var result1 = yield request( "http://some.url.1" );
    var data = JSON.parse( result1 );

    var result2 = yield request( "http://some.url.2?id=" + data.id );
    var resp = JSON.parse( result2 );
    console.log( "The value you asked for: " + resp.value );
} );

Bam! Wait… that’s the exact same generator code as earlier? Yep. Again, this is the power of generators being shown off. The fact that we’re now creating promises, yielding them out, and resuming the generator on their completion — ALL OF THAT IS “HIDDEN” IMPLEMENTATION DETAIL! It’s not really hidden, it’s just separated from the consumption code (our flow control in our generator).

By waiting on the yielded out promise, and then sending its completion value back into it.next(..), the result1 = yield request(..) gets the value exactly as it did before.
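To convince ourselves the plumbing really works, here’s a self-contained sanity check — the request stub and its JSON response are made up purely for illustration — pairing a stripped-down runner with a promise that delivers a canned value:

```javascript
// fake stand-in for `request(..)`: resolves with a canned JSON string
function fakeRequest(url) {
    return Promise.resolve( '{"id":42,"value":"hello"}' );
}

// stripped-down runner: resume the generator with each fulfillment value
function runGenerator(g) {
    var it = g();

    (function iterate(val){
        var ret = it.next( val );
        if (!ret.done) {
            ret.value.then( iterate );
        }
    })();
}

var got;
runGenerator( function *main(){
    var result = yield fakeRequest( "http://some.url.1" );
    got = JSON.parse( result ).value; // "hello"
} );
```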

But now that we’re using promises for managing the async part of the generator’s code, we solve all the inversion/trust issues from callback-only coding approaches. We get all these solutions to our above issues for “free” by using generators + promises:

  1. We now have built-in error handling which is easy to wire up. We didn’t show it above in our runGenerator(..), but it’s not hard at all to listen for errors from a promise, and wire them to it.throw(..) — then we can use try..catch in our generator code to catch and handle errors.
  2. We get all the control/trustability that promises offer. No worries, no fuss.
  3. Promises have lots of powerful abstractions on top of them that automatically handle the complexities of multiple “parallel” tasks, etc.

    For example, yield Promise.all([ .. ]) would take an array of promises for “parallel” tasks, and yield out a single promise (for the generator to handle), which waits on all of the sub-promises to complete (in whichever order) before proceeding. What you’d get back from the yield expression (when the promise finishes) is an array of all the sub-promise responses, in order of how they were requested (so it’s predictable regardless of completion order).
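That ordering guarantee is easy to verify in a standalone sketch, with timers standing in for Ajax calls — the slower promise is requested first, but still lands first in the results array:

```javascript
Promise.all( [
    new Promise( function(resolve){ setTimeout( resolve, 20, "first" ); } ),
    new Promise( function(resolve){ setTimeout( resolve, 5, "second" ); } )
] )
.then( function(results){
    // results arrive in request order, not completion order
    console.log( results.join( ", " ) ); // first, second
} );
```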

First, let’s explore error handling:

// assume: `makeAjaxCall(..)` now expects an "error-first style" callback (omitted for brevity)
// assume: `runGenerator(..)` now also handles error handling (omitted for brevity)

function request(url) {
    return new Promise( function(resolve,reject){
        // pass an error-first style callback
        makeAjaxCall( url, function(err,text){
            if (err) reject( err );
            else resolve( text );
        } );
    } );
}

runGenerator( function *main(){
    try {
        var result1 = yield request( "http://some.url.1" );
    }
    catch (err) {
        console.log( "Error: " + err );
        return;
    }
    var data = JSON.parse( result1 );

    try {
        var result2 = yield request( "http://some.url.2?id=" + data.id );
    }
    catch (err) {
        console.log( "Error: " + err );
        return;
    }
    var resp = JSON.parse( result2 );
    console.log( "The value you asked for: " + resp.value );
} );

If a promise rejection (or any other kind of error/exception) happens while the URL fetching is happening, the promise rejection will be mapped to a generator error (using the — not shown — it.throw(..) in runGenerator(..)), which will be caught by the try..catch statements.
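As a hedged sketch of that omitted wiring (one possible shape, not the article’s actual implementation), the promise’s rejection handler can call it.throw(..); the demo below uses Promise.reject as a stand-in for a failed Ajax call:

```javascript
// Minimal rejection wiring (simplified: assumes the generator finishes
// after catching; a fuller version would keep iterating after `throw`)
function runGenerator(g) {
    var it = g();

    (function iterate(val){
        var ret = it.next( val );

        if (!ret.done) {
            ret.value.then(
                iterate,
                function(err){
                    // surfaces as an exception at the paused `yield`
                    it.throw( err );
                }
            );
        }
    })();
}

// demo: a rejected promise stands in for a failed Ajax call
var caught;
runGenerator( function *main(){
    try {
        yield Promise.reject( new Error("request failed") );
    }
    catch (err) {
        caught = err.message;
    }
} );
```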

Now, let’s see a more complex example that uses promises for managing even more async complexity:

function request(url) {
    return new Promise( function(resolve,reject){
        makeAjaxCall( url, resolve );
    } )
    // do some post-processing on the returned text
    .then( function(text){
        // did we just get a (redirect) URL back?
        if (/^https?:\/\/.+/.test( text )) {
            // make another sub-request to the new URL
            return request( text );
        }
        // otherwise, assume text is what we expected to get back
        else {
            return text;
        }
    } );
}

runGenerator( function *main(){
    var search_terms = yield Promise.all( [
        request( "http://some.url.1" ),
        request( "http://some.url.2" ),
        request( "http://some.url.3" )
    ] );

    var search_results = yield request(
        "http://some.url.4?search=" + search_terms.join( "+" )
    );
    var resp = JSON.parse( search_results );

    console.log( "Search results: " + resp.value );
} );

Promise.all([ .. ]) constructs a promise that’s waiting on the three sub-promises, and it’s that main promise that’s yielded out for the runGenerator(..) utility to listen to for generator resumption. The sub-promises can receive a response that looks like another URL to redirect to, and chain off another sub-request promise to the new location. To learn more about promise chaining, read this article section.

Any kind of capability/complexity that promises can handle with asynchronicity, you can gain the sync-looking code benefits by using generators that yield out promises (of promises of promises of …). It’s the best of both worlds.

runGenerator(..): Library Utility

We had to define our own runGenerator(..) utility above to enable and smooth out this generator+promise awesomeness. We even omitted (for brevity’s sake) the full implementation of such a utility, as there are more nuanced details related to error handling to deal with.

But, you don’t want to write your own runGenerator(..) do you?

I didn’t think so.

A variety of promise/async libs provide just such a utility. I won’t cover them here, but you can take a look at Q.spawn(..), the co(..) lib, etc.

I will however briefly cover my own library’s utility: asynquence‘s runner(..) plugin, as I think it offers some unique capabilities over the others out there. I wrote an in-depth 2-part blog post series on asynquence if you’re interested in learning more than the brief exploration here.

First off, asynquence provides utilities for automatically handling the “error-first style” callbacks from the above snippets:

function request(url) {
    return ASQ( function(done){
        // pass an error-first style callback
        makeAjaxCall( url, done.errfcb );
    } );
}

That’s much nicer, isn’t it!?

Next, asynquence‘s runner(..) plugin consumes a generator right in the middle of an asynquence sequence (asynchronous series of steps), so you can pass message(s) in from the preceding step, and your generator can pass message(s) out, onto the next step, and all errors automatically propagate as you’d expect:

// first call `getSomeValues()` which produces a sequence/promise,
// then chain off that sequence for more async steps
getSomeValues()

// now use a generator to process the retrieved values
.runner( function*(token){
    // token.messages will be prefilled with any messages
    // from the previous step
    var value1 = token.messages[0];
    var value2 = token.messages[1];
    var value3 = token.messages[2];

    // make all 3 Ajax requests in parallel, wait for
    // all of them to finish (in whatever order)
    // Note: `ASQ().all(..)` is like `Promise.all(..)`
    var msgs = yield ASQ().all(
        request( "http://some.url.1?v=" + value1 ),
        request( "http://some.url.2?v=" + value2 ),
        request( "http://some.url.3?v=" + value3 )
    );

    // send this message onto the next step
    yield (msgs[0] + msgs[1] + msgs[2]);
} )

// now, send the final result of previous generator
// off to another request
.seq( function(msg){
    return request( "http://some.url.4?msg=" + msg );
} )

// now we're finally all done!
.val( function(result){
    console.log( result ); // success, all done!
} )

// or, we had some error!
.or( function(err) {
    console.log( "Error: " + err );
} );

The asynquence runner(..) utility receives (optional) messages to start the generator, which come from the previous step of the sequence, and are accessible in the generator in the token.messages array.

Then, similar to what we demonstrated above with the runGenerator(..) utility, runner(..) listens for either a yielded promise or yielded asynquence sequence (in this case, an ASQ().all(..) sequence of “parallel” steps), and waits for it to complete before resuming the generator.

When the generator finishes, the final value it yields out passes along to the next step in the sequence.

Moreover, if any error happens anywhere in this sequence, even inside the generator, it will bubble out to the single or(..) error handler registered.

asynquence tries to make mixing and matching promises and generators as dead-simple as it could possibly be. You have the freedom to wire up any generator flows alongside promise-based sequence step flows, as you see fit.

ES7 async

There is a proposal for the ES7 timeline, which looks fairly likely to be accepted, to create yet another kind of function: an async function, which is like a generator that’s automatically wrapped in a utility like runGenerator(..) (or asynquence‘s runner(..)). That way, you can send out promises and the async function automatically wires them up to resume itself on completion (no need even for messing around with iterators!).

It will probably look something like this:

async function main() {
    var result1 = await request( "http://some.url.1" );
    var data = JSON.parse( result1 );

    var result2 = await request( "http://some.url.2?id=" + data.id );
    var resp = JSON.parse( result2 );
    console.log( "The value you asked for: " + resp.value );
}

main();

As you can see, an async function can be called directly (like main()), with no need for a wrapper utility like runGenerator(..) or ASQ().runner(..) to wrap it. Inside, instead of using yield, you’ll use await (another new keyword) that tells the async function to wait for the promise to complete before proceeding.

Basically, we’ll have most of the capability of library-wrapped generators, but directly supported by native syntax.
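For instance (a sketch with a stand-in promise — the syntax follows the proposal, so details may change before standardization):

```javascript
async function getValue() {
    // `await` pauses until the promise settles, like `yield` + runner
    var text = await Promise.resolve( '{"value":7}' );
    return JSON.parse( text ).value;
}

// called directly — no wrapper utility needed
getValue().then( function(v){
    console.log( "got: " + v ); // got: 7
} );
```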

Cool, huh!?

In the meantime, libraries like asynquence give us these runner utilities to make it pretty darn easy to get the most out of our asynchronous generators!

Summary

Put simply: a generator + yielded promise(s) combines the best of both worlds to get really powerful and elegant sync(-looking) async flow control expression capabilities. With simple wrapper utilities (which many libraries are already providing), we can automatically run our generators to completion, including sane and sync(-looking) error handling!

And in ES7+ land, we’ll probably see async functions that let us do that stuff even without a library utility (at least for the base cases)!

The future of async in JavaScript is bright, and only getting brighter! I gotta wear shades.

But it doesn’t end here. There’s one last horizon we want to explore:

What if you could tie 2 or more generators together, let them run independently but “in parallel”, and let them send messages back and forth as they proceed? That would be some super powerful capability, right!?! This pattern is called “CSP” (communicating sequential processes). We’ll explore and unlock the power of CSP in the next article. Keep an eye out!

Read the full article at: Going Async With ES6 Generators


Author: "Kyle Simpson" Tags: "Guest Blogger, JavaScript"
Date: Thursday, 31 Jul 2014 22:10

This article serves as a first step toward mastering SVG element animation. Included within are links to key resources for diving deeper, so bookmark this page and refer back to it throughout your journey toward SVG mastery.

An SVG element is a special type of DOM element that mimics the syntax of a standard HTML element. SVG elements have unique tags, attributes, and behaviors that allow them to define arbitrary shapes — essentially providing the ability to create an image directly within the DOM, and thereby benefit from the JavaScript- and CSS-based manipulation that DOM elements can be subjected to.

As a teaser of what we’re about to learn, check out these demos that are only possible thanks to SVG:

There are three significant benefits to creating graphics in SVG rather than using rendered images (PNG, JPEG, etc.): First, SVG compresses incredibly well; graphics defined in SVG have smaller file sizes than their PNG/JPEG equivalents. Second, SVG graphics scale to any resolution without the loss of clarity; they look razor sharp on all desktop and mobile screens. Third, you can animate the individual components of an SVG graphic at run-time (using JavaScript and CSS).

To create an SVG graphic, either design it by hand using DOM elements to represent each piece of your graphic, or use your favorite photo editor to draw arbitrary shapes then export the resulting SVG code for copy-pasting into your HTML. For a primer on exporting SVGs, read this fantastic article: Working With SVG.

SVG Animation

Neither jQuery nor CSS transitions offer complete support for the animation of SVG-specific styling properties (namely, positional and dimensional properties). Further, CSS transitions do not allow you to animate SVG elements on IE9, nor can they be used to apply transforms to SVGs on any version of IE.

Accordingly, to animate SVG elements, either use a dedicated SVG manipulation library or a JavaScript animation library that has support for SVG. The most popular dedicated SVG manipulation library is Snap.svg, and the most popular JavaScript animation library with SVG support is Velocity.js. Since Velocity.js contains extensive cross-browser SVG support, is lightweight, and should already be your weapon of choice for web animation, that’s the library we’ll be using in this article.

Velocity.js automatically detects when it’s being used to animate an SVG element then seamlessly applies SVG-specific properties without you having to modify your code in any way.

SVG Styling

SVG elements accept a few of the standard CSS properties, but not all of them. (More on this shortly.) In addition, SVGs accept a special set of “presentational” attributes, such as fill, x, and y, which also serve to define how an SVG is visually rendered. There is no functional difference between specifying an SVG style via CSS or as an attribute — the SVG spec merely divides properties amongst the two.

Here is an example of an SVG circle element next to an SVG rect element — both of which are contained inside a mandatory SVG container element (which tells the browser that what’s contained within is SVG markup instead of HTML markup). Notice how color styles are defined using CSS, but dimensional properties are defined via attributes:

<svg version="1.1" width="300" height="200" xmlns="http://www.w3.org/2000/svg">
    <circle cx="100" cy="100" r="200" style="fill: blue" />
    <rect x="100" y="100" width="200" height="200" style="fill: blue" />
</svg>

(There are other special SVG elements, such as ellipse, line, and text. For a complete listing, refer to MDN.)

There are four broad categories of SVG-specific styling properties: color, gradient, dimensional, and stroke. For a full list of SVG elements’ animatable CSS properties and presentational attributes, refer to Velocity.js’s SVG animation documentation.

The color properties consist of fill and stroke. fill is equivalent to background-color in CSS, whereas stroke is equivalent to border-color. Using Velocity, these properties are animated the same way that you animate standard CSS properties:

// Animate the SVG element to a red fill and a black stroke
$svgElement.velocity({ fill: "#ff0000", stroke: "#000000" });

// Note that the following WON'T work since these CSS properties are NOT supported by SVG:
$svgElement.velocity({ backgroundColor: "#ff0000", borderColor: "#000000" });

The gradient properties include stopColor, stopOpacity, and offset. They are used to define multi-purpose gradients that you structure via SVG markup. To learn more about SVG gradients, refer to MDN’s SVG Gradient Guide.

The dimensional properties are those that describe an SVG element’s position and size. These attributes differ slightly amongst SVG element types (e.g. rect vs. circle):

// Unlike HTML, SVG positioning is NOT defined with top/right/bottom/left, float, or margin properties
// Rectangles have their x (left) and y (top) values defined relative to their top-left corner
$("rect").velocity({ x: 100, y: 100 });

// In contrast, circles have their x and y values defined relative to their center (hence, cx and cy properties)
$("circle").velocity({ cx: 100, cy: 100 });
// Rectangles have their width and height defined the same way that DOM elements do
$("rect").velocity({ width: 200, height: 200 });

// Circles have no concept of "width" or "height"; instead, they take a radius attribute (r):
$("circle").velocity({ r: 100 });

Stroke properties are a unique set of SVG styling definitions that amount to putting the CSS border property on steroids. Two key differences from border are the ability to create custom strokes and the ability to animate a stroke’s movement. Use cases include handwriting effects, gradual reveal effects, and much more. Refer to SVG Line Animation for an overview of SVG stroke animation.

Putting it all together, here’s a complete SVG animation demo using Velocity.js:

See the Pen Velocity.js – Feature: SVG by Julian Shapiro (@julianshapiro) on CodePen.

Fork that pen and start toying around with SVG animation.

Positional Attributes vs. CSS Transforms

You may be wondering, What’s the difference between using the x/cx y/cy positional attributes instead of using CSS transforms (e.g. translateX, translateY)? The answer is browser support: IE (including IE11) does not support CSS transforms on SVG elements. As for the topic of hardware acceleration, all browsers (including IE) hardware-accelerate positional attributes by default — so, when it comes to SVG animation performance, attributes are equivalent to CSS properties.

To summarize:

// The x and y attributes work everywhere that SVG elements do (IE8+, Android 3+)
$("rect").velocity({ x: 100, y: 100 });

// Alternatively, positional transforms (such as *translateX* and *translateY*) work everywhere EXCEPT IE
$("rect").velocity({ translateX: 100, translateY: 100 });

Diving Deeper

With our introduction to SVG animation complete, head on over to MDN’s SVG guide for an in-depth walkthrough of every aspect of working with SVGs.

Read the full article at: The Simple Intro to SVG Animation


Author: "Julian Shapiro" Tags: "Canvas & SVG, CSS Animations, Demos, Gue..."
Date: Thursday, 31 Jul 2014 13:17

I’ve put a great amount of effort into making sure the comment system on this blog is fast and feature-filled.  The comment system is AJAX-based so you don’t need to worry about page refreshes.  You can also post links to GitHub gists, CodePen pens, and JSFiddle fiddles and see them rendered within the comment.  Those tasks I accomplish after a comment has been registered in the system.  But what if you want to modify comment content before it is processed, and subsequently marked as SPAM or scrubbed?  That’s super easy with WordPress hooks!


The preprocess_comment hook allows us to get at the comment data before it is processed.  Here is how I use this hook, wrapping `text` strings in <code> elements and encoding angle characters in <pre> elements:

// Manage comment submissions
function preprocess_new_comment($commentdata) {
	// Replace `code` with <code>code</code>
	$commentdata['comment_content'] = preg_replace("/`(.*)`/Um", "<code>$1</code>", $commentdata['comment_content']);

	// Ensure that code inside pre's is allowed
	preg_match_all("/<pre(.*?)>(.*)<\/pre>/", $commentdata['comment_content'], $pre_matches); // $2
	foreach($pre_matches as $match) {
		$immediate_match = str_replace(array('<', '>'), array('&lt;', '&gt;'), $match[2]);
		$commentdata['comment_content'] = str_replace($match[2], $immediate_match, $commentdata['comment_content']);
	}

	// Return
	return $commentdata;
}
add_action('preprocess_comment', 'preprocess_new_comment');

This snippet should be added to functions.php, as you would expect of a WordPress theme enhancement.

I love the WordPress hook system — it makes the CMS incredibly powerful and customizable.  I also use this hook to prevent WordPress comment SPAM.  And since many users place HTML code in my comments, it’s important I encode those angle characters properly.  In the end, you never know what your user will submit and what each site will accept — use this WordPress hook to take control!

Read the full article at: Preprocess Comment Content in WordPress


Author: "David Walsh" Tags: "PHP, WordPress"
Date: Wednesday, 30 Jul 2014 01:25

One of the big efforts of this blog is to make it as fast and compact as possible. I shave bytes and do everything I can to make the site as lightning fast as possible. In looking at my site’s main JavaScript file, I saw a few blocks which have no value on production, even after minification. After some basic experimentation, I realized that we can abuse console.log statements, which are removed by minifiers, to execute functions on development servers but not on production!

The JavaScript

The traditional call to console.log is one or several strings, but you can pass a self-executing function if you want:

console.log((function() {
  // Do whatever...

  // Example for local dev: convert live links to local

  // Return a string to be logged, if you'd like
  return "Debug: {x} has been executed and is now working";

The console.log method really doesn’t do much here, but we get the added benefit of not only function execution but removal during uglify runs.
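Note that the removal depends on your minifier being configured to strip console calls. With UglifyJS2, for example (an assumption — check your own minifier’s documentation for the equivalent option), that’s the drop_console compress flag:

```shell
# drop all console.* calls -- self-executing function arguments and all --
# during compression
uglifyjs app.js --compress drop_console=true --output app.min.js
```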

Using console.x methods is a big help during development, and it’s awesome that we can bastardize a minifier to work for us during both development and production!

Read the full article at: Abusing console.log to Remove Debug Code


Author: "David Walsh" Tags: "JavaScript, Quick Tips"
Date: Tuesday, 29 Jul 2014 14:37

Just as every client project is unique, each designer is different by nature, and has distinct tastes, skills, style and experience. Yet, web/graphic designers always like to think they’ve made the best decisions and signed the most handsome deals when it comes to resources and tools, which are indispensable to their work. While fully aware that opinions on these matters may differ, I would like to share some of the most useful resources and tools in terms of website builders, WordPress themes, hosting providers, design deals, font and vector resources, stock photos, icons, project management tools, PSD to HTML services, and E-commerce solutions.

Website Builders


A good website builder has the power to transform your work by allowing you to channel your creativity without dealing with the irksome task of coding. ALLYOU.net is all about professionally crafted showcase websites that require minimum effort to set up. The Swiss platform was originally created in 2011 to serve the needs of creative individuals from various fields of experience. Graphic designers, as well as photographers, or stylists, can make fabulous portfolios with the utmost simplicity.

As long as you’re not fixed on ending up with a sophisticated interface, feel free to single out a template and then use the drag-and-drop front end editor to customize it and piece your website together. ALLYOU does not charge a single dime for its services, not even for cloud hosting. However, it’s possible to further upgrade the portfolio site starting from $8/month, by opting for the Carbon or the Titanium pricing plan.


Wix is rounding up more and more users every day. I’m not surprised, since it’s a handy cloud-based tool that helps people deliver clean-cut HTML 5 websites without writing a single line of code. Besides, it offers a multitude of apps, or lets you add them to your site from a third-party source, to add functionality like E-commerce, social plug-ins, contact forms, community forums, and e-mail marketing.


Admittedly, it takes a little while to get the hang of this website builder, but once you’ve crossed that bridge, nothing is holding you back from creating beautiful websites for your clients. You may even notice that Moonfruit presents several advantages that are hard to come by with other online tools, such as well designed site templates to choose from, and effective SEO calibrating on the platter.

WordPress Themes

X Theme

Versatility is the operative word when you’re creating WordPress websites. You may be asked to build an online store, a blog, a business website, a portfolio, a photography website, an event website, and so on, and so forth. Considering the supersaturated theme market out there, I believe that X Theme is one of a kind.

The magnificent brainchild from ThemeForest has got you covered on any occasion, so this may well be the only WordPress theme you’ll ever need. The so-called ‘Stacks’ make sure of that. With any other theme, you’d be confined to choosing a skin and then customizing its appearance. The X Theme comes with four unique designs in one package, and allows you to easily edit your site in a live previewer.

Template Monster

Template Monster is a giant library of site templates where you’re bound to find anything you could possibly wish for. Well over 20,000 items are vying for your attention. You can browse through them and take your pick, depending on the format you prefer: HTML, CSS with Flash, or PSD. If your services are solicited by small or medium-sized companies, you can use Template Monster to provide them with fully supported and hosted turnkey websites.

Another interesting fact about this service is the way you go about acquiring templates. You can either opt for a limited use license, or purchase a unique license to keep anyone else from using a specific template – whichever route suits your needs best.


There’s one more collection of themes for WordPress sites that I absolutely have to mention here. A team of dedicated theme developers work hard to contribute to a superb database that presently holds more than 80 responsive specimens. The CrocoBlock Club has a helpful support team, and is counting on Cherry Framework to power all of its offerings.

If you try out some of their themes, you’ll see that they’re ready to go right after you’ve installed them, and besides, the admin panel is very easy to navigate. Apart from that, advanced options are placed at your disposal whenever you’re working on a portfolio or a blog. You’d also get constant upgrades, a built-in SEO tool, customized widgets, and an integrated plugin with shortcodes.



Your job also requires that you sometimes ensure trustworthy web hosting for websites, emails and applications. HeartInternet is a viable solution, regardless of the magnitude of your client’s business. Moreover, it’s good to know that help is always within reach and that they don’t outsource: the support team is entirely based in the U.K.


UK-based Pixeno offers a top notch reseller hosting service for web designers and developers. You can request a 30-day free trial to test drive the service (no credit card required). Their reseller hosting platform is scalable, meaning that it can be adjusted as the customer base grows. Moreover, Pixeno provides WHM (Web Host Manager) for all resellers, who can then add websites to their package, each with its own cPanel account. Lastly, they’re planning to launch a new U.S. location in the upcoming weeks. Use coupon code DESIGN15 and get 15% off their awesome service.

Design deals

Mighty Deals

It can’t hurt to be on the lookout for practical designer bargains that may be waiting around the corner. Who knows, you might just come across useful resources that would otherwise cost tremendous amounts of money. Mighty Deals  is the perfect venue for frequent discounts.

Take a minute to subscribe to their newsletter and take advantage of special offers while they stand. The discounts titan has daily discounts of 50-90%  for web development lessons, Mac toolkits, professional templates, and design tools like Photoshop actions or royalty-free vectors, but they are only available for a very short period of time. Besides, you can rest assured that this service is altogether free, so you don’t have to register or pay any fees.


Of course, not all design deal services on the market are free of charge. However, that doesn’t mean that they’re not worthy of your attention. Case in point, a Medialoot membership may cost you $14 per month or $99 for the entire year, but keep in mind that there is much you stand to gain.

Members get unlimited access to top class resources, which are altogether created by Medialoot designers and hence carry a solid quality guarantee. You can expect to get your hands on many types of treats, including web and UI kits, mobile-specific design kits, vector and illustration kits, individual icon sets, countless font families, and hundreds of Photoshop add-ons.

Font resources


Images and graphics aside, we all know how important it is to also display content in a proper manner. The headings and body text used in a website, for instance, have to be attractive and hold everyone’s attention. Therefore, it’s crucial that you seek and use the appropriate fonts for the project at hand. UrbanFonts has one of the best collections on the web. Just take your pick from over 8k free fonts, as well as loads of premium fonts, and dingbats.


At times, you may happen to see an image that contains a word written in pleasant typography, and you don’t have a clue what font it is, so you get the sudden urge to find out. There’s no telling how long that web search might turn out to be. WhatFontIs.com exists just for that type of emergency. This is an excellent tool for designers, who work against deadlines and don’t have any time to waste.

You won’t believe how fast you’d get that elusive font, either by entering the URL with the location of that image, or by taking a screenshot, and uploading it onto the platform. Soon enough, you’d receive a list with the exact or closest matches to that font.

Vector resources


When you’re scouting for the best microstock agencies to get vectors from, remember to visit Shutterstock. It’s one of my top options, and it’s been a favorite for years. This marketplace is very popular, and makes it a habit of favoring brand new items, so contributors are encouraged to create and keep adding new vectors on the website.  

Vector Open Stock

If you don’t want to spend any money, then there’s always time to scan through websites that offer high quality free vector graphics. You may wish to consider Vector Open Stock above all others, as it has almost 7k precious freebies on its shelves, just waiting to be snatched. While some items are subject to a Creative Commons license, others are under an Open Stock License.

In addition, Vector Open Stock also sports some sort of a Social Network. As such, you will find useful features here, which you can use to browse collections, as well as like other people’s works, follow them, and voice your opinion.

Stock images


Creative professionals are often forced to sort through rubbish until they find the perfect images to work with. Unsplash is a stock photo merchant with top notch content that is really worth your while. Not to mention, ten new photos are added every ten days, and they are all under the CC0 license, so you can use them freely.


I have only just discovered the small treasure called PicJumbo. Its owner keeps uploading incredibly neat looking photographs of 10 megapixels that look like anything but stock photos. In fact, most of them are made available for free, and can be used in the making of templates, themes, and websites, as long as you’re not selling them.  You can also go Premium for $6/month and receive a monthly intake of beautiful images.

Project management


Without proper organizing skills, teamwork would turn to chaos. Project management is particularly important in web design, so reaching out to helpful software is highly recommended. My favorite tool is Azendoo, which is available on the web and on mobile. The free version allows you to create an endless number of workspaces, as well as set tasks and subjects indefinitely. With Premium comes advanced admin capabilities and increased storage.

Azendoo is complete with integrations like Dropbox, Evernote, Google Drive, and Box. Thus, you are at liberty to link accounts, and share all work in one place, so that you and your team-mates are in sync.


We’re not project managers, we’re designers. If you feel that most project management software out there is cumbersome (and the vast majority of designers get that feeling sooner or later), then I know what can deliver you. If you’re looking for something more visual and generally uncomplicated, your salvation lies in a handy tool called Casual.

You’ll be happy to notice that Casual doesn’t overwhelm you with heavy Gantt charts or other managerial features that come across as unfriendly to designers. Instead, you can draw a MindMap of your tasks, plan easily, and have absolute control over your projects from beginning to end.



Next time you’re interested in acquiring icons, check out Picons.me: that’s where you’ll find crisp, hand-crafted items that will stick in your memory for a very long time and complement your website, offline media, desktop or mobile app. The icon sets are vectorized, created with Illustrator, and can be scaled to any size.

An 80-strong set is at your service, cost-free: the Social bundle contains icons pertaining to the most popular networks. Lastly, be sure to follow Picons.me on Twitter, seeing as they announce promotions from time to time.


Roundicons commands the largest collection I’ve seen so far. No less than 5K icons are stacked in the vaults and organized in sets. Here are icons that fit every style, taste, and format, from flat colorful elements, to flat scenes, lines, outlines, and filled icons. They are available either as PSD, AI, SVG, PNG, or CSH.

I love the fact that all icons from this source have Extended License. What’s more, once you’ve purchased an icon set, all the items that appear thenceforth and belong to that set are automatically sent to you, without any extra fees.

PSD to HTML services

Direct Basing

When you’ve designed a PSD and don’t wish to mess around with coding in order to launch it in the online world, it’s time to enlist a PSD to HTML service. Direct Basing should be among your first choices, because they have one of the fastest turnarounds in the industry. When needed, it’s also possible to let them implement one of your favourite CMS systems, such as WordPress, Joomla, or Magento eCommerce.

Furthermore, when you place your order, you’re also invited to choose several layout and JavaScript features. The (x)HTML/CSS code written by the expert team from Direct Basing will optimize your website for search engines, and make it function well on all browsers, old or new.


Yet another first-rate PSD to HTML service is XHTMLized. It takes pride in having a staggering experience: with 10 years of dedicated service as back-end developers, top quality is in their DNA. The moment you ask for their help, you sign a non-disclosure agreement that protects your data and your property. The only people who come in contact with your designs will be on first name basis with you, because XHTMLized believes that a healthy relationship with the clients is essential. Moreover, the Project Producer assigned to you can be reached at any moment, should you raise any questions or have anything at all to add.

E-commerce solutions


As far as online stores are concerned, Shopify is the perfect answer for retailers and for designers who would rather sidestep technicalities altogether. This tool allows you to easily customize the site design, manage orders, add new products, and accept credit cards. Big names like Encyclopaedia Britannica and Amnesty International work with Shopify, so what’s holding you back?


If you’re currently engaged in the sphere of WordPress websites and need an E-commerce plugin, WooCommerce is your best bet. You can use it entirely free of charge to create stores that sell anything from software, to clothing, photos, music, and even products that originate from an affiliate marketplace.

This list could go on forever, but I’ve only singled out the tools and resources that I believe would make the biggest difference in the life of a graphic/web designer. Naturally, I don’t expect everyone to agree with me on the best options, and I wholeheartedly welcome other opinions.

Read the full article at: Popular Tools and Resources That a Web Designer Should Use (Sponsored)


Author: "David Walsh" Tags: "Sponsored"
Date: Sunday, 27 Jul 2014 19:15

If you’re still unfamiliar with ES6 generators, first go read and play around with the code in “Part 1: The Basics Of ES6 Generators”. Once you think you’ve got the basics down, now we can dive into some of the deeper details.

Error Handling

One of the most powerful parts of the ES6 generators design is that the semantics of the code inside a generator are synchronous, even if the external iteration control proceeds asynchronously.

That’s a fancy/complicated way of saying that you can use simple error handling techniques that you’re probably very familiar with — namely the try..catch mechanism.

For example:

function *foo() {
    try {
        var x = yield 3;
        console.log( "x: " + x ); // may never get here!
    }
    catch (err) {
        console.log( "Error: " + err );
    }
}

Even though the function will pause at the yield 3 expression, and may remain paused an arbitrary amount of time, if an error gets sent back to the generator, that try..catch will catch it! Try doing that with normal async capabilities like callbacks. :)

But, how exactly would an error get sent back into this generator?

var it = foo();

var res = it.next(); // { value:3, done:false }

// instead of resuming normally with another `next(..)` call,
// let's throw a wrench (an error) into the gears:
it.throw( "Oops!" ); // Error: Oops!

Here, you can see we use another method on the iterator — throw(..) — which “throws” an error into the generator as if it had occurred at the exact point where the generator is currently yield-paused. The try..catch catches that error just like you’d expect!

Note: If you throw(..) an error into a generator, but no try..catch catches it, the error will (just like normal) propagate right back out (and if not caught, eventually end up as an unhandled exception). So:

function *foo() { }

var it = foo();
try {
    it.throw( "Oops!" );
}
catch (err) {
    console.log( "Error: " + err ); // Error: Oops!
}

Obviously, the reverse direction of error handling also works:

function *foo() {
    var x = yield 3;
    var y = x.toUpperCase(); // could be a TypeError error!
    yield y;
}

var it = foo();

it.next(); // { value:3, done:false }

try {
    it.next( 42 ); // `42` won't have `toUpperCase()`
}
catch (err) {
    console.log( err ); // TypeError (from `toUpperCase()` call)
}

Delegating Generators

Another thing you may find yourself wanting to do is call another generator from inside of your generator function. I don’t just mean instantiating a generator in the normal way, but actually delegating your own iteration control to that other generator. To do so, we use a variation of the yield keyword: yield * (“yield star”).


function *foo() {
    yield 3;
    yield 4;
}

function *bar() {
    yield 1;
    yield 2;
    yield *foo(); // `yield *` delegates iteration control to `foo()`
    yield 5;
}

for (var v of bar()) {
    console.log( v );
}
// 1 2 3 4 5

Just as explained in part 1 (where I use function *foo() { } instead of function* foo() { }), I also use yield *foo() here instead of the yield* foo() that many other articles/docs use. I think this is more accurate/clear to illustrate what’s going on.

Let’s break down how this works. The yield 1 and yield 2 send their values directly out to the for..of loop’s (hidden) calls of next(), as we already understand and expect.

But then yield* is encountered, and you’ll notice that we’re yielding to another generator by actually instantiating it (foo()). So we’re basically yielding/delegating to another generator’s iterator — probably the most accurate way to think about it.

Once yield* has delegated (temporarily) from *bar() to *foo(), now the for..of loop’s next() calls are actually controlling foo(), thus the yield 3 and yield 4 send their values all the way back out to the for..of loop.

Once *foo() is finished, control returns back to the original generator, which finally calls the yield 5.

For simplicity, this example only yields values out. But of course, if you don’t use a for..of loop, but just manually call the iterator’s next(..) and pass in messages, those messages will pass through the yield* delegation in the same expected manner:

function *foo() {
    var z = yield 3;
    var w = yield 4;
    console.log( "z: " + z + ", w: " + w );
}

function *bar() {
    var x = yield 1;
    var y = yield 2;
    yield *foo(); // `yield*` delegates iteration control to `foo()`
    var v = yield 5;
    console.log( "x: " + x + ", y: " + y + ", v: " + v );
}
var it = bar();

it.next();      // { value:1, done:false }
it.next( "X" ); // { value:2, done:false }
it.next( "Y" ); // { value:3, done:false }
it.next( "Z" ); // { value:4, done:false }
it.next( "W" ); // { value:5, done:false }
// z: Z, w: W

it.next( "V" ); // { value:undefined, done:true }
// x: X, y: Y, v: V

Though we only showed one level of delegation here, there’s no reason why *foo() couldn’t yield* delegate to another generator iterator, and that to another, and so on.
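As a quick illustration of that nesting (the generator names here are mine), three levels of delegation flatten into a single iteration sequence for the consumer:

```javascript
function *innermost() {
    yield 3;
}

function *inner() {
    yield 2;
    yield *innermost(); // delegate one level deeper
}

function *outer() {
    yield 1;
    yield *inner();     // the whole chain is transparent to the consumer
    yield 4;
}

for (var v of outer()) {
    console.log( v );
}
// 1 2 3 4
```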

Another “trick” that yield* can do is receive a returned value from the delegated generator.

function *foo() {
    yield 2;
    yield 3;
    return "foo"; // return value back to `yield*` expression
}

function *bar() {
    yield 1;
    var v = yield *foo();
    console.log( "v: " + v );
    yield 4;
}

var it = bar();

it.next(); // { value:1, done:false }
it.next(); // { value:2, done:false }
it.next(); // { value:3, done:false }
it.next(); // "v: foo"   { value:4, done:false }
it.next(); // { value:undefined, done:true }

As you can see, yield *foo() was delegating iteration control (the next() calls) until it completed, then once it did, any return value from foo() (in this case, the string value "foo") is set as the result value of the yield* expression, to then be assigned to the local variable v.

That’s an interesting distinction between yield and yield*: with yield expressions, the result is whatever is sent in with the subsequent next(..), but with the yield* expression, it receives its result only from the delegated generator’s return value (since next(..) sent values pass through the delegation transparently).
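A small side-by-side sketch (my own naming, not from the article) makes that distinction concrete: the first assignment gets its value from a next(..) call, the second from the delegated generator’s return value:

```javascript
function *inner() {
    yield "a";
    return "inner-done"; // only the `yield*` expression ever sees this value
}

function *outer() {
    var fromNext = yield "first";    // filled by the next(..) that resumes us
    var fromReturn = yield *inner(); // filled by inner's return value
    console.log( fromNext + " / " + fromReturn );
}

var it = outer();
it.next();         // { value:"first", done:false }
it.next( "sent" ); // { value:"a", done:false }
it.next();         // logs "sent / inner-done", then { value:undefined, done:true }
```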

You can also do error handling (see above) in both directions across a yield* delegation:

function *foo() {
    try {
        yield 2;
    }
    catch (err) {
        console.log( "foo caught: " + err );
    }

    yield; // pause

    // now, throw another error
    throw "Oops!";
}

function *bar() {
    yield 1;
    try {
        yield *foo();
    }
    catch (err) {
        console.log( "bar caught: " + err );
    }
}

var it = bar();

it.next(); // { value:1, done:false }
it.next(); // { value:2, done:false }

it.throw( "Uh oh!" ); // will be caught inside `foo()`
// foo caught: Uh oh!

it.next(); // { value:undefined, done:true }  --> No error here!
// bar caught: Oops!

As you can see, the throw("Uh oh!") throws the error through the yield* delegation to the try..catch inside of *foo(). Likewise, the throw "Oops!" inside of foo() throws back out to *bar(), which then catches that error with another try..catch. Had we not caught either of them, the errors would have continued to propagate out as you’d normally expect.


Summary

Generators have synchronous execution semantics, which means you can use the try..catch error handling mechanism across a yield statement. The generator iterator also has a throw(..) method to throw an error into the generator at its paused position, which can of course also be caught by a try..catch inside the generator.

yield* allows you to delegate the iteration control from the current generator to another one. The result is that yield* acts as a pass-through in both directions, both for messages as well as errors.

But, one fundamental question remains unanswered so far: how do generators help us with async code patterns? Everything we’ve seen so far in these two articles is synchronous iteration of generator functions.

The key will be to construct a mechanism where the generator pauses to start an async task, and then resumes (via its iterator’s next() call) at the end of the async task. We will explore various ways of going about creating such asynchronicity-control with generators in the next article. Stay tuned!
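As a teaser, here’s a deliberately naive sketch of that mechanism (the run(..) helper and fetchNumber(..) are my illustrative names, not the article’s): the generator yields a promise, and a tiny runner resumes it when the promise settles:

```javascript
// Naive runner: resume the generator each time its yielded promise
// resolves. A real utility would also wire rejections into
// it.throw(..) so try..catch works across the async gap.
function run(gen) {
    var it = gen();

    function step(value) {
        var res = it.next( value );
        if (!res.done) {
            Promise.resolve( res.value ).then( step );
        }
    }

    step();
}

function fetchNumber() {
    // stand-in for a real async task (ajax, timer, etc.)
    return Promise.resolve( 42 );
}

run(function *() {
    var n = yield fetchNumber(); // generator pauses here...
    console.log( "got: " + n );  // ...and resumes with 42
});
```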

Read the full article at: Diving Deeper With ES6 Generators


Author: "Kyle Simpson" Tags: "Guest Blogger, JavaScript"
Date: Thursday, 24 Jul 2014 16:11

There are many tasks related to arrays that sound quite simple but (1) aren’t and (2) aren’t required of a developer very often. I encountered one such task recently: inserting an item into an existing array at a specific index. It sounds easy and common enough, but it took some research to figure out.

// The original array
var array = ["one", "two", "four"];
// splice(position, numberOfItemsToRemove, item)
array.splice(2, 0, "three");

array;  // ["one", "two", "three", "four"]

If you aren’t averse to extending natives in JavaScript, you could add this method to the Array prototype:

Array.prototype.insert = function (index, item) {
  this.splice(index, 0, item);
};
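With that helper in place (repeated below so the snippet stands alone), usage looks like this; note that splice mutates the array rather than returning a new one:

```javascript
Array.prototype.insert = function (index, item) {
  this.splice(index, 0, item);
};

var letters = ["a", "b", "d"];
letters.insert(2, "c");
console.log(letters); // ["a", "b", "c", "d"]
```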

I’ve tinkered around quite a bit with arrays, as you may have noticed.

Arrays are super useful — JavaScript just makes some tasks a bit more … code-heavy than they need to be. Keep these snippets in your toolbox for the future!

Read the full article at: Array: Insert an Item at a Specific Index with JavaScript


Author: "David Walsh" Tags: "JavaScript, Quick Tips"