

Date: Tuesday, 29 Jul 2014 12:58

Custom ActiveModel::Validators are an easy way to validate individual attributes on your Rails models. All that's required is a Ruby class that inherits from ActiveModel::EachValidator and implements a validate_each method that takes three arguments: record, attribute, and value. I have written a few lately, so I pinged the rest of the amazingly talented Viget developers for some contributions. Here's what we came up with.
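Before the examples, here's a framework-free sketch of that contract (the UpcaseValidator name and the Hash-backed errors stub are invented for illustration; in Rails the errors object is an ActiveModel::Errors, and you'd opt in on a model with something like validates :code, upcase: true):

```ruby
# Hypothetical validator demonstrating the validate_each contract.
# Rails instantiates the validator and calls validate_each once per
# validated attribute, passing the record, attribute name, and value.
class UpcaseValidator
  def initialize(options = {})
    @options = options
  end

  def validate_each(record, attribute, value)
    unless value == value.to_s.upcase
      record.errors[attribute] << (@options[:message] || 'must be uppercase')
    end
  end
end

# Stand-in record whose errors object mimics errors[attribute] << message.
Record = Struct.new(:errors)
record = Record.new(Hash.new { |hash, key| hash[key] = [] })

UpcaseValidator.new.validate_each(record, :code, 'abc')
record.errors[:code] # => ["must be uppercase"]
```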

Simple URI Validator

A "simple URI" can be either a relative path or an absolute URL. In this case, any value that could be parsed by Ruby's URI module is allowed:

class UriValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless valid_uri?(value)
      record.errors[attribute] << (options[:message] || 'is not a valid URI')
    end
  end

  private

  def valid_uri?(uri)
    URI.parse(uri)
    true
  rescue URI::InvalidURIError
    false
  end
end
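To get a feel for what this validator accepts, here's the same check exercised in plain Ruby (no Rails required; simple_uri? is just a local stand-in for the private method above):

```ruby
require 'uri'

# URI.parse happily parses relative paths and absolute URLs, but raises
# URI::InvalidURIError on clearly malformed input such as embedded spaces.
def simple_uri?(uri)
  URI.parse(uri)
  true
rescue URI::InvalidURIError
  false
end

simple_uri?('/about/team')      # => true (relative path)
simple_uri?('http://viget.com') # => true (absolute URL)
simple_uri?('not a uri')        # => false (spaces are rejected)
```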

Full URL Validator

A "full URL" is defined as requiring a host and scheme. Ruby provides a regular expression to match against, so that's what is used in this validator:

class FullUrlValidator < ActiveModel::EachValidator
  VALID_SCHEMES = %w(http https)
 
  def validate_each(record, attribute, value)
    unless value =~ URI::regexp(VALID_SCHEMES)
      record.errors[attribute] << (options[:message] || 'is not a valid URL')
    end
  end
end

Ruby's regular expression can be too permissive for many applications. For a stricter regular expression, Brian Landau shared this Github gist.
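One concrete way it's permissive: URI.regexp returns an unanchored pattern, so a value that merely contains a URL will pass the validator above (a plain-Ruby check; note that URI.regexp is deprecated in recent Ruby versions):

```ruby
require 'uri'

# URI.regexp is not anchored to the start/end of the string, so it will
# match a URL embedded anywhere in the value under validation.
pattern = URI.regexp(%w(http https))

'http://example.com' =~ pattern           # matches
'junk http://example.com junk' =~ pattern # also matches
```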

Email Validator

My good friends Lawson Kurtz and Mike Ackerman contributed the following email address validator:

class EmailValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless value =~ /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\z/i
      record.errors[attribute] << (options[:message] || "is not a valid e-mail address")
    end
  end
end

If you'd rather validate by performing a reverse DNS lookup, Brian Landau has you covered with this Github gist.

Secure Password Validator

Lawson provided this secure password validator (though credit goes to former Viget developer, James Cook):

class SecurePasswordValidator < ActiveModel::EachValidator
  WORDS = YAML.load_file("config/bad_passwords.yml")

  def validate_each(record, attribute, value)
    if value.in?(WORDS)
      record.errors.add(attribute, "is a common password. Choose another.")
    end
  end
end

Twitter Handle Validator

Lawson supplied this validator that checks for valid Twitter handles:

class TwitterHandleValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless value =~ /\A[A-Za-z0-9_]{1,15}\z/
      record.errors[attribute] << (options[:message] || "is not a valid Twitter handle")
    end
  end
end

Hex Color Validator

A validator that's useful when an attribute should be a hex color value:

class HexColorValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless value =~ %r{\A([a-fA-F0-9]{3}){1,2}\z}
      record.errors[attribute] << (options[:message] || 'is not a valid hex color value')
    end
  end
end
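Note that the pattern accepts bare three- or six-digit values only; if you store colors with a leading '#', strip it before validating. A quick plain-Ruby check:

```ruby
# The pattern matches exactly 3 or 6 hex digits -- nothing more.
pattern = %r{\A([a-fA-F0-9]{3}){1,2}\z}

'fff'    =~ pattern # matches (shorthand form)
'C0FFEE' =~ pattern # matches (both cases allowed by the character class)
'#fff'   =~ pattern # nil -- the leading '#' is not part of the pattern
'ffff'   =~ pattern # nil -- four digits is neither 3 nor 6
```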

Regular Expression Validator

A great solution for attributes that should be a regular expression:

class RegexpValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless valid_regexp?(value)
      record.errors[attribute] << (options[:message] || 'is not a valid regular expression')
    end
  end

  private

  def valid_regexp?(value)
    Regexp.compile(value)
    true
  rescue RegexpError
    false
  end
end

Bonus Round

Replace all of those default error messages above with I18n translated strings for great justice. For the Regular Expression Validator above, the validate_each method could look something like this:

def validate_each(record, attribute, value)
  unless valid_regexp?(value)
    default_message = record.errors.generate_message(attribute, :invalid_regexp)
    
    record.errors[attribute] << (options[:message] || default_message)
  end
end

Then the following could be added to config/locales/en.yml:

en:
  errors:
    messages:
      invalid_regexp: is not a valid regular expression

Now the default error messages can be driven by I18n.

Conclusion

We've found these to be very helpful at Viget. What do you think? Which validators do you find useful? Are there others worth sharing? Please share in the comments below.

Author: "Zachary Porter" Tags: "Extend"
Date: Tuesday, 22 Jul 2014 15:29

As a developer, nothing makes me more nervous than third-party dependencies and things that can fail in unpredictable ways. More often than not, these two go hand-in-hand, taking our elegant, robust applications and dragging them down to the lowest common denominator of the services they depend upon. A recent internal project called for slurping in and then reporting against data from Harvest, our time tracking service of choice and a fickle beast on its very best days.

I knew that both components (/(im|re)porting/) were prone to failure. How could we handle that failure gracefully, so that our users see something more meaningful than a 500 page and our developers have a fighting chance at tracking down and fixing the problem? Here’s the approach we took.

Step 1: Model the processes

Rather than importing the data or generating the report with procedural code, create ActiveRecord models for them. In our case, the models are HarvestImport and Report. When a user initiates a data import or a report generation, save a new record to the database immediately, before doing any work.

Step 2: Give ’em status

These models have a status column. We default it to “queued,” since we offload most of the work to a series of Resque tasks, but you can use “pending” or somesuch if that’s more your speed. They also have an error field for reasons that will become apparent shortly.

Step 3: Define an interface

Into both of these models, we include the following module:

module ProcessingStatus
  def mark_processing
    update_attributes(status: "processing")
  end

  def mark_successful
    update_attributes(status: "success", error: nil)
  end

  def mark_failure(error)
    update_attributes(status: "failed", error: error.to_s)
  end

  def process(cleanup = nil)
    mark_processing
    yield
    mark_successful
  rescue => ex
    mark_failure(ex)
  ensure
    cleanup.try(:call)
  end
end

The three mark_* methods should be self-explanatory: they set the object’s status. The mark_failure method takes an exception object, which it stores in the model’s error field, and mark_successful clears said error.

The process method is where things get interesting. Calling this method immediately marks the object “processing,” and then yields to the provided block. If the block executes without error, the object is marked “success.” If any exception is raised, the object is marked “failed” and the error message is stored. Either way, if a cleanup lambda is provided, we call it (courtesy of Ruby’s ensure keyword).
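That control flow is easy to exercise outside Rails with a minimal stand-in (FakeImport is invented for illustration; it tracks status in plain attributes instead of update_attributes):

```ruby
# Stripped-down version of the module above: same rescue/ensure flow,
# no database involved.
class FakeImport
  attr_reader :status, :error

  def process(cleanup = nil)
    @status = 'processing'
    yield
    @status = 'success'
  rescue => ex
    @status = 'failed'
    @error  = ex.to_s
  ensure
    cleanup.call if cleanup
  end
end

import  = FakeImport.new
cleaned = false

# The block raises, so the rescue marks the import failed and stores the
# error, while the ensure still runs the cleanup lambda.
import.process(-> { cleaned = true }) { raise 'Harvest timed out' }

import.status # => "failed"
import.error  # => "Harvest timed out"
cleaned       # => true
```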

Step 4: Wrap it up

Now we can wrap our nasty, fail-prone reporting code in a process call for great justice.

class ReportGenerator
  attr_accessor :report

  def generate_report
    report.process -> { File.delete(file_path) } do
      # do some fail-prone work
    end
  end

  # ...
end

The benefits are almost too numerous to count: 1) no 500 pages, 2) meaningful feedback for users, and 3) super detailed diagnostic info for developers – better than something like Honeybadger, which doesn’t provide nearly the same level of context. (-> { File.delete(file_path) } is just a little bit of file cleanup that should happen regardless of outcome.)

* * *

I’ve always found it an exercise in futility to try to predict all the ways a system can fail when integrating with an external dependency. Being able to blanket rescue any exception and store it in a way that’s meaningful to users and developers has been hugely liberating and has contributed to a seriously robust platform. This technique may not be applicable in every case, but when it fits, it’s good.

Author: "David Eisinger" Tags: "Extend"
Date: Monday, 21 Jul 2014 14:56

Ever find yourself in a situation where you were given an ActiveRecord model and wanted to figure out all the models it has a foreign key dependency (belongs_to association) on? Well, I had to do just that in some recent sprig-reap work. Given the class for a model, I needed to find all the class names for its belongs_to associations.

In order to figure this out, there were a few steps I needed to take.

Identify the Foreign Keys / belongs_to Associations

ActiveRecord::Base-inherited classes (models) provide a nice interface for inspecting associations -- the reflect_on_all_associations method. In my case, I was looking specifically for belongs_to associations. I was in luck! The method takes an optional argument for the kind of association. Here's an example:

Post.reflect_on_all_associations(:belongs_to)
# => array of ActiveRecord::Reflection::AssociationReflection objects

Once I had a list of all the belongs_to associations, I needed to then figure out what the corresponding class names were.

Identify the Class Name from the Associations

When dealing with ActiveRecord::Reflection::AssociationReflection objects, there are two places where a class name can be found: the association's name (a downcased symbol) and its :class_name option (a string, present only when the class name was given explicitly). Here are examples of how to grab a class name from both a normal belongs_to association and one with an explicit class_name.

Normal belongs_to:

class Post < ActiveRecord::Base
  belongs_to :user
end

association = Post.reflect_on_all_associations(:belongs_to).first
# => ActiveRecord::Reflection::AssociationReflection instance

name = association.name
# => :user

With an explicit class_name:

class Post < ActiveRecord::Base
  belongs_to :creator, class_name: 'User'
end

association = Post.reflect_on_all_associations(:belongs_to).first
# => ActiveRecord::Reflection::AssociationReflection instance

name = association.options[:class_name]
# => 'User'

Getting the actual class:

ActiveRecord associations have a built-in klass method that returns the actual class based on the appropriate class name:

Post.reflect_on_all_associations(:belongs_to).first.klass
# => User

Handle Polymorphic Associations

Polymorphism is tricky. When dealing with a polymorphic association, you have a single identifier. Calling association.name would return something like :commentable. In a polymorphic association, we're probably looking to get back multiple class names -- like Post and Status for example.

class Comment < ActiveRecord::Base
  belongs_to :commentable, polymorphic: true
end

class Post < ActiveRecord::Base
  has_many :comments, as: :commentable
end

class Status < ActiveRecord::Base
  has_many :comments, as: :commentable
end

association = Comment.reflect_on_all_associations(:belongs_to).first
# => ActiveRecord::Reflection::AssociationReflection instance

polymorphic = association.options[:polymorphic]
# => true

associations = ActiveRecord::Base.subclasses.select do |model|
  model.reflect_on_all_associations(:has_many).any? do |has_many_association|
    has_many_association.options[:as] == association.name
  end
end
# => [Post, Status]

Polymorphic?

To break down the above example, association.options[:polymorphic] gives us true if our association is polymorphic and nil if it isn't.

Models with Polymorphic has_many Associations

If we know an association is polymorphic, the next step is to check all the models (ActiveRecord::Base.subclasses; you could also use .descendants, depending on how you want to handle subclasses of subclasses) for a matching polymorphic has_many association (has_many_association.options[:as] == association.name in the example). When there's a match on a has_many association, you know that model is one of the targets of the polymorphic belongs_to association!

Holistic Dependency Finder

As an illustration of how I handled my dependency sleuthing -- covering all the cases -- here's a class I made that takes a belongs_to association and provides a nice interface for returning all its dependencies (via its dependencies method):

class Association < Struct.new(:association)
  delegate :foreign_key, to: :association

  def klass
    association.klass unless polymorphic?
  end

  def name
    association.options[:class_name] || association.name
  end

  def polymorphic?
    !!association.options[:polymorphic]
  end

  def polymorphic_dependencies
    return [] unless polymorphic?
    @polymorphic_dependencies ||= ActiveRecord::Base.subclasses.select { |model| polymorphic_match? model }
  end

  def polymorphic_match?(model)
    model.reflect_on_all_associations(:has_many).any? do |has_many_association|
      has_many_association.options[:as] == association.name
    end
  end

  def dependencies
    polymorphic? ? polymorphic_dependencies : Array(klass)
  end

  def polymorphic_type
    association.foreign_type if polymorphic?
  end
end
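A small detail worth calling out in dependencies: Kernel#Array converts nil to an empty array, which is what makes Array(klass) safe when the association is polymorphic (klass returns nil in that case):

```ruby
# Kernel#Array wraps a bare value in an array and maps nil to [].
Array(nil)    # => []
Array(String) # => [String]
Array([1, 2]) # => [1, 2] (arrays pass through unchanged)
```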

Here's a full example with the Association class in action:

class Comment < ActiveRecord::Base
  belongs_to :commentable, polymorphic: true
end

class Post < ActiveRecord::Base
  belongs_to :creator, class_name: 'User'
  has_many :comments, as: :commentable
end

class Status < ActiveRecord::Base
  belongs_to :user
  has_many :comments, as: :commentable
end

class User < ActiveRecord::Base
  has_many :posts
  has_many :statuses
end

Association.new(Comment.reflect_on_all_associations(:belongs_to).first).dependencies
# => [Post, Status]

Association.new(Post.reflect_on_all_associations(:belongs_to).first).dependencies
# => [User]

Association.new(Status.reflect_on_all_associations(:belongs_to).first).dependencies
# => [User]

The object-oriented approach cleanly handles all the cases for us! Hopefully this post has added a few tricks to your repertoire. Next time you find yourself faced with a similar problem, use reflect_on_all_associations for great justice!

Author: "Ryan Stenberg" Tags: "Extend"
Date: Thursday, 03 Jul 2014 09:38

Ever since we made Say Viget! we've had a bunch of people asking us to explain exactly how we did it. This post is a first go at that explanation -- and a long one at that. Because so much goes into making a game, this is Part 1 of a multi-part series on how to build a 2D Javascript game, explaining some theory, best practices, and highlighting some helpful libraries.

If you're reading this then you probably know the basics: we use Javascript to add things to the context of a canvas while moving those things around through a loop (which is hopefully firing at ~60fps). But how do you achieve collisions, gravity, and general gameplay-smoothness? That's what this post will cover.

Screenshot

View Demo

Code on Github

Take note: I set up a few gulp tasks to make editing and playing with the code simple. The project uses Browserify to manage dependencies and is written in CoffeeScript. If you're unfamiliar with Gulp or Browserify, I recommend reading this great guide.

Using Box2D and EaselJS

There's a bunch of complex math involved in getting a game functioning as we would expect. Even simple things can quickly become complex. For example, when we jump on a concrete sidewalk there is almost no restitution (bounciness) compared to when we jump on a trampoline. A smooth glass surface has less friction than a jagged rock. And objects with greater mass should push those with lesser mass out of the way. To account for all these scenarios we'll be using the Box2D physics library.

Box2D is the de facto standard 2D physics engine. To get it working in the browser we'll need to use a Javascript port, which I found here.

Since the syntax for drawing things to a canvas tends to be verbose, I'll be using a library called EaselJS, which makes working with the <canvas> pretty enjoyable. If you're unfamiliar with EaselJS, definitely check out this Getting Started guide.

Let's get started.

What's in a Game?

Think high-level. Really high-level. The first thing you realize we need is a kind of world, or Reality. Gravity, mass, restitution, and friction all exist in the real world, and we probably want them to exist in our game world, too. Next, we know we will have at least two types of objects in our world: a Hero and some Platforms. We'll also need a place to put these two objects -- let's call the thing we put them on a Stage. And, just like the stage for a play, we'll need something that tells our Stage what should be put where. For that, we'll create the concept of a Scene. Lastly, we'll pull it all together into something I'll name Game.

Code Organization

As you can see, we start to have a clear separation of concerns, with each of our Scene, Stage, Hero, etc. having a different responsibility. To future-proof and better organize our project, we'll create a separate class for each:

  • Reality - Get our game and debug canvas ready and define our virtual world.
  • Stage - Holds and keeps track of what the user can see.
  • Hero - Builds our special hero object that can roll and jump around.
  • Platform - Builds a platform at a given x, y, width, and height.
  • Scene - Calls our hero and creates the platforms.
  • Game - Pulls together all our classes. We also put the start/stop and the game loop in here.

Additionally, we'll create two extra files which define some variables being used throughout our project.

  • Config - Which holds some sizing and preferences
  • Keys - Defines keyboard input codes and their corresponding value.

Getting Started

We'll have two <canvas>es: one that EaselJS will interact with (<canvas id="arcade">, which I'll refer to as Arcade), and another for Box2D (<canvas id="debug">, referred to as Debug). These two canvases run completely independently, but we allow them to talk to each other. Our Debug canvas is its own world, a Box2D world, which is where we define gravity and how objects (bodies) within that world interact, and where we place the bodies that back the things the user can see.

The objects we can see, like our hero and the platforms, we'll draw to the Arcade canvas using EaselJS. The Box2D objects (or bodies) that represent our hero and platforms will be drawn to the Debug canvas.

Since Box2D defines sizes in meters, we'll need to translate our input into something the browser can understand (moving a platform over 10 meters doesn't make sense; 300 pixels does). This means that for every pixel value we pass to a Box2D function -- say an X and Y coordinate -- we'll need to divide by a scale factor that converts those pixels into meters. That magic number is 30. So, if we want our hero to start 25 pixels from the left of the screen and 475 pixels from the top, we would do:

scale = 30

# b2Vec2 creates a mathematical vector object,
# which has a magnitude and direction
position = new box2d.b2Vec2( 25 / scale , 475 / scale)
@body.SetPosition(position)

Simple enough, right? Let's jump into what a Box2D body is and what we can do with it.

Creating a Box2D Body

Many of the objects in the game are made up of something we can see like the color and size of a platform, and world constraints on that object we cannot see, like mass, friction, etc. To handle this, we need to draw the visible representation of a platform to our Arcade canvas, while creating a Box2D body on the Debug canvas.

Box2D objects, or bodies, are made up of a Fixture definition and a Body definition. Fixtures represent what an object, like our Platform, is made of and how it responds to other objects. Attributes like friction, density, and shape (whether it's a circle or a polygon) are part of our Platform's Fixture. A Body definition defines where in our world a Platform should be. Some base-level code for a Platform to be added to our Debug <canvas> would be:

scale  = 30
width  = 50
height = 50

# Creates what the shape is
@fixtureDef             = new box2d.b2FixtureDef
@fixtureDef.friction    = 0.5
@fixtureDef.restitution = 0.25 # Slightly bouncing
@fixtureDef.shape       = new box2d.b2PolygonShape
@fixtureDef.shape.SetAsBox( width / 2 / scale, height / 2 / scale )
# Note: SetAsBox expects values to be
# half the size, hence dividing by 2

# Where the shape should be
@bodyDef      = new box2d.b2BodyDef
@bodyDef.type = box2d.b2Body.b2_staticBody
@bodyDef.position.Set(width / scale, height / scale)

# Add to world
@body = world.CreateBody( @bodyDef )
@body.CreateFixture( @fixtureDef )

Note that static body types (as defined above with box2d.b2Body.b2_staticBody) are not affected by gravity. Dynamic body types, like our hero, will respond to gravity.

Adding EaselJS

In the same place we created our Box2D fixture and body definitions we can create a new EaselJS Shape which simply builds a rectangle with the same dimensions as our Box2D body and add it to our EaselJS Stage.

# ...from above...
# Add to world
@body = world.CreateBody( @bodyDef )
@body.CreateFixture( @fixtureDef )

@view = new createjs.Shape
@view.graphics.beginFill('#000').drawRect(100, 100, width, height)

Stage.addChild @view

From there, we now have one EaselJS Shape, or View, which is being drawn to our Arcade canvas, while the body that represents that Shape is drawn to our Debug canvas. In the case of our hero we want to move our EaselJS shape with its corresponding Box2D body. To do that, we would do something like:

# Get the current position of the body
position = @body.GetPosition()
# Multiply by our scale
@view.x = position.x * scale
@view.y = position.y * scale

The trick to all of this is tying these two objects together -- our Box2D body on our Debug canvas is affected by gravity and thus moves around. When it moves, we get the position of the body and use it to update the position of our EaselJS Shape, or @view. That's it.

Accounting for User Input and Controls

Think about how you normally control a character in a video game. You move the joystick up and the player moves forward... and keeps moving forward until you let go. We want to mimic that functionality in our game. To do this we will set a `moving` variable to true when the user presses down on a key (onKeyDown) and set it to false when the user lets go (onKeyUp). Something like:

assignControls: =>
    document.onkeydown = @handleDown
    document.onkeyup   = @handleUp

handleDown: (e) =>
    switch e.which
        when 37 # Left arrow
            @moving_left = true
        when 39 # right arrow
            @moving_right = true

handleUp: (e) =>
    switch e.which
        when 37
            @moving_left = false
        when 39
            @moving_right = false

And on each iteration of our loop, we would do something like:

update: =>
    # Move right
    if @moving_right
        @hero_speed += 1
    # Move left
    else if @moving_left
        @hero_speed -= 1
    # Come to a stop
    else
        @hero_speed = 0

Again, this is a pretty simple concept.

Look Through The Code

From here I recommend looking through the code on Github for great justice. In it you'll find more refined examples, in an actual game context, which will provide a fuller understanding of the concepts explained above.

Conclusion

So far we've covered:

  • Using two canvases, one to handle drawing and the other to handle physics
  • What makes up a Box2D body
  • How to tie our EaselJS objects to our Box2D bodies
  • A strategy for controlling our hero with user input. 

In Part 2 we'll cover:

  • How to follow our hero throughout our scene
  • How to build complex shapes
  • Handling collisions with special objects.

In addition to what I'll be covering in Part 2, is there anything else you would like covered relating to game development? Have questions or feedback on how we could be doing something differently? Let me know in the comments below.

Author: "Tommy Marshall" Tags: "Extend"
Date: Wednesday, 02 Jul 2014 13:52

This May, Viget worked with Dick's Sporting Goods to launch Women's Fitness, an interactive look at women’s fitness apparel and accessories. One of its most interesting features is the grid of hexagonal product tiles shown in each scene. To draw the hexagons, I chose to use SVG polygon elements.

I've had experience using SVG files as image sources and in icon fonts, but this work was my first opportunity to really dig into its most powerful use case: inline in HTML. Inline SVG simply refers to SVG markup that is included in the markup for a webpage.

 

<div><svg><!-- WHERE THE MAGIC HAPPENS. --></svg></div>

 

Based on this experience, here are a few simple things I learned about SVG.

1. Browser support is pretty good

http://caniuse.com/#feat=svg-html5

2. SVG can be styled with CSS

Many SVG attributes, like fill and stroke, can be styled right in your CSS.

See the Pen eLbCy by Chris Manning (@cwmanning) on CodePen.

3. SVG doesn't support CSS z-index

Setting the z-index in CSS has absolutely no effect on the stacking order of SVG elements. The only thing that determines stacking is the position of the node in the document. In the example below, the orange circle comes after the blue circle in the document, so it is stacked on top.

See the Pen qdgtk by Chris Manning (@cwmanning) on CodePen.

4. SVG can be created and manipulated with JavaScript

Creation

Creating namespaced elements (or attributes, more on that later) requires a slightly different approach than HTML:

// HTML
document.createElement('div');

// SVG
document.createElementNS('http://www.w3.org/2000/svg', 'svg');

If you're having problems interacting with or updating elements, double check that you're using createElementNS with the proper namespace. More on SVG namespaces.

With Backbone.js

In a Backbone application like Women's Fitness, to use svg or another namespaced element as the view's el, you can explicitly override this line in Backbone.View._ensureElement:

// https://github.com/jashkenas/backbone/blob/1.1.2/backbone.js#L1105
var $el = Backbone.$('<' + _.result(this, 'tagName') + '>').attr(attrs);

I made a Backbone View for SVG and copied the _ensureElement function, replacing the line above with this:

// this.nameSpace = 'http://www.w3.org/2000/svg'; this.tagName = 'svg';
var $el = $(window.document.createElementNS(_.result(this, 'nameSpace'), _.result(this, 'tagName'))).attr(attrs);

Setting Attributes

  • Some SVG attributes are namespaced, like the href of an image or anchor: xlink:href. To set or modify these, use setAttributeNS.
// typical
node.setAttribute('width', '150');

// namespaced
node.setAttributeNS('http://www.w3.org/1999/xlink', 'xlink:href', 'http://viget.com');
  • Tip: attributes set with jQuery are always converted to lowercase! Watch out for issues like this gem:
// jQuery sets 'patternUnits' as 'patternunits'
this.$el.attr('patternUnits', 'userSpaceOnUse');

// Works as expected
this.el.setAttribute('patternUnits', 'userSpaceOnUse');
  • Another tip: jQuery's addClass doesn't work on SVG elements. And element.classList isn't supported on SVG elements in Internet Explorer. But you can still update the class with $.attr('class', value) or setAttribute('class', value).

5. SVG can be animated

CSS

As mentioned in #2, SVG elements can be styled with CSS. The following example uses CSS animations to animate rotation (via transform) and SVG attributes like stroke and fill. In my experience so far, browser support is not as consistent as SMIL or JavaScript.

Browser support: Chrome, Firefox, Safari. Internet Explorer does not support CSS transitions, transforms, and animations on SVG elements. In this particular example, the rotation is broken in Firefox because CSS transform-origin is not supported on SVG elements: https://bugzilla.mozilla.org/show_bug.cgi?id=923193.

See the Pen jtrLF by Chris Manning (@cwmanning) on CodePen.

SMIL

SVG allows animation with SMIL (Synchronized Multimedia Integration Language, pronounced "smile"), which supports the changing of attributes with SVG elements like animate, animateTransform, and animateMotion. See https://developer.mozilla.org/en-US/docs/Web/SVG/SVG_animation_with_SMIL and http://www.w3.org/TR/SVG/animate.html for more. The following example is animated without any CSS or JavaScript.

Browser support: Chrome, Firefox, Safari. Internet Explorer does not support SMIL animation of SVG elements.

See the Pen jtrLF by Chris Manning (@cwmanning) on CodePen.

JavaScript

Direct manipulation of SVG element attributes allows for the most control over animations. It's also the only method of the three that supports animation in Internet Explorer. If you are doing a lot of work, there are many libraries to speed up development time like svg.js (used in this example), Snap.svg, and d3.

See the Pen jtrLF by Chris Manning (@cwmanning) on CodePen.

TL;DR

SVG isn't limited to whatever Illustrator outputs. Using SVG in HTML is well-supported and offers many different options to style and animate content. If you're interested in learning more, check out the resources below.

Additional Resources

Author: "Chris Manning" Tags: "Extend"
Date: Tuesday, 01 Jul 2014 16:00

I’m a little belated, but I was lucky enough to attend and speak at the first Craft CMS Summit two weeks ago. This was the first online conference I had ever attended, and I was thoroughly impressed. I had always been a bit hesitant to attend online conferences because I was unsure about the quality, but after experiencing one firsthand I won't hesitate in the future. Everything was very well organized, and the speakers all gave excellent presentations. It was also nice to sit and learn in the comfort of my own home instead of dealing with the extra burden of traveling to a conference. Side note: Environments for Humans, the company that hosted the conference, has additional upcoming events.

State of Craft CMS

Brandon Kelly started the conference by giving a brief history of Craft and a peek at some new features. Here are a couple of bullets I pulled out:

  • There have been over 10 iterations of just the basic Control Panel layout.
  • They invited 10 people into the Blocks (Craft’s previous name) private alpha. There was minimal functionality, no Twig templating language (they created their own), and they just wanted to get some eyes on the interface.
  • There have been over 13,600 licenses issued.
  • 3,300 unique sites with recent CP usage in the last 30 days.
  • Revenue has been excellent. He also said June is going to have another spike.
  • They consider Craft, as of now, to hit 80% of what a site needs to do. The other 20% is really custom stuff that won’t be baked in.
  • Next batch of stuff is going to be usability improvements and improving their docs.
  • They are hiring a third party company to help with docs.
  • Saving big stuff for 3.0.
  • 3.0 will have in-browser asset editing, which they demoed.
  • Plugin store is coming this year. This will allow developers to submit their plugins to Pixel & Tonic. Those plugins will then be available for download and update from within the Craft control panel.

E4H has also made the entire recording available for free.

Twig for Designers

Ben Parizek next gave a presentation on Twig, the templating engine that Craft uses. He shared this awesome spreadsheet which is a nice resource for example code for all of the default Craft custom fields.

Template Organization

Anthony Colangelo gave an interesting presentation about template organization. My main takeaway was to think about the structure of your templates based on the type of template, and not just the sections of the site. You can view the slides on Speaker Deck.

Craft Tips & Tricks

I was struggling to come up with a topic, so I just ran through a collection of real-world tips and tricks I had come across while building Craft sites. Here are a couple of my favorite ones from the presentation:

Merge

The Twig merge filter can help to reduce duplication in your template:

{% set filters = ['type', 'product', 'activity', 'element'] %}
{% set params = { section: 'media', limit: 12 } %}

{# Apply filter? #}
{% if craft.request.segments[2] is defined and craft.request.segments[1] in filters %}
	{% switch craft.request.segments[1] %}
		{% case 'product' %}
			{% set product = craft.entries({ slug: craft.request.segments[2], section: 'product' }).first() %}
			{% set params = params | merge({ relatedTo: product }) %}
		{% case 'type' %}
			{% set params = params | merge({ type: craft.request.segments[2] }) %}

		...
	{% endswitch %}
{% endif %}

{% set entries = craft.entries(params) %}

That code sample was used to apply filters on a media page. This way, we could reuse a single template.

Macros

Macros are kinda like helpers. They are useful for creating little reusable functions:

_helpers/index.html

{%- macro map_link(address) -%}
	http://maps.google.com/?q={{ address | url_encode }}
{%- endmacro -%}

contact/index.html

{% import "_helpers" as helpers %}

<a href="{{ helpers.map_link('400 S. Maple Avenue, Suite 200, Falls Church, VA 22046') }}">Map</a>

That code will result in a Google Maps link for the address, labeled "Map".

Element Types and Plugin Development

Ben Croker talked about building plugins, and specifically Element Types. Element Types are the foundation of Craft’s entries, users, assets, globals, categories, etc. Craft has given us the ability to create Element Types through plugins. It’s not thoroughly documented yet (this is all they have), but you can use the existing Element Types to learn how to build them. You can take a look at his slides on Speaker Deck, but the bulk of the presentation was a demo of an Element Type that he built.

Craft Q&A Round Table

The Pixel & Tonic team sat around and answered questions from the audience. Here's a smattering of the notes I took:

  • “We have some ideas of how to get DB syncing working; that's why every table has a uid column.” It’s an itch they want to scratch for themselves too.
  • On their list: using a JSON file for field/section setup
  • Matrix within Matrix will be coming eventually. The UI is tough, but the code is all in place.
  • There is the possibility that it will eventually be public on GitHub, but they have to work out some of the app setup stuff.
  • They’ve considered renaming “localization” to just be “sites”, then people can run multiple sites with one Craft install.
  • “Will comments be a part of core?”, “No, that’s plugin territory”
  • Duplicating entries will be coming in Craft 2.2
  • They have a lot of plans for the edit field layout page to make it more user friendly
Author: "Trevor Davis" Tags: "Extend"
Date: Monday, 30 Jun 2014 10:16

One difficult aspect of responsive development is managing complexity in navigation systems. For simple headers and navigation structures, it’s typically straightforward to use a single HTML structure. You then write some clever styles that readjust the navigation system from a small-screen format to one that takes advantage of the increased real estate of larger screens. Finally, you write a small bit of JavaScript for opening and closing a menu on small screens, and you’re done. The overhead of delivering two presentation options to all screens in these cases is fairly low.

However, for cases where more complex navigation patterns are used, and where interactions are vastly different across screen sizes, this approach can be rather bloated, as unnecessary markup, styles and assets are downloaded for devices that don’t end up using them.

On one recent project, we were faced with such a problem. The mobile header was simple and the navigation trigger was the common hamburger icon. The navigation system itself employed a fairly complicated multi-level nested push menu which revealed itself from the left side of the screen. The desktop header and navigation system were arranged differently and implemented a full-screen mega-menu in place of the push menu previously mentioned. Due to the differences and overall complexity of each approach, different sets of markup and styles were required for presentation, and different JavaScript assets were required for each interaction pattern.

View Animated GIF: Mobile | Desktop

Mobile First to the Rescue

In order to have the small-screen experience be as streamlined as possible, we employed a mobile-first approach by using a combination of RequireJS, enquire.js & Handlebars. Here’s how it’s set up:

// main.js
require([
    'enquire'
], function(enquire) {
    enquire.register('screen and (max-width: 1000px)', {
        match: function() {
            require(['mobile-header']);
        }
    });
    enquire.register('screen and (min-width: 1001px)', {
        match: function() {
            require(['desktop-header']);
        }
    });
});

In the above code, we’re using enquire’s register method to check the viewport size, and load the bundled set of JavaScript assets for the appropriate screen size.

Handle the Small Screen Version

// mobile-header.js
require([
    'enquire',
    'dependency1',
    'dependency2'
], function(enquire, Dependency1, Dependency2) {
    enquire.register('screen and (max-width: 1000px)', {
        setup: function() {
            // initialize mobile header/nav
        },
        match: function() {
            // show mobile header/nav
        },
        unmatch: function() {
            // hide mobile header/nav
        }
    });
});

Here, mobile-header.js loads the necessary script dependencies for the mobile header and navigation, and sets up another enquire block for initializing, showing and hiding.

Handle the Large Screen Version

// desktop-header.js
requirejs.config({
    paths: {
        handlebars: 'handlebars.runtime'
    },
    shim: {
        handlebars: {
            exports: 'Handlebars'
        }
    }
});

require([
    'enquire',
    'handlebars.runtime',
    'dependency3',
    'dependency4'
], function(enquire, Handlebars, Dependency3, Dependency4) {
    enquire.register('screen and (min-width: 1001px)', {
        setup: function() {
            // get template and insert markup
            require(['../templates/desktop-header'], function() {
                var markup = JST['desktop-header']();
                $('#mobile-header').after(markup);
            });
        },
        match: function() {
            // show desktop header/nav
        },
        unmatch: function() {
            // hide desktop header/nav
        }
    });
});

* The Handlebars runtime is being used for faster render times. It requires that the desktop header template (referenced in the require call inside setup above) be a pre-compiled Handlebars template. It looks like this and can be auto-generated using grunt-contrib-handlebars.

Finally, desktop-header.js loads the necessary script dependencies for the desktop header and navigation. Another enquire block is set up for fetching and rendering the template, and showing and hiding.

Pros & Cons

The code examples above are heavily stripped down from the original implementation, and it’s also important to note that the RequireJS Optimizer was used to combine related scripts together into a few key modules (main, mobile and desktop), in order to keep http requests to a minimum.

Which brings me to a downside: splitting the JS into small and large modules does add one extra HTTP request as opposed to simply bundling ALL THE THINGS into one JS file. For your specific implementation, the bandwidth and memory savings would have to be weighed against the slight penalty of an extra HTTP request. That penalty may or may not be worth it. There is also an ever-so-slight flash of the mobile header on desktop before it is replaced with the desktop header. We mitigated this with CSS, by simply hiding the mobile header at the large breakpoint.

On the plus side, the advantage here is that the desktop header and associated assets are only loaded when the viewport size is large enough to accommodate it. Also, the JavaScript assets for the mobile multi-level push menu are only loaded for small screens. Bandwidth is more efficiently utilized in that mobile users’ data plans aren’t taxed with downloading unnecessary assets. The browser also has less work to do overall. Everyone rejoices!

Taking it Further

One way to take this further would be to modularize the styles required for rendering the mobile and desktop header and navigation, and bundle those within their respective modules. A completely different approach for managing this type of complexity would be to implement a RESS solution with something like Detector. If you have any other clever ways of managing complexity in responsive navigation patterns, or any responsive components for that matter, let me know in the comments below.

Author: "Jeremy Frank" Tags: "Extend"
Date: Wednesday, 18 Jun 2014 15:43

Recently, Lawson and Ryan launched Sprig, a gem for seeding Rails applications. sprig_logo

Sprig seed files are easy to write, but they do take some time -- time which you may not have enough of. We wanted to generate seed files from records already in the database, and we received similar requests from other Sprig users. At Viget, we try to give the people what the people want, so I jumped in and created Sprig-Reap!

Introducing Sprig-Reap

Sprig-Reap is a rubygem that allows you to generate Sprig-formatted seed files from your Rails app's database.

It provides both a command-line interface via a rake task and a method accessible inside the Rails console.

Command Line

rake db:seed:reap

Rails Console

Sprig.reap

The Defaults

Sprig-Reap, by default, will create a seed file for every model in your Rails app with an entry for each record. The .yml seed files will be placed inside the db/seeds/env folder, where env is the current Rails.env.
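For context, a Sprig seed file is just YAML with a top-level records key, where each entry gets a sprig_id. A reaped users.yml might look roughly like this (the attribute values here are hypothetical; see the Sprig README for the exact format):

```yaml
# db/seeds/development/users.yml (illustrative sketch)
records:
  - sprig_id: 1
    name: 'Jane Doe'
    email: 'jane@example.com'
  - sprig_id: 2
    name: 'John Doe'
    email: 'john@example.com'
```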

Don't like these defaults? No problem!

Customizing the Target Environment Seed Folder

Sprig-Reap can write to a seeds folder named after any environment you want. If the target folder doesn't already exist, Sprig-Reap will create it for you!

# Command Line
rake db:seed:reap TARGET_ENV='dreamland'

# Rails Console
Sprig.reap(target_env: 'dreamland')

Customizing the Set of Models

You tell Sprig-Reap which models you want seeds for and -- BOOM -- it's done:

# Command Line
rake db:seed:reap MODELS=User,Post,Comment

# Rails Console
Sprig.reap(models: [User, Post, Comment])

Omitting Specific Attributes from Seed Files

Tired of seeing those created_at/updated_at timestamps when you don't care about them? Don't want encrypted passwords dumped into your seed files? Just ignore 'em!

# Command Line
rake db:seed:reap IGNORED_ATTRS=created_at,updated_at,password

# Rails Console
Sprig.reap(ignored_attrs: [:created_at, :updated_at, :password])

Reaping with Existing Seed Files

If you have existing seed files you're already using with Sprig, have no fear! Sprig-Reap is friendly with other Sprig seed files and will append to what you already have -- appropriately assigning unique sprig_ids to each entry.

Use Case

If you're wondering what the point of all this is, perchance this little example will pique your interest:

At Viget, QA is a critical part of every project. During the QA process, we generate all kinds of data so we can test all the things. Oftentimes this data describes a very particular, complicated state. Being able to easily take a snapshot of the application's data state is super helpful. Sprig-Reap lets us do this with a single command -- and gives us seed files that can be shared and re-used across the entire project team. If someone happens to run into a hard-to-reproduce issue related to a specific data state, use Sprig-Reap for great justice!

Your Ideas

We'd love to hear what people think about Sprig-Reap and how they're using it. Please share! If you have any comments or ideas of your own when it comes to enhancements, leave a comment below or add an issue to the GitHub repo.

Author: "Ryan Stenberg" Tags: "Extend"
Date: Wednesday, 11 Jun 2014 19:37

Last September, while in Brighton for dConstruct, I attended the second annual IndieWebCampUK, a two-day gathering of web developers focused on building IndieWeb tools.

If you're unfamiliar with the IndieWeb movement, its guiding principle is that you should own your data. In practical terms, this amounts to publishing content on a website at a domain that you own (instead of, say, posting all of your photos to a service like Facebook). Surrounding that principle are a variety of other ideas and tools being created by some amazing people (including Tantek Çelik, Aaron Parecki, Amber Case, and others).

IndieWebCampUK rekindled my desire to publish on my own website and build tools that would help others do the same.

Of all the IndieWeb building blocks being worked on, webmention caught my attention the most. From the wiki:

Webmention is a simple way to notify any URL when you link to it on your site. From the receiver's perspective, it's a way to request notifications when other sites link to it. Webmention is a modern update to Pingback, using only HTTP and x-www-form-urlencoded content rather than XML-RPC requests.

The power of webmention is its simplicity. Unlike sending Pingbacks with XML-RPC, sending a webmention can be as simple as using cURL on the command line to POST to a URL (as shown in this example). Very cool and relatively easy.
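The same request is just as simple from Ruby. Here's a minimal sketch using Net::HTTP: per the protocol, the payload is only two form-encoded parameters, source and target (all URLs below are placeholders):

```ruby
require 'net/http'
require 'uri'

# Build a webmention POST request: a form-encoded body carrying the two
# parameters the protocol defines, source and target.
def build_webmention(endpoint, source:, target:)
  request = Net::HTTP::Post.new(URI(endpoint))
  request.set_form_data('source' => source, 'target' => target)
  request
end

request = build_webmention(
  'http://example.com/webmention-endpoint',
  source: 'http://mysite.example/posts/hello-world',
  target: 'http://example.com/some-article'
)

# Delivering it would look like:
#   endpoint = URI('http://example.com/webmention-endpoint')
#   Net::HTTP.start(endpoint.host, endpoint.port) { |http| http.request(request) }
```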

In the months since IndieWebCampUK, I've been trying to figure out how to best contribute to webmention. Which brings us to…

Webmention Client Plugin for Craft CMS

With some help from Trevor, I've just released version 1.0.0 of a webmention client that adds the ability to send webmentions from Craft. Installation and setup is really easy and is detailed in the project README on GitHub.

For the initial release, the plugin makes available a new "Webmention (targets)" Field Type that can be added to any of your site's Field Layouts. When saving an entry with a webmention field, the plugin will ping each target supplied, looking for a webmention endpoint. If an endpoint is found, then the endpoint, target, and source (the Craft entry's URL) are stored in a queue for processing. Once the queue is ready to be processed, a background task kicks off and sends webmentions to the appropriate websites.

That's it! Your Craft-powered site is now sending webmentions.

Issues, Updates, etc.

I spent some time looking through the FAQs, Issues, and Brainstorming sections of the Webmention wiki page and I think the Craft plugin handles most of the primary use cases. There are some things I'd like to do better in future versions, though:

  • Send a webmention when a URL is removed from the list of targets.
  • Have the plugin crawl an entry's body field(s) for URLs to ping.

The latter item would involve a lot of heavy lifting and some potentially tricky UI, but I'm hoping to tackle that down the line. In the meantime, give the plugin a try and let me know if you run into any problems or have any feature suggestions.

In true IndieWeb fashion, I've published this on my own website first and syndicated it here.

Author: "Jason Garber" Tags: "Extend"
Date: Tuesday, 10 Jun 2014 16:00

Traditionally, stylesheets describe the majority of the presentation layer for a website. However, as JavaScript becomes necessary to present information in a stylistically consistent way, it becomes troublesome to keep these media in sync. Data visualizations and breakpoint-based interaction are prime examples of this, something I bumped into on my most recent project.

I should note that this is not an unsolved problem, and there are many interesting examples of this technique in the wild. However I wanted a simpler solution and I've been wanting to write a Sass plugin anyway.

The result of this curiosity is sass-json-vars. Once required, this gem allows JSON files to be included as valid @import paths, converting the top-level values into the corresponding Sass data types (strings, maps, lists).

Usage

Consider the following snippet of JSON (breakpoints shortened for brevity):

{
    "colors": {
        "red"  : "#c33",
        "blue" : "#33c"
    },

    "breakpoints": {
        "landscape" : "only screen and (orientation : landscape)",
        "portrait"  : "only screen and (orientation : portrait)"
    }
}

sass-json-vars exposes the top level keys as values whenever a JSON file is included using @import.

@import "variables.json"; 

.element {
    color: map-get($colors, red);
    width: 75%;

    @media #{map-get($breakpoints, portrait)} {
        width: 100%;
    }
}

Similarly, these values can be accessed in JavaScript using a module system such as CommonJS with browserify. For example, if we need to determine if the current browser's orientation is at landscape:

var breakpoints = require("./variables.json").breakpoints;

// https://developer.mozilla.org/en-US/docs/Web/API/Window.matchMedia
var isLandscape  = matchMedia(breakpoints.landscape).matches;

if (isLandscape) {
    // do something in landscape mode
}

Integration

sass-json-vars can be included similarly to sass-globbing or other plugins that add functionality to @import. Simply include it as a dependency in your Gemfile:

gem 'sass-json-vars'

or within Rails:

group :assets do
    gem 'sass-json-vars'
end

Asset paths should be handled automatically when using sass-json-vars with the Ruby on Rails asset pipeline.

Final thoughts

sass-json-vars supports all of the data types provided by Sass. This could be used to describe media queries for the breakpoint Sass plugin, or store special characters for icons generated by IcoMoon.

Check out the repo on GitHub and feel free to comment about how you use it!

Author: "Nate Hunzaker" Tags: "Extend"
Date: Friday, 06 Jun 2014 15:24

I recently built an API with Sinatra and ran into a recurring challenge when dealing with resource-specific routes (like /objects/:id). The first thing I had to handle in each of those routes was whether or not a record for both the resource type and id existed. If it didn't, I wanted to send back a JSON response with some meaningful error message letting the API consumer know that they asked for a certain kind of resource with an ID that didn't exist.

My first pass looked something like this:

get '/objects/:id' do |id|
  object = Object.find_by_id(id)

  if object.nil?
    status 404
    json(errors: "Object with an ID of #{id} does not exist")
  else
    json object
  end
end

put '/objects/:id' do |id|
  object = Object.find_by_id(id)

  if object.nil?
    status 404
    json(errors: "Object with an ID of #{id} does not exist")
  else
    if object.update_attributes(params[:object])
      json object
    else
      json(errors: object.errors)
    end
  end
end

Seems ok, but there would be a lot of duplication if I had these if/else statements in every resource-specific route. Lately, I've looked for common if/else conditionals like this as an opportunity for method abstraction, particularly with the use of blocks and yield. The following methods are an example of this kind of abstraction:

def ensure_resource_exists(resource_type, id)
  resource = resource_type.find_by_id(id)

  if resource.nil?
    status 404
    json(errors: "#{resource_type} with an ID of #{id} does not exist")
  else
    yield resource if block_given?
  end
end

Then the initial example would look something like:

get '/objects/:id' do |id|
  ensure_resource_exists(Object, id) do |obj|
    json obj
  end
end

put '/objects/:id' do |id|
  ensure_resource_exists(Object, id) do |obj|
    if obj.update_attributes(params[:object])
      json obj
    else
      json(errors: obj.errors)
    end
  end
end

It hides away the distracting error case handling and gives us a readable, declarative method body.  Next time you find yourself dealing with repetitive error cases, use blocks like this for great justice!

Author: "Ryan Stenberg" Tags: "Extend"
Date: Wednesday, 21 May 2014 17:05

Have you ever wanted to use an enumerated type in your Rails app? After years of feature requests, Rails 4.1 finally added them: a simple implementation that maps strings to integers. But what if you need something different?

On a recent project, I implemented a survey, where animals are matched by answering a series of multiple-choice questions. The models looked like this:

class Animal < ActiveRecord::Base
  has_many :answer_keys
end

class AnswerKey < ActiveRecord::Base
  belongs_to :animal

  validates :color, :hair_length, presence: true
end

An animal has many answer keys, where an answer key is a set of survey answers that matches that animal. color and hair_length each represent a multiple-choice answer and are natural candidates for an enum.

The simplest possible implementation might look like this:

validates :color,       inclusion: { in: %w(black brown gray orange yellow white) }
validates :hair_length, inclusion: { in: %w(less_than_1_inch 1_to_3_inches longer_than_3_inches) }

However, there were additional requirements for each of these enums:

  • Convert the value to a human readable name, for display in the admin interface
  • Export all of the values and their human names to JSON, for consumption and display by a mobile app

Currently, the enum values are strings; what I really need is an object that looks like a string but has some custom behavior. A subclass of String should do nicely:

module Survey
  class Enum < String
    # Locale scope to use for translations
    class_attribute :i18n_scope

    # Array of all valid values
    class_attribute :valid_values

    def self.values
      @values ||= Array(valid_values).map { |val| new(val) }
    end

    def initialize(s)
      unless s.in?(Array(valid_values))
        raise ArgumentError, "#{s.inspect} is not a valid #{self.class} value"
      end

      super
    end

    def human_name
      if i18n_scope.blank?
        raise NotImplementedError, 'Your subclass must define :i18n_scope'
      end

      I18n.t!(value, scope: i18n_scope)
    end

    def value
      to_s
    end

    def as_json(opts = nil)
      {
        'value'      => value,
        'human_name' => human_name
      }
    end
  end
end

This base class handles everything we need: validating the values, converting to human readable names, and exporting to JSON. All we have to do is subclass it and set the two class attributes:

module Survey
  class Color < Enum
    self.i18n_scope = 'survey.colors'

    self.valid_values = %w(
      black
      brown
      gray
      orange
      yellow
      white
    )
  end

  class HairLength < Enum
    self.i18n_scope = 'survey.hair_lengths'

    self.valid_values = %w(
      less_than_1_inch
      1_to_3_inches
      longer_than_3_inches
    )
  end
end

Finally, we need to add our human readable translations to the locale file:

en:
  survey:
    colors:
      black: Black
      brown: Brown
      gray: Gray
      orange: Orange
      yellow: Yellow/Blonde
      white: White
    hair_lengths:
      less_than_1_inch: Less than 1 inch
      1_to_3_inches: 1 to 3 inches
      longer_than_3_inches: Longer than 3 inches

We now have an enumerated type in pure Ruby. The values look like strings while also having the custom behavior we need.

Survey::Color.values

# => ["black", "brown", "gray", "orange", "yellow", "white"]

Survey::Color.values.first.human_name

# => "Black"

Survey::Color.values.as_json

# => [{"value"=>"black", "human_name"=>"Black"}, {"value"=>"brown", "human_name"=>"Brown"}, ...]

The last step is to hook our new enumerated types into our AnswerKey model for great justice. We want color and hair_length to be automatically converted to instances of our new enum classes. Fortunately, my good friend Zachary has already solved that problem. We just have to update our Enum class with the right methods:

def self.load(value)
  if value.present?
    new(value)
  else
    # Don't try to convert nil or empty strings
    value
  end
end

def self.dump(obj)
  obj.to_s
end
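Stripped of the Rails machinery, the contract these two methods satisfy is easy to see in plain Ruby. Here Fruit is a hypothetical stand-in for a Survey::Enum subclass:

```ruby
# Hypothetical stand-in for a Survey::Enum subclass, showing the
# load/dump contract that ActiveRecord's serialize relies on.
class Fruit < String
  VALID_VALUES = %w(apple pear).freeze

  # Called when reading from the database: string in, enum instance out.
  def self.load(value)
    value.nil? || value.empty? ? value : new(value)
  end

  # Called when writing to the database: enum instance in, string out.
  def self.dump(obj)
    obj.to_s
  end

  def initialize(s)
    unless VALID_VALUES.include?(s)
      raise ArgumentError, "#{s.inspect} is not a valid #{self.class} value"
    end

    super
  end
end

fruit = Fruit.load('apple')
fruit.class        # => Fruit
Fruit.dump(fruit)  # => "apple"
Fruit.load(nil)    # => nil (blank values pass through untouched)
```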

And set up our model:

class AnswerKey < ActiveRecord::Base
  belongs_to :animal

  serialize :color,       Survey::Color
  serialize :hair_length, Survey::HairLength

  validates :color,       inclusion: { in: Survey::Color.values }
  validates :hair_length, inclusion: { in: Survey::HairLength.values }
end

BONUS TIP — We probably need to add these enums to a form in the admin interface, right? If you're using Formtastic, it automatically looks at our #human_name method and does the right thing:

f.input :color, as: :select, collection: Survey::Color.values

Shazam.

Hey friend, have you implemented enums in one of your Rails apps? How did you do that? Let me know in the comments below. Have a nice day.

Author: "Chris Jones" Tags: "Extend"
Date: Friday, 16 May 2014 09:23

As we have the opportunity to work on more Craft sites at Viget, we’ve been able to do some interesting integrations, like our most recent one with the ecommerce platform Shopify. Below is a step-by-step guide to integrating Craft and Shopify using a plugin I built.

Craft Configuration

First, you need to download and install the Craft Shopify plugin, and add your Shopify API credentials in the plugin settings.

With the plugin installed, you also get a custom fieldtype that lets you select a Shopify Product from a dropdown. So let’s create a field called Shopify Product.

Next, let’s create a Product section in Craft and add our Shopify Product field to that section. Now, when we go to publish a Product, we can associate the product in Craft with a product in Shopify.

The idea here is that we can use the powerful custom field functionality of Craft to build out product pages but still pull in the Shopify specific data (price, variants, etc). Then, we let Shopify handle the cart and checkout process.

Craft Templates

The Shopify plugin also provides some functionality to retrieve information from the Shopify API. So on our products/_entry template, we can use the value from our shopifyProduct field to grab information about the product from Shopify.

{% set shopify = craft.shopify.getProductById({ id: entry.shopifyProduct }) %}

That will hit the Shopify product endpoint, and return the data you can use in Craft. You can also specify particular fields to make the response smaller.

This means we can pretty easily create an Add to Cart form in our Craft templates.

{% set shopify = craft.shopify.getProductById({ id: entry.shopifyProduct, fields: 'variants' }) %}

<form action="http://your.shopify.url/cart/add" method="post">
	<select name="id">
		{% for variant in shopify.variants %}
			<option value="{{ variant.id }}">{{ variant.title }} - ${{ variant.price }}</option>
		{% endfor %}
	</select>
	<input type="hidden" name="return_to" value="back">
	<button type="submit">Add to Cart</button>
</form>

The plugin also provides a couple of additional methods to retrieve product information from Shopify.

craft.shopify.getProducts()

This method hits the products endpoint, and you can pass in any of the parameters that the documentation notes.

{% for product in craft.shopify.getProducts({ fields: 'title,variants', limit: 5 }) %}
	<div class="product">
		<h2>{{ product.title }}</h2>
		<ul>
			{% for variant in product.variants %}
				<li>{{ variant.title }} - ${{ variant.price }}</li>
			{% endfor %}
		</ul>
	</div>
{% endfor %}

craft.shopify.getProductsVariants()

I ended up creating this method because, on the products index, I wanted an easy way to output the variants for each Craft product without having to hit the API multiple times. You call this method once, and the keys of the returned array are the product IDs. Again, you can pass in any of the parameters that the products endpoint documentation references, but if you don’t include the ID and variants, there isn’t any point in using this method!

{% set shopifyProducts = craft.shopify.getProductsVariants() %}

{% for entry in craft.entries({ section: 'product' }) %}
	<div class="product">
		<h2><a href="{{ entry.url }}">{{ entry.title }}</a></h2>

		{% if entry.shopifyProduct and shopifyProducts[entry.shopifyProduct] %}
			<ul>
				{% for variant in shopifyProducts[entry.shopifyProduct] %}
					<li>{{ variant.title }} - ${{ variant.price }}</li>
				{% endfor %}
			</ul>
		{% endif %}
	</div>
{% endfor %}

Really you can do whatever you want with the data that's returned; this is just a simple example to output the variants for each product.

So download the Craft Shopify plugin, and enjoy the simple integration with Shopify!

Author: "Trevor Davis" Tags: "Extend"
Date: Wednesday, 14 May 2014 19:00

Z-indexes: They're a huge problem.

Okay, they're not a huge problem. Still, there comes a moment on every large project where I'm opening several files to figure out why the header is above the modal but under the breadcrumb but over the content area. It's a common CSS issue that takes only a minute to resolve, but just for fun, let's ask ourselves an important question: can this be automated?

The answer: yes! With a little Sass magic, you can automate the +1/-1 stacking of z-index items and keep your hands clean of all those gross numbers.

Notes:

  1. The basic solution depends on Sass maps, which might not be available in your version of Sass. For a map-less version, read alternate solutions
  2. I'm using .scss syntax for these examples, although I prefer the indented syntax. Currently, .scss syntax is better for maps, because it allows multi-line rules. If this bugs you as much as it does me, comment here.

Basic example

http://codepen.io/averyvery/pen/jpyax

Start with a list

Begin with a simple list of the item names you want. These will be the keys you reference later, when you're building actual components:

$z-indexed-items: 'footer', 'content', 'sidebar', 'content-announcement', 'header', 'modal', 'survey-overlay';

Iterate and map

Now, iterate over your list and build a map that uses your items for keys and your counter for the z-index:

$z-index-map: ();
$counter: 1;

@each $item in $z-indexed-items {
  $z-index-map: map-merge($z-index-map, ($item: $counter));
  $counter: $counter + 1;
}

Mix in

z is a simple mixin that outputs the z-index from the map you've created. Done!

@mixin z($key) {
  z-index: map-get($z-index-map, $key);
}

@include z('survey-overlay'); // sets z-index to 7

Alternate solutions

Nested maps

http://codepen.io/averyvery/pen/Hmwhb

$z-indexed-items: (
  'global': (
    'footer',
    'header'
  ),
...

@include z('global', 'header'); // sets z-index to 2

One global set of z-indexes is useful, but it's not going to totally replace z-index for you. Here's an example of the same idea (pre-building a map of z-indexes, then performing look-ups) in a two-tier map that uses groups.

Selectors instead of keys

http://codepen.io/averyvery/pen/gIpjy

// sets .footer z-index to 1, .header to 2
$z-indexed-selectors: '.footer' '.header';

Iterating over selectors might be simpler if you're defining a basic global stack, but as the site grows it's probably going to come back to bite you. Tricky selectors, media queries, and dynamic class changes can trip up this approach.

List matching

http://codepen.io/averyvery/pen/xondH

$modal-z: 'photo', 'caption', 'title', 'close-button';

.close-button {
  @include z('close-button', $modal-z);
}

The above methods have a downside: they require that you define your z-index stacks up front, but in a large system you might want to write them as needed alongside your components. In that case, you can use a mixin that iterates over a provided list, like in this example.


Admittedly, z-index stacks aren't the most pressing problem in web development, but hopefully this post outlined a few clever ways to use Sass to abstract them (or other issues) out of your code. If you have any suggested solutions or questions about the idea, let me know in the comments!

Author: "Doug Avery" Tags: "Extend"
Date: Wednesday, 30 Apr 2014 14:44

In case you're unfamiliar, ActiveRecord#serialize is a method that allows an attribute to be saved in the database and later retrieved as an object. The common example given involves storing a user's preferences like so:

# Serialize a preferences attribute.
class User < ActiveRecord::Base
  serialize :preferences
end

user = User.new
user.preferences = { send_spam: false, send_deals: true }
user.save # => preferences saved as YAML in a text field, later retrieved as a Hash

You can even enforce the type of the object stored in the column:

class User < ActiveRecord::Base
  serialize :preferences, Hash
end

user = User.new
user.preferences = 'lolwut'
user.save # => raises ActiveRecord::SerializationTypeMismatch

Pretty neat stuff, but it requires the database column to be a text field. What if I want the database column to be a different type? Well, I recently dug into the Rails source code to find out how serialize works and what's required to use it with a custom data type.

I had a feature that allowed a user to manage a time duration in minutes and seconds. This time duration would be stored as the total amount of seconds in the database so that it could easily be converted to other units of time. It would be ideal if that seconds field had an integer type. After several iterations, I settled on a Duration class that would handle the conversions. The next step was to figure out how to store and retrieve an instance of Duration in the database.

Upon inspection of the serialize method within the Rails source, I discovered this interesting conditional (copied here for convenience):

coder = if [:load, :dump].all? { |x| class_name.respond_to?(x) }
          class_name
        else
          Coders::YAMLColumn.new(class_name)
        end

So, if the provided class_name responds to load and dump, then the serialize method will use that class. Otherwise, it will fallback to the Coders::YAMLColumn class to handle the loading and saving to the database. I took a look at how the load and dump methods were implemented on the Coders::YAMLColumn class:

def dump(obj)
  return if obj.nil?

  unless obj.is_a?(object_class)
    raise SerializationTypeMismatch,
      "Attribute was supposed to be a #{object_class}, but was a #{obj.class}. -- #{obj.inspect}"
  end
  YAML.dump obj
end

def load(yaml)
  return object_class.new if object_class != Object && yaml.nil?
  return yaml unless yaml.is_a?(String) && yaml =~ /^---/
  obj = YAML.load(yaml)

  unless obj.is_a?(object_class) || obj.nil?
    raise SerializationTypeMismatch,
      "Attribute was supposed to be a #{object_class}, but was a #{obj.class}"
  end
  obj ||= object_class.new if object_class != Object

  obj
end

To briefly summarize the above code, the dump method takes an object and returns the value to be stored in the database, and the load method takes the database value and returns an instance of the specified class.

I took everything that I learned about the serialize method and applied it in a custom Duration data type:

class Duration
  # Used for `serialize` method in ActiveRecord
  class << self
    def load(duration)
      self.new(duration || 0)
    end

    def dump(obj)
      unless obj.is_a?(self)
        raise ::ActiveRecord::SerializationTypeMismatch,
          "Attribute was supposed to be a #{self}, but was a #{obj.class}. -- #{obj.inspect}"
      end

      obj.length
    end
  end


  attr_accessor :minutes, :seconds

  def initialize(duration)
    @minutes = duration / 60
    @seconds = duration % 60
  end

  def length
    (minutes.to_i * 60) + seconds.to_i
  end
end
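As a quick sanity check, the conversion logic round-trips cleanly. Here's a standalone sketch of the same class (the type check in dump is omitted so it runs outside Rails):

```ruby
# Trimmed-down Duration: same conversion logic as above, minus the
# ActiveRecord-specific type check in dump.
class Duration
  attr_accessor :minutes, :seconds

  def self.load(duration)
    new(duration || 0)
  end

  def self.dump(obj)
    obj.length
  end

  def initialize(duration)
    @minutes = duration / 60
    @seconds = duration % 60
  end

  def length
    (minutes.to_i * 60) + seconds.to_i
  end
end

duration = Duration.load(125) # 125 seconds read from the integer column
duration.minutes              # => 2
duration.seconds              # => 5
Duration.dump(duration)       # => 125
```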

Then, in my ActiveRecord model, I added the following snippet of code:

serialize :duration_field, Duration

delegate :minutes, :minutes=, :seconds, :seconds=, to: :duration_field

And there we have it -- a lightweight class that's able to take advantage of a method provided by the Rails framework.

What do you think? Are you using the serialize method with your custom data types? Let me know in the comments below.

Author: "Zachary Porter" Tags: "Extend"
Date: Friday, 25 Apr 2014 14:04

One of my favorite parts of being a developer here at Viget is our developer book club. We’ve read some fantastic books, but for our most recent go-round, we decided to try something different. A few of us have been interested in the Go programming language for some time, so we decided to combine two free online texts, An Introduction to Programming in Go and Go By Example, plus a few other resources, into a short introduction to the language. Chris and Ryan put together a curriculum that I thought was too good not to share with the internet at large.

Week 1

Chapter 1: Getting Started

  • Files and Folders
  • The Terminal
  • Text Editors
  • Go Tools
  • Go By Example

Chapter 2: Your First Program

  • How to Read a Go Program

Chapter 3: Types

Chapter 4: Variables

Chapter 5: Control Structures

Chapter 6: Arrays, Slices and Maps

Week 2

Chapter 7: Functions

Chapter 8: Pointers

Week 3

Chapter 9: Structs and Interfaces

Chapter 10: Concurrency

Week 4

Week 5

Chapter 11: Packages

  • Creating Packages
  • Documentation

Chapter 12: Testing

Chapter 13: The Core Packages

Chapter 14: Next Steps

  • Study the Masters
  • Make Something
  • Team Up

* * *

Go is an exciting language, and a great complement to the Ruby work we do. Working through this program was a fantastic intro to the language and prepared us to create our own Go programs for great justice. Give it a shot and let us know how it goes.

Author: "David Eisinger" Tags: "Extend"
Date: Friday, 18 Apr 2014 09:40

Drink in that impressive title for a moment. It's definitely better than the reality of me saying "I implemented Conway's Game of Life ... in Processing ... poorly ... using Khan Academy's online code editor". It's the truth, though.

After working on pixel drawings with the kids, I was inspired to try more programs of my own. Having never implemented the Game of Life before, I thought it would be a good opportunity to put my recently-discovered expertise with pixel drawing to work. The rules of the game are simple:

  1. If a live cell has fewer than 2 neighbors, it dies (from underpopulation)
  2. If a live cell has more than 3 neighbors, it dies (from overcrowding)
  3. If a live cell has 2 or 3 neighbors, it lives on to the next generation
  4. If a dead cell has exactly 3 neighbors, it is born (from reproduction)

The bonus is that it's a "zero player" game — my favorite!

My First Attempt

Yes, it's true. I did admit to never implementing this simulation before. I'm sure this will end well.

The overall approach is simple:

  1. Create a matrix of values (0 = dead, 1 = alive)
  2. Iterate through each cell in the matrix and count the neighbors
  3. Apply the rules to the current cell to turn it on or off
  4. Repeat with the remaining cells
  5. Draw the updated grid
  6. Repeat

Following these steps allowed me to create a convincing simulation:

Not bad. After looking closer at how the simulation behaved, I noticed that the initial patterns that should create oscillators didn't ... oscillate:

My simulation wasn't treating each drawing iteration as an independent generation. Instead, the current board state was constantly being modified as I analyzed each cell in the matrix.

A Generation at a Time

To create a more conforming simulation, I modified my initial approach to buffer the next state of the board:

  1. Create 2 matrices of values to represent the current and next generation
  2. Iterate through each cell in the current generation and count the neighbors
  3. Apply the rules to turn the cell on / off in the next generation
  4. Copy the next generation to the current generation
  5. Draw the updated grid
  6. Repeat
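The buffered approach translates readily to code. Here's an illustrative Ruby sketch (the original ran as Processing-style JavaScript on Khan Academy), using the standard B3/S23 rules:

```ruby
# Count live neighbors of a cell, treating cells beyond the edge as dead.
def neighbor_count(grid, row, col)
  count = 0
  (-1..1).each do |dr|
    (-1..1).each do |dc|
      next if dr == 0 && dc == 0
      r, c = row + dr, col + dc
      next unless r.between?(0, grid.length - 1) && c.between?(0, grid[0].length - 1)
      count += grid[r][c]
    end
  end
  count
end

# Build the next generation as a fresh matrix, so the current generation
# is never modified mid-analysis (the bug in the first attempt).
def next_generation(current)
  current.each_with_index.map do |row_cells, row|
    row_cells.each_with_index.map do |cell, col|
      n = neighbor_count(current, row, col)
      if cell == 1
        (n == 2 || n == 3) ? 1 : 0 # survival (S23)
      else
        n == 3 ? 1 : 0             # birth (B3)
      end
    end
  end
end

# A "blinker" oscillates between horizontal and vertical:
blinker = [
  [0, 0, 0],
  [1, 1, 1],
  [0, 0, 0]
]
next_generation(blinker)
# => [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

Because the new matrix is built from scratch rather than updated in place, each draw really is an independent generation.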

This resulted in a more convincing simulation:

Now that I was on the right track, I wanted to see what other simulations were possible.

Changing the Rules

The Wikipedia page for the Game of Life describes some variations on the rules that result in different patterns. The well-known rules are expressed as "B3/S23" — a cell is born if it has exactly 3 neighbors, and stays alive if there are 2 or 3 neighbors. Changing these rules results in other interesting patterns. Take B1/S12, for example:

There are many other possibilities out there (try B1/S1, for example), so check out the simulation on my programs page and tweak some of the rules to see what you come up with.

Author: "Patrick Reagan" Tags: "Extend"
Date: Friday, 11 Apr 2014 16:31

Here at Viget, we've successfully used ActiveAdmin on a number of custom CMS projects. ActiveAdmin is a great help in providing a sensible set of features out-of-the-box, while still allowing heavy customization for great justice. It also has a very opinionated way of doing things, which can make customization a bit tricky (e.g. layouts via Arbre, custom pages, etc.).

After working with ActiveAdmin for a couple of years, here are 8 customizations that I find myself using often:

1. Adding a custom behavior to ActiveAdmin::BaseController:

Oftentimes, you'll want to add a before_filter as you would to your typical ApplicationController, but scoped to the ActiveAdmin engine. In this case, you can add custom behavior by following this pattern:

# config/initializers/active_admin_extensions.rb:
ActiveAdmin::BaseController.send(:include, ActiveAdmin::SiteRestriction)

# lib/active_admin/site_restriction.rb:
module ActiveAdmin
  module SiteRestriction
    extend ActiveSupport::Concern

    included do
      before_filter :restrict_to_own_site
    end

    private

    def restrict_to_own_site
      unless current_site == current_admin_user.site
        render_404
      end
    end
  end
end

2. Conditionally add a navigation menu item:

When you have multiple admin types, you may want to only show certain menu items to a specific type of admin user. You can conditionally show menu items:

# app/admin/resource.rb:
ActiveAdmin.register Resource do
  menu :parent => "Super Admin Only", :if => proc { current_admin_user.super_admin? }
end

3. To display an already uploaded image on the form:

ActiveAdmin uses Formtastic behind the scenes for forms. Persisting uploads between invalid form submissions can be accomplished via f.input :image_cache, :as => :hidden. However, you may also want to display the already-uploaded image when visiting the form to edit an existing item. You could set one using a hint (f.input :image, :hint => (f.template.image_tag(f.object.image.url) if f.object.image?)), but this won't allow you to set any text as a hint. Instead, you can add some custom behavior to the Formtastic FileInput:

# app/admin/inputs/file_input.rb
class FileInput < Formtastic::Inputs::FileInput
  def to_html
    input_wrapping do
      label_html <<
      builder.file_field(method, input_html_options) <<
      image_preview_content
    end
  end

  private

  def image_preview_content
    image_preview? ? image_preview_html : ""
  end

  def image_preview?
    options[:image_preview] && @object.send(method).present?
  end

  def image_preview_html
    template.image_tag(@object.send(method).url, :class => "image-preview")
  end
end

# app/admin/my_class.rb
ActiveAdmin.register MyClass do

  form do |f|
    f.input :logo_image, :image_preview => true
  end
end

4. Scoping queries:

ActiveAdmin uses Inherited Resources behind the scenes. Inherited Resources uses the concept of resource and collection. ActiveAdmin uses a controller method called scoped_collection which we can override to add our own scope.

In your ActiveAdmin resource definition file:

# app/admin/my_class.rb
ActiveAdmin.register MyClass do 

  controller do
    def scoped_collection
      MyClass.for_site(current_site)
    end
  end
end

Similarly, you can override the resource method to customize how the singular resource is found.

5. Customizing the method by which a resource is found by the URL:

Oftentimes we add a “slug” to a resource for prettier URLs. You can use your friendly URL parameters in the CMS as well:

# app/admin/page.rb
ActiveAdmin.register Page do

  controller do
    defaults :finder => :find_by_slug!
  end
end

I found that when a user changes the slug value, submits the form, and the form is invalid, the next form submission will attempt to POST to the invalid slug. To fix this, I updated my model's to_param method from:

def to_param
  slug
end

To:

def to_param
  if invalid? && slug_changed?
    # return original slug value
    changes["slug"].first
  else
    slug
  end
end

6. Inserting arbitrary content into the form:

It can be handy to insert some explanatory text, or even images, within the Formtastic form itself. You'll need to insert the content into the form buffer:

Text:

form do |f|
  f.form_buffers.last << "<li>My Text</li>".html_safe
end

Image:

form do |f|
  f.form_buffers.last << f.template.image_tag('http://doge.com/image.png', :height => 350)
end

7. Dynamic Site Title:

It's possible you have multiple user types or sites that are managed via a single CMS. In this case, you may want to update the title displayed in the Navigation Menu.

# config/initializers/active_admin.rb
config.site_title = proc { "#{current_site.name} CMS" }

8. Manually defining sort order of top level navigation menu items:

It's easy to define the priority (sort order) of menu items when they belong to a parent. However, if you're trying to set the sort order of top-level parent items, it's a bit trickier.

# config/initializers/active_admin.rb
config.namespace :admin do |admin|
  admin.build_menu do |menu|
    menu.add :label => "First Item",  :priority => 1
    menu.add :label => "Second Item", :priority => 2
    menu.add :label => "Third Item",  :priority => 3
  end
end

What customizations have you found that you think are worth sharing? Please leave a note in the comments below!

Author: "Mike Ackerman" Tags: "Extend"
Date: Tuesday, 08 Apr 2014 10:28

Getting Started

One weekend, I decided to really immerse myself in Grunt and RequireJS. Gotta stay up on these things, right? Done. Then Monday rolls around, “and just like that Grunt and RequireJS are out, it’s all about Gulp and Browserify now.”

(╯°□°)╯︵ ┻━┻

When I was done flipping tables, I set aside my newly acquired Grunt + RequireJS skills, and started over again with Gulp and Browserify to see what all the fuss was about.

You guys. The internet was right. To save you some googling, doc crawling, and trial and error I went through, I've assembled some resources and information I think you'll find helpful in getting started.

 ┬─┬ノ( º _ ºノ) 

Gulp + Browserify starter repo

I've created a Gulp + Browserify starter repo with examples of how to accomplish some common tasks and workflows.

Frequently Asked Questions Wiki

Node, npm, CommonJS Modules, package.json…wat? When I dove into this stuff, much of the documentation out there assumed a familiarity with things with which I was not at all familiar. I've compiled some background knowledge into a FAQ Wiki attached to the above-mentioned starter repo to help fill in any knowledge gaps.

Why Gulp is Great

It makes sense.

I picked up Gulp just days after learning Grunt. For whatever reason, I found Gulp to be immediately easier and more enjoyable to work with. The idea of piping a stream of files through different processes makes a lot of sense.

gulp's use of streams and code-over-configuration makes for a simpler and more intuitive build. - gulpjs.com

Here's what a basic image processing task might look like:

var gulp       = require('gulp');
var imagemin   = require('gulp-imagemin');

gulp.task('images', function(){
    return gulp.src('./src/images/**')
        .pipe(imagemin())
        .pipe(gulp.dest('./build/images'));
});

First, gulp.src sucks in a stream of files and gets them ready to be piped through whatever tasks you've made available. In this instance, I'm running all the files through gulp-imagemin, then outputting them to my build folder using gulp.dest. To add additional processing (renaming, resizing, liveReloading, etc.), just tack on more pipes with tasks to run.

Speed!

It's really fast! I just finished building a fairly complex JS app. It handled compiling SASS, CoffeeScript with source maps, Handlebars Templates, and running LiveReload like it was no big deal.

By harnessing the power of node's streams you get fast builds that don't write intermediary files to disk. - gulpjs.com

This killer way to break up your gulpfile.js

A gulpfile is what gulp uses to kick things off when you run gulp. If you're coming from Grunt, it's just like a gruntfile. After some experimenting, some Pull Request suggestions, and learning how awesome Node/CommonJS modules are (more on that later), I broke out all my tasks into individual files, and came up with this gulpfile.js. I'm kind of in love with it.

var gulp = require('./gulp')([
    'browserify',
    'compass',
    'images',
    'open',
    'watch',
    'serve'
]);

gulp.task('build', ['browserify', 'compass', 'images']);
gulp.task('default', ['build', 'watch', 'serve', 'open']);

~200 characters. So clean, right? Here's what's happening: I'm requiring a gulp module I've created at ./gulp/index.js, and am passing it a list of tasks that correspond to task files I've saved in ./gulp/tasks.

var gulp = require('gulp');

module.exports = function(tasks) {
    tasks.forEach(function(name) {
        gulp.task(name, require('./tasks/' + name));
    });

    return gulp;
};

For each task name in the array we're passing to this method, a gulp.task gets created with that name, and with the method exported by a file of the same name in my ./tasks/ folder. Now that each individual task has been registered, we can use them in the bulk tasks like default that we defined at the bottom of our gulpfile.

folder structure

This makes reusing and setting up tasks on new projects really easy. Check out the starter repo, and read the Gulp docs to learn more.

Why Browserify is Great

“Browserify lets you require('modules') in the browser by bundling up all of your dependencies.” - Browserify.org

Browserify starts from a single JavaScript file, follows the require dependency tree, and bundles everything into a new file. You can use Browserify on the command line, or through its API in Node (using Gulp in this case).

Basic API example

app.js

var hideElement = require('./hideElement');

hideElement('#some-id');

hideElement.js

var $ = require('jquery');
module.exports = function(selector) {
    return $(selector).hide();
};

gulpfile.js

var browserify = require('browserify');
var bundle = browserify('./app.js').bundle()

Running app.js through Browserify does the following:

  1. Sees that app.js requires hideElement.js
  2. Sees that hideElement.js requires a module called jquery
  3. Bundles together jQuery, hideElement.js, and app.js into one file, making sure each dependency is available when and where it needs to be.

CommonJS > AMD

Our team had already moved toward module-based JS with Require.js and Almond.js, both of which are implementations of the AMD module pattern. We loved the organization and benefits this provided, but…

AMD / RequireJS Modules felt cumbersome and awkward.

define([
    './thing1',
    './thing2',
    './thing3'
], function(thing1, thing2, thing3) {
    // Tell the module what to return/export
    return function() {
        console.log(thing1, thing2, thing3);
    };
});

The first time using CommonJS (Node) modules was a breath of fresh air.

var thing1 = require('./thing1');
var thing2 = require('./thing2');
var thing3 = require('./thing3');

// Tell the module what to return/export
module.exports = function() {
    console.log(thing1, thing2, thing3);
};

Make sure to read up on how require calls resolve to files, folders, and node_modules.

Browserify is awesome because Node and NPM are awesome.

Node uses the CommonJS pattern for requiring modules. What really makes it powerful though is the ability to quickly install, update, and manage dependencies with Node Package Manager (npm). Once you've tasted this combination, you'll want that power for always. Browserify is what lets us have it in the browser.

Say you need jQuery. Traditionally, you might open up your browser, find the latest version on jQuery.com, download the file, save it to a vendor folder, then add a script tag to your layout, and let it attach itself to window as a global object.

With npm and Browserify, all you have to do is this:

Command Line

npm install jquery --save

app.js

var $ = require('jquery');
$('.haters').fadeOut();

This fetches the latest version of jQuery from NPM, and downloads the module into a node_modules folder at the root of your project. The --save flag automatically adds the package to your dependencies object in your package.json file. Now you can require('jquery') in any file that needs it. The jQuery object gets exported locally to var $, instead of globally on window. This was especially nice when I built a script that needed to live on unknown third party sites that may or may not already have another version of jQuery loaded. The jQuery packaged with my script is completely private to the js that requires it, eliminating the possibility of version conflict issues.

The Power of Transforms

Before bundling your JavaScript, Browserify makes it easy for you to preprocess your files through a number of transforms before including them in the bundle. This is how you'd compile .coffee or .hbs files into your bundle as valid JavaScript.

The most common way to do this is by listing your transforms in a browserify.transform object in your package.json file. Browserify will apply the transforms in the order in which they're listed. This assumes you've npm install'd them already.

  "browserify": {
    "transform": ["coffeeify", "hbsfy" ]
  },
 "devDependencies": {
    "browserify": "~3.36.0",
    "coffeeify": "~0.6.0",
    "hbsfy": "~1.3.2",
    "gulp": "~3.6.0",
    "vinyl-source-stream": "~0.1.1"
  }

Notice that I've listed the transforms under devDependencies since they're only used for preprocessing, and not in our final javascript output. You can do this automatically by adding the --save-dev or -D flag when you install.

npm install someTransformModule --save-dev

Now we can require('./view.coffee') and require('./template.hbs') like we would any other JavaScript file! We can also use the extensions option with the Browserify API to tell browserify to recognize these extensions, so we don't have to explicitly type them in our requires.

browserify({
    entries: ['./src/javascript/app.coffee'],
    extensions: ['.coffee', '.hbs']
})
.bundle()
...

See this in action here.

Using them together: Gulp + Browserify

Initially, I started out using the gulp-browserify plugin. A few weeks later, though, Gulp added it to their blacklist. It turns out the plugin was unnecessary: you can use the node-browserify API straight up, with a little help from vinyl-source-stream, which just converts the bundle into the type of stream gulp is expecting. Using browserify directly is great because you'll always have access to 100% of the features, as well as the most up-to-date version.

Basic Usage

var browserify = require('browserify');
var gulp = require('gulp');
var source = require('vinyl-source-stream');

gulp.task('browserify', function() {
    return browserify('./src/javascript/app.js')
        .bundle()
        //Pass desired output filename to vinyl-source-stream
        .pipe(source('bundle.js'))
        // Start piping stream to tasks!
        .pipe(gulp.dest('./build/'));
});

Awesome Usage

Take a look at the browserify.js task and package.json in my starter repo to see how to apply transforms for CoffeeScript and Handlebars, set up non-CommonJS modules and dependencies with browserify-shim, and handle compile errors through Notification Center. To learn more about everything else you can do with Browserify, read through the API. I hope you have as much fun with it as I'm having. Enjoy!

Author: "Dan Tello" Tags: "Extend"
Date: Monday, 07 Apr 2014 11:32

Like many other programmers with children I'm interested in passing my skills on to them. If you've ever had the opportunity to work with me, this may or may not be a horrifying proposition — I'll let you be the judge.

My two oldest (9 and 7) are now at a good age to start learning the basics, so I've been spending every Wednesday morning before work teaching them with the help of Khan Academy. Since I'm new to this, there's definitely a lot of trial and error to figure out the intersection between their interests and capabilities. In the process I've discovered a few things that work and a few that don't. These lessons are specific to me, but you may find them helpful.

A False Start

My initial approach combined a general introduction to computing with basic programming concepts in Ruby. I spent a couple of Wednesday mornings talking to my son about the basics of computing, answering his questions, researching answers with him, and working on basic programming concepts in IRB.

This was fun for the both of us and his questions were really awesome, but I found that it didn't hold his interest for long. While the programming exercises I came up with were basic, they weren't all that compelling for a child his age. The positive feedback was there — seeing 1 + 1 output 2 when typed in to an IRB prompt showed that something was happening, but it wasn't as exciting as I thought an introduction to programming should be.

Play on Their Interests

While my son may not have been excited about our forays into IRB, I knew there was one thing he was excited about. For the last 2 years, we've been teaching him math at home with the help of Khan Academy. He's enjoyed it for the most part, but it seems that every moment I left him unsupervised he would jump over to the programming section and explore the other students' creations. To be fair, there's some really impressive stuff there — he's partial to Minecraft, while I've enjoyed playing Falling Pixel.

At first his lack of focus frustrated me, but rather than fighting to keep him focused on math I thought I could instead use his interest in games to my advantage. At the time Khan Academy introduced the programming curriculum I thought that he wasn't quite ready, but now I was motivated to give it a shot.

Read Ahead

A few days before introducing the curriculum to him, I went through it myself. I watched the videos, worked through the sample exercises, and created some simple programs of my own. Between the quality of the walk-through videos and the immediate feedback of seeing my drawing appear right after I typed my code, I knew this approach would be a hit.

I quickly printed off a few grid worksheets and a cheat sheet to prepare for that Wednesday's class. The next morning, at breakfast, I presented the idea to him. I knew he would be excited, but I wasn't prepared for what came next — my daughter wanted to join in as well!

The Technique

Now that I had two students in my class, I had to figure out a way to work with both of them at an appropriate pace. For our first lesson, I decided to have both kids share a laptop and watch the videos together. When it was time to do the first exercise, they took turns typing in rect, the appropriate coordinates, and adjusting the values as necessary.

After this first class, I had a little bit of "homework" for them — I presented both with a few of the grid worksheets and a quick pixel drawing I did of some Space Invaders and asked them to come up with their own versions.

I demonstrated the basic technique, told them that we would spend the next class creating them on Khan Academy, and then set them loose on their own creations. To my surprise, my daughter made a rather detailed drawing of a red-haired girl that she planned to turn into a video game.

During the next class, we re-familiarized ourselves with the basic drawing functions and set out to recreate her sketch. Looking at the graph paper, I helped her come up with the coordinates, and she typed in the commands to make her drawing come to life.

Since my son had been learning how to add color to drawings, he jumped in later in the day to help improve what she and I had done earlier that morning.

The kids were really excited to see their drawings come to "life" — the feedback that they got from seeing their creations appear immediately after they typed in a valid function was a big motivator.

Future

The Khan Academy program takes a progressive approach to teaching programming in a way that keeps my kids engaged. We will continue working through some basic drawing programs and then move on to variables and animation. As you can imagine, they are both excited about the idea of building a video game — I'm excited to see what they come up with.

Author: "Patrick Reagan" Tags: "Extend"