

Date: Wednesday, 01 Oct 2014 13:00

Any web developer who has been working in the industry for more than a few days has probably heard of WordPress. Stay for a couple more months and there’s a good chance you’ve worked on a WordPress site — it’s a popular platform since it’s well-known, easy-to-use, and free.

You may have also heard of ExpressionEngine, Drupal, Joomla, and a few other CMS heavy-hitters. They all have their benefits and flaws, which is a topic for another time. I’d rather talk about the next up-and-comer and my new favorite, Craft.

Craft is a small CMS that was developed fairly recently by ExpressionEngine add-on developers Pixel & Tonic. Having worked with ExpressionEngine for a while, it’s obvious these guys really know the pain-points in any client-facing CMS. Everything they’ve built into Craft solves a problem I’ve had on almost every site I ever made with WordPress. If I had to make a CMS myself, it would probably resemble Craft pretty closely.

So, how does Craft stack up against the industry go-to?

The Good

Craft is like WordPress if it was stripped naked and then clothed in Advanced Custom Fields

WordPress strives to give its users as much as possible out-of-the-box, whether that user is a novice blogger or a talented developer who needs a good admin panel. The result is often too much functionality, which forces developers to strip out or disable features to meet the needs of their very custom websites (like Comments or the Links Manager, pre WordPress 3.5).

Craft, on the other hand, starts with just the basic building blocks and minimal defaults. The Sections and Fields it does provide let you build up your content types and inputs to get to the custom dashboard you want. I would even say it takes less time to build Craft up to WordPress status than it does to fight with WordPress settings to take out all of those Blog-centric things you don’t need.

Another win for Craft is that its Fields build your content interface in much the same way as WordPress’s Advanced Custom Fields plugin (a must-have tool with WordPress in my book), but without downloading and installing another piece.

Building instead of Manipulating

This difference in the platforms’ philosophies is apparent in their templating tools. WordPress supplies and spits out a lot of its own default HTML that requires manipulation of the API to change. Craft comes with nothing. No HTML at all. Which is glorious.

There is no API to coerce into the markup that you want and there is no lazy settling for the default because there is no default. As the O.C.D., semantic-obsessed front-end developer that I am, this is perfect. I set all my own HTML (with Twig templates in Craft), styles, and attributes from scratch. <3

Relationships are Hard

One thing that is a recurring pain to work around in WordPress is its lack of relationships between Post Types. If one Post Type needs to be related to another Post Type, you have to make some middle-man taxonomy or category to relate them, do some PHP magic to make your own custom inputs in the post editing screen, or find a plugin that meets your need. With Craft, Entries (the Crafty brother to Posts) are easily related with a simple Field type. Drag, drop, done. Hallelujah!

A Common Example: Say a non-profit site has a Section of “Social Causes” pages and they want all of their News and information to be categorized by and related to these Causes. Creating those relationships is far more difficult in WordPress.

Welcome to the Matrix

One other great client-serving feature of Craft is its handy Matrices. With Matrices, you can set up your interface in Blocks, which your client can then use to build their page — it’s a win-win-win for clients, designers, and developers. Clients can control the order of their well-designed content without hacks or careful content input into a catch-all Editor; designers can rest assured that their designs won’t be fouled-up by user error; and developers have complete control over the mark-up which is generated by these blocks.

In comparison, WordPress can do repeating blocks of the same content in a row. Not too bad. The catch is that you need Advanced Custom Fields, plus their Repeater Field addition which, unlike the main plugin, does cost a few bucks.

Less PHP! And More PHP! Wat?

This last “Good” point is relative to the type of developer you are, so I admit this could easily go in the “Bad” section. Although Craft is built off of PHP like WordPress, Craft uses Twig templates. This is great for front-end developers already familiar with other templating languages like Handlebars or Liquid, but may not be for all you PHP gurus. I personally like the change, since loops feel a little less clunky and the data syntax is closer to JavaScript.

You will find PHP in Craft’s plugins. Since Craft does not have Themes, there really isn’t anywhere to put your common template helpers and useful functions. Instead, be prepared to write your own Plugin to add what you need. Gone are the days of plopping in a random PHP function into functions.php. This is great since your site is then based off of good modular code and proper PHP Classes, but you may need to read up a bit to get there.


The Bad

Google Maps vs. Apple Maps

As Apple found out the hard way, it can be tough to beat the big dog in the market. WordPress has been around longer, has more resources, and has more developers actively contributing to it. If you hit an issue with Craft, resources are few. I’m sure the Craft community will catch up, but in the meantime I recommend making a few new Twitter friends. @Craftcms, the folks from Pixel & Tonic themselves, and Viget’s own Trevor Davis are good follows. Those passionate about Craft are happy to answer questions.

Craft? Crafts? Kraft? Minecraft?

Craft picked a pretty tough name in the Googleverse. Searching for common problems becomes a real chore, simply because you have to sort through 50 articles about Minecraft before getting to the few sources that are available. Compare that to WordPress results, which will stretch for pages and probably include at least five well-written solutions to your problem on Stack Overflow.

I search “Craft CMS” for the best results and include “Twig” if it’s a templating problem.

The Deal-breaker

One obstacle for some clients when it comes to Craft is the price. One can’t help but second guess the choice to throw down $299 when the usual go-to CMS is FREE. It’s not such a tough sell on the agency level since clients have usually come prepared to spend much larger sums, but freelancers might have a harder time justifying the cost. Even so, I recommend you try — it’s a one-time fee that then unlocks all of Craft’s best features and goes toward the support and further development of the system.

Can I get both Pills? The Matrix + Inception

For good reason, Pixel & Tonic have restricted Craft’s Matrix a bit. Some things I would love to see in future releases:

Re-using Blocks: I’d like to use already made Fields or sets of Fields inside Matrix blocks. If the same “module” exists both inside and outside of my Matrix, keeping the data the same requires careful duplication.

Matrix Inception: Please?

Craft leaves an easter-egg comment in its code: No Matrix Inception, sorry buddy.

I’ve hit data that could really use one more level inside of Craft’s Matrices. (A page builder matrix with a repeating section inside of it does not compute.)  It sounds like Pixel & Tonic are working on this, so my wishes should be fulfilled soon.

The Ugly Data

I am not a database-taming type of dev. I cannot whip up SQL queries and the like to properly clean and clear up my database. I may not be the right person to comment on this particular area, but I will say that the data for both of these platforms is rather Ugly.


The biggest headache when working with WordPress in multiple environments is, of course, its data. Of all things to store in your database, WordPress stores the root URL of your website. If you’ve worked with WordPress, you know exactly what I’m talking about. If a developer ever has to do a manual find-and-replace within a SQL file, something is very, very wrong. Other undesirables include excessive settings management and a single data model for any kind of content.


Craft data is beautiful until you get to the “craft_content” table. And then you realize where all your great Fields went. After careful curation of Fields, Field Groups, and to which Entry Types they are assigned, each Field becomes a new column in this table and you’re hit with a wall of NULL data.

Sometimes you hit areas of the database where it's just column after column of NULL data.

Such NULL. It seems like such a waste when each Entry Type could be broken into its own table to greatly reduce the NULL.

Aside from that, I would say Craft stores too much of its structure as data only. Again like WordPress’s Advanced Custom Fields, Craft stores the new Fields you add to your admin panel as data. This is a big drawback when juggling environments. Advanced Custom Fields later solved this problem by adding an export tool to spit out and store your Fields as a PHP object. I hope Craft will soon follow suit so I can stop adding “To Be Used Later” fields to Production sites in order to preserve the flow of data.


TL;DR: Craft is better than WordPress for more custom websites because developers can build instead of manipulate. This philosophy applies to creating the admin, content entry, and templating. Prominent downsides include difficulty finding solutions, the price, and storing all Fields as data only.

Author: "Megan Zlock" Tags: "Extend"
Date: Tuesday, 30 Sep 2014 15:33

Despite some exciting advances in the field, like Node, Redis, and Go, a well-structured relational database fronted by a Rails or Sinatra (or Django, etc.) app is still one of the most effective toolsets for building things for the web. In the coming weeks, I’ll be publishing a series of posts about how to be sure that you’re taking advantage of all your RDBMS has to offer.

Assuming my last post convinced you of the why of marking required fields NOT NULL, the next question is how. When creating a brand new table, it’s straightforward enough:

CREATE TABLE employees (
    id integer NOT NULL,
    name character varying(255) NOT NULL,
    created_at timestamp without time zone,
    updated_at timestamp without time zone
);

When adding a column to an existing table, things get dicier. If there are already rows in the table, what should the database do when confronted with a new column that 1) cannot be null and 2) has no default value? Ideally, the database would allow you to add the column if there is no existing data, and throw an error if there is. As we’ll see, depending on your choice of database platform, this isn’t always the case.

A Naïve Approach

Let’s go ahead and add a required age column to our employees table, and let’s assume I’ve laid my case out well enough that you’re going to require it to be non-null. To add our column, we create a migration like so:

class AddAgeToEmployees < ActiveRecord::Migration
  def change
    add_column :employees, :age, :integer, null: false
  end
end

The desired behavior on running this migration would be for it to run cleanly if there are no employees in the system, and to fail if there are any. Let’s try it out, first in Postgres, with no employees:

==  AddAgeToEmployees: migrating ==============================================
-- add_column(:employees, :age, :integer, {:null=>false})
   -> 0.0006s
==  AddAgeToEmployees: migrated (0.0007s) =====================================

Bingo. Now, with employees:

==  AddAgeToEmployees: migrating ==============================================
-- add_column(:employees, :age, :integer, {:null=>false})
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:

PG::NotNullViolation: ERROR:  column "age" contains null values

Exactly as we’d expect. Now let’s try SQLite, without data:

==  AddAgeToEmployees: migrating ==============================================
-- add_column(:employees, :age, :integer, {:null=>false})
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:

SQLite3::SQLException: Cannot add a NOT NULL column with default value NULL: ALTER TABLE "employees" ADD "age" integer NOT NULL

Regardless of whether or not there are existing rows in the table, SQLite won’t let you add NOT NULL columns without default values. Super strange. More information on this … quirk … is available on this StackOverflow thread.

Finally, our old friend MySQL. Without data:

==  AddAgeToEmployees: migrating ==============================================
-- add_column(:employees, :age, :integer, {:null=>false})
   -> 0.0217s
==  AddAgeToEmployees: migrated (0.0217s) =====================================

Looks good. Now, with data:

==  AddAgeToEmployees: migrating ==============================================
-- add_column(:employees, :age, :integer, {:null=>false})
   -> 0.0190s
==  AddAgeToEmployees: migrated (0.0191s) =====================================

It … worked? Can you guess what our existing employee’s age is?

> be rails runner "p Employee.first"
#<Employee id: 1, name: "David", created_at: "2014-07-09 00:41:08", updated_at: "2014-07-09 00:41:08", age: 0>

Zero. Turns out that MySQL has a concept of an implicit default, which is used to populate existing rows when a default is not supplied. Neat, but exactly the opposite of what we want in this instance.
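What counts as the implicit default depends on the column’s type. A rough sketch of the mapping as a Ruby hash (an illustrative subset drawn from MySQL’s documented rules, not an exhaustive list):

```ruby
# MySQL's implicit defaults by column type (illustrative subset).
IMPLICIT_DEFAULTS = {
  integer:  0,                      # numeric types default to zero
  varchar:  "",                     # string types default to the empty string
  datetime: "0000-00-00 00:00:00",  # date/time types default to the "zero" value
}

# Our new NOT NULL age column is an integer, so every existing row got:
IMPLICIT_DEFAULTS[:integer] # => 0
```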

A Better Approach

What’s the solution to this problem? Should we just always use Postgres?


But if that’s not an option (say your client’s support contract only covers MySQL), there’s still a way to write your migrations such that Postgres, SQLite, and MySQL all behave in the same correct way when adding NOT NULL columns to existing tables: add the column first, then add the constraint. Your migration would become:

class AddAgeToEmployees < ActiveRecord::Migration
  def up
    add_column :employees, :age, :integer
    change_column_null :employees, :age, false
  end

  def down
    remove_column :employees, :age, :integer
  end
end

Postgres behaves exactly the same as before. SQLite, on the other hand, shows remarkable improvement. Without data:

==  AddAgeToEmployees: migrating ==============================================
-- add_column(:employees, :age, :integer)
   -> 0.0024s
-- change_column_null(:employees, :age, false)
   -> 0.0032s
==  AddAgeToEmployees: migrated (0.0057s) =====================================

Success – the new column is added with the null constraint. And with data:

==  AddAgeToEmployees: migrating ==============================================
-- add_column(:employees, :age, :integer)
   -> 0.0024s
-- change_column_null(:employees, :age, false)
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:

SQLite3::ConstraintException: employees.age may not be NULL

Perfect! And how about MySQL? Without data:

==  AddAgeToEmployees: migrating ==============================================
-- add_column(:employees, :age, :integer)
   -> 0.0145s
-- change_column_null(:employees, :age, false)
   -> 0.0176s
==  AddAgeToEmployees: migrated (0.0323s) =====================================

And with:

==  AddAgeToEmployees: migrating ==============================================
-- add_column(:employees, :age, :integer)
   -> 0.0142s
-- change_column_null(:employees, :age, false)
rake aborted!
StandardError: An error has occurred, all later migrations canceled:

Mysql2::Error: Invalid use of NULL value: ALTER TABLE `employees` CHANGE `age` `age` int(11) NOT NULL

BOOM. Flawless victory.

* * *

To summarize: never use add_column with null: false. Instead, add the column and then use change_column_null to set the constraint for correct behavior regardless of database platform. In a follow-up post, I’ll focus on what to do when you don’t want to simply error out if there is existing data, but rather migrate it into a good state before setting NOT NULL.

Author: "David Eisinger" Tags: "Extend"
Date: Thursday, 25 Sep 2014 16:12

Despite some exciting advances in the field, like Node, Redis, and Go, a well-structured relational database fronted by a Rails or Sinatra (or Django, etc.) app is still one of the most effective toolsets for building things for the web. In the coming weeks, I’ll be publishing a series of posts about how to be sure that you’re taking advantage of all your RDBMS has to offer.

A “NOT NULL constraint” enforces that a database column does not accept null values. Null, according to Wikipedia, is

a special marker used in Structured Query Language (SQL) to indicate that a data value does not exist in the database. Introduced by the creator of the relational database model, E. F. Codd, SQL Null serves to fulfill the requirement that all true relational database management systems (RDBMS) support a representation of “missing information and inapplicable information.”

One could make the argument that null constraints in the database are unnecessary, since Rails includes the presence validation. What’s more, the presence validation handles blank (e.g. empty string) values that null constraints do not. For several reasons that I will lay out through the rest of this section, I contend that null constraints and presence validations should not be mutually exclusive, and in fact, if an attribute’s presence is required at the model level, its corresponding database column should always require a non-null value.

Why use non-null columns for required fields?

Data Confidence

The primary reason for using NOT NULL constraints is to have confidence that your data has no missing values. Simply using a presence validation offers no such confidence. For example, update_attribute ignores validations, as does save if you call it with the validate: false option. Additionally, database migrations that manipulate the schema with raw SQL using execute bypass validations.
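To make that concrete, here is a toy model in plain Ruby (deliberately not ActiveRecord) that mimics the save(validate: false) escape hatch:

```ruby
# A toy stand-in for an ActiveRecord model. The point: when callers can
# opt out of validation, a presence check in the model is not a guarantee.
class Employee
  attr_accessor :age

  def valid?
    !age.nil? # stand-in for `validates :age, presence: true`
  end

  def save(validate: true)
    return false if validate && !valid?
    true # pretend the row was written; the database itself never objected
  end
end

employee = Employee.new
employee.save                  # => false, the validation catches the nil age
employee.save(validate: false) # => true, and a null age reaches the database
```

Only a NOT NULL constraint sits below all of these code paths.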

Undefined method ‘foo’ for nil:NilClass

One of my biggest developer pet peeves is seeing an undefined method 'foo' for nil:NilClass come through in our error tracking service du jour. Someone assumed that a model’s association would always be present, and one way or another, that assumption turned out to be false. The merits of the Law of Demeter are beyond the scope of this post, but suffice it to say that if you’re going to say something like @athlete.team.name in your code, you better be damn sure that a) the athlete’s team_id has a value and b) it corresponds to the ID of an actual team. We’ll get to that second bit in our discussion of foreign key constraints in a later post, but the first part, ensuring that team_id has a value, demands a NOT NULL column.
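The failure mode itself takes a couple of lines of plain Ruby to reproduce:

```ruby
# nil quietly flows through the association, then blows up at the call site.
team = nil # what the team association returns when team_id is missing

begin
  team.name
rescue NoMethodError => error
  error # the familiar undefined method `name' for nil:NilClass
end
```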

Migration Issues

Another benefit of using NOT NULL constraints is that they force you to deal with data migration issues. Suppose a change request comes in to add a required age attribute to the Employee model. The easy approach would be to add the column, allow it to be null, and add a presence validation to the model. This works fine for new employees, but all of your existing employees are now in an invalid state. If, for example, an employee then attempts a password reset, updating their password_reset_token field would fail due to the missing age value.

If you’d created the age column to require a non-null value, you would have been forced to deal with the issue of existing users immediately and thus avoided this issue. That said, there’s no obvious value for what to fill in for all of the existing users’ ages, but better to have that discussion at development time than to spend weeks or months dealing with the fallout of invalid users in the system.

* * *

I hope I’ve laid out a case for using non-null constraints for all required database fields for great justice. In the next post, I’ll show the proper way to add non-null columns to existing tables.

Author: "David Eisinger" Tags: "Extend"
Date: Monday, 22 Sep 2014 14:36

The Problem

Here at the Viget Boulder office we have four conference rooms for shared use between a dozen people (+/- 5 on any given day). Given that we’re a remote office with two other locations to collaborate with, the rooms get a fair amount of scheduled use, as well as impromptu “I need a room for a quick call” use. All of our rooms are hooked into Google Calendar, so it is possible to check the status of a room through that interface, but that process is … not Google’s finest user experience work.

So what do you do when you have a slightly annoying yet persistent problem? Build something cool!

The Solution

Introducing: Illumigami. Every conference room is now dressed with its very own Illumigami Lantern, indicating the status of the room.

(These things look the coolest at night — pictures of which are at the end of this carousel of photos -> actually click through this carousel for once)

  • Green - Available
  • Yellow - Available now, Booked in the next 10 minutes
  • Red - Booked
  • Blue - Booked now, Available in the next 5 minutes

The ability to glance quickly at a room and know instantly if you’re able to hop in it has been huge. No longer do you get kicked out of a room unexpectedly for a scheduled meeting, and if you do need a room for an important call it’s easy to spot who’s in a booked meeting and who’s just squatting.

We picked up some origami skills making these Lanterns; Mike’s box-making expertise produced the first, and the internet provided instructions for the rest. Origami links for those interested - box, sphere (video), crane, stellated icosahedron. If you’re curious about the inner workings, a more detailed description follows below.

How They Work

The brains behind the operation is a fairly simple web app that pings the Google Calendar API every minute, and the muscle is all handled by the Spark Core. These things are so awesome, they’re more or less an Arduino with embedded wifi out of the box. There’s a great community around them, and they just work (unless you try to power them with 12 volts instead of 5, then they don’t work anymore … forever). The web app determines the status of the room based on a series of rules and the data gathered from the Calendar API, and sends messages to the Spark Core with simple instructions along the lines of “make this Lantern Green”, “make this Lantern Red”, or “party” (what good are glowing origami lanterns if they can’t do cool synchronized art displays?).
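The color rules from the legend above reduce to a small decision function. This is a sketch, not the actual Illumigami source; the method name, argument names, and structure are mine, with the thresholds taken from the bullet list:

```ruby
# Map a room's calendar state to a lantern color, per the legend above.
def lantern_color(booked_now:, minutes_until_next: nil, minutes_until_free: nil)
  if booked_now
    # Booked, but flip to blue when the room frees up within 5 minutes
    minutes_until_free && minutes_until_free <= 5 ? "blue" : "red"
  else
    # Available, but warn with yellow when a booking starts within 10 minutes
    minutes_until_next && minutes_until_next <= 10 ? "yellow" : "green"
  end
end

lantern_color(booked_now: false)                        # => "green"
lantern_color(booked_now: false, minutes_until_next: 8) # => "yellow"
lantern_color(booked_now: true,  minutes_until_free: 3) # => "blue"
```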

The Lanterns

Every Lantern is internally equipped with up to one foot of an LED strip (way cheaper here) wrapped around a ball of paper. One end of that strip is soldered to four wires (one for Power, and one each for Red, Green, and Blue control) that run along the wall to the Spark Core base. We used an ethernet cable to cleanly run that many wires across long distances.

LED strip ball

The Hardware

Two Spark Cores are needed to control the four Lanterns due to the physical limitations of the Cores. Here's what everything looks like when it's all hooked up, along with a diagram of a Spark Core and the components required to control one of the LED strips within a Lantern:

final electronics

fritzing diagram


We need two different power levels coming in to control all of the components here. 12 volts are required to power the LED strips within each Lantern, and 5 volts are required for the Spark Core. As somewhat mentioned before, you can’t power a Spark Core with 12 volts like you can an Arduino UNO (by can’t I mean you fry the Core if you try -- don’t do this). Since we already had a 12V power source from a wall wart, we just needed to drop that down to 5V to power the Cores. Turns out the best way to do this is to obtain some cheap car cell phone chargers (which convert 12V to 5V), bust one open, and swap in your own wires. Easy peasy!

So we have 2 power rails on our breadboard, the top is running 5V and powering our Spark Core with a wire into the VIN pin, and the bottom is running 12V and passing power into the LED strips (denoted by “R G B POWER” in the diagram).

Now that everything’s powered up properly, we run wires out of some PWM (Pulse Width Modulation) pins on the Spark Core and into the gates of N-Channel MOSFETs. These components let us control high voltages (12V running through the LEDs) with lower voltages (3.3V which the Spark Core provides), much like a light switch, but tiny, made with chemicals, and very very fast. We need one wire each to control the Red, Green, and Blue light amounts, and we use PWM pins so we can fake the sending of lower voltages by quickly pulsing the 3.3V output -- this allows you to turn on the various colors at a percentage of their brightness giving you the whole color spectrum at your fingertips.
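The duty-cycle math behind that trick is simple. A hypothetical helper (the 0-255 range matches Arduino-style analogWrite; the function is mine, not the project's):

```ruby
# Convert a 0-100% brightness into the 0-255 value a PWM pin expects.
# Pulsing the pin high that fraction of the time reads as partial brightness.
def duty_cycle(percent)
  (255 * percent / 100.0).round
end

duty_cycle(100) # => 255, fully on
duty_cycle(50)  # => 128, on half the time
duty_cycle(0)   # => 0, off
```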


The Spark Cores allow you to wirelessly flash code onto them, and program in functions which can be called over the internet. This lets me, through the internet, tell a Core to change a specific Lantern to a specific color. Using the Ruby Spark gem, the syntax looks like this:

core = RubySpark::Core.new(core_id)

core.function("crane", "red")
core.function("sphere", "green")

The first function call there will call the crane function on the Core with an argument of "red". In real life, the Crane Lantern turns red. I used Rails and Active Admin to quickly spin up an app that lets me manage the rooms and Spark Cores used for this project. A room belongs to a Spark Core so when the app determines a room's status has changed, it knows which Core to send the right message to.


This project has been a blast to work on. There's nothing better than combining web and hardware hacking skills to come up with something neat looking and useful. Big thanks to our very own Jeremy Fields for the great idea and Mitch Daniels for the brilliant name. Comments are welcome below.

Author: "Eli Fatsi" Tags: "Extend"
Date: Thursday, 28 Aug 2014 12:13

I find myself working on Ruby gems from time to time, and am often met with the following task:

  • Allow the gem user to configure gem-related variables
  • Provide defaults for configurable variables that are not assigned

I know this is a solved problem since countless gems offer (or require) configuration. Some big name examples - ActiveAdmin, Simple Form, any API wrapper ever. Despite the fact that so many gem developers have solved this problem, digging around the code reveals some fairly complicated logic typically involving a Configuration class (example), or a dependency on ActiveSupport and the mattr_accessor/@@class_variable combination (example). I didn’t want either of those things, so for that reason (and great justice), I’ve pieced together a simple module you can just include in your gem.

Throw the following bit of code in a file such as my_gem/lib/helpers/configuration.rb

module Configuration

  def configuration
    yield self
  end

  def define_setting(name, default = nil)
    class_variable_set("@@#{name}", default)

    define_class_method "#{name}=" do |value|
      class_variable_set("@@#{name}", value)
    end

    define_class_method name do
      class_variable_get("@@#{name}")
    end
  end

  def define_class_method(name, &block)
    (class << self; self; end).instance_eval do
      define_method name, &block
    end
  end

end


This exposes two class methods on anything that extends this module - define_setting and configuration. I’ll go over each of these below.

define_setting: This allows you, as a gem developer, to define configurable gem variables and set defaults if you care to do so. The usage would look something like this:

# my_gem/lib/my_gem.rb

require 'helpers/configuration'

module MyGem
  extend Configuration

  define_setting :access_token
  define_setting :access_secret

  define_setting :favorite_liquid,       "apple juice"
  define_setting :least_favorite_liquid, "seltzer water"
end

Comparing this to the executions of the previously mentioned gems, we’re not relying on ActiveSupport, can optionally define defaults, and don’t have to hide our configuration variable definitions behind some other object. I would also argue that this is one of the simplest DSLs for adding and managing configurable values.

configuration: This allows you, now as a gem user, to get real fancy when you’re setting these variables by using a configuration block.

# config/initializers/my_gem.rb

MyGem.configuration do |config|
  config.access_token  = "token"

  config.favorite_liquid = "gluten free apple juice"
end

With this configuration set, you’ve got the following variables available to use within your gem and the app using the gem:

MyGem.access_token  #=> "token"
MyGem.access_secret #=> nil (was never assigned so remains nil)

MyGem.favorite_liquid       #=> "gluten free apple juice"
MyGem.least_favorite_liquid #=> "seltzer water"

Wrapup notes

This method is effectively the same as using mattr_accessor (available with ActiveSupport) to create your getter and setter, and defining @@favorite_liquid = "apple juice" for the default, but a recent gem I was working on did not have access to ActiveSupport, and this is cleaner anyway.

Have a better way to set up configurable variables with ease? Let me know in the comments below!

Author: "Eli Fatsi" Tags: "Extend"
Date: Monday, 25 Aug 2014 15:44

Have you heard of Docker? You probably have—everybody’s talking about it. It’s the new hotness. Even my dad’s like, “what’s Docker? I saw someone twitter about it on the Facebook. You should call your mom.”

Docker is a program that makes running and managing containers super easy. It has the potential to change all aspects of server-side applications, from development and testing to deployment and scaling. It’s pretty cool.


Recently, I’ve been working through The Docker Book. It’s a top notch book and I highly recommend it, but I’ve had some problems running the examples on OS X. After a certain point, the book assumes you’re using Linux and skips some of the extra configuration required to make the examples work on OS X. This isn’t the book’s fault; rather, it speaks to underlying issues with how Docker works on OS X.

This post is a walkthrough of the issues you’ll face running Docker on OS X and the workarounds to deal with them. It’s not meant to be a tutorial on Docker itself, but I encourage you to follow along and type in all the commands. You’ll get a better understanding of how Docker works in general and on OS X specifically. Plus, if you decide to dig deeper into Docker on your Mac, you’ll be saved hours of troubleshooting. Don’t say I never gave you nothing.

First, let’s talk about how Docker works and why running it on OS X no work so good.

How Docker Works

Docker is a client-server application. The Docker server is a daemon that does all the heavy lifting: building and downloading images, starting and stopping containers, and the like. It exposes a REST API for remote management.

The Docker client is a command line program that communicates with the Docker server using the REST API. You will interact with Docker by using the client to send commands to the server.
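For instance, assuming the daemon has been told to listen on tcp://127.0.0.1:2375 (an example address, not a default you can count on), you can poke the REST API directly:

```shell
# Ask the daemon for its version info
curl http://127.0.0.1:2375/version

# List running containers (the same data `docker ps` renders)
curl http://127.0.0.1:2375/containers/json
```

The `docker` client is doing essentially this under the hood.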

The machine running the Docker server is called the Docker host. The host can be any machine—your laptop, a server in the Cloud™, etc—but, because Docker uses features only available to Linux, that machine must be running Linux (more specifically, the Linux kernel).

Docker on Linux

Suppose we want to run containers directly on our Linux laptop. Here’s how it looks:

Docking on Linux

The laptop is running both the client and the server, thus making it the Docker host. Easy.

Docker on OS X

Here’s the thing about OS X: it’s not Linux. It doesn’t have the kernel features required to run Docker containers natively. We still need to have Linux running somewhere.

Enter boot2docker. boot2docker is a “lightweight Linux distribution made specifically to run Docker containers.” Spoiler alert: you’re going to run it in a VM on your Mac.

Here’s a diagram of how we’ll use boot2docker:

Docking on OS X

We’ll run the Docker client natively on OS X, but the Docker server will run inside our boot2docker VM. This also means boot2docker, not OS X, is the Docker host.

Make sense? Let’s install dat software.


Step 1: Install VirtualBox

Go here and do it. You don’t need my help with that.

Step 2: Install Docker and boot2docker

You have two choices: the official package from the Docker site or homebrew. I prefer homebrew because I like to manage my environment from the command line. The choice is yours.

> brew update
> brew install docker
> brew install boot2docker

Step 3: Initialize and start boot2docker

First, we need to initialize boot2docker (we only have to do this once):

> boot2docker init
2014/08/21 13:49:33 Downloading boot2docker ISO image...
    [ ... ]
2014/08/21 13:49:50 Done. Type `boot2docker up` to start the VM.

Next, we can start up the VM. Do like it says:

> boot2docker up
2014/08/21 13:51:29 Waiting for VM to be started...
2014/08/21 13:51:50 Started.
2014/08/21 13:51:51   Trying to get IP one more time
2014/08/21 13:51:51 To connect the Docker client to the Docker daemon, please set:
2014/08/21 13:51:51     export DOCKER_HOST=tcp://

Step 4: Set the DOCKER_HOST environment variable

The Docker client assumes the Docker host is the current machine. We need to tell it to use our boot2docker VM by setting the DOCKER_HOST environment variable:

> export DOCKER_HOST=tcp://

Your VM might have a different IP address—use whatever boot2docker up told you to use. You probably want to add that environment variable to your shell config.
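One way to persist it (the profile path and the address below are examples for illustration, not values from this post):

```shell
# Persist DOCKER_HOST so new shells pick it up automatically. The profile
# path and the address are assumptions -- use the values from your own setup.
profile="$HOME/.bash_profile"
line='export DOCKER_HOST=tcp://192.168.59.103:2375'
# Append only if it isn't already there (keeps the edit idempotent)
grep -qxF "$line" "$profile" 2>/dev/null || echo "$line" >> "$profile"
```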

Step 5: Profit

Let’s test it out:

> docker info
Containers: 0
Images: 0
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Dirs: 0
Execution Driver: native-0.2
Kernel Version: 3.15.3-tinycore64
Debug mode (server): true
Debug mode (client): false
Fds: 10
Goroutines: 10
EventsListeners: 0
Init Path: /usr/local/bin/docker
Sockets: [unix:///var/run/docker.sock tcp://]

Great success. To recap: we’ve set up a VirtualBox VM running boot2docker. The VM runs the Docker server, and we’re communicating with it using the Docker client on OS X.

Bueno. Let’s do some containers.

Common Problems

We have a “working” Docker installation. Let’s see where it falls apart and how we can fix it.

Problem #1: Port Forwarding

The Problem: Docker forwards ports from the container to the host, which is boot2docker, not OS X.

Let’s start a container running nginx:

> docker run -d -P --name web nginx
Unable to find image 'nginx' locally
Pulling repository nginx
    [ ... ]

This command starts a new container as a daemon (-d), automatically forwards the ports specified in the image (-P), gives it the name ‘web’ (--name web), and uses the nginx image. Our new container has the unique identifier 0092c03e1eba....

Verify the container is running:

> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                   NAMES
0092c03e1eba        nginx:latest        nginx               44 seconds ago      Up 41 seconds       0.0.0.0:49153->80/tcp   web

Under the PORTS heading, we can see our container exposes port 80, and Docker has forwarded this port from the container to a random port, 49153, on the host.

Let’s curl our new site:

> curl localhost:49153
curl: (7) Failed connect to localhost:49153; Connection refused

It didn’t work. Why?

Remember, Docker is mapping port 80 to port 49153 on the Docker host. If we were on Linux, our Docker host would be localhost, but we aren’t, so it’s not. It’s our VM.

The Solution: Use the VM’s IP address.

boot2docker comes with a command to get the IP address of the VM:

> boot2docker ip

The VM’s Host only interface IP address is:

Let’s plug that into our curl command:

> curl $(boot2docker ip):49153

The VM’s Host only interface IP address is:

<!DOCTYPE html>
<title>Welcome to nginx!</title>
    [ ... ]

Success! Sort of. We got the web page, but we got The VM’s Host only interface IP address is:, too. What’s the deal with that nonsense?

Turns out, boot2docker ip outputs the IP address to standard output and The VM's Host only interface IP address is: to standard error. The $(boot2docker ip) subcommand captures standard output but not standard error, which still goes to the terminal. Scumbag boot2docker.
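You can see the two streams in action with a small stand-in you can run anywhere (the function and the address are invented for illustration):

```shell
# Stand-in for `boot2docker ip` (invented for illustration): same behavior,
# noise on stderr, the address on stdout.
fake_boot2docker_ip() {
  echo "The VM's Host only interface IP address is:" >&2  # stderr
  echo "192.168.59.103"                                   # stdout
}

ip=$(fake_boot2docker_ip 2>/dev/null)  # capture stdout, silence stderr
echo "$ip"  # prints just the address: 192.168.59.103
```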

This is annoying. I am annoyed. Here’s a bash function to fix it:

docker-ip() {
  boot2docker ip 2> /dev/null
}

Stick that in your shell config, then use it like so:

> curl $(docker-ip):49153
<!DOCTYPE html>
<title>Welcome to nginx!</title>
    [ ... ]

Groovy. This gives us a reference for the IP address in the terminal, but it would be nice to have something similar for other apps, like the browser. Let’s add a dockerhost entry to the /etc/hosts file:

> echo $(docker-ip) dockerhost | sudo tee -a /etc/hosts

Now we can use it everywhere:

Great success. Make sure to stop and remove the container before continuing:

> docker stop web
> docker rm web

VirtualBox assigns IP addresses using DHCP, meaning the IP address could change. If you’re only using one VM, it should always get the same IP, but if you’re VMing on the reg, it could change. Fair warning.
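If the IP does change, you’ll need to refresh that /etc/hosts entry. Here’s a sketch of how you might automate it (the function name is mine, and it takes the hosts file as an argument so you can try it on a scratch copy; against the real /etc/hosts you’d need sudo):

```shell
# Sketch: replace the dockerhost entry whenever the VM gets a new lease.
refresh_dockerhost() {
  local ip=$1 hosts=$2
  grep -v ' dockerhost$' "$hosts" > "$hosts.tmp" || true  # drop any stale entry
  mv "$hosts.tmp" "$hosts"
  echo "$ip dockerhost" >> "$hosts"
}

# Try it on a scratch file:
hosts_file=$(mktemp)
echo '127.0.0.1 localhost' > "$hosts_file"
refresh_dockerhost 192.168.59.103 "$hosts_file"
refresh_dockerhost 192.168.59.104 "$hosts_file"  # simulate a new DHCP lease
cat "$hosts_file"
```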

Bonus Alternate Solution: Forward all of Docker’s ports from the VM to localhost.

If you really want to access your Docker containers via localhost, you can forward all of the ports in Docker’s port range from the VM to localhost. Here’s a bash script, taken from here, to do that:


for i in {49000..49900}; do
  VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port$i,tcp,,$i,,$i";
  VBoxManage modifyvm "boot2docker-vm" --natpf1 "udp-port$i,udp,,$i,,$i";
done

By doing this, Docker will forward port 80 to, say, port 49153 on the VM, and VirtualBox will forward port 49153 from the VM to localhost. Soon, inception. You should really just use the VM’s IP address mmkay.

Problem #2: Mounting Volumes

The Problem: Docker mounts volumes from the boot2docker VM, not from OS X.

Docker supports volumes: you can mount a directory from the host into your container. Volumes are one way to give your container access to resources in the outside world. For example, we could start an nginx container that serves files from the host using a volume. Let’s try it out.

First, let’s create a new directory and add an index.html:

> cd /Users/Chris
> mkdir web
> cd web
> echo 'yay!' > index.html

(Make sure to replace /Users/Chris with your own path).

Next, we’ll start another nginx container, this time mounting our new directory inside the container at nginx’s web root:

> docker run -d -P -v /Users/Chris/web:/usr/local/nginx/html --name web nginx

We need the port number for port 80 on our container:

> docker port web 80

Let’s try to curl our new page:

> curl dockerhost:49154
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>

Well, that didn’t work. The problem, again, is our VM. Docker is trying to mount /Users/Chris/web from the host into our container, but the host is boot2docker, not OS X. boot2docker doesn’t know anything about files on OS X.

The Solution: Mount OS X’s /Users directory into the VM.

By mounting /Users into our VM, boot2docker gains a /Users volume that points to the same directory on OS X. Referencing /Users/Chris/web inside boot2docker now points directly to /Users/Chris/web on OS X, and we can mount any path starting with /Users into our container. Pretty neat.

Out of the box, boot2docker doesn’t include the VirtualBox Guest Additions needed to make this work. Fortunately, a very smart person has solved this problem for us with a custom build of boot2docker containing the Guest Additions and the configuration to make it all work. We just have to install it.

First, let’s remove the web container and shut down our VM:

> docker stop web
> docker rm web
> boot2docker down

Next, we’ll download the custom build:

> curl http://static.dockerfiles.io/boot2docker-v1.2.0-virtualbox-guest-additions-v4.3.14.iso > ~/.boot2docker/boot2docker.iso

Finally, we share the /Users directory with our VM and start it up again:

> VBoxManage sharedfolder add boot2docker-vm -name home -hostpath /Users
> boot2docker up

Replacing the boot2docker image won’t erase any of the data in your VM, so don’t worry about losing any of your containers. Good guy boot2docker.

Let’s try this again:

> docker run -d -P -v /Users/Chris/web:/usr/local/nginx/html --name web nginx
> docker port web 80
> curl dockerhost:49153

Great success! Let’s verify that we’re using a volume by creating a new file on OS X and seeing if nginx serves it up:

> echo 'hooray!' > hooray.html
> curl dockerhost:49153/hooray.html

Sweet damn. Make sure to stop and remove the container:

> docker stop web
> docker rm web

If you update index.html and curl it, you won’t see your changes. This is because nginx ships with sendfile turned on, which doesn’t play well with VirtualBox. The solution is simple—turn off sendfile in the nginx config file—but outside the scope of this post.
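For the curious, the change the author alludes to is a one-liner in the container’s nginx configuration (the file path varies by image; this fragment is illustrative):

```nginx
# nginx.conf fragment (path may differ by image): read files directly rather
# than via sendfile, which serves stale content from VirtualBox shared folders
http {
    sendfile off;
}
```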

Problem #3: Getting Inside a Container

The Problem: How do I get in there?

So you’ve got your shiny new container running. The ports are forwarding and the volumes are ... voluming. Everything’s cool, until you realize something’s totally uncool. You’d really like to start a shell in there and poke around.

The Solution: Linux Magic

Enter nsenter. nsenter is a program that allows you to run commands inside a kernel namespace. Since a container is just a process running inside its own kernel namespace, this is exactly what we need to start a shell inside our container. Let’s make it so.

This part deals with shells running in three different places. Très confusing. I’ll use a different prompt to distinguish each:

  • > for OS X
  • $ for the boot2docker VM
  • % for inside a Docker container

First, let’s SSH into the boot2docker VM:

> boot2docker ssh

Next, install nsenter:

$ docker run --rm -v /var/lib/boot2docker:/target jpetazzo/nsenter

(How does that install it? jpetazzo/nsenter is a Docker image configured to build nsenter from source. When we start a container from this image, it builds nsenter and installs it to /target, which we’ve set to be a volume pointing to /var/lib/boot2docker in our VM.

In other words, we start a prepackaged build environment for nsenter, which compiles and installs it to our VM using a volume. How awesome is that? Seriously, how awesome? Answer me!)

Finally, we need to add /var/lib/boot2docker to the docker user’s PATH inside the VM:

$ echo 'export PATH=/var/lib/boot2docker:$PATH' >> ~/.profile
$ source ~/.profile

We should now be able to use the installed binary:

$ which nsenter

Let’s start our nginx container again and see how it works (remember, we’re still SSH’d into our VM):

$ docker run -d -P --name web nginx

Time to get inside that thing. nsenter needs the pid of the running container. Let’s get it:

$ PID=$(docker inspect --format '{{ .State.Pid }}' web)

The moment of truth:

$ sudo nsenter -m -u -n -i -p -t $PID
% hostname

Great success! Let’s confirm we’re inside our container by listing the running processes (we have to install ps first):

% apt-get update
% apt-get install -y procps
% ps -A
  PID TTY          TIME CMD
    1 ?        00:00:00 nginx
    8 ?        00:00:00 nginx
   29 ?        00:00:00 bash
  237 ?        00:00:00 ps
% exit

We can see two nginx processes, our shell, and ps. How cool is that?

Getting the pid and feeding it to nsenter is kind of a pain. jpetazzo/nsenter includes docker-enter, a shell script that does it for you:

$ sudo docker-enter web
% hostname
% exit

The default command is sh, but we can run any command we want by passing it as arguments:

$ sudo docker-enter web ps -A
  PID TTY          TIME CMD
    1 ?        00:00:00 nginx
    8 ?        00:00:00 nginx
  245 ?        00:00:00 ps

This is totally awesome. It would be more totally awesomer if we could do it directly from OS X. jpetazzo’s got us covered there, too (that guy thinks of everything), with a bash script we can install on OS X. Below is the same script, but with a minor change to default to bash, because that’s how I roll.

Just stick this bro anywhere in your OS X PATH (and chmod +x it, natch) and you’re all set:

#!/bin/bash
set -e

# Check for nsenter. If not found, install it
boot2docker ssh '[ -f /var/lib/boot2docker/nsenter ] || docker run --rm -v /var/lib/boot2docker/:/target jpetazzo/nsenter'

# Use bash if no command is specified
args=("$@")
if [[ $# = 1 ]]; then
  args+=(bash)
fi

boot2docker ssh -t sudo /var/lib/boot2docker/docker-enter "${args[@]}"

Let’s test it out:

> docker-enter web
% hostname

Yes. YES. Cue guitar solo.

Don’t forget to stop and remove your container (nag nag nag):

> docker stop web
> docker rm web

The End

You now have a Docker environment running on OS X that does all the things you’d expect. You’ve also hopefully learned a little about how Docker works and how to use it. We’ve had some laughs, and we’ve learned a lot, too. I’m glad we’re friends.

If you’re ready to learn more about Docker, check out The Docker Book. I can’t recommend it enough. Throw some money at that guy.

The Future Soon

Docker might be the new kid on the block, but we’re already thinking about ways to add it to our workflow. Stay tuned for great justice.

Was this post helpful? How are you using Docker? Let me know down there in the comments box. Have a great day. Call your mom.

Author: "Chris Jones" Tags: "Extend"
Date: Wednesday, 20 Aug 2014 15:36

Despite some exciting advances in the field, like Node, Redis, and Go, a well-structured relational database fronted by a Rails or Sinatra (or Django, etc.) app is still one of the most effective toolsets for building things for the web. In the coming weeks, I’ll be publishing a series of posts about how to be sure that you’re taking advantage of all your RDBMS has to offer.

If you only require a few attributes from a table, rather than instantiating a collection of models and then running a .map over them to get the data you need, it’s much more efficient to use .pluck to pull back only the attributes you need as an array. The benefits are twofold: better SQL performance and less time and memory spent in Rubyland.

To illustrate, let’s use an app I’ve been working on that takes Harvest data and generates reports. As a baseline, here is the execution time and memory usage of rails runner with a blank instruction:

$ time rails runner ""
real  0m2.053s
user  0m1.666s
sys   0m0.379s

$ memory_profiler.sh rails runner ""
Peak: 109240

In other words, it takes about two seconds and 100MB to boot up the app. We calculate memory usage with a modified version of this Unix script.

Now, consider a TimeEntry model in our time tracking application (of which there are 314,420 in my local database). Let’s say we need a list of the dates of every single time entry in the system. A naïve approach would look something like this:

dates = TimeEntry.all.map { |entry| entry.logged_on }

It works, but seems a little slow:

$ time rails runner "TimeEntry.all.map { |entry| entry.logged_on }"
real  0m14.461s
user  0m12.824s
sys   0m0.994s

Almost 14.5 seconds. Not exactly webscale. And how about RAM usage?

$ memory_profiler.sh rails runner "TimeEntry.all.map { |entry| entry.logged_on }"
Peak: 1252180

About 1.25 gigabytes of RAM. Now, what if we use .pluck instead?

dates = TimeEntry.pluck(:logged_on)

In terms of time, we see major improvements:

$ time rails runner "TimeEntry.pluck(:logged_on)"
real  0m4.123s
user  0m3.418s
sys   0m0.529s

So from roughly 15 seconds to about four. Similarly, for memory usage:

$ memory_profiler.sh bundle exec rails runner "TimeEntry.pluck(:logged_on)"
Peak: 384636

From 1.25GB to less than 400MB. When we subtract the overhead we calculated earlier, we’re going from 15 seconds of execution time to two, and 1.15GB of RAM to 300MB.

Using SQL Fragments

As you might imagine, there’s a lot of duplication among the dates on which time entries are logged. What if we only want unique values? We’d update our naïve approach to look like this:

dates = TimeEntry.all.map { |entry| entry.logged_on }.uniq

When we profile this code, we see that it performs slightly worse than the non-unique version:

$ time rails runner "TimeEntry.all.map { |entry| entry.logged_on }.uniq"
real  0m15.337s
user  0m13.621s
sys   0m1.021s

$ memory_profiler.sh rails runner "TimeEntry.all.map { |entry| entry.logged_on }.uniq"
Peak: 1278784

Instead, let’s take advantage of .pluck’s ability to take a SQL fragment rather than a symbolized column name:

dates = TimeEntry.pluck("DISTINCT logged_on")

Profiling this code yields surprising results:

$ time rails runner "TimeEntry.pluck('DISTINCT logged_on')"
real  0m2.133s
user  0m1.678s
sys   0m0.369s

$ memory_profiler.sh rails runner "TimeEntry.pluck('DISTINCT logged_on')"
Peak: 107984

Both running time and memory usage are virtually identical to executing the runner with a blank command, or, in other words, the result is calculated at an incredibly low cost.

Using .pluck Across Tables

Requirements have changed, and now, instead of an array of timestamps, we need an array of two-element arrays consisting of the timestamp and the employee’s last name, stored in the “employees” table. Our naïve approach then becomes:

dates = TimeEntry.all.map { |entry| [entry.logged_on, entry.employee.last_name] }

Go grab a cup of coffee, because this is going to take a while.

$ time rails runner "TimeEntry.all.map { |entry| [entry.logged_on, entry.employee.last_name] }"
real  7m29.245s
user  6m52.136s
sys   0m15.601s

$ memory_profiler.sh rails runner "TimeEntry.all.map { |entry| [entry.logged_on, entry.employee.last_name] }"
Peak: 3052592

Yes, you’re reading that correctly: 7.5 minutes and 3 gigs of RAM. We can improve performance somewhat by taking advantage of ActiveRecord’s eager loading capabilities.

dates = TimeEntry.includes(:employee).map { |entry| [entry.logged_on, entry.employee.last_name] }

Benchmarking this code, we see significant performance gains, since we’re going from over 300,000 SQL queries to two.

$ time rails runner "TimeEntry.includes(:employee).map { |entry| [entry.logged_on, entry.employee.last_name] }"
real  0m21.270s
user  0m19.396s
sys   0m1.174s

$ memory_profiler.sh rails runner "TimeEntry.includes(:employee).map { |entry| [entry.logged_on, entry.employee.last_name] }"
Peak: 1606204

Faster (from 7.5 minutes to 21 seconds), but certainly not fast enough. Finally, with .pluck:

dates = TimeEntry.includes(:employee).pluck(:logged_on, :last_name)


$ time rails runner "TimeEntry.includes(:employee).pluck(:logged_on, :last_name)"
real  0m4.180s
user  0m3.414s
sys   0m0.543s

$ memory_profiler.sh rails runner "TimeEntry.includes(:employee).pluck(:logged_on, :last_name)"
Peak: 407912

A hair over 4 seconds execution time and 400MB RAM – hardly any more expensive than without employee names.


  • Prefer .pluck to instantiating a collection of ActiveRecord objects and then using .map to build an array of attributes.

  • .pluck can do more than simply pull back attributes on a single table: it can run SQL functions, pull attributes from joined tables, and tack on to any scope.

  • Whenever possible, let the database do the heavy lifting.

Author: "David Eisinger" Tags: "Extend"
Date: Friday, 15 Aug 2014 14:29

When I first found out that I managed to land a Rails internship at Viget, I was incredibly excited: not only was I getting to intern at a great company, I’d even get the chance to learn all I could ever need to know about Ruby on Rails in the gap between my exams and the internship. By the time that June 9 came around, I felt like I had a pretty solid foundation; by June 10, I wasn’t so sure. I wasn’t truly struggling, but I didn’t really feel like I was able to do any of my assignments better than I would have a month prior.

For the first several weeks, I wondered how I could have better prepared. At first, I thought that it was simply a matter of time—had I dedicated more hours per day, I surely would have been better off, right? Now that I’ve had more time to learn and reflect, however, I’ve realized that the issue was less about the quantity of time I spent but rather the quality of that time. Since I was completely new to Ruby, I just didn’t know what to prioritize.

Now that I have a little experience under my belt, I have a much better grasp on what my prep work should have entailed. To help any future interns here at Viget—or anyone else who wants to start working with Rails—avoid using their time poorly, I’ve created a small guide to help other Ruby newbies gain the knowledge and experience they’ll need to be ready for an entry-level position in Rails.

Obligatory disclaimer: since learning anything as significant as Rails is a non-trivial undertaking, you might not end up liking the resources that I’ve outlined in this post. However, I do think that the technologies that I’ll cover are pretty important. As a result, I’ve compiled a list of resources for all of the different technologies that I’ve needed to know in order to effectively use Rails this summer to supplement this blog post. If you’re not a fan of the individual resources that I cover here, feel free to substitute others, either from the Gist or from your own findings. With that said, let’s get started.


Surprise, surprise: in order to develop with Rails, you’ll need to know how to code. In particular, you’ll need to know how to use Ruby and have some experience in object-oriented programming (OOP).

If you’re completely new to programming, you’re definitely going to want to start with the basics. While I was lucky enough to take a course introducing me to OOP—which, for me, was the best option—I really like this interactive Ruby course from Codecademy. It starts off very simply, but it advances in complexity as the lessons go on. From here, you’ll probably want to do a few more tutorials to make sure that you get as much practice in as possible.

For those who already have programming experience prior to learning Ruby, RubyMonk is the way to go. The beginning lessons are a little less basic than most other interactive courses, and each book is organized well enough to allow you to skip between different topics very easily. This means you don’t have to trudge through programming basics to get to what you need. In addition, there are also several books for non-beginners, which allows you to touch on some more advanced topics if you have the time and inclination.

Finally, if you’re the type of person that prefers books over tutorials, then check out Mr. Neighborly’s Humble Little Ruby Book or Why’s Poignant Guide to Ruby. Each talks about Ruby as a programming language in addition to teaching you how to use it; personally, I think that’s incredibly useful (and interesting!) knowledge that can help you to write better code. Plus, since neither is interactive, you can download them to a reader or tablet and won’t have to worry about being tied to a Wifi connection.

Local Environment Setup

Since most of your Ruby work up to this point has likely been via web interfaces, it’s a great time to set up your very own, personalized development environment. For most people, this will mean installing Ruby to a machine of your choosing and figuring out how you’re going to want to write and run your code.

My personal preference is to do everything on my computer’s command line: I can easily access Vim for code writing, set up a server to check out my work in-browser or run my test suite (more on tests later). Plus, since navigating more than ten lines of code with nothing but arrow keys would have driven me crazy, this approach forced me to learn some of the more advanced features of my text editor. If you decide to use Vim, make sure to check out and install some of its Ruby plugins to make your development that much more fluid and personalized.

If Vim doesn’t suit your fancy, other solid text editors include Emacs, Sublime and Atom. Aptana, an integrated development environment (IDE), is also an option and might be the best bet for anyone that’s a fan of Eclipse. Regardless of how you decide to run and execute your code, I’d strongly recommend learning the basics of utilizing the command line quickly—it ends up being both very simple and very valuable.

Now that you’ve got a snazzy, new development environment set up, you can use it to write some basic Ruby programs to hone your skills. For ideas, check out CodeEval: it has a lot of different challenges of varying difficulties. It even includes some common interview questions, like FizzBuzz. For additional practice, you can also try out CodeWars.

Front End Technologies

Since Rails is a web framework, you’re naturally going to need an understanding of the web application ecosphere, and that means learning about the trifecta of web technologies: HTML, CSS and Javascript. Regardless of the work you plan to be doing, you should be able to utilize HTML and CSS effectively enough to create static web pages that look fairly nice before tackling Rails; luckily, the basics for both are pretty easy to grasp. In addition, there are also a lot of different resources for learning the two. Personally, I like the approach that Thoughtbot takes.

Javascript, on the other hand, is a little more complicated. As “the scripting language of the web”, JS is one of the most useful and utilized tools on the Internet, but, as far as modern programming languages go, it’s fairly different from Ruby. If your work will primarily be on the back end, you probably won’t need incredibly strong Javascript skills, but you will need to know why and when it’s used. If you’re also doing significant front end work, your Javascript knowledge will need to be considerably more comprehensive from the get-go. As with HTML and CSS, it’s fortunately not difficult to find tutorials and guides online. Again, Thoughtbot has a pretty solid list.

Relational Databases

If you’re going to work with persistent data—hint: you will—you’re going to need a basic understanding of relational databases. Luckily, a relational database is exactly as it sounds: a database that can store relations. For example, the clone of Hacker News that I created for my internship has the notion of both a User and an Article. To keep track of who posted what, my app is implemented such that a User has_many Articles; in other words, there’s a relationship between Users and Articles. Pretty basic, right? As with most any important technology, there’s more to it than that, so it’s worth checking out the Resources Gist or doing some Googling to learn more.

In addition to the notion of designing a relational database, there’s also the concern of being able to get data out of said database. Traditionally, a developer would write a query to the database in SQL to access data; luckily for us, Rails can handle a lot of the querying you’ll need by itself. That said, you may very well run into a situation where using Rails’ built-in queries won’t make the most sense. In those cases, a basic understanding of SQL is incredibly useful. Even if you happen to be lucky enough to avoid needing to write your own SQL queries like me—but unlike Nathanael—understanding how to use SQL will make your more complicated Rails queries much easier, which is due to the fact that Rails and SQL share many of the verbs used for querying, such as join and merge. As far as learning SQL goes, I’m a big fan of Learn SQL the Hard Way—just don’t be intimidated by the name.

Version Control

You’re now just one step away from getting to Rails itself: learning how to use version control. While there are a few different options available, Git is easily the most popular. As with Ruby, you’ll want to install Git to your preferred machine. You can then run through Git Immersion to familiarize yourself with running Git via the command line.

Next, you’ll probably want to learn how to use Github, which offers you the ability to store all of your Git repositories online and even view and participate in open source projects. In addition, it’s a great way to show off your code to potential employers. You can also throw single file programs, like CodeEval solutions you’ve written, up in Gists.

Rails (and Testing!)

Now that you’re a bonafide pro with most of its companion technologies, it’s time to dive into Rails in earnest. The most commonly recommended Rails tutorial—aptly named the Ruby on Rails Tutorial by Michael Hartl—also happens to be, in my opinion, the best. While it’s not perfect, it does a great job of teaching Rails while touching on Git, Heroku and testing. I’d highly recommend doing the additional problems that Hartl leaves at the end of each chapter: they’re not terribly complicated, but they do ensure that you’re developing in Rails yourself rather than just following what the book lays out for you.

While it is covered in the Rails Tutorial, I feel like I really need to stress the importance of learning how to test your code properly. As part of my internship, I primarily used RSpec, Capybara and FactoryGirl for testing, though several other options do exist.

Generally, I’d write a test for any features of my applications—like being able to post an article on my Hacker News clone or submit a pick on Winsome. For these feature tests, I’d do my best to ensure that my tests were emulating a user’s experience as closely as possible: rather than ensuring that an article was added to the database, for example, I’d check to make sure it showed up on the home page. The second form of test I’d usually write would be a model test. These would just check that each of my objects—rather than my features—were performing as expected. For example, I wrote a test for my Hacker News clone to ensure that you couldn’t create a new User with the email address of an existing User. To be safe, I’d also write tests for any significant amount of code that wasn’t covered in one of the previous two cases.


Now that you have an idea of a curriculum you might want to follow to learn Rails, you may be thinking: this looks like a lot of work. Unfortunately, there’s no guaranteed quick and easy way to learn Rails. That said, keep in mind that—for entry level positions or internships, at least—you shouldn’t need to be a complete pro. Even now that I’ve finished a really comprehensive Rails internship, there’s still plenty for me to learn about and improve upon. In particular, I’m really interested in metaprogramming Ruby, and my SQL and Javascript skills can definitely use some more work. The real key to this guide is that you should have an idea of how each of these different pieces work together to make a working web application.

I also encourage anyone that’s trying to learn Rails to keep track of what did and didn’t work for you. Feel free to fork the Resources Gist I mentioned earlier and create your own version that you can share with other aspiring Rails devs in the future. Last, but certainly not least: good luck!

Author: "Andy Andrea" Tags: "Extend"
Date: Monday, 11 Aug 2014 17:44

While working on a new feature, I ran into a problem where the response time for a single page was approaching 30 seconds. Each request made an API call, so I first ruled that out as the culprit by logging the response time for every external request. I then turned my attention to the Rails log.

Rails gives a decent breakdown of where it's spending time before delivering a response to the browser — you can see how much time is spent rendering views, individual partials, and interacting with the database:

Completed 200 OK in 32625.1ms (Views: 31013.9ms | ActiveRecord: 16.8ms)

In my case, this didn't give enough detail to know exactly what the problem was, but it gave me a good place to start. In order to see what needed optimization, I turned to Ruby's Benchmark class to log the processing time for blocks of code. I briefly looked at Rails' instrumentation facilities, but it wasn't clear how to use them to get the result I wanted.

Instead, I whipped up a quick class with a corresponding helper to log processing times for a given block of code that I've since turned into a gem. While I used this in a Rails application, it will work in any Ruby program as well. To use, include the helper in your class and wrap any code you want to benchmark in a log_execution_for block:

class Foo
  include SimpleBenchmark::Helper

  def do_a_thing
    log_execution_for('wait') { sleep(1); 'done' }
  end
end

Calling Foo#do_a_thing will create a line in your logfile with the label "wait" and the time the block of code took to execute. By default, it will use Rails.logger if available or will write to the file benchmark.log in the current directory. You can always override this by setting the value of SimpleBenchmark.logger. When moving to production, you can either delete the benchmarking code or leave it in and disable it with SimpleBenchmark.enabled = false.
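For a sense of how such a helper can work, here's a rough sketch built on Ruby's standard Benchmark class. This is an illustration of the pattern, not the actual simple_benchmark source, so treat the internals as assumptions:

```ruby
require 'benchmark'
require 'logger'

# Hypothetical sketch of a log_execution_for helper in the style described
# above (not the real gem's internals).
module SimpleBenchmark
  class << self
    attr_writer :logger, :enabled

    # Default to a file logger, mirroring the behavior described above.
    def logger
      @logger ||= Logger.new('benchmark.log')
    end

    def enabled?
      @enabled.nil? || @enabled
    end
  end

  module Helper
    # Run the block, log how long it took under +label+, and return its value.
    def log_execution_for(label)
      return yield unless SimpleBenchmark.enabled?

      result  = nil
      seconds = Benchmark.realtime { result = yield }
      SimpleBenchmark.logger.info(format('%s: %.4fs', label, seconds))
      result
    end
  end
end
```

Setting `SimpleBenchmark.enabled = false` short-circuits straight to the block, so disabled benchmarking adds effectively no overhead.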

If you're using it inside of Rails like I was, create an initializer and optionally add SimpleBenchmark::Helper to the top of your ApplicationController:

# config/initializers/benchmarking.rb
require 'simple_benchmark'
SimpleBenchmark.enabled = Rails.env.development?

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  include SimpleBenchmark::Helper
  helper_method :log_execution_for
end

Happy benchmarking!

Author: "Patrick Reagan" Tags: "Extend"
Date: Friday, 08 Aug 2014 14:56

Great programming is worthless if you’re building the wrong thing. Testing early and often can help validate your assumptions to keep a project or company on the right track. This type of testing can often require traffic distribution strategies for siphoning users into test groups.

Nginx configured as a load balancer can accommodate complex distribution logic and provides a simple solution for most split test traffic distribution needs.

Full App Tests

In straightforward cases, when we’re testing an entirely new version of an application that runs on a distinct server, we can create a load balancer to proxy requests for the domain, passing a desired portion of requests to the test server.

Full App Test Diagram

The Nginx configuration for this load balancer could be as simple as:

http {
    upstream appServer {
        ip_hash;
        server old.app.com weight=9;
        server new.app.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://appServer;
        }
    }
}
In this example, Nginx is configured to choose between passing requests to one of two app servers. (old.app.com and new.app.com could be listed as IP addresses instead if you’re into that sort of thing.)

Session Persistence

ip_hash is used for session persistence (to avoid having visitors see two different versions of the app on subsequent requests.)

Distribution Weighting

Most test cases require that just a small fraction of requests be passed to the test version of the app. Here, a weight parameter is used to adjust how frequently requests are passed to the new version of the app under test. A weight of 9 has the same effect as having 9 separate entries for the old.app.com server. When the routing decision is made, Nginx chooses one server from the (effective) 10 server entries. In this case, the new app server will be passed 10% of requests.
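The weighting math is easy to sanity-check in a few lines of Ruby by treating weight=9 as nine duplicate pool entries (a simulation of the effect, not how Nginx is implemented internally):

```ruby
# weight=9 behaves like nine duplicate entries for the old server, so the
# new server ends up with roughly 1 pick in 10.
pool = Array.new(9, 'old.app.com') + ['new.app.com']

counts = Hash.new(0)
10_000.times { counts[pool.sample] += 1 }

new_share = counts['new.app.com'] / 10_000.0
# new_share hovers around 0.10
```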

Partial App Tests

Split testing a partial replacement for an existing app is more complicated than a full replacement. If we simply switched requests between servers with a partial replacement, requests could be made to the new app server for portions of the existing app that do not exist in the version under test. 404 time.

Subdomain Partial Redirection Strategy

In these complex cases, it may be best to make your test version available via a subdomain, preserving the naked domain for accessing necessary portions of the old application.

Partial App Test Diagram

We recently tested a replacement for a homepage and several key pages for an existing application. Our Nginx configuration looked like the one below. (Or here if you would prefer an uncommented version.)

# Define a cluster to which you can proxy requests. In this case, we will be
# proxying requests to just a single server: the original app server.
# See http://nginx.org/en/docs/http/ngx_http_upstream_module.html

upstream app.com {
  server old.app.com;
}
# Assign to a variable named $upstream_variant a pseudo-randomly chosen value,
# with "test" being assigned 10% of the time, and "original" assigned the
# remaining 90% of the time. (This is group determination for requests not
# already containing a group cookie.)
# See http://nginx.org/en/docs/http/ngx_http_split_clients_module.html

split_clients "app${remote_addr}${http_user_agent}${date_gmt}" $upstream_variant {
  10% "test";
  *   "original";
}
# Assign to a variable named $upstream_group a value mapped from the value
# present in the group cookie. If the cookie's value is present, preserve
# the existing value. If it is not, assign the value of $upstream_variant.
# See http://nginx.org/en/docs/http/ngx_http_map_module.html
# Note: the value of cookies is available via the $cookie_ variables
# (i.e. $cookie_my_cookie_name will return the value of the cookie named 'my_cookie_name').

map $cookie_split_test_version $upstream_group {
  default    $upstream_variant;
  "test"     "test";
  "original" "original";
}
# Assign to a variable named $internal_request a value indicating whether or
# not the given request originates from an internal IP address. If the request
# originates from an IP within the defined internal range, assign 1.
# Otherwise assign 0.
# See http://nginx.org/en/docs/http/ngx_http_geo_module.html

geo $internal_request {
  ranges;
  # internal IP range elided, e.g. 192.168.0.0-192.168.0.255 1;
  default 0;
}
server {
  listen 80 default_server;
  listen [::]:80 default_server ipv6only=on;
  server_name app.com www.app.com;

  # For requests made to the root path (in this case, the homepage):
  location = / {
    # Set a cookie containing the selected group as its value. Expire after
    # 6 days (the length of our test).
    add_header Set-Cookie "split_test_version=$upstream_group;Path=/;Max-Age=518400;";

    # Requests by default are not considered candidates for the test group.
    set $test_group 0;

    # If the request has been randomly selected for the test group, it is now
    # a candidate for redirection to the test site.
    if ($upstream_group = "test") {
      set $test_group 1;
    }

    # Regardless of the group determination, if the request originates from
    # an internal IP it is not a candidate for the test group.
    if ($internal_request = 1) {
      set $test_group 0;
    }

    # Redirect test group candidate requests to the test subdomain.
    # See http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#return
    if ($test_group = 1) {
      return 302 http://new.app.com/;
    }

    # Pass all remaining requests through to the old application.
    # See http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
    proxy_pass http://app.com;
  }

  # For requests to all other paths:
  location / {
    # Pass the request through to the old application.
    proxy_pass http://app.com;
  }
}
Switch Location

Partial split tests require a specific location from which to assign the test group and begin the test. In this case, the root location (homepage) is used. Requests to the root path are assigned a test group and redirected or passed through as appropriate. Requests to all other locations are passed through to the original app server.

Session Persistence (and Expiration)

In this case, we use a cookie (add_header Set-Cookie ...) to establish session persistence. When a request is assigned to either the test or original group, this group is recorded in a cookie which is accessed upon subsequent requests to ensure a consistent experience for the duration of the test.

Since our split test was set to last less than 6 days, we set a six day expiration on the cookie.
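For reference, the Max-Age arithmetic behind that six-day cookie:

```ruby
# The cookie lifetime from the Set-Cookie header above, worked out:
seconds_per_day = 24 * 60 * 60
max_age = 6 * seconds_per_day
# => 518400, the Max-Age value used in the config
```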

IP Filtering

In many cases, split tests require filtering out certain parties (in this case, the company that owns the application) to avoid skewing test results. Here, we use Nginx’s geo to determine whether or not the request originates from the company’s internal network. Later on, this determination is used to direct the request straight through to the original version of the app.

302 Redirection

Since most of the original application must remain accessible, instead of passing test group traffic through to the new server, we instead 302 redirect the request to the new application subdomain. This allows the user to seamlessly switch between viewing content provided by the new and original applications.

Nginx is a terrific tool for distributing traffic for split tests. It’s stable, it’s blazingly fast, and configurations for typical use cases are prevalent online. More complex configuration can be accomplished after just a couple hours exploring the documentation. Give Nginx a shot next time you split test!

Author: "Lawson Kurtz" Tags: "Extend"
Date: Thursday, 07 Aug 2014 18:01

The Viget development and front-end development teams use GitHub pull requests to critique and improve code on all our projects. I built a tool for visualizing how the team members comment on each other's PRs, and exposing some neat facts about the interactions.

Checkoning (GitHub link) does three things:

  • Pulls down PR data for a specific team and saves it as a massive JSON file.
  • Digs through the raw data to find interesting data to visualize.
  • Exposes the data on a single page, mostly using D3 visualizations.

The results from running it on Viget teams are pretty cool:

Force-directed graph of who Dan comments on most (and vice versa)


Had a slow May, I guess.


Tommy sure likes PHP.


Even though Checkoning is just an experiment, you can clone it and run it on your own teams. Teams roughly the size of Viget’s should work fine, but other sizes will need some tweaking to produce nice visualizations. Have fun!

Author: "Doug Avery" Tags: "Extend"
Date: Tuesday, 05 Aug 2014 15:33

One of Viget's recent internal projects, SocialPiq, had some pretty heavy requirements surrounding user-driven search. The main feature of the site was to allow users to search by a number of various criteria, many of which were backed by ActiveRecord models.

Fortunately, one of Rails' strengths is its ability to associate objects and allow easy inspection and traversal of relationships. We could make a form from scratch using a combination of #text_field, #select, and #collection_select; however, we'd have to tell our controller how to interpret the search parameters and how to match and fetch results. Why not have Rails and its built-in constructs do most of that work for us?

First-Class Search Object

Instead of having to fill in all the logic ourselves, we can create an ActiveRecord model to represent a single search. We'll call this model Search. With this approach, each search is an instance of our Search model that can be passed around, respond to method calls, and be persisted in our database. We can create associations to any of the other models that we want to be included as search criteria.

For example, in SocialPiq, users needed to be able to select a Capability as well as any number of SocialSites via SiteIntegrations. Capability, SocialSite, and SiteIntegration are models, so we can set up associations for each of them. In addition, let's say we're trying to match against a Tool and we want a results method that gives us all the tools for a given search. Here's what our model might look like:

class Search < ActiveRecord::Base
  belongs_to :capability
  has_many :site_integrations
  has_many :social_sites, through: :site_integrations

  def results
    @results ||= begin
      tools         = Tool.joins(:site_integrations)
      matched_tools = scope.empty? ? tools : tools.where(scope)
    end
  end

  private

  def scope
    {
      capability_id: capability_id,
      site_integrations: site_ids_scope
    }.delete_if { |key, value| value.nil? }
  end

  def site_ids_scope
    ids = social_sites.pluck(:id)
    { social_site_id: ids } if ids.any?
  end
end

Breaking Down the Model

There are two main things we're doing in our model.

  1. Defining our associations
  2. Defining a results method along with a few private helper methods to aid in finding our search results.

The purpose for our model is to look at a given Search and compare its associated records against the associated records for each Tool. For example, if a Search has the same Capability as a Tool, we want to include that Tool in our results set.

To do this, we can utilize Rails' querying methods to find matching Tools. Our scope method returns a hash based on the ids of our search's associated records, which we can simply feed into the where query method (like Tool.where(scope)). In our case, we want to show all records when a user doesn't select a value for given search criteria. To handle that, when a Search doesn't have any associated records, its scope method returns an empty hash, which we'll check for and then return all the tools instead of calling where with an empty scope.
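Here's the same nil-stripping pattern in a self-contained sketch (build_scope is a hypothetical stand-in for the model's private helper methods):

```ruby
# Framework-free sketch of the scope-building pattern above: nil criteria
# are dropped so `where` only filters on what the user actually selected.
def build_scope(capability_id:, social_site_ids:)
  site_scope = { social_site_id: social_site_ids } if social_site_ids.any?

  {
    capability_id: capability_id,
    site_integrations: site_scope
  }.delete_if { |_key, value| value.nil? }
end
```

When the user selects nothing, the hash comes back empty, which is the signal to skip the `where` call and return all records.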

The Search Form

With our Search model and using the SimpleForm gem, we get a beautifully simple form:

<%= simple_form_for @search do |f| %>
  <%= f.association :capability, include_blank: 'Any' %>
  <%= f.association :social_sites, include_blank: 'Any' %>
  <%= f.button :submit, 'Search' %>
<% end %>

Super clean! What happens in our controller once we get the parameters from the search form submission though?

The Search Controller

Again, when we're following Rails conventions, everything seems to drop in really well:

class SearchesController < ApplicationController
  def new
    @search = Search.new
  end

  def create
    @search = Search.create(search_params)
    redirect_to search_path(@search), notice: "#{@search.results.size} results found."
  end

  def show
  end

  private

  def search_params
    params.require(:search).permit(:capability_id, social_site_ids: [])
  end

  def search
    @search ||= Search.find(params[:id])
  end
  helper_method :search
end

Once users submit our search form, they'll be taken to the show page for a search, where we can simply call search.results to get a list of matching tools. Since we're persisting searches, we could easily add edit and update actions to our controller, allowing users to fine-tune their searches without having to start from scratch.

A Note on ActiveRecord vs. ActiveModel Searches

You may choose to persist your searches, creating a full-fledged Rails model inheriting from ActiveRecord::Base, as I've illustrated in our example. However, if searches don't need to be persisted, check out ActiveModel, which lets you build a plain Ruby search class and mix in modules like validations and callbacks.


By making Search a first-class object in our application, we're able to create a well-defined model (literally) of our search and its criteria, simplify the form, work with Rails conventions in our controller, and get persisted searches practically free. Next time you're in a situation where you need to construct custom searches across your models, consider making Search a first-class object for great justice!

Author: "Ryan Stenberg" Tags: "Extend"
Date: Tuesday, 29 Jul 2014 12:58

Custom ActiveModel::Validators are an easy way to validate individual attributes on your Rails models. All that's required is a Ruby class that inherits from ActiveModel::EachValidator and implements a validate_each method that takes three arguments: record, attribute, and value. I have written a few lately, so I pinged the rest of the amazingly talented Viget developers for some contributions. Here's what we came up with.

Simple URI Validator

A "simple URI" can be either a relative path or an absolute URL. In this case, any value that could be parsed by Ruby's URI module is allowed:

class UriValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless valid_uri?(value)
      record.errors[attribute] << (options[:message] || 'is not a valid URI')
    end
  end

  private

  def valid_uri?(uri)
    URI.parse(uri)
    true
  rescue URI::InvalidURIError
    false
  end
end

Full URL Validator

A "full URL" is defined as requiring a host and scheme. Ruby provides a regular expression to match against, so that's what is used in this validator:

class FullUrlValidator < ActiveModel::EachValidator
  VALID_SCHEMES = %w(http https)

  def validate_each(record, attribute, value)
    unless value =~ URI::regexp(VALID_SCHEMES)
      record.errors[attribute] << (options[:message] || 'is not a valid URL')
    end
  end
end

Ruby's regular expression can be seen as too permissive. For a stricter regular expression, Brian Landau shared this GitHub gist.

Email Validator

My good friends Lawson Kurtz and Mike Ackerman contributed the following email address validator:

class EmailValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless value =~ /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\z/i
      record.errors[attribute] << (options[:message] || "is not a valid e-mail address")
    end
  end
end
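Here's a quick sanity check of that pattern (the valid_email? wrapper is just for the example); note that the \A and \z anchors reject multi-line input:

```ruby
# The email pattern from the validator above, exercised directly.
EMAIL_PATTERN = /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\z/i

# Illustrative wrapper, not part of the validator itself.
def valid_email?(value)
  !!(value =~ EMAIL_PATTERN)
end
```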

If you'd rather validate by performing a DNS lookup, Brian Landau has you covered with this GitHub gist.

Secure Password Validator

Lawson provided this secure password validator (though credit goes to former Viget developer, James Cook):

class SecurePasswordValidator < ActiveModel::EachValidator
  WORDS = YAML.load_file("config/bad_passwords.yml")

  def validate_each(record, attribute, value)
    if value.in?(WORDS)
      record.errors.add(attribute, "is a common password. Choose another.")
    end
  end
end

Twitter Handle Validator

Lawson supplied this validator that checks for valid Twitter handles:

class TwitterHandleValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless value =~ /\A[A-Za-z0-9_]{1,15}\z/
      record.errors[attribute] << (options[:message] || "is not a valid Twitter handle")
    end
  end
end

Hex Color Validator

A validator that's useful when an attribute should be a hex color value:

class HexColorValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless value =~ /\A([a-f0-9]{3}){,2}\z/i
      record.errors[attribute] << (options[:message] || 'is not a valid hex color value')
    end
  end
end

UPDATE: The regular expression has been simplified thanks to a comment from HappyNoff.
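One subtlety worth knowing: because {,2} allows zero repetitions, the empty string also matches, so pair this with a presence validation if blanks shouldn't slip through. A quick check (the valid_hex_color? wrapper is just for the example):

```ruby
# The simplified pattern above: matches 3- or 6-digit hex values
# (and "", since {,2} permits zero repetitions of the 3-digit group).
HEX_COLOR = /\A([a-f0-9]{3}){,2}\z/i

# Illustrative wrapper, not part of the validator itself.
def valid_hex_color?(value)
  !!(value =~ HEX_COLOR)
end
```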

Regular Expression Validator

A great solution for attributes that should be a regular expression:

class RegexpValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless valid_regexp?(value)
      record.errors[attribute] << (options[:message] || 'is not a valid regular expression')
    end
  end

  private

  def valid_regexp?(value)
    Regexp.new(value)
    true
  rescue RegexpError
    false
  end
end

Bonus Round

Replace all of those default error messages above with I18n translated strings for great justice. For the Regular Expression Validator above, the validate_each method could look something like this:

def validate_each(record, attribute, value)
  unless valid_regexp?(value)
    default_message = record.errors.generate_message(attribute, :invalid_regexp)
    record.errors[attribute] << (options[:message] || default_message)
  end
end

Then the following could be added to config/locales/en.yml:

en:
  errors:
    messages:
      invalid_regexp: is not a valid regular expression

Now the default error messages can be driven by I18n.


We've found these to be very helpful at Viget. What do you think? Which validators do you find useful? Are there others worth sharing? Please share in the comments below.

Author: "Zachary Porter" Tags: "Extend"
Date: Tuesday, 22 Jul 2014 15:29

As a developer, nothing makes me more nervous than third-party dependencies and things that can fail in unpredictable ways. More often than not, these two go hand-in-hand, taking our elegant, robust applications and dragging them down to the lowest common denominator of the services they depend upon. A recent internal project called for slurping in and then reporting against data from Harvest, our time tracking service of choice and a fickle beast on its very best days.

I knew that both components (/(im|re)porting/) were prone to failure. How to handle that failure in a graceful way, so that our users see something more meaningful than a 500 page, and our developers have a fighting chance at tracking and fixing the problem? Here’s the approach we took.

Step 1: Model the processes

Rather than importing the data or generating the report with procedural code, create ActiveRecord models for them. In our case, the models are HarvestImport and Report. When a user initiates a data import or a report generation, save a new record to the database immediately, before doing any work.

Step 2: Give ’em status

These models have a status column. We default it to “queued,” since we offload most of the work to a series of Resque tasks, but you can use “pending” or somesuch if that’s more your speed. They also have an error field for reasons that will become apparent shortly.

Step 3: Define an interface

Into both of these models, we include the following module:

module ProcessingStatus
  def mark_processing
    update_attributes(status: "processing")
  end

  def mark_successful
    update_attributes(status: "success", error: nil)
  end

  def mark_failure(error)
    update_attributes(status: "failed", error: error.to_s)
  end

  def process(cleanup = nil)
    mark_processing
    yield
    mark_successful
  rescue => ex
    mark_failure(ex)
  ensure
    cleanup.call if cleanup
  end
end

Lines 2–12 should be self-explanatory: methods for setting the object’s status. The mark_failure method takes an exception object, which it stores in the model’s error field, and mark_successful clears said error.

Line 14 (the process method) is where things get interesting. Calling this method immediately marks the object “processing,” and then yields to the provided block. If the block executes without error, the object is marked “success.” If any exception is raised, the object is marked “failed” and the error message is recorded. Either way, if a cleanup lambda is provided, we call it (courtesy of Ruby’s ensure keyword).
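The same lifecycle can be sketched without ActiveRecord. Here, TrackedJob is a hypothetical plain-Ruby stand-in that records status transitions in instance variables instead of database columns:

```ruby
# Self-contained sketch of the status-wrapping pattern: run a block,
# record "processing"/"success"/"failed" plus any error, and always run
# an optional cleanup lambda. (Plain Ruby stand-in for the module above.)
class TrackedJob
  attr_reader :status, :error

  def process(cleanup = nil)
    @status = 'processing'
    yield
    @status = 'success'
    @error  = nil
  rescue => ex
    @status = 'failed'
    @error  = ex.to_s
  ensure
    cleanup.call if cleanup
  end
end
```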

Step 4: Wrap it up

Now we can wrap our nasty, fail-prone reporting code in a process call for great justice.

class ReportGenerator
  attr_accessor :report

  def generate_report
    report.process -> { File.delete(file_path) } do
      # do some fail-prone work
    end
  end

  # ...
end
The benefits are almost too numerous to count: 1) no 500 pages, 2) meaningful feedback for users, and 3) super detailed diagnostic info for developers – better than something like Honeybadger, which doesn’t provide nearly the same level of context. (-> { File.delete(file_path) } is just a little bit of file cleanup that should happen regardless of outcome.)

* * *

I’ve always found it an exercise in futility to try to predict all the ways a system can fail when integrating with an external dependency. Being able to blanket rescue any exception and store it in a way that’s meaningful to users and developers has been hugely liberating and has contributed to a seriously robust platform. This technique may not be applicable in every case, but when it fits, it’s good.

Author: "David Eisinger" Tags: "Extend"
Date: Monday, 21 Jul 2014 14:56

Ever find yourself in a situation where you were given an ActiveRecord model and you wanted to figure out all the models it had a foreign key dependency (belongs_to association) with? Well, I had to do just that in some recent sprig-reap work. Given the class for a model, I needed to find all the class names for its belongs_to associations.

In order to figure this out, there were a few steps I needed to take:

Identify the Foreign Keys / belongs_to Associations

ActiveRecord::Base-inherited classes (models) provide a nice interface for inspecting associations -- the reflect_on_all_associations method. In my case, I was looking specifically for belongs_to associations. I was in luck! The method takes an optional argument for the kind of association. Here's an example:

Post.reflect_on_all_associations(:belongs_to)
# => array of ActiveRecord::Reflection::AssociationReflection objects

Once I had a list of all the belongs_to associations, I needed to then figure out what the corresponding class names were.

Identify the Class Name from the Associations

When dealing with ActiveRecord::Reflection::AssociationReflection objects, there are two places where class names can be found: the association's name (a downcased symbol) and an explicit class_name option (a string). Here are examples of how to grab a class name from both a normal belongs_to association and one with an explicit class_name.

Normal belongs_to:

class Post < ActiveRecord::Base
  belongs_to :user
end

association = Post.reflect_on_all_associations(:belongs_to).first
# => ActiveRecord::Reflection::AssociationReflection instance

name = association.name
# => :user

With an explicit class_name:

class Post < ActiveRecord::Base
  belongs_to :creator, class_name: 'User'
end

association = Post.reflect_on_all_associations(:belongs_to).first
# => ActiveRecord::Reflection::AssociationReflection instance

name = association.options[:class_name]
# => 'User'

Getting the actual class:

ActiveRecord associations have a built-in klass method that will return the actual class based on the appropriate class name:

association.klass
# => User

Handle Polymorphic Associations

Polymorphism is tricky. When dealing with a polymorphic association, you have a single identifier. Calling association.name would return something like :commentable. In a polymorphic association, we're probably looking to get back multiple class names -- like Post and Status for example.

class Comment < ActiveRecord::Base
  belongs_to :commentable, polymorphic: true
end

class Post < ActiveRecord::Base
  has_many :comments, as: :commentable
end

class Status < ActiveRecord::Base
  has_many :comments, as: :commentable
end

association = Comment.reflect_on_all_associations(:belongs_to).first
# => ActiveRecord::Reflection::AssociationReflection instance

polymorphic = association.options[:polymorphic]
# => true

associations = ActiveRecord::Base.subclasses.select do |model|
  model.reflect_on_all_associations(:has_many).any? do |has_many_association|
    has_many_association.options[:as] == association.name
  end
end
# => [Post, Status]


To break down the above example, association.options[:polymorphic] gives us true if our association is polymorphic and nil if it isn't.

Models with Polymorphic has_many Associations

If we know an association is polymorphic, the next step is to check all the models (ActiveRecord::Base.subclasses, could also do .descendants depending on how you want to handle subclasses of subclasses) that have a matching has_many polymorphic association (has_many_association.options[:as] == association.name from the example). When there's a match on a has_many association, you know that model is one of the polymorphic belongs_to associations!

Holistic Dependency Finder

As an illustration of how I handled my dependency sleuthing -- covering all the cases -- here's a class I made that takes a belongs_to association and provides a nice interface for returning all its dependencies (via its dependencies method):

class Association < Struct.new(:association)
  delegate :foreign_key, to: :association

  def klass
    association.klass unless polymorphic?
  end

  def name
    association.options[:class_name] || association.name
  end

  def polymorphic?
    association.options[:polymorphic]
  end

  def polymorphic_dependencies
    return [] unless polymorphic?
    @polymorphic_dependencies ||= ActiveRecord::Base.subclasses.select { |model| polymorphic_match? model }
  end

  def polymorphic_match?(model)
    model.reflect_on_all_associations(:has_many).any? do |has_many_association|
      has_many_association.options[:as] == association.name
    end
  end

  def dependencies
    polymorphic? ? polymorphic_dependencies : Array(klass)
  end

  def polymorphic_type
    association.foreign_type if polymorphic?
  end
end

Here's a full example with the Association class in action:

class Comment < ActiveRecord::Base
  belongs_to :commentable, polymorphic: true
end

class Post < ActiveRecord::Base
  belongs_to :creator, class_name: 'User'
  has_many :comments, as: :commentable
end

class Status < ActiveRecord::Base
  belongs_to :user
  has_many :comments, as: :commentable
end

class User < ActiveRecord::Base
  has_many :posts
  has_many :statuses
end

Association.new(Comment.reflect_on_all_associations(:belongs_to).first).dependencies
# => [Post, Status]

Association.new(Post.reflect_on_all_associations(:belongs_to).first).dependencies
# => [User]

Association.new(Status.reflect_on_all_associations(:belongs_to).first).dependencies
# => [User]

The object-oriented approach cleanly handles all the cases for us! Hopefully this post has added a few tricks to your repertoire. Next time you find yourself faced with a similar problem, use reflect_on_all_associations for great justice!

Author: "Ryan Stenberg" Tags: "Extend"
Date: Thursday, 03 Jul 2014 09:38

Ever since we made Say Viget! we've had a bunch of people asking us to explain exactly how we did it. This post is a first go at that explanation -- and a long one at that. Because so much goes into making a game, this is Part 1 of a multi-part series on how to build a 2D Javascript game, explaining some theory, best practices, and highlighting some helpful libraries.

If you're reading this then you probably know the basics: we use Javascript to add things to the context of a canvas while moving those things around through a loop (which is hopefully firing at ~60fps). But how do you achieve collisions, gravity, and general gameplay-smoothness? That's what this post will cover.


View Demo

Code on Github

Take note: I set up a few gulp tasks to make editing and playing with the code simple. It uses Browserify to manage dependencies and is written in CoffeeScript. If you're unfamiliar with Gulp or Browserify, I recommend reading this great guide.

Using Box2D and EaselJS

There's a bunch of complex math involved in getting a game functioning as we'd expect. Even simple things can quickly become complex. For example, when we jump on a concrete sidewalk there is almost no restitution (bounciness) compared to when we jump on a trampoline. A smooth glass surface has less friction than a jagged rock. And objects with greater mass should push those with lesser mass out of the way. To account for all these scenarios we'll be using the Box2D physics library.

Box2D is the de facto standard 2D physics engine. To get it working in the browser we'll need to use a Javascript port, which I found here.

Since the syntax for drawing things to a canvas tends to be verbose, I'll be using a library called EaselJS, which makes working with the <canvas> pretty enjoyable. If you're unfamiliar with EaselJS, definitely check out this Getting Started guide.

Let's get started.

What's in a Game?

Think high-level. Really high-level. The first thing you realize we need is a kind of world or Reality. Things like gravity, mass, restitution, and friction exist in the real world, and we probably want them to exist in our game world, too. Next, we know we will have at least two types of objects in our world: a Hero and some Platforms. We'll also need a place to put these two objects -- let's call the thing we put them on a Stage. And, just like the stage for a play, we'll need something that tells our Stage what should be put where. For that, we'll create a concept of a Scene. Lastly, we'll pull it all together into something I'll name Game.

Code Organization

As you can see, we start to have a clear separation of concerns, with each of our Scene, Stage, Hero, etc. having a different responsibility. To future-proof and better organize our project, we'll create a separate Class for each:

  • Reality - Get our game and debug canvas ready and define our virtual world.
  • Stage - Holds and keeps track of what the user can see.
  • Hero - Builds our special hero object that can roll and jump around.
  • Platform - Builds a platform at a given x, y, width, and height.
  • Scene - Calls our hero and creates the platforms.
  • Game - Pulls together all our classes. We also put the start/stop and the game loop in here.

Additionally, we'll create two extra files which define some variables being used throughout our project.

  • Config - Holds some sizing and preferences.
  • Keys - Defines keyboard input codes and their corresponding values.

Getting Started

We'll have two <canvas>s: one that EaselJS will interact with (<canvas id="arcade">, which I'll refer to as Arcade), and another for Box2D (<canvas id="debug">, referred to as Debug). These two canvases run completely independently of each other, but we allow them to talk to each other. Our Debug canvas is its own world, a Box2D world, which is where we define gravity, how objects (bodies) within that world interact, and where we place the things that the user can see.

The objects we can see, like our hero and the platforms, we'll draw to the Arcade canvas using EaselJS. The Box2D objects (or bodies) that represent our hero and platforms will be drawn to the Debug canvas.

Since Box2D defines sizes in meters, we'll need to translate our input into something the browser can understand (moving a platform over 10 meters doesn't make sense; 300 pixels does). This means that for every value we pass to a Box2D function that accepts, say, an X and Y coordinate, we'll need to divide by a scale that converts those meters into pixels. That magic number is 30. So, if we want our hero to start 25 pixels from the left of the screen and 475 pixels from the top, we would do:

scale = 30

# b2Vec2 creates a mathematical vector object,
# which has a magnitude and direction
position = new box2d.b2Vec2( 25 / scale , 475 / scale)
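That division shows up everywhere, so it could be wrapped in a pair of helpers. Here's a sketch in plain JavaScript rather than the article's CoffeeScript; `toMeters` and `toPixels` are names I've made up for illustration:

```javascript
// Box2D thinks in meters, the canvas thinks in pixels; these helpers
// convert between the two using the same magic number as above.
var SCALE = 30; // pixels per Box2D meter

function toMeters(pixels) {
  return pixels / SCALE;
}

function toPixels(meters) {
  return meters * SCALE;
}
```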

Simple enough, right? Let's jump into what a Box2D body is and what we can do with it.

Creating a Box2D Body

Many of the objects in the game are made up of something we can see like the color and size of a platform, and world constraints on that object we cannot see, like mass, friction, etc. To handle this, we need to draw the visible representation of a platform to our Arcade canvas, while creating a Box2D body on the Debug canvas.

Box2D objects, or bodies, are made up of a Fixture definition and a Body definition. Fixtures represent what an object, like our Platform, is made of and how it responds to other objects. Attributes like friction, density, and its shape (whether it's a circle or polygon) are part of our Platform's Fixture. A Body definition defines where in our world a Platform should be. Some base-level code for a Platform to be added to our Debug <canvas> would be:

scale  = 30
width  = 50
height = 50

# Creates what the shape is
@fixtureDef             = new box2d.b2FixtureDef
@fixtureDef.friction    = 0.5
@fixtureDef.restitution = 0.25 # Slightly bouncing
@fixtureDef.shape       = new box2d.b2PolygonShape
@fixtureDef.shape.SetAsBox( width / 2 / scale, height / 2 / scale )
# Note: SetAsBox expects values to be
# half the size, hence dividing by 2

# Where the shape should be
@bodyDef      = new box2d.b2BodyDef
@bodyDef.type = box2d.b2Body.b2_staticBody
@bodyDef.position.Set(width / scale, height / scale)

# Add to world
@body = world.CreateBody( @bodyDef )
@body.CreateFixture( @fixtureDef )

Note that static body types (as defined above with box2d.b2Body.b2_staticBody) are not affected by gravity. Dynamic body types, like our hero, will respond to gravity.

Adding EaselJS

In the same place we created our Box2D fixture and body definitions, we can create a new EaselJS Shape, which simply builds a rectangle with the same dimensions as our Box2D body, and add it to our EaselJS Stage.

# ...from above...
# Add to world
@body = world.CreateBody( @bodyDef )
@body.CreateFixture( @fixtureDef )

@view = new createjs.Shape
@view.graphics.beginFill('#000').drawRect(100, 100, width, height)

Stage.addChild @view

From there, we now have one EaselJS Shape, or View, which is being drawn to our Arcade canvas, while the body that represents that Shape is drawn to our Debug canvas. In the case of our hero we want to move our EaselJS shape with its corresponding Box2D body. To do that, we would do something like:

# Get the current position of the body
position = @body.GetPosition()
# Multiply by our scale
@view.x = position.x * scale
@view.y = position.y * scale

The trick to all of this is tying these two objects together -- our Box2D body on the Debug canvas is affected by gravity and thus moves around. When it moves, we get the position of the body and update the position of our EaselJS Shape, or @view. That's it.
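That sync step could be factored into a small helper. This is a plain-JavaScript sketch (the name `syncViewToBody` is my own invention, though `GetPosition` is the Box2D accessor used above):

```javascript
// Copy a Box2D body's position (in meters) onto an EaselJS view (in pixels).
// The body only needs a GetPosition() method, as real Box2D bodies have.
function syncViewToBody(view, body, scale) {
  var position = body.GetPosition();
  view.x = position.x * scale;
  view.y = position.y * scale;
  return view;
}
```

Calling this once per tick, for every object that moves, keeps the Arcade canvas in lockstep with the Debug world.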

Accounting for User Input and Controls

Think about how you normally control a character in a video game. You move the joystick up and the player moves forward... and keeps moving forward until you let go. We want to mimic that functionality in our game. To do this we will set a `moving` variable to true when the user presses down on a key (onKeyDown) and set it to false when the user lets go (onKeyUp). Something like:

assignControls: =>
    document.onkeydown = @handleDown
    document.onkeyup   = @handleUp

handleDown: (e) =>
    switch e.which
        when 37 # Left arrow
            @moving_left = true
        when 39 # right arrow
            @moving_right = true

handleUp: (e) =>
    switch e.which
        when 37
            @moving_left = false
        when 39
            @moving_right = false

And on each iteration of our loop, we would do something like:

update: =>
    # Move right
    if @moving_right
        @hero_speed += 1
    # Move left
    else if @moving_left
        @hero_speed -= 1
    # Come to a stop
    else
        @hero_speed = 0

Again, this is a pretty simple concept.
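For illustration, here's the same update logic in plain JavaScript, with one extra guard worth considering: clamping to a top speed so the hero doesn't accelerate forever while a key is held. `updateSpeed` and `MAX_SPEED` are hypothetical names, not from the article's code:

```javascript
var MAX_SPEED = 10; // assumed top speed, tune to taste

// Run once per game-loop tick; adjusts speed from the key-state flags.
function updateSpeed(state) {
  if (state.movingRight) {
    state.speed = Math.min(state.speed + 1, MAX_SPEED);
  } else if (state.movingLeft) {
    state.speed = Math.max(state.speed - 1, -MAX_SPEED);
  } else {
    // No keys held: come to a stop
    state.speed = 0;
  }
  return state.speed;
}
```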

Look Through The Code

From here I recommend looking through the code on GitHub for great justice. In it you'll find more refined examples, in an actual game context, which will provide a fuller understanding of the concepts explained above.


So far we've covered:

  • Using two canvases, one to handle drawing and the other to handle physics
  • What makes up a Box2D body
  • How to tie our EaselJS objects to our Box2D bodies
  • A strategy for controlling our hero with user input. 

In Part 2 we'll cover:

  • How to follow our hero throughout our scene
  • How to build complex shapes
  • Handling collisions with special objects.

In addition to what I'll be covering in Part 2, is there anything else you would like covered relating to game development? Have questions or feedback on how we could be doing something differently? Let me know in the comments below.

Author: "Tommy Marshall" Tags: "Extend"
Date: Wednesday, 02 Jul 2014 13:52

This May, Viget worked with Dick's Sporting Goods to launch Women's Fitness, an interactive look at women’s fitness apparel and accessories. One of its most interesting features is the grid of hexagonal product tiles shown in each scene. To draw the hexagons, I chose to use SVG polygon elements.

I've had experience using SVG files as image sources and in icon fonts, but this work was my first opportunity to really dig into its most powerful use case: inline in HTML. Inline SVG simply refers to SVG markup that is included in the markup for a webpage.


<div><svg><!-- WHERE THE MAGIC HAPPENS. --></svg></div>


Based on this experience, here are a few simple things I learned about SVG.

1. Browser support is pretty good


2. SVG can be styled with CSS

Many SVG attributes, like fill and stroke, can be styled right in your CSS.

See the Pen eLbCy by Chris Manning (@cwmanning) on CodePen.

3. SVG doesn't support CSS z-index

Setting the z-index in CSS has absolutely no effect on the stacking order of SVG elements. The only thing that determines stacking is the position of the node in the document. In the example below, the orange circle comes after the blue circle in the document, so it is stacked on top.

See the Pen qdgtk by Chris Manning (@cwmanning) on CodePen.
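Because stacking follows document order, "raising" an SVG element means moving it later in the document. A tiny helper (my own sketch, not from the article) does this by re-appending the node to its parent; appendChild moves a node that is already in the tree rather than duplicating it:

```javascript
// SVG paints later siblings on top, so bringing a node to the front
// is just moving it to the end of its parent's child list.
function bringToFront(node) {
  node.parentNode.appendChild(node);
  return node;
}
```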

4. SVG can be created and manipulated with JavaScript


Creating namespaced elements (or attributes, more on that later) requires a slightly different approach than HTML:


// SVG
document.createElementNS('http://www.w3.org/2000/svg', 'svg');

If you're having problems interacting with or updating elements, double check that you're using createElementNS with the proper namespace. More on SVG namespaces.

With Backbone.js

In a Backbone application like Women's Fitness, to use svg or another namespaced element as the view's el, you can explicitly override this line in Backbone.View._ensureElement:

// https://github.com/jashkenas/backbone/blob/1.1.2/backbone.js#L1105
var $el = Backbone.$('<' + _.result(this, 'tagName') + '>').attr(attrs);

I made a Backbone View for SVG and copied the _ensureElement function, replacing the line above with this:

// this.nameSpace = 'http://www.w3.org/2000/svg'; this.tagName = 'svg';
var $el = $(window.document.createElementNS(_.result(this, 'nameSpace'), _.result(this, 'tagName'))).attr(attrs);

Setting Attributes

  • Some SVG attributes are namespaced, like the href of an image or anchor: xlink:href. To set or modify these, use setAttributeNS.
// typical
node.setAttribute('width', '150');

// namespaced
node.setAttributeNS('http://www.w3.org/1999/xlink', 'xlink:href', 'http://viget.com');
  • Tip: attributes set with jQuery are always converted to lowercase! Watch out for issues like this gem:
// jQuery sets 'patternUnits' as 'patternunits'
this.$el.attr('patternUnits', 'userSpaceOnUse');

// Works as expected
this.el.setAttribute('patternUnits', 'userSpaceOnUse');
  • Another tip: jQuery's addClass doesn't work on SVG elements. And element.classList isn't supported on SVG elements in Internet Explorer. But you can still update the class with $.attr('class', value) or setAttribute('class', value).
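A small helper built on getAttribute/setAttribute can paper over both gaps. This is my own sketch, not code from the article:

```javascript
// Add a class to an SVG element without jQuery.addClass or classList,
// by manipulating the class attribute string directly.
function addSvgClass(el, name) {
  var classes = (el.getAttribute('class') || '').split(/\s+/).filter(Boolean);
  if (classes.indexOf(name) === -1) {
    classes.push(name);
    el.setAttribute('class', classes.join(' '));
  }
}
```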

5. SVG can be animated


As mentioned in #2, SVG elements can be styled with CSS. The following example uses CSS animations to transform rotation and to animate SVG attributes like stroke and fill. In my experience so far, browser support for this approach is not as consistent as for SMIL or JavaScript.

Browser support: Chrome, Firefox, Safari. Internet Explorer does not support CSS transitions, transforms, and animations on SVG elements. In this particular example, the rotation is broken in Firefox because CSS transform-origin is not supported on SVG elements: https://bugzilla.mozilla.org/show_bug.cgi?id=923193.

See the Pen jtrLF by Chris Manning (@cwmanning) on CodePen.


SVG allows animation with SMIL (Synchronized Multimedia Integration Language, pronounced "smile"), which supports changing attributes via SVG elements like animate, animateTransform, and animateMotion. See https://developer.mozilla.org/en-US/docs/Web/SVG/SVG_animation_with_SMIL and http://www.w3.org/TR/SVG/animate.html for more. The following example is animated without any CSS or JavaScript.

Browser support: Chrome, Firefox, Safari. Internet Explorer does not support SMIL animation of SVG elements.

See the Pen jtrLF by Chris Manning (@cwmanning) on CodePen.


Direct manipulation of SVG element attributes allows for the most control over animations. It's also the only method of the three that supports animation in Internet Explorer. If you are doing a lot of work, there are many libraries to speed up development time like svg.js (used in this example), Snap.svg, and d3.

See the Pen jtrLF by Chris Manning (@cwmanning) on CodePen.


SVG isn't limited to whatever Illustrator outputs. Using SVG in HTML is well-supported and offers many different options to style and animate content. If you're interested in learning more, check out the resources below.

Additional Resources

Author: "Chris Manning" Tags: "Extend"
Date: Tuesday, 01 Jul 2014 16:00

I’m a little belated, but I was lucky enough to attend and speak at the first Craft CMS Summit two weeks ago. This was the first online conference that I had ever attended, and I was thoroughly impressed. I had always been a bit hesitant to attend online conferences because I was unsure about the quality, but after experiencing it firsthand I won't hesitate in the future. Everything was very well organized and the speakers all gave excellent presentations. It was also nice to sit and learn in the comfort of my own home instead of having to deal with the extra burden of traveling for a conference. Side note: Environments for Humans, the company that hosted the conference, has additional upcoming events.

State of Craft CMS

Brandon Kelly started the conference by giving a brief history of Craft and a peek at some new features. Here are a couple of bullets I pulled out:

  • There have been over 10 iterations of just the basic Control Panel layout.
  • They invited 10 people into the Blocks (Craft’s previous name) private alpha. There was minimal functionality, no Twig templating language (they created their own), and they just wanted to get some eyes on the interface.
  • There have been over 13,600 licenses issued.
  • 3,300 unique sites have had Control Panel usage in the last 30 days.
  • Revenue has been excellent. He also said June is going to have another spike.
  • They consider Craft, as of now, to hit 80% of what a site needs to do. The other 20% is really custom stuff that won’t be baked in.
  • Next batch of stuff is going to be usability improvements and improving their docs.
  • They are hiring a third party company to help with docs.
  • Saving big stuff for 3.0.
  • 3.0 will have in-browser asset editing, which they demoed.
  • Plugin store is coming this year. This will allow developers to submit their plugins to Pixel & Tonic. Then, those plugins will be available for download, and update from within the Craft control panel.

E4H has also made the entire recording available for free.

Twig for Designers

Ben Parizek next gave a presentation on Twig, the templating engine that Craft uses. He shared this awesome spreadsheet which is a nice resource for example code for all of the default Craft custom fields.

Template Organization

Anthony Colangelo gave an interesting presentation about template organization. My main takeaway was to think about the structure of your templates based on the type of template, and not just the sections of the site. You can view the slides on Speaker Deck.

Craft Tips & Tricks

I was struggling to come up with a topic, so I just ran through a collection of real-world tips and tricks I had come across while building Craft sites. Here are a couple of my favorite ones from the presentation:


The Twig merge filter can help to reduce duplication in your template:

{% set filters = ['type', 'product', 'activity', 'element'] %}
{% set params = { section: 'media', limit: 12 } %}

{# Apply filter? #}
{% if craft.request.segments[2] is defined and craft.request.segments[1] in filters %}
	{% switch craft.request.segments[1] %}
		{% case 'product' %}
			{% set product = craft.entries({ slug: craft.request.segments[2], section: 'product' }).first() %}
			{% set params = params | merge({ relatedTo: product }) %}
		{% case 'type' %}
			{% set params = params | merge({ type: craft.request.segments[2] }) %}

	{% endswitch %}
{% endif %}

{% set entries = craft.entries(params) %}

That code sample was used to apply filters on a media page. This way, we could reuse a single template.


Macros are kinda like helpers. They are useful for creating little reusable functions:


{%- macro map_link(address) -%}
	http://maps.google.com/?q={{ address | url_encode }}
{%- endmacro -%}


{% import "_helpers" as helpers %}

<a href="{{ helpers.map_link('400 S. Maple Avenue, Suite 200, Falls Church, VA 22046') }}">Map</a>

That code will result in: Map

Element Types and Plugin Development

Ben Croker talked about building plugins, and specifically Element Types. Element Types are the foundation of Craft’s entries, users, assets, globals, categories, etc. Craft has given us the ability to create Element Types through plugins. It’s not thoroughly documented yet (this is all they have), but you can use the existing Element Types to learn how to build them. You can take a look at his slides on Speaker Deck, but the bulk of the presentation was a demo of an Element Type that he built.

Craft Q&A Round Table

The Pixel & Tonic team sat around and answered questions from the audience. Here's a smattering of the notes I took:

  • “We have some ideas of how to get DB syncing working; that’s why every table has a uid column.” It’s an itch they want to scratch for themselves too.
  • On their list: using a JSON file for field/section setup
  • Matrix within Matrix will be coming eventually. The UI is tough, but the code is all in place.
  • There is the possibility that it will eventually be public on GitHub, but they have to work out some of the app setup stuff.
  • They’ve considered renaming “localization” to just be “sites”, then people can run multiple sites with one Craft install.
  • “Will comments be a part of core?”, “No, that’s plugin territory”
  • Duplicating entries will be coming in Craft 2.2
  • They have a lot of plans for the edit field layout page to make it more user friendly
Author: "Trevor Davis" Tags: "Extend"
Date: Monday, 30 Jun 2014 10:16

One difficult aspect of responsive development is managing complexity in navigation systems. For simple headers and navigation structures, it’s typically straightforward to use a single HTML structure, write some clever styles that re-adjust the navigation from a small-screen format to one that takes advantage of the increased real estate of larger screens, and finally write a small bit of JavaScript for opening and closing a menu on small screens. The amount of overhead for delivering two presentation options to all screens in these cases is fairly low.

However, for cases where more complex navigation patterns are used, and where interactions are vastly different across screen sizes, this approach can be rather bloated, as unnecessary markup, styles and assets are downloaded for devices that don’t end up using them.

On one recent project, we were faced with such a problem. The mobile header was simple and the navigation trigger was the common hamburger icon. The navigation system itself employed a fairly complicated multi-level nested push menu which revealed itself from the left side of the screen. The desktop header and navigation system was arranged differently and implemented a full-screen mega-menu in place of the push menu previously mentioned. Due to the differences and overall complexity of each approach, different sets of markup and styles were required for presentation, and different JavaScript assets were required for each interaction pattern.

View Animated GIF: Mobile | Desktop

Mobile First to the Rescue

In order to have the small-screen experience be as streamlined as possible, we employed a mobile-first approach by using a combination of RequireJS, enquire.js & Handlebars. Here’s how it’s setup:

// main.js
// (Reconstructed from the stripped-down original; module names are illustrative.)
define(['enquire'], function(enquire) {
    enquire.register('screen and (max-width: 1000px)', {
        match: function() {
            require(['mobile-header']);
        }
    });
    enquire.register('screen and (min-width: 1001px)', {
        match: function() {
            require(['desktop-header']);
        }
    });
});
In the above code, we’re using enquire’s register method to check the viewport size, and load the bundled set of JavaScript assets for the appropriate screen size.

Handle the Small Screen Version

// mobile-header.js
// (Reconstructed from the stripped-down original; dependency names are placeholders.)
define([
    'enquire',
    'dependency1',
    'dependency2'
], function(enquire, Dependency1, Dependency2) {
    enquire.register('screen and (max-width: 1000px)', {
        setup: function() {
            // initialize mobile header/nav
        },
        match: function() {
            // show mobile header/nav
        },
        unmatch: function() {
            // hide mobile header/nav
        }
    });
});

Here, mobile-header.js loads the necessary script dependencies for the mobile header and navigation, and sets up another enquire block for initializing, showing and hiding.

Handle the Large Screen Version

// desktop-header.js
// (Reconstructed from the stripped-down original; dependency names are placeholders.)
require.config({
    paths: {
        handlebars: 'handlebars.runtime'
    },
    shim: {
        handlebars: {
            exports: 'Handlebars'
        }
    }
});

define([
    'enquire',
    'handlebars',
    'dependency3',
    'dependency4'
], function(enquire, Handlebars, Dependency3, Dependency4) {
    enquire.register('screen and (min-width: 1001px)', {
        setup: function() {
            // get template and insert markup
            require(['../templates/desktop-header'], function() {
                var markup = JST['desktop-header']();
            });
        },
        match: function() {
            // show desktop header/nav
        },
        unmatch: function() {
            // hide desktop header/nav
        }
    });
});

* The Handlebars runtime is being used for faster render times. It requires that the desktop header template (referenced in the setup function above) be a pre-compiled Handlebars template. It looks like this and can be auto-generated using grunt-contrib-handlebars.
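For context, a pre-compiled template is just a function registered on a JST namespace, so requiring the file makes JST['desktop-header'] callable. A hand-written stand-in with the same shape might look like this (the markup and the title property are invented for illustration, standing in for grunt-contrib-handlebars output):

```javascript
// JST namespace, as a pre-compiled template file would populate it.
var JST = {};

// Stand-in for a compiled template function: takes a context object,
// returns a markup string.
JST['desktop-header'] = function(context) {
  context = context || {};
  return '<header class="desktop-header">' + (context.title || '') + '</header>';
};

var markup = JST['desktop-header']({ title: 'Shop' });
```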

Finally, desktop-header.js loads the necessary script dependencies for the desktop header and navigation. Another enquire block is set up for fetching and rendering the template, and showing and hiding.

Pros & Cons

The code examples above are heavily stripped down from the original implementation, and it’s also important to note that the RequireJS Optimizer was used to combine related scripts together into a few key modules (main, mobile and desktop), in order to keep http requests to a minimum.

Which brings me to a downside: splitting the JS into small and large modules does add one extra http request as opposed to simply bundling ALL THE THINGS into one JS file. For your specific implementation, the bandwidth and memory savings would have to be weighed against the slight penalty of an extra http request. That penalty may or may not be worth it. There is also an ever so slight flash of the mobile header on desktop before it is replaced with the desktop header. We mitigated this with css, by simply hiding the mobile header at the large breakpoint.

On the plus side, the advantage here is that the desktop header and associated assets are only loaded when the viewport size is large enough to accommodate it. Also, the JavaScript assets for the mobile multi-level push menu are only loaded for small screens. Bandwidth is more efficiently utilized in that mobile users’ data plans aren’t taxed with downloading unnecessary assets. The browser also has less work to do overall. Everyone rejoices!

Taking it Further

Several ways this could be taken to the next level would be to modularize the styles required for rendering the mobile and desktop header and navigation, and bundle those within their respective modules. Another completely different approach for managing this type of complexity would be to implement a RESS solution with something like Detector. If you have any other clever ways of managing complexity in responsive navigation patterns, or any responsive components for that matter, let me know in the comments below.

Author: "Jeremy Frank" Tags: "Extend"
Date: Wednesday, 18 Jun 2014 15:43

Recently, Lawson and Ryan launched Sprig, a gem for seeding Rails applications.

Sprig seed files are easy to write, but they do take some time -- time which you may not have enough of. We wanted to generate seed files from records already in the database, and we received similar requests from other Sprig users. At Viget, we try to give the people what the people want, so I jumped in and created Sprig-Reap!

Introducing Sprig-Reap

Sprig-Reap is a rubygem that allows you to generate Sprig-formatted seed files from your Rails app's database.

It provides both a command-line interface via a rake task and a method accessible inside the Rails console.

Command Line

rake db:seed:reap

Rails Console

Sprig.reap


The Defaults

Sprig-Reap, by default, will create a seed file for every model in your Rails app with an entry for each record. The .yml seed files will be placed inside the db/seeds/env folder, where env is the current Rails.env.

Don't like these defaults? No problem!

Customizing the Target Environment Seed Folder

Sprig-Reap can write to a seeds folder named after any environment you want. If the target folder doesn't already exist, Sprig-Reap will create it for you!

# Command Line
rake db:seed:reap TARGET_ENV='dreamland'

# Rails Console
Sprig.reap(target_env: 'dreamland')

Customizing the Set of Models

You tell Sprig-Reap which models you want seeds for and -- BOOM -- it's done:

# Command Line
rake db:seed:reap MODELS=User,Post,Comment

# Rails Console
Sprig.reap(models: [User, Post, Comment])

Omitting Specific Attributes from Seed Files

Tired of seeing those created_at/updated_at timestamps when you don't care about them? Don't want encrypted passwords dumped into your seed files? Just ignore 'em!

# Command Line
rake db:seed:reap IGNORED_ATTRS=created_at,updated_at,password

# Rails Console
Sprig.reap(ignored_attrs: [:created_at, :updated_at, :password])

Reaping with Existing Seed Files

If you have existing seed files you're already using with Sprig, have no fear! Sprig-Reap is friendly with other Sprig seed files and will append to what you already have -- appropriately assigning unique sprig_ids to each entry.

Use Case

If you're wondering what the point of all this is, perchance this little example will pique your interest:

At Viget, QA is a critical part of every project. During the QA process, we generate all kinds of data so we can test all the things. Oftentimes this data describes a very particular, complicated state. Being able to easily take a snapshot of the application's data state is super helpful. Sprig-Reap lets us do this with a single command -- and gives us seed files that can be shared and re-used across the entire project team. If someone happens to run into a hard-to-reproduce issue related to a specific data state, use Sprig-Reap for great justice!

Your Ideas

We'd love to hear what people think about Sprig-Reap and how they're using it. Please share! If you have any comments or ideas of your own when it comes to enhancements, leave a comment below or add an issue to the GitHub repo.

Author: "Ryan Stenberg" Tags: "Extend"