

Date: Friday, 11 Apr 2014 16:31

Here at Viget, we've successfully used ActiveAdmin on a number of custom CMS projects. ActiveAdmin is a great help, providing a sensible set of features out of the box while still allowing heavy customization for great justice. It also has a very opinionated way of doing things, which can make customization a bit tricky (e.g. layouts via Arbre, Custom Pages, etc.).

After working with ActiveAdmin for a couple of years, here are 8 customizations that I find myself using often:

1. Adding custom behavior to ActiveAdmin::BaseController:

Oftentimes, you'll want to add a before_filter like you would in your typical ApplicationController, but scoped to the ActiveAdmin engine. In this case, you can add custom behavior by following this pattern:

# config/initializers/active_admin_extensions.rb:
ActiveAdmin::BaseController.send(:include, ActiveAdmin::SiteRestriction)
# lib/active_admin/site_restriction.rb:
module ActiveAdmin
  module SiteRestriction
    extend ActiveSupport::Concern

    included do
      before_filter :restrict_to_own_site
    end

    private

    def restrict_to_own_site
      unless current_site == current_admin_user.site
        render_404
      end
    end
  end
end

2. Conditionally add a navigation menu item:

When you have multiple admin types, you may want to only show certain menu items to a specific type of admin user. You can conditionally show menu items:

# app/admin/resource.rb:
ActiveAdmin.register Resource do
  menu :parent => "Super Admin Only", :if => proc { current_admin_user.super_admin? }
end

3. To display an already uploaded image on the form:

ActiveAdmin uses Formtastic behind the scenes for forms. Persisting uploads between invalid form submissions can be accomplished via f.input :image_cache, :as => :hidden. However, you may want to display the already-uploaded image when visiting the form to edit an existing item. You could set one using a hint (f.input :image, :hint => (f.template.image_tag(f.object.image.url) if f.object.image?)), but this won't allow you to set any text as a hint. Instead, you can add some custom behavior to the Formtastic FileInput:

# app/admin/inputs/file_input.rb
class FileInput < Formtastic::Inputs::FileInput
  def to_html
    input_wrapping do
      label_html <<
      builder.file_field(method, input_html_options) <<
      image_preview_content
    end
  end

  private

  def image_preview_content
    image_preview? ? image_preview_html : ""
  end

  def image_preview?
    options[:image_preview] && @object.send(method).present?
  end

  def image_preview_html
    template.image_tag(@object.send(method).url, :class => "image-preview")
  end
end

# app/admin/my_class.rb
ActiveAdmin.register MyClass do

  form do |f|
    f.input :logo_image, :image_preview => true
  end
end

4. Scoping queries:

ActiveAdmin uses Inherited Resources behind the scenes, which is built around the concepts of a singular resource and a collection. ActiveAdmin provides a controller method called scoped_collection, which we can override to add our own scope.

In your ActiveAdmin resource definition file:

# app/admin/my_class.rb
ActiveAdmin.register MyClass do 

  controller do
    def scoped_collection
      MyClass.for_site(current_site)
    end
  end
end

Similarly, you can override the resource method to customize how the singular resource is found.
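For example, here's a sketch of scoping the singular lookup the same way (reusing the for_site scope from above; resource memoizes the record used by the show, edit, update, and destroy actions):

# app/admin/my_class.rb
ActiveAdmin.register MyClass do

  controller do
    def resource
      # Look up the record within the same site scope used for the collection
      @resource ||= MyClass.for_site(current_site).find(params[:id])
    end
  end
end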

5. Customizing the method by which a resource is found by the URL:

Oftentimes we add a “slug” to a resource for prettier URLs. You can use your friendly URL parameters in the CMS as well:

# app/admin/page.rb
ActiveAdmin.register Page do

  controller do
    defaults :finder => :find_by_slug!
  end
end

I found that when a user changes the slug value and submits the form, and the form is invalid, the next form submission will attempt to POST to the new (invalid) slug's URL. To fix this, I updated my model's to_param method from:

def to_param
  slug
end

To:

def to_param
  if invalid? && slug_changed?
    # return original slug value
    changes["slug"].first
  else
    slug
  end
end

6. Inserting arbitrary content into the form:

It can be handy to insert some explanatory text, or even images, within the Formtastic form itself. You'll need to insert the content into the form buffer:

Text:

form do |f|
  f.form_buffers.last << "<li>My Text</li>".html_safe
end

Image:

form do |f|
  f.form_buffers.last << f.template.image_tag('http://doge.com/image.png', :height => 350)
end

7. Dynamic Site Title:

It's possible you have multiple user types or sites that are managed via a single CMS. In this case, you may want to update the title displayed in the Navigation Menu.

# config/initializers/active_admin.rb
config.site_title = proc { "#{current_site.name} CMS" }

8. Manually defining sort order of top level navigation menu items:

It's easy to define the priority (sort order) of menu items when they belong to a parent; however, if you're trying to set the sort order of top-level parent items, it's a bit trickier.

# config/initializers/active_admin.rb
config.namespace :admin do |admin|
  admin.build_menu do |menu|
    menu.add :label => "First Item",  :priority => 1
    menu.add :label => "Second Item", :priority => 2
    menu.add :label => "Third Item",  :priority => 3
  end
end

What customizations have you found that you think are worth sharing? Please leave a note in the comments below!

Author: "Mike Ackerman" Tags: "Extend"
Date: Tuesday, 08 Apr 2014 10:28

Getting Started

One weekend, I decided to really immerse myself in Grunt and RequireJS. Gotta stay up on these things, right? Done. Then Monday rolls around, “and just like that Grunt and RequireJS are out, it’s all about Gulp and Browserify now.”

(╯°□°)╯︵ ┻━┻

When I was done flipping tables, I set aside my newly acquired Grunt + RequireJS skills, and started over again with Gulp and Browserify to see what all the fuss was about.

You guys. The internet was right. To save you some of the googling, doc crawling, and trial and error I went through, I've assembled some resources and information I think you'll find helpful in getting started.

 ┬─┬ノ( º _ ºノ) 

Gulp + Browserify starter repo

I've created a Gulp + Browserify starter repo with examples of how to accomplish some common tasks and workflows.

Frequently Asked Questions Wiki

Node, npm, CommonJS Modules, package.json…wat? When I dove into this stuff, much of the documentation out there assumed a familiarity with things with which I was not at all familiar. I've compiled some background knowledge into a FAQ Wiki attached to the above-mentioned starter repo to help fill in any knowledge gaps.

Why Gulp is Great

It makes sense.

I picked up Gulp just days after learning Grunt. For whatever reason, I found Gulp to be immediately easier and more enjoyable to work with. The idea of piping a stream of files through different processes makes a lot of sense.

gulp's use of streams and code-over-configuration makes for a simpler and more intuitive build. - gulpjs.com

Here's what a basic image processing task might look like:

var gulp       = require('gulp');
var imagemin   = require('gulp-imagemin');

gulp.task('images', function(){
    gulp.src('./src/images/**')
        .pipe(imagemin())
        .pipe(gulp.dest('./build/images'));
});

First, gulp.src sucks in a stream of files and gets them ready to be piped through whatever tasks you've made available. In this instance, I'm running all the files through gulp-imagemin, then outputting them to my build folder using gulp.dest. To add additional processing (renaming, resizing, liveReloading, etc.), just tack on more pipes with tasks to run.
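For instance, here's a sketch of tacking a rename step onto the same task (assuming the gulp-rename plugin is installed):

var gulp     = require('gulp');
var imagemin = require('gulp-imagemin');
var rename   = require('gulp-rename');

gulp.task('images', function(){
    gulp.src('./src/images/**')
        .pipe(imagemin())
        .pipe(rename({ suffix: '-min' })) // image.png -> image-min.png
        .pipe(gulp.dest('./build/images'));
});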

Speed!

It's really fast! I just finished building a fairly complex JS app. It handled compiling SASS, CoffeeScript with source maps, Handlebars Templates, and running LiveReload like it was no big deal.

By harnessing the power of node's streams you get fast builds that don't write intermediary files to disk. - gulpjs.com

This killer way to break up your gulpfile.js

A gulpfile is what gulp uses to kick things off when you run gulp. If you're coming from Grunt, it's just like a gruntfile. After some experimenting, some Pull Request suggestions, and learning how awesome Node/CommonJS modules are (more on that later), I broke out all my tasks into individual files, and came up with this gulpfile.js. I'm kind of in love with it.

var gulp = require('./gulp')([
    'browserify',
    'compass',
    'images',
    'open',
    'watch',
    'serve'
]);

gulp.task('build', ['browserify', 'compass', 'images']);
gulp.task('default', ['build', 'watch', 'serve', 'open']);

~200 characters. So clean, right? Here's what's happening: I'm requiring a gulp module I've created at ./gulp/index.js, and am passing it a list of tasks that correspond to task files I've saved in ./gulp/tasks.

var gulp = require('gulp');

module.exports = function(tasks) {
    tasks.forEach(function(name) {
        gulp.task(name, require('./tasks/' + name));
    });

    return gulp;
};

For each task name in the array we're passing to this method, a gulp.task gets created with that name, and with the method exported by a file of the same name in my ./tasks/ folder. Now that each individual task has been registered, we can use them in the bulk tasks like default that we defined at the bottom of our gulpfile.
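For a sketch of what one of those task files might look like, here's the earlier images task rewritten as a module (gulp-imagemin assumed installed):

// ./gulp/tasks/images.js
var gulp     = require('gulp');
var imagemin = require('gulp-imagemin');

// The exported function becomes the body of gulp.task('images', ...)
module.exports = function() {
    return gulp.src('./src/images/**')
        .pipe(imagemin())
        .pipe(gulp.dest('./build/images'));
};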

[Image: folder structure]

This makes reusing and setting up tasks on new projects really easy. Check out the starter repo, and read the Gulp docs to learn more.

Why Browserify is Great

“Browserify lets you require('modules') in the browser by bundling up all of your dependencies.” - Browserify.org

Browserify looks at a single JavaScript file, and follows the require dependency tree, and bundles them into a new file. You can use Browserify on the command line, or through its API in Node (using Gulp in this case).

Basic API example

app.js

var hideElement = require('./hideElement');

hideElement('#some-id');

hideElement.js

var $ = require('jquery');
module.exports = function(selector) {
    return $(selector).hide();
};

gulpfile.js

var browserify = require('browserify');
var bundle = browserify('./app.js').bundle()

Running app.js through Browserify does the following:

  1. Sees that app.js requires hideElement.js
  2. Sees that hideElement.js requires a module called jquery
  3. Bundles together jQuery, hideElement.js, and app.js into one file, making sure each dependency is available when and where it needs to be.

CommonJS > AMD

Our team had already moved towards module-based JS with Require.js and Almond.js, both of which are implementations of the AMD module pattern. We loved the organization and benefits this provided, but…

AMD / RequireJS Modules felt cumbersome and awkward.

define([
    './thing1',
    './thing2',
    './thing3'
], function(thing1, thing2, thing3) {
    // Tell the module what to return/export
    return function() {
        console.log(thing1, thing2, thing3);
    };
});

The first time using CommonJS (Node) modules was a breath of fresh air.

var thing1 = require('./thing1');
var thing2 = require('./thing2');
var thing3 = require('./thing3');

// Tell the module what to return/export
module.exports = function() {
    console.log(thing1, thing2, thing3);
};

Make sure to read up on how require calls resolve to files, folders, and node_modules.
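In short, the rules look something like this (module names here are hypothetical):

// './logger' is a relative path: resolves to logger.js (or logger/index.js)
// next to the requiring file
var logger = require('./logger');

// 'lodash' is a bare module name: resolves by walking up the directory tree
// looking for node_modules/lodash
var _ = require('lodash');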

Browserify is awesome because Node and NPM are awesome.

Node uses the CommonJS pattern for requiring modules. What really makes it powerful though is the ability to quickly install, update, and manage dependencies with Node Package Manager (npm). Once you've tasted this combination, you'll want that power for always. Browserify is what lets us have it in the browser.

Say you need jQuery. Traditionally, you might open up your browser, find the latest version on jQuery.com, download the file, save it to a vendor folder, then add a script tag to your layout, and let it attach itself to window as a global object.

With npm and Browserify, all you have to do is this:

Command Line

npm install jquery --save

app.js

var $ = require('jquery');
$('.haters').fadeOut();

This fetches the latest version of jQuery from NPM, and downloads the module into a node_modules folder at the root of your project. The --save flag automatically adds the package to your dependencies object in your package.json file. Now you can require('jquery') in any file that needs it. The jQuery object gets exported locally to var $, instead of globally on window. This was especially nice when I built a script that needed to live on unknown third party sites that may or may not already have another version of jQuery loaded. The jQuery packaged with my script is completely private to the js that requires it, eliminating the possibility of version conflict issues.

The Power of Transforms

Before bundling your JavaScript, Browserify makes it easy for you to preprocess your files through a number of transforms before including them in the bundle. This is how you'd compile .coffee or .hbs files into your bundle as valid JavaScript.

The most common way to do this is by listing your transforms in a browserify.transform object in your package.json file. Browserify will apply the transforms in the order in which they're listed. This assumes you've npm install'd them already.

  "browserify": {
    "transform": ["coffeeify", "hbsfy" ]
  },
 "devDependencies": {
    "browserify": "~3.36.0",
    "coffeeify": "~0.6.0",
    "hbsfy": "~1.3.2",
    "gulp": "~3.6.0",
    "vinyl-source-stream": "~0.1.1"
  }

Notice that I've listed the transforms under devDependencies since they're only used for preprocessing, and not in our final javascript output. You can do this automatically by adding the --save-dev or -D flag when you install.

npm install someTransformModule --save-dev

Now we can require('./view.coffee') and require('./template.hbs') like we would any other JavaScript file! We can also use the extensions option with the Browserify API to tell Browserify to recognize these extensions, so we don't have to explicitly type them in our requires.

browserify({
    entries: ['./src/javascript/app.coffee'],
    extensions: ['.coffee', '.hbs']
})
.bundle()
...

See this in action here.

Using them together: Gulp + Browserify

Initially, I started out using the gulp-browserify plugin. A few weeks later though, Gulp added it to their blacklist. Turns out the plugin was unnecessary - you can use the node-browserify API straight up, with a little help from vinyl-source-stream. This just converts the bundle into the type of stream gulp is expecting. Using browserify directly is great because you'll always have access to 100% of the features, as well as the most up-to-date version.

Basic Usage

var browserify = require('browserify');
var gulp = require('gulp');
var source = require('vinyl-source-stream');

gulp.task('browserify', function() {
    return browserify('./src/javascript/app.js')
        .bundle()
        //Pass desired output filename to vinyl-source-stream
        .pipe(source('bundle.js'))
        // Start piping stream to tasks!
        .pipe(gulp.dest('./build/'));
});

Awesome Usage

Take a look at the browserify.js task and package.json in my starter repo to see how to apply transforms for CoffeeScript and Handlebars, set up non-CommonJS modules and dependencies with browserify-shim, and handle compile errors through Notification Center. To learn more about everything else you can do with Browserify, read through the API. I hope you have as much fun with it as I'm having. Enjoy!

Author: "Dan Tello" Tags: "Extend"
Date: Monday, 07 Apr 2014 11:32

Like many other programmers with children I'm interested in passing my skills on to them. If you've ever had the opportunity to work with me, this may or may not be a horrifying proposition — I'll let you be the judge.

My two oldest (9 and 7) are now at a good age to start learning the basics, so I've been spending every Wednesday morning before work teaching them with the help of Khan Academy. Since I'm new to this, there's definitely a lot of trial and error to figure out the intersection between their interests and capabilities. In the process I've discovered a few things that work and a few that don't. These lessons are specific to me, but you may find them helpful.

A False Start

My initial approach combined a general introduction to computing with basic programming concepts in Ruby. I spent a couple of Wednesday mornings talking to my son about the basics of computing: answering his questions, researching answers with him, and working on basic programming concepts in IRB.

This was fun for both of us and his questions were really awesome, but I found that it didn't hold his interest for long. While the programming exercises I came up with were basic, they weren't all that compelling for a child his age. The positive feedback was there — seeing 1 + 1 output 2 when typed into an IRB prompt showed that something was happening, but it wasn't as exciting as I thought an introduction to programming should be.

Play on Their Interests

While my son may not have been excited about our forays into IRB, I knew there was one thing he was excited about. For the last 2 years, we've been teaching him math at home with the help of Khan Academy. He's enjoyed it for the most part, but it seems that every moment I left him unsupervised he would jump over to the programming section and explore the other students' creations. To be fair, there's some really impressive stuff there — he's partial to Minecraft while I've enjoyed playing Falling Pixel.

At first his lack of focus frustrated me, but rather than fighting to keep him focused on math, I thought I could instead use his interest in games to my advantage. When Khan Academy introduced the programming curriculum, I had thought he wasn't quite ready, but now I was motivated to give it a shot.

Read Ahead

A few days before introducing the curriculum to him, I went through it myself. I watched the videos, worked through the sample exercises, and created some simple programs of my own. Between the quality of the walk-through videos and the immediate feedback of seeing my drawing appear right after I typed my code, I knew this approach would be a hit.

I quickly printed off a few grid worksheets and a cheat sheet to prepare for that Wednesday's class. The next morning, at breakfast, I presented the idea to him. I knew he would be excited, but I wasn't prepared for what came next — my daughter wanted to join in as well!

The Technique

Now that I had two students in my class, I had to figure out a way to work with both of them at an appropriate pace. For our first lesson, I decided to have both kids share a laptop and watch the videos together. When it was time to do the first exercise, they took turns typing in rect, the appropriate coordinates, and adjusting the values as necessary.

After this first class, I had a little bit of "homework" for them — I presented both with a few of the grid worksheets and a quick pixel drawing I did of some Space Invaders and asked them to come up with their own versions.

I demonstrated the basic technique, told them that we would spend the next class creating them on Khan Academy, and then set them loose on their own creations. To my surprise, my daughter made a rather detailed drawing of a red-haired girl that she planned to turn into a video game.

During the next class, we re-familiarized ourselves with the basic drawing functions and set out to recreate her sketch. Looking at the graph paper, I helped her come up with the coordinates, and she typed in the commands to make her drawing come to life.

Since my son had been learning how to add color to drawings, he jumped in later in the day to help improve what she and I had done earlier that morning.

The kids were really excited to see their drawings come to "life" — the feedback that they got from seeing their creations appear immediately after they typed in a valid function was a big motivator.

Future

The Khan Academy program takes a progressive approach to teaching programming in a way that keeps my kids engaged. We will continue working through some basic drawing programs and then move on to variables and animation. As you can imagine, they are both excited about the idea of building a video game — I'm excited to see what they come up with.

Author: "Patrick Reagan" Tags: "Extend"
Date: Thursday, 03 Apr 2014 15:48

There seems to be a common issue with 2002 Mitsubishi Montero Sports* where the horn will suddenly turn on if it’s cold out, with no intention of turning itself off. I discovered this firsthand with my own Montero Sport earlier this winter on a particularly chilly night. The quick fix was to disconnect the battery, plug it back in when I wanted to drive, and then remember to unplug it again overnight (this did not always go well). Unrelatedly, I have been experimenting with Arduinos here and there, and this is the story of how working with Arduinos let me (pretty much) fix my car horn.

* a few others online have posted their similar woes.

How/What I learned working with Arduinos

My introduction to Arduino was through the Sparkfun Inventor’s Kit, a fantastic resource for learning the basics of electronics and microcontrollers. Electronic circuits are for the most part straightforward and intuitive - electricity flows from + to -, and if you make it go through something like an LED, you can make things turn on or move around. While it may not seem all that important to be able to turn a light on and off or move a motor back and forth, you can take this knowledge, get your hands on a few parts, and make some pretty cool things (maybe a blog post on that device later).

Aside from the raw knowledge I learned from working through the Inventor’s Kit, I’ve also gained a greater understanding of how electronics work in general. While this is common knowledge to many, it turns out that most electronic things are not in fact magic. So when my research of “my car horn turns on by itself in the middle of cold nights” led me to believe I would have to pay upwards of a thousand dollars to fix this, I instead thought “for Great Justice, I can probably hack something together myself!”

The fix

Looking around the car and the internet I found the relay supplying power to the horn. Big Win #1 - unplug this to disable my horn without disconnecting my battery. Relays (chapter 13 of the Inventor’s Kit guide) are switches which are “flipped” by relatively small amounts of electricity but can allow large amounts of electricity to pass through them. This relay is “flipped” when you press the horn button in your steering wheel and allows electricity from the battery to power the horn. My plan was to wire my own switch into this system so I could disable/enable the system from inside my car, and so the journey began.

[Image: working-with-bodhi]

I picked a random wire feeding the horn relay and cut it. When I plugged the relay back in, I was unable to activate the horn - SUCCESS! I then proceeded to solder two separate wires to the ends of the wire I’d just cut, and run those from the relay into the cabin of the car.

[Image: soldered-relay]

I then soldered the two wires to a simple rocker switch thus completing my side circuit. When I flip the switch closed, I complete the horn relay circuit and am able to blare my horn. Flip the switch back open (disconnecting the circuit) and my horn is rendered useless.

[Image: rocker-switch]

The last step was to make it look good. Another lesson from my past Arduino projects - no one likes messy projects. I found a spot that looked like I could carve into with a knife without causing any new issues, and hooked everything up!

[Image: final-switch]

The finished product is a switch that I flip when I need to use my horn, and an easy way to guarantee I won’t wake the neighborhood next time it gets cold in Boulder (video proof). Aside from being useful and 1/1000th the cost of having it repaired by a professional, coming up with a decent solution on my own was extremely satisfying. So I’d recommend getting your hands on some intro-to-electronics resources if you haven’t already, and the next time your office microwave bites the bullet, see if you can be the one to bring it back to life*.

* electricity can be extremely dangerous - be sure to educate yourself and always use caution to avoid electrocuting yourself!

Author: "Eli Fatsi" Tags: "Extend"
Date: Friday, 28 Mar 2014 10:15

You've got an ice cold Tab sitting next to you as you hack away on your Pong clone while blasting Wham! on your Walkman. You finally got a ball bouncing back and forth across your terminal window (see my previous post) and you're wondering "what's next?"

Let's split our Pong game into two parts: the playing field and the score display. We'll plan for the playing field to be at the top portion of the screen, and the player's current score will be displayed at the bottom. This makes sense, but how do we do it?

A simple implementation might be to just move the current drawing position to a location on the bottom of the screen and print out the current score value (using mvwprintw), but that would require you to bounce the ball before it overlaps the score display. While that would work, it's less than ideal for a couple of reasons:

  • You can't update the score display independently from the main playing field. Each time the playing field needs to be redrawn you need to also redraw the score display, even if it hasn't changed.

  • More importantly, your code is now littered with confusing collision detection logic. No longer are you checking if y >= max_y, you now need to see if (y + score_height) >= max_y.

Fortunately, ncurses provides the ability to split these two concerns into separate windows, each updated independently. It might be a little more work to manage multiple windows in your program, but it's the right thing to do. In the end, you envision splitting the windows into something like this:

Let's see how we would implement that.

Window Basics

You're already familiar with initscr() and the global variable stdscr that it initializes. Here are four functions that allow you to create and manipulate additional windows:

  • newwin -- create a new window with a given height, width, and screen position
  • delwin -- delete a window and free the memory associated with it
  • wresize -- change a window's dimensions
  • mvwin -- move a window to a new position on the screen

We already know how to output text on the screen, but there are some differences when printing to other windows. To do this, you'll need the w* variants:

  • mvwprintw -- like mvprintw(), but prints to the given window at window-relative coordinates
  • wrefresh -- like refresh(), but updates only the given window
  • wclear -- like clear(), but clears only the given window

Again, create a Makefile:

# Makefile
LDFLAGS=-lncurses

all: demo

And a source file:

// demo.c
#include <ncurses.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
  int parent_x, parent_y;
  int score_size = 3;

  initscr();
  noecho();
  curs_set(FALSE);

  // get our maximum window dimensions
  getmaxyx(stdscr, parent_y, parent_x);

  // set up initial windows
  WINDOW *field = newwin(parent_y - score_size, parent_x, 0, 0);
  WINDOW *score = newwin(score_size, parent_x, parent_y - score_size, 0);

  // draw to our windows
  mvwprintw(field, 0, 0, "Field");
  mvwprintw(score, 0, 0, "Score");

  // refresh each window
  wrefresh(field);
  wrefresh(score);

  sleep(5);

  // clean up
  delwin(field);
  delwin(score);

  endwin();

  return 0;
}

Then compile and run the program:

make && ./demo

You'll see your terminal enter curses mode, and the text 'Field' will appear in the upper window while 'Score' appears at the bottom. This isn't much different from what we did in the previous post, but you'll notice that the coordinates specified with mvwprintw are relative to each window and not stdscr.

Printing text to each window is a good way of testing how the windowing functions work, but it makes it difficult to see where our windows are. Let's draw a border around them.

Drawing Borders

Since we'll want to draw a border around both our field and score display in this example, let's create a function that will draw a border around an arbitrary window:

void draw_borders(WINDOW *screen) {
  int x, y, i;

  getmaxyx(screen, y, x);

  // 4 corners
  mvwprintw(screen, 0, 0, "+");
  mvwprintw(screen, y - 1, 0, "+");
  mvwprintw(screen, 0, x - 1, "+");
  mvwprintw(screen, y - 1, x - 1, "+");

  // sides
  for (i = 1; i < (y - 1); i++) {
    mvwprintw(screen, i, 0, "|");
    mvwprintw(screen, i, x - 1, "|");
  }

  // top and bottom
  for (i = 1; i < (x - 1); i++) {
    mvwprintw(screen, 0, i, "-");
    mvwprintw(screen, y - 1, i, "-");
  }
}

Nothing complicated about that -- we draw a '+' in the corners of the window, a series of '|' characters for the sides, and '-' characters for the top and bottom. Now let's make it easier to see the "split" window:

int main(int argc, char *argv[]) {

  // ...

  // draw our borders
  draw_borders(field);
  draw_borders(score);

  // simulate the game loop
  while(1) {
    // draw to our windows
    mvwprintw(field, 1, 1, "Field");
    mvwprintw(score, 1, 1, "Score");

    // refresh each window
    wrefresh(field);
    wrefresh(score);
  }

  // clean up
  delwin(field);
  delwin(score);

  // ...

}

Now you'll be able to see the two windows more easily. Notice that I had to tweak the offset on the call to mvwprintw to prevent overwriting the top borders of the windows.

This version still suffers from problems I mentioned in my previous post -- when you resize your terminal window, the borders either disappear or don't snap to the edges of the screen depending on which way you resize.

Handling Window Resizing

Drawing borders around each window wasn't totally necessary, but it does allow us to see exactly how resizing the terminal window affects our two sub-windows. The process for redrawing the windows is straightforward, but detecting a resize event is a little tricky -- here's what we do:

  1. In the main game loop, check to see if the window dimensions have changed from the original.
  2. If changed, reset the original dimensions to the new ones.
  3. Resize the main playing field window, leaving room for the score display window.
  4. Reposition the score display window beneath the playing field and resize the width.
  5. Redraw all the window borders for great justice.

Here's what it looks like:

int main(int argc, char *argv[]) {
  int parent_x, parent_y, new_x, new_y;
  int score_size = 3;

  // ...

  draw_borders(field);
  draw_borders(score);

  while(1) {
    getmaxyx(stdscr, new_y, new_x);

    if (new_y != parent_y || new_x != parent_x) {
      parent_x = new_x;
      parent_y = new_y;

      wresize(field, new_y - score_size, new_x);
      wresize(score, score_size, new_x);
      mvwin(score, new_y - score_size, 0);

      wclear(stdscr);
      wclear(field);
      wclear(score);

      draw_borders(field);
      draw_borders(score);
    }

    // draw to our windows
    mvwprintw(field, 1, 1, "Field");
    mvwprintw(score, 1, 1, "Score");

    // refresh each window
    wrefresh(field);
    wrefresh(score);
  }

  // ...

}

Now our sub-windows behave as expected:

Gotchas

When I was originally preparing this code example, I tried using both subwin and derwin to create subwindows of the main screen. While this approach worked when first running the program, it left artifacts of printed characters on the screen when resizing the window. Switching to using newwin instead fixes the problem, but requires the programmer to free the memory afterwards -- a reasonable trade-off for a working program.

It's possible that these functions are better suited for showing a dialog box inside another window and not really for splitting a window. I'd encourage you to get to know these other windowing functions and use them if they make sense in your programs.
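For reference, here's a minimal sketch of the subwindow approach (per the ncurses man pages, derwin takes the same arguments as subwin, except its coordinates are relative to the parent window rather than the screen):

// a 10x40 "dialog" subwindow sharing stdscr's character buffer
WINDOW *dialog = subwin(stdscr, 10, 40, 5, 5); // nlines, ncols, begin_y, begin_x
box(dialog, 0, 0);                             // border with default line characters
mvwprintw(dialog, 1, 1, "Are you sure? (y/n)");
wrefresh(dialog);
delwin(dialog); // free the subwindow when finished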

Next Steps

You're able to move an object around the screen and can now create multiple windows to split up your game display. Now it's time to add some user interaction into your game -- I'll talk about that in my next post.

For reference, a complete version of the code discussed in this post is available as a Gist.

Author: "Patrick Reagan" Tags: "Extend"
Date: Tuesday, 25 Mar 2014 09:10

If you've ever wanted to create a simple video game that oozes lo-fi 1980's home computer nostalgia, you should definitely check out the ncurses programming library. It's a modern implementation of the original curses library that shipped with early versions of BSD UNIX. You might not be familiar with the name "ncurses", but you use it every time you type the characters t-o-p into your terminal.

To show the most basic usage of how you would use the ncurses library in your program, let's create a simple simulation of a "ball" bouncing back and forth across the screen:

Make sure you have a C compiler installed on your machine and have the ncurses headers available. I'm writing this on OSX Mavericks (which requires installing XCode), but other flavors of Unix should have the headers available for installation if they don't ship with the OS (e.g. apt-get install libncurses5-dev on Ubuntu).

Once you have the necessary dependencies installed, let's start with the basic building block of any ncurses-based program: the window.

Windows

Since we have a simple simulation, we only need to create a single window and draw to it:

initscr();       // Initialize the window
noecho();        // Don't echo any keypresses
curs_set(FALSE); // Don't display a cursor

To test this right now, create a Makefile:

# Makefile
LDFLAGS=-lncurses

all: demo

And a source file (we'll continue to use this file as we enhance the functionality over the course of this post):

// demo.c
#include <ncurses.h>
#include <unistd.h>

int main(int argc, char *argv[]) {

  initscr();
  noecho();
  curs_set(FALSE);

  sleep(1);

  endwin(); // Restore normal terminal behavior
}

Then compile and run the program:

$ make && ./demo

You'll see the screen go blank for a second and then your previous terminal will be restored. Not particularly exciting, but it sets the stage for drawing to the screen.

Drawing

Printing characters to our newly-created window is pretty simple. You treat the screen as a series of XY coordinates when deciding where to display text. Since characters are buffered before they're displayed on the screen, you need to refresh the window to see them.

For this, we use the function mvprintw() in combination with refresh():

#include <ncurses.h>
#include <unistd.h>

int main(int argc, char *argv[]) {

  initscr();
  noecho();
  curs_set(FALSE);

  mvprintw(0, 0, "Hello, world!");
  refresh();

  sleep(1);

  endwin();
}

Now, the text 'Hello, world!' will appear in the upper left corner of the screen for a second before your terminal window is restored. Remember that it's important that you call refresh() when you want to update your display -- forgetting to do this will prevent the text from being printed on the screen.

Moving an Object

Now that we can print text to the screen, let's move a "ball" across the screen. To do that, we'll print the 'o' character at an initial position on the screen and advance it to the right (hit ^C to exit):

#include <ncurses.h>
#include <unistd.h>

#define DELAY 30000

int main(int argc, char *argv[]) {
  int x = 0, y = 0;

  initscr();
  noecho();
  curs_set(FALSE);

  while(1) {
    clear();             // Clear the screen of all
                         // previously-printed characters
    mvprintw(y, x, "o"); // Print our "ball" at the current xy position
    refresh();

    usleep(DELAY);       // Shorter delay between movements
    x++;                 // Advance the ball to the right
  }

  endwin();
}

This will advance the "ball" from the upper left portion of the screen all the way to the right, one column every 30 milliseconds (our DELAY). After a few seconds, you'll notice some undesirable behavior -- when the ball reaches the right-most portion of the screen, it continues to move even though we can no longer see it.

Collision Detection

Now that we've got object movement down, let's make it bounce off the "walls". For this you'll use the getmaxyx() macro to get the dimensions of the screen and add some simple collision detection logic for great justice.

#include <ncurses.h>
#include <unistd.h>

#define DELAY 30000

int main(int argc, char *argv[]) {
  int x = 0, y = 0;
  int max_y = 0, max_x = 0;
  int next_x = 0;
  int direction = 1;

  initscr();
  noecho();
  curs_set(FALSE);

  // Global var `stdscr` is created by the call to `initscr()`
  getmaxyx(stdscr, max_y, max_x);

  while(1) {
    clear();
    mvprintw(y, x, "o");
    refresh();

    usleep(DELAY);

    next_x = x + direction;

    if (next_x >= max_x || next_x < 0) {
      direction *= -1;
    } else {
      x += direction;
    }
  }

  endwin();
}

Now the ball bounces as expected -- from left to right and back again, but if you resize the right-hand side of the window you'll notice that the ball either goes off the screen or bounces back before it hits the edge.

Handling Window Resizing

In our simple example, handling the case where the user resizes the window while the simulation is running is trivial. Moving the call to getmaxyx() into the main loop will reset the "wall" location:

#include <ncurses.h>
#include <unistd.h>

#define DELAY 30000

int main(int argc, char *argv[]) {

  // ...

  while(1) {
    getmaxyx(stdscr, max_y, max_x);

    clear();
    mvprintw(y, x, "o");
    refresh();

    // ...

  }

  // ...
}

Now resizing the window when the program is running produces the desired result -- the ball consistently bounces off the new "wall" location.

Next Steps

This should serve as a good starting point for making your own interactive console games. If you've read this far and want a more in-depth introduction to the features and usage of the library, check out the NCURSES Programming HOWTO and Writing Programs with NCURSES tutorials.

As your games become more complex, you'll want to read up on the advanced windowing capabilities of the library -- take a look at the functions newwin, subwin, wrefresh, and mvwprintw to get started. I'll talk more about these and other related topics in future posts.

For reference, a complete version of the code discussed in this post is available as a Gist.

Author: "Patrick Reagan" Tags: "Extend"
Date: Tuesday, 11 Mar 2014 11:35

Managing deployments is one of the trickier aspects of creating software for the web. Several times a week, a project manager will ask the dev team something to the effect of “what’s new since the last deploy?” – if we did a deploy right now, what commits would that include? Fortunately, the tooling around this stuff has never been better (as Tim Bray says, “These are the good old days.”). Easy enough to pull this info via command line and paste a list into Campfire, but if you’re using GitHub and Capistrano, here’s a nifty way to see this information on the website without bothering the team. As the saying goes, teach a man to fetch and whatever shut up.

Tag deploys with Capistrano

The first step is to tag each deploy. Drop this recipe in your config/deploy.rb (original source):

namespace :git do
  task :push_deploy_tag do
    user = `git config --get user.name`.chomp
    email = `git config --get user.email`.chomp

    puts `git tag #{stage}-deploy-#{release_name} #{current_revision} -m "Deployed by #{user} <#{email}>"`
    puts `git push --tags origin`
  end
end

Then throw an after 'deploy:restart', 'git:push_deploy_tag' hook into the appropriate deploy environment files. Note that this task works with Capistrano version 2 and the capistrano-ext library. For Cap 3, check out this gist from Zachary.
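For example, with a multistage setup that hook might live in a file like this (path assumed):

# config/deploy/production.rb
after 'deploy:restart', 'git:push_deploy_tag'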

GitHub Tag Interface

Now that you’re tagging the head commit of each deploy, you can take advantage of an (as far as I can tell) unadvertised GitHub feature: the tags interface. Simply visit (or have your PM visit) github.com/<organization>/<repo>/tags (e.g. https://github.com/rails/rails/tags) to see a list of tags in reverse chronological order. From here, they can click the most recent tag (production-deploy-2014...), and then the link that says “[N] commits to master since this tag” to see everything that would go out in a new deploy. Or if you’re more of a visual learner, here’s a gif for great justice:


This approach assumes a very basic development and deployment model, where deploys are happening straight from the same branch that features are being merged into. As projects grow more complex, so must your deployment model. Automatically tagging deploys as we’ve outlined here breaks down under more complex systems, but the GitHub tag interface continues to provide value if you’re tagging your deploys in any manner.

Author: "David Eisinger" Tags: "Extend"
Date: Thursday, 06 Mar 2014 13:46

We love writing Rails apps at Viget. The framework covers so much ground right out of the box by convention. However, Rails is oddly silent about one thing: seed data. Beyond a simple Rake task, db:seed, each project is left on its own to define how best to provide seeds.

Your Seed Data Is Important

Without expounding too much on why seeds are vital over the lifetime of a project, we use them to provide:

  • data that's required for the app to function properly (admin accounts, roles, etc)
  • sample data so staging environments can feel like the real thing
  • data in a variety of potentially intricate states
  • a backdrop for QA testing
  • a reproducible "vanilla" integration server state

Given the many uses across the whole team, we saw a chance to create a standard for our own process. From that effort grew (oh ho!) Sprig.

Introducing Sprig

[Image: Sprig logo]

Seed by Convention

Sprig provides common sense conventions for seeding Rails apps. The simplest use might look like:

Step 1:

db/seeds/development/users.yml

records:
  - sprig_id: 1
    first_name: "John"
    last_name: "Smith"
    date_of_birth: "<%= 1.year.ago %>"

You define your seeds in the formats you love: YML, JSON, and CSV are supported out of the box.

Step 2:

db/seeds/development.rb

include Sprig::Helpers

sprig User

You tell Sprig which classes should be seeded for the current environment. Sprig finds your lovely seed files and does the rest.

Step 3:

No wait... That's it.

Relational Seeding for Relational Data

All real-world applications contain relationships between models. Sprig allows seeds to reference one another as if they already exist, so seeding relationships is as easy as seeding records.

records:
  - sprig_id: 1
    user_id: "<%= sprig_record(User, 1).id %>"
    body: "Ipsum lorem"

Keeping track of the order and direction of your seeds' dependencies can be tricky. No worries. Sprig automatically determines the order in which seeds should be persisted. And if you accidentally create a circular dependency, Sprig helps you resolve it.

Customization

Although the Sprig convention expects YML, CSV, and JSON seed files, it doesn't limit you to them. You can easily create and use your own custom parser (like this one that lets us use a Google Spreadsheet as our data source). Sprig can be extended to parse any kind of seed file from any source:

sprig [{
  class:  Post,
  source: open('https://spreadsheets.google.com/feeds/list/somerandomtoken/1/public/values?alt=json'),
  parser: Sprig::Data::Parser::GoogleSpreadsheetJson
}]

Sprig also accepts options like find_existing_by, which tells Sprig to update records matching the provided criteria instead of persisting a new record, allowing for idempotent seeding operations.
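As a sketch, that might look like the following in a seed file (assuming an options block alongside records -- check the Sprig docs for the exact syntax):

# db/seeds/development/users.yml
options:
  find_existing_by: 'last_name'

records:
  - sprig_id: 1
    first_name: "John"
    last_name: "Smith"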

Give It A Spin

The full set of features can be found on the Sprig pages. We've already used Sprig on many of our own projects for great justice and hopefully you can too!

So what are you waiting for? gem install sprig

To Improve Is To Change

Something missing? We're actively developing Sprig to be better all the time and are happily accepting issues and pull requests at the project's home on GitHub.


Thanks to Ryan Foster for his significant contributions to this post and the Sprig project, to the entire Viget development team for helping to make Sprig awesome, and to Mindy Wagner for the sweet logo.

Author: "Lawson Kurtz" Tags: "Extend"
Date: Tuesday, 04 Mar 2014 12:08

As developers, we're occasionally tasked with maintaining software that we weren't directly involved in crafting. We don't have extensive knowledge of the domain, yet we can read the code, (hopefully) understand what's there, and modify it. We'll even attempt to improve the application and leave it in a better state than we found it. What follows is one such tale.

I recently had to make a change on a legacy Rails application. In order to make the change, I had to find the correct instance of Site. I didn't know which attribute on Site to query for, so the first thing I did was open the database schema to see a list of all attributes on Site. Wouldn't it be nice if I didn't need to look at the database schema to find the attribute I need? What if I told you I had an easier way of finding instances of your classes?

Avdi Grimm reminded me in his book Confident Ruby (which we loved) about Ruby's conversion methods. Specifically, methods like Array('thing') # => ["thing"] and URI('http://place.com') # => URI object with the URL of 'http://place.com'. I thought, wouldn't it be great if I could write Site('widget') and have it return the correct instance of Site? So that's just what I did.

Writing the conversion method was fairly simple:

def Site(site)
  if site.is_a?(Site)
    site
  elsif site.is_a?(String)
    Site.find_by_subdirectory!(site)
  else
    raise ArgumentError, 'bad argument (expected Site object or string)'
  end
end

The above code says that if site is already an instance of Site, then return it. If site is a string, find the instance of Site by subdirectory. If site is neither an instance of Site nor an instance of String, then raise an ArgumentError, informing the developer what the method was expecting. Great -- now I have a conversion method with which I can write Site('widget') and get the instance of the Widget site. But where do I put this method in a Rails application?

I could put it in an initializer, but it feels like it should really go with the Site class definition, in case a developer wants to extend it in the future. Where does Ruby place its conversion methods? To answer that question, I took a peek at Ruby's URI class. I noticed how the conversion method lives alongside the class definition, at the bottom of the file, within the Kernel module. Following this precedent, I modified the Site class to the following:

class Site < ActiveRecord::Base
  # … wow. such code …
end

module Kernel
  # Returns +site+ converted to a Site object.
  def Site(site)
    if site.is_a?(Site)
      site
    elsif site.is_a?(String)
      Site.find_by_subdirectory!(site)
    else
      raise ArgumentError, 'bad argument (expected Site object or string)'
    end
  end
end

I really like that I can now retrieve a site with Site('widget') and not have to look up the correct attribute to query by.
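Usage then looks like this (record attributes hypothetical):

Site('widget')    # => #<Site id: 42, subdirectory: "widget", ...>
Site(Site.first)  # => the same Site instance, returned untouched
Site(42)          # => ArgumentError: bad argument (expected Site object or string)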

What do you think? Do you use conversion methods in your Rails applications? Let me know in the comments below.

Author: "Zachary Porter" Tags: "Extend"
Date: Thursday, 27 Feb 2014 13:20

We live in the age of remote resources. It's pretty rare to store uploaded files on the same machine as your server process. File storage these days is almost completely remote, and for very good reasons.

Using file storage services like S3 is awesome, but not having your files accessible locally can complicate the performance of file-oriented operations. In these cases, use Ruby's Tempfile to create local files that live just long enough to satisfy your processing needs.

Anatomy of Tempfile#new

# What you want the tempfile filename to start with
name_start = 'my_special_file'

# What you want the tempfile filename to end with (probably including an extension)
# by default, tempfilenames carry no extension
name_end = '.gif'

# Where you want the tempfile to live
location = '/path/to/some/dir'

# Options for the tempfile (e.g. which encoding to use)
options = { encoding: Encoding::UTF_8 }

# Will create a tempfile
# at /path/to/some/dir/my_special_file_20140224-1234-abcd123.gif
# (where '20140224-1234-abcd123' represents some unique timestamp & token)
# with a UTF-8 encoding
# with the contents 'Hello, tempfile!'
#
# Note: Tempfile.new doesn't yield a block (that's Tempfile.open),
# so we write to the returned file object directly
file = Tempfile.new([name_start, name_end], location, options)
file.write('Hello, tempfile!')

Example Application

URL to Tempfile: Remote File Processing

We have a service that takes a URL and processes the file it represents using a Java command-line utility. Our command-line utility expects a filepath argument, so we must create a local file from the remote resource before processing.

class LocalResource
  attr_reader :uri

  def initialize(uri)
    @uri = uri
  end

  def file
    @file ||= Tempfile.new(tmp_filename, tmp_folder, encoding: encoding).tap do |f|
      io.rewind
      f.write(io.read)
      f.close
    end
  end

  def io
    @io ||= uri.open
  end

  def encoding
    io.rewind
    io.read.encoding
  end

  def tmp_filename
    [
      Pathname.new(uri.path).basename,
      Pathname.new(uri.path).extname
    ]
  end

  def tmp_folder
    # If we're using Rails:
    Rails.root.join('tmp')
    # Otherwise:
    # '/wherever/you/want'
  end
end

def local_resource_from_url(url)
  LocalResource.new(URI.parse(url))
end

# URL is provided as input
url = 'https://s3.amazonaws.com/your-bucket/file.gif'

begin
  # We create a local representation of the remote resource
  local_resource = local_resource_from_url(url)

  # We have a copy of the remote file for processing
  local_copy_of_remote_file = local_resource.file

  # Do your processing with the local file
  `some-command-line-utility #{local_copy_of_remote_file.path}`
ensure
  # It's a good idea to explicitly close your tempfiles
  local_copy_of_remote_file.close
  local_copy_of_remote_file.unlink
end

Tempfiles vs Files

Ruby Tempfile objects act almost identically to regular File objects, but have a couple of advantages for transient processing or uploading tasks:

  • Tempfiles' filenames are unique, so you can put them in a shared tmp directory without worrying about name collision.
  • Tempfiles' files are deleted when the Tempfile object is garbage collected. This prevents a bunch of extra files from accidentally accumulating on your machine. (But you should of course still explicitly close Tempfiles after working with them.)

Common Snags

Rewind

Certain IO operations (like reading contents to determine an encoding) move the file pointer away from the start of the IO object. In these cases, you will run into trouble when you attempt to perform subsequent operations (like reading the contents to write to a tempfile). Move the pointer back to the beginning of the IO object using #rewind.

io_object = StringIO.new("I'm an IO!")
encoding = io_object.read.encoding

# The pointer is now at the end of 'io_object'.
# When we read it again, the return is an empty string.
io_object.read
# => ""

# But if we rewind first, we can then read the contents.
io_object.rewind
io_object.read
# => "I'm an IO!"

Encoding

Often you'll need to ensure the proper encoding of your tempfiles. You can provide your desired encoding during Tempfile initialization as demonstrated below.

encoding = Encoding::UTF_8

Tempfile.new('some-filename', '/some/tmp/dir', encoding: encoding).tap do |file|
  # Your code here...
end

Obviously your desired encoding won't always be the same for every file. You can find your desired encoding on the fly by sending #encoding to your file contents string. Or if you're using an IO object, you can call io_object.read.encoding.

encoding = file_contents_string.encoding
# or
# encoding = io_object.read.encoding

Tempfile.new('some-filename', '/some/tmp/dir', encoding: encoding).tap do |file|
  # Your code here...
end

Read more about Ruby encoding.

Extensions

By default, files created with Tempfile.new will not carry an extension. This can pose problems for applications or tools (like Carrierwave and soffice) that rely on a file's extension to perform their operations.

In these cases, you can pass an extension to the Tempfile initialization as demonstrated above in Anatomy of Tempfile#new.

# A quick refresher
Tempfile.new(['file_name_prefix', '.extension'], '/tmp')

If you need to dynamically determine your file's extension, you can usually grab it from the URL or file path you are reading into your Tempfile:

uri = URI.parse('https://example.com/some/path/to/file.gif')
path = '/some/path/to/file.gif'

Pathname.new(uri.path).extname
# => '.gif'

Pathname.new(path).extname
# => '.gif'

Local Development (Paths vs URLs)

Many developers use local file storage for their development environment. In these cases, local file paths often appear in methods that are expecting URLs. Not fun.

OpenURI to the Rescue

If you need to write code that supports reading files from both file paths and URLs, OpenURI is your savior.

OpenURI is accessible via the Kernel function open, which provides a file-like API for both local and remote resources.

open('/path/to/your/local/file.gif') do |file|
  # Your code here...
end
open('https://s3.amazonaws.com/your-bucket/file.gif') do |file|
  # Your code here...
end

We like Ruby Tempfiles for performing file-oriented operations on remote resources. What do you use?

Thanks to Ryan Foster for his contributions to the sample code.

Author: "Lawson Kurtz" Tags: "Extend"
Date: Tuesday, 25 Feb 2014 14:37

The Background

Recently, we had to restructure a complicated piece of a pretty huge Ruby on Rails application. This resulted in significant changes to the model landscape, including the removal of a model that was rendered obsolete. While this model was going to go away, we didn’t want to lose all the data we had in production, but we also didn’t want to just leave around an unused table in our database.

This kind of thing happens all the time, but we ran into an interesting challenge with this particular data migration when one of the columns was used to store an Amazon S3 URL for a CarrierWave-uploaded file. CarrierWave is pretty magical, but we found it can be difficult to work with when trying to migrate existing data around a destructive Rails migration (where we make structural changes to our database that result in a loss of data). In this post, I’d like to share our experiences and two of the approaches we experimented with.

An Example

Here’s a simple example we’ll use to demonstrate each of the data migration approaches:

Say we have a Vehicle model with scopes car_sized and truck_sized and we need to break out those two scoped Vehicles into their own classes, Car and Truck. We’re getting rid of the Vehicle model, but we don’t want to lose all that data – rather, we want to just copy them over into the appropriate target model (either Car or Truck). Let’s also say our Vehicle model belongs_to a Make and a Model and has the following attributes:

  • :year (integer)
  • :color (string)
  • :owners_guide (string) – this will represent our CarrierWave uploader and will contain our Amazon S3 URL

Approach 1: Scripts Writing Scripts!

With this approach, we’re writing a Ruby script that uses a lot of puts statements to write out a bunch of strings containing valid Ruby code, which we’ll run on the other side of our Rails migrations. It’s definitely not the prettiest, but it is pretty neat. Given our example, here’s what a basic implementation might look like:

module VehicleData
  def self.migrate
    puts "def file_from_url(url)"
    puts "  #Create a file object from S3 URL"
    puts "end\n"

    ['car', 'truck'].each do |type|
      Vehicle.send("#{type}_sized").each do |vehicle|
        puts "#{type.capitalize}.create(:make_id => #{vehicle.make_id}, :model_id => #{vehicle.model_id}, :year => #{vehicle.year}, :color => \"#{vehicle.color}\", :owners_guide => file_from_url(\"#{vehicle.owners_guide.url}\"))"
      end
    end
  end
end

VehicleData.migrate

One quick thing to note before we continue: you’ll need to surround any String-type data with escaped quotes (\"), as with vehicle.color above.

Say we named the above script test_script.rb and saved it in our project root; we’d then run rails runner test_script.rb > output_script.rb. This command runs the test_script.rb file with the current Rails environment loaded. If we were to do this in production, the command would probably look something like RAILS_ENV=production bundle exec rails runner test_script.rb > output_script.rb. Once our script completes, output_script.rb should contain something like the following:

def file_from_url(url)
  #Create a file object from S3 URL
end

Car.create(:make_id => 17, :model_id => 85, :year => 2012, :color => "red", :owners_guide => file_from_url("s3.com/example1.pdf"))
Car.create(:make_id => 3, :model_id => 27, :year => 2007, :color => "black", :owners_guide => file_from_url("s3.com/example2.pdf"))
Truck.create(:make_id => 10, :model_id => 44, :year => 2014, :color => "silver", :owners_guide => file_from_url("s3.com/example3.pdf"))

After deploying and running migrations, simply run output_script.rb with rails runner just like the previous script, and your data is migrated!

This approach works just fine for numeric and string data. Dates and times require a little extra work, since you need to print out a date/time in string format to the output script, wrapped in an appropriate .parse call. CarrierWave-uploaded S3 files proved pretty difficult to work with in this approach. Getting a path or URL is simple enough, but when you create an object on the other side of your Rails migration that also has a CarrierWave uploader, it expects a file. In our case, the S3 URLs included quickly-expiring tokens that would be invalid by the time our deployment completed. With destructive migrations, we decided this approach was a little too brittle.
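
For completeness, here’s one way that file_from_url stub might be filled in, using open-uri to pull the remote file down locally (a minimal sketch, and one that still suffers from the token-expiry problem just described):

require 'open-uri'

def file_from_url(url)
  # Downloads the remote file so CarrierWave can accept it. Note that
  # open-uri returns a StringIO for small responses, and this breaks
  # entirely once the signed token in the S3 URL expires.
  open(url)
end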

Approach 2: Two-Phase Migration - Copy + Cleanup

With this approach, we’re also writing a Ruby script to copy our data over to the new objects, but the major difference is that we run it after our first set of migrations, where we introduce our new application code while keeping the existing models (including their relationships and key things like our CarrierWave uploader) intact. Here, we have both the old models and data along with our new models, so copying the data is pretty straightforward. Additionally, there’s no risk of losing the existing data, since we’re only reading it in order to copy. If problems arise during the copying of data, we can fix our script and retry. The downside to this approach is the extra work involved in keeping the models and related pieces intact during the first deployment, followed by a separate cleanup effort in a second deployment.

Given our example from above, our script for this approach would look something like this:

module VehicleData
  def self.migrate
    ['car', 'truck'].each do |type|
      Vehicle.send("#{type}_sized").each do |vehicle|
        type.capitalize.constantize.create(:make_id      => vehicle.make_id,
                                           :model_id     => vehicle.model_id,
                                           :year         => vehicle.year,
                                           :color        => vehicle.color,
                                           :owners_guide => vehicle.owners_guide.file)
      end
    end
  end
end

VehicleData.migrate

Pretty similar! If our example didn’t involve the CarrierWave-uploaded S3 files, it would probably be much less work to go with Approach 1 and handle everything in a single batch of application code + migrations. However, if your data is especially sensitive and extra precaution is required, then this is the way to go. Since we did have to consider CarrierWave-uploaded S3 files, this approach made things much simpler.

After all the existing data has been copied over to the new objects, simply deploy your cleanup code + migrations.
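
The cleanup migration itself can be as simple as dropping the now-unused table (a sketch, assuming the table name from our example):

# db/migrate/20140225000000_drop_vehicles.rb
class DropVehicles < ActiveRecord::Migration
  def up
    drop_table :vehicles
  end

  def down
    # The old rows can't be reconstructed once the table is gone
    raise ActiveRecord::IrreversibleMigration
  end
end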

In a Nutshell

There are many ways to handle data migrations besides the two covered here. The most important thing is to pick the right one for the job. Take the time to think through and plan ahead. Regardless of the approach you take, always back up your data and test with copies of the real data to ensure your game-time data migrations go smoothly!

Author: "Ryan Stenberg" Tags: "Extend"
Date: Wednesday, 19 Feb 2014 15:08

This January, we teamed up with Dick's Sporting Goods to launch Gear in Action, a digital lookbook featuring products for the 2014 Baseball season. As you can see on this page of the finished site, video of the products in action displays at full height and width. This occurs regardless of browser size, and without distorting the video's original aspect ratio.

Because video is central to the site experience, video quality was a top priority. We couldn't risk cropping out critical subject matter, but we still needed video with a 16:9 aspect ratio to fill the screen, even on portrait tablets like the iPad.

Techniques

For reference, snippets below are based on the following markup and styles. We also created a demo site for previewing each technique.

<body>
    <video id="player" autoplay preload data-origin-x="20" data-origin-y="40">
        <source src="video/pitcher.mp4" type="video/mp4">
        <source src="video/pitcher.webm" type="video/webm">
    </video>
</body>
body {
    height: 100%;
    margin: 0;
    overflow: hidden;
    position: absolute;
    width: 100%;
}

Pure CSS

One of the first fullscreen solutions we tried uses pure CSS. It sets the video min-height and min-width to 100%:

video {
    min-height: 100%;
    min-width: 100%;
    position: absolute;
}

This technique is simple and works fairly well, but sizing can be an issue. When the browser viewport is smaller than native video dimensions, the video stops scaling down. In our case, this led to undesired cropping.


Transform: scale

CSS transforms are capable of doing a lot of things. But can videos scale? The answer is yes.

video {
    height: 100%;
    width: 100%;
    transform: scale(1.2);
    -webkit-transform: scale(1.2);
}

This solution is more flexible, but we needed JavaScript to determine the right scale values for different browser sizes.

// Basic 'gist' only. See demo source for more.
var scale = 1;
var videoRatio = 16 / 9;
var viewportRatio = $(window).width() / $(window).height();

if (videoRatio < viewportRatio) {
    // viewport more widescreen than video aspect ratio
    scale = viewportRatio / videoRatio;
} else if (viewportRatio < videoRatio) {
    // viewport more square than video aspect ratio
    scale = videoRatio / viewportRatio;
}
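
With the scale computed, applying it is a one-liner (a sketch using jQuery, as in the demo):

// Apply the computed scale to the video element
$('#player').css({
    '-webkit-transform': 'scale(' + scale + ')',
    'transform': 'scale(' + scale + ')'
});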

Transform-origin

Scaling with transforms is an improvement, but what if video content isn't always perfectly centered?

You may have noticed the demo focuses on a pitcher in his throwing motion. Unfortunately, the pitcher is pretty far from the middle of the video. This isn't very noticeable at wider aspect ratios, like a typical laptop screen or monitor, but we start to lose content as the screen becomes narrower. On an iPad's portrait orientation (16:9 video on a 3:4 screen), we miss out on a lot of details:

If not explicitly set, the default transform-origin is the element's center, 50% 50%. The scale applies like a center crop or zoom. To get the pitcher back in focus, we needed to update the transform-origin property.

video {
    transform-origin: 20% 50%;
    -webkit-transform-origin: 20% 50%;
}

This gets the pitcher back in focus.
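
Since the ideal origin point varies from video to video, we can drive it from the data-origin-x and data-origin-y attributes in the markup above (a minimal sketch, assuming jQuery):

// Read the per-video origin from data attributes, defaulting to center
var $video = $('#player');
var originX = $video.data('origin-x') || 50;
var originY = $video.data('origin-y') || 50;

$video.css({
    '-webkit-transform-origin': originX + '% ' + originY + '%',
    'transform-origin': originX + '% ' + originY + '%'
});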

The transform-origin is extremely useful because the effect is relative to the transform itself. Note the difference in these screenshots, all with the same transform-origin applied.


Don't forget to check out techniques (and code) for yourself in the demo. If you're using fullscreen video in your projects, we'd love to hear about it in the comments.

Author: "Chris Manning" Tags: "Extend"
Date: Friday, 07 Feb 2014 14:30

Back in November, Blair wrote a great post about designing device assets. In her post, she details the ins and outs of creating favicons, touch icons, and Windows 8 Tiles. If you’ve read her post (or worked with device assets before), you know that the sheer number of assets you could create is overwhelming. But which assets should you create?

In this post, I’ll outline the approach I used to create a practical and manageable set of device assets for World Wildlife Fund’s recently-launched Find Your Inner Animal quiz.

I’ll start with…

The Mighty favicon

Blair linked to this css-tricks.com article that describes how to create a favicon using Icon Slate. That pretty much sums up what you need to do to combine several PNGs into a compatible favicon.ico. The only additional piece of advice I have to offer is that you should optimize those PNGs (using ImageAlpha and ImageOptim, perhaps?) before dragging them into Icon Slate.

As you’d imagine, Wikipedia has a ridiculously thorough article on favicons, including compatibility charts. While you could include markup to link to your favicon, every browser released in the last ten years will look for a file named favicon.ico in the root of your website. Dropping a file named favicon.ico into your site’s root is sufficient (and nearly effortless!).
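
If you do want explicit markup anyway (say, to serve the icon from somewhere other than the root), the classic link element still works:

<link rel="shortcut icon" href="/favicon.ico">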

Touch Icons

Oh, heavens. Touch icons.

If you take a look at Mathias Bynens’ exhaustive post on the subject, you’d rightly conclude that the most appropriate course of action is to run screaming for the hills.

There are a couple of different approaches you could take here.

You could create images at every size Blair mentioned in her post and add a link element for each. This “kitchen sink” approach guarantees that an appropriately-sized touch icon is provided to every browser that supports touch icons. There’s nothing wrong with this approach; I just think it’s overkill.

My favorite hardline approach is the “no HTML” solution Mathias outlines. Simply drop whichever combination of properly-sized images you prefer into the root of your site and let browsers sort it out. This approach is great, except that it leaves Android browsers out in the cold.

We can do better! For Find Your Inner Animal, we used the following combination of icons and markup.

The icons:

  • apple-touch-icon-152x152-precomposed.png
  • apple-touch-icon-120x120-precomposed.png
  • apple-touch-icon-76x76-precomposed.png
  • apple-touch-icon-72x72-precomposed.png
  • apple-touch-icon-precomposed.png (sized 57x57)
  • apple-touch-icon.png (sized 57x57)

The markup:

<link rel="apple-touch-icon-precomposed" href="/apple-touch-icon-152x152-precomposed.png" sizes="152x152">
<link rel="apple-touch-icon-precomposed" href="/apple-touch-icon-120x120-precomposed.png" sizes="120x120">
<link rel="apple-touch-icon-precomposed" href="/apple-touch-icon-76x76-precomposed.png" sizes="76x76">
<link rel="apple-touch-icon-precomposed" href="/apple-touch-icon-precomposed.png">

This bizarre collection of images and markup provides Retina-quality touch icons to high-DPI displays and ever-so-slightly scaled touch icons to older devices running iOS and Android. The scaling isn’t the greatest, but it’s a practical tradeoff.

Windows Tiles

Adding support for Windows Tiles to your site requires a tiny bit of legwork. The first thing to know is that implementation differs dramatically between Internet Explorer versions 10 and 11. Let’s take a look at the following markup:

<meta name="msapplication-config" content="/ieconfig.xml">
<meta name="msapplication-TileColor" content="#6a9a22">
<meta name="msapplication-TileImage" content="/ms-tile-144x144.png">

I’ll come back to that first line in a moment. Lines 2 and 3 are custom meta elements that instruct IE 10 to use #6a9a22 as the tile’s background color and /ms-tile-144x144.png as the tile’s image. The color you choose and the file name are entirely up to you, but the image should be 144 pixels square.

Now for that first line. ieconfig.xml is a small file that looks like this:

<?xml version="1.0" encoding="utf-8"?>
<browserconfig>
    <msapplication>
        <tile>
            <square70x70logo src="ms-tile-128x128.png"/>
            <square150x150logo src="ms-tile-270x270.png"/>
            <wide310x150logo src="ms-tile-558x270.png"/>
            <square310x310logo src="ms-tile-558x558.png"/>
            <TileColor>#6a9a22</TileColor>
        </tile>
    </msapplication>
</browserconfig>

Ignore as best you can those heinously-named elements. The important parts are the various paths to file names and the TileColor element. As mentioned in Creating custom tiles for IE11 websites, tile images should be 1.8 times larger than you’d think in order to accommodate a wide range of devices. This accounts for the apparent discrepancy between an element’s name (square70x70logo) and its associated image (ms-tile-128x128.png).

Save that XML to a file named ieconfig.xml, throw it in the root of your project, and link to it using the meta element from the earlier code block. Fire up your favorite Windows 8/8.1 device and you’ll be able to pin your site to the Start screen.

All Set!

Device assets are by no means hard to implement, just tricky. There have been significant implementation changes over the last year or two as devices shift toward high-DPI displays. The landscape has shifted so quickly and dramatically that supporting all features across all devices would be a real headache. With the above tips, you should now be able to support a broad swath of devices without pulling out your hair!

Author: "Jason Garber" Tags: "Extend"
Date: Thursday, 06 Feb 2014 12:26

During our work on a responsive news website for WRAL, some design strategies called for custom content placements at various breakpoints. And by custom, I mean custom: the kind of flexible placements where CSS alone is not a realistic or responsible solution.

Reflow Options

Reflow Options Illustration

Our team identified a need for a 'blended' solution. We knew that JavaScript would be necessary, but we also wanted to create a framework that was easy for non-JS developers to use. With this in mind, Nate Hunzaker wrote Transport, a small (2kB) and simple jQuery plugin for moving HTML at matching media queries.

How it Works

Transport uses matchMedia (with a polyfill included for older browsers) to check for predetermined breakpoints. When a match is found, the HTML is appended to the specified destination for that breakpoint.

HTML
<main id="main"></main>
<footer id="footer"></footer>
<aside id="sidebar">

    <!--
        Transport looks for a pattern in the data-transport attribute:
        [media query key]![jQuery selector to transport to]
        Multiple matches should be delimited by pipes, i.e. '|'
    -->
    <div data-transport="tablet!#main|mobile!#footer">
        <p>
            Breakdown: at tablet, (max-width: 1024px), this is transported to $("#main").
            At mobile, (max-width: 500px), this is transported to $("#footer").
        </p>
    </div>

</aside>
JavaScript
$('[data-transport]').transport({
    mobile: '(max-width: 500px)',
    tablet: '(max-width: 1024px)'
});

 

Here is an illustration of how content could transport to a completely different area of the page at the “tablet” breakpoint:

 

Tablet Layout Transport Locations

Ease of Use

Transport was designed to require as little JavaScript knowledge as possible. Once the initial script is set up, anyone with a working knowledge of HTML can customize the position of content by editing data-transport attributes.

Download and Demo

jQuery Transport is available on Github. A demo is also available here. Please let us know of any ways we could improve Transport by opening an issue.

Visit http://viget.com/work/wral for more details about our work on WRAL's site relaunch.

Author: "Chris Manning" Tags: "Extend"
Date: Tuesday, 04 Feb 2014 08:04

One way to cut down on the initial download of site assets is to load the JavaScript necessary for special components only when those components are present on the page, instead of including the scripts all in a single concatenated global file. Depending on the number of components and the frequency with which they appear on a given site, this can be an easy win, especially for mobile. In theory, less JS to download on a page without any components = less JS to process = faster performance on mobile devices.

Similar in concept to the feature-based execution + script loader pattern that Trevor shared last year, this approach simply identifies the components in a different way and uses RequireJS to handle the script loading aspect.

Using a carousel and a video player as examples, which may appear on any page:

<div data-component="carousel">...</div>
<div data-component="video-player">...</div>

Get all elements with a data-component attribute and load the specified JS file via RequireJS:

var components = document.querySelectorAll('[data-component]');

for (var n = 0; n < components.length; n++) {
    var el = components[n];

    // el.dataset for modern browsers, el.getAttribute for IE8 - IE10
    var componentName = el.dataset !== undefined ? el.dataset.component : el.getAttribute('data-component');

    require([componentName]);
}

Or with jQuery:

$('[data-component]').each(function() {
    require([$(this).attr('data-component')]);
});

Then in carousel.js and video-player.js, load any dependencies and initialize your components however you see fit.
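
For example, carousel.js might be a small AMD module (a sketch; the dependency and plugin names here are hypothetical):

// carousel.js -- loaded on demand by the loop above
define(['jquery', 'vendor/carousel-plugin'], function ($) {
    // Initialize every carousel component on the page
    $('[data-component="carousel"]').each(function () {
        $(this).carouselPlugin(); // hypothetical plugin call
    });
});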

That’s all there is to it! With this simple optimization, you can cut down on that initial JS load which can really help for pages that don’t have any components present.

Author: "Jeremy Frank" Tags: "Extend"
Date: Wednesday, 29 Jan 2014 15:15

As a developer, a majority of my day is spent at my computer cranking away on various things. Inspired by a RubyRogues episode (Sharpening Tools with Ben Orenstein), I’ve taken an interest lately in making my day-to-day tasks more efficient, scratching the itches that are the minor annoyances in my development days. I thought I’d share the progress I have made in the hope that it will help others, so without further ado, I present: creating a Github repo from the command line.

The Command

Here is the straightforward command to get the job done:

curl -u "$username:$token" https://api.github.com/user/repos -d '{"name":"'$repo_name'"}'

To use, you could simply replace $username with your Github username, $token with a Personal Access Token for the same user (available for generation in your Github Settings > Applications), and $repo_name with your desired new Repository name.
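
Filled in with (fake) example values, it looks like this:

curl -u "jdoe:abc123personaltoken" https://api.github.com/user/repos -d '{"name":"my-new-repo"}'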

The Bash Function

Creating a repo from the command line is definitely faster than going to Github and using the web app to get the job done, but in order to truly make this task speedy, we need some Bash programming. Thanks to coworkers Patrick and Brian for ideas here.

github-create() {
  repo_name=$1

  dir_name=`basename $(pwd)`

  if [ "$repo_name" = "" ]; then
    echo "Repo name (hit enter to use '$dir_name')?"
    read repo_name
  fi

  if [ "$repo_name" = "" ]; then
    repo_name=$dir_name
  fi

  username=`git config github.user`
  if [ "$username" = "" ]; then
    echo "Could not find username, run 'git config --global github.user <username>'"
    invalid_credentials=1
  fi

  token=`git config github.token`
  if [ "$token" = "" ]; then
    echo "Could not find token, run 'git config --global github.token <token>'"
    invalid_credentials=1
  fi

  if [ "$invalid_credentials" == "1" ]; then
    return 1
  fi

  echo -n "Creating Github repository '$repo_name' ..."
  curl -u "$username:$token" https://api.github.com/user/repos -d '{"name":"'$repo_name'"}' > /dev/null 2>&1
  echo " done."

  echo -n "Pushing local code to remote ..."
  git remote add origin git@github.com:$username/$repo_name.git > /dev/null 2>&1
  git push -u origin master > /dev/null 2>&1
  echo " done."
}

Plop this function into your ~/.bash_profile, open a new Terminal window or source ~/.bash_profile, and the function will be loaded up and ready for use.

Then while in an existing git project, running github-create will create the repo and push your master branch up in one shot. You will need to set some github config variables (instructions will be spit out if you don’t have them). Here’s an example:

BASH:projects $ rails new my_new_project
  ..... (a whole lot of generated things)
BASH:projects $ cd my_new_project/
BASH:my_new_project $ git init && git add . && git commit -m 'Initial commit'
  ..... (a whole lot of git additions)
BASH:my_new_project $ github-create
  Repo name (hit enter to use 'my_new_project')?

  Creating Github repository 'my_new_project' ... done.
  Pushing local code to remote ... done.

Had I called the function with an argument — github-create my_project — then it would have used the argument and skipped the Repo name question.

I’d originally whipped up something simpler and hard-coded my username and access token into the function, but the provided example is a bit more robust. Plus, it was a fun little exercise in Bash functions, which isn’t something I typically write, though I’m looking forward to doing more of it as I tackle my next efficiency irks.

If you’ve done something similar, or have some comments on the implementation here, I’d love to hear about it in the comments below.

Author: "Eli Fatsi" Tags: "Extend"
Date: Tuesday, 28 Jan 2014 15:15

It's no secret that we're big fans of Craft, the new CMS by Pixel & Tonic. While ExpressionEngine remains our go-to CMS for most client work, we've been spending an increasing amount of time experimenting with Craft.

As we get further into Craft development, we're asking ourselves one big question: "How easily can we get a new project up and running?" With ExpressionEngine, we've built up our toolset and bootstrapping scripts over time to the point that spinning up a new EE project is a straightforward—and quick—process. We haven't had something like that for Craft.

Until now. #epicforeshadowing

Over the last few weeks, I spent some free time playing around with Craft. While I won't expound on the merits of choosing Craft for your project (I'll leave that to the experts), I will say that the app is beautifully designed and the installation docs are very thorough and well-written. Still, there are a bunch of setup tasks that would benefit from automation.

Introducing craft-master

craft-master is a set of Ruby-based Rake tasks that automate common Craft installation chores. By issuing a single command, rake install, you'll be guided through the process of installing and configuring Craft.

These steps include:

  • Downloading and installing the latest Craft core code base.
  • Configuring local database settings.
  • Creating a local MySQL database.
  • Setting up a virtual host for the local web server.

At each step, craft-master asks a series of basic questions, gathering the necessary information to complete that portion of Craft's setup. Much easier than doing all that work by hand.

Why Ruby?

Craft is written in PHP, so wouldn't it have made sense to write a set of tools in PHP? That's an excellent question!

The answer is entirely practical. I'm much more comfortable writing Ruby and Rake tasks than I am writing PHP scripts. While far from being a Ruby master, I've got a decent grasp on the Ruby language and appreciate the organizational capabilities of namespaced Rake tasks. Additionally, using ERB for templating and YAML for configuration files is dead simple in Ruby.
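
As a quick taste of that combination, here's roughly what rendering a config file from YAML answers through an ERB template looks like (the file names here are hypothetical):

require 'erb'
require 'yaml'

# Load answers gathered during `rake install` (hypothetical file)
config   = YAML.load_file('config/answers.yml')
@db_name = config['db_name']

# Render Craft's database config from an ERB template (hypothetical paths)
template = ERB.new(File.read('templates/db.php.erb'))
File.write('craft/config/db.php', template.result(binding))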

What's more, craft-master uses a set of tools (Ruby, Rake, and Bundler) that are already part of our toolchain here at Viget. If you use front-end development tools like Sass and Compass for pre-processing stylesheets or Capistrano for deployment, craft-master lays the groundwork for including those tools in your project.

Your needs and opinions may vary, of course, but for me, building craft-master in Ruby was the obvious choice.

Free as in Beer

Since Craft's core code base is freely available, we've open-sourced craft-master with the hope that you'll find it as useful as we do. Additionally, we're looking to see what ideas you bring to the table. There are already a couple of open Issues on GitHub that we hope to tackle soon.

So that's craft-master! What do you think? Let us know if you give it a try and if it suits your needs.

Author: "Jason Garber" Tags: "Extend"
Date: Thursday, 16 Jan 2014 13:45

78% of all emails may be spam, but the emails your web application sends are special.

For email-dependent workflows, the confirmed delivery of your app’s email is a necessity. You need to know when something goes wrong, so your application can adapt on the fly. This poses a significant challenge because email is inherently one-way. Email flies out; nothing comes back.

Fortunately, a number of email services like Mailgun now offer a suite of tools to provide some insight into email deliverability. Perhaps the most helpful tool provided by Mailgun is their collection of webhooks, which POST delivery event-related data back to your application in real time.

The email delivery information provided via these webhooks can be a powerful knowledge resource for your app, but only if it knows what to do with it. That’s why we created Special Delivery.

Special Delivery is a Rails engine that makes it easy to perform actions in response to email delivery events POSTed via Mailgun webhooks. Special Delivery allows you to associate an ActiveRecord object with each outgoing email; that object is then accessible to the callback you execute in response to delivery events. Using these callbacks, your app can respond in a targeted way, without the need for undesirable, one-off email-parsing code.

Need to show an error message to a user whose recent job application email bounced? Easy. Want to email sad gifs to your marketing team whenever emails are marked as spam by users? No problem.

Special Delivery makes email tracking and delivery error remediation as easy as sending an email. We’re very excited to open-source the Special Delivery project. It’s already made our lives more pleasant, and we hope it will do the same for you. Happy Mailgunning.

https://github.com/vigetlabs/special-delivery

Author: "Lawson Kurtz" Tags: "Extend"
Date: Wednesday, 15 Jan 2014 17:19

History

Here in the Viget Boulder office, there are anywhere from 10-16 people in and out on a given day. That, plus the fact that we only have one available parking spot, leads to a few small problems: who gets the spot? Is anyone in it at the moment? When will it be free if someone is in it? While parking spaces are only a few blocks from the office, we took these problems as an opportunity to play around with some new technology.

Step #1 - Use Existing Technology

We all make use of Google Calendar to make our days easier and more organized, so creating a “Room” that represents the parking spot answers the question “Who gets the spot?” most of the time. We claim no bragging rights on that solution; pretty run-of-the-mill stuff.

Step #2 - Innovate A Bit

If you’re on the go, Google Calendar might not always be available, and it’s not uncommon for the “Room” to be inaccurate about its occupied status. We decided that it would be for great justice if a computer could determine by itself whether the spot was taken. Image processing seemed like the most plausible solution. Snap a picture, run it through … something, and spit out “TAKEN” or “AVAILABLE”.

So Mike Ackerman and I set out to master the basics of image processing, knowing that OpenCV, an open-source image processing library, would most likely be the tool we’d lean on. Unfortunately, the online community around this type of stuff is less prominent than the web community we’re so familiar with. Eventually we stumbled across SimpleCV, an open-source Python project that makes OpenCV a good bit more accessible.

We rigged up our Raspberry Pi to a webcam and fashioned it to the window overlooking the parking spot. Pretty quickly we were able to capture images of our parking space, so the next challenge was to figure out that … something to process the image.

rigging_up

Step #3 Pull Out The Big Guns

Canny Edge Detection is the brains behind the operation at the moment. You supply an image and it returns a new image that’s black everywhere with white lines on all the edges it detected. You can pass in different parameters that mean things we don’t quite understand. With enough fiddling we were able to drown out the noise of the image to a satisfactory level, and still highlight the object we were concerned with identifying.

comparison

Since an empty spot produces a nice big area void of white lines, we added some cropping and masking before running the edge detection. That way we could count only the white pixels in the area where the car would be, set a threshold, and return “TAKEN” whenever the white pixel count exceeded it. It’s not 100% accurate, but it works quite well. If the spot is empty, we typically see a white pixel count from 0 to 400, while the presence of a car cranks that up into the 5,000-12,000 range. Performance at night and when it's snowing leaves room for improvement, but you can see the code for yourself here.
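
In SimpleCV terms, the whole detection pass boils down to a few lines (a rough sketch; the crop coordinates and threshold are illustrative):

from SimpleCV import Camera

THRESHOLD = 2500  # somewhere between empty (~0-400) and taken (5,000+)

cam = Camera()
img = cam.getImage().crop(180, 120, 320, 240)  # crop to just the parking spot
edges = img.edges()                            # Canny edge detection
white_pixels = (edges.getNumpy() > 128).sum()  # count the white edge pixels

status = "TAKEN" if white_pixels > THRESHOLD else "AVAILABLE"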

Step #4 Get It On The Internet

Back in our wheelhouse, we whipped up a quick little Rails app to host the images and the determined status of the parking spot. You can see it in all its (un-styled) glory here - pi-parking.herokuapp.com/. A cronjob on the Raspberry Pi then runs this flow every 10 minutes:

  • take a picture, save it
  • crop it, mask it, convert to edge-detected version, save that
  • count white pixels
  • make a POST to the heroku app with the image, edge-detected image, and status (“TAKEN” or “AVAILABLE”)

Step #5 Engage The Power Of Texting

The website is up, our Raspberry Pi is fairly good at determining the availability of the parking spot, all is sweet. But what if you’re on the go and don’t have good access to the internet, or still think flip phones are the way to true happiness? Not a problem! Thanks to the awesome service Twilio provides, you can literally send the app a text, and you’ll get a response with the parking spot status in seconds. You find out that it’s taken? Rats. Simply shoot another text over with the word “Alert” in it, and the app will text you as soon as the spot frees up (give or take a few minutes).

The Future

While side projects like this are extremely fun to work on, we can’t play around with internal office projects all the time. Regardless, I hope to make some strides in the next few months on the following fronts:

  • Better camera allowing for night vision
  • Make a Haar Classifier to recognize cars
  • Integrate the app with the Google Calendar
  • Get SimpleCV working on OSX so we don’t have to experiment/develop on a Raspberry Pi
Author: "Eli Fatsi" Tags: "Extend"
Date: Tuesday, 14 Jan 2014 09:30

In early 2013, the “Block Element Modifier” (BEM) CSS syntax emerged as a popular way to create better organization and uniformity across projects. For the most part, I love it. It’s clear, organized, and it makes sense. But one thing that kept bugging me (and my fellow FEDs) was how redundant and long some of the class attributes on elements were getting—especially when it came to modifiers.

<button class="button button--green button--rounded button--large">

Individual Modifiers: A shorter syntax

I really like the element--modifier notation visually. It makes it clear that the class is only meant to extend something. But repeating the element name each time is redundant. We could remove that redundancy and keep them visually distinct by attaching the modifiers directly to their element in our stylesheet, while keeping a leading hyphen on the class name to denote its “modifier”.

HTML

<button class="button -green -rounded -large">

SCSS

.button {
  &.-green {...}
  &.-rounded {...}
  &.-large {...}
}

This maintains the clear distinction between elements and modifiers, without the need to repeat yourself—and yes, a single leading hyphen IS a valid character for the start of a selector (double hyphens are not). If multiple selectors make you uneasy, you’re probably having IE 6 flashbacks. Don’t worry. She can’t hurt you anymore.

Another concern may arise over the added specificity. I haven’t found it to be an issue, but I can imagine a scenario where you have a set of global overrides you want to be able to add to any element. In that case, you could do something like this:

SCSS

.button {
  &.-hoverable {
    &:hover {
      opacity: 0.75;
    }
  }
}

.overrides {
  &.-disabled {
    opacity: 0.25;
  }
}

HTML

<button class="button -hoverable overrides -disabled">Disabled</button>

The example is contrived, but you get the point. If you hover over the button, the opacity does not change, because the .button.-hoverable selector has been trumped by the later-defined and equally specific .overrides.-disabled selector.

Saved Variations: Extending with SASS

The flexibility of using modifiers in our markup is great, but if I notice a commonly recurring combination, I prefer to combine them in my stylesheet instead. SASS @extend lets us do this.

SCSS

.button--save {
  @extend %button;
  @extend %button--large;
  @extend %button--rounded;
  @extend %button--green;
}

HTML

<button class="button--save">Save</button>

So clean! You’ll notice a couple of things here. 1) I’m using the SASS % placeholder selector, and 2) I’m still using the normal BEM element--modifier syntax.

First, I create all of my styles using placeholder selectors, so I can @extend them into other classes later.

%button {
  background: #45beff;
  border: none;
  padding: 1em 2em;
  font-size: 16px;
  
  &:hover {
    opacity: 0.75;
  }
}

%button--green {
  background: #3efa95;
}

%button--red {
  background: #ff3a6a;
}

%button--large {
  font-size: 20px;
}

%button--rounded {
  border-radius: 10px;
}

Now I can assemble my element styles, expose any modifiers I plan on using in the markup, and create reusable variations that extend from various combinations of modifiers.

.button {
    @extend %button;

    &.-green {
        @extend %button--green;
    }
  
    &.-large {
        @extend %button--large;
    }
}

.button--delete {
  @extend %button;
  @extend %button--large;
  @extend %button--rounded;
  @extend %button--red;
}

See it on CodePen.

BEM to BEVM?

When all’s said and done, the BEM block__element--modifier pattern has morphed into something more like block__element--variation -modifier, plus internal SASS %modifier selectors.

// Internal Modifier
%button--red {
  background: #ff3a6a;
}

// Element
.button {
  // Modifier
  &.-red {
    @extend %button--red;
  }
}

// Variation
.button--delete {
  @extend %button;
  @extend %button--red;
} 

I’m enjoying it so far, but what do you think? Useful? Weird? Discuss!

Author: "Dan Tello" Tags: "Extend"