

Date: Monday, 25 Aug 2014 15:44

Have you heard of Docker? You probably have—everybody’s talking about it. It’s the new hotness. Even my dad’s like, “what’s Docker? I saw someone twitter about it on the Facebook. You should call your mom.”

Docker is a program that makes running and managing containers super easy. It has the potential to change all aspects of server-side applications, from development and testing to deployment and scaling. It’s pretty cool.


Recently, I’ve been working through The Docker Book. It’s a top notch book and I highly recommend it, but I’ve had some problems running the examples on OS X. After a certain point, the book assumes you’re using Linux and skips some of the extra configuration required to make the examples work on OS X. This isn’t the book’s fault; rather, it speaks to underlying issues with how Docker works on OS X.

This post is a walkthrough of the issues you’ll face running Docker on OS X and the workarounds to deal with them. It’s not meant to be a tutorial on Docker itself, but I encourage you to follow along and type in all the commands. You’ll get a better understanding of how Docker works in general and on OS X specifically. Plus, if you decide to dig deeper into Docker on your Mac, you’ll be saved hours of troubleshooting. Don’t say I never gave you nothing.

First, let’s talk about how Docker works and why running it on OS X no work so good.

How Docker Works

Docker is a client-server application. The Docker server is a daemon that does all the heavy lifting: building and downloading images, starting and stopping containers, and the like. It exposes a REST API for remote management.

The Docker client is a command line program that communicates with the Docker server using the REST API. You will interact with Docker by using the client to send commands to the server.
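For example, once a Docker host is listening over TCP (we’ll end up with one at tcp://192.168.59.103:2375 later in this post), you can hit the API directly with curl. This is just a sketch using standard remote API endpoints; the Docker client does essentially the same thing under the hood:

> curl http://192.168.59.103:2375/version
> curl http://192.168.59.103:2375/containers/json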

The machine running the Docker server is called the Docker host. The host can be any machine—your laptop, a server in the Cloud™, etc—but, because Docker uses features only available to Linux, that machine must be running Linux (more specifically, the Linux kernel).

Docker on Linux

Suppose we want to run containers directly on our Linux laptop. Here’s how it looks:

Docking on Linux

The laptop is running both the client and the server, thus making it the Docker host. Easy.

Docker on OS X

Here’s the thing about OS X: it’s not Linux. It doesn’t have the kernel features required to run Docker containers natively. We still need to have Linux running somewhere.

Enter boot2docker. boot2docker is a “lightweight Linux distribution made specifically to run Docker containers.” Spoiler alert: you’re going to run it in a VM on your Mac.

Here’s a diagram of how we’ll use boot2docker:

Docking on OS X

We’ll run the Docker client natively on OS X, but the Docker server will run inside our boot2docker VM. This also means that boot2docker, not OS X, is the Docker host.

Make sense? Let’s install dat software.

Installation

Step 1: Install VirtualBox

Go here and do it. You don’t need my help with that.

Step 2: Install Docker and boot2docker

You have two choices: the official package from the Docker site or Homebrew. I prefer Homebrew because I like to manage my environment from the command line. The choice is yours.

> brew update
> brew install docker
> brew install boot2docker

Step 3: Initialize and start boot2docker

First, we need to initialize boot2docker (we only have to do this once):

> boot2docker init
2014/08/21 13:49:33 Downloading boot2docker ISO image...
    [ ... ]
2014/08/21 13:49:50 Done. Type `boot2docker up` to start the VM.

Next, we can start up the VM. Do like it says:

> boot2docker up
2014/08/21 13:51:29 Waiting for VM to be started...
.......
2014/08/21 13:51:50 Started.
2014/08/21 13:51:51   Trying to get IP one more time
2014/08/21 13:51:51 To connect the Docker client to the Docker daemon, please set:
2014/08/21 13:51:51     export DOCKER_HOST=tcp://192.168.59.103:2375

Step 4: Set the DOCKER_HOST environment variable

The Docker client assumes the Docker host is the current machine. We need to tell it to use our boot2docker VM by setting the DOCKER_HOST environment variable:

> export DOCKER_HOST=tcp://192.168.59.103:2375

Your VM might have a different IP address—use whatever boot2docker up told you to use. You probably want to add that environment variable to your shell config.

Step 5: Profit

Let’s test it out:

> docker info
Containers: 0
Images: 0
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Dirs: 0
Execution Driver: native-0.2
Kernel Version: 3.15.3-tinycore64
Debug mode (server): true
Debug mode (client): false
Fds: 10
Goroutines: 10
EventsListeners: 0
Init Path: /usr/local/bin/docker
Sockets: [unix:///var/run/docker.sock tcp://0.0.0.0:2375]

Great success. To recap: we’ve set up a VirtualBox VM running boot2docker. The VM runs the Docker server, and we’re communicating with it using the Docker client on OS X.

Bueno. Let’s do some containers.

Common Problems

We have a “working” Docker installation. Let’s see where it falls apart and how we can fix it.

Problem #1: Port Forwarding

The Problem: Docker forwards ports from the container to the host, which is boot2docker, not OS X.

Let’s start a container running nginx:

> docker run -d -P --name web nginx
Unable to find image 'nginx' locally
Pulling repository nginx
    [ ... ]
0092c03e1eba5da5ccf9f858cf825af307aa24431978e75c1df431e22e03b4c3

This command starts a new container as a daemon (-d), automatically forwards the ports specified in the image (-P), gives it the name ‘web’ (--name web), and uses the nginx image. Our new container has the unique identifier 0092c03e1eba....

Verify the container is running:

> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                   NAMES
0092c03e1eba        nginx:latest        nginx               44 seconds ago      Up 41 seconds       0.0.0.0:49153->80/tcp   web

Under the PORTS heading, we can see our container exposes port 80, and Docker has forwarded this port from the container to a random port, 49153, on the host.

Let’s curl our new site:

> curl localhost:49153
curl: (7) Failed connect to localhost:49153; Connection refused

It didn’t work. Why?

Remember, Docker is mapping port 80 to port 49153 on the Docker host. If we were on Linux, our Docker host would be localhost, but we aren’t, so it’s not. It’s our VM.

The Solution: Use the VM’s IP address.

boot2docker comes with a command to get the IP address of the VM:

> boot2docker ip

The VM’s Host only interface IP address is: 192.168.59.103

Let’s plug that into our curl command:

> curl $(boot2docker ip):49153

The VM’s Host only interface IP address is:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
    [ ... ]

Success! Sort of. We got the web page, but we got The VM’s Host only interface IP address is:, too. What’s the deal with that nonsense?

Turns out, boot2docker ip outputs the IP address to standard output and The VM's Host only interface IP address is: to standard error. The $(boot2docker ip) subcommand captures standard output but not standard error, which still goes to the terminal. Scumbag boot2docker.

This is annoying. I am annoyed. Here’s a bash function to fix it:

docker-ip() {
  boot2docker ip 2> /dev/null
}

Stick that in your shell config, then use it like so:

> curl $(docker-ip):49153
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
    [ ... ]

Groovy. This gives us a reference for the IP address in the terminal, but it would be nice to have something similar for other apps, like the browser. Let’s add a dockerhost entry to the /etc/hosts file:

> echo $(docker-ip) dockerhost | sudo tee -a /etc/hosts

Now we can use dockerhost everywhere, including the browser.

Great success. Make sure to stop and remove the container before continuing:

> docker stop web
> docker rm web

VirtualBox assigns IP addresses using DHCP, meaning the IP address could change. If you’re only using one VM, it should always get the same IP, but if you’re VMing on the reg, it could change. Fair warning.
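If the address does change, the dockerhost entry in /etc/hosts goes stale. Here’s a sketch of a helper (hypothetical; it builds on the docker-ip function above and uses OS X’s BSD sed) that you can rerun to refresh it:

docker-hosts-refresh() {
  # drop the old entry, then append the current VM IP
  sudo sed -i '' '/ dockerhost$/d' /etc/hosts
  echo "$(docker-ip) dockerhost" | sudo tee -a /etc/hosts > /dev/null
}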

Bonus Alternate Solution: Forward all of Docker’s ports from the VM to localhost.

If you really want to access your Docker containers via localhost, you can forward all of the ports in Docker’s port range from the VM to localhost. Here’s a bash script, taken from here, to do that:

#!/bin/bash

for i in {49000..49900}; do
  VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port$i,tcp,,$i,,$i";
  VBoxManage modifyvm "boot2docker-vm" --natpf1 "udp-port$i,udp,,$i,,$i";
done

By doing this, Docker will forward port 80 to, say, port 49153 on the VM, and VirtualBox will forward port 49153 from the VM to localhost. Soon, inception. You should really just use the VM’s IP address mmkay.

Problem #2: Mounting Volumes

The Problem: Docker mounts volumes from the boot2docker VM, not from OS X.

Docker supports volumes: you can mount a directory from the host into your container. Volumes are one way to give your container access to resources in the outside world. For example, we could start an nginx container that serves files from the host using a volume. Let’s try it out.

First, let’s create a new directory and add an index.html:

> cd /Users/Chris
> mkdir web
> cd web
> echo 'yay!' > index.html

(Make sure to replace /Users/Chris with your own path).

Next, we’ll start another nginx container, this time mounting our new directory inside the container at nginx’s web root:

> docker run -d -P -v /Users/Chris/web:/usr/local/nginx/html --name web nginx
485386b95ee49556b2cf669ea785dffff2ef3eb7f94d93982926579414eec278

We need the port number for port 80 on our container:

> docker port web 80
0.0.0.0:49154

Let’s try to curl our new page:

> curl dockerhost:49154
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.7.1</center>
</body>
</html>

Well, that didn’t work. The problem, again, is our VM. Docker is trying to mount /Users/Chris/web from the host into our container, but the host is boot2docker, not OS X. boot2docker doesn’t know anything about files on OS X.

The Solution: Mount OS X’s /Users directory into the VM.

By mounting /Users into our VM, boot2docker gains a /Users volume that points to the same directory on OS X. Referencing /Users/Chris/web inside boot2docker now points directly to /Users/Chris/web on OS X, and we can mount any path starting with /Users into our container. Pretty neat.

boot2docker doesn’t support the VirtualBox Guest Additions that allow us to make this work. Fortunately, a very smart person has solved this problem for us with a custom build of boot2docker containing the Guest Additions and the configuration to make this all work. We just have to install it.

First, let’s remove the web container and shut down our VM:

> docker stop web
> docker rm web
> boot2docker down

Next, we’ll download the custom build:

> curl http://static.dockerfiles.io/boot2docker-v1.2.0-virtualbox-guest-additions-v4.3.14.iso > ~/.boot2docker/boot2docker.iso

Finally, we share the /Users directory with our VM and start it up again:

> VBoxManage sharedfolder add boot2docker-vm -name home -hostpath /Users
> boot2docker up

Replacing the boot2docker image won’t erase any of the data in your VM, so don’t worry about losing any of your containers. Good guy boot2docker.

Let’s try this again:

> docker run -d -P -v /Users/Chris/web:/usr/local/nginx/html --name web nginx
0d208064a1ac3c475415c247ea90772d5c60985841e809ec372eba14a4beea3a
> docker port web 80
0.0.0.0:49153
> curl dockerhost:49153
yay!

Great success! Let’s verify that we’re using a volume by creating a new file on OS X and seeing if nginx serves it up:

> echo 'hooray!' > hooray.html
> curl dockerhost:49153/hooray.html
hooray!

Sweet damn. Make sure to stop and remove the container:

> docker stop web
> docker rm web

If you update index.html and curl it, you won’t see your changes. This is because nginx ships with sendfile turned on, which doesn’t play well with VirtualBox. The solution is simple—turn off sendfile in the nginx config file—but outside the scope of this post.

Problem #3: Getting Inside a Container

The Problem: How do I get in there?

So you’ve got your shiny new container running. The ports are forwarding and the volumes are ... voluming. Everything’s cool, until you realize something’s totally uncool. You’d really like to start a shell in there and poke around.

The Solution: Linux Magic

Enter nsenter. nsenter is a program that allows you to run commands inside a kernel namespace. Since a container is just a process running inside its own kernel namespace, this is exactly what we need to start a shell inside our container. Let’s make it so.

This part deals with shells running in three different places. Très confusing. I’ll use a different prompt to distinguish each:

  • > for OS X
  • $ for the boot2docker VM
  • % for inside a Docker container

First, let’s SSH into the boot2docker VM:

> boot2docker ssh

Next, install nsenter:

$ docker run --rm -v /var/lib/boot2docker:/target jpetazzo/nsenter

(How does that install it? jpetazzo/nsenter is a Docker image configured to build nsenter from source. When we start a container from this image, it builds nsenter and installs it to /target, which we’ve set to be a volume pointing to /var/lib/boot2docker in our VM.

In other words, we start a prepackaged build environment for nsenter, which compiles and installs it to our VM using a volume. How awesome is that? Seriously, how awesome? Answer me!)

Finally, we need to add /var/lib/boot2docker to the docker user’s PATH inside the VM:

$ echo 'export PATH=/var/lib/boot2docker:$PATH' >> ~/.profile
$ source ~/.profile

We should now be able to use the installed binary:

$ which nsenter
/var/lib/boot2docker/nsenter

Let’s start our nginx container again and see how it works (remember, we’re still SSH’d into our VM):

$ docker run -d -P --name web nginx
f4c1b9530fefaf2ac4fedac15fd56aa4e26a1a01fe418bbf25b2a4509a32957f

Time to get inside that thing. nsenter needs the pid of the running container. Let’s get it:

$ PID=$(docker inspect --format '{{ .State.Pid }}' web)

The moment of truth:

$ sudo nsenter -m -u -n -i -p -t $PID
% hostname
f4c1b9530fef

Great success! Let’s confirm we’re inside our container by listing the running processes (we have to install ps first):

% apt-get update
% apt-get install -y procps
% ps -A
  PID TTY          TIME CMD
    1 ?        00:00:00 nginx
    8 ?        00:00:00 nginx
   29 ?        00:00:00 bash
  237 ?        00:00:00 ps
% exit

We can see two nginx processes, our shell, and ps. How cool is that?

Getting the pid and feeding it to nsenter is kind of a pain. jpetazzo/nsenter includes docker-enter, a shell script that does it for you:

$ sudo docker-enter web
% hostname
f4c1b9530fef
% exit

The default command is sh, but we can run any command we want by passing it as arguments:

$ sudo docker-enter web ps -A
  PID TTY          TIME CMD
    1 ?        00:00:00 nginx
    8 ?        00:00:00 nginx
  245 ?        00:00:00 ps

This is totally awesome. It would be more totally awesomer if we could do it directly from OS X. jpetazzo’s got us covered there, too (that guy thinks of everything), with a bash script we can install on OS X. Below is the same script, but with a minor change to default to bash, because that’s how I roll.

Just stick this bro anywhere in your OS X PATH (and chmod +x it, natch) and you’re all set:

#!/bin/bash
set -e

# Check for nsenter. If not found, install it
boot2docker ssh '[ -f /var/lib/boot2docker/nsenter ] || docker run --rm -v /var/lib/boot2docker/:/target jpetazzo/nsenter'

# Use bash if no command is specified
args=("$@")
if [[ $# = 1 ]]; then
  args+=(/bin/bash)
fi

boot2docker ssh -t sudo /var/lib/boot2docker/docker-enter "${args[@]}"

Let’s test it out:

> docker-enter web
% hostname
f4c1b9530fef

Yes. YES. Cue guitar solo.

Don’t forget to stop and remove your container (nag nag nag):

> docker stop web
> docker rm web

The End

You now have a Docker environment running on OS X that does all the things you’d expect. You’ve also hopefully learned a little about how Docker works and how to use it. We’ve had some laughs, and we’ve learned a lot, too. I’m glad we’re friends.

If you’re ready to learn more about Docker, check out The Docker Book. I can’t recommend it enough. Throw some money at that guy.

The Future Soon

Docker might be the new kid on the block, but we’re already thinking about ways to add it to our workflow. Stay tuned for great justice.

Was this post helpful? How are you using Docker? Let me know down there in the comments box. Have a great day. Call your mom.

Author: "Chris Jones" Tags: "Extend"
Date: Wednesday, 20 Aug 2014 15:36

Despite some exciting advances in the field, like Node, Redis, and Go, a well-structured relational database fronted by a Rails or Sinatra (or Django, etc.) app is still one of the most effective toolsets for building things for the web. In the coming weeks, I’ll be publishing a series of posts about how to be sure that you’re taking advantage of all your RDBMS has to offer.

If you only require a few attributes from a table, rather than instantiating a collection of models and then running a .map over them to get the data you need, it’s much more efficient to use .pluck to pull back only the attributes you need as an array. The benefits are twofold: better SQL performance and less time and memory spent in Rubyland.

To illustrate, let’s use an app I’ve been working on that takes Harvest data and generates reports. As a baseline, here is the execution time and memory usage of rails runner with a blank instruction:

$ time rails runner ""
real  0m2.053s
user  0m1.666s
sys   0m0.379s

$ memory_profiler.sh rails runner ""
Peak: 109240

In other words, it takes about two seconds and 100MB to boot up the app. We calculate memory usage with a modified version of this Unix script.

Now, consider a TimeEntry model in our time tracking application (of which there are 314,420 in my local database). Let’s say we need a list of the dates of every single time entry in the system. A naïve approach would look something like this:

dates = TimeEntry.all.map { |entry| entry.logged_on }

It works, but seems a little slow:

$ time rails runner "TimeEntry.all.map { |entry| entry.logged_on }"
real  0m14.461s
user  0m12.824s
sys   0m0.994s

Almost 14.5 seconds. Not exactly webscale. And how about RAM usage?

$ memory_profiler.sh rails runner "TimeEntry.all.map { |entry| entry.logged_on }"
Peak: 1252180

About 1.25 gigabytes of RAM. Now, what if we use .pluck instead?

dates = TimeEntry.pluck(:logged_on)

In terms of time, we see major improvements:

$ time rails runner "TimeEntry.pluck(:logged_on)"
real  0m4.123s
user  0m3.418s
sys   0m0.529s

So from roughly 15 seconds to about four. Similarly, for memory usage:

$ memory_profiler.sh bundle exec rails runner "TimeEntry.pluck(:logged_on)"
Peak: 384636

From 1.25GB to less than 400MB. When we subtract the overhead we calculated earlier, we’re going from roughly 12.5 seconds of execution time to about two, and 1.15GB of RAM to under 300MB.

Using SQL Fragments

As you might imagine, there’s a lot of duplication among the dates on which time entries are logged. What if we only want unique values? We’d update our naïve approach to look like this:

dates = TimeEntry.all.map { |entry| entry.logged_on }.uniq

When we profile this code, we see that it performs slightly worse than the non-unique version:

$ time rails runner "TimeEntry.all.map { |entry| entry.logged_on }.uniq"
real  0m15.337s
user  0m13.621s
sys   0m1.021s

$ memory_profiler.sh rails runner "TimeEntry.all.map { |entry| entry.logged_on }.uniq"
Peak: 1278784

Instead, let’s take advantage of .pluck’s ability to take a SQL fragment rather than a symbolized column name:

dates = TimeEntry.pluck("DISTINCT logged_on")

Profiling this code yields surprising results:

$ time rails runner "TimeEntry.pluck('DISTINCT logged_on')"
real  0m2.133s
user  0m1.678s
sys   0m0.369s

$ memory_profiler.sh rails runner "TimeEntry.pluck('DISTINCT logged_on')"
Peak: 107984

Both running time and memory usage are virtually identical to executing the runner with a blank command, or, in other words, the result is calculated at an incredibly low cost.

Using .pluck Across Tables

Requirements have changed, and now, instead of an array of timestamps, we need an array of two-element arrays consisting of the timestamp and the employee’s last name, stored in the “employees” table. Our naïve approach then becomes:

dates = TimeEntry.all.map { |entry| [entry.logged_on, entry.employee.last_name] }

Go grab a cup of coffee, because this is going to take a while.

$ time rails runner "TimeEntry.all.map { |entry| [entry.logged_on, entry.employee.last_name] }"
real  7m29.245s
user  6m52.136s
sys   0m15.601s

memory_profiler.sh rails runner "TimeEntry.all.map { |entry| [entry.logged_on, entry.employee.last_name] }"
Peak: 3052592

Yes, you’re reading that correctly: 7.5 minutes and 3 gigs of RAM. We can improve performance somewhat by taking advantage of ActiveRecord’s eager loading capabilities.

dates = TimeEntry.includes(:employee).map { |entry| [entry.logged_on, entry.employee.last_name] }

Benchmarking this code, we see significant performance gains, since we’re going from over 300,000 SQL queries to two.

$ time rails runner "TimeEntry.includes(:employee).map { |entry| [entry.logged_on, entry.employee.last_name] }"
real  0m21.270s
user  0m19.396s
sys   0m1.174s

$ memory_profiler.sh rails runner "TimeEntry.includes(:employee).map { |entry| [entry.logged_on, entry.employee.last_name] }"
Peak: 1606204

Faster (from 7.5 minutes to 21 seconds), but certainly not fast enough. Finally, with .pluck:

dates = TimeEntry.includes(:employee).pluck(:logged_on, :last_name)

Benchmarks:

$ time rails runner "TimeEntry.includes(:employee).pluck(:logged_on, :last_name)"
real  0m4.180s
user  0m3.414s
sys   0m0.543s

$ memory_profiler.sh rails runner "TimeEntry.includes(:employee).pluck(:logged_on, :last_name)"
Peak: 407912

A hair over 4 seconds execution time and 400MB RAM – hardly any more expensive than without employee names.

Conclusion

  • Prefer .pluck to instantiating a collection of ActiveRecord objects and then using .map to build an array of attributes.

  • .pluck can do more than simply pull back attributes on a single table: it can run SQL functions, pull attributes from joined tables, and tack on to any scope (see the sketch after this list).

  • Whenever possible, let the database do the heavy lifting.
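To make that last point concrete, here is a quick sketch reusing the models from this post (the billable scope is made up; everything else mirrors the queries above):

# Run SQL functions:
TimeEntry.pluck("MIN(logged_on)", "MAX(logged_on)")

# Tack onto a scope and pull attributes from a joined table:
TimeEntry.billable.includes(:employee).pluck(:logged_on, :last_name)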

Author: "David Eisinger" Tags: "Extend"
Date: Friday, 15 Aug 2014 14:29

When I first found out that I managed to land a Rails internship at Viget, I was incredibly excited: not only was I getting to intern at a great company, I’d even get the chance to learn all I could ever need to know about Ruby on Rails in the gap between my exams and the internship. By the time that June 9 came around, I felt like I had a pretty solid foundation; by June 10, I wasn’t so sure. I wasn’t truly struggling, but I didn’t really feel like I was able to do any of my assignments better than I would have a month prior.

For the first several weeks, I wondered how I could have better prepared. At first, I thought that it was simply a matter of time—had I dedicated more hours per day, I surely would have been better off, right? Now that I’ve had more time to learn and reflect, however, I’ve realized that the issue was less about the quantity of time I spent but rather the quality of that time. Since I was completely new to Ruby, I just didn’t know what to prioritize.

Now that I have a little experience under my belt, I have a much better grasp on what my prep work should have entailed. To help any future interns here at Viget—or anyone else who wants to start working with Rails—avoid using their time poorly, I’ve created a small guide to help other Ruby newbies gain the knowledge and experience they’ll need to be ready for an entry-level position in Rails.

Obligatory disclaimer: since learning anything as significant as Rails is a non-trivial undertaking, you might not end up liking the resources that I’ve outlined in this post. However, I do think that the technologies that I’ll cover are pretty important. As a result, to supplement this blog post, I’ve compiled a list of resources covering all of the technologies that I’ve needed to know in order to use Rails effectively this summer. If you’re not a fan of the individual resources that I cover here, feel free to substitute others, either from the Gist or from your own findings. With that said, let’s get started.

Programming

Surprise, surprise: in order to develop with Rails, you’ll need to know how to code. In particular, you’ll need to know how to use Ruby and have some experience in object-oriented programming (OOP).

If you’re completely new to programming, you’re definitely going to want to start with the basics. While I was lucky enough to take a course introducing me to OOP—which, for me, was the best option—I really like this interactive Ruby course from Codecademy. It starts off very simply, but it advances in complexity as the lessons go on. From here, you’ll probably want to do a few more tutorials to make sure that you get as much practice in as possible.

For those who already have programming experience prior to learning Ruby, RubyMonk is the way to go. The beginning lessons are a little less basic than most other interactive courses, and each book is organized well enough to allow you to skip between different topics very easily. This means you don’t have to trudge through programming basics to get to what you need. In addition, there are also several books for non-beginners, which allows you to touch on some more advanced topics if you have the time and inclination.

Finally, if you’re the type of person that prefers books over tutorials, then check out Mr. Neighborly’s Humble Little Ruby Book or Why’s Poignant Guide to Ruby. Each talks about Ruby as a programming language in addition to teaching you how to use it; personally, I think that’s incredibly useful (and interesting!) knowledge that can help you to write better code. Plus, since neither is interactive, you can download them to a reader or tablet and won’t have to worry about being tied to a Wifi connection.

Local Environment Setup

Since most of your Ruby work up to this point has likely been via web interfaces, it’s a great time to set up your very own, personalized development environment. For most people, this will mean installing Ruby to a machine of your choosing and figuring out how you’re going to want to write and run your code.

My personal preference is to do everything on my computer’s command line: I can easily access Vim for code writing, set up a server to check out my work in-browser or run my test suite (more on tests later). Plus, since navigating more than ten lines of code with nothing but arrow keys would have driven me crazy, this approach forced me to learn some of the more advanced features of my text editor. If you decide to use Vim, make sure to check out and install some of its Ruby plugins to make your development that much more fluid and personalized.

If Vim doesn’t suit your fancy, other solid text editors include Emacs, Sublime and Atom. Aptana, an integrated development environment (IDE), is also an option and might be the best bet for anyone that’s a fan of Eclipse. Regardless of how you decide to run and execute your code, I’d strongly recommend learning the basics of utilizing the command line quickly—it ends up being both very simple and very valuable.

Now that you’ve got a snazzy, new development environment set up, you can use it to write some basic Ruby programs to hone your skills. For ideas, check out CodeEval: it has a lot of different challenges of varying difficulties. It even includes some common interview questions, like FizzBuzz. For additional practice, you can also try out CodeWars.

Front End Technologies

Since Rails is a web framework, you’re naturally going to need an understanding of the web application ecosphere, and that means learning about the trifecta of web technologies: HTML, CSS and Javascript. Regardless of the work you plan to be doing, you should be able to utilize HTML and CSS effectively enough to create static web pages that look fairly nice before tackling Rails; luckily, the basics for both are pretty easy to grasp. In addition, there are also a lot of different resources for learning the two. Personally, I like the approach that Thoughtbot takes.

Javascript, on the other hand, is a little more complicated. As “the scripting language of the web”, JS is one of the most useful and utilized tools on the Internet, but, as far as modern programming languages go, it’s fairly different from Ruby. If your work will primarily be on the back end, you probably won’t need incredibly strong Javascript skills, but you will need to know why and when it’s used. If you’re also doing significant front end work, your Javascript knowledge will need to be considerably more comprehensive from the get-go. As with HTML and CSS, it’s fortunately not difficult to find tutorials and guides online. Again, Thoughtbot has a pretty solid list.

Relational Databases

If you’re going to work with persistent data—hint: you will—you’re going to need a basic understanding of relational databases. Luckily, a relational database is exactly as it sounds: a database that can store relations. For example, the clone of Hacker News that I created for my internship has the notion of both a User and an Article. To keep track of who posted what, my app is implemented such that a User has_many Articles; in other words, there’s a relationship between Users and Articles. Pretty basic, right? As with most any important technology, there’s more to it than that, so it’s worth checking out the Resources Gist or doing some Googling to learn more.

In addition to the notion of designing a relational database, there’s also the concern of being able to get data out of said database. Traditionally, a developer would write a query to the database in SQL to access data; luckily for us, Rails can handle a lot of the querying you’ll need by itself. That said, you may very well run into a situation where using Rails’ built-in queries won’t make the most sense. In those cases, a basic understanding of SQL is incredibly useful. Even if you happen to be lucky enough to avoid needing to write your own SQL queries like me—but unlike Nathanael—understanding how to use SQL will make your more complicated Rails queries much easier to write, because Rails and SQL share many of the verbs used for querying, such as join and merge. As far as learning SQL goes, I’m a big fan of Learn SQL the Hard Way—just don’t be intimidated by the name.
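For instance, here is a hedged sketch with made-up Article and User models; the Rails query reads almost like the SQL it generates, so knowing one makes the other easier to follow:

# Articles posted by admin users. The Rails vocabulary (joins, where)
# maps straight onto SQL's JOIN and WHERE.
Article.joins(:user).where(users: { admin: true })
# roughly: SELECT articles.* FROM articles
#          INNER JOIN users ON users.id = articles.user_id
#          WHERE users.admin = true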

Version Control

You’re now just one step away from getting to Rails itself: learning how to use version control. While there are a few different options available, Git is easily the most popular. As with Ruby, you’ll want to install Git to your preferred machine. You can then run through Git Immersion to familiarize yourself with running Git via the command line.

Next, you’ll probably want to learn how to use Github, which offers you the ability to store all of your Git repositories online and even view and participate in open source projects. In addition, it’s a great way to show off your code to potential employers. You can also throw single file programs, like CodeEval solutions you’ve written, up in Gists.

Rails (and Testing!)

Now that you’re a bonafide pro with most of its companion technologies, it’s time to dive into Rails in earnest. The most commonly recommended Rails tutorial—aptly named the Ruby on Rails Tutorial by Michael Hartl—also happens to be, in my opinion, the best. While it’s not perfect, it does a great job of teaching Rails while touching on Git, Heroku and testing. I’d highly recommend doing the additional problems that Hartl leaves at the end of each chapter: they’re not terribly complicated, but they do ensure that you’re developing in Rails yourself rather than just following what the book lays out for you.

While it is covered in the Rails Tutorial, I feel like I really need to stress the importance of learning how to test your code properly. As part of my internship, I primarily used Rspec, Capybara and FactoryGirl for testing, though several other options do exist.

Generally, I’d write a test for any features of my applications—like being able to post an article on my Hacker News clone or submit a pick on Winsome. For these feature tests, I’d do my best to ensure that my tests were emulating a user’s experience as closely as possible: rather than ensuring that an article was added to the database, for example, I’d check to make sure it showed up on the home page. The second form of test I’d usually write would be a model test. These would just check that each of my objects—rather than my features—were performing as expected. For example, I wrote a test for my Hacker News clone to ensure that you couldn’t create a new User with the email address of an existing User. To be safe, I’d also write tests for any significant amount of code that wasn’t covered in one of the previous two cases.

Conclusion

Now that you have an idea of a curriculum you might want to follow to learn Rails, you may be thinking: this looks like a lot of work. Unfortunately, there’s no guaranteed quick and easy way to learn Rails. That said, keep in mind that—for entry level positions or internships, at least—you shouldn’t need to be a complete pro. Even now that I’ve finished a really comprehensive Rails internship, there’s still plenty for me to learn about and improve upon. In particular, I’m really interested in metaprogramming Ruby, and my SQL and Javascript skills can definitely use some more work. The real key to this guide is that you should have an idea of how each of these different pieces work together to make a working web application.

I also encourage anyone that’s trying to learn Rails to keep track of what did and didn’t work for you. Feel free to fork the Resources Gist I mentioned earlier and create your own version that you can share with other aspiring Rails devs in the future. Last, but certainly not least: good luck!

Author: "Andy Andrea" Tags: "Extend"
Date: Monday, 11 Aug 2014 17:44

While working on a new feature, I ran into a problem where the response time for a single page was approaching 30 seconds. Each request made an API call, so I first ruled that out as the culprit by logging the response time for every external request. I then turned my attention to the Rails log.

Rails gives a decent breakdown of where it's spending time before delivering a response to the browser — you can see how much time is spent rendering views, individual partials, and interacting with the database:

Completed 200 OK in 32625.1ms (Views: 31013.9ms | ActiveRecord: 16.8ms)

In my case, this didn't give enough detail to know exactly what the problem was, but it gave me a good place to start. In order to see what needed optimization, I turned to Ruby's Benchmark class to log the processing time for blocks of code. I briefly looked at Rails' instrumentation facilities, but it wasn't clear how to use it to get the result I wanted.
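On its own, Benchmark looks something like this (a minimal sketch, not code from the project):

require 'benchmark'

elapsed = Benchmark.realtime do
  # the code you suspect is slow
end

Rails.logger.info("Rendering took #{elapsed.round(2)}s")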

Instead, I whipped up a quick class with a corresponding helper to log processing times for a given block of code that I've since turned into a gem. While I used this in a Rails application, it will work in any Ruby program as well. To use, include the helper in your class and wrap any code you want to benchmark in a log_execution_for block:

class Foo
  include SimpleBenchmark::Helper

  def do_a_thing
    log_execution_for('wait') { sleep(1); 'done' }
  end
end

Calling Foo#do_a_thing will create a line in your logfile with the label "wait" and the time the block of code took to execute. By default, it will use Rails.logger if available or will write to the file benchmark.log in the current directory. You can always override this by setting the value of SimpleBenchmark.logger. When moving to production, you can either delete the benchmarking code or leave it in and disable it with SimpleBenchmark.enabled = false.
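Outside of Rails, that configuration might look like this (a sketch based on the settings described above):

require 'simple_benchmark'
require 'logger'

SimpleBenchmark.enabled = true
SimpleBenchmark.logger  = Logger.new('benchmark.log')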

If you're using it inside of Rails like I was, create an initializer and optionally add SimpleBenchmark::Helper to the top of your ApplicationController:

# config/initializers/benchmarking.rb
require 'simple_benchmark'
SimpleBenchmark.enabled = Rails.env.development?

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  include SimpleBenchmark::Helper
  helper_method :log_execution_for
end

Happy benchmarking!

Author: "Patrick Reagan" Tags: "Extend"
Date: Friday, 08 Aug 2014 14:56

Great programming is worthless if you’re building the wrong thing. Testing early and often can help validate your assumptions to keep a project or company on the right track. This type of testing can often require traffic distribution strategies for siphoning users into test groups.

Nginx configured as a load balancer can accomodate complex distribution logic and provides a simple solution for most split test traffic distribution needs.

Full App Tests

In straightforward cases, when we’re testing an entirely new version of an application that runs on a distinct server, we can create a load balancer to proxy requests for the domain, passing a desired portion of requests to the test server.

Full App Test Diagram

The Nginx configuration for this load balancer could be as simple as:

http {
    upstream appServer {
        ip_hash;
        server old.app.com weight=9;
        server new.app.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://appServer;
        }
    }
}

In this example, Nginx is configured to choose between passing requests to one of two app servers. (old.app.com and new.app.com could be listed as IP addresses instead if you’re into that sort of thing.)

Session Persistence

ip_hash is used for session persistence (to avoid having visitors see two different versions of the app on subsequent requests.)

Distribution Weighting

Most test cases require that just a small fraction of requests be passed to the test version of the app. Here, a weight parameter is used to adjust how frequently requests are passed to the new version of the app that is under test. A weight of 9 has the same effect as having 9 separate entries for the old.app.com server. When the routing decision is being made, Nginx will choose one server from the (effective) 10 server entries. In this case, the new app server will be passed 10% of requests.

Partial App Tests

Split testing a partial replacement for an existing app is more complicated than a full replacement. If we simply switched requests between servers with a partial replacement, requests could be made to the new app server for portions of the existing app that do not exist in the version under test. 404 time.

Subdomain Partial Redirection Strategy

In these complex cases, it may be best to make your test version available via a subdomain, preserving the naked domain for accessing necessary portions of the old application.

Partial App Test Diagram

We recently tested a replacement for a homepage and several key pages for an existing application. Our Nginx configuration looked like the one below. (Or here if you would prefer an uncommented version.)

# Define a cluster to which you can proxy requests. In this case, we will be
# proxying requests to just a single server: the original app server.
# See http://nginx.org/en/docs/http/ngx_http_upstream_module.html

upstream app.com {
  server old.app.com;
}

# Assign to a variable named $upstream_variant a pseudo-randomly chosen value,
# with "test" being assigned 10% of the time, and "original" assigned the
# remaining 90% of the time. (This is group determination for requests not
# already containing a group cookie.)
# See http://nginx.org/en/docs/http/ngx_http_split_clients_module.html

split_clients "app${remote_addr}${http_user_agent}${date_gmt}" $upstream_variant {
  10% "test";
  * "original";
}

# Assign to a variable named $upstream_group a value mapped from the value
# present in the group cookie. If the cookie's value is present, preserve
# the existing value. If it is not, use the value of $upstream_variant.
# See http://nginx.org/en/docs/http/ngx_http_map_module.html
# Note: the value of cookies is available via the $cookie_ variables
# (i.e. $cookie_my_cookie_name will return the value of the cookie named 'my_cookie_name').

map $cookie_split_test_version $upstream_group {
  default    $upstream_variant;
  "test"     "test";
  "original" "original";
}

# Assign to a variable named $internal_request a value indicating whether or
# not the given request originates from an internal IP address. If the request
# originates from an IP within the range defined by 4.3.2.1 to 4.3.2.254, assign 1.
# Otherwise assign 0.
# See http://nginx.org/en/docs/http/ngx_http_geo_module.html

geo $internal_request {
  ranges;
  4.3.2.1-4.3.2.254 1;
  default 0;
}

server {
  listen 80 default_server;
  listen [::]:80 default_server ipv6only=on;
  server_name app.com www.app.com;

  # For requests made to the root path (in this case, the homepage):
  
  location = / {
    # Set a cookie containing the selected group as its value. Expire after
    # 6 days (the length of our test).
    
    add_header Set-Cookie "split_test_version=$upstream_group;Path=/;Max-Age=518400;";

    # Requests by default are not considered candidates for the test group.
    
    set $test_group 0;

    # If the request has been randomly selected for the test group, it is now
    # a candidate for redirection to the test site.
    
    if ($upstream_group = "test") {
      set $test_group 1;
    }

    # Regardless of the group determination, if the request originates from
    # an internal IP it is not a candidate for the test group.
    
    if ($internal_request = 1) {
      set $test_group 0;
    }

    # Redirect test group candidate requests to the test subdomain.
    # See http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#return
    
    if ($test_group = 1) {
      return 302 http://new.app.com/;
      break;
    }

    # Pass all remaining requests through to the old application.
    # See http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
    
    proxy_pass http://app.com;
  }

  # For requests to all other paths:
  
  location / {
    # Pass the request through to the old application.
    
    proxy_pass http://app.com;
  }
}

Switch Location

Partial split tests require a specific location from which to assign the test group and begin the test. In this case, the root location (homepage) is used. Requests to the root path are assigned a test group and redirected or passed through as appropriate. Requests to all other locations are passed through to the original app server.

Session Persistence (and Expiration)

In this case, we use a cookie (add_header Set-Cookie ...) to establish session persistence. When a request is assigned to either the test or original group, this group is recorded in a cookie which is accessed upon subsequent requests to ensure a consistent experience for the duration of the test.

Since our split test was set to last less than 6 days, we set a six day expiration on the cookie.

IP Filtering

In many cases, split tests require the filtering of certain parties (in this case the company owner of the application) to avoid skewing test results. In this case, we use Nginx’s geo to determine whether or not the request originates from the company’s internal network. Later on this determination is used to direct the request straight through to the original version of the app.

302 Redirection

Since most of the original application must remain accessible, instead of passing test group traffic through to the new server, we instead 302 redirect the request to the new application subdomain. This allows the user to seamlessly switch between viewing content provided by both the new and original applications.


Nginx is a terrific tool for distributing traffic for split tests. It’s stable, it’s blazingly fast, and configurations for typical use cases are prevalent online. More complex configuration can be accomplished after just a couple hours exploring the documentation. Give Nginx a shot next time you split test!

Author: "Lawson Kurtz" Tags: "Extend"
Date: Thursday, 07 Aug 2014 18:01

The Viget development and front-end development teams use GitHub pull requests to critique and improve code on all our projects. I built a tool for visualizing how the team members comment on each other's PRs, and exposing some neat facts about the interactions.

Checkoning (GitHub link) does three things:

  • Pulls down PR data for a specific team and saves it as a massive JSON file.
  • Digs through the raw data to find interesting data to visualize.
  • Exposes the data on a single page, mostly using D3 visualizations.

The results from running it on Viget teams are pretty cool:

Force-directed graph of who Dan comments on most (and vice versa)

Had a slow May, I guess.

Tommy sure likes PHP.

Even though Checkoning is just an experiment, you can clone it and run it on your own teams. Teams roughly the size of Viget’s should work fine, but other sizes will need some tweaking to produce nice visualizations. Have fun!

Author: "Doug Avery" Tags: "Extend"
Date: Tuesday, 05 Aug 2014 15:33

One of Viget's recent internal projects, SocialPiq, had some pretty heavy requirements surrounding user-driven search. The main feature of the site was to allow users to search by a number of various criteria, many of which were backed by ActiveRecord models.

Fortunately, one of Rails' strengths is its ability to associate objects and allow easy inspection and traversal of relationships. We could make a form from scratch using a combination of #text_field, #select, and #collection_select; however, we'd have to tell our controller how to interpret the search parameters and how to match and fetch results. Why not have Rails and its built-in constructs do most of that work for us?

First-Class Search Object

Instead of having to fill in all the logic ourselves, we can create an ActiveRecord model to represent a single search. We'll call this model Search. With this approach, each search is an instance of our Search model that can be passed around, respond to method calls, and be persisted in our database. We can create associations to any of the other models that we want to be included as search criteria.

For example, in SocialPiq, users needed to be able to select a Capability as well as any number of SocialSites via SiteIntegrations. Capability, SocialSite, and SiteIntegration are models, so we can set up associations for each of them. In addition, let's say we're trying to match against a Tool and we want a results method that gives us all the tools for a given search. Here's what our model might look like:

class Search < ActiveRecord::Base
  belongs_to :capability
  has_many :site_integrations
  has_many :social_sites, through: :site_integrations

  def results
    @results ||= begin
      tools         = Tool.joins(:site_integrations)
      matched_tools = scope.empty? ? tools : tools.where(scope)

      matched_tools.distinct
    end
  end

  private

  def scope
    {
      capability_id: capability_id,
      site_integrations: site_ids_scope
    }.delete_if { |key, value| value.nil? }
  end

  def site_ids_scope
    ids = social_sites.pluck(:id)
    { social_site_id: ids } if ids.any?
  end
end

Breaking Down the Model

There are two main things we're doing in our model.

  1. Defining our associations
  2. Defining a results method along with a few private helper methods to aid in finding our search results.

The purpose for our model is to look at a given Search and compare its associated records against the associated records for each Tool. For example, if a Search has the same Capability as a Tool, we want to include that Tool in our results set.

To do this, we can utilize Rails' querying methods to find matching Tools. Our scope method returns a hash based on the ids of our search's associated records, which we can simply feed into the where query method (like Tool.where(scope)). In our case, we want to show all records when a user doesn't select a value for given search criteria. To handle that, when a Search doesn't have any associated records, its scope method returns an empty hash, which we'll check for and then return all the tools instead of calling where with an empty scope.
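To make that concrete, here is roughly what a populated search produces (a sketch with made-up ids):

search = Search.create(capability_id: 3, social_site_ids: [1, 2])

# The private scope method builds:
#   { capability_id: 3, site_integrations: { social_site_id: [1, 2] } }
#
# so search.results boils down to:
#   Tool.joins(:site_integrations)
#       .where(capability_id: 3, site_integrations: { social_site_id: [1, 2] })
#       .distinct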

The Search Form

With our Search model and using the SimpleForm gem, we get a beautifully simple form:

<%= simple_form_for @search do |f| %>
  <%= f.association :capability, include_blank: 'Any' %>
  <%= f.association :social_sites, include_blank: 'Any' %>
  <%= f.button :submit, 'Search' %>
<% end %>

Super clean! What happens in our controller once we get the parameters from the search form submission though?

The Search Controller

Again, when we're following Rails conventions, everything seems to drop in really well:

class SearchesController < ApplicationController
  def new
    @search = Search.new
  end

  def create
    @search = Search.create(search_params)
    redirect_to search_path(search), notice: "#{search.results.size} results found."
  end

  def show
  end

  private

  def search_params
    params.require(:search).permit(:capability_id, social_site_ids: [])
  end

  def search
    @search ||= Search.find(params[:id])
  end
  helper_method :search
end

Once users submit our search form, they'll be taken to the show page for a search, where we can simply call search.results to get a list of matching tools. Since we're persisting searches, we could easily add edit and update actions to our controller, allowing users to fine-tune their searches without having to start from scratch.
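Those actions aren't part of the example app, but a sketch of what they might look like inside SearchesController is short:

  def edit
  end

  def update
    search.update(search_params)
    redirect_to search_path(search), notice: "#{search.results.size} results found."
  end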

A Note on ActiveRecord vs. ActiveModel Searches

You may choose to persist your searches, creating a full-fledged Rails model inheriting from ActiveRecord::Base, as I've illustrated in our example. However, if searches don't need to be persisted, check out ActiveModel, which lets a plain Ruby class include other ActiveModel modules like validations and callbacks.
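Here's a minimal sketch of that variation, assuming Rails 4's ActiveModel::Model and skipping anything that relies on persistence:

class Search
  include ActiveModel::Model

  attr_accessor :capability_id, :social_site_ids

  def results
    @results ||= begin
      # Build the same kind of scope hash as the ActiveRecord version above
      # (social_site_ids handling omitted for brevity).
      scope = { capability_id: capability_id }.delete_if { |_, value| value.nil? }
      Tool.joins(:site_integrations).where(scope).distinct
    end
  end
end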

Recap

By making Search a first-class object in our application, we're able to create a well-defined model (literally) of our search and its criteria, simplify the form, work with Rails conventions in our controller, and get persisted searches practically free. Next time you're in a situation where you need to construct custom searches across your models, consider making Search a first-class object for great justice!

Author: "Ryan Stenberg" Tags: "Extend"
Date: Tuesday, 29 Jul 2014 12:58

Custom ActiveModel::Validators are an easy way to validate individual attributes on your Rails models. All that's required is a Ruby class that inherits from ActiveModel::EachValidator and implements a validate_each method that takes three arguments: record, attribute, and value. I have written a few lately, so I pinged the rest of the amazingly talented Viget developers for some contributions. Here's what we came up with.
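Once a validator like this is loaded, Rails finds it by naming convention: validates :some_attribute, uri: true resolves to UriValidator. Here's a quick sketch of wiring up a few of the validators below (the Profile model and its attributes are made up):

class Profile < ActiveRecord::Base
  validates :website,        uri: true
  validates :email,          email: true
  validates :twitter_handle, twitter_handle: true, allow_blank: true
end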

Simple URI Validator

A "simple URI" can be either a relative path or an absolute URL. In this case, any value that could be parsed by Ruby's URI module is allowed:

class UriValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless valid_uri?(value)
      record.errors[attribute] << (options[:message] || 'is not a valid URI')
    end
  end


  private

  def valid_uri?(uri)
    URI.parse(uri)
    true

  rescue URI::InvalidURIError
    false
  end
end

Full URL Validator

A "full URL" is defined as requiring a host and scheme. Ruby provides a regular expression to match against, so that's what is used in this validator:

class FullUrlValidator < ActiveModel::EachValidator
  VALID_SCHEMES = %w(http https)
 
  def validate_each(record, attribute, value)
    unless value =~ URI::regexp(VALID_SCHEMES)
      record.errors[attribute] << (options[:message] || 'is not a valid URL')
    end
  end
end

The Ruby regular expression can be seen as too permissive. For a stricter regular expression, Brian Landau shared this Github gist.

Email Validator

My good friends Lawson Kurtz and Mike Ackerman contributed the following email address validator:

class EmailValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless value =~ /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\z/i
      record.errors[attribute] << (options[:message] || "is not a valid e-mail address")
    end
  end
end

If you'd rather validate by performing a DNS lookup, Brian Landau has you covered with this Github gist.

Secure Password Validator

Lawson provided this secure password validator (though credit goes to former Viget developer, James Cook):

class SecurePasswordValidator < ActiveModel::EachValidator
  WORDS = YAML.load_file("config/bad_passwords.yml")

  def validate_each(record, attribute, value)
    if value.in?(WORDS)
      record.errors.add(attribute, "is a common password. Choose another.")
    end
  end
end

Twitter Handle Validator

Lawson supplied this validator that checks for valid Twitter handles:

class TwitterHandleValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless value =~ /^[A-Za-z0-9_]{1,15}$/
      record.errors[attribute] << (options[:message] || "is not a valid Twitter handle")
    end
  end
end

Hex Color Validator

A validator that's useful when an attribute should be a hex color value:

class HexColorValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless value =~ /\A([a-f0-9]{3}){,2}\z/i
      record.errors[attribute] << (options[:message] || 'is not a valid hex color value')
    end
  end
end

UPDATE: The regular expression has been simplified thanks to a comment from HappyNoff.

Regular Expression Validator

A great solution for attributes that should be a regular expression:

class RegexpValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless valid_regexp?(value)
      record.errors[attribute] << (options[:message] || 'is not a valid regular expression')
    end
  end


  private

  def valid_regexp?(value)
    Regexp.compile(value)
    true

  rescue RegexpError
    false
  end
end

Bonus Round

Replace all of those default error messages above with I18n translated strings for great justice. For the Regular Expression Validator above, the validate_each method could look something like this:

def validate_each(record, attribute, value)
  unless valid_regexp?(value)
    default_message = record.errors.generate_message(attribute, :invalid_regexp)
    
    record.errors[attribute] << (options[:message] || default_message)
  end
end

Then the following could be added to config/locales/en.yml:

en:
  errors:
    messages:
      invalid_regexp: is not a valid regular expression

Now the default error messages can be driven by I18n.

Conclusion

We've found these to be very helpful at Viget. What do you think? Which validators do you find useful? Are there others worth sharing? Please share in the comments below.

Author: "Zachary Porter" Tags: "Extend"
Date: Tuesday, 22 Jul 2014 15:29

As a developer, nothing makes me more nervous than third-party dependencies and things that can fail in unpredictable ways. More often than not, these two go hand-in-hand, taking our elegant, robust applications and dragging them down to the lowest common denominator of the services they depend upon. A recent internal project called for slurping in and then reporting against data from Harvest, our time tracking service of choice and a fickle beast on its very best days.

I knew that both components (/(im|re)porting/) were prone to failure. How to handle that failure in a graceful way, so that our users see something more meaningful than a 500 page, and our developers have a fighting chance at tracking and fixing the problem? Here’s the approach we took.

Step 1: Model the processes

Rather than importing the data or generating the report with procedural code, create ActiveRecord models for them. In our case, the models are HarvestImport and Report. When a user initiates a data import or a report generation, save a new record to the database immediately, before doing any work.

Step 2: Give ’em status

These models have a status column. We default it to “queued,” since we offload most of the work to a series of Resque tasks, but you can use “pending” or somesuch if that’s more your speed. They also have an error field for reasons that will become apparent shortly.
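For illustration, a migration for the Report model might look roughly like this (the table name and the null constraint are assumed, not taken from the actual project):

class AddProcessingStatusToReports < ActiveRecord::Migration
  def change
    # Default matches the "queued" status described above
    add_column :reports, :status, :string, default: "queued", null: false
    add_column :reports, :error,  :text
  end
end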

Step 3: Define an interface

Into both of these models, we include the following module:

module ProcessingStatus
  def mark_processing
    update_attributes(status: "processing")
  end

  def mark_successful
    update_attributes(status: "success", error: nil)
  end

  def mark_failure(error)
    update_attributes(status: "failed", error: error.to_s)
  end

  def process(cleanup = nil)
    mark_processing
    yield
    mark_successful
  rescue => ex
    mark_failure(ex)
  ensure
    cleanup.try(:call)
  end
end

Lines 2–12 should be self-explanatory: methods for setting the object’s status. The mark_failure method takes an exception object, which it stores in the model’s error field, and mark_successful clears said error.

Line 14 (the process method) is where things get interesting. Calling this method immediately marks the object “processing,” and then yields to the provided block. If the block executes without error, the object is marked “success.” If any2 exception is raised, the object is marked “failed” and the exception message is stored in the error field. Either way, if a cleanup lambda is provided, we call it (courtesy of Ruby’s ensure keyword).

Step 4: Wrap it up

Now we can wrap our nasty, fail-prone reporting code in a process call for great justice.

class ReportGenerator
  attr_accessor :report

  def generate_report
    report.process -> { File.delete(file_path) } do
      # do some fail-prone work
    end
  end

  # ...
end

The benefits are almost too numerous to count: 1) no 500 pages, 2) meaningful feedback for users, and 3) super detailed diagnostic info for developers – better than something like Honeybadger, which doesn’t provide nearly the same level of context. (-> { File.delete(file_path) } is just a little bit of file cleanup that should happen regardless of outcome.)

* * *

I’ve always found it an exercise in futility to try to predict all the ways a system can fail when integrating with an external dependency. Being able to blanket rescue any exception and store it in a way that’s meaningful to users and developers has been hugely liberating and has contributed to a seriously robust platform. This technique may not be applicable in every case, but when it fits, it’s good.

Author: "David Eisinger" Tags: "Extend"
Date: Monday, 21 Jul 2014 14:56

Ever find yourself in a situation where you were given an ActiveRecord model and you wanted to figure out all the models it had a foreign key dependency (belongs_to association) with? Well, I had to do just that in some recent sprig-reap work. Given the class for a model, I needed to find all the class names for its belongs_to associations.

In order to figure this out, there were a few steps I needed to take.

Identify the Foreign Keys / belongs_to Associations

ActiveRecord::Base-inherited classes (models) provide a nice interface for inspecting associations -- the reflect_on_all_associations method. In my case, I was looking specifically for belongs_to associations. I was in luck! The method takes an optional argument for the kind of association. Here's an example:

Post.reflect_on_all_associations(:belongs_to)
# => array of ActiveRecord::Reflection::AssociationReflection objects

Once I had a list of all the belongs_to associations, I needed to then figure out what the corresponding class names were.

Identify the Class Name from the Associations

When dealing with ActiveRecord::Reflection::AssociationReflection objects, there are two places where class names can be found. For a normal association, the name is a downcased symbol of the class name; with an explicit class_name option, it's the class name as a string. Here are examples of how to grab a class name from both a normal belongs_to association and one with an explicit class_name.

Normal belongs_to:

class Post < ActiveRecord::Base
  belongs_to :user
end

association = Post.reflect_on_all_associations(:belongs_to).first
# => ActiveRecord::Reflection::AssociationReflection instance

name = association.name
# => :user

With an explicit class_name:

class Post < ActiveRecord::Base
  belongs_to :creator, class_name: 'User'
end

association = Post.reflect_on_all_associations(:belongs_to).first
# => ActiveRecord::Reflection::AssociationReflection instance

name = association.options[:class_name]
# => 'User'

Getting the actual class:

ActiveRecord associations have a built-in klass method that will return the actual class based on the appropriate class name:

Post.reflect_on_all_associations(:belongs_to).first.klass
# => User

Handle Polymorphic Associations

Polymorphism is tricky. When dealing with a polymorphic association, you have a single identifier. Calling association.name would return something like :commentable. In a polymorphic association, we're probably looking to get back multiple class names -- like Post and Status for example.

class Comment < ActiveRecord::Base
  belongs_to :commentable, polymorphic: true
end

class Post < ActiveRecord::Base
  has_many :comments, as: :commentable
end

class Status < ActiveRecord::Base
  has_many :comments, as: :commentable
end

association = Comment.reflect_on_all_associations(:belongs_to).first
# => ActiveRecord::Reflection::AssociationReflection instance

polymorphic = association.options[:polymorphic]
# => true

associations = ActiveRecord::Base.subclasses.select do |model|
  model.reflect_on_all_associations(:has_many).any? do |has_many_association|
    has_many_association.options[:as] == association.name
  end
end
# => [Post, Status]

Polymorphic?

To break down the above example, association.options[:polymorphic] gives us true if our association is polymorphic and nil if it isn't.

Models with Polymorphic has_many Associations

If we know an association is polymorphic, the next step is to check all the models (ActiveRecord::Base.subclasses; you could also use .descendants depending on how you want to handle subclasses of subclasses) that have a matching has_many polymorphic association (has_many_association.options[:as] == association.name from the example). When there's a match on a has_many association, you know that model is one of the polymorphic belongs_to associations!

Holistic Dependency Finder

As an illustration of how I handled my dependency sleuthing -- covering all the cases -- here's a class I made that takes a belongs_to association and provides a nice interface for returning all its dependencies (via its dependencies method):

class Association < Struct.new(:association)
  delegate :foreign_key, to: :association

  def klass
    association.klass unless polymorphic?
  end

  def name
    association.options[:class_name] || association.name
  end

  def polymorphic?
    !!association.options[:polymorphic]
  end

  def polymorphic_dependencies
    return [] unless polymorphic?
    @polymorphic_dependencies ||= ActiveRecord::Base.subclasses.select { |model| polymorphic_match? model }
  end

  def polymorphic_match?(model)
    model.reflect_on_all_associations(:has_many).any? do |has_many_association|
      has_many_association.options[:as] == association.name
    end
  end

  def dependencies
    polymorphic? ? polymorphic_dependencies : Array(klass)
  end

  def polymorphic_type
    association.foreign_type if polymorphic?
  end
end

Here's a full example with the Association class in action:

class Comment < ActiveRecord::Base
  belongs_to :commentable, polymorphic: true
end

class Post < ActiveRecord::Base
  belongs_to :creator, class_name: 'User'
  has_many :comments, as: :commentable
end

class Status < ActiveRecord::Base
  belongs_to :user
  has_many :comments, as: :commentable
end

class User < ActiveRecord::Base
  has_many :posts
  has_many :statuses
end

Association.new(Comment.reflect_on_all_associations(:belongs_to).first).dependencies
# => [Post, Status]

Association.new(Post.reflect_on_all_associations(:belongs_to).first).dependencies
# => [User]

Association.new(Status.reflect_on_all_associations(:belongs_to).first).dependencies
# => [User]

The object-oriented approach cleanly handles all the cases for us! Hopefully this post has added a few tricks to your repertoire. Next time you find yourself faced with a similar problem, use reflect_on_all_associations for great justice!

Author: "Ryan Stenberg" Tags: "Extend"
Date: Thursday, 03 Jul 2014 09:38

Ever since we made Say Viget! we've had a bunch of people asking us to explain exactly how we did it. This post is a first go at that explanation -- and a long one at that. Because so much goes into making a game, this is Part 1 of a multi-part series on how to build a 2D Javascript game, explaining some theory, best practices, and highlighting some helpful libraries.

If you're reading this then you probably know the basics: we use Javascript to add things to the context of a canvas while moving those things around through a loop (which is hopefully firing at ~60fps). But how do you achieve collisions, gravity, and general gameplay-smoothness? That's what this post will cover.

Screenshot

View Demo

Code on Github

Take note: I set up a few gulp tasks to make editing and playing with the code simple. It uses Browserify to manage dependencies and is coded in CoffeeScript. If you're unfamiliar with Gulp or Browserify, I recommend reading this great guide.

Using Box2D and EaselJS

There's a bunch of complex math involved to get a game functioning as we would expect. Even simple things can quickly become complex. For example, when we jump on a concrete sidewalk there is almost no restitution (bounciness) compared to when we jump on a trampoline. A smooth glass surface has less friction than a jagged rock. And still, objects that have a greater mass should push those with lesser mass out of the way. To account for all these scenarios we'll be using the Box2D physics library.

Box2D is the de facto standard 2D physics engine. To get it working in the browser we'll need to use a Javascript port, which I found here.

Since the syntax for drawing things to a canvas can tend to be verbose, I'll be using a library called EaselJS, which makes working with the <canvas> pretty enjoyable. If you're unfamiliar with EaselJS, definitely check out this Getting Started guide.

Let's get started.

What's in a Game?

Think high-level. Really high-level. The first thing you realize we need is a kind of world or Reality. Things like gravity, mass, restitution, and friction exist in the real world, and we probably want them to exist in our game world, too. Next, we know we will have at least two types of objects in our world: a Hero and some Platforms. We'll also need a place to put these two objects -- let's call the thing we put them on a Stage. And, just like the stage for a play, we'll need something that tells our Stage what should be put where. For that, we'll create a concept of a Scene. Lastly, we'll pull it all together into something I'll name Game.

Code Organization

As you can see, we start to have clear separation of concerns, with each of our Scene, Stage, Hero, etc. having a different responsibility. To future-proof and better organize our project, we'll create a separate Class for each:

  • Reality - Get our game and debug canvas ready and define our virtual world.
  • Stage - Holds and keeps track of what the user can see.
  • Hero - Builds our special hero object that can roll and jump around.
  • Platform - Builds a platform at a given x, y, width, and height.
  • Scene - Calls our hero and creates the platforms.
  • Game - Pulls together all our classes. We also put the start/stop and the game loop in here.

Additionally, we'll create two extra files which define some variables being used throughout our project.

  • Config - Holds some sizing and preferences.
  • Keys - Defines keyboard input codes and their corresponding value.

Getting Started

We'll have two <canvas>s: one that EaselJS will interact with (<canvas id="arcade">, which I'll refer to as Arcade), and another for Box2D (<canvas id="debug">, referred to as Debug). These two canvases run completely independently of each other, but we allow them to talk to each other. Our Debug canvas is its own world, a Box2D world, which is where we define gravity, how objects (bodies) within that world interact, and where we place the bodies that represent the things the user can see.

The objects we can see, like our hero and the platforms, we'll draw to the Arcade canvas using EaselJS. The Box2D objects (or bodies) that represent our hero and platforms will be drawn to the Debug canvas.

Since Box2D defines sizes in meters, we'll need to translate our input into something the browser can understand (moving a platform over 10 meters doesn't make sense; 300 pixels does). What this means is that every value we pass to a Box2D function that accepts, say, an X and Y coordinate needs to be divided by a scale factor that converts pixels into meters. That magic number is 30. So, if we want our hero to start at 25 pixels from the left of the screen and 475 pixels from the top, we would do:

scale = 30

# b2Vec2 creates a mathematical vector object,
# which has a magnitude and direction
position = new box2d.b2Vec2( 25 / scale , 475 / scale)
@body.SetPosition(position)

Simple enough, right? Let's jump into what a Box2D body is and what we can do with it.

Creating a Box2D Body

Many of the objects in the game are made up of something we can see, like the color and size of a platform, and world constraints on that object we cannot see, like mass, friction, etc. To handle this, we need to draw the visible representation of a platform to our Arcade canvas, while creating a Box2D body on the Debug canvas.

Box2D objects, or bodies, are made up of a Fixture definition and a Body definition. Fixtures represent what an object, like our Platform, is made of and how it responds to other objects. Attributes like friction, density, and its shape (whether it's a circle or polygon) are part of our Platform's Fixture. A Body definition defines where in our world a Platform should be. Some base level code for a Platform to be added to our Debug <canvas> would be:

scale  = 30
width  = 50
height = 50

# Creates what the shape is
@fixtureDef             = new box2d.b2FixtureDef
@fixtureDef.friction    = 0.5
@fixtureDef.restitution = 0.25 # Slightly bouncing
@fixtureDef.shape       = new box2d.b2PolygonShape
@fixtureDef.shape.SetAsBox( width / 2 / scale, height / 2 / scale )
# Note: SetAsBox Expects values to be 
# half the size, hence dividing by 2

# Where the shape should be
@bodyDef      = new box2d.b2BodyDef
@bodyDef.type = box2d.b2Body.b2_staticBody
@bodyDef.position.Set(width / scale, height / scale)

# Add to world
@body = world.CreateBody( @bodyDef )
@body.CreateFixture( @fixtureDef )

Note that static body types (as defined above with box2d.b2Body.b2_staticBody) are not affected by gravity. Dynamic body types, like our hero, will respond to gravity.

Adding EaselJS

In the same place we created our Box2D fixture and body definitions, we can create a new EaselJS Shape (simply a rectangle with the same dimensions as our Box2D body) and add it to our EaselJS Stage.

# ...from above...
# Add to world
@body = world.CreateBody( @bodyDef )
@body.CreateFixture( @fixtureDef )

@view = new createjs.Shape
@view.graphics.beginFill('#000').drawRect(100, 100, width, height)

Stage.addChild @view

From there, we now have one EaselJS Shape, or View, which is being drawn to our Arcade canvas, while the body that represents that Shape is drawn to our Debug canvas. In the case of our hero we want to move our EaselJS shape with its corresponding Box2D body. To do that, we would do something like:

# Get the current position of the body
position = @body.GetPosition()
# Multiply by our scale
@view.x = position.x * scale
@view.y = position.y * scale

The trick to all of this is tying these two objects together -- our Box2D body on our Debug canvas is affected by gravity and thus moves around. When it moves around, we get the position of the body and update the position of our EaselJS Shape, or @view. That's it.

Accounting for User Input and Controls

Think about how you normally control a character in a video game. You move the joystick up and the player moves forward... and keeps moving forward until you let go. We want to mimic that functionality in our game. To do this we will set a `moving` variable to true when the user presses down on a key (onKeyDown) and set it to false when the user lets go (onKeyUp). Something like:

assignControls: =>
    document.onkeydown = @handleDown
    document.onkeyup   = @handleUp

handleDown: (e) =>
    switch e.which
        when 37 # Left arrow
            @moving_left = true
        when 39 # right arrow
            @moving_right = true

handleUp: (e) =>
    switch e.which
        when 37
            @moving_left = false
        when 39
            @moving_right = false

And on each iteration of our loop, we would do something like:

update: =>
    # Move right
    if @moving_right
        @hero_speed += 1
    # Move left
    else if @moving_left
        @hero_speed -= 1
    # Come to a stop
    else
        @hero_speed = 0

Again, this is a pretty simple concept.

Look Through The Code

From here I recommend looking through the code on Github for great justice. In it you'll find more refined examples, in an actual game context, which will provide a fuller understanding of the concepts explained above.

Conclusion

So far we've covered:

  • Using two canvases, one to handle drawing and the other to handle physics
  • What makes up a Box2D body
  • How to tie our EaselJS objects to our Box2D bodies
  • A strategy for controlling our hero with user input. 

In Part 2 we'll cover:

  • How to follow our hero throughout our scene
  • How to build complex shapes
  • Handling collisions with special objects.

In addition to what I'll be covering in Part 2, is there anything else you would like covered relating to game development? Have questions or feedback on how we could be doing something differently? Let me know in the comments below.

Author: "Tommy Marshall" Tags: "Extend"
Date: Wednesday, 02 Jul 2014 13:52

This May, Viget worked with Dick's Sporting Goods to launch Women's Fitness, an interactive look at women’s fitness apparel and accessories. One of its most interesting features is the grid of hexagonal product tiles shown in each scene. To draw the hexagons, I chose to use SVG polygon elements.

I've had experience using SVG files as image sources and in icon fonts, but this work was my first opportunity to really dig into its most powerful use case: inline in HTML. Inline SVG simply refers to SVG markup that is included in the markup for a webpage.


<div><svg><!-- WHERE THE MAGIC HAPPENS. --></svg></div>


Based on this experience, here are a few simple things I learned about SVG.

1. Browser support is pretty good

http://caniuse.com/#feat=svg-html5

2. SVG can be styled with CSS

Many SVG attributes, like fill and stroke, can be styled right in your CSS.

See the Pen eLbCy by Chris Manning (@cwmanning) on CodePen.

3. SVG doesn't support CSS z-index

Setting the z-index in CSS has absolutely no effect on the stacking order of SVG elements. The only thing that does is the position of the node in the document. In the example below, the orange circle comes after the blue circle in the document, so it is stacked on top.

See the Pen qdgtk by Chris Manning (@cwmanning) on CodePen.

4. SVG can be created and manipulated with JavaScript

Creation

Creating namespaced elements (or attributes, more on that later) requires a slightly different approach than HTML:

// HTML
document.createElement('div');

// SVG
document.createElementNS('http://www.w3.org/2000/svg', 'svg');

If you're having problems interacting with or updating elements, double check that you're using createElementNS with the proper namespace. More on SVG namespaces.

With Backbone.js

In a Backbone application like Women's Fitness, to use svg or another namespaced element as the view's el, you can explicitly override this line in Backbone.View._ensureElement:

// https://github.com/jashkenas/backbone/blob/1.1.2/backbone.js#L1105
var $el = Backbone.$('<' + _.result(this, 'tagName') + '>').attr(attrs);

I made a Backbone View for SVG and copied the _ensureElement function, replacing the line above with this:

// this.nameSpace = 'http://www.w3.org/2000/svg'; this.tagName = 'svg';
var $el = $(window.document.createElementNS(_.result(this, 'nameSpace'), _.result(this, 'tagName'))).attr(attrs);

Setting Attributes

  • Some SVG attributes are namespaced, like the href of an image or anchor: xlink:href. To set or modify these, use setAttributeNS.
// typical
node.setAttribute('width', '150');

// namespaced
node.setAttributeNS('http://www.w3.org/1999/xlink', 'xlink:href', 'http://viget.com');
  • Tip: attributes set with jQuery are always converted to lowercase! Watch out for issues like this gem:
// jQuery sets 'patternUnits' as 'patternunits'
this.$el.attr('patternUnits', 'userSpaceOnUse');

// Works as expected
this.el.setAttribute('patternUnits', 'userSpaceOnUse');
  • Another tip: jQuery's addClass doesn't work on SVG elements. And element.classList isn't supported on SVG elements in Internet Explorer. But you can still update the class with $.attr('class', value) or setAttribute('class', value).

5. SVG can be animated

CSS

As mentioned in #2, SVG elements can be styled with CSS. The following example uses CSS animations to animate rotation (via transform) and SVG attributes like stroke and fill. In my experience so far, browser support is not as consistent as SMIL or JavaScript.

Browser support: Chrome, Firefox, Safari. Internet Explorer does not support CSS transitions, transforms, and animations on SVG elements. In this particular example, the rotation is broken in Firefox because CSS transform-origin is not supported on SVG elements: https://bugzilla.mozilla.org/show_bug.cgi?id=923193.

See the Pen jtrLF by Chris Manning (@cwmanning) on CodePen.

SMIL

SVG allows animation with SMIL (Synchronized Multimedia Integration Language, pronounced "smile"), which supports the changing of attributes with SVG elements like animate, animateTransform, and animateMotion. See https://developer.mozilla.org/en-US/docs/Web/SVG/SVG_animation_with_SMIL and http://www.w3.org/TR/SVG/animate.html for more. The following example is animated without any CSS or JavaScript.

Browser support: Chrome, Firefox, Safari. Internet Explorer does not support SMIL animation of SVG elements.

See the Pen jtrLF by Chris Manning (@cwmanning) on CodePen.

JavaScript

Direct manipulation of SVG element attributes allows for the most control over animations. It's also the only method of the three that supports animation in Internet Explorer. If you are doing a lot of work, there are many libraries to speed up development time like svg.js (used in this example), Snap.svg, and d3.

See the Pen jtrLF by Chris Manning (@cwmanning) on CodePen.

TL;DR

SVG isn't limited to whatever Illustrator outputs. Using SVG in HTML is well-supported and offers many different options to style and animate content. If you're interested in learning more, check out the resources below.

Additional Resources

Author: "Chris Manning" Tags: "Extend"
Date: Tuesday, 01 Jul 2014 16:00

I’m a little belated, but I was lucky enough to attend and speak at the first Craft CMS Summit two weeks ago. This was the first online conference that I had ever attended, and I was thoroughly impressed. I had always been a bit hesitant to attend online conferences because I was unsure about the quality, but after experiencing it firsthand I won't hesitate in the future. Everything was very well organized and the speakers all gave excellent presentations. It was also nice to sit and learn in the comfort of my own home instead of having to deal with the extra burden of traveling for a conference. Side note: Environments for Humans, the company who hosted the conference, has additional upcoming events.

State of Craft CMS

Brandon Kelly started the conference by giving a brief history of Craft and a peek at some new features. Here are a couple of bullets I pulled out:

  • There have been over 10 iterations of just the basic Control Panel layout.
  • They invited 10 people into the Blocks (Craft’s previous name) private alpha. There was minimal functionality, no Twig templating language (they created their own), and they just wanted to get some eyes on the interface.
  • There have been over 13,600 licenses issued.
  • 3,300 unique sites with recent CP usage in the last 30 days.
  • Revenue has been excellent. He also said June is going to have another spike.
  • They consider Craft, as of now, to hit 80% of what a site needs to do. The other 20% is really custom stuff that won’t be baked in.
  • Next batch of stuff is going to be usability improvements and improving their docs.
  • They are hiring a third party company to help with docs.
  • Saving big stuff for 3.0.
  • 3.0 will have in-browser asset editing, which they demoed.
  • Plugin store is coming this year. This will allow developers to submit their plugins to Pixel & Tonic. Then, those plugins will be available for download and update from within the Craft control panel.

E4H has also made the entire recording available for free.

Twig for Designers

Ben Parizek next gave a presentation on Twig, the templating engine that Craft uses. He shared this awesome spreadsheet which is a nice resource for example code for all of the default Craft custom fields.

Template Organization

Anthony Colangelo gave an interesting presentation about template organization. My main takeaway was to think about the structure of your templates based on the type of template, and not just the sections of the site. You can view the slides on Speaker Deck.

Craft Tips & Tricks

I was struggling to come up with a topic, so I just ran through a collection of real-world tips and tricks I had come across while building Craft sites. Here are a couple of my favorite ones from the presentation:

Merge

The Twig merge filter can help to reduce duplication in your template:

{% set filters = ['type', 'product', 'activity', 'element'] %}
{% set params = { section: 'media', limit: 12 } %}

{# Apply filter? #}
{% if craft.request.segments[2] is defined and craft.request.segments[1] in filters %}
	{% switch craft.request.segments[1] %}
		{% case 'product' %}
			{% set product = craft.entries({ slug: craft.request.segments[2], section: 'product' }).first() %}
			{% set params = params | merge({ relatedTo: product }) %}
		{% case 'type' %}
			{% set params = params | merge({ type: craft.request.segments[2] }) %}

		...
	{% endswitch %}
{% endif %}

{% set entries = craft.entries(params) %}

That code sample was used to apply filters on a media page. This way, we could reuse a single template.

Macros

Macros are kinda like helpers. They are useful for creating little reusable functions:

_helpers/index.html

{%- macro map_link(address) -%}
	http://maps.google.com/?q={{ address | url_encode }}
{%- endmacro -%}

contact/index.html

{% import "_helpers" as helpers %}

<a href="{{ helpers.map_link('400 S. Maple Avenue, Suite 200, Falls Church, VA 22046') }}">Map</a>

That code will result in: Map

Element Types and Plugin Development

Ben Croker talked about building plugins, and specifically Element Types. Element Types are the foundation of Craft’s entries, users, assets, globals, categories, etc. Craft has given us the ability to create Element Types through plugins. It’s not thoroughly documented yet (this is all they have), but you can use the existing Element Types to learn how to build them. You can take a look at his slides on Speaker Deck, but the bulk of the presentation was demoing an Element Type that he built.

Craft Q&A Round Table

The Pixel & Tonic team sat around and answered questions from the audience. Here's a smattering of the notes I took:

  • “We have some ideas of how to get DB syncing working, thats why every table has a uid column.” It’s an itch they want to scratch for themselves too.
  • On their list: using a JSON file for field/section setup
  • Matrix within Matrix will be coming eventually. The UI is tough, but the code is all in place.
  • There is the possibility that it will eventually be public on GitHub, but they have to work out some of the app setup stuff.
  • They’ve considered renaming “localization” to just be “sites”, then people can run multiple sites with one Craft install.
  • “Will comments be a part of core?”, “No, that’s plugin territory”
  • Duplicating entries will be coming in Craft 2.2
  • They have a lot of plans for the edit field layout page to make it more user friendly
Author: "Trevor Davis" Tags: "Extend"
Date: Monday, 30 Jun 2014 10:16

One difficult aspect of responsive development is managing complexity in navigation systems. For simple headers and navigation structures, it’s typically straightforward to just use a single HTML structure, then write some clever styles which re-adjust the navigation system from a small-screen format to one that takes advantage of the increased real estate of larger screens. Finally, write a small bit of JavaScript for opening and closing a menu on small screens and you’re done. The amount of overhead for delivering two presentation options to all screens in these cases is fairly low.

However, for cases where more complex navigation patterns are used, and where interactions are vastly different across screen sizes, this approach can be rather bloated, as unnecessary markup, styles and assets are downloaded for devices that don’t end up using them.

On one recent project, we were faced with such a problem. The mobile header was simple and the navigation trigger was the common hamburger icon. The navigation system itself employed a fairly complicated multi-level nested push menu which revealed itself from the left side of the screen. The desktop header and navigation system was arranged differently and implemented a full-screen mega-menu in place of the push menu previously mentioned. Due to the differences and overall complexity of each approach, different sets of markup and styles were required for presentation, and different JavaScript assets were required for each interaction pattern.

View Animated GIF: Mobile | Desktop

Mobile First to the Rescue

In order to have the small-screen experience be as streamlined as possible, we employed a mobile-first approach by using a combination of RequireJS, enquire.js & Handlebars. Here’s how it’s set up:

// main.js
require([
    'enquire'
], function(enquire) {
    enquire.register('screen and (max-width: 1000px)', {
        match: function() {
            require(['mobile-header']);
        }
    });
    enquire.register('screen and (min-width: 1001px)', {
        match: function() {
            require(['desktop-header']);
        }
    });
});

In the above code, we’re using enquire’s register method to check the viewport size, and load the bundled set of JavaScript assets for the appropriate screen size.

Handle the Small Screen Version

// mobile-header.js
require([
    'enquire',
    'dependency1',
    'dependency2'
], function(enquire, Dependency1, Dependency2) {
    enquire.register('screen and (max-width: 1000px)', {
        setup: function() {
            // initialize mobile header/nav
        },
        match: function() {
            // show mobile header/nav
        },
        unmatch: function() {
            // hide mobile header/nav
        }
    });
});

Here, mobile-header.js loads the necessary script dependencies for the mobile header and navigation, and sets up another enquire block for initializing, showing and hiding.

Handle the Large Screen Version

// desktop-header.js
requirejs.config({
    paths: {
        handlebars: 'handlebars.runtime'
    },
    shim: {
        handlebars: {
            exports: 'Handlebars'
        }
    }
});

require([
    'enquire',
    'handlebars.runtime',
    'dependency3',
    'dependency4'
], function(enquire, Handlebars, Dependency3, Dependency4) {
    enquire.register('screen and (min-width: 1001px)', {
        setup: function() {
            // get template and insert markup
            require(['../templates/desktop-header'], function() {
                var markup = JST['desktop-header']();
                $('#mobile-header').after(markup);
            });
        },
        match: function() {
            // show desktop header/nav
        },
        unmatch: function() {
            // hide desktop header/nav
        }
    });
});

* The Handlebars runtime is being used for faster render times. It requires that the desktop header template (referenced in the setup callback above) be a pre-compiled Handlebars template. It looks like this and can be auto-generated using grunt-contrib-handlebars.

Finally, desktop-header.js loads the necessary script dependencies for the desktop header and navigation. Another enquire block is set up for fetching and rendering the template, and showing and hiding.

Pros & Cons

The code examples above are heavily stripped down from the original implementation, and it’s also important to note that the RequireJS Optimizer was used to combine related scripts together into a few key modules (main, mobile and desktop), in order to keep http requests to a minimum.

Which brings me to a downside: splitting the JS into small and large modules does add one extra http request as opposed to simply bundling ALL THE THINGS into one JS file. For your specific implementation, the bandwidth and memory savings would have to be weighed against the slight penalty of an extra http request. That penalty may or may not be worth it. There is also an ever so slight flash of the mobile header on desktop before it is replaced with the desktop header. We mitigated this with css, by simply hiding the mobile header at the large breakpoint.

On the plus side, the advantage here is that the desktop header and associated assets are only loaded when the viewport size is large enough to accommodate it. Also, the JavaScript assets for the mobile multi-level push menu are only loaded for small screens. Bandwidth is more efficiently utilized in that mobile users’ data plans aren’t taxed with downloading unnecessary assets. The browser also has less work to do overall. Everyone rejoices!

Taking it Further

Several ways this could be taken to the next level would be to modularize the styles required for rendering the mobile and desktop header and navigation, and bundle those within their respective modules. Another completely different approach for managing this type of complexity would be to implement a RESS solution with something like Detector. If you have any other clever ways of managing complexity in responsive navigation patterns, or any responsive components for that matter, let me know in the comments below.

Author: "Jeremy Frank" Tags: "Extend"
Date: Wednesday, 18 Jun 2014 15:43

Recently, Lawson and Ryan launched Sprig, a gem for seeding Rails applications.

Sprig seed files are easy to write, but they do take some time -- time which you may not have enough of. We wanted to generate seed files from records already in the database, and we received similar requests from other Sprig users. At Viget, we try to give the people what the people want, so I jumped in and created Sprig-Reap!

Introducing Sprig-Reap

Sprig-Reap is a rubygem that allows you to generate Sprig-formatted seed files from your Rails app's database.

It provides both a command-line interface via a rake task and a method accessible inside the Rails console.

Command Line

rake db:seed:reap

Rails Console

Sprig.reap

The Defaults

Sprig-Reap, by default, will create a seed file for every model in your Rails app with an entry for each record. The .yml seed files will be placed inside the db/seeds/env folder, where env is the current Rails.env.

Don't like these defaults? No problem!

Customizing the Target Environment Seed Folder

Sprig-Reap can write to a seeds folder named after any environment you want. If the target folder doesn't already exist, Sprig-Reap will create it for you!

# Command Line
rake db:seed:reap TARGET_ENV='dreamland'

# Rails Console
Sprig.reap(target_env: 'dreamland')

Customizing the Set of Models

You tell Sprig-Reap which models you want seeds for and -- BOOM -- it's done:

# Command Line
rake db:seed:reap MODELS=User,Post,Comment

# Rails Console
Sprig.reap(models: [User, Post, Comment])

Omitting Specific Attributes from Seed Files

Tired of seeing those created_at/updated_at timestamps when you don't care about them? Don't want encrypted passwords dumped into your seed files? Just ignore 'em!

# Command Line
rake db:seed:reap IGNORED_ATTRS=created_at,updated_at,password

# Rails Console
Sprig.reap(ignored_attrs: [:created_at, :updated_at, :password])

Reaping with Existing Seed Files

If you have existing seed files you're already using with Sprig, have no fear! Sprig-Reap is friendly with other Sprig seed files and will append to what you already have -- appropriately assigning unique sprig_ids to each entry.

Use Case

If you're wondering what the point of all this is, perchance this little example will pique your interest:

At Viget, QA is a critical part of every project. During the QA process, we generate all kinds of data so we can test all the things. Oftentimes this data describes a very particular, complicated state. Being able to easily take a snapshot of the application's data state is super helpful. Sprig-Reap lets us do this with a single command -- and gives us seed files that can be shared and re-used across the entire project team. If someone happens to run into a hard-to-reproduce issue related to a specific data state, use Sprig-Reap for great justice!

Your Ideas

We'd love to hear what people think about Sprig-Reap and how they're using it. Please share! If you have any comments or ideas of your own when it comes to enhancements, leave a comment below or add an issue to the GitHub repo.

Author: "Ryan Stenberg" Tags: "Extend"
Date: Wednesday, 11 Jun 2014 19:37

Last September, while in Brighton for dConstruct, I attended the second annual IndieWebCampUK, a two-day gathering of web developers focused on building IndieWeb tools.

If you're unfamiliar with the IndieWeb movement, its guiding principle is that you should own your data. In practical terms, this amounts to publishing content on a website at a domain that you own (instead of, say, posting all of your photos to a service like Facebook). Surrounding that principle are a variety of other ideas and tools being created by some amazing people (including Tantek Çelik, Aaron Parecki, Amber Case, and others).

IndieWebCampUK rekindled my desire to publish on my own website and build tools that would help others do the same.

Of all the IndieWeb building blocks being worked on, webmention caught my attention the most. From the wiki:

Webmention is a simple way to notify any URL when you link to it on your site. From the receiver's perspective, it's a way to request notifications when other sites link to it. Webmention is a modern update to Pingback, using only HTTP and x-www-urlencoded content rather than XML-RPC requests.

The power of webmention is its simplicity. Unlike sending Pingbacks with XML-RPC, sending a webmention can be as simple as using cURL on the command line to POST to a URL (as shown in this example). Very cool and relatively easy.
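If cURL isn't your thing, the same request is nothing more than a form-encoded POST with source and target parameters. Here's a minimal sketch with Ruby's standard Net::HTTP; the URLs are made up, and the endpoint would be discovered from the target page:

require "net/http"
require "uri"

endpoint = URI("https://webmention.example.com/endpoint")

# "source" is the page on your site that links to "target"
response = Net::HTTP.post_form(endpoint,
  "source" => "https://my-site.example.com/posts/hello-webmention",
  "target" => "https://their-site.example.com/some-article")

# A 2xx response generally means the receiver accepted (or queued) the mention
puts response.code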

In the months since IndieWebCampUK, I've been trying to figure out how to best contribute to webmention. Which brings us to…

Webmention Client Plugin for Craft CMS

With some help from Trevor, I've just released version 1.0.0 of a webmention client that adds the ability to send webmentions from Craft. Installation and setup is really easy and is detailed in the project README on GitHub.

For the initial release, the plugin makes available a new "Webmention (targets)" Field Type that can be added to any of your site's Field Layouts. When saving an entry with a webmention field, the plugin will ping each target supplied, looking for a webmention endpoint. If an endpoint is found, then the endpoint, target, and source (the Craft entry's URL) are stored in a queue for processing. Once the queue is ready to be processed, a background task kicks off and sends webmentions to the appropriate websites.

That's it! Your Craft-powered site is now sending webmentions.

Issues, Updates, etc.

I spent some time looking through the FAQs, Issues, and Brainstorming sections of the Webmention wiki page and I think the Craft plugin handles most of the primary use cases. There are some things I'd like to do better in future versions, though:

  • Send a webmention when a URL is removed from the list of targets.
  • Have the plugin crawl an entry's body field(s) for URLs to ping.

The latter item would involve a lot of heavy lifting and some potentially tricky UI, but I'm hoping to tackle that down the line. In the meantime, give the plugin a try and let me know if you run into any problems or have any feature suggestions.

In true IndieWeb fashion, I've published this on my own website first and syndicated it here.

Author: "Jason Garber" Tags: "Extend"
Date: Tuesday, 10 Jun 2014 16:00

Traditionally, stylesheets describe the majority of the presentation layer for a website. However, as JavaScript becomes necessary to present information in a stylistically consistent way, it becomes troublesome to keep these mediums in sync. Data visualizations and break-point based interaction are prime examples of this, and something I bumped into on my most recent project.

I should note that this is not an unsolved problem, and there are many interesting examples of this technique in the wild. However I wanted a simpler solution and I've been wanting to write a Sass plugin anyway.

The result of this curiosity is sass-json-vars. After requiring it, this gem allows JSON files to be included as valid @import paths, converting the top-level values into any of the Sass data types (strings, maps, lists).

Usage

Consider the following snippet of JSON (breakpoints shortened for brevity):

{
    "colors": {
        "red"  : "#c33",
        "blue" : "#33c"
    },

    "breakpoints": {
        "landscape" : "only screen and (orientation : landscape)",
        "portrait"  : "only screen and (orientation : portrait)"
    }
}

sass-json-vars exposes the top-level keys as Sass variables whenever a JSON file is included using @import.

@import "variables.json"; 

.element {
    color: map-get($colors, red);
    width: 75%;

    @media #{map-get($breakpoints, portrait)} {
        width: 100%;
    }
}

Similarly, these values can be accessed in JavaScript using a module system such as CommonJS with browserify. For example, if we need to determine if the current browser's orientation is at landscape:

var breakpoints = require("./variables.json").breakpoints;

// https://developer.mozilla.org/en-US/docs/Web/API/Window.matchMedia
var isLandscape  = matchMedia(breakpoints.landscape).matches;

if (isLandscape) {
    // do something in landscape mode
}

Integration

sass-json-vars can be included similarly to sass-globbing or other plugins that add functionality to @import. Simply include it as a dependency in your Gemfile:

gem 'sass-json-vars'

or within Rails:

group :assets do
    gem 'sass-json-vars'
end

Asset paths should be handled automatically when using sass-json-vars with the Ruby on Rails asset pipeline.

Final thoughts

sass-json-vars supports all of the data types provided by Sass. This could be used to describe media queries for the breakpoint Sass plugin, or store special characters for icons generated by IcoMoon.

Check out the repo on Github and feel free to comment about how you use it!

Author: "Nate Hunzaker" Tags: "Extend"
Date: Friday, 06 Jun 2014 15:24

I recently built an API with Sinatra and ran into a recurring challenge when dealing with resource-specific routes (like /objects/:id). The first thing I had to handle in each of those routes was whether or not a record for both the resource type and id existed. If it didn't, I wanted to send back a JSON response with some meaningful error message letting the API consumer know that they asked for a certain kind of resource with an ID that didn't exist.

My first pass looked something like this:

get '/objects/:id' do |id|
  object = Object.find_by_id(id)

  if object.nil?
    status 404
    json(errors: "Object with an ID of #{id} does not exist")
  else
    json object
  end
end

put '/objects/:id' do |id|
  object = Object.find_by_id(id)

  if object.nil?
    status 404
    json(errors: "Object with an ID of #{id} does not exist")
  else
    if object.update_attributes(params[:object])
      json object
    else
      json(errors: object.errors)
    end
  end
end

Seems ok, but there would be a lot of duplication if I had these if/else statements in every resource-specific route. Lately, I've looked for common if/else conditionals like this as an opportunity for method abstraction, particularly with the use of blocks and yield. The following methods are an example of this kind of abstraction:

def ensure_resource_exists(resource_type, id)
  resource = resource_type.find_by_id(id)

  if resource.nil?
    status 404
    json(errors: "#{resource_type} with an ID of #{id} does not exist")
  else
    yield resource if block_given?
  end
end

Then the initial example would look something like:

get '/objects/:id' do |id|
  ensure_resource_exists(Object, id) do |obj|
    json obj
  end
end

put '/objects/:id' do |id|
  ensure_resource_exists(Object, id) do |obj|
    if obj.update_attributes(params[:object])
      json obj
    else
      json(errors: obj.errors)
    end
  end
end

It hides away the distracting error case handling and gives us a readable, declarative method body.  Next time you find yourself dealing with repetitive error cases, use blocks like this for great justice!

Author: "Ryan Stenberg" Tags: "Extend"
Date: Wednesday, 21 May 2014 17:05

Have you ever wanted to use an enumerated type in your Rails app? After years of feature requests, Rails 4.1 finally added them: a simple implementation that maps strings to integers. But what if you need something different?
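For reference, the built-in Rails 4.1 version looks roughly like this (a made-up Conversation model), with each value stored as an integer in a status column:

class Conversation < ActiveRecord::Base
  enum status: [:active, :archived]   # stored as 0 and 1
end

conversation = Conversation.new(status: :active)
conversation.active?   # => true
conversation.archived! # updates status to "archived"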

On a recent project, I implemented a survey, where animals are matched by answering a series of multiple-choice questions. The models looked like this:

class Animal < ActiveRecord::Base
  has_many :answer_keys
end

class AnswerKey < ActiveRecord::Base
  belongs_to :animal

  validates :color, :hair_length, presence: true
end

An animal has many answer keys, where an answer key is a set of survey answers that matches that animal. color and hair_length each represent a multiple-choice answer and are natural candidates for an enum.

The simplest possible implementation might look like this:

validates :color,       inclusion: { in: %w(black brown gray orange yellow white) }
validates :hair_length, inclusion: { in: %w(less_than_1_inch 1_to_3_inches longer_than_3_inches) }

However, there were additional requirements for each of these enums:

  • Convert the value to a human readable name, for display in the admin interface
  • Export all of the values and their human names to JSON, for consumption and display by a mobile app

Currently, the enum values are strings; what I really need is an object that looks like a string but has some custom behavior. A subclass of String should do nicely:

module Survey
  class Enum < String
    # Locale scope to use for translations
    class_attribute :i18n_scope

    # Array of all valid values
    class_attribute :valid_values

    def self.values
      @values ||= Array(valid_values).map { |val| new(val) }
    end

    def initialize(s)
      unless s.in?(Array(valid_values))
        raise ArgumentError, "#{s.inspect} is not a valid #{self.class} value"
      end

      super
    end

    def human_name
      if i18n_scope.blank?
        raise NotImplementedError, 'Your subclass must define :i18n_scope'
      end

      I18n.t!(value, scope: i18n_scope)
    end

    def value
      to_s
    end

    def as_json(opts = nil)
      {
        'value'      => value,
        'human_name' => human_name
      }
    end
  end
end

This base class handles everything we need: validating the values, converting to human readable names, and exporting to JSON. All we have to do is subclass it and set the two class attributes:

module Survey
  class Color < Enum
    self.i18n_scope = 'survey.colors'

    self.valid_values = %w(
      black
      brown
      gray
      orange
      yellow
      white
    )
  end

  class HairLength < Enum
    self.i18n_scope = 'survey.hair_lengths'

    self.valid_values = %w(
      less_than_1_inch
      1_to_3_inches
      longer_than_3_inches
    )
  end
end

Finally, we need to add our human readable translations to the locale file:

en:
  survey:
    colors:
      black: Black
      brown: Brown
      gray: Gray
      orange: Orange
      yellow: Yellow/Blonde
      white: White
    hair_lengths:
      less_than_1_inch: Less than 1 inch
      1_to_3_inches: 1 to 3 inches
      longer_than_3_inches: Longer than 3 inches

We now have an enumerated type in pure Ruby. The values look like strings while also having the custom behavior we need.

Survey::Color.values

# => ["black", "brown", "gray", "orange", "yellow", "white"]

Survey::Color.values.first.human_name

# => "Black"

Survey::Color.values.as_json

# => [{"value"=>"black", "human_name"=>"Black"}, {"value"=>"brown", "human_name"=>"Brown"}, ...]

The last step is to hook our new enumerated types into our AnswerKey model for great justice. We want color and hair_length to be automatically converted to instances of our new enum classes. Fortunately, my good friend Zachary has already solved that problem. We just have to update our Enum class with the right methods:

def self.load(value)
  if value.present?
    new(value)
  else
    # Don't try to convert nil or empty strings
    value
  end
end

def self.dump(obj)
  obj.to_s
end

And set up our model:

class AnswerKey < ActiveRecord::Base
  belongs_to :animal

  serialize :color,       Survey::Color
  serialize :hair_length, Survey::HairLength

  validates :color,       inclusion: { in: Survey::Color.values }
  validates :hair_length, inclusion: { in: Survey::HairLength.values }
end

BONUS TIP — We probably need to add these enums to a form in the admin interface, right? If you're using Formtastic, it automatically looks at our #human_name method and does the right thing:

f.input :color, as: :select, collection: Survey::Color.values

Shazam.


 

Hey friend, have you implemented enums in one of your Rails apps? How did you do that? Let me know in the comments below. Have a nice day.

Author: "Chris Jones" Tags: "Extend"
Date: Friday, 16 May 2014 09:23

As we have the opportunity to work on more Craft sites at Viget, we’ve been able to do some interesting integrations, like our most recent integration with the ecommerce platform Shopify. Below is a step-by-step guide to integrating Craft and Shopify by utilizing a plugin I built.

Craft Configuration

First, you need to download and install the Craft Shopify plugin, and add your Shopify API credentials in the plugin settings.

With the plugin installed, you also get a custom fieldtype that lets you select a Shopify Product from a dropdown. So let’s create a field called Shopify Product.

Next, let’s create a Product section in Craft and add our Shopify Product field to that section. Now, when we go to publish a Product, we can associate the product in Craft with a product in Shopify.

The idea here is that we can use the powerful custom field functionality of Craft to build out product pages but still pull in the Shopify specific data (price, variants, etc). Then, we let Shopify handle the cart and checkout process.

Craft Templates

The Shopify plugin also provides some functionality to retrieve information from the Shopify API. So on our products/_entry template, we can use the value from our shopifyProduct field to grab information about the product from Shopify.

{% set shopify = craft.shopify.getProductById({ id: entry.shopifyProduct }) %}

That will hit the Shopify product endpoint, and return the data you can use in Craft. You can also specify particular fields to make the response smaller.

This means we can pretty easily create an Add to Cart form in our Craft templates.

{% set shopify = craft.shopify.getProductById({ id: entry.shopifyProduct, fields: 'variants' }) %}

<form action="http://your.shopify.url/cart/add" method="post">
	<select name="id">
		{% for variant in shopify.variants %}
			<option value="{{ variant.id }}">{{ variant.title }} - ${{ variant.price }}</option>
		{% endfor %}
	</select>
	<input type="hidden" name="return_to" value="back">
	<button type="submit">Add to Cart</button>
</form>

The plugin also provides a couple of additional methods to retrieve product information from Shopify.

craft.shopify.getProducts()

This method hits the products endpoint, and you can pass in any of the parameters that the documentation notes.

{% for product in craft.shopify.getProducts({ fields: 'title,variants', limit: 5 }) %}
	<div class="product">
		<h2>{{ product.title }}</h2>
		<ul>
			{% for variant in product.variants %}
				<li>{{ variant.title }} - ${{ variant.price }}</li>
			{% endfor %}
		</ul>
	</div>
{% endfor %}

craft.shopify.getProductsVariants()

I ended up creating this method because, on the products index, I wanted an easy way to output the variants for each Craft product without having to hit the API multiple times. Basically, you call this method once, and the keys of the returned array are the product IDs. Again, you can pass in any of the parameters that the products endpoint documentation references, but if you don’t include the id and variants fields, there isn’t any point in using this method!

{% set shopifyProducts = craft.shopify.getProductsVariants() %}

{% for entry in craft.entries({ section: 'product' }) %}
	<div class="product">
		<h2><a href="{{ entry.url }}">{{ entry.title }}</a></h2>

		{% if entry.shopifyProduct and shopifyProducts[entry.shopifyProduct] %}
			<ul>
				{% for variant in shopifyProducts[entry.shopifyProduct] %}
					<li>{{ variant.title }} - ${{ variant.price }}</li>
				{% endfor %}
			</ul>
		{% endif %}
	</div>
{% endfor %}

Really you can do whatever you want with the data that's returned; this is just a simple example to output the variants for each product.

So download the Craft Shopify plugin, and enjoy the simple integration with Shopify!

Author: "Trevor Davis" Tags: "Extend"