Date: Sunday, 30 Mar 2014 19:39

I've never worked in the games business, and I have no major reason to have opinions about it. But I've been thinking about this one for a really long time and I don't see it being fully explained anywhere. So here's my thesis about what's going to happen with console gaming.

One day, Apple will expand Apple TV into an iOS device that can support third-party apps, perhaps renaming it in the process. The biggest reason they have to take this step is the rise of free-to-play (F2P) games. F2P games have made major inroads in mobile and PC gaming, but nobody has taken them into the living room. Apple will be the company to do this, beating Sony, Microsoft, Nintendo, and Valve to the punch. It will give the company a major footprint in the living room, strengthen its position as a distributor of streaming entertainment, and grow Apple TV into a massive new product line.

Free-to-play gaming is a tidal wave

Give this a try: Find good free games for your iPhone or iPad. Find them on the App Store, or even better, through gaming blogs like Kotaku. Download every single one and play it for as long as you enjoy it. Don't spend any money. When you get bored, find another one. Repeat until you run out of good free games.

You can't actually do this, right? There's simply too much out there. There are so many good and even great free games that doing this would cost you your job, your family, and your social life.

Free-to-play games are, like any freemium good, one part marketing and one part for-sale product. The games business has always been incredibly competitive, but in the past you still had to charge for games because there was no other way to get paid. But once companies like Apple and Valve made it easy to embed payment opportunities inside the game after the download, that created an immense downward pressure on purchase price. That pressure has now brought the price all the way down to zero.

There are holdouts, to be sure. And they have good reasons, arguing that F2P dynamics ruin games. I think that's a respectable position, but I also think it's not a position that most studios will be able to afford.

Imagine a world where almost every restaurant let you try the first five bites of any dish for free. How many people would try the restaurant that made you pay up front? Wouldn't almost everybody walk up and down the street at dinner time, eating five bites at a time until they were full?

We have met the enemy, and he is currently downloading Flappy Bird

Now is probably the time to say that I don't think restaurateurs should offer five free bites: Their lives are hard enough as it is. And I don't think that F2P is great for everybody all the time.

But it's interesting to compare mobile F2P and PC F2P, which are starkly different in terms of game quality. In mobile, F2P brings out the worst in app economics, with crippleware, shameless ripoffs, misleading names, ratings scammers, and everything else. But on the PC side, F2P is a done deal, and if you check out Steam's list of F2P games you'll find successful and critically respectable entries in a wide range of genres. If you want a team-based shooter, go for Team Fortress 2 or Loadout. If you want a grindy RPG, try Path of Exile. You can be a superhero in Marvel Heroes, or a mechanized war machine in Hawken.

And then there's League of Legends, the biggest e-sports game today. You probably couldn't find a player base more concerned with game balance, and yet LoL is F2P too. If it's harmed the game, the fans don't seem to care: For last year's world championships, Riot Games booked the Staples Center and then sold every last ticket.

So I guess, generally, I consider F2P a classic disruption that will be very noisy for a while but generally better for consumers. And I think when F2P hits the living room, it may look like the current mobile F2P at first, but as it gets bigger it will get a bit more mature, looking like the more sensible PC F2P scene.

(If you're curious: I play mostly console games with a few PC games on the side, with a leaning towards intense shooters and games that are grim in a novel way. Recent faves include the Dark Souls series, Battlefield 3, Don't Starve, Spec Ops: The Line, and Hotline Miami. I think Titanfall lives up to the hype. I'm trying to not suck at League of Legends.)

Who loses?

Once you call something a "classic disruption", you need to enumerate who's going to end up on what side when all the dust is settled. As I've written before, "disruptive" is often a synonym for "somebody's going to lose their job". Vote Democrat, kids.

AAA Game Studios

There are a handful of studios still launching $60 titles for consoles and PC. But as much as I'm enjoying Titanfall, I think that perch is going to get really precarious over the next few years. Some of those studios will figure out how to make the games they want with a blend of lower purchase prices and more in-game purchasing. Maybe a few will be able to justify their high purchase price with a high reputation for quality. (I'd probably pay $100 for Half-Life 3 at this point.) And some won't make the transition at all.

Microsoft and Sony

The Xbox and Playstation product lines are fairly expensive machines whose main selling point is the ability to power AAA titles. If all you want to play is AAA games, those consoles offer strong value, but as F2P games improve that value only gets weaker.

So even if they see F2P coming at them, it's hard to know how Microsoft and Sony would even maneuver. They're joined at the hip to the AAA studios, and the right console exclusive can drive a lot of hardware sales: Personally I bought my Xbox One bundled with Titanfall. Serious moves to make these consoles more F2P-friendly would likely erode the strength of the AAA studios, leading to a lot of stressful conference calls.

And, yes, Microsoft and Sony keep making noises about their consoles working as general entertainment hubs, but that's a secondary point at best when Roku and Chromecast cost less than $50. Nobody buys an Xbox One to watch Netflix. That'd be like buying a car because you need a cup holder.

Nintendo

Business schools will be picking over the bones of this one for decades to come: A game company with a string of hardware hits and a stable of beloved characters, failing because it couldn't see that F2P would catastrophically erode its hold on its casual, price-sensitive customers. F2P needs social dynamics and in-game payments, but Nintendo has not seriously invested in either. And while they can honestly say they provide a safer gaming environment for kids than Apple or anyone else, that will be scant consolation when they're looking like the next Sega, making one Mario game a year to release on other people's hardware.

Next stop: The living room

As for Apple, I believe they're headed to the living room to take on console makers Sony and Microsoft. It might go something like this:

One day—maybe tomorrow, maybe in a year—Apple will release an iOS console that integrates with your TV and supports iOS apps. This will make the iOS Controller API ten times more important than before, and iOS game developers everywhere will rush to update their existing titles for controller use. As soon as they're done with that enhancement they'll start work on new games, the kind that work better on a big flatscreen TV.

The reaction of the hardcore gaming community will be far more negative than positive. They'll call Apple console users "casuals" and worse. But it won't matter, because Apple isn't selling to them. Slashdot commenters complained that the iPod was expensive on a per-megabyte basis, and that turned out to be irrelevant too.

Instead, Apple will market to its existing customers, with a tagline something like: "$199. 10,000 free games." And then they'll have a foothold.

The first generation of this device will support fairly low-powered games, only a bit more graphically intensive than an iPad game, but as time goes on Apple will push up into bigger hardware, and some of their iOS developer base will follow. Before long Sony and Microsoft will lose some of their top-notch studios to this platform. And soon after that hardcore gamers will have fewer and fewer reasons to buy a box from Sony or Microsoft.

What about Valve?

Gabe Newell sees this coming. He needs to be a little paranoid, because Valve is also pushing into the living room with their Steam Machine initiative, in which third-party manufacturers sell PCs designed as under-the-TV game consoles with Steam pre-installed.

Valve and Apple resemble each other more than a little: They're both demanding, high-performing technology organizations founded by leaders with a perfectionist streak and a knack for playing a few steps ahead of their competitors. They might meet in the living room, and if that competition heats up it will be fascinating to watch.

However, I'm not sure it'll come to that. Valve might not connect with the mass-market audience that Apple is shooting for. Steam Machine might appeal to PC gamers who are willing to pay more for well-designed hardware, but the reliance on third-party vendors will fragment the brand and intimidate casual customers.

And even Valve themselves have been serving PC gamers for so long that they may not be capable of, or interested in, connecting with a broader audience. I'm a happy Steam user, but I also think it's a perfect example of the company's gamer-centric mindset. Everything about its design—from the dark color scheme to the lousy social features to the tiny fonts—just screams out that it was designed for 18-year-olds who are sitting too close to their monitors.

Predictions

This post would hardly qualify as tech-pundit windbaggery if I didn't make specific claims, would it? How else are you going to tease me when it turns out I'm wrong?

So, here are some things I'm predicting about the games business in five years:

In five years, Apple gaming in the living room will be mainstream

The new Apple console will be a mainstream success, far bigger than Apple TV is now. Over time the iOS gaming library will come to resemble the Xbox and Playstation libraries, with major entries in genres such as FPS, racing, fighting games, etc. Most but not all of these games will be free-to-play.

The only major limitations of the Apple console library will be the same as those of the Xbox and Playstation: Nobody will be playing an RTS like Starcraft or a MOBA like League of Legends, because nobody, not even Apple, can make mainstream consumers use a keyboard while sitting on a couch.

In five years, Valve's Steam Machines will be a respectable but minor niche in the console category

These will be quality products that allow Steam to extend its reach without embarrassing Gabe Newell, but they won't break out of the niche PC gaming audience.

In five years, either Sony or Microsoft will retire its console line

My guess is that there will only be enough space for one console dependent on AAA titles. Sony and Microsoft will stare at each other for a long time, and then one side will blink.

I'm really not sure which side this will be. The needs of adjacent business units would favor Sony staying in the game: It's a full-fledged entertainment company. Meanwhile, a Microsoft run by Satya Nadella will likely be focused on its enterprise and cloud offerings, making Xbox look like a red-headed stepchild inside its own company.

But from the point of view of the developer ecosystem, Microsoft has a much easier time than Sony encouraging developers to make Xbox games, since Windows game programming is so similar. So while I think one of them will bow out, it's very hard to guess which one it'll be.

In five years, Nintendo will be done in the hardware business

Sorry, guys. It was a good run while it lasted.

What does disruption feel like?

I'm generally quite skeptical of tech business books, but I find myself recommending Clayton Christensen's The Innovator's Dilemma all the time. It's almost 20 years old, but it feels like it was written yesterday. And it defines disruptive innovation so precisely and so persuasively that sometimes I feel like you shouldn't be able to use the word "disruptive" without having read it.

One of the things Christensen talks about is "value networks": How new technologies don't just succeed or fail on their technical merits alone, but on the incentives of the various participants positioned around those technologies. You may have permission to work on some new kind of hard drive, but you're never going to get a salesperson to sell them if they have the same commission structure as the older models with a well-defined market. You will fail if you try selling new backhoes to the construction contractors who need the power of an older steam shovel, but maybe you can get the attention of groundskeepers and start from there.

I think one of the tricky parts about spotting a disruptive innovation is that while the old value network is established and easy to describe, the new value network doesn't exist yet. It has yet to be built out of unmet desires and interests that are diffused throughout the world, and watching for its formation is like watching sugar water crystallize into rock candy.

But I believe that this network will take shape, because there is a diffuse set of interests whose desires will be met more precisely by it. Consumers who want to try before they buy. Developers who love making games so much that they're willing to give away a lot of their work. Writers and web programmers who want to help people choose between good and bad in a world flooded with new games. Publishers and investors who understand that the power-law economics of social games means there's money to be made by having the deep pockets to fund twenty games only to see one succeed.

As for whether it'll be good for games, I think generally yes. I'd like to think for every shitty, deceptive piece of crippleware, we'll see two new entries that are sharp and imaginative. Of course, I work in tech, and maybe I'm predisposed to that optimism.

But I also don't think my optimism matters that much. We're headed there either way.

Date: Friday, 21 Mar 2014 21:03

More than once, I've been asked to make sense of a document store underneath an out-of-control codebase. For my last project, I wrote JsonInference to help me see the entire data store all at once, looking for common patterns.

Given a bunch of JSON documents that are assumed to be similar, JsonInference reports on the statistical patterns they have in common. For example, feed a report object a bunch of JSON hashes:

report = JsonInference.new_report
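# 'huge_json' is assumed to be an already-parsed Hash whose 'docs' array holds the documents.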
huge_json['docs'].each do |doc|
  report << doc
end
puts report.to_s

And you receive output that looks like this:

JsonInference report: 21 documents
:root > ._id: 21/21 (100.00%)
  String: 100.00%, 0.00% empty

:root > ._rev: 21/21 (100.00%)
  String: 100.00%, 0.00% empty

:root > .author_id: 14/21 (66.67%)
  Fixnum: 100.00%

:root > .sections: 21/21 (100.00%)
  Array: 100.00%, 0.00% empty
  :root > .sections:nth-child(): 50 children
    Hash: 100.00%
    :root > .sections:nth-child() > .title: 50/50 (100.00%)
      String: 100.00%, 0.00% empty
    :root > .sections:nth-child() > .subhead: 50/50 (100.00%)
      String: 100.00%, 2.00% empty
    :root > .sections:nth-child() > .body: 50/50 (100.00%)
      String: 100.00%, 0.00% empty
    :root > .sections:nth-child() > .permalink: 46/50 (92.00%)
      String: 100.00%, 15.22% empty

I keep meaning to write more about document stores and the challenges they present to teams modeling data. I don't necessarily think they're worse than relational stores, but they do seem to offer lots of unfamiliar pitfalls.

Date: Friday, 21 Mar 2014 20:56

For a recent consulting project, I found myself comparing a lot of large JSON documents in tests, which can be frustrating since differences don't show up well when comparing the hashes normally. Hence JsonDeepCompare, a Ruby gem for comparing large JSON documents and showing the most specific points of difference if they are unequal.

Let's say you've got a test case:

class MyTest
  include JsonDeepCompare::Assertions

  def test_comparison
    left_value = {
      'total_rows' => 2,
      'rows' => [
        {
          'id' => 'foo',
          'doc' => {
            '_id' => 'foo', 'title' => 'Foo', 'sub_document' => { 'one' => 'two' }
          }
        }
      ]
    }
    right_value = {
      'total_rows' => 2,
      'rows' => [
        {
          'id' => 'foo',
          'doc' => {
            '_id' => 'foo', 'title' => 'Foo', 'sub_document' => { 'one' => '1' }
          }
        }
      ]
    }
    assert_json_equal(left_value, right_value)
  end
end

Running it will output this error:

RuntimeError: ":root > .rows :nth-child(1) > .doc > .sub_document > .one" expected to be "two" but was "1"

The selector syntax uses a limited subset of JSONSelect to describe where to find the differences.

Date: Monday, 03 Jun 2013 14:39

TL;DR:

Stick with SQL databases for now, and use whatever's easiest in whatever environment you're using. That's probably SQLite on your laptop, PostgreSQL if you're deploying to Heroku, and PostgreSQL or MySQL if you're deploying somewhere besides Heroku. And as soon as you deploy anything with data you care about and multiple people using it, make sure you're not on SQLite.

More explanation below:

How much research should I be doing on my own?

Very little. If you're a complete newcomer to programming, you're going to have your hands full as it is. Your choice of database is not the first problem you have to solve.

SQL or NoSQL?

You may have heard of these new-fangled databases loosely called "NoSQL". They're cool, but they're a lot newer than SQL databases, which will cause you trouble when you're getting started. This isn't so much an issue of the technical merits of one style vs. another, but if you choose a NoSQL store you're going to have an uphill climb with setup, installation, library support, documentation, etc.

SQL, on the other hand, is super-popular and might actually be older than you. That means that for every single SQL question you have, you're only a Google search away from an answer. Stick with SQL for now.

Which SQL database should I use?

There are three meaningful open-source SQL databases: SQLite, MySQL, and PostgreSQL. An over-simplified comparison might go like this:

  • SQLite, as the name implies, is meant to be a very light implementation of SQL functionality. You can use it, say, if you're making a Mac application and need a good way to store structured data, but don't want to go to the trouble of setting up a SQL server. This also makes it great for developing on your own machine, but since its multi-connection functionality is limited, you're not going to be able to run a production app on it.
  • MySQL is a full-scale SQL server, and many large websites use it in production.
  • PostgreSQL is also a full-scale SQL server, used by other large websites in production. This is actually my personal favorite, but that distinction doesn't matter when you're just getting started.

Any of these will get you what you need: A place to store your Rails data and a way to start learning SQL bit-by-bit. You should optimize for ease of installation or deployment. So that means:

  • SQLite when you're working on your laptop
  • PostgreSQL if you're deploying to Heroku, since Heroku favors PostgreSQL over MySQL
  • PostgreSQL or MySQL if you're deploying somewhere else. Check their docs and use whatever they tell you to use.

And it's going to be fine, at first, to have the same app using different SQL engines in different environments. At some point you may end up using some really specific, optimized SQL that SQLite can't run, but that's not a newbie problem.
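
For what it's worth, here's a minimal sketch of how that can look in a Rails config/database.yml (the app name and credentials here are placeholders for illustration):

development:
  adapter: sqlite3
  database: db/development.sqlite3

production:
  adapter: postgresql
  database: myapp_production
  username: myapp
  password: supersecret

Rails reads whichever block matches the current environment, so the same codebase can run on SQLite locally and PostgreSQL in production without any code changes.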

Date: Sunday, 25 Nov 2012 16:48

So I switched to Vim, and now I love it.

For years, I was actually using jEdit, of all things, even in the face of continued mockery by other programmers. My reasoning was well-practiced: TextMate didn't support split-pane, all the multi-key control sequences in Emacs had helped give me RSI, and Vim was just too hard to learn. jEdit isn't very good at anything, but it's okay at lots of things, and for years it was fine.

But eventually, I took on a consulting gig where I was forced to learn Vim. And, as so many have promised, once I got over the immensely difficult beginning of the learning curve, I was hooked.

Beneath the text, a structure

I'm now one of those smug Vim partisans, and one of the cryptic things we like to say is that Insert mode is the weakest mode in Vim. So what the hell does that mean? It means that if you accept the (lunatic, counterintuitive) idea that you can't just insert the letter * by typing *, what you get is that every character on the keyboard becomes a way to manipulate text without having to prefix it with Control, Cmd, Option, etc. (In fact, the letter * searches for the next instance of the word that is currently under the cursor.)

When you edit source code, or any form of highly structured text, this matters, because over the course of your programming career you're far more likely to spend time navigating and editing existing text than inserting brand new text. So the promise of Vim is that if you optimize navigating and editing over inserting, your work will go faster. After months of practice, it does feel like I edit text far more quickly--that hitting Cmd/Ctrl/etc less often compensates for the up-front investment of learning these highly optimized keystroke sequences.

But it's definitely a strange mindset. In most text editors, you think of the document as a casual smear of characters on a screen, to be manipulated in a number of ways that are all okay but never extremely focused. But Vim assumes you're editing highly structured text for computers, and in some ways it pays more attention to the structure than the characters. So after a while it feels like you're operating an ancient, intricate machine that manipulates that structure, and that the text you see on the screen is just a side-effect of those manipulations.

Investments

Is it worth the time? To answer that question you have to get almost existential about your career: How many more decades of coding do you have in front of you? If you're planning on an IPO next year and then you're going to devote the rest of your life to your true passion of playing EVE Online, then maybe keep using your current text editor. But for most of us, a few weeks or more of hindered productivity might be worth the eventual gains.

As is often the case, it's not about raw, straight-line speed--it's about fewer chances to get distracted from the task at hand. Nobody ever codes at breakneck speed for 60 minutes straight. But when you're in the middle of a really thorny problem, maybe you'd be better off with a tool that's that much faster at finding/replacing/capitalizing/incrementing/etc, which might give you fewer chances to get distracted from the problem you're actually trying to solve.

As somebody who's used Vim for a little less than a year, that's what it feels like to me. Most of the time. I have to admit that once in a while the Vim part of my brain shuts down and I stare at the monitor for a few seconds. Those moments are happening less and less, though.

Beginner steps

When it came to learning Vim, here's what worked for me:

Practical Vim

Drew Neil's Practical Vim is, uh, the best book about a text editor I've ever read. It does a great job of explaining the concepts embedded inside of Vim. I skim through this every few months to try to remember even more tips, and can imagine myself doing that for years.

MacVim

As Yehuda recommends, I started in MacVim and not just raw Vim. At first you should let yourself use whatever crutches you need--mouse, arrow keys, Cmd-S--to help you feel like you can still do your work. I agree with Yehuda that tool dogmatism is going to be counterproductive if it makes the beginning of the learning process so painful that you give up.

And I still use the arrow keys, and don't really buy the argument that it's ever important to unlearn those.

Janus

As with Emacs, a lot of the power of Vim is in knowing which plugins & customizations to choose. Yehuda and Carl's Janus project is a pretty great place to start for this. I'd install it, and skim the README so you can at least know what sorts of features you're adding, even though you won't use them all for some time.

vim:essentials

Practical Vim is great for reading when you're on the subway or whatever, but you'll need something more boiled-down for day-to-day use. For a while I had this super-short vim:essentials page open all the time.

Extra credit: One intermediate step

After I got minimally familiar with Normal mode, I started hating the way that I would enter Insert mode, switch to my browser to look up something, and switch back to Vim forgetting which mode I was in. I entered the letter i into my code a lot.

I suspect many Vim users just get used to holding that state in their head, or never leaving the Vim window without hitting Esc first, but I decided to simply install a timeout which pulls out of Insert mode after 4 seconds of inactivity:

au CursorHoldI * stopinsert
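" CursorHoldI fires after 'updatetime' milliseconds of Insert-mode inactivity; the default of 4000 is where the 4 seconds comes from.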

As explained in Practical Vim, the best way to think of Insert mode is as a precise, transactional operation: Enter Insert mode, edit text, exit Insert mode. The timeout helped me get into that mindset quickly, and live in Normal mode, which is the place to see most of the gains from Vim.

This is an intermediate step, and you shouldn't try it right away. If you're a beginner you're probably not going to benefit from being in Normal mode all the time--if anything the frustration would be likely to make you give up on it. But once Vim starts feeling less disorienting, and you're ready to really learn what Practical Vim has been telling you, I'd give this a try.

Hope that's helpful. And I hope that after a month or two of this, you become as smug and self-assured about your text editor as I am today.

Date: Tuesday, 09 Oct 2012 23:28

Over time, technological progress makes it easier to write automated tests for familiar forms of technology.

Meanwhile, economic progress forces you to spend more time working with unfamiliar forms of technology.

Thus, the amount of hassle that automated testing causes you is constant.

Date: Friday, 06 Jul 2012 14:39

Looks like your Twitter bots need to be better at quote-escaping:

[Image: bots]

Date: Tuesday, 03 Jul 2012 22:57

My Goruco 2012 talk, "The Front-End Future", is now up. In it, I talk about thick-client development from a fairly macro point-of-view: Significant long-term benefits and drawbacks, the gains in fully embracing REST, etc.

I also talk a fair amount about the cultural issues facing Ruby and Rails programmers who may be spending more and more of their time in Javascript-land going forward. Programmers are people too: They have their own anxieties, desires, and values. Any look at this shift in web engineering has to treat programmers as people, not just as resources.

[Video: Francis Hwang - "The Front-End Future", from Gotham Ruby Conference on Vimeo]

As always, comments are welcome.

Date: Sunday, 24 Jun 2012 17:08

This isn't just the event New York Rubyists want. It's the event they deserve.

Thanks so much to all the co-organizers for pulling this off. Everybody else: See you next year, if not sooner.

Date: Sunday, 10 Jun 2012 00:02

And there's this, too: Ellipsifier is a Javascript library that truncates HTML. It will retain the tag structure, counting only visible characters in the resulting text.

new Ellipsifier("to be or not to be", 5).result
//              "to be&nbsp;&hellip;"
new Ellipsifier('to <strong>be or</strong> not to be', 20).result
//              "to <strong>be or</strong> not to be"
new Ellipsifier('to <strong>be or</strong> not to be', 5).result
//              "to <strong>be</strong>&nbsp;&hellip;"

Another chunk of code written with the good folks at HowAboutWe.

Date: Friday, 08 Jun 2012 19:18

The folks at HowAboutWe do a decent amount of HTML email, as you might expect for any site where people can send each other messages. Dissatisfied with the tools for developing HTML email, I wrote EasyMailPreview, a Ruby gem that makes it very easy to see the results of an HTML email in development mode. It auto-discovers mail methods and method arguments, and lets you run them on the fly in an admin-ish screen.

[Image: EasyMailPreview screenshot]

I've been using it for a while, and I find it eases the pain of developing HTML emails. Somewhat.

Date: Tuesday, 29 May 2012 00:23

I wrote a little something about Goruco on the Goruco site:

Let me make a confession: Before we hosted the first GORUCO, I kind of didn’t want to be bothered. NYC.rb had been a successful Meetup for years, and people would occasionally say “you know, we could have a great regional conference here.” Which made sense in theory, but just thinking about the work involved gave me a headache. I usually smiled politely and tried to change the subject.

It's worth a read if you're not already convinced this year will be the best Goruco evar. And if you have yet to buy your ticket.

Date: Tuesday, 22 May 2012 00:54

So, I'm speaking at Goruco this year. On The Front-End Future:

With the rise of Javascript MVC frameworks like Ember and Backbone, web programmers find themselves at a fork in the road. If they keep doing server-side web programming, they'll benefit from tried-and-true tools and techniques. If they jump into Javascript MVC, they may be able to offer a more responsive web experience, but at significant added development cost. Which should they choose?

This talk will address the strategic costs and benefits of using Javascript MVC today. I will touch on subjects such as development speed, usability, conceptual similarities with desktop and mobile applications, the decoupling of rendering and routing from server logic, and the state of the emerging Javascript MVC community. I will also discuss the impact of this seismic change on Ruby, Rails, and your career as a software engineer.

Nobody should confuse me with a Javascript expert, and that's not why I'm giving this talk. There are many talks you can see that focus on the specifics of implementation that are being hashed out today. With my talk, I will be drawing out the macro trends in our field that affect the products we build, and the careers we craft.

In particular, I feel like the move to thick-client web apps is giving the Ruby and Rails community a bit of existential paralysis--we should be talking about this far more, and meeting this change head-on. The future is uncertain, but it is also bright.

Goruco is on Saturday, June 23. This is our sixth year, and without giving away the rest of the speakers, I think this might quite possibly be our best program yet. If you want to join us, tickets are still available.

Date: Thursday, 29 Mar 2012 12:00

Awesome.

Last Friday, the bot went a bit crazy and started throwing "That's what she said" into the conversation with no apparent rhyme or reason. Finally, I had had enough. And then it came to me: I would write my OWN bot, that responded to TWSS with a quotation from a notable woman. If they are so keen on what she said, why don’t we get educated about what she really had to say. And so the “whatshereallysaid” bot was born. It might annoy the guys into shutting off the TWSS bot, or we might all learn about notable women. It’s a win either way, in my books!

As a side note, I've always found "That's what she said" to be annoying humor, not just because it can be sexist but because it's also just the dumbest, sloppiest humor you can think of. It's used by Michael Scott in "The Office" ironically, as an example of what a socially inept man-child might think of as funny. When and how it got stripped of that irony I'll never know.

But is it too much to ask people to be less stupid than Michael Scott?

Date: Friday, 16 Mar 2012 00:13

This was surprisingly underdocumented out in the world. If you've got a Rails controller that ever uses request.body.read, you can set that in a functional test like so:

class ArticlesControllerTest < ActionController::TestCase
  test "creates a new article" do
    attributes = {
      :subject => "10 ways to get traffic by writing blog posts about arcane Rails tips",
      :body => "Just kidding, it's totally impossible"
    }
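    # Set the raw request body that the controller will read via request.body.read: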
    @request.env['RAW_POST_DATA'] = attributes.to_json
    post(:create)
    assert Article.last
  end
end

@request is an instance variable defined inside of ActionController::TestCase, and used when you make a test request by calling post, get, etc.
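
For context, here's a hypothetical controller action of the sort this test exercises; the JSON-parsing create method is my own sketch, not from any particular app:

class ArticlesController < ApplicationController
  def create
    # Parse the raw body ourselves instead of relying on Rails' params parsing
    attributes = JSON.parse(request.body.read)
    Article.create!(attributes)
    head :created
  end
end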

Date: Tuesday, 06 Mar 2012 03:00

Martin Sutherland responds to my rich client thoughts with some insightful caveats:

Mobile browsers on devices not designed in Cupertino

First of all: mobile. If you're using an iPhone 4(S), you might not realize that a lot of web browsers on mobile devices are abominably slow. In terms of getting the first page of your app/site up and running on a mobile browser, an HTML page rendered on the server is going to beat a client-side JS application hands down in at least 90% of cases.

I agree that this is something worth considering. But, depending on your application and your audience's technical situation, you may not consider those mobile browsers worth trying to reach, or at least right away. Having recently been a BlackBerry user, I can attest that after a while, some users just stop using the mobile browser unless they're really in a pinch. So when your web site is only as broken as 50% of the rest of the web through that particular device & browser, you can probably get away with it.

There are no doubt sites specialized enough, and/or just big enough, that non-JS mobile support on those benighted platforms should be addressed. In these cases, perhaps one option would be to build a thin intermediary site that consumes the REST API and offers a stripped-down web interface. May I suggest redirecting to a subdomain like noniphone4mobilebrowsers.mywebapp.com?

This scenario exposes the fact that you can always have two different ways to access the data on the server, with one piggy-backing off of the other, but choosing which method is the canonical form of access has pretty big implications for how you architect everything. It used to be that you made a thin-client web app with mostly HTML pages, and then wrote the API later. But it may be time to write web apps with the API at the core, and then add non-JS support later, as an edge case.

And it's probably worth noting that to a large extent, this is a discussion about a tradeoff between two constrained resources: mobile device capability (CPU, memory, browser optimizations) and mobile network quality. So is it a cop-out to point to the rate of change here? Mobile device capability is improving at a much faster rate than mobile network quality. So maybe it makes sense to lean into that by building a web app for the idealized high-power mobile browser from the start.

URLs for humans and for others

Secondly, there's the small matter of linkability and history management. If there is any part of your application that you want people to jump to directly, either as a bookmark for their own benefit, or as a link to hand out to others, it has to have a URL. Using hash fragments for navigation may be a well-established pattern, but it's still a hack. So long as you're using hash fragments, that URL can only be run on the client. pushState() and replaceState() can fix this, but we're still a little while away from these methods being universally available (IE10).

It's probably worth noting that there are really two primary audiences for URLs: Humans, and search engines. And I think the problem is far smaller for the first than the second.

Humans use URLs--in emails, bookmarks, and the back button, but the endpoint of all that link manipulation is the same: Eventually it gets entered into a browser with decent JS support and rendered into something you read with your eyes. In my (admittedly limited) experience using Backbone, the Router and History objects handled these use cases, including the back button, fairly well, and without a lot of engineering. I'd have to believe they're handled easily enough in the other frameworks as well. Yes, there are the still-in-flux UX conventions of what actions are significant enough to create a new point in the history, but this feels like an acceptable speed bump.
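
To make that concrete, here's a minimal sketch of hash-fragment routing with a Backbone Router; the route names and view-rendering stubs are hypothetical:

var AppRouter = Backbone.Router.extend({
  routes: {
    '': 'index',          // #          -> list view
    'notes/:id': 'show'   // #notes/42  -> detail view
  },
  index: function() { /* render the list view */ },
  show: function(id) { /* render the note with this id */ }
});

new AppRouter();
// Start listening for hashchange events; the back button now moves
// between #-fragment states without a full page load.
Backbone.history.start();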

(And yes, navigation through hash fragments is a regrettable hack and hopefully will be behind us all soon enough. But is it really a worse hack than using a Microsoft browser in the first place?)

Search engines are a harder case. I haven't been able to find up-to-date info about how much Javascript the Googlebot uses, but I'd have to believe that you'd be a little reluctant to run a full browser environment if you were trying to crawl every URL in the entire universe. I suspect there are going to be some non-trivial stumbles over this one, especially from an engineering productivity point of view. It's not super-hard to imagine how you'd have one master route on the server that figured out how to render the page first server-side, and then do some work to boot the rich client into that state right from the start. But does this mean we'll have to support two versions of each view forever--a server-side version and a client-side version? I guess that's one more argument for using server-side JS, but for a lot of people (including myself) that's going to be a non-starter for some time.

One more cop-out might apply here: A lot of pages in web applications are access-restricted, and thus not visible to Googlebot anyway. So that reduces the pain. It certainly doesn't eliminate it, though. I've been starting to wonder if there's a need for, say, a stripped-down Node.js proxy that can 1) share JS views and templates with a rich client and 2) render the full HTML as a response to a single HTTP GET. But, uh, I'm pretty sure I'm not going to write that. For the time being, I think I'll just keep toying around with access-restricted web applications and try not to worry too much about the Googlebots.

(I guess there's a third audience here that I can't speak to at all: Vision-impaired humans. Do rich client apps make accessibility prohibitively difficult? I guess this could be an issue for, say, government sites, but to be honest I can't remember the last time I heard this issue raised for a commercial web app.)

Let's see if I know the ledge, or if anybody else does for that matter

One reason I had for writing yesterday's post was because I sort of wanted to be convinced that I was wrong. I've been having these rich-client thoughts for a while, but holding off from making that jump because of how much work would be involved. So I wrote my thoughts down, hoping that friends and colleagues could help me focus my thinking.

But you know what? There have been a few people (such as Martin) voicing nuanced concerns, but there hasn't been anybody telling me, publicly or privately, that it's a terrible idea. And, uh, I know a lot of server-side web programmers.

Date: Monday, 05 Mar 2012 00:37

For the sake of discussion, I'm going to make a recommendation about the state of web application development today:

If you are writing a new web application, you should make it a rich-client application from the start. Your servers should not generate any HTML. You should do all that work in the browser with a Javascript framework such as Backbone.js or Ember.js, and the server should only talk to the browser via a REST API.

I'm not saying I believe this idea 100%, mind you. But I feel like we may be reaching some sort of specific tipping point, and I'm interested in teasing out why this would or wouldn't be a good idea.

There's bad news, and there's good news

There are plenty of good reasons to avoid jumping on this particular bandwagon:

  • Many of the tools, patterns, and practices are immature, and your development speed will be significantly slower than if you build your web app using more traditional techniques.
  • There are way fewer serious front-end engineers than there are serious Rails/PHP/Python/.NET engineers, and if you have to hire into this codebase you're going to either pay more for those engineers or hire other engineers and expect some non-trivial ramp-up time.
  • The Javascript framework space is new, and some significant churn is to be expected. I've used (and like) Backbone.js, but should I be concerned about the possibility of choosing the wrong horse, and then being faced with a massive amount of legacy code to port years down the road?
  • Although the Architects among us love nothing more than defining an API, I'm a firm believer that decoupling isn't free: When the code is brand new and the requirements poorly understood, it can waste engineering time and inhibit agility to have to make changes across architectural boundaries. Yes, passing an instance variable from a Rails controller to a view is sort of sloppy and will probably be insufficient one day, but when you're just getting started with a small codebase, that sloppiness works in your favor.

But no matter how rickety the wheels look, there are some good reasons to jump on this bandwagon right now:

  • One day, you'll want all those APIs defined anyway, for iPhone apps, Android apps, mobile-optimized sites, partners, power users, etc. In fact, those of you starting mobile-centric products have the moral clarity of needing an API from day one, so this decision might already be made for you ...
  • If you write both a traditional web app and an API, then you have two places where the business logic could end up living, which of course is a petri dish for code repetition. Yes, I know you are a smart and diligent enough programmer that you'd never copy and paste anything from one side to another when you're feeling rushed. But please spare a thought for your fellow coders, who aren't so blessed with your work ethic or your mighty intellect.
  • Very soon, your customers will expect your web site to be responsive and interactive like Gmail. And the only way you can do this in a web app is going to be lots of Javascript, and a lot of Javascript without a proper framework is way worse than with a proper framework.
  • Right this minute, there are successful, serious web applications whose teams are thinking "God, it would be awesome if we could rewrite this in Backbone/Ember/etc." But they might never do it, because they know the pain of that transition will be massive. So, it might make sense for your brand new site to future-proof itself by starting as a rich-client app from day one.

"You Aren't Gonna Need It" vs. "It's Gonna Suck To Get It Later"

A lot of this comes down to cost of change. If it were the case that you could make this transition incrementally, without stress, and without a ton of work, then maybe you could make the argument that going rich-client early on is just gilding the lily. But is that possible? Every time I've ever had the transition discussion with someone it ends up being vaguely stressy until we decide not to talk about it ever again.

Then again, there are probably teams taking this on right now, and they may discover good patterns to help the rest of us. Maybe we'll see a burst of blog posts titled "how we moved our traditional web app to Backbone.js without losing our hair or our money" in the near future. But if I had to bet, I'd guess that it will always be a tough nut to crack.

Nobody ever said "I like it, but I wish it involved more waiting around"

This all stems from my entirely subjective, unprovable opinion that rich-client applications are just better. Responsiveness doesn't just mean that the user can do things faster. It means that she'll like using the application more. It means she can push ahead to her goal with fewer chances to lose focus. It means that the data and the interactions will feel more concrete and reliable. It means, fairly or unfairly, that she's going to trust the site more, and maybe even be more likely to pay for something.

This isn't some pie-in-the-sky futurism: Users are being trained to expect this right now. They don't even know what to call it, but applications like Gmail are feeding them the expectation that web sites should all be this responsive.

It wasn't long ago that making these apps was insanely difficult, so most of us shrugged our shoulders at Gmail and figured those sorts of problems were above our pay grade. But frameworks like Backbone and Ember are steadily pushing the cost of implementation downward. Your competitor may not have a rich-client app today, but maybe they'll have one tomorrow. When that happens, how many of your users do you think you'll lose?

I could just be jumping the gun here. But allow me a prediction (and feel free to tease me when it doesn't happen): In five years, the vast majority of significant web applications will be rich-client. I can't see any reason that the web wouldn't move forward in this way.

If that prediction is right, then right this minute we're in an uncomfortable position: We can see the future, but we can't see an easy path to it yet. Maybe it's on us to build it, or maybe we'd be better off to grit our teeth and stay back from the fray.

I'm not 100% sure what I think myself, but it's a topic worth discussing now. Thoughts?
