Date: Friday, 14 Jan 2011 21:05

The other day I was reading something on Twitter and I saw someone use a two-letter .io domain as a URL shortener and was pretty surprised. I can't remember what domain it was, but I immediately thought "Wow, they're letting people register two-letter domains. Interesting..."

I finally just remembered about it now, and after throwing together a quick script to check two-letter combos, I found hundreds that were available. Being the domain junkie I am, I just picked up rb.io, which is a nice little domain with my initials, and 33% shorter than my previous "tiny" domain rab.cc (the 'a' is for my middle name).
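(In case anyone wants to run a similar check, here's a rough sketch of the kind of script I mean, written in Node.js. It talks to the .io whois server directly on port 43 - the server name whois.nic.io and the "not found" test are assumptions, so check the real registry output before trusting it, and a real script should throttle the lookups.)

var net = require('net');

// Ask the whois server about one domain and report whether the reply
// looks like a "no match" (i.e. the name appears to be unregistered).
function checkDomain(domain, callback) {
    var reply = '';
    var socket = net.connect(43, 'whois.nic.io', function() {
        socket.write(domain + '\r\n');
    });
    socket.on('data', function(chunk) { reply += chunk; });
    socket.on('end', function() {
        callback(domain, /not found|no match/i.test(reply));
    });
}

// Try every two-letter combination (a real script would throttle these).
var letters = 'abcdefghijklmnopqrstuvwxyz'.split('');
letters.forEach(function(a) {
    letters.forEach(function(b) {
        checkDomain(a + b + '.io', function(domain, available) {
            if (available) console.log(domain + ' looks available');
        });
    });
});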

IO is such a nice TLD! It's so techy, yet also pretty commonly understood. Obviously, all the really good ones are taken - like on.io or in.io - but there are still some neat ones left: "ay.io" would be great for domain hacking, like "saturd.ay.io" etc. They're not cheap, which is why I assume there are still so many out there, but even so, I can't imagine they'll last for long.

Here's the list I made of available domains as of this instant, in case you're in the market:

AH.io, AY.io, BQ.io, CB.io, CJ.io, CQ.io, DF.io, DH.io, DI.io, DP.io, DQ.io, DT.io, DU.io, DV.io, DW.io, DX.io, DY.io, EA.io, EB.io, EF.io, EI.io, EJ.io, EM.io, EO.io, EW.io, EY.io, FA.io, FC.io, FD.io, FE.io, FG.io, FH.io, FN.io, FP.io, FQ.io, FS.io, FT.io, FV.io, FW.io, FZ.io, GJ.io, GK.io, GV.io, GX.io, GZ.io, HA.io, HB.io, HC.io, HD.io, HE.io, HF.io, HH.io, HJ.io, HL.io, HP.io, HQ.io, HS.io, HV.io, HW.io, HX.io, HY.io, HZ.io, IB.io, IF.io, IG.io, IH.io, IJ.io, IK.io, IU.io, IV.io, IY.io, JA.io, JC.io, JD.io, JF.io, JG.io, JH.io, JI.io, JK.io, JQ.io, JU.io, JV.io, JW.io, JX.io, JY.io, JZ.io, KA.io, KC.io, KF.io, KJ.io, KL.io, KQ.io, KS.io, KU.io, KV.io, KX.io, LD.io, LE.io, LF.io, LG.io, LH.io, LJ.io, LM.io, LP.io, LQ.io, LW.io, LX.io, LZ.io, MB.io, MJ.io, NJ.io, NK.io, NM.io, NQ.io, NT.io, NV.io, NX.io, OA.io, OB.io, OE.io, OG.io, OJ.io, OQ.io, OT.io, OV.io, OW.io, OY.io, PD.io, PJ.io, PQ.io, PU.io, PV.io, PZ.io, QB.io, QC.io, QD.io, QE.io, QF.io, QG.io, QH.io, QJ.io, QK.io, QL.io, QM.io, QN.io, QO.io, QP.io, QT.io, QW.io, QX.io, QY.io, QZ.io, RA.io, RF.io, RG.io, RH.io, RJ.io, RK.io, RL.io, RN.io, RP.io, RQ.io, RS.io, RV.io, RY.io, SF.io, SQ.io, SW.io, SX.io, TE.io, TI.io, TL.io, TQ.io, TS.io, TU.io, TX.io, TY.io, UB.io, UC.io, UE.io, UF.io, UH.io, UJ.io, UO.io, UQ.io, UT.io, UV.io, UW.io, VF.io, VH.io, VJ.io, VK.io, VL.io, VP.io, VQ.io, VT.io, VV.io, VX.io, VY.io, VZ.io, WA.io, WB.io, WC.io, WD.io, WG.io, WH.io, WI.io, WJ.io, WL.io, WN.io, WP.io, WQ.io, WR.io, WT.io, WU.io, WV.io, WX.io, WY.io, WZ.io, XA.io, XB.io, XC.io, XD.io, XE.io, XF.io, XG.io, XH.io, XJ.io, XK.io, XM.io, XP.io, XQ.io, XR.io, XS.io, XT.io, XU.io, XV.io, XW.io, XZ.io, YA.io, YB.io, YC.io, YD.io, YF.io, YG.io, YH.io, YI.io, YJ.io, YK.io, YL.io, YM.io, YN.io, YP.io, YQ.io, YR.io, YS.io, YV.io, YW.io, YX.io, YZ.io, ZB.io, ZC.io, ZD.io, ZE.io, ZF.io, ZG.io, ZH.io, ZI.io, ZJ.io, ZK.io, ZL.io, ZN.io, ZO.io, ZP.io, ZQ.io, ZR.io, ZS.io, ZT.io, ZU.io, ZV.io, ZX.io, ZY.io

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Date: Saturday, 08 Jan 2011 21:42

[image]

When I first read about OnLive, the streaming video game system, I was more than skeptical. It just seemed impossible - playing videogames online without downloading any software!? No way. I was sure that what I saw was just a technology demonstration, and that it'd never see the light of day. "Yeah, that's interesting," I thought, "but it's not like it'll ever work."

Wow, was I wrong.

I was amazed when I saw a month or so ago that they went live and you could actually order the system. Seriously? It's real? Interesting! Then I was amazed at the price point - around $100! My initial thought was that a low price would be the only way they'd actually be competitive against Sony, Microsoft and Nintendo, and lo and behold, that's what they're doing. Then I thought, "Well, the games will suck, or be weird PC-only versions of 5 year old games..." Borderlands? Unreal Tournament? Lego Batman?! Batman: Arkham Asylum?!?! Holy crap! I just bought the last one for my brother for Christmas, so that's pretty damn new to me.

Then OnLive released their free iPad app a couple weeks ago which has the same "Arena" video portal they have on their system, where you can view live games as they're being played on the network. It's a very, very cool app - and supposedly they're going to be expanding it to actually play games as well (awesome!). In hindsight, I realize that's how they got me to sign up for a username/password, which was a brilliant marketing idea on their part. Targeted exactly the sort of early adopters they're looking for. Once I saw the live videos actually streaming to my iPad, I was pretty convinced this was something definitely worth checking out.

Then a couple days ago OnLive announced a price drop for CES - $66 for the system, plus a free game. I couldn't resist any more. For the price of a new XBox game, I could get a whole new gaming system which my brain still says should be impossible. Bargain! Despite the fact that I already have an XBox and a Wii, and barely any free time to play either of them, I handed over my credit card info and the system arrived yesterday.

It's amazing.

The system itself isn't much bigger than a Roku box or an Apple TV box (though heavier). I mean, it's *tiny*. OnLive needs to put pics of the box next to some pencils or something on their website, because even though I saw its size compared to the controller, I just mentally enlarged it to the size of an XBox or PS3. It's half the size of a Wii, if not smaller.

To get it working, I plugged in the power, HDMI and network cables, synced the controller (which is *very* nice btw - it feels solid, on par with an XBox 360 controller in terms of quality, not like the cheesy Nintendo or MadCatz plasticky ones) and then I was good to go. I actually just swapped out my Roku box, as it fits in the same spot and needs the same connections. (I could use WiFi too, but obviously I get better quality with a wire.)

I logged into the system, it did a quick system update (expected), and then I was able to start checking out the menu system and - incredibly, astoundingly, impossibly - start playing games instantly. I played Lego Batman, Prince of Persia, a little bit of Borderlands and Unreal Tournament, and they all worked perfectly.

You can definitely detect a bit of the compression going on in the images - but even with a tiny bit of fuzziness, the resolution for these games was *still* better than what you get out of the Wii.

Maybe if I think about the technology - the realities of the increase in broadband speeds to homes, the lower cost of creating a data center, more powerful CPUs and GPUs, the massive leap in data storage, and the fact that we've been able to use similar technologies like Windows Remote Desktop for years - then maybe I shouldn't be so surprised. But for some reason I am. It just seems absolutely amazing to me that they were able to get this all organized, working, and shipped to market, with that "just works" consumer level of friendliness and price point.

One of the first things I learned when I started using the system is that the front of the OnLive console has two USB ports, but unlike the XBox, PS3 or Wii USB ports, these are made to be used with PC-compatible controllers or a mouse and keyboard. This is great, as I already have a Logitech USB controller that I used to play emulators with, saving me the cost of having to buy Yet Another Proprietary Console Controller [Update: Actually, my old controller didn't work... not sure why, it's supposed to. Wireless mouse was laggy too...]. In fact, you can play all the games available so far with just a mouse and keyboard (most likely, they were all PC games to begin with).

This shows where the system comes from, and also where it's going. Having the extra box is handy: I can bring it into the other room, connect it right where my computer normally is, and start playing games away from the TV without any setup hassle. I could bring it on the road during vacations or business trips. I could probably already do that with my normal PC, but I don't - especially since my laptop is a couple years old already and would definitely not be able to handle the graphics of something like Batman: Arkham Asylum.

This can't be overstated, in my opinion. Rather than having to upgrade to a better laptop or desktop machine, I just spent $66 and have a gaming system anywhere I want it. Or better yet, OnLive has a PC/Mac version of their client as well.

Thinking about it this way, it suddenly makes me realize that running PC versions of games isn't the liability I thought it was initially when I was thinking only of console gaming. Yet another misconception on my part... Imagine if OnLive were able to sign up Blizzard and get World of Warcraft and StarCraft on their system. Or the games in Valve's Steam. There are insanely good games for the PC that have millions of users, but also require some pretty decent hardware to run. With OnLive though, running on their console box, an old PC, or hey, even a mobile phone or tablet, that cost goes away. Pretty awesome. This is what they've been trying to tell us, but I finally realized it wasn't bullshit.

That said, the OnLive box itself will most definitely disappear. OnLive just announced deals to integrate the system in various consumer electronics. This is fine with me, I mean, I've already run out of spots to put electronics boxes around my TV - a PC for playing downloaded videos, a streaming Roku box, XBox, Wii, Cable/DVR... Gah! Having on demand, high-end gaming built right into my next TV or cable box (next to video streaming on demand of course) is fine with me.

Very cool stuff. I love seeing new technology move so fast, and see something work that I previously didn't realize was possible. And hey, it also involves video games too, win-win!

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Date: Tuesday, 07 Dec 2010 21:27

[image]

Let's say I download some plugins for my web browser - say AdBlock, or Readability - and then visit your website, ignoring your ads and simply grabbing the content. This, to my knowledge, is not specifically illegal. It may be against your site's terms and conditions, but that's between us as individual parties, not against the law in general. Especially since I'm not redistributing the copyrighted content. Also, the purveyors of these plugins and tools aren't liable for any wrongdoing either as far as I can tell. In fact Apple added Readability to their Safari web browser. I doubt they would have done that had they any fear of copyright infringement.

Now, let's say I set up similar tools on my own personal server, and I use that server to grab web content, filter it, and then deliver it to me without advertising. Is that illegal? What's the difference between using my local web browser to do the filtering and using a server?

Let's say you set up that server, and offer that service to people who sign up for your service. Now are you at fault? This is what FlipBoard seems to be doing, and on the surface, it seems to be illegal. They are literally copying the content and re-delivering it to end users, modified in a way that's against the copyright holder's interests.

The question in my mind is now that we're entering into an age of cloud computing, at what point does the service provider become simply a conduit for individuals, rather than a separate entity? If Flipboard did everything they're doing now, but instead of doing it on the server, they simply added all the spidering and filtering in the client, would that somehow be different?

What about web-based email services? They filter out images and advertising as a basic feature - are they infringing on the copyrights of the senders? If I send out an advertising-supported email that a user signed up for, and your email client (web or local) strips out those ads, who's at fault?

There have got to be other examples that I can't think of right now, but it's an interesting question in my mind. On one hand, I don't like the idea of people taking *my* content for free (unless I give it to them that way, like this blog does now), but I also regularly use AdBlock on all my browsers because of the abuse of advertising by content providers. (Though, hey, it's their content, so I guess it's not abuse really, but as a user, things like auto-playing videos, popup ads, etc. feel an awful lot *like* abuse to me.)

I am a firm believer that the age of "intelligence in the cloud" is coming quickly. We'll soon all have personal, autonomous agents that live in the cloud, gathering, filtering, prioritizing, parsing and organizing the ever-increasing amount of information that is generated daily from everything and everyone around us. We're at the point now where every single friend *you've ever had*, everyone you've ever worked with past and present and every celebrity, politician, brand, author or artist you like are giving hourly status updates, and it's already overwhelming. When our cars, refrigerators, toasters, TVs, offices, homes, yards, streets, stoplights and more are all cranking out updates as well, it will be impossible to keep track.

But here's the thing: Is that Tweet you wrote copyrighted? Is Google infringing on your rights when they distribute it via Reader? What about your website? What's the difference between "parsing" your website with my browser and "scraping" your website with my server? Why does it matter where the computer lives that's doing the processing? Flipboard blurs the line between browser and news reader, and strips out advertising when it displays sites using its "custom client" - but if your ads are all in Flash, doesn't the iPhone do the same already, since it doesn't support Flash content? When you use Readability in Safari it also happens. Why is it different?

It'll be interesting to see how it all progresses culturally and legally as cloud services continue to take off, and new ways of computing - mobile, tablets, web apps, etc. - push the status quo again and again.

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Date: Saturday, 06 Nov 2010 20:14

[image]

So I got the XBox 360 Kinect on launch day, and assumed it would come with some sort of mount to put it on top of the TV, but it didn't. Supposedly, it works just sitting on the base in front of your TV as well, but my experience using it for a couple days says that it definitely doesn't work well sitting below the TV. At home I have barely the 7-8 feet of clearance needed between my couch and monitor - setting the camera below the TV just didn't work. I read that if you mount the camera up high above the TV you can gain a few feet of clearance, so last night I started to figure out what to do to mount it.

I don't know if you've noticed, but none of the Kinect mounting kits are available for sale yet - either on Microsoft's own site or on Amazon. I pre-ordered the Kinect "wall-mount" kit for $15, but it won't ship for a couple more weeks.

Checking out the images of the Kinect TV mount, I was struck by how it looks like a tripod. So the first thing I did was dig out my video camera tripod to see if there was a good way to set the Kinect camera on top of it without it falling over. If you haven't seen a Kinect camera in action yet, it actually has a small motor in it which moves the camera around a bit, so it has to be on a level, stable platform.

While I was digging around in my gadget box looking for something to attach the camera to the tripod, I ran across a universal GPS car-mount kit that I bought this summer for a camping trip. I actually never really used it, because the TomTom GPS I got at the same time had its own mounting stuff which worked great. The extra generic kit had a bunch of attachments for iPhones, iPods and various other gadgets, so I decided to hold on to it in case I wanted to use it for something else.

So! Looking at it, I realized that it was *perfect* for mounting the Kinect camera. The first thing I did was use the sticky base plate that came with it, and attached that to a spot behind my TV. (You can see it in the pictures below.) This thing is obviously made to stick to any oddly-shaped dashboard and not move, even in hot sun, so it's solid. Then I took the curved bracket with the industrial suction cup and attached it to the base plate. Again, it's meant to hang upside-down, so being only slightly tilted, it's also solidly attached. I could probably lift my TV up using it as a handle - seriously.

The last part was trying to figure out which of the attachments best suited the base of the Kinect. The GPS kit comes with several sticky squares which you can use in case nothing else fits, but I discovered something even better! The iPod nano adapter's plastic molding fits the base of the Kinect *perfectly*. Seriously, it's exactly the right dimensions and sits tightly. If it was a little wider, it'd be like it was made for it. To make sure it had a bit more stability, I put some of that removable sticky putty on the bottom of the Kinect base.

It's awesome. Seriously, the look of the whole unit is like it was made to sit there. And the curve of the mounting arm means I can place it towards the back of the TV, giving just a few more valuable inches of space to the whole thing.

The result is that the Kinect works 100x better than before. Also, the snapshots it takes are much nicer as well, as they're not distorted from being taken at such a low position. It's a fantastic solution if you ask me. If you happen to have a Kinect and want to mount it before the official, clunkier mounts are available, this is the way to go. I'm going to test it for a few more days, but I think I may end up cancelling the wall mount kit, as this thing seems to fit the need.

More pics below.

-Russ

[image] [image] [image] [image] [image] [image] [image]

[image]

Author: "Russell Beattie (russ@russellbeattie.com)"
Date: Thursday, 21 Oct 2010 18:40

[image]

A few years ago, I had an idea that was essentially a reaction to the introduction of "microformats" - embedding useful data into web pages by using "semantically correct" tags around information like address info or calendar stuff, and then using CSS to make that content blend into the page. The problem (to me) is that microformats were really stretching some of the tag definitions, and there was no real organizing scheme - all the different specs seemed like one-offs. I started messing with a parser, and just gave up because you basically had to hard-code the tags you were looking for, in the order they were supposed to be found.

So my idea was something I called "microschema". It was essentially to do the same sort of data marking, but by overloading HTML's class attribute to do it. This works because class can contain multiple entries separated by a space - some could be used by CSS for formatting, the others for denoting key-names for the data in the element.

I wrote a long email to someone (who never responded - thanks!), but never got around to posting about it on my blog, and then moved on. I dug out the email - here's a bit of it:

Microschema - a proposal for standard markup rules to better enable Microformats

The general idea is to define a set of concrete rules for extracting semantic data from a web page, using low touch approaches to existing standards and practices that have been pioneered by the Microformats group. Instead of using something as complex as RDF or applying arbitrary new namespaces on top of XHTML in order to give the underlying markup more meaning, they instead base their formats on existing XHTML elements and attributes enabling easier implementation and faster adoption.

The problem, however, with Microformats as they are currently created is that they are defined one at a time, using a set of design patterns as a guide, rather than building on a solid set of basic parser rules. Not only is this process slow, but it's error-prone and not scalable - each new microformat will have its own quirks and need its own specifically designed parser in order to interpret it, rather than using a more generic yet standardized system which can be coded into a parser once and re-used. This is what the Microschema ruleset aims to do - help make Microformat rules clearer, save time in creating new microformats, and spur adoption as well. Microschema isn't a generic set of rules in the sense of a full-on schema, with ordering and value constraints, in the same way that microformats are not meant to replace full-on standards-body created data specs.

Simply put, Microschema is a set of practices that people and parsers can follow to unambiguously identify information on a web page by clearly marking field-value pairs inside normal tags. Microformats can then build on top of these rules to group those pairs into data standards such as vCard or vCal.

Microschema is an effort to take interpretation out of Microformats. Instead of religiously trying to divine the true meaning of the original HTML spec and applying appropriate new applications (i.e. address, abbr, etc.), Microschema is simply about using the tools at hand to create concrete rules about parsing meta-data out of XHTML in a standard, open way.

Example:

        <div>
        <span class="tel home">1-123-123-1234</span>
        <span class="tel office">1-123-123-1235</span>
        <span class="tel office">1-123-123-1236</span>
        </div>
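(To make the parsing side of that concrete, here's a rough browser-side sketch of how a generic parser could pull those field-value pairs back out of any page using nothing but the class attribute - the function name and the list of known fields are made up for illustration:)

// Sketch of a generic "microschema" reader: any element whose class list
// contains a known field name becomes a field/value pair; the remaining
// class tokens (e.g. "home", "office") act as qualifiers.
function extractPairs(root, knownFields) {
    var pairs = [];
    var elements = root.getElementsByTagName('*');
    for (var i = 0; i < elements.length; i++) {
        var tokens = (elements[i].className || '').split(/\s+/);
        for (var j = 0; j < tokens.length; j++) {
            if (knownFields.indexOf(tokens[j]) !== -1) {
                pairs.push({ field: tokens[j], classes: tokens, value: elements[i].textContent });
            }
        }
    }
    return pairs;
}

// e.g. extractPairs(document, ['tel', 'adr', 'email']);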

I actually didn't throw away the idea completely - it led to some of the stuff I worked on within Mowser (my mobile transcoding startup, for those with short memories). Re-using the "class name as data marker" idea, I wanted to use class names as hints for the transcoder to do things like hide areas of a page from being processed (like sidebars, embedded Flash, etc.) or to show certain areas only to mobiles, etc. I don't think I ever got it fully implemented though - I just relied on handheld CSS to do most of the work, but the core of the idea was there: embed key-field information in the class attributes to help a client or server-side parser better analyze and present the data inside the marked tags. Just recently, in fact, Instapaper started supporting this exact idea, so it's obviously a concept that is very useful.

This brings us to the future - I ran across an article on HTML5 Microdata the other day in Webmonkey (they're still around!??!) called Microdata: HTML5's Best-Kept Secret, and what do you know? The HTML5 folks have added in some new rules to let developers add in arbitrary attribute names to any tag, allowing them to become key-names for data within the tags. Well, hey! Would you look at that! No need to clumsily overload the class attribute any more, now you can specifically mark attributes as belonging to one type of "vocabulary" (aka "namespace"). Apparently, Google is already parsing Microdata when it's spidering web pages.

Here's an example:

<div itemscope itemtype="http://data-vocabulary.org/Organization">
    <h1 itemprop="name">Hendershot's Coffee Bar</h1>
    <p itemprop="address" itemscope itemtype="http://data-vocabulary.org/Address">
      <span itemprop="street-address">1560 Oglethorpe Ave</span>,
      <span itemprop="locality">Athens</span>,
      <span itemprop="region">GA</span>.
    </p>
</div>

That's pretty cool. I'm a firm believer in the "One Web" school of thought when it comes to web pages (i.e. mobile or separate versions of websites are inherently inferior to the main desktop version, and thus users will naturally demand full-browser capabilities, rather than settle for lesser quality, regardless of convenience.). But I think there is probably quite a bit that can be done to help enhance the experience - say on a mobile handset, tablet or television - by embedding additional data within Microdata.

Then again, it's been a few years since I was thinking about this a lot, and technology trends have moved on. I wonder if this sort of "hidden semantic identifier" within HTML is actually even really needed at all. Look at how Facebook implemented their various widgets and APIs using FBML (or now XFBML). It's their own proprietary markup language that is embedded directly into regular HTML pages. The FBML markup is ignored by the browser, but picked out by their Javascript and processed into visible widgets when the page is rendered.

If you look at it, the fundamental idea is exactly the same - humans get one form of the information contained within the markup, and the computers get a different version that can be used to help the presentation, functionality, or for data processing. Why bother with embedded attributes like Microdata at all when it could be as simple as developers deciding on some semi-standard tags like <data name="key">value</data> which browsers will ignore, but parsers can use?
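(As a sketch of how little code that would take - the tag and attribute names here are just as hypothetical as they are in the sentence above:)

// Hypothetical: collect <data name="key">value</data> tags that the browser
// ignores and hand the key/value pairs to whatever wants them.
function readDataTags(root) {
    var values = {};
    var tags = root.getElementsByTagName('data');
    for (var i = 0; i < tags.length; i++) {
        var key = tags[i].getAttribute('name');
        if (key) {
            values[key] = tags[i].textContent;
        }
    }
    return values;
}

// e.g. var pageData = readDataTags(document);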

One thing is for sure, embedded RDF, Microformats and various other semantic namespace ideas really haven't worked - so it's encouraging to see this obviously important concept still evolving. It'll be interesting to see in a few years whether Microdata has taken off, or whether it too disappears into technical obscurity like the various schemes preceding it.

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Date: Wednesday, 29 Sep 2010 08:39

So cool - Amazon now lets you embed book sample chapters on the web. Awesomeness... I totally want to play with this more... I grabbed the ASIN number from the Ultimate Hitchhiker's Guide and embedded it here to test:

Very cool.

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Date: Wednesday, 01 Sep 2010 21:43

[image]

So I find it very amusing that a few weeks after I write about the death of RSS feeds, Twitter goes and disables Basic Authentication on their site, and doesn't have any immediate work-around for a user's authenticated timeline feed. And apparently, no one really cares, either. I'm talking about the feed that's *still* linked to from the orange feed icon at the bottom of your home page when you're logged in. I use that feed to keep track of my personal subscriptions in my news reader - but she's dead Cap'n. One would think that Twitter will eventually either re-enable Basic Auth for that specific feed, or at least provide some sort of token link workaround, but who knows when that'll happen. Especially since the number of people complaining about it (on Twitter at least) is almost non-existent.

Well, the nice thing is that I have my own little OAuth-enabled Twitter service already up and running called Roomatic, so I quickly added a hack this morning so I could get at my feed again. Just click here: http://www.roomatic.com/friends_timeline and it'll forward you to Twitter to ask for authentication; you click yes, and it'll redirect you back to the same URL with some params, which passes through the raw XML feed of the tweets from the people you follow (I'm not looking, I promise). Just use that URL in your feed reader, and poof, instant Twitter feeds again. I'm pretty sure the security token doesn't expire, so it should be good to use for the foreseeable future. (Once Twitter re-enables that feed again, I'll probably just shut it down, as there's no reason to deal with all that bot traffic.)

BTW, the feed may display oddly in Google Chrome, of course, because it doesn't bother even supporting RSS or Atom without a separate extension. But then again, it's a dead format, so why bother, right?

;-)

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Date: Monday, 09 Aug 2010 03:12

[image]

Let's think back a few years to what I think we can all consider the heyday of news feeds - a time when blogs were still a relatively new concept, companies like Feedburner were acquisition targets, Atom was an upstart format taking on the hegemony of RSS, and things like Twitter and Facebook were only just gaining traction.

For a while there, there was a lot of buzz around news feeds as consumers got more used to the idea. Yahoo!, for example, added feeds to their My Yahoo! page, Google created iGoogle, mobile phones all rushed to create "widgets", etc. I remember some of the various news reader options like Bloglines, NetNewsWire, NewzCrawler, FeedDemon, etc. Apple even embraced the idea of "Podcasts" directly on their devices and the whole concept of feeds became mainstream.

Since then, however, feeds have started to fade in importance. Google introduced its Reader, and all the real news junkies moved to using that. Everyone else either uses Facebook or Twitter. In fact, if you're a new news site or blog, it's pretty much required to have a Twitter and FB account to push out your updates nowadays.

"Following" has replaced "subscribing" it seems.

(As an aside, considering the wars that were fought over "full news feeds", the shift to Twitter as a primary news reader blows my mind. Apparently, links and snippets are enough for the vast majority of Internet users. Not even original URLs matter much either - shortened, trackable links are fine too. Imagine trying to take that position a few years ago?)

How far have feeds fallen from grace? Well, a primary example is Facebook - they're notorious for keeping their data locked up and providing few feeds, if any. Yes there are a few still working, but you have to hunt to find them, and you still miss most of what's happening in your network. But even more telling, some of the hottest new startups that have gotten press lately don't even use feeds at all. Check out Quora for instance. Being able to keep track of all those Question/Answer threads and various categories via feeds used to be a number one priority for new sites, but Quora hasn't even bothered. Same thing goes for Hunch as well.

What surprised me the most, however, was when I read in a Wired news article that the new iPad-based news reader Flipboard doesn't even use RSS or Atom feeds *at all*. They simply scrape the original content pages using a version of the code that powers Readability, the Javascript based bookmarklet for making websites more readable that Apple recently integrated into Safari. That same article points out that Instapaper also does the same sort of thing as well.

So wow, RSS and Atom XML feeds are starting to become as irrelevant as "Made for Netscape" buttons and <blink> tags. As someone who has written their own custom news reader and uses it daily, I'm pretty amazed.

But wait a second - without any sort of standards or standard semantic markup to take the place of a feed's title, pubdate and description elements (the basics of any feed), it's going to be really hard to grab a page's content correctly, no? Well, it's not easy - and not always perfect - but it's definitely possible, especially with better and better web-parsing engines hosted right on the server itself.
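(For a sense of what those parsers are actually doing, here's a hedged sketch of the basic heuristic: score block elements by how much paragraph text they contain and keep the winner. It's roughly the trick Readability-style tools rely on - the function name is invented, and real implementations pile on a lot more rules:)

// Crude content extraction: the block element holding the most paragraph
// text is probably the article body. No edge cases, no scoring tweaks.
function guessArticleNode(doc) {
    var best = null;
    var bestScore = 0;
    var candidates = doc.querySelectorAll('div, article, td');
    for (var i = 0; i < candidates.length; i++) {
        var score = 0;
        var paragraphs = candidates[i].getElementsByTagName('p');
        for (var j = 0; j < paragraphs.length; j++) {
            score += paragraphs[j].textContent.length;
        }
        if (score > bestScore) {
            bestScore = score;
            best = candidates[i];
        }
    }
    return best;
}

// The title is usually easier: document.title, or the first heading
// inside (or just above) the chosen block.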

In fact, much of this problem reminds me a lot of what I did with Mowser a couple years ago to re-purpose web content for the mobile web - doing intelligent things like taking out styles and functionality that mobiles couldn't handle, swapping out Google AdSense Javascript for its mobile equivalent, etc. What I really built into the server was a virtual web parsing engine. It used libxml2 underneath to consume the HTML, but then I would walk the DOM and strip out the stuff that wasn't needed. Mowser's main strength (in my opinion) was letting publishers leave hints in their markup to help better process their pages (using class attributes, since you can have multiple per element), rather than leaving it all up to the ability of automatic transcoders, which are notoriously bad at it. I also had the idea of publishing a wiki-style site where regular users could help give hints for sites as well, improving the overall quality of the final mobile page.

In a similar effort, Instapaper is now starting an effort to create a "Community text-parsing configuration" for its service. The idea being that users of Instapaper's text-only feature can create a configuration which helps the text-parser suss out the important bits of a web-page, and also allows publishers to similarly help by leaving hints in their site's class attributes as well.

For my service, I had to deal with forms, dynamic content, tables, lists and images, etc., so I was never able to get it perfect. But even just trying to extract only story content (titles, paragraphs, etc.) like Flipboard and Instapaper are doing is not always going to work well. Web pages can be created in a million different ways, and usually are.

This brings me to one of the cooler sites I've seen lately - ScraperWiki. It's sort of like Yahoo! Pipes, but rather than having to deal with drag/drop icons to do the parsing, you get to write the actual code (in php or python) yourself. It's a very, very cool idea (and one, I have to say, I've had before). Any scrapers you write are shared, wiki-style, with everyone else so they can be re-used and modified. This shares the burden of accurately parsing a site with lots of people, and could be a killer service.

In fact, I propose that scraping is the obvious evolution of web-based content distribution. The fact is that separate but equal is never equal - in this case, a separate feed for content that comes from a web site will always be inherently inferior to its main HTML content. (This goes for "mobile-specific" versions as well.) Search engines already do some sort of page "scraping" in order to extract content and index the results, right? This new trend is essentially bringing functionality that has been limited to specific use cases (search) to a whole new generation of applications.

Sites like Quora and Hunch no longer have to worry about having feeds. They know they can rely on their users to use social networks to distribute links, and users will either view their sites via a web browser, or any service they use will mimic a browser (i.e. scraping) to grab that content automatically. No need to worry about full-content vs. summaries, etc.

All this brings me, finally, to my main thought: Are we on the verge of seeing a new generation of news readers?

The basic feed reader functionality of dumbly polling a site over and over again, grabbing the XML when it updates, checking for duplicates and then presenting the results in a long list is just simply not compelling to the average user. Only the most hardcore of the info junkies (like myself) want to deal with that sort of deluge of content. Though the core idea is great - bringing information from sites all over the internet into one spot for quick and easy access - the implementations up until now have left a lot to be desired.
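(For contrast, here's how little there is to that dumb loop - a rough Node.js sketch with a made-up feed URL and a regex standing in for a real XML parser:)

var http = require('http');

// Everything we've already shown the user, keyed by the item's <guid>.
var seen = {};

// Poll a feed, pull out the item ids, and surface anything new.
function poll(feedUrl) {
    http.get(feedUrl, function(res) {
        var xml = '';
        res.on('data', function(chunk) { xml += chunk; });
        res.on('end', function() {
            var guids = xml.match(/<guid[^>]*>[\s\S]*?<\/guid>/g) || [];
            guids.forEach(function(guid) {
                if (!seen[guid]) {
                    seen[guid] = true;
                    console.log('new item: ' + guid);
                }
            });
        });
    });
}

// Check every 15 minutes, forever.
setInterval(function() { poll('http://example.com/feed.xml'); }, 15 * 60 * 1000);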

The newest content and community startups have apparently recognized this and forgone feeds altogether. Why bother? Only a select few people actually use the feeds anyway; the rest simply use feeds as an easy way to steal content for AdSense-driven spam sites. Sites that want to share their content do so now by providing custom APIs, with developer keys, etc. - it's much easier to track and, in the end, provides a richer experience for their users.

The newest aggregators have also recognized the problems with news readers, and by correlation feeds themselves, and are instead focusing on making content easier and more enjoyable to consume. Whether it's Pulse's new image-heavy horizontal layout for the iPad, or Instapaper's text-only view of saved pages, the idea is to get away from the Google Reader river of news.

By breaking out of the box that feeds create, aggregators can do so much more for their users. Think beyond scraped content as just a simple replacement for feed content, instead think about the intelligence that content parsers have to contain in order to work. To extract the correct content from a website, a crawler has to intelligently process a page's structure and function, maybe even its security and scripting logic as well. This is a huge leap forward because it enables more intelligence and awareness to be integrated into the experience, which is exactly what users want.

Flipboard, for example, is interesting not only because it formats the results into a friendly newspaper-like experience, but also because it generates the news from your contacts on Twitter and Facebook. It doesn't just dumbly grab a list of things you already read, it integrates with your existing network to get content you didn't know existed. It doesn't care if there's a feed or not of the link it's grabbing, it parses the page, pulls out a headline and some content and presents it to the user to browse at their leisure. This sort of thing is just the beginning.

Every day I read my news reader and I get frustrated with the experience in various ways. I see more duplicate content than I need to (especially with today's content-mills repeating every little item), or I see too little content from some sources, and too much from others. Some sites constantly add inane images to attract my attention or include linked feed advertising, others include great illustrations that I don't want to miss, but sometimes do because I mentally start to ignore all the images. I also can't get to any information from within my company's firewall, though there are plenty of updates to internal blogs and wikis I'd like to keep track of. And despite keeping track of hundreds of sites, I still end up clicking over to Facebook once in a while to catch up on shared videos or links or other updates, as that stuff doesn't have a feed.

Essentially, it's becoming more and more work to separate signal from noise, and it never seems that everything you want to keep track of has a feed. I can't imagine what it must be like if your job is to parse news for a living. Imagine being an analyst for a bank and having to wade through the cruft you'd get in a news reader every day, not to mention the monthly publications, etc.

What I think is going to happen is that both browsers and aggregator services in the cloud are going to start enabling a lot more logic and customization. We see the start of it now with Greasemonkey scripts and browser plugins and extensions, but I think another level of user-friendly artificial intelligence is needed. I'm talking about applications that parse web pages, gather content, and display it all intelligently, economically and without magic (which is generally always wrong), as directed by your specific choices of what you think is good and bad. ScraperWiki is the first step towards this sort of thing, but really it's only just the beginning.

Anyways, a few years ago I decided that the mobile web as a separate entity was a dead end because of the quickly improving mobile browsers and it turns out I was pretty spot on. It never dawned on me that the same logic could be applied to web feeds because of things like quickly improving server-side parsers and bad user experiences, but now I'm seeing that it is. I personally still wouldn't launch a new site today without having a decent feed, but I bet it'll be a short time before I don't worry about it, and I bet there's a lot of other web developers that feel that way already.

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Date: Sunday, 25 Jul 2010 23:21

[image]

Last year I whipped up a quick little mobile widget called QLauncher, which could be used on the home screen of S60 phones like the N97 or 5800 to quickly launch a web search box with various services as options. It used S60's Web Runtime Toolkit - or WRT - which is essentially a wrapper around the internal Webkit based browser the phone uses.

When I switched over to my N900, based on Maemo (now MeeGo), the one thing (besides Google Maps) that I really missed was my little QLauncher app. Well, the Qt guys just launched their new Qt Web Runtime application framework for Maemo, so finally I can get my QLauncher back!

It turns out that the .wgz widget format that S60 uses is slightly different from the W3C standard, which uses a config.xml file rather than an info.plist file, among some other small differences. The Qt guys seem to have gone with the W3C version - or at least they're using the .wgt filename extension and the standard config. So I just went in and tweaked that stuff, threw it on my phone and *poof* it worked! W00t!
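(For reference, the W3C-style config.xml that the standard widget packaging expects looks roughly like this - a minimal, hedged example with the id and file names made up; the S60 .wgz format carries roughly the same information in its info.plist instead:)

<?xml version="1.0" encoding="UTF-8"?>
<widget xmlns="http://www.w3.org/ns/widgets"
        id="http://example.com/qlauncher"
        version="1.0">
    <name>Q Launcher</name>
    <content src="index.html"/>
    <icon src="icon.png"/>
</widget>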

You can download the N900 Qt WRT version of Q Launcher here.

You'll need to first install the Qt WRT stuff which is now in the Maemo Extras-Devel repository (assuming you have the latest firmware, etc. and can run the rest of the Qt packages... good luck with that). Just do "apt-get install qtwrt".

I *love* development for mobile phones using web tools like HTML, CSS and Javascript. (This is part of the reason I think Palm's WebOS is so cool as well.) I'm especially getting into Javascript development lately and I love it. I'll have to explore the Qt WRT API a bit and see what other stuff I can delve into as it makes a lot of the native phone functionality accessible from Javascript.

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Date: Tuesday, 22 Jun 2010 00:27

[image] [image]

I spent a ton of time writing about tablets this weekend and playing with my iPad to get my thoughts in order. I was thinking about how I really miss being able to "page down" when reading websites. My desktop browsers are always set up not to "smooth scroll", so page movements are nice and quick, and I use the space bar or the page down key by default to quickly scan long pages. This is the same for my phone, actually. On my N900, I was testing out Opera Mobile, and was disappointed because the keys didn't do anything - I use the space bar with the N900's integrated Firefox-based browser to page down and it works great.

I started thinking about how to add some Javascript to my personal newsreader to enable me to just touch the side of the screen to page down - sort of like how the e-book readers work - when I ended up creating a new layout for my blog instead. As you can see from the images above (click for a bigger version), it's a two-column layout with no scrolling, and you touch the left and right of the screens to move forward and back on the page. It actually looks pretty slick I think.

I was able to rework the page easily because my blog style is so basic - I don't have a sidebar or anything to deal with. I used JQuery to dynamically change the page once it was loaded, and I used PHP to make sure I only show that stuff to iPad owners. There are probably better ways of doing this stuff, and in general the whole design and script could use some tweaking (I'm using pixel-specific numbers in some spots, which worries me). But it's a fun proof of concept.

Here's the code:


<?

        // Only serve this layout to iPad browsers - everyone else gets the normal page.
        if(strpos($_SERVER['HTTP_USER_AGENT'], 'iPad') !== false){ 

?>

<script type="text/javascript" src="http://code.jquery.com/jquery-1.4.2.min.js"></script>
<script type="text/javascript" src="/blog/jquery.scrollTo-1.4.2.js"></script>

<script>

 $(document).ready(function(){

    // Fit the content div to the window so the whole page is one screen tall.
    $("#content").height($(window).height() - 100);

    // Hide the overflow (and its scrollbars); cap images and iframes so they
    // don't spill outside a column.
    $("#content").css("overflow","hidden");
    $("#content").css("padding","0");
    $("img").css("max-width","300px");
    $("iframe").css("max-width","300px");

    // Two CSS3 columns - the overflow now flows horizontally instead of vertically.
    $("#content").css("-webkit-column-count","2");
    $("#content").css("-webkit-column-gap","10px");

    // Tap the right half of the screen to page forward, the left half to page back.
    // 816 is the hard-coded width of the content box (see the pixel-specific caveat above).
    $("#page").click(function(event){

        if(event.pageX > $(window).width()/2){        
            $("#content").scrollTo("+=" + 816, {duration:400});
        } else {
            $("#content").scrollTo("-=" + 816, {duration:400});
        }
    });

    // Refit the content height if the window size changes.
    $(window).resize(function() {
        $("#content").height($(window).height() - 100);
    });

});

</script>

<? } ?>

A few points to note: First, by setting the content div's height to a specific value in CSS, I'm able to make a long page fit in a single iPad screen. Then by adding in the CSS3 column count, the overflow from the div goes horizontally, rather than vertically - and by hiding the overflow in CSS, you don't see the scrollbars.

Next I used the JQuery ScrollTo plugin to manage the interactivity. When the user clicks on the side of the screen, I scroll in that direction the width of the text box, which moves the text over a couple of columns, basically making a paging effect. I animate the scroll over 400ms to improve the effect, and it seems to look ok. The last thing I did was add a max-width on any images so that they don't flow outside of their column.

I need to go back and clean up a bit of the CSS as my footer is a bit wonky - maybe I need to absolute position it or something. Also, the page doesn't refresh itself properly when the screen orientation changes. But for now I think it's pretty ok.

It's nice to be able to make a design for just one platform, I have to say - not having to worry whether another browser displays the page properly is almost fun. That said, I do worry about focusing too much on one device. Also, I wonder if breaking the normal scrolling paradigm is annoying, rather than convenient, to readers. Changing how a standard web page works isn't something I think is a great idea, as it would suck if you had to figure out a new way of reading a web page for each site out there. And admittedly, paging through the columns on my site now is *much* slower than doing it the normal flick-scroll way - which is funny, considering how I ended up going down this path in the first place.

Like I said though, it's a proof of concept and seems interesting to me at the moment. I just like the way it *looks*.

:-)

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Date: Sunday, 20 Jun 2010 03:36

[image]

Like many cutting-edge geeks out there, I have been using the iPad since its launch a month or so ago. I absolutely love it. More than that, I think pretty much anyone who doesn't love the iPad probably doesn't own one - it's that good. And even more, I think its form factor is the next step in the evolution of computing and is going to revolutionize how people use computers over the next 20 years or more.

I'm not even going to put caveats on it. Yes, the iPad has some issues - it can be a bit heavy at times. Data entry is slow and cumbersome (I'm typing this on a real keyboard, thank you very much). Apple's fascist rules leading to the lack of things like Flash and true multi-tasking in the OS is annoying. But none of that matters if you look at the big picture: The iPad is essentially a proof of concept for tablet, touch-centric computing, and it has proven itself and then some.

Every iPad owner has anecdotal stories of how awesome it is, and I'm no different. Seeing my 65yo Mom and my 8yo son use an iPad with so few problems or questions was enlightening in many ways. And judging from personal experience, I can tell there are big things happening here. The iPad has literally changed how I use computers - which is a pretty astounding thing to say.

My prediction? Over the next year or so as various touch-tablet devices are launched, we're going to see a massive shift away from notebook and netbook computers for the general consumer. The tablet lends itself to four areas of use: Web (including Internet-centric apps like weather, news, maps, etc.), Media (music and video), Gaming (from solitaire to Farmville-like games) and Communication (email, social networking, IM, etc.). Which is what 99% of people want their computers for anyways.

Because I think there's still a lot of caution, confusion - or sometimes outright antipathy - towards the iPad and tablets in general, I wanted to note what I think are important points to notice about this new category of devices, and why they're so amazing.

First, it's not about Apple

When you think of Apple - you think of things "just working", right? Ease of use, no hassles, etc. The iPad has some of that, but it really doesn't take advantage of enough of the things that Apple is known for to really make that stuff a significant factor. For example, I rarely, if ever, sync it to my computer (only to backup my purchased apps), so I don't really care if that's quick or easy or "integrated". I don't use it much as a media player, so I don't care about iTunes at all. In fact, most of the Apple qualities the iPad has are the ones that annoy me the most - non-removable battery, no USB or memory card ports, no multi-tasking, no printing, no Flash, etc. In short, the iPad is awesome *despite* it being made by Apple, not because of it.

Don't misunderstand me, Apple definitely got the experience and form factor right (as they often do), but there's just not a lot of secret sauce in how they did it this time. In fact, I've been using web-tablets since Nokia (my employer) launched the 770 in 2005 and can tell you that much of the same experience has been available for years. My Archos 5 Android-based tablet I got earlier this year was also almost there. All Apple did was finally bring the pieces together with a bigger screen and better touch technology (capacitive as opposed to resistive), combined with their already great iPhone OS. Why Nokia or others like Microsoft - who've been working on tablets for years - didn't get there first is a mystery. But I'll tell you this: though Apple definitely had a 3 year (or more) lead on their competitors with the iPhone, they don't have nearly that same sort of lead with the iPad.

So, to continue - if it's not some secret Apple ingredient, what is it about tablets that make them so great?

Touch Is Amazing

Touch interfaces are the future. They do have some limitations, but these are outweighed by their amazing qualities which drastically improve upon how computers have been used up until now.

First, they finally remove a huge layer of abstraction that is the mouse or touchpad. Essentially, "what you see is what you use". Though a mouse may seem "natural" to most of us who use one on a daily basis, anyone who's had to stand over someone learning how to use a PC for the first time - be it a child or a grown-up - knows how truly confusing it can be. Touch puts the interactivity in the exact same spot as the thing you're interacting with. Suddenly the images on the screen become much more visceral and "real" - to the point where even dogs, cats and newborns can interact with them without knowing otherwise. Can't get much more simple than that.

Secondly, touch is portable. In order to use a mouse or trackpad, you need to be in a certain position, dictated not by you but by the device - either at a desk, or propped up so at least one hand can be free to push and prod at an odd angle. Touch screens allow for much more freedom - you can be lying on your back, sitting comfortably, or standing on a moving bus; it doesn't matter.

Third, though I think this has been a bit overrated, multi-touch allows multiple fingers, hands or people to get involved in the UI as well, which lets user interfaces do a lot more as a result.

It's important to note, like I wrote in my post yesterday, that just adding a touch screen to the WIMP interface doesn't work - as anyone who's tried a Windows Tablet has already discovered. Windows, Icons, Menus and Pointers just aren't made for someone using the interface with just their fingers. In fact, Fitts's law, which dictates that things at the edges of the screen are easier to target, can be completely turned on its head. My Lenovo IdeaPad, for instance, has a bezel around the screen, which makes trying to use the scrollbars, control buttons or taskbar in Windows nearly impossible. In the future, I'm sure touchscreen manufacturers will make sure the screens are flush with their frames to avoid this, but it just goes to show exactly how hard it is to convert a WIMP interface to touch.

Instant Accessibility

When you reach into your pocket to use your mobile phone, it's always on and ready to go, right? You don't have to boot it up, or go sit in a special place to use it. Mobiles are with you wherever you are, and are instantly active. Tablets are like that as well, except they simply provide a nicer user experience because of their bigger screens and dedicated functionality. If I'm watching TV and get a new email, I lean over and grab my iPad off the coffee table, check the email and put it back within seconds. If I see something I want to look up (either a fact from a documentary, or to find out who that cute actress is in a movie, etc.), again, I just grab the iPad and I have that info in seconds. I get app updates as well, such as alerts from Facebook or Scrabble - yet because it's so quick and convenient to get to this stuff, these alerts are welcome, and not at all intrusive or annoying. I used to do most of this stuff with my mobile phone, but there would always be limitations on what I could do or how comfortably I could do it, and then I'd have to get up and use my laptop instead. But laptops require sitting at certain angles, with the keyboard and trackpad propped to not cause physical discomfort - tablets aren't like that.

So what ends up happening is tablets becomes almost utilitarian information access devices. Instead of having to wander over to my laptop when I want or need something off the Internet - directions, maps, info, addresses, email, etc. - I now grab a tablet instead. It's always available, generally within easy reach, does what it's meant to do, and then disappears back into the background. It conforms to me, rather than the reverse. Because it's so instantly accessible, tablets just ease into flow of whatever it is you're doing at that moment. You don't have to stop what you're doing to get the info you need, you just grab it on the fly. No friction, no fuss.

Super Social

The other great thing about tablets is how social they end up being. Some other bloggers have written about this as well - but I've seen it first hand. For example, I've used the iPad on the couch with my family, with both of us sitting comfortably, yet we were still able to see the screen and use the interface. You'd almost assume the iPad would be a relatively personal device, as the screen is still smaller than a normal piece of notebook paper, but I can't count how many times I've ended up sharing the screen with someone else - sitting, standing, at home, at work, it doesn't matter. Ever share a desktop computer? You're jammed in next to each other, with both of you bumping heads trying to get a good angle on the screen or fighting over the mouse/keyboard... it's not good. Another thing is you can also just look something up and then just *hand it* to someone else for them to see what you've done. Have you ever tried to do that with a laptop? It's generally impossible unless they're *very* familiar with your computer's setup already.

This is the stuff that amazes me - tablets really are completely different than mobiles or desktops/laptops in subtle, yet significant ways. Tablets seem to help lower the wall that comes up when people are using regular computers. If you're reading something on an iPad, it's sort of like reading a magazine, which translates to it being less isolating than when you're lurking behind a computer screen. Someone on a laptop is usually leaning over, staring intently, clicking their mouse once in a while - all which screams "Do Not Disturb" to those around them. Someone using their mobile has almost the same sort of body language, actually! Head down, shoulders hunched, mobile held close to their eyes, squinting intently at the screen while hesitantly jabbing at various options, or suddenly tapping out a message furiously. Everyone around that person gets the clear signal that they are doing something private and to not intrude.

Using a tablet is completely different! You're usually sitting in a comfortable position, face viewable, eyes scanning normally, with an occasional flick at the screen or other casual movement - this gives a totally different and much more welcoming vibe.

This may not seem important but we are all spending way more time on the Internet or using digital content in general. Tablets allow people to consume that content without isolating themselves from those around them. While your family is watching TV, you can be sitting on the couch nearby reading an online newspaper, rather than hunched over in the corner at a desk. During morning breakfast, you can be checking your email while having your coffee, yet still be social and accessible. You can bring a tablet to a meeting, and not seem like you're hiding behind the screen, or checking Facebook instead of working. These sorts of societal effects are pretty big in my opinion.

That said... it depends

So as devices for consumption of content or casual interaction like games, tablets and touch interfaces are spectacular. But the problem is many of us do other things with our computers that may not fit in very well with touch.

Typing on a touch screen is generally slower because you can't feel the keys. You can get used to it and get faster, I'm sure, but I doubt most people will ever type as fast as they do on a real keyboard. This means that text-entry heavy applications - email, document writing, etc. - aren't going to be as good on a touch device for the immediate future. There's some work on creating screens that provide feedback to your fingers, making it feel like you're touching real objects, but it's not yet clear whether that will make typing any easier. I will say that combining keyboards and touch screens works pretty well - I have a ThinkOutside Bluetooth keyboard that I've used with my iPad and was happy with the results. Surprisingly, it felt more accurate to touch the screen when I needed to, rather than having to grope around for the mouse, then scan the screen for where the pointer was, move the mouse to what I wanted to interact with, and then move back to the keyboard.

If you don't have buttons, you don't have combos and shortcuts. I think most regular computer users have a few key combinations they use regularly, whether it's as simple as Control-C/Control-V to copy and paste, or more elaborate combinations for Excel and Photoshop experts. Even holding down the shift key while selecting multiple rows of items, or simply clicking and dragging to highlight a row of text, is a combination of sorts, really. Yet all of these activities are made harder by the fact that pure touch interfaces only have a few fingers as data-entry points. Some solutions are timeouts - like what the iPad does for copy/paste, or what Windows Tablet edition does to mimic the right mouse button - or using two or even three fingers instead of one. But these solutions are limited, which means that until applications are completely re-written with touch interfaces in mind, the people who rely on actions like these to work efficiently are going to balk at tablets completely.

Our fat fingers hide the things we're touching. Occlusion is a problem, as is trying to get pixel-perfect accuracy with the blunt instruments that are our fingers. I don't think this is a huge problem, actually - I just think that zooming in and out of a specific screen area will have to become a more common UI paradigm. Right now, we simply move our mouse slightly slower to get that image aligned *just so*, or put the cursor at exactly the right letter. Getting that sort of accuracy with our fingertips is tough, unless it's easy to quickly enlarge the UI to the point where your fingertip isn't so big relative to the target. Again, for someone who uses their mouse like a surgeon wields a scalpel, using a touch screen will be like operating with mittens on.

Despite these problems, I still think that tablets are going to take over. Why? Because these are problems for content producers, and we all know from using the Internet that far fewer people create all the content out there than consume it. And much of the problem is simply about getting used to new ways of doing things. Millions of people got used to texting when you had to do it with number pads, and then later millions of others got used to tapping out long, involved messages on small Blackberry keyboards. They'll get used to touchscreen keyboards as well. New apps will be written that get rid of the need for so many shortcuts, or come up with some cool new use of multitouch that helps ease the problems. And the rest? Well, they'll continue to use computers just like they're doing now. The guys at Pixar aren't using Atom-powered netbooks to create movies, right? There will always be a class of users whose needs require better or more specialized computing power. But I think the vast majority of the rest of us will be moving away from laptops and desktops, and using tablets instead.

We're just a few years away from the time when incoming freshmen at colleges around the world will be carrying shiny new tablets instead of laptops, and when that happens the corporate world won't be far behind, and we will have truly entered a new era of computing.

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Send by mail Print  Save  Delicious 
Date: Saturday, 19 Jun 2010 02:55

[image]

I recently bought a Lenovo IdeaPad S10-3t multi-touch tablet/netbook. It was on sale for $200 off and I just couldn't resist. I figured it'd be a great way to test out the variety of new tablet OSes coming out and to compare them with my much beloved iPad. Even though I already have two other netbooks (my original HP mini, and a Nokia Booklet 3G), the ability to convert from a netbook into a tablet is key, and having a capacitive screen and an Atom N470 CPU just made it a really great device for the price. Here's an Engadget review with some video.

I should say that after using an iPad, the IdeaPad definitely feels clunky and heavy as hell. Oh, and the screen is awful at side viewing angles. You'll go blind if you try to use this thing in portrait mode (though it is supported). But beyond that, it's actually a slick little machine. If you're not willing to wait a few months for the next generation of cheap "netpads", I'd say the IdeaPad is a pretty good buy.

So let me tell you my experiences with using it as a tablet.

Windows 7 Tablet

[image]

First thing I did was play with the pre-installed version of Windows 7. I won't say it was fast, but it was definitely less pokey than my Nokia Booklet 3G due to the newer Atom processor. It wasn't long before I realized I wasn't really getting the full "tablet" experience though, because it came with Windows 7 Starter Edition, which doesn't have the tablet services built in!! So using Microsoft's convenient (albeit extortionate) Instant Upgrade service, I upgraded to Windows 7 Home Premium, and suddenly a bunch of settings and functionality appeared that wasn't there before - like mouse icon hiding, scrolling, flick gestures, right-clicking after hold, screen orientation, on-screen keyboard, etc.

Yet even though adding tablet functionality into the OS helped considerably, it was readily apparent that Windows Tablet functionality is really just a superficial overlay at best, more focused on pen-based computing than real touch. They've been doing pen-based stuff since the late 80s, so that's not surprising, really. ;-) Basically, despite the IdeaPad having multi-touch and a sensitive capacitive screen, using Windows with nothing but your fingers is just painful.

This really solidified for me the idea that trying to go from a WIMP interface to a touch-based one isn't easy or straightforward. Before I write more about that, let me mention the other OSes I tried as well.

Netbook OSes: MeeGo, Ubuntu Netbook and JoliCloud

[image]

MeeGo's phone and tablet version hasn't been released yet, so I wasn't able to try that interface, but I did install the first public build of MeeGo that came out a week or so ago and it worked well! This made me very happy, as the hardware support for this first version is limited to Intel-only hardware - no NVidia, ATI or GMA500 support for video for example - which means it wouldn't run on either of my other netbooks. Also, believe it or not, MeeGo doesn't work well in VirtualBox either (unless they've done something in the past week to fix that). This means until I got it running on the IdeaPad, I hadn't actually seen MeeGo in action! Short review: It's *fast*, easy to use, and since I use Linux on a daily basis, felt very comfortable to use - like a mini version of my day-to-day computer. You can't really compare the speed and utility of MeeGo on a netbook vs. the monolith that is Windows 7 Home Premium. Though some of your favorite apps might not be there (iTunes, etc.), the basics of web, email, IM, etc. work great.

You can check out the MeeGo site for screen-shots of the OS, but what's not readily apparent is there's no traditional "Desktop" anymore and "windows" are de-emphasized (though you can still sorta see the window controls for apps that haven't been re-written, like GIMP). Instead the apps are all maximized by default, with no window frames or buttons for minimizing/resizing. These are definitely steps away from the traditional WIMP interface, and like I said, even though this version of MeeGo isn't focused on tablets, you can see the seeds of that new paradigm for working with computers there. I can't wait to see the tablet version when it's launched - I really hope they do a good job.

[image]

I also grabbed the latest version of Ubuntu's Netbook edition, and tried out JoliCloud as well. I'm not sure if there's a standard Gnome netbook UI or what, but they both looked and worked pretty much the same way. Like MeeGo, neither starts with a traditional Desktop, and they de-emphasize windowing. The rest of their GUIs work more or less like standard Gnome, and in fact, the new netbook dashboard can be reverted pretty easily to the standard Gnome desktop with an option switch.

I should note that none of the Linux OSs I played with supported the IdeaPad's touchscreen. I tried to install various drivers and kernel tweaks, but just couldn't get it to work, which is too bad. The reason I tried JoliCloud, actually, was because they are now supporting touch-screens out of the box, but sadly the IdeaPad wasn't one of the initial batch. Still, it was great to play with the newest netbook OSes out there just to confirm what I was originally thinking.

Goodbye WIMP!

So after playing with Windows using the touchscreen and messing with the various netbook OSes, it became pretty clear to me that using regular Windows-Icon-Mouse-Pointer based GUIs with your fingers just isn't going to happen. And if other manufacturers are going to catch up to Apple, they're going to really have to start from scratch and *re-write* their apps, because most of WIMP just doesn't translate to a tablet form factor. Lately we've seen some noise about future Windows tablets that will compete with the iPad, and about Ubuntu coming out with a tablet (which they've since clarified), but there's no way that can happen as things stand.

This may be pretty obvious, but let me go into detail about the various parts of the traditional WIMP interface and examine them from a touch-GUI perspective.

Pointer Icon - The first is pretty easy: you can't have any sort of pointer indicator when touching a screen. Windows will let you hide it permanently - showing concentric circles where you tapped instead - but I've also seen "autohide" simply make the mouse pointer disappear when it stops moving, only to reappear under your finger when you touch the screen. No way - it has to be completely gone, because it's just disconcerting to have a pointer following your fingers around.

Windows - What are windows for, anyways? Well, back when managing files was the most important thing you did on your computer, the Desktop and windows were a way of seeing various parts of your computer's storage at once, and moving files between them. Once in a while it's nice to have one window up showing text while you refer to it from another. When Windows launched, each window became the application itself, so managing windows was also managing multi-tasking. Since then, however, we've moved towards a more web-centric view of the world. Apps moved away from having 30 windows open, and started organizing them into tabs. Managing individual files became much less central to our use of a computer, and managing vast amounts of data took its place (this is why many of us use iTunes to manage our huge collections of music, rather than manually creating hundreds of folders and copying thousands of MP3 files around by hand). So windows, in general, are losing their relevance to how we use computers, which is why we see netbooks moving away from them, and no portable devices using them - saving on both complexity and screen real-estate as well.

But most importantly, from a touch perspective, windows are almost *completely* useless. Moving, resizing, minimizing, maximizing and closing are all things that are just plain too hard to do with our big old digits in the way of the screen. On the IdeaPad specifically, the bezel at the edges of the screen made using the corner buttons (or any interface near the edges, actually) almost impossible, because I literally couldn't fit my finger in the space required. But even with a perfectly flush screen, dealing with windows would be more effort than they're worth. We just don't use them nowadays.

Mousing and Shortcuts - In addition to managing windows, "dragging and dropping" with a touch interface is painful, and individually selecting multiple files is impossible. Really, any sort of action where you have to hold down a button or a key while moving the mouse just doesn't translate directly to a touch screen. (The IdeaPad actually has a left-mouse key on the side of the screen for exactly this reason, I assume.) Selecting text, scrolling, context menus, etc. - each of these functions needs to be completely re-thought. Apple, for example, has added various multi-touch gestures (two fingers for scrolling certain areas, pinching for zoom, etc.) as well as touch-timeouts for functionality like selecting and copying text. An application like Photoshop, which has a million key-combos for changing the current tool and so on, is going to have to be re-implemented altogether.

Menus - Traditional window menus are super useful things to have in computers, I have to say: when Microsoft got on their kick a few years ago to hide them, it drove me nuts, as there's just too much functionality in modern apps to expose with buttons alone. Menus have names and are organized hierarchically, which means you can talk someone through a problem over the phone using them (rather than "the last button on the left"), which is also incredibly useful. But that said, using the Windows 7 tablet really showed me how antiquated menus are. Even with the display magnified to 125%, they're tiny and hard to manage with your fingers, and if you bump up the size of the fonts more, you might as well just throw away a third of your screen real-estate (and even that doesn't really solve the aiming problem). The only real solution is to include an app's core functionality in the main interface itself (buttons, icons, drop-downs, etc.) and have a dedicated "settings" page for the rest.

Desktops vs Dashboards - What about icons? Well, as long as they're big enough, these are obviously great for touch screens, as they work just like buttons, which we're used to using in the real world anyways. But the traditional desktop is definitely going away, to be replaced by the dashboard instead, filled with relevant icons, info and options. From what I've seen, most people use their desktop like this anyways - it's filled with shortcuts and important files and is the first place they look when they want to start working on something, rather than the Start menu or Dock. Organizing the desktop is just a great idea - how many times have you seen someone's desktop that is just filled to the brim with crap: downloaded install files, icons from Adobe or Symantec or corporate image crap like VoIP icons, links to MSN or AOL dialup accounts, etc.? Moving to a dashboard like many mobile devices use nowadays is the right direction. The iOS (iPhone/iPad/iPod Touch) dashboard is the most prominent example of this, as it displays the "springboard" before even the *phone* app itself, much to users' (and app developers') delight.

Moving to Touch

[image]

So I'm sure there are other elements of a normal WIMP GUI that don't work as well (scrollbars, for example), but these are the major ones in my mind, and they succinctly show that slapping touch capabilities on top of a desktop OS just isn't going to work. This is why the iPad has had such huge success, yet the Tablet PCs pushed by Microsoft for years have languished. In short, there are just too many fundamental issues with the Windows interface to just "tweak". It's not a matter of the size, weight, power or portability of the device - the core underpinnings of a WIMP-based interface are just incompatible with touch usability. Everything is going to need to be re-written from the ground up - OS and apps alike. That's not just true for Microsoft, by the way; it also applies to the Linux guys (Gnome, KDE, Xfce) and the distributions that use their code, like Ubuntu, Fedora, JoliCloud, etc. Just like Microsoft, they can't just add a few changes here and there to support touch and think their old codebase will work just fine, because it won't.

Apple figured this out first - and I think others are now stumbling upon this truth as well: Google with their Android OS, HP with their Palm-developed WebOS and Nokia with MeeGo. Happily for these guys, focusing on touchscreen mobiles (rather than mostly-QWERTY ones like RIM) has enabled them to get a jump ahead in what I see as the future of *all* computing - not just mobility; tablets are just the start. These guys are in a much better position to "scale up" their OSes from their small-screen roots, rather than having to pare back from full-blown desktop platforms. The rest of the platform creators out there just don't seem to have gotten it yet.

A great example: Despite pushing MIDs for the past few years (portable, Intel-based touchscreen computers), Intel's first efforts with Moblin (which became MeeGo) were all focused around a desktop UI! Combining efforts with Nokia was a perfect way for MeeGo to get some "touch DNA" added quickly, as Maemo had been focused on touch screens for years now. I really can't wait to see how it's turning out. (Note, as always, though I work for Nokia, I have no direct or inside knowledge of the MeeGo project. If I did, I wouldn't write about it.)

What's interesting is how Microsoft seems to get it in some ways - the Surface table-top touch platforms and the Kinect gaming platform - yet they've done so little when it comes to really supporting non-WIMP interfaces in general or mobile computing. They desperately need to realize that Windows' 20 year run is at its end and that a whole new way of working with computers is here. Not that Windows and WIMP are going to disappear overnight, by any stretch, but I bet you the shift is going to be huge, happen quickly and surprise a lot of people. The floodgates, I think, are opening.

What's also interesting is Google and its two competing OSes - Chrome and Android. It seems that Android has won this internal battle and could easily do most of what Chrome is supposed to *and* work perfectly on tablets out of the box, since unlike Chrome, it's been tweaked relentlessly to mimic the iPhone and its touch interface. (Remember, the first implementation of Android looked like a copy of the Nokia e61, rather than the unreleased iPhone.) I wonder how long they're going to pursue this two-OS strategy?

Finally, the fate of WebOS still seems up in the air. HP had few choices if it wanted to launch a truly compelling and competitive tablet computer. Putting Windows on its Slate device they showed at CES wouldn't please anyone and none of the other Linux systems are really any better sans Android, which comes with its own set of issues, I'm sure. Buying WebOS was a great idea and *seems* like it'd be a great fit to scale up to a tablet from the Palm Pre size devices. But nothing has been announced, and weeks are slipping by. The only thing Mark Hurd has mentioned is putting WebOS on their printers(!?!?) and sorta/kinda/maybe saying they weren't going to kill the Palm smartphone line... yet.

It seems like a great opportunity for the Open Source folk to really get out in front of this, no? MeeGo's license is truly free, so that'd be a great start for the Gnome and KDE heads. (Especially the KDE guys, as the underpinnings of the GUI will come from Nokia's Qt division). But if not, then someone should be starting a dedicated "Tablet OS" project, as there's obviously a huge vacuum for this sort of thing.

Maybe it's because of the hardware? Despite all the tablet PCs shown at the big gadget shows over the past year or so, there really haven't been many touchscreen-enabled PCs out there at a reasonable price. That's why I was so excited about the IdeaPad when I saw it on sale, actually. At one point, I was even considering buying a DIY touchscreen kit for my HP Mini. Since I'm *not* a hardware guy, that tells you how few options there have been lately. Maybe once some decent tablets get into the hands of developers, there'll be more understanding of what needs to be done. I hope it happens soon! Because the quicker I can use a slick non-Apple tablet device, the better!

Ok, that's enough brain dumping for this post. At some point I need to write up the qualities which I think make tablets so amazing - I just didn't feel like adding more noise to the "iPad is awesome" posts out there, so I haven't yet. Also, I feel the need to write about how I think touch based UIs are going to change the web as well - in many ways, web pages inherit many of the bad traits of their WIMP based containers - scrollbars, too-small-to-click links and other elements, hidden areas, etc. Besides just magnifying the page, there's got to be some more standard changes to be made, I think. Ok, enough...

As always, email or tweet to me if you have any thoughts or comments.

:-)

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Send by mail Print  Save  Delicious 
Date: Tuesday, 15 Jun 2010 07:36

Like everyone else on the frigin' planet, I've been watching the World Cup and going insane listening to the vuvuzela horns. There have been a couple of articles lately about how to filter out the sounds - first this German article using a Mac, and then this other article about how to do it using JACK and QJackCtl.

I didn't feel like installing anything overly complex on my Ubuntu box and figured it *must* be possible to do using just the basic sound tools that come with Linux. It took me a little while, but using SoX (Sound eXchange) I was able to pipe sound from the default recording device (my laptop's mic/line-in), filter it, and then stream it immediately through the speakers. (Just apt-get install sox if you don't have it on your box already.)

Here's the command:

rec -d vol .5 equalizer 233 .1o -48 equalizer 466 .03o -48 equalizer 932 .02o -48 equalizer 1864 .2o -24 | play -

[Update: Yusuf Kaka responded in a post with a much more accurate set of arguments:

rec -d | play -d vol 0.9 bandreject 116.56 3.4q bandreject 233.12 3.4q bandreject 466.24 3.4q bandreject 932.48 3.4q bandreject 1864 3.4q

Very cool!]

Now, it seems to work, but after listening to it over and over again, I'm starting to wonder how much of a real difference it makes. In theory, it should be completely filtering out those frequencies (233Hz, 466Hz, 932Hz, and 1864Hz) at various bandwidths. In practice it sounds more like it's dulling those frequencies rather than removing them altogether - you can still hear horn blasts - but you can hear the crowd and whistles as well.
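In case anyone else is trying to decode those arguments, here's my reading of the bandreject version after squinting at the SoX man page - treat the annotations as guesses, since I'm no audio expert:

# Same command as Yusuf's above, just annotated with how I understand it:
#   vol 0.9             scale the overall level down a bit to leave headroom
#   bandreject FREQ W   notch out a band centered on FREQ Hz; a width written
#                       like '3.4q' is a Q factor, so a bigger number means a
#                       narrower notch around that center frequency
# The five notch frequencies are 233 Hz (roughly the vuvuzela's B-flat drone)
# plus the octaves below (116.56) and above it (466.24, 932.48 and 1864).
rec -d | play -d vol 0.9 bandreject 116.56 3.4q bandreject 233.12 3.4q bandreject 466.24 3.4q bandreject 932.48 3.4q bandreject 1864 3.4q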

rec and play just call the main sox binary, so you can use the same params to play back a sound file. For testing, I recorded a bit of the Italy/Paraguay match into a .wav file using rec, and then used play like this to test the various filter settings:

play -v .5 gamesound.wav equalizer 233 .1o -48 equalizer 466 .03o -48 equalizer 932 .02o -48 equalizer 1864 .2o -24

The main problem is that I'm just copying stuff I see on other sites, and not really technically understanding how audio works at all. What's bandwidth? Gain? What does it mean by clipping? And who the hell is Butterworth?

Anyways, the command above is definitely filtering the sound, but I feel like it could be doing it *more*, I just can't figure out which numbers to tweak. If you have some ideas, email or send me a tweet!

:-)

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Send by mail Print  Save  Delicious 
Date: Friday, 11 Jun 2010 22:44

[image]

This is a follow-up post to one I wrote a few months ago about how difficult it was to keep up with the 250 or so people I follow on Twitter and Facebook. Because I was pulling everyone's updates into my custom news reader, I would store up hundreds of updates every day, which I would then try to skim through as best as possible.

In the end, I decided it just wasn't in fact possible to keep up, and that most Twitter users most likely view their stream in bursts of what I termed "Phased Attention". Or to put it in less geeky words, people are either paying attention to their stream at that moment, or they aren't and don't care what they missed. No one besides me was trying to store up every tweet made in the past 8, 24, 48 or more hours and then go through and read them all.

Once I realized how futile my effort was, I deleted the streams from my reader and just ignored them. That's not very much fun, though, because people are still using Twitter and Facebook to share links, thoughts, etc., even if in fact, most of their followers never actually see those tweets. I felt like I was really missing something. But what was the solution? I was either overwhelmed or felt like I wasn't involved.

After I mulled it over for a few weeks, I decided to re-add my Twitter and Facebook streams, but putting my Phased Attention theory into practice, I limited the number of tweets I see in my reader to a buffer of the previous five hours. This means that if I get busy and miss a day or so of updates, I don't come back to 2,000+ messages to wade through before I can jump back into the flow. The results have been great - I still get a great feel for what the people I'm following are doing, yet I'm not overwhelmed at the same time. The buffer is a very practical solution - better than logging in to the Twitter website or using a client and scrolling down a few pages to see what I missed, but not as bad as trying to keep up with every single tweet either.

Essentially, what I've done is codify into my news reader how Twitter is actually used by most people. Most tweets aren't of much lasting value, so if you miss them, it's not a big deal, and I think most Twitter users realize that. Viewing someone's thoughts about the big game three days after it happened is just a waste of time, really. So by accepting that updates are generally time sensitive, and coding for it, I've been able to finally keep up with status updates in a reasonable way. Instead of being distracted constantly (by using a client which pings every few minutes with new updates), or missing huge chunks of messages (by logging in to Twitter once in a while to see what was happening), or having to wade through *everything* posted (by subscribing to the raw feed of updates), I now get a nicely balanced number of updates that I can check out, which are still relatively "fresh", yet not overwhelming.
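In case you're curious about the mechanics, the buffer itself is nothing fancy - conceptually it's just a time-windowed query against the updates my reader stores (everything goes into a MySQL database, as I described in the original post). Here's a rough sketch of the idea, where the database, table and column names are made-up stand-ins rather than my actual schema:

# Hypothetical sketch of the five-hour buffer: only pull updates newer than
# five hours old, oldest first. All of the names below are invented.
mysql -N newsreader -e "
  SELECT author, body, created_at
  FROM updates
  WHERE created_at >= NOW() - INTERVAL 5 HOUR
  ORDER BY created_at;"

Anything older than the window simply never shows up on the reading page, which is the whole point.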

What's interesting is how retweets become very useful in this way as well. Rather than being an annoying repost of something without any real additional value, the retweet becomes useful because the original tweet has been *shifted forward in time*, which increases my chance of seeing it.

Now that I feel I've figured it out, I wonder when Twitter, Facebook and others will get it as well (if they haven't already and I just haven't noticed). Because right now, status updates are a sort of Social Ponzi Scheme, where new users have to be added all the time to make up for the older users who tire of trying to keep up with everything and eventually give up altogether. Any other type of filtering just doesn't work - even if you just focus on the "important" people or topics, eventually those numbers will get overwhelming as well. The only real solution is to consider all status updates as ephemeral, and then simply decide how much and when you want to jump into the stream and enjoy it before jumping out again. As the membership of social networking sites hits saturation levels, the growth is going to stop - if they don't figure out how to keep users engaged without it being overwhelming to do so, they're going to have problems.

Then again, I've never really understood social networks, so I may just not get it still. Regardless, my new system works pretty great for me so I'm happy. Time will tell what happens to the social networks.

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Send by mail Print  Save  Delicious 
Date: Monday, 17 May 2010 20:18

[image]

This year I decided to fix my horrible sleeping problems I've had for the past decade or so. A month ago I had an operation to take out my tonsils and fix my deviated septum to help alleviate my sleep apnea (a whole other long post, which I probably will never write as I don't actually want to think about the experience again). It was pretty extreme, but it seems to have helped a bit (no operation is a 100% cure). I've also done things like videotape myself sleeping to find an optimal pillow arrangement for breathing, bought one of those teeth molds to help hold my jaw in place, and lost some weight. Anything to avoid using a CPAP machine, which I also have but don't want to use as it sucks. It all seems to be helping, as I'm now sleeping at night and not using a machine to do it.

So now that I can sleep a little better, I wanted to start tracking my sleep patterns and improve my sleep even more. I did a search online, thinking about some of the neat new gadgets I've read about - FitBit, for example - but came across a 99 cent iPhone app called Sleep Cycle Alarm Clock. It looked very cool, and after reading a couple of reviews, I thought it was totally worth a buck to try it out and see what happens.

The way it works is by using the accelerometer in the iPhone to measure when you move around on your bed at night. You place it on the corner of your mattress, and then any time you roll over, or really move at all in any significant way, it assumes you're not in a deep sleep. It's meant to be an alarm clock - it tries to find a time within a 30 minute window in the morning that best matches your sleep patterns in which to wake you up - but I was more interested in the sleep graph. Am I actually getting deep sleep like I should? The graph won't be as accurate as something measuring your actual body fluctuations, but it's pretty cool.

Last night I tried it out, and you can see for yourself in the image above that it definitely gives a great indication of your sleep patterns. All those dips are times when I didn't move. I can also see a weird pattern during the deepest sleep when I wake up for a bit and then go back to sleep. I'd bet money that I changed into a bad position, started having trouble breathing because of that position and how deeply I was sleeping, and had to adjust. That's the sort of thing I want to start tracking more.

I'll probably invest in a FitBit as well, but I thought this app was definitely worth writing about as it's pretty ingenious, cheap and works well. I also wanted to document my ongoing battle with getting a good night's sleep, which is far from over. To be continued, as they say...

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Send by mail Print  Save  Delicious 
Date: Wednesday, 21 Apr 2010 22:54

[image]

So I'm at work, and my iPad is sitting on the desk beside my computer and I think to myself, "Gosh, I wish I had a cool Lego iPad stand like I do at home." Then I remembered the Bionicle kit I got a few months ago from one of the admins here - it was an extra left over from the Christmas party and she thought my son would like it. I didn't take it home though, as I figured I'd end up with the munchkin at work sometime and it'd be nice to have something to keep him busy just in case.

So I fished out the box and looked at the picture on the front of a spider thing with lots of legs. PERFECT! I opened it up, and after about 10 minutes of experimenting had a nicely stable iPad stand for work. W00t!

I took it apart again, and snapped some pictures of me putting it together in case you want to play along at home. I used kit #8977, but I'm sure there'd be other kits that'd work just as well as I had a bunch of pieces left over.

:-)

-Russ

[image]

[image]

[image]

[image]

[image]

[image]

Author: "Russell Beattie (russ@russellbeattie.com)"
Send by mail Print  Save  Delicious 
Date: Wednesday, 21 Apr 2010 06:53

[image]

I decided to make myself an iPad stand out of Lego after seeing some DIY stands recently, and I figured I'd share the results. My goal was to create a stand that was both minimal, yet sturdy. I didn't want to slap a few pieces together into some wobbly base that would either fall apart when I moved it, or worse, break apart while bearing the weight of my shiny new $500 gadget. But I also didn't want a multi-colored plasticky monstrosity sitting on my desk either. So here's what I did, so you can play along at home if you like. (The rest of the pictures are at the bottom of the post.)

First, I started out with a Lego road base piece so that there was a nice smooth section where the iPad can slide in without having to place it "just so" on the studs. I'm pretty sure we got that particular piece in a Firefighter kit a few years ago. We've moved on to Star Wars since then, so no one will miss that piece I'm sure. The nice thing is that because it's meant to go on the bottom of a bigger set and support the weight of a building, it's smooth on the bottom - so it won't scratch up a table top, nor does it slide around much.

After that I grabbed a bunch of thicker pieces that I felt would support the back of the iPad without separating from the base. The rounded brick pieces worked out perfectly - the curve at the back acts as a sort of arch (I'm guessing), so it's hard to "shear" it apart. Finally, I tapered the back up until the angle was just right, then topped it with a couple slanted pieces I found that fit perfectly.

For the front, I just threw a couple of black 6x1 bricks on the ridge, and topped them with smooth black caps. As you can see, when you look at it from the front, you can't even tell it's sitting on Legos. Though I may decide to make it more Lego-like (i.e. colorful with studs showing), I actually wanted to see how nice I could make it look first.

I'm very happy with it - it's sitting here on the desk right now and it works great. Nice and sturdy, and it blends perfectly from the front.

:-)

-Russ

[image]

[image]

[image]

[image]

[image]

Author: "Russell Beattie (russ@russellbeattie.com)"
Send by mail Print  Save  Delicious 
Date: Friday, 02 Apr 2010 17:45

Mag+ live with Popular Science+ from Bonnier on Vimeo.

I got an email this morning from the folks at Bonnier, the guys that created that really cool Mag+ tablet magazine demo that I wrote about last December. I assume they saw my post about it, and wanted to make sure that I knew about their new app for the iPad - a digital version of Popular Science.

This is incredibly, insanely cool. To go from that concept to an actual product in such a short time is just astounding to me. In December, stuff like this seemed to be something that should be possible, but for some reason hadn't been done yet and still seemed a ways off from being realized. It was a short post, but let me emphasize what I wrote then (months before the iPad was announced):

"... the size, shape and interactivity of the tablet mockup they have is emblematic of where all computing is going, I'm convinced. The simplicity of the device is just gorgeous - a big screen in a thin, light frame, with multi-touch actions for control. Perfect. Simple, but powerful."

And now the iPad is here and the Mag+ concept magazine is *real*. AMAZING. What a difference four months makes, no?

Welcome the future everyone!

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Send by mail Print  Save  Delicious 
Date: Thursday, 11 Mar 2010 04:52

[image]

This post is about my attempts to figure out how best to view Twitter and Facebook updates. I still haven't figured it out, and though I've been meaning to write this post for a while, I was hoping to have developed some sort of solution or system to expound upon, but I haven't.

Let me, instead, expound on what I've noticed so far.

First, let me describe how I use Twitter, and to a lesser extent, Facebook. With Twitter, I follow, at this moment, 245 accounts. None of these accounts have been added for any reason except for me thinking that what that person has tweeted in the past is interesting, and that I want to hear what they say. In other words, I haven't followed someone just because they followed me. Depending on your point of view, 245 people may seem like a lot or a little, but when you combine industry people, official accounts, friends, co-workers and contacts, it adds up rather quickly. (And I'll tell you the truth, I'm not a very social person. I'm sure the real "networkers" out there are following a lot more than this.)

So because I use my own custom news reader, my Twitter timeline feed has been stored in a MySQL database which I've been using to tweak my Tweet-reading experience. And now, I can look at it from a historical perspective as well: Since last November (the last time I purged the DB for whatever reason), I've gotten over 98,000 updates. In more specific terms, every 15 minutes, 24 hours a day, for the past 130 days, I've been storing every tweet I would normally get online.

98,000 tweets / 130 days ≈ 754 tweets a day on average.

That's a lot of updates in a little over four months (and that's not counting the Facebook updates, which I also read). In fact, if you look at the chart I made above, the average doesn't really show the real story - the number of tweets can vary between 600 on a slow day to well over 1,000 on a busy day. Except for the Christmas holidays, when everyone took a break, it seems. Some tweeters are more verbose, like Michael Gartenberg (who's tweeted 3850 times in that period, or around 30 times a day on average), and others are much less so, but regardless, the total number of tweets stays north of the 600 mark daily.
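(Getting those numbers out of the database is trivial, by the way - the chart above is basically just a count of stored rows per day. Something like the sketch below, where the database, table and column names are simplified stand-ins rather than my actual schema:)

# Hypothetical sketch: count stored updates per calendar day - essentially the
# data behind the chart above. The names below are invented.
mysql -N newsreader -e "
  SELECT DATE(created_at) AS day, COUNT(*) AS tweets
  FROM updates
  GROUP BY DATE(created_at)
  ORDER BY day;"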

Let me say, it's basically *impossible* to keep up with 600+ updates a day. I've tried.

I'm talking about reading each and every tweet from each of my contacts. Not filtering in any way, but simply re-organizing and re-formatting the tweets to see if there's some magic way of keeping up. The best I've been able to do so far is to organize the tweets on their own page, sorted by contact. Here's a snapshot of what that looks like. By organizing tweets and Facebook updates by person, they become almost blog-like in their nature. It definitely helps for skimming, so if someone I know who hasn't posted for a while shows up, I'll see their pic and can stop and read more closely. I can also see when someone is just being overly verbose, and skim all their tweets at once.

I've also tried:

  • To organize the tweets based on hourly and half-hourly increments.
  • Adding Javascript so I don't see an individual's tweets unless I click to expand that section.
  • Highlighting the updates that have links, calling them out for better scanning
  • Formatting tweets like one big paragraph per person (getting rid of the date)

In addition to ways of seeing *everything*, I've also tried filtering:

  • Based on "favorites", ranked from 1 to 5, with the top ranked users sections expanded, and the rest collapsed.
  • Filtering to show only those tweets with links
  • Filtering based on keywords I find interesting (mobile, etc.)

None of these options really works, for various reasons. Either you miss context, or you miss the flow of a conversation, or you end up just focusing on a smaller set of users and might as well unfollow the rest anyways.

So, if it's impossible to *really* keep up with a moderate number of users, the question in my mind is: how the hell are Twitter and Facebook continuing to do so well, and increase in popularity? Well, I've come up with a couple of terms that I think explain how these services actually work. The first is what I'm calling "Phased Attention" - a time period in which a user views a stream of updates filled with "Transient Information" - data that by its very nature is expected to come and go.

To be more specific, "Phased Attention" is pretty much how anyone who uses a Twitter client deals with the constant flow of updates - they turn it on, participate, and then turn it off. Anything posted outside that time period is of little concern. Unlike my attempts to archive and read every tweet from every user, every day, it seems the only real way to consume the constant flow of data from contacts, data sources, etc. is during the time when that information is freshest, not later.

"Transient Information" is information that you don't necessarily need to worry about. If it was, it would (and should be) sent by different means than a status update stream. This is an important concept that I think we're starting to see is fundamental to Twitter and Facebook. First, because trying to force-fit important information into the stream is just a *bad* idea, and secondly, because we're learning how much transient information is actually out there. What my friend ate for lunch, what someone's opinion of a TV show is, or even stock prices and sports scores are all bits of info that are valuable only for a certain amount of time, to a certain audience.

It may seem obvious, but I think these concepts are a fundamental change from how we've viewed communication and information consumption up until now. Email is expected to be durable for the most part, and messages build up in your inbox until you actively do something about them. This is similar to the way news feeds have been read up until now, as well. Emails and news items don't get deleted after a certain amount of time, do they? (Though many of us wish they would). However, that is essentially what happens to tweets when they disappear "below the fold" - they might as well be gone forever.

On the flip side of this, chat - including group chats and IM - expects that all participants actively and synchronously take part; you wouldn't bother typing if there was no one around to respond to you. (Though IRC might be the exception here.) These systems expect the *participants* to be durable: if someone drops from an IM session in the middle of a conversation, it usually means the end of the conversation.

Basically, these two ideas mean that Twitter and Facebook are fundamentally different in that users expect neither the participants nor the messages themselves to be durable.

Let's imagine how an average person signs up to Twitter and starts using it for the first time. The 245 people I follow average about 5 posts a day - so assuming this new user has typical friends, the first few days or so of using Twitter are very similar to other services she may have used before. She posts an update, her friends see it, and they respond, etc. After she adds her 10th friend, however, she's now getting nearly 50 updates a day, and she's noticing that her friends don't always respond to her tweets like they did at the beginning. But enough do that she keeps on updating her status - it's fun and cathartic, thinking that there are all these people out there who are interested in what she's doing. Soon, however, she's following or friended a few dozen people - including some services like CNN, or her favorite singer or actor - and a few dozen more are following her back. The number of updates has skyrocketed to nearly 200 a day or more. Every once in a while she'll "catch up" on what a friend wrote over the weekend or something, but in general, more and more messages fly by without her seeing them. But that's ok, because when she's online, she responds to or retweets the ones she thinks are interesting, and also sees responses from one or two of the people following her. This keeps her interest going like a Pavlovian randomized reward, and the process continues.

The key takeaway from all this analysis is simply a better understanding of the true nature of these new data streams:

  • You can't consume all the data in a typical update stream. There's basically no way of organizing, sorting or prioritizing it that would let you see it *all*. The best you can do is filter, which by its very definition means you're missing something.
  • Any stream which contains mostly Transient Information - like status updates - can't replace any medium that contains vital information. The chances that the updates will be missed - *even by those subscribed to them* - are just too high.
  • Phased Attention is the 21st century's answer to Information Overload. As the data volume increases, more of it will simply be skimmed, missed or outright ignored outside a set time-frame or context. What's different is that unlike in years past, this is becoming an accepted, and expected way of dealing with communication and information, and it's bound to bleed back into other technologies such as email.

I have a suspicion that the novelty of the transient information stream is going to wear off. I'm not sure when, but it feels to me like a pyramid scheme at the moment, driven by the sheer number of people involved. For every one person who becomes disillusioned with Twitter due to the lack of interaction or return on time invested, there are two people who replace them. But there comes a point when that growth has to end, because there really is a finite number of people out there. For example, I've got roughly 1200 people following me on Twitter. If I update enough, eventually a few of them will actually see my tweets and respond. If they don't, it's pretty much a write-only medium, which isn't very compelling. After a while, as fewer and fewer people bother responding and aren't replaced by others, the desire to update your status will simply disappear. Retweets, likes, favorites, etc. help this a bit, as they lower the bar in terms of effort to give feedback, but even that is eventually going to fade in value.

So where does it all go from here? I don't know. I'm still struggling to understand the growth of social networks, and every day I'm starting to wonder if the emperor has any clothes on or not. Something doesn't seem right, I just can't figure out exactly what it is. The idea that people are able and willing to use a Twitter client all day, with multiple accounts and windows, subscribing to hundreds of people's updates, posting 30 or more times a day, and are still able to get anything done in their life seems far-fetched to me. Are all those people really content with missing 90% of the information streaming through? I guess maybe they are, because that's the only way you'd be able to manage it. This just doesn't seem right at all.

Or maybe it's just me. I'm just not sure...

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Send by mail Print  Save  Delicious 
Date: Thursday, 28 Jan 2010 08:29

[image]

Like every other techy out there, I also have an opinion about the newly launched iPad. I'm not going to wade into the minutiae of various hardware or software decisions like the wonky micro SIM and the lack of SD cards or USB ports, nor the lack of background tasks or Flash support, or the pricing, etc. Instead, I want to focus on the bigger picture, and came up with a couple of thoughts I wanted to share.

First, if you've read this blog before, you already know that I'm in love with the idea of tablets as the next generation of computing devices, so I was looking forward to Apple's announcement as much or more than anyone. In fact, I've been expecting a tablet from Apple for, oh, about 7 years now. No, really. I wrote this post back in 2003 - Urgh! Steve, we all want a Mac Tablet! - when Steve gave an interview categorically denying any interest in tablets (or mobile phones), and have been patiently waiting since then. Well, it took the better part of a decade, but here it is.

But is it really what we wanted?

I think there's a lot of disappointment out there that the iPad is not in fact a Mac Tablet (i.e. a "MacPad"), but is instead essentially a larger iPhone. What I mean is that rather than shrink the open, Intel-based MacBook platform down to a smaller, touch-screen focused device - Apple chose instead to enlarge the dedicated, closed platform of the iPhone. It's an interesting - and probably obvious - choice for Apple and Jobs, but one that's causing some consternation for the rest of us who were really looking forward to a much different machine.

I think all of us really wanted to see a MacPad, not an iPad.

Not a peripheral gadget, but a primary computing device that can be used by touch if desired. Something that can function on a desktop with a keyboard and mouse, and then be picked up and tossed into a bag, or to be used while sitting down on a couch or in bed with only the touch screen for the interface.

That's not to say the iPad is somehow fundamentally broken because it's not a general purpose computer. Far from it - I'd still use it if I had one in my house right now. I've been using smaller dedicated web tablets for quite a few years now starting with the Nokia Maemo devices such as the N810 and more recently the Archos 5 Android web tablet. They're great to have when you want to get away from your computer, but still consume what is an ever-increasing amount of digital content. I wouldn't use those devices as my primary computers by any stretch - even on short trips, I'd rather take along a netbook - but to browse the web, listen to music, have access to IM or read eBooks? They're a fantastic gadget for this stuff. The Apple iPad will be a larger, and in some ways, nicer version of these devices, so for many people it'll be perfect.

But again, I'm not sure this sort of limited functionality is what most of us really want in a next generation computing device - which is what tablets really are meant to be. Not only in terms of use cases - where does a 10" computer fit in, really, where does it normally live and charge, etc. - but also just based on cost. Most people don't have $500 to spend on a computer and another $500 to spend on a purposefully-limited adjunct tablet device. They're going to want to get the one device that does it all - from work, to communication, to eBooks, to entertainment.

There's something else though, beyond my gut feeling of what consumers want (where my opinion is as useless as anyone else's) - it's that Apple's competitors in this burgeoning category aren't as far behind as they were when Apple launched the iPhone.

It's really, really hard to jump exponentially forward in technology - a 10x improvement only happens once every generation or so. Apple's already done it twice - once with the Mac GUI in the 1980s and again with the iPhone in the 2000s. To do it again so soon is asking quite a lot from any company - technology just doesn't work that way. When the iPhone was announced, Jobs said that they were at least 3 years ahead of their competitors. That turned out to be incredibly accurate - it's 2010 and we're just now seeing devices that compete with the usability and functionality of the iPhone launched back in 2007. It was a huge leap forward in technology in a variety of ways, and as a result has created a massive ecosystem around it, and a huge following as well.

The iPad, however, is only an incremental improvement on the iPhone's huge leap forward. It is not, by any stretch, 3 years ahead of its competitors. And in many ways - the ability to install standard software, for example - it's already behind.

The fact is that *every* single feature that was announced today in terms of the iPad's core functionality can already be done on the web tablets sitting in my house. Right now. Yes, my Archos 5 has only half the screen size (though 3/4s the resolution), but that will be changing quickly I'm sure. And it's open. And it runs Flash. And this is just from the "gadget" guys. How easy will it be for traditional laptop and "netbook" makers to slap a touch screen on their devices, get rid of their keyboards and create general purpose "netpads" that have all the functionality of the iPad and more? Quite easy. We've already seen a variety of PC-tablets announced at CES, including HP's very lustworthy "slate" computer.

In other words, Apple is wading into a market that's got a lot of players in it already. They did this before with the iPod and cleaned house, so hey it may not really mean much. But I think in this category there's a big difference. In many ways, it brings them back into direct competition with Microsoft again, who's been working on Tablet versions of their OS for years, but now also add in Nokia and Google as well. These are not inconsequential competitors.

Anyways, in summary, I think if Apple had come out with a MacPad today, they would have provided that one device we've all been waiting for, kept well ahead of the market, and been a real force to reckon with. But by incrementally improving on the iPhone platform, I think they may have consigned themselves to a niche market instead.

Seriously - the first time you go to Starbucks and see a guy sitting with his legs crossed, sipping his latte and browsing his iPad, what are you going to think? "Poseur". The iPhone had that same effect at first, until everyone realized how amazing and useful it really was - not only that but it replaced something you already had, your old cruddy mobile phone. The iPad though? Is an add-on, an extra, a not-really-needed. And thus just screams to the world, "I have more money than brains, look how cool I think I am!" That's going to be a real perception problem I think.

All that said, you know I'm going to buy one, right? Right. :-)

-Russ

Author: "Russell Beattie (russ@russellbeattie.com)"
Send by mail Print  Save  Delicious 
» © All content and copyrights belong to their respective authors.«
» © FeedShow - Online RSS Feeds Reader