

Date: Friday, 22 Jun 2012 07:00

Note: Strictly for fans of The Wire. If you haven’t seen Season 3, you should go see it before reading further.

I sat in meetings that were all about taking over corners. How many corners do we need? —Stringer Bell

In Season Three of The Wire, we see the downfall of the Barksdale organization and the rise of Marlo Stanfield and his crew. This is a stunning fall for the Barksdale crew, who had put themselves in a great position of power by the end of Season Two.

What happened?

On the surface, this seems solely due to a series of tactical mistakes by, and unlucky breaks for, Avon Barksdale and Stringer Bell.

  • The reluctance of Stringer and the New Day Co-Op to use violence against Marlo right at the beginning.
  • Marlo’s paranoia and tactical superiority, which let him turn the tables on Avon’s assassination attempt.
  • Unlucky timing at the end of Season Three, where Slim Charles is left waiting for the go-ahead to ‘get’ Marlo at the rim shop.

None of this explains why a superior organization, with numerical superiority and a better ‘product’ from Prop Joe, was taken down in such a short time by an upstart.

What happened to the Barksdale organization is the result of a few strategic mistakes, committed long before the first gunshot was fired between them and Marlo.

Market disruption

The blowing-up of the Franklin Terrace towers represents a fundamental disruption in the market. The Barksdales’ market share disappeared in a single explosion. Since buildings don’t get blown up on a whim, we can assume Stringer knew this was coming and had time to prepare.

In his book ‘The Innovator’s Dilemma’, Clayton Christensen writes, “Disruption has a paralyzing effect on industry leaders. They are always motivated to go up-market, almost never motivated to defend the new or low-end markets that the disruptors find attractive.”

Stringer’s strategy of pushing a higher quality product rather than holding down the right territory, or winning some of it from Marlo, ultimately proved deeply flawed. Just as big corporations don’t want to tangle with a small startup over a low-margin product, Stringer didn’t want to engage with Marlo and his peers.

Ceding the low-end

Christensen’s book outlines the fall of the integrated steel mills. Instead of producing all grades of steel (with very different margins), they decided to leave the low-margin steel to the upstart mini-mills. This was a great short-term decision: margins went up, they owned the high end of the market, and the mini-mills were left with the unattractive low-grade steel business. Over time, however, the mini-mills got better at producing higher grade steel, causing the bigger mills to retreat to higher and higher margin segments. Ultimately, they had nowhere left to hide. Closer to home, we’ve seen this play out over and over again in the software world.

In The Wire, the Barksdale crew grew comfortable with the margins from the Franklin Terrace towers. Though they had the resources (‘the muscle’) to go hold down corners outside the towers in West Baltimore, they chose not to, letting people like Marlo grow in power and win uncontested territory. And when they suddenly needed those corners, it was too late.

At Microsoft, you saw this play out with Internet Explorer. After winning the browser wars, Microsoft let the browser team erode and ignored IE development. When they needed to respond to Firefox, they were suddenly faced with turning around an old, creaky codebase with a nonexistent team.

Stages of failure

Jim Collins outlines the many stages of an incumbent’s downfall in ‘How the Mighty Fall’. You can see the Barksdale organization go through each of the stages until their ultimate downfall.

  1. The Hubris of Success - Stringer assuming that their superior product and muscle are the only things that count.
  2. Undisciplined Pursuit of More - Stringer’s attempt to pivot into real estate development instead of focusing on their core competencies. Like many an organization attempting to expand, he not only failed to understand the dynamics of the new business he was entering, he also let himself be distracted from his core business.
  3. Denial of Risk and Peril - Not taking Marlo seriously as a threat until it was too late. Tactical sloppiness resulting in Marlo almost taking out Avon.
  4. Grasping for Salvation - In a typical corporation, this is where the CEO brings in expensive consultants, fires a bunch of people and tries to execute a turnaround (or sues a leading social networking company over patents?). Here on The Wire, this is Stringer trying to take out Senator Clay Davis, not understanding the real threats to the organization.
  5. Capitulation to Irrelevance or Death - The events of the Season Three finale; by Season Four, Marlo is firmly in control.

Inconsistent strategy and infighting

Even with all these mistakes, the Barksdale crew could have survived if they had focused on external threats and followed one consistent strategy.

However, like many large organizations, the leaders were distracted by the power struggle at the top and pursued two conflicting strategies at the same time. All they had to do was put all their resources behind either Stringer’s ‘co-op, let’s move into real estate’ play or Avon’s street warfare. The two leaders tried to do both, and failed at both.

Ultimately, the Barksdales did to themselves what once great companies like DEC, Sun, RIM, Nokia and many, many others did to themselves.

Ironically, it is Stringer Bell, smarter and more qualified than most corporate CEOs, who commits many classic CEO errors and dooms his organization to irrelevance.

Date: Saturday, 07 Jan 2012 08:00

Over the years, I’ve come to hold some strong opinions on testing, testers and the entire business of quality assurance. Inspired by this post on Facebook’s testing, I wanted to write this down so I can point people to it. Some of this is controversial. In fact, even mentioning some of this in conversation has caused people to flip the bozo bit on me.

  1. Most product teams don’t need a separate testing role. And even if they do, the ratio of full-time devs to full-time testers should be on the order of 20:1 or higher. For evidence, look no further than some of the most successful engineering teams of all time. Whether it be the Facebook of today or the original NT team from over two decades ago, some great software products have come from teams with few or no testers.

  2. Developers need to test their own code. Period. The actual mechanism is unimportant. It could be unit tests, full-fledged automated tests, manual button mashing or some combination thereof. If your developers can’t, won’t or somehow think it’s ‘beneath them’, you need better developers.

  3. Here’s a politically incorrect thing to say. Several large engineering organizations hire testers from the pool of folks who couldn’t cut it as developers. I’ve been in (or heard of) a scary number of dev interviews where somebody has said, “(S)he is not good enough to be a dev. Maybe a test role?” This leads to the widely held but rarely spoken perception that someone in test is not as smart as someone in dev. And because this practice is so widespread, the few insanely smart people who are genuinely passionate about quality and testing get a raw deal. I know this because I’ve worked with a few of them.

  4. Tracking code coverage is dangerous. Since it is so easily measurable, it often becomes a proxy for the actual goal - building quality software. If your dev team is spending their cycles writing silly tests to hit every rarely used code path - just because it shows up in some status report - you have a problem. Code coverage is one of many signals that need to be weighed, but since it is so seductively numeric in nature, it drowns out many others. It falls victim to Goodhart’s law (see the sketch after this list).

  5. I’ve yet to see well-written test code in organizations that have testers separate from developers. Sadly, this is accepted as a fact of life when it doesn’t need to be.

  6. Just as metrics like code coverage are overused, other quality signals are underused. Signals like: how many support emails has this generated? Actually using the product all the time yourself and detecting issues. Analyzing logs from production and customer installations. All the other tactics mentioned in the Facebook post at the top.

  7. A common tactic by engineering leaders is to try and reduce the size of their testing orgs by moving to automated testing. This is a huge mistake. If you have a user-facing product, you absolutely need manual eyes on it. Sit down for coffee with someone from the Windows organization at Microsoft and they’ll blame the focus on automated testing for a lot of the issues in Windows Vista. The mistake here is assuming that you need a full-time test person to use the product.
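
To make #4 concrete, here’s a minimal sketch (in Python, with made-up names) of the kind of test that coverage-chasing produces. It executes an error path, so the coverage report lights up, but it verifies nothing:

    def load_config(path):
        # Stand-in for real production code with an error path.
        with open(path) as f:
            return f.read()

    def test_load_config_missing_file():
        # Runs the error path, so every line above counts as "covered",
        # but nothing about actual behavior is asserted.
        try:
            load_config("/nonexistent/path")
        except OSError:
            pass  # swallowed: coverage goes up, confidence doesn't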

Some disclaimers

  • Some of my best friends and the smartest people I’ve met are from the QA world. So this is by no means an indictment of everyone who is in the testing profession. You know who you are. :)
  • There are always exceptions. I know of several product organizations where a separate testing function is necessary (for example: hardware, mission critical products, nuclear reactors, huge legacy installed bases, etc). But most of the above should hold true for anyone shipping a typical internet service.
  • If you’re a small startup, none of this applies since you don’t have the luxury of hiring full-time testers anyhow.
Date: Sunday, 01 Jan 2012 19:17

2011 was a weird year.

  • Aarthi & I got featured in the NYTimes which was great because my mom got to see my photo in the paper back in India (the New York Times gets syndicated everywhere). I believe every visitor to my mom’s house in Chennai still gets shown a clipping.
  • Worked on some very interesting things in the cloud space. Also got to run multiple teams for the first time at Yahoo, which has been a rewarding learning experience.
  • Quit the only company I’ve ever worked for, a place I’ve called home for six years. Still find myself saying “we” when it comes to Microsoft from time to time.
  • Moved twice (Seattle->Palo Alto, Palo Alto->San Francisco). I don’t want to see a moving box again for a very long time.
  • Met a ton of interesting people in the Bay Area. Folks at Yahoo, VCs and angels in endless coffee shop meetings, some great hackers and product people, some bozos and everyone in between. I understand the ‘valley’ so much more now.
  • Travelled more than I ever have before, including my first trip to Sweden (which was awesome) and New York (need to find a way to live there someday).
  • Had someone I idolized pass away. We’ll always miss you, Steve.
  • Got to hang out with and go on crazy adventures with this person all over the world. Something we need to make more time for in 2012.

I was looking through some old email and I saw a mail I had sent a friend in February 2011 saying “Umm..yeah..I should really think about moving to California sometime”. It’s surreal to see how much has changed in such a short time.

Date: Wednesday, 28 Dec 2011 06:49
“bubblegum has all the cool filters and social options iOS and Android users have been enjoying with Instagram, but integrates beautifully into the WP7 platform.”

- Bubblegum just got ranked one of the top 15 mobile apps of 2011 by Mashable.
Date: Sunday, 18 Dec 2011 21:02

Update: See Carmack’s response to this post.

I wanted to post this as a comment on this thread about John Carmack on HN.

John Carmack is one of my heroes in the tech world. Not just because of his technical accomplishments or his part in creating games that I’ve spent years of my life on, but for his single-minded obsession with his craft after two decades. Every time he is on stage, he is so obviously in love with what he does that it is infectious.

That was one of my favorite things about Dave Cutler back at Microsoft.

Here’s this legendary figure pushing 70 who has accomplished more than most developers dream of. But he showed up at work every single day and made checkins every single day - including Dec 25th and Jan 1st, something he was proud of.

I was once at Microsoft campus late on a Sunday and walked past his office. Spotting the familiar blue hue from his office, I looked inside and saw him debugging something.

“Hey Dave,” I said, “ever get bored of ntos and ntos/ke? You’ve been coding there for…for like 20 years now?”

ntos is the core part of the Windows NT source tree and ke is where the kernel code lives - where pretty much every source file has a header with Cutler’s name on it and a created date in the 80s.

He turned slowly and looked me over, obviously not thrilled about this pipsqueak program manager interrupting him in the ‘zone’. He then smiled and said, “I love this stuff. What else do you want me to do? Be on a boat somewhere?”

With that, he turned back to his debugger and went back to work.

Date: Wednesday, 14 Dec 2011 19:01

Back in 2003, Miguel was stopped from holding a Mono BoF at PDC. I can’t find the blog post but I remember Robert Scoble (who was a one-man PR machine for PDC 2003) doing some damage control.

“I just received a letter from the INETA PDC organizers informing me that the Mono BOF would not be happening. I was looking forward to discuss Mono. The BOF suggestion had a pretty good rating and had plenty of voters. I guess it was a far stretch to have Microsoft have a BOF on Mono :-)”

Fast forward to 2011: Microsoft ships Kinectimals on iOS, built on Mono.

On a tangential note, it was an interesting trip down memory lane to read Miguel’s conference notes from then. The world definitely didn’t turn out the way MSFT expected.

Date: Monday, 05 Dec 2011 00:39
Photo data:

Wrote up a little utility to spelunk through my thousands of photos. See the longer post on my other blog.

Date: Sunday, 04 Dec 2011 08:00

I have over 30GB of photos collected over many years. Most of them were taken by me, some by Aarthi, some by professional photographers (my wedding album). But that’s just the photos I know of. While doing some photo backup, I realized I really didn’t know what I had in my ever growing photo directory. So like any other self-respecting red-blooded geek, I saw only one sane option: write some hacked-together code which will make for a fun blog post and never be looked at again.

Right.

EXIF, stage right

I needed to grab the EXIF data out of these images. A little bit of Googling and StackOverflow-ing later, I had found the right Python Imaging Library incantation to read EXIF data out.

However, it turns out camera manufacturers like to get creative with their EXIF data. I came across several weird numeric fields, malformed data, files that made PIL choke, etc. In the end, I came up with a sane list of EXIF tags and hacked together a script to walk through my tree of photos and dump the data into a CSV file.
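
Here’s a minimal sketch of the shape such a script takes, assuming PIL and a tree of JPEGs (the WANTED tag list and directory name are illustrative picks, not necessarily what my actual script uses):

    import csv, os
    from PIL import Image
    from PIL.ExifTags import TAGS

    WANTED = ["Make", "Model", "Software", "DateTime"]  # the 'sane list' of tags

    def exif_for(path):
        # Return a {tag_name: value} dict for one image, or {} if PIL chokes.
        try:
            raw = Image.open(path)._getexif() or {}
        except Exception:
            return {}
        named = {TAGS.get(tag_id, tag_id): value for tag_id, value in raw.items()}
        return {tag: named.get(tag, "") for tag in WANTED}

    with open("exifdata.csv", "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=["Path"] + WANTED)
        writer.writeheader()
        for root, _, files in os.walk("photos"):
            for name in files:
                if name.lower().endswith((".jpg", ".jpeg")):
                    path = os.path.join(root, name)
                    row = exif_for(path)
                    row["Path"] = path
                    writer.writerow(row)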

You can find my actual script here if you’re interested. And here’s the data from my photo collection - a 3 MB CSV.

Next up, time to break out R. Unfortunately, I’m as good at R as I am at Parkour. Basically, I have no idea what I’m doing. However, getting the basics right proved to be trivial. I’m probably getting things wildly wrong below so please do correct me. My secret wish is that someone uses the data to do much more interesting things with it.

The one where I pretend to know statistics

First, I loaded the data into a data frame in R. The latin1 encoding below takes care of all the character mapping weirdness you expect with raw data from the wild.

> data <- read.csv("~/code/photostats/exifdata.csv", fileEncoding = "latin1")

After that, you can use the nifty summary() function to get a good overview of your data. The output of summary(data) is too big to paste here - you can find it here.

Now we can play with the data and try and answer some questions which I’m going to make up for the sake of justifying the work put in so far. For example, what are all the pieces of software that have mucked with my photos over the years? There was a zoo - from well known apps to weirdo firmware versions. You can get them by running the below command.

> unique(data$Software)

A far more interesting question - what camera do I like to take photos with? The answer is overwhelmingly obvious - my iPhones (see the bottom right of the table below), with 1882 out of my ~9100 photos. However, I have over 1400 photos of unknown origin which I need to dig into further. I suspect that these are from my wedding photos.

> sort(summary(data$Model))

Pretty pictures

What I really wanted to dig into were my photo-taking habits - what drives me to take photos. For example, here’s a graph of when these photos were taken. Surprisingly, I’ve stayed fairly consistent over the years, with expected spikes at special events and trips (that spike in September 2010 being a huge collection of photos from my wedding). To make the gorgeous graph below (really, isn’t it breathtaking?), I first converted the dates into something R could understand and then made a histogram, breaking on every month.

dates = strptime(data$DateTime, "%Y:%m:%d %H:%M:%S")
hist(dates, "months", col="lightblue",freq=TRUE)

A similarly interesting question: which day of the week do I find myself taking photos on?

barplot(table(weekdays(dates)))

As expected, most of my photo taking happens over the weekends. I suspect my wedding photos (which happened on a weekend) skew this a bit and the distribution wouldn’t be so uneven without it.

Now, I’m obviously barely scratching the surface here. I’ve just started getting serious about my DSLR, so in some time I can compile some interesting stats on ISO, aperture, etc. There’s far more behavioral data I can get out of it (in what situations do I prefer my iPhone over other cameras?). And then I can always throw OpenCV into the mix and figure out who/what is inside the photos I take. Now that should be real fun.

Chit chat
Date: Saturday, 01 Oct 2011 07:29

For years, I started all my meetings, big or small, with smalltalk. I would chat about the weather or crack wise about some current news event or TV show - pretty much anything but the agenda of the meeting at hand for the first few minutes. It wasn’t something I did deliberately; I just happen to be a talkative chap.

A few months ago, I started watching a lot of Mad Men. I was struck by how crisp Don Draper was in his office dealings. He always seemed to jump into his office meetings without any preamble. Inspired by Jon Hamm, I switched to diving into the meeting agenda almost the second I walked into a meeting, be it a one on one or a larger gathering.

Surprisingly, my meetings, both at work and outside work, have gotten worse without any small talk, more noticeably so when I don’t know the other person beforehand. I started mixing things up, leading off some meetings with more chit-chat and some with less. I now have enough empirical evidence to convince myself that meetings that start off with some polite chatter tend to be more fun, agreeable and productive than ones without. With one on ones, the effect is even more pronounced.

I have no idea whether this is due to my conversational style or whether this is a universal truth. My theory is that the polite chit-chat is the verbal equivalent of a handshake and helps put the other person at ease (I also tend to be a naturally curious person, so I don’t have to fake it).

Whatever it is, don’t be surprised if I start off with some inane wisecrack the next time you run into me.

[Edit: Aarthi says she has observed the exact opposite effect and that her meetings with no chit-chat tend to go better. So much for my universal theory, then.]

Date: Thursday, 29 Sep 2011 05:15

I sent out a mail at work by mistake today which misspelt ‘public’ as ‘pubic’. I knew it the moment I hit send, but I couldn’t stop the mail quickly enough. Of course, we all had a laugh about it, but it got me thinking about how archaic email is as a medium compared to what I’m used to with Facebook, Google+, Twitter or any modern discussion app.

Email doesn’t have to be ‘email’. It could be as good as any modern app, given that it is mostly used the same way - common server-side software and a few client-side apps speaking public APIs/protocols.

Here’s what I want

  1. The ability to unsend email and have it deleted everywhere. Not the silly ‘recall’ feature in Exchange/Outlook that guarantees your readers will be hunting down what the original email was. Gmail has a light-weight version of this in their labs but you can’t genuinely delete/unsend something like you can with Twitter, say.

  2. The ability to edit email after it is sent.

  3. Change the list of people who can see an email thread at any time. If you can change the viewership for your Facebook posts at any time, why not for your email conversations?

  4. Permalinks to email threads so you can point to a live version instead of attaching a static version (I believe shortmail does this already, Google Wave used to do this too).

Of course, all this only works when you control both the client and the server, but in most corporate environments you already do - all users run on the same server-side software (Exchange, Notes, etc.) and the same few client apps (Outlook, Mail.app). You only need to speak plain old ‘email’ when a message leaves your controlled environment for an external address, at which point you fall back to today’s email experience.

Thoughts?

Date: Sunday, 18 Sep 2011 08:52

One of the occupational hazards of working at Microsoft was attending offsites. These were 2-3 day affairs, typically spent cloistered in an out-of-the-way Washington resort with a bunch of execs, watching endless sessions of PowerPoint. At one such shindig a few years ago, one of the attractions was a talk from a couple of external speakers, both local VCs. The talk covered certain things Microsoft could be doing better in particular areas (I’m being deliberately vague here to honor confidentiality and besides, the details aren’t pertinent).

One of the VCs threw up a slide at the end of his deck to summarize his talk. It had the following sentence in bold, which made the room break into applause.

Don’t be so f-king strategic

All large companies (and I do mean all - this is not a post about Microsoft) tend to be in love with having the right ‘strategy’ in place before doing anything. There are reams and reams of text written on what exactly strategy is and how to go about having a good one. Some of it is actually quite good (for example, Porter’s work on the five forces). You could often get rapped on the knuckles (or worse) for being ‘off-strategy’.

Don’t get me wrong. Good strategy combined with good execution is a joy to watch (case in point - Apple over the last decade). The last thing you would want is people off doing their own thing and being all ‘off-strategy’ and rebellious.

But here’s the problem.

You’re not Steve Jobs and your organization is not Apple. And your well-thought-out strategy is probably terrible.

If there’s one thing I’ve learnt in the corporate world, it is that a staggering amount of ‘strategic analysis’ is nonsense and guesswork. There’s nothing wrong in admitting that. Figuring out the market, what the future looks like, what users want or, heck, even what your own company can do is hard. Bloody hard. Eric Ries says that all startups are experiments; the same can be said for any company in a fast-changing industry too.

Most times, you don’t even know what the right thing to do is until you have actually gone out and tried a bunch. Experimenting might be hard if you’re launching spacecraft (and even that doesn’t stop Elon Musk or Jeff Bezos). But if you’re in the software world, there is no excuse for not building a bunch. For not trying a bunch.

Building stuff and getting people to use it will always lead to better results than sitting in a bunch of meetings with over-paid consultants and trying to extrapolate from various signals and trends on what people might want and what you should be doing. Even if you had the right strategy in place at one point in time, that isn’t good enough. The world changes so quickly that you might need to do a 180-degree turn in a matter of months to react to changing user behavior or market trends. Top-down strategic planning doesn’t deal well with this.

What you need are many experiments in parallel. Not all of them need the same amount of resources and not all need to be released to the public. But you absolutely do need people working on crazy, random things. It doesn’t matter whether you call it 20% time, whether you call it a research department or whether it is just what all your employees do on weekends. But it needs to happen in some form.

This only works if your corporate structure is built with the flexibility to do random things. Especially things which are ‘off-strategy’.

If you’re a mining corporation in Minnesota, allowing an employee to experiment with adhesives might wind up with you revolutionizing the stationery industry (this happened).

If you were the world’s largest software company in 2005 and your strategy was to sell phones to enterprises, going off-strategy to understand and build what normal consumers wanted might have saved you years later (this didn’t happen).

So go out there. Try random things. Don’t be so f*king strategic.

Date: Thursday, 15 Sep 2011 05:44


I almost fell out of my chair when I saw this in the Windows 8 boot process. I would love to know who got legal at MSFT to agree to such a friendly experience.

Date: Friday, 26 Aug 2011 06:52


My current wallpaper

Date: Sunday, 07 Aug 2011 07:00

This is one of those posts meant to save my future self some time when I have to figure all of this out again from scratch.

I spend a lot of time in coffee shops and public places with unsecured wifi. Unsecured wifi scares the bejeesus out of me, so I wanted to figure out a way to secure any traffic going through it. It would also be nice to access things on my home network. It turns out there are a million different ways to do this, and I found one that worked for me. Here are the constraints I imposed.

  • It should be secure (duh!) for coffee-shop, public wifi browsing. This is not designed to hold up to DefCon/Black Hat conference wifi.
  • It should work from anywhere in the world.
  • It shouldn’t require me to have any computing devices booted up and running at home apart from my wifi router running a DD-WRT build.
  • It should work on all my computing devices, especially on iOS.
  • It shouldn’t use any external VPN/SSH services. No good reason apart from the fact that I’m just masochistic about these things.

If you don’t have these constraints, there are many different ways to do this. Here are some alternate options.

Alternate paths

  • If you are only using laptops, you should just use SSH, using the excellent instructions here. I still use this when I’m on my MBP.
  • If you’re OK with using an external service, you should use something like LogMeIn Hamachi, which is an excellent product and more secure than the setup I lay out below.
  • If you’re OK with not being able to use this from non-jailbroken iOS devices, you should use OpenVPN instead of PPTP as I do below. That is more secure but not supported by iOS out of the box.
  • If you’re OK with having a machine running apart from your DD-WRT router, there are several options. For example, there are tons of VPN servers that will let you set up an OpenVPN or an L2TP/PPTP server (both protocols supported by iOS out of the box). See this comparison of the various protocols.

But if you happen to have the specific set of constraints I do and like DIY hacks, read on.

DO NOT SKIP - IMPORTANT - Security risks of using PPTP

VPNs can be created using a multitude of protocols and the one we are going to use, PPTP, is the most insecure of the lot. Wait, what? Why are we picking the most insecure one if the whole purpose of the exercise is to make internet usage more secure? Worse, by using something insecure, we could let somebody get into our home network and rampage around. If you’re not going to use a long passphrase/password, you shouldn’t be doing this.

Here’s why I picked PPTP and why I believe using it with very long passwords/passphrases is acceptable.

  1. OpenVPN is arguably the most secure solution, but iOS doesn’t support it out of the box. iOS does support L2TP, but DD-WRT doesn’t. So we’re stuck with PPTP. If you’re willing to run a server at home, you should be using L2TP.
  2. PPTP’s security increases with long passwords, since the attacks against it are typically dictionary-based. So make sure you use a long password.
  3. And finally, chances are low that an attacker at a public wifi station is going to put in the effort to go after you. If that isn’t true, you’re in trouble anyway.

If you don’t understand what I’m talking about or if you don’t agree, you shouldn’t be doing this.

END SECURITY SECTION

Setting up your DD-WRT wifi router as a VPN server

  • If you don’t have DD-WRT installed on your wifi router, stop reading right now and go install it. It will not only give you all sorts of extra features you never knew wifi routers could do, it also boosts performance over most stock firmwares. In our case, we’ll use the VPN service.

  • Get the right version of DD-WRT installed. I have v24-sp2 installed but I believe anything over v24 should be fine.

  • Read the instructions on the DD-WRT wiki. This saved me a lot of headaches; when I skipped bits (like the one on special characters in passwords), I regretted it later.

  • Go to the Services -> VPN tab on your router’s administration page (which is typically at http://192.168.1.1 ). DD-WRT moves this UI around from version to version so you might need to hunt a little.

  • The wiki tells you what each of these settings mean but here’s what I used to get it working.

    PPTP Server -> Enable
    Broadcast support -> Enable
    Force MPPE Encryption -> Enable
    Server IP -> 192.168.1.1 (you can pick anything here, just remember to use this when you forward traffic a bit later)
    Client IP -> 192.168.1.110-120 ( you can use any valid range here)
    DNS1/DNS2 -> 8.8.8.8/8.8.4.4 (not using Google’s DNS also worked for me but others on the web have reported issues here)
    CHAP-Secrets -> * username * "password" *

    The format of the username/password line is critical: asterisk, space, username, space, asterisk, space, password enclosed in quotes, space, asterisk. If you don’t have special characters in your password, you can skip the quotes. If you have multiple usernames and passwords, add one line per user in the same format, as in the example below. REMEMBER - use a long password with special characters or you will be in trouble.
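
    For example, two hypothetical users (placeholder names and passwords - use your own, much longer ones) would look like this:

        * alice * "correct-horse-battery-$taple-2011" *
        * bob * "another-equally-long-passphra$e!99" *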

Hit ‘Apply Settings’.

  • Now, you need to do a couple of things to work around some iOS and OSX quirks. First, add the below as a startup command in the Administration->Commands tab.

	#!/bin/sh
	# Disable address/protocol compression - works around iOS/OSX client quirks
	echo "nopcomp" >> /tmp/pptpd/options.pptpd
	echo "noaccomp" >> /tmp/pptpd/options.pptpd
	# Restart pptpd so it picks up the new options
	kill `ps | grep pptp | cut -d ' ' -f 1`
	pptpd -c /tmp/pptpd/pptpd.conf -o /tmp/pptpd/options.pptpd

  • Run this command using Administration->Commands to force encryption (the DD-WRT wiki explains this in detail if you want to understand what it does).

	sed -i -e 's/mppe .*/mppe required,stateless/' /tmp/pptpd/options.pptpd
  • Go to Security->VPN Passthrough and make sure PPTP passthrough is enabled.

Setting up DynDNS

The next step is to get this accessible from anywhere in the world. DD-WRT has built-in support for DynDNS which makes this easy.

  • Create an account on DynDNS. You’ll get a hostname, something of the form username.dyndns-server.com.

  • In DD-WRT, go to Setup->DDNS. Select DynDNS.org as the DDNS service, enter your DynDNS username, password and hostname, and make sure the status textarea doesn’t show any errors when you hit ‘Apply Settings’. If you type dig username.dyndns-server.com in a terminal (or use nslookup on Windows), you should now see your public IP.

  • Now comes the scary step - forwarding traffic from the outside world. We’re going to forward two ports only (one should be sufficient but some users report errors here). Go to the NAT/QoS->Port Forwarding tab and add the following entries. If you didn’t pick 192.168.1.1 before as the server IP address, you need to change that here.

    Application - vpn, Port from - 1723, Protocol - Both, IP Address - 192.168.1.1, Port To - 1723, Enabled - Check
    Application - vpn, Port from - 1792, Protocol - Both, IP Address - 192.168.1.1, Port To - 1792, Enabled - Check

  • Hit ‘Apply Settings’.

  • Reboot the router. I typically do this by pulling out the power cord and plugging it back in.

Setting up OSX as a VPN Client

At this point, you should have a functional VPN server. Let’s connect to it! I’m going to lay out the instructions for OSX; since Apple uses the same terminology, iOS setup is almost identical from inside Settings->General->Network. All other VPN clients should have a similar configuration experience as well.

  • Open up the ‘Network’ preferences pane in System Preferences.

  • Use the ‘+’ button at the bottom left of the pane.

  • Pick VPN as the interface, PPTP as VPN Type and name it anything you want (I used ‘Home VPN’).

  • You should have a VPN interface created for you. Here, enter your DynDNS hostname in ‘Server Address’, your username that you entered in the CHAP Secrets section as ‘Account Name’. Press ‘Advanced…’ and check the option to send all traffic through this connection. Now, back in the main pane, press ‘Connect’. Enter the password you typed out back in the CHAP Secrets section and…

  • Voila! You are now connected to your own VPN server. If this actually worked on your first attempt, congratulations! You can now browse securely from anywhere in the world by channeling all traffic through your home network.

If this didn’t work

There are several things that could go wrong above. Here are some common debugging steps.

  • Check the username, password format. This was the cause of much pain, especially around special characters.
  • Check the output at every step. For example, try connecting using 192.168.1.1 instead of the public hostname if you think DynDNS is the problem.
  • The DD-WRT forums are excellent. Search there and try posting there if you have an unresolved issue.
  • Of course, there’s always your favorite search engine to fall back on :).

Happy VPNing!

Date: Thursday, 21 Jul 2011 07:00

I was talking to someone at work about REST APIs and thought it’d be good to write up some of my experiences from designing and running the team that shipped the Windows Azure Service Management APIs. There are some other great ‘lessons learnt’ posts out there, like this one from Foursquare and this one from Daniel Jacobson of Netflix.

Disclaimer: These are my views alone. I don’t even think the rest of the team agrees and I know for a fact that some of these opinions are quite controversial.

Some background

The Windows Azure Service Management API (or SMAPI) was built in 2008 to essentially let developers control all aspects of their Windows Azure account - deploying new code, changing configuration, scaling services up and down, managing storage accounts, etc. Here are the docs. I can’t talk about the numbers, but it is safe to say that this API is hit an incredibly high number of times each day, from all sorts of clients - IDEs, developer tools, automated monitoring systems, evil bots, you name it. I was lucky to work with an incredible team on this API (hi Zhe! hi Dipa! hi Frank!) and I’m proud of what we managed to build. Onto the lessons learnt.

REST conventions don’t matter…that much

I’m going to incur the wrath of many of my friends for saying this. When we started designing the API, we spent weeks perfecting the cleanest URL design possible. All of us had read Fielding’s thesis and it wasn’t uncommon for this book to be waved around in heated design meetings. But in the end, I came to an important realization - we were wasting our time.

The URI+HTTP verb+wire format doesn’t matter.

Here’s why. None of your developers will actually see the raw HTTP request and response. If your API is going to be the least bit successful, it has to have language-specific bindings, built either by you or the community. Sure, it comes in handy to be able to compose API requests in curl in your sleep, and you’ll definitely make your bindings developers’ lives easier. And if you blatantly ignore some basic REST idioms (like making GETs non-idempotent), you will break your API in all sorts of nasty ways. But if you’re spending every other meeting arguing over whether something should be a PUT or a POST for days or weeks on end, you’re wasting your time and the discussion is academic.

Case in point - look at the AWS APIs (not S3, but the rest). They are the ugliest RPC-style, verbose XML beasts you can find. But you can’t argue with how insanely successful they’ve been, can you?

Start with how the API is going to be used, work backwards

One of the best things we did while designing the API was to start with what the code using the API would look like and then work backwards to the REST definitions. Since the API didn’t exist yet, we just wrote pseudo-code in different languages for various important scenarios. This gave us a great feel for the API and pointed out some obvious flaws which weren’t visible from just reading the specification. For example, we found that we were forcing developers to construct too many intermediate objects in between successive calls to our API. We also found that developers couldn’t send back the same data structures they had retrieved from the API without modifying them. Without the pseudo-code, we probably wouldn’t have found these flaws until deep into our implementation, where any change would have incurred a lot more cost. Or worse, we could have shipped the API like this and been stuck with the behavior for a long time.

We made it a rule that no API function could be reviewed unless it had some pseudo-code next to it. This also had the side effect of greatly improving API design meetings, since people could see concrete usage patterns instead of a list of objects and functions with parameters.
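
To give a flavor of what I mean, here’s the kind of straw-man pseudo-code you’d write before any REST syntax exists (shown in Python syntax; none of these names are the real API - the point is to feel out the call flow and spot awkwardness early):

    svc = cloud.get_service("my-service")   # hypothetical binding call
    print(svc.status)                       # no intermediate objects to build
    svc.config["instance_count"] = 4        # mutate the structure we got back...
    cloud.update_service(svc)               # ...and send the same structure back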

A terrible way to design an API is to start with your internal systems first, figure out a REST syntax that maps reasonably to it and then throw that over the wall to your users. There are many of these APIs floating around, from Microsoft and other companies, and these almost unerringly cause developers a lot of pain.

Reduce the cognitive load

Be minimalistic in the number of ‘things’ your API exposes. Each concept your API exposes carries a significant cognitive load and dramatically increases the learning curve for your users. We were brutal in trimming the number of concepts we exposed, even when we risked merging slightly different types of ‘things’ into one ‘thing’. We were also brutal in trimming the operations we supported on each ‘thing’. All of this went a long way in making the API simpler than when it started out.

Make sure key actions are simple and fast, prefer chunky over chatty

One of the issues we hit very late in our API design was what is commonly called the ‘N+1’ problem. This is where someone queries a parent object to find a list of children and then issues a separate HTTP request to access each child. In our case, the single most common operation was accessing the list of services in Windows Azure and querying each of them to see what its status was. As we were very close to shipping, we didn’t have time to rework our design, so we put in what I thought at the time was a giant kludge - a ‘recurse’ parameter on the parent which expands all the children.

This surprisingly turned out to be very efficient and wound up making both devs and our servers a lot happier. The other feature we looked at was partial responses, something GData now supports. In each of these cases, the actual implementation wasn’t the cleanest (I tacked on a query parameter), but identifying a key usage scenario and optimizing for it proved invaluable. I wouldn’t be surprised if this is saving millions of requests each day to our API services. More importantly, it makes clients faster and easier to implement.
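
Here’s a sketch of the difference, using Python’s requests library against a hypothetical endpoint (the URL and field names are made up for illustration; only the ‘recurse’ parameter idea is the real bit):

    import requests

    BASE = "https://management.example.com"  # hypothetical endpoint

    # Chatty (the N+1 pattern): one request for the list, then one per child.
    services = requests.get(BASE + "/services").json()
    statuses = [requests.get(BASE + "/services/" + s["name"]).json()["status"]
                for s in services]

    # Chunky: a single request asking the server to expand the children inline.
    expanded = requests.get(BASE + "/services", params={"recurse": "true"}).json()
    statuses = [s["status"] for s in expanded]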

One good forcing function to make this happen is to build prototypes of API clients as you build the API. For example, we were maintaining clients that pushed builds, monitoring clients and prototypes that simulated other common scenarios. It is amazing how obviously bad an API becomes as soon as you write some code to consume it. As a designer, you’re better off making sure you’re the one discovering it and not your users.

Measure and log everything

Something we were very good about was measuring things. We instrumented our API up the wazoo and tracked those numbers keenly. Every day, we knew what users were hitting, what the popular calls were, what the common errors were and so on. This gave us great insight into how users were actually using the API (we could tell how much API activity we had saved with the N+1 hack, for example). When we rolled out new functionality, we could tell how users were adopting it and take corrective measures if we weren’t seeing what we expected. And probably most important of all, we could see what errors users were hitting and dig into those. We often changed or added error messages to make it clearer to users why they were seeing something, based on this data. All this went a long way in increasing the usability of the API.

Versioning and extensibility

APIs change in unanticipated ways. Some of these changes are small - you want to add a new property to some data structure or add a new item to a list of items. Some are bigger - you want to change the authentication mechanism or drop support for existing APIs. For either of these, you’ll be thanking yourself if you future-proof your API from the beginning.

There are many ways to version your API to protect yourself. You could do what we did - the client sends a version header, and the server provides the behavior that version expects. Or you could just add a version number to your path. In any case, baking this into the first version of your API will save you a lot of heartburn down the road.
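
As a sketch of the header approach (using the x-ms-version header name the Azure APIs used; the URL and date value here are illustrative, not real endpoints):

    import requests

    resp = requests.get(
        "https://management.example.com/services",  # hypothetical URL
        headers={"x-ms-version": "2011-02-25"},     # client pins its contract version
    )

    # The path-based alternative would look like:
    # GET https://management.example.com/v2/services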

The other common change you want to design for is minor additions. Here, I think I could have done better with the Azure SMAPI. Something I like about the Foursquare API is the way they use generic structures and indirection (example - { type: “special”, item: { special: { ...specialdata... } } }). This lets them add new ‘types’ without breaking old clients which don’t anticipate those types. I wish we had done something like this with the Windows Azure APIs. It would have made life much easier when we wanted to add an item here or a list there, without having to bump the API’s version number and break several clients.

Date: Thursday, 23 Jun 2011 07:00

If you are a Harry Potter fan or even remotely interested in books, you’ve probably heard of J.K. Rowling’s decision to break into e-publishing. After several years of avoiding the e-book world (and rampant piracy), she has done something very interesting - forsaken her print publishers to go it alone. A lot of mainstream authors have dabbled in self-publishing - Stephen King released a short story this way years ago, for example. But J.K. Rowling is the first ‘superstar’ to go full-fledged into self-publishing and sell her books from her own website.

Some background

I’ve been digging into this space for the last few weeks for a variety of reasons. A bit of it is out of self-interest - I’ve been using my post-Microsoft vacation to pursue a long-held dream and work on my first fiction book (a thriller, far from complete, even further from being any good). I’ve also just been interested in how the publishing world works for a long time.

Back up a few years. For the last several decades, if you were a newbie fiction writer trying to get your first novel published, the process went something like this.

‘The Process’ (as it used to be)

  • You put your manuscript (printed double-spaced, 12 pt font, nothing else) in a package along with a very well-written query letter and sent it out blindly to all the agents who seemed interesting from one of those big books of agents. You might try to filter for agents who actually represented authors in your genre, but from talking to people, I think this part was often ignored. If you Google around, you’ll see hundreds, if not thousands, of blog posts and books on how to write the perfect query letter and opening to ‘hook’ the agent, since that was so important.

  • On the other end of the package was an often over-worked agent getting hundreds of submissions a month. He or she would probably discard your manuscript after a poorly written query letter or if the first page wasn’t arresting enough. Not because the agent was some mean person, of course. With that volume of incoming manuscripts, they absolutely had to filter as much as they could.

  • Most writers would get what is called a ‘form rejection’. If you’ve ever failed to get through an interview, you have probably seen some of these - a cold, impersonal letter out of an HR-driven template saying you weren’t a good fit or some such thing. Some lucky rejections would get a personal touch - some encouraging feedback or helpful advice. These are treasured by writers, mostly because it can be depressing to get form rejection after form rejection, and you cling to any bit of hope you can.

  • If you went past this step and the agent decided to represent you, you were in. It was time to break out the champagne. Now, your agent repeats the process with publishing houses (who will never accept manuscripts directly from unknown writers) and shops around your manuscript. You again face rejections and most publishers will probably reject you.

  • Finally, some publishing house (if you’re lucky, one of the ‘Big 6’) decides to pick you up. You negotiate the financial numbers (you typically make a couple of bucks on every physical book sold unless you are John Grisham, Dan Brown or Lee Child). More congratulations! More champagne is drunk (I suspect a lot of drinking goes on in the writing world). Seriously, you are now in the elite. You are a writer who is about to get published, something most people only dream about.

  • The publishing house gives you an editor who works with you on your manuscript. He or she goes over your book with a fine pencil and suggests changes all over the place - from grammar and tightening prose to big changes (inconsistent actions from characters, poor plotting, etc). You get to make the final call on whether you accept these changes, but these people generally know what they’re talking about. Quick aside - my O’Reilly editors were great, but they assigned me a developmental editor for a period of time (an external contractor) who basically suggested I remove every bit of humor and every colorful anecdote from my Windows Azure book. I declined, and almost every reader I’ve spoken to mentions how much they loved the exact bits he wanted taken out.

  • Along the way, the publishing house tries to figure out when to release your book. This is a mysterious calculation involving marketing budgets, other books in the genre coming out that year, the motion of the moon, etc. It isn’t uncommon to wait a few years after finishing a manuscript to see it in print.

  • The publishing house prints your book! Hooray! You’re now a published writer. You can get into associations like International Thriller Writers, Inc. You see your book on Amazon and get that seductive link to creating your own author page on Amazon. You watch your Amazon rankings every hour (hint for writers - there are automated tools for this). Depending on your publisher, you might get a marketing budget to do things like mini-book tours, bookstore signings and maybe even a radio show or two. If you’re lucky, some reviews. Mostly, you’re on your own since marketing budgets are very limited.

  • You become rich and famous. You have interviews on national TV. Your book gets made into a movie starring George Clooney and Julia Roberts. Dan Brown and J.K. Rowling are chasing your taillights in the Amazon rankings. Lee Child is lining up for an autograph. Dean Koontz and Jeffrey Deaver want to have coffee with you to learn your secrets. OK, so this last step might be a teeny bit harder than the others.

Bring on the revolution

All this changed due to two key developments, both caused by Amazon.

  • The Kindle goes mainstream. Among other things, the Kindle is the ideal Christmas gift for a nephew/niece/grandchild, etc. Surprisingly, both my Kindle and Aarthi’s Kindle were gifts we gave each other.
  • Amazon opens up the Kindle to self-publishing by any author. In June 2010, they stunned the publishing world by increasing the author’s royalty to 70% (for books priced between $2.99 and $10). This is key since until now, authors had to sell massive amounts of books to make any sort of money back. A typical $30 hardcover book will get the author around $3-$4 and a $12 paperback will get the author around a buck.

All of a sudden, writers could self-publish and skip the gatekeepers entirely. Just as importantly, they could price their books very low ($0.99, $1.99) and rely on readers making impulse purchases. It’s much easier to pull the trigger on a $1.99 thriller than a $9.99 one. It is a classic low-price, high-volume strategy. The publishing houses couldn’t match these prices or these royalty figures, of course, since they had their own costs to worry about. And since writers still need editors, cover art, etc., a little cottage industry has sprung up where you can hire an editor or a graphic artist for low prices. A lot of this is still nascent (which is why most self-published books have terrible cover designs), but it doesn’t take a genius to see how it might evolve.

Also, the self-publishing world has been seeing its first breakout stars, and big writers are moving into self-publishing. Three of them are interesting, for different reasons.

  • The first is Amanda Hocking. She made history by selling over a million copies of her books on the Kindle and making over $2m in royalties. She has been held up as the poster child for the self-publishing world. In a surprising move, Amanda recently decided to sign a multi-book contract with a traditional publishing house, St. Martin’s Press (as you can imagine, there was a fierce bidding war to get her onboard). In a blog post responding to a fierce outcry, she said, “I want to be a writer…I do not want to spend 40 hours a week handling e-mails, formatting covers, finding editors, etc. Right now, being me is a full-time corporation.”

  • The second is Barry Eisler (his Rain books are really worth reading). Eisler raised more than a few eyebrows by turning down a $500K advance from St. Martin’s Press and going into self-publishing himself.

  • The third is John Locke (which is actually not a pseudonym, believe it or not) who recently sold a million books over the Kindle. Locke writes a series of gritty thrillers (I like his hero, Donovan Creed) and westerns on the Kindle. His books are good reading but more importantly, John is like the Tim Ferriss of the thriller world. He knows how to market his wares very, very well (his sales background doesn’t hurt). He uses mailing lists, Twitter, Facebook, plugs his websites in his books, uses the right soundbites in interviews and overall, just does a great job of packaging himself. In fact, he even wrote a book called How I Sold 1 Million eBooks in 5 Months. Locke realizes the importance of things like high visibility book rankings and pulls stunts you typically see on the iOS AppStore.

If you’re interested in self-publishing, J.A.Konrath’s blog is a must-read.

How a big house can help you

The publishing world realizes all this, of course. However, it is very unclear to me what they intend to do about it.

There are still several things that only traditional publishers can do. Publishers throw in editing and cover design as part of the deal - on your own, you have to find external people to do them. A good editor is invaluable, to be protected and cherished. Only traditional publishers can get you print distribution at scale. This counts, since a lot of people still buy only paper books (I still prefer my books in paper form). You still can’t get a review in the NYT or People magazine if you’re self-published, even if you’ve sold over a million copies. You still can’t get into Barnes & Noble or Borders or a small book store if you’re self-published. That robs you of precious real estate in book displays, etc. (which the publisher often pays for). You can’t get other benefits - associations demand that you be published traditionally, you may not qualify for most writing awards, etc. You may not even be able to get your books into libraries. Of course, every author has his or her own priorities as far as these are concerned.

The biggest downside to self-publishing might be the marketing muscle you don’t get. Marketing yourself is hard and unlike John Locke, most people are not good at it and don’t want to do it. Especially writers, who often tend to be shy creatures who like the privacy of their dens. Selling well with self-publishing means constantly promoting yourself and putting yourself out there and not having a professional PR person line up tours/appearances for you. This is very hard work and work which most people don’t know how to do.

Apart from all this, there’s still the perception problem. Self-published long meant “Not good enough to be published” and that perception still holds for a lot of people. You will get snide remarks and get looked down upon by a lot of people. Whether that matters to you or not is completely up to you of course.

Make no mistake, traditional publishing is still the established, mainstream way of being a writer.

And the agents

The people most at risk right now are the traditional middlemen - the agents. They’re not taking this lying down, however. Here’s a post from Rachelle Gardner, who runs a well-known blog on her life as an agent and the publishing business in general. I really like Rachelle’s posts in general, but the below made me shake my head sadly. Quoting her:

“With no more gatekeepers, no more exclusivity, no more requirement to actually write a good book, won’t published books lose value? If anybody can get a book published, doesn’t that diminish the perceived status of all authors?…Well, I have news for you. If you think the published books are bad now, just wait until self-pubbing becomes the norm. Holy cow. Folks, you don’t see an agent’s daily slush pile. Sure, some of it is good. But let me tell you. At least half of it is seriously not good. As I look at all the books I say “no” to, and then realize these books could be for sale within a matter of months, I get depressed.”

If you’re in the tech world, you can probably see how wrong she is. The App Store, and indeed the web itself, are examples of how this model can work. In fact, that’s the beauty of it - anyone sitting at home in their pajamas can get their content out there immediately. And the best content will always rise to the top, since people will find it and bubble it up.

If you read the comments, you can see how (some) traditional writers are now having to deal with their world being upended. Some love the changes and are jumping onto the bandwagon. Some see it coming but still want their books published by a traditional house. Some of them have spent years (or decades) in the hope of being published, and now, all of a sudden, that prize doesn’t seem as valuable. Here’s a sample comment from the extreme end of the opinion spectrum.

“I think this idea that “everyone deserves to get their book published” is fallacious and insulting to both the good, hard-working writers and more importantly to the readers. Everyone can open their mouth and make a noise. Not all of us deserve to stand in the Royal Opera House and sing to a paying public.”

My thoughts

  • As a technology person and as someone who has grown up with the web, I see the obvious parallels between what is happening here and what happened in the tech world and the web. For example, take open source. When open source first came onto the scene, many of the same arguments were made - how could you trust it? Who ensured quality? All those arguments look ridiculous now. On the other hand, lots of people predicted the death of proprietary software. That hasn’t happened either. Both have learned to co-exist. Same with the rise of YouTube. Indie artists can now become famous on YouTube, but recording companies have learned to make use of the YouTube phenomenon (case in point - Justin Bieber’s career).

  • I think the argument that ‘there need to be gatekeepers’ is bogus. History is full of examples of amazing things happening when the gatekeepers were removed and the masses could decide what was good and what wasn’t. But that doesn’t mean there’s no space for informed experts. People who work in the book industry, including agents, understand very well what makes a good read. There is great demand for those skills, perhaps just not in their current form. Think about someone like Roger Ebert. Just because Rotten Tomatoes exists doesn’t mean a Roger Ebert review no longer holds importance.

  • I like that writers can spend more time writing and less time on things like crafting the perfect query letter. The idea that you had to impress this one person, who was too busy to give you more time, never resonated with me.

  • The economics of the publishing world are unsustainable. You can’t sell an e-book for $9.99 when readers have grown accustomed to cheaper books. And print book sales are being eclipsed by e-book sales. Publishers and agents have to take a long, hard look at where exactly they’re adding value.

  • This is going to shock people given the rest of the post but…if you’re a newbie writer and you have the option of a deal with a traditional publishing house, I believe you should take it before turning to self-publishing. Self-publishing is beating on the doors, but the advantages of being mainstream outweigh it just a bit in my opinion. Note that this opinion is current as of June 2011 and will probably change soon. If you’re someone like Barry Eisler and already have an established platform from traditional publishing, you should seriously consider self-publishing.

  • Most of all, I’m thrilled that people are still reading books. I love books and have been lucky to grow up surrounded by them. Books as a means of communication hold a special place in my heart. If people are reading more books and are still spending bored afternoons, plane rides, and the hour before bedtime being transported into another world created by the written word, who cares what the medium is? Be it paper, the Kindle, the Nook, an iPhone, an iPad, or some future holographic display, it’s all good.

Now, excuse me since I have some unread books beckoning me.

Author: "--"
Send by mail Print  Save  Delicious 
Date: Tuesday, 10 May 2011 07:00

After close to six years, I’m doing something that I sometimes thought I’d never do - leaving Microsoft. I’m also doing something that I’ve always wanted to do - move to the San Francisco Bay Area and be a part of the culture and ecosystem there. I’m first going to take some time off and then do the exciting thing of trying to figure out what to work on next.

Here’s the mail I sent out to friends and co-workers at Microsoft today. Microsoft has been amazing to me and I’ll be forever grateful to the people and the company for what they’ve given me. Like I say in the mail, it is a remarkable company and I’m proud to have been a part of it.

From: Sriram Krishnan 
Sent: Tuesday, May 10, 2011 5:44 PM
To:
Subject: Farewell Mail: SriramK has left the building


_tl;dr I'm leaving Microsoft after 5 years and moving to
California. I don't know what I'm doing next but it will be
around building products for sure. Read on for the long version
:)_

I always imagined my last day at Microsoft would end with me being
fired and getting carried out by some burly security guys as I
kick and scream. And as a finishing touch, they'd come dump my
belongings on me in the parking lot as I crawl sadly away with
the camera panning out to gloomy music. I'm honestly a bit
surprised that hasn't happened given all the crazy things I've
done over the years. For some weird reason, Microsoft has been
surprisingly nice to me and even chose to trust me with bigger
and bigger things.
 
I joined this company before I was legally allowed to drink -
it's the only company I've ever worked for. At a time when I
wasn't sure what to do with my life, Microsoft gave me a place
where I belonged, where I fit in. I've got to do some amazing
things here. I got to build developer tools used by millions,
consumer websites, super scalable infrastructure stuff and design
APIs called a gazillion times a day. I got to travel the world
and work with execs from the biggest companies in the world. I
got to fulfill a childhood dream and write a book. I got
interviewed in the NYTimes. I've done some crazy things (I once
stopped a riot at a conference by quelling a 5000-strong mob) and
I got to meet and hang out with my heroes (hi DaveC! :) ). I even
wound up marrying an amazing woman who joined Microsoft with me.
 
Most of all, I've got to know some amazing people. I’ve had some
great managers, mentors, colleagues and peers who have set a high
bar for any future workplace to match. I've made so many amazing
friends here who'll be friends for life.

For people outside, it's hard to understand how much of a family
Microsoft can be. It’s simple really, you have everything here -
the competitive siblings, the wise aunt, the supportive parents,
the crazy uncle (ok - maybe more than a few crazy uncles). Above
all else, a place to call *home*. It has been home to me for most
of my adult life and leaving now feels like leaving my parents’
house for college.
 
As for what's next, I really don't know yet. No, this is not some
clichéd phrase like “spending more time with family” or “moving
on to bigger challenges” – I really don’t know :). I do know I'm
moving to the San Francisco Bay Area. But after that, I only
imagine I'll be involved with something that involves building
products because that’s what I love to do.
 
My favorite Star Trek episode of all time is a TNG episode called
"The Inner Light". (Spoiler alert) In that episode, Picard gets
to live a lifetime of experiences in the blink of an eye. In a
way, that's how my time at MSFT has been - a lifetime of
experiences and people compressed into a very short time.
 
Thank you for giving me the privilege of working with you and of
being a part of this remarkable company.
 
You can contact me on this new 'Electronic Mail' thing that the
kids are talking about at me@sriramk.com or on Twitter (@sriramk)
or on LinkedIn (http://www.linkedin.com/in/sriramkrishnan01).
I'll see you around on the interwebs.
Author: "--"
Send by mail Print  Save  Delicious 
Date: Wednesday, 13 Apr 2011 16:58

I'll be doing a keynote at the 'Talk Cloudy to Me' event down in Mountain View organized by the Silicon Valley Cloud Computing Group. It's on April 30th at the Microsoft campus. They have a phenomenal list of speakers and I look forward to just sitting in the audience and listening to the other talks :).

I still haven't decided what to talk about for my keynote. Do let me know if there's anything you guys would like to hear about. The crazier the better :)

Author: "--"
Send by mail Print  Save  Delicious 
Date: Tuesday, 05 Apr 2011 17:22

tl;dr Delta Airlines and American Express together almost screwed up our trip to Sweden, costing us several hundred dollars and a lot of hassle thanks to ridiculous processes and rude customer service.

I generally don't write blog rants about companies I deal with, but I had to make an exception in this case. Last week, Aarthi and I made a trip to Sweden that was partly work-related. I'll write a long post later about what was an amazing trip, but first I wanted to cover the shocking lack of customer service we got from both Delta Airlines and American Express.

Some background first. As with most trips, Aarthi had booked her travel through the American Express travel website and we were flying to Stockholm through Amsterdam using a combination of Delta and KLM. There was nothing to distinguish this booking from any other. The night before the trip, we even got an email from Delta asking us to check in and pick seats. Everything seemed normal - until we showed up at the check-in counter at the airport.

Where is your paper ticket?

We wound up at the airport with only an hour and a half to go before the flight. This was a mistake on our part for an international flight - one we learned the hard way. After waiting in line, we wound up with a Delta Airlines employee who was visibly grumpy and just wasn't pleasant from the get-go. We handed him our IDs and our booking number and waited to get our boarding passes. Here's where things went wrong.

"You guys don't have tickets", he said. "Show me your paper tickets".We were confused. "Paper tickets? We don't have any paper tickets - here's the code, here's our booking confirmation, etc. Here's the email from Delta itself telling us to check-in". He wouldn't budge "You guys don't have tickets. You need to show me a paper ticket."

We were pretty confused - mostly because we hadn't seen any paper tickets (in fact, I don't remember using a paper ticket to travel in over 5 years). No amount of explanation would make him budge - he was dismissive and rude throughout and just wasn't convinced we had bought any tickets for the flight, which was now scheduled to leave in 50 minutes or so.

Over to American Express

"Call your travel agent" he said and thrust an old phone at us. Assuming that American Express would help us, we frantically called the American Express helpline. We only had 50 minutes but if we sorted through this quickly, we could still catch the flight. We dug up the contact number and quickly called them only to find...hold music. For over 10 minutes.

After an interminable wait, we wound up with another super unhelpful person. Frankly, this was surprising: though the airline industry isn't exactly renowned for customer service, American Express is supposed to be great at helping you out in emergencies.

The Amex person first took a long time to understand what was happening. From her/his POV, Amex had done everything right and this was Delta's fault - while Delta thought this was Amex's fault. After a long time (and some very stern words on how close we were to the boarding gates being shut), the Amex person asked us to fill out a lost ticket form. The Delta agent at the counter turned down that request, saying that if we did that, we would miss our flight since that process would take an hour to finish.

Buying our tickets again

Left with no other option, we cancelled our tickets (~$250 cancellation fee; Delta only gives you credit for a year, with all sorts of other catches) and bought new tickets at the counter at a much higher price - I would estimate this little runaround with Amex and Delta cost us close to $500 over our original tickets (and who knows when we'll fly Delta again). At this point, we had less than 10 minutes before the gate closed. Once we got our tickets, we made a mad dash through the airport. After clocking what was probably my fastest running time through SeaTac, we made it through with 30 seconds to spare (they literally closed the gate after we got through). Ironically, we even got yelled at by the gate agents for being late, as if the whole thing was our fault :).

It is very hard to convey how angry we were about this entire ordeal. From an unhelpful and rude Delta agent, to an Amex person who failed us just when we needed them most, to two outdated companies hassling customers caught up in one of their own mistakes, the entire experience was full of fail.

Obviously, I'm going to try very, very hard to avoid Delta from now on. If nothing else, I'll get to avoid their anti-customer agents at SeaTac.
