

Date: Thursday, 09 Oct 2014 15:25

One of the things that happens when you’re a developer focused analyst firm these days is that you talk to a lot of companies. The conversations analysts have with commercial vendors or developers about their projects are called briefings.

Whether the company or project is large or small, old or new, there are always ways to use our collective time – meaning the analyst’s and the company/developer’s – more efficiently and effectively. Having been doing this analyst thing for a little while now, I have a few ideas on what some of those ways might be and thought I’d share them. For anyone briefing an analyst then, I offer the following hopefully helpful suggestions. Best case they’ll make better use of your time, worst case you make the analyst’s life marginally easier, which probably can’t hurt.

  1. Determine how much time you have up front
    This will tend to vary by analyst firm, and sometimes by analyst. At RedMonk, for example, we limit briefings with non-clients to a half hour, a) because we have to talk to a lot of people and b) because very few people have a problem getting us up to speed in that time. It’s important, however, to be aware of this up front. If you think you have an hour, but only have half that, you might present the materials differently.
  2. Unless you’re solving a unique problem, don’t spend your time covering the problem
    If the analyst you’re speaking with is capable, they already understand it well, so time describing it is effectively wasted time. If there’s some aspect of a given market that you perceive differently and break with the conventional wisdom, by all means explain your unique vision of the world (and expect pushback). But a lot of presentations, possibly because they originated as material for non-analysts, spend time describing a market that everyone on the call likely already understands. Jumping right to how you are different, then, is more productive.
  3. If you’re just delivering slides and they’re not confidential (see #4), do not use web meeting software
    If you need to demo an application, web meeting software is acceptable. If you’re just going over slides that aren’t confidential, skip it. Inevitably the meeting software won’t work for someone: they don’t have the right plugin, a dependency is missing, their connection is poor, etc. The downtime while everyone waits for that one participant to download yet another version of web meeting software they probably don’t want is time that everyone else loses and can never get back. Also, it’s nice for analysts to have slides to refer to later.
  4. Don’t have confidential slides
    If you’re actively engaging with an analyst on something material, a potential acquisition for example, confidential slides are pretty much unavoidable. But if you’re doing a simple product briefing, lose the confidential slides. It makes it more difficult to recall later – particularly if a copy of the slides is not made available – what precisely is confidential, and what is not. Which means that analysts may be reluctant to discuss news or information you’d like them to, due to the cognitive overhead of having to remember which 5 slides out of 40 were confidential. When it comes time to present confidential material, just note that and walk through it verbally.
  5. If you spend the entire time talking, you may miss out on the opportunity for questions later
    It’s natural to want to talk about your product, and the best briefings are conducted by people with good energy and enthusiasm for what they do. That being said, making sure you leave time for questions can gain you valuable insights into what part of your presentation isn’t clear, and – depending on the analyst/firm – may lead to a two way conversation where you can get some of your own questions answered.
  6. Don’t use the phrase “market leader,” let the market tell us that
    This is perhaps just a pet peeve of mine, but my eyebrows definitely go up when vendors claim to be the “market leader.” This is for a few reasons. First, because genuine market leaders should not have to remind you of that. Second, what is the metric? Analysts may not agree with your particular yardstick. Third, because your rankings may not reflect an analyst’s view of the market, and while disagreement is normal it can sidetrack more productive conversations.
  7. Analysts aren’t press, so treating them that way is a mistake
    While frequently categorized together, analysts and press are in reality very different. Attitudes toward, and incentives regarding, embargoes, for example, are entirely distinct. Likewise, many vendors and their PR teams send out “story ideas” to analysts, which is pointless because analysts don’t produce “stories” and are rarely on deadline in the way that the press is. What we tell clients all the time is that our job is not to break news or produce “scoops,” it’s to understand the market. If you treat analysts as press trying to extract information from you for that purpose, you may miss the opportunity to have a deeper, occasionally confidential, dialogue with an analyst.
  8. Make sure the analyst covers your space; if you don’t know, just ask
    Every analyst, whether generalist or specialist, will have some area of focus. Before you spend your time and theirs describing your product or service, it’s important to determine whether or not they cover your space at all. Every so often, for example, vendor outreach professionals will see that we cover “developers” and try to schedule a briefing for their body shop offering development services. Given that we don’t generally cover professional services, this isn’t a good use of anybody’s time. The simplest way of determining whether they cover your category, of course – assuming you can’t determine this from their website, Twitter bio, prior coverage, etc – is to just ask.
  9. Ask for feedback “after the call” sparingly
    In general, it seems like a harmless request to make at the end of a productive call: “If you think of any other feedback for us after the call, feel free to send this along.” And in most cases, it is relatively innocuous. Another way of interpreting this request, however, is: “Feel free to spend cycles thinking about us and send along free feedback after we’re done.” So you might consider using this request sparingly.
  10. Don’t ask if we want to receive information: that’s just another email thread
    There are very few people today who don’t already receive more email than they want or can handle. To make everyone’s lives simpler, then, it’s best to skip emails that take the form “Hi – We have an important announcement. Would you like to receive our press release concerning this announcement? If so send us an email indicating that you’ll respect the embargo.” As most analysts will respect embargoes – because we’re not press (see #7) – asking an analyst to reply to an email to get yet another email in return is a waste of an email thread. Your best bet is to maintain a list of trusted contacts, and simply distribute the material to them directly.

Those are just a few that occur off the top of my head based on our day to day work. Do these make sense? Are there other questions, or suggestions from folks in the industry?

Author: "stephen o\'grady" Tags: "Industry Analysis"
Date: Tuesday, 07 Oct 2014 17:24

Last Thursday at ten in the morning, this auditorium was full because I made a joke four years ago.

Describing the Monktoberfest to someone who has never been is difficult. Should we focus on the content, where we prioritize talks about social and tech that don’t have a home at other shows but make you think? Or the logistics, where we try to build a conference that loses the things we don’t enjoy from other conferences? Or maybe the most important thing is the hallway track, which is another way of saying the people?

Whatever else it may be, the Monktoberfest is different. It’s different talks, in a different city, given and discussed by different people. Some of those people are developers with a year or two of experience. Others are founders and CEOs. People helping to decide the future of the internet. Those in business school to help build the businesses that will be run on top of it. Startups meeting with incumbents, cats and dogs living together.

Which is, hopefully, what makes it as fun as it is professionally useful. It doesn’t hurt, of course, that the conference’s “second track” – my thanks to Alex King for the analogy – is craft beer.

During the day our attendees are asked to wrap their minds around complicated, nuanced and occasionally controversial issues. What are the social implications and ethics of running services at scale? When you cut through the hype, what does IoT mean for our lives and the way we play? Perhaps most importantly, how is our industry actually performing with respect to gender issues and diversity? And what can we, or what must we, do to improve that?

To assist with these deliberations, and to simultaneously expand horizons on what craft beer means, we turn loose two of the best beer people in the world, Leigh and Ryan Travers, who run Stillwater Artisanal Ales’ flagship gastropub, Of Love and Regret, down in Baltimore. Whether we’re serving them the Double IPA that Beergraphs ranks as the best beer in the world, canned fresh three days before, or a 2010 Italian sour that was one of 60 bottles ever produced, we’re trying to deliver a fundamentally different and unique experience.

As always, we are not the ones to judge whether we succeeded in that endeavor, but the reactions were both humbling and gratifying.

Out of all of those reactions, however, it is ones like this that really get to us.

The fact that many of you will spend your vacation time and your own money to be with us for the Monktoberfest is, quite frankly, incredible. But it just speaks to the commitment that attendees have to make the event what it is. How many conference organizers, for example, are inundated with offers of help – even if it’s moving boxes – ahead of the show? How many are complimented by the catering staff, every year, that our group is one of the nicest and most friendly they have ever dealt with? How many have attendees that moved other scheduled events specifically so that they could attend the Monktoberfest?

This is our reality.

And as we say over and over, it is what makes all the blood, sweat and tears – and as any event organizer knows, there are always a lot of all three – worth it.

The Credit

Those of you who were at dinner will have heard me say this already, but the majority of the credit for the Monktoberfest belongs elsewhere. My sincere thanks and appreciation to the following parties.

  • Our Sponsors: Without them, there is no Monktoberfest
    • IBM: Once again, IBM stepped up to be the lead sponsor for the Monktoberfest. While it has been over a hundred years since the company was a startup, it has seen the value of what we have collectively created in the Monktoberfest and provided the financial support necessary to make the show happen.
    • Red Hat: As the world’s largest pure play open source company, there are few who appreciate the power of the developer better than Red Hat. Their support as an Abbot Sponsor – the fourth year in a row they’ve sponsored the conference, if I’m not mistaken – helps us make the show possible.
    • Metacloud: Though it is now part of Cisco, Metacloud stood alongside Red Hat as an Abbot sponsor and gave us the ability to pull out all the stops – as we are wont to do.
    • EMC: When we post the session videos online in a few weeks, it is EMC that you will have to thank.
    • Mandrill: Did you enjoy the Damariscotta river oysters, the sushi from Miyake, the falafel and sliders bar, or the mac and cheese station? Take a minute to thank the good folks from Mandrill.
    • Atlassian: Whenever you’re enjoying your shiny new Hydro-Flask 40 oz growler – whether it’s filled with a cold beverage or hot cocoa – give a nod to Atlassian, who helped make them possible. Outside certainly approves of the choice.
    • Apprenda / HP: From the burrito spread to the Oxbow-infused black bean soup, Apprenda and HP are responsible for your lunch.
    • WePay: Like your fine new Teku stemmed tulip glassware? Thank WePay.
    • AWS/BigPanda/CohesiveFT/HP: Maybe you liked the ginger cider, maybe it was the exceedingly rare Italian sour, or maybe still it was the Swiss stout? These are the people that brought it to you.
    • Cashstar: Liked the Union Bagels on Thursday or the breakfast burritos? That was Cashstar’s doing.
    • O’Reilly: Lastly, we’d like to thank the good folks from O’Reilly for being our media partner yet again and bringing you free books.
  • Our Speakers: Every year I have run the Monktoberfest I have been blown away by the quality of our speakers, a reflection of their abilities and the effort they put into crafting their talks. At some point you’d think I’d learn to expect it, but in the meantime I cannot thank them enough. Next to the people, the talks are the single most defining characteristic of the conference, and the quality of the people who are willing to travel to this show and speak for us is humbling.
  • Ryan and Leigh: Those of you who have been to the Monktoberfest previously have likely come to know Ryan and Leigh, but for everyone else they really are one of the best craft beer teams not just in this country, but the world. And they’re even better people, having spent the better part of the last few months sourcing exceptionally hard to find beers for us. It is an honor to have them at the event, and we appreciate that they take time off from running the fantastic Of Love & Regret to be with us.
  • Lurie Palino: Lurie and her catering crew have done an amazing job for us every year, but this year was the most challenging yet due to some late breaking changes in the weeks before the event. As she does every year, however, she was able to roll with the punches and deliver on an amazing event yet again. With no small assist from her husband, who caught the lobsters, and her incredibly hard working crew at Seacoast Catering.
  • Kate (AKA My Wife): Besides spending virtually all of her non-existent free time over the past few months coordinating caterers, venues and overseeing all of the conference logistics, Kate was responsible for all of the good ideas you’ve enjoyed, whether it was the masseuses two years ago, the cruise last year or the inspired choice of venue this year. And she gave an amazing talk on the facts and data behind sexual harassment. I cannot thank her enough.
  • The Staff: Juliane did yeoman’s work organizing many aspects of the conference, including the cruise, and with James secured and managed our sponsors. Marcia handled all of the back end logistics as she does so well – and put up with the enormous growler boxes living at her house for a week. Kim not only worked both days of the show, but traveled down to Baltimore and back by car simply to get things that we couldn’t get anywhere else. Celeste, Cameron, Rachel, Gretchen, Sheila and the rest of the team handled the chaos that is the event itself with ease. We’ve got an incredible team that worked exceptionally hard.
  • Our Brewers: We picked a tough week for brewer appearances this year, as we overlapped with no fewer than three major beer festivals, but The Alchemist was fantastic as always about making sure that our attendees got some of the sweet nectar that is Heady Topper, and Mike Guarracino of Allagash was a huge hit attending both our opening cruise and Thursday dinner. Oxbow Brewing, meanwhile, not only connected us with a few hard to get selections, but loaned us some of the equipment we needed to have everything on tap. Thanks to all involved.
  • Erik Dasque: As anyone who attended dinner is aware, Erik was our drone pilot for the evening. He was gracious enough to get his Phantom up into the air to capture aerial shots of the Audubon facility as well as video of our arriving attendees. Wait till you see his video. In the meantime, here’s a picture.

With that, this year’s Monktoberfest is a wrap. On behalf of myself, everyone who worked on the event, and RedMonk, I thank you for being a part of what we hope is a unique event on your schedule. We’ll get the video up as quickly as we can so you can share your favorite talks elsewhere.

For everyone who was with us, I owe you my sincere thanks. You are why we do this, and you are the Monktoberfest. Stay tuned for details about next year, as we’ve got some special things planned for our 5th anniversary, and in the meantime you might be interested in Thingmonk or the Monki Gras, RedMonk’s other two conferences, as well as the upcoming IoT at Scale conference we’re running with SAP in a few weeks.

Author: "stephen o\'grady" Tags: "Conferences & Shows"
Date: Thursday, 04 Sep 2014 19:48

Foucault's Pendulum

One of the lessons that has stayed with me all these years removed from my History major is the pendulum theory. In short, it asserts that history typically moves within a pendulum’s arc: first swinging in one direction, then returning towards the other. I’ve been thinking about this quite a bit in recent months as the predictable result of widespread developer empowerment becomes more and more visible in virtually all of the metrics we track. Unsurprisingly, when you have two populations making decisions, the larger one leads to a wider array of outcomes. CIOs, as an example, were long content to consolidate on a limited number of runtimes – Java, .NET and a few others. All of the data we see, however, suggests that as the New Kingmakers have begun to rise up and act on their own initiative, the distribution of runtimes employed has exploded. The pendulum, quite obviously, had swung from centralized to fragmented, driven by a fundamental shift in the way that technologies were selected.
The question I’ve been pondering is simple: when does it begin to swing back in the other direction?

If there is any reversal here, it will come from developers. Even the large, CIO-centric incumbents are aware today that developers are in charge, so there’s no evidence to suggest that CIOs have a plausible strategy for putting developers back under their thumb. But while over the last few years newly empowered developers have shown an insatiable appetite for new technologies, it hasn’t been clear that this trajectory was sustainable longer term.

Which is why I’ve been paying attention, looking for evidence that the pendulum swing might be slowing – even reversing. The data is inconclusive. As Donnie has noted, there have only been five languages that really mattered on a volume basis on GitHub: JavaScript, Ruby, Java, PHP, and Python. And yet our rankings indicate that while they do indeed represent the fat part of the tail, there is substantial, ongoing volume usage of maybe twenty to thirty more languages on top of that.

What the data won’t say, however, developers themselves will. Witness this piece from Tim Bray:

There is a real cost to this continuous widening of the base of knowledge a developer has to have to remain relevant. One of today’s buzzwords is “full-stack developer”. Which sounds good, but there’s a little guy in the back of my mind screaming “You mean I have to know Gradle internals and ListView failure modes and NSManagedObject quirks and Ember containers and the Actor model and what interface{} means in Go and Docker support variation in Cloud providers?” Color me suspicious.

Which links to this piece by Ed Finkler:

My tolerance for learning curves grows smaller every day. New technologies, once exciting for the sake of newness, now seem like hassles. I’m less and less tolerant of hokey marketing filled with superlatives. I value stability and clarity.

Which elicited this response from Marco Arment:

I feel the same way, and it’s one of the reasons I’ve lost almost all interest in being a web developer. The client-side app world is much more stable, favoring deep knowledge of infrequent changes over the constant barrage of new, not necessarily better but at least different technologies, libraries, frameworks, techniques, and methodologies that burden professional web development.

Which in turn prompted a response from Matt Gemmell entitled “Confessions of an Ex-Developer”:

I’m glad there are no compilers (visible) in my life. I’m also glad that I can view the WWDC keynote as a tourist, without any approaching tension headache as I think about what I’ll need to add, or change, or remove. I can drift languidly along on the slow-moving current of the everyday web, indulging an old habit when a rainy evening comes by.

It’s a profoundly relaxing thing to be able to observe the technology industry without being invested in it. I’m glad I’m not making software anymore.

To be clear, these are merely four developers. Four experienced developers, more importantly. It may very well be that their experiences are nothing more than a natural and understandable change in priorities that comes with age.

But their experience seems to mirror a logical reaction to a very rapid set of transformations in this industry. Given the hypothesis that the furious rate of change and creation in technology will at some point hit a point of diminishing returns, then become actively counterproductive, it follows that these could merely be the bleeding edge of a more active backlash against complexity. Developers have historically had an insatiable appetite for new technology, but it could be that we’re approaching the too-much-of-a-good-thing stage. In which case, the logical outcome will be a gradual slowing of fragmentation followed by gradual consolidation. Market outcomes would be dependent on individual differences between rates of change, the negative impacts of fragmentation and so on.

It may be difficult to conceive of a return to a more simple environment, but remember that the Cambrian explosion the current rate of innovation is often compared to was itself very brief – in geologic terms, at least. Unnatural rates of change are by definition unnatural, and therefore difficult to sustain over time. It is doubtful that we’ll ever see a return to the radically more simple environment created by the early software giants, but it’s likely that we’ll see dramatically fewer popular options per category.

Whether we’re reaching the apex of the swing towards fragmentation is debatable; less so is the fact that the pendulum will swing the other way eventually. It’s not a matter of if, but when.

Author: "stephen o\'grady" Tags: "Programming Languages"
Date: Wednesday, 06 Aug 2014 19:58

defining the unit of atomic weight

According to published reports, Docker (née dotCloud) is in the process of securing $40M in financing. Update: Originally mis-stated the amount of financing, but the substance of the post stands.

If popularity is a guiding metric, this infusion will come as no surprise. Docker is one of the fastest growing projects we have ever seen at RedMonk, and virtually no one we speak with is surprised to hear that. In a little over a year, Docker has exploded into a technology that is seeing near universal uptake, from traditional enterprise IT suppliers (e.g. Red Hat) to emerging infrastructure players (e.g. Google).

There are many questions currently being asked about Docker. Most obviously, why now? The idea of containers is not new; conceptually it can be dated back to the mainframe, with more recent implementations ranging from FreeBSD Jails to Solaris Zones. What is it about Docker that has captured mainstream interest where previous container technologies could not?

Rather than one explanation, it is likely a combination of factors. Most obviously, there is the popularity of the underlying platform. Linux is exponentially more popular today than any of the other platforms offering containers have been. Containers are an important, perhaps transformative feature. But they historically haven’t been enough to compel a switch from one operating system to another.

Perhaps more importantly, however, there are two larger industry shifts at work which ease the adoption of container technologies. First, there is the near ubiquity of virtualization within the enterprise. When Solaris Zones dropped in 2004, for example, VMware was six years old, five months from being bought by EMC (in a move that baffled the industry) and three years away from an IPO. Ten years later, and virtualization is, quite literally, everywhere. At OSCON, for example, one database expert noted that somewhere between 30% and 50% of his very large database workloads were running virtualized. Databases – traditionally the last workload to be virtualized – are, in other words, already virtualized almost half the time. Just as the ASP market failure paved the way for the later SaaS market entrants, the long fight for virtualization acceptance has likely eased the adoption of container technologies like Docker.

More specific to containers, however, is the steady erosion in the importance of the operating system. To be sure, packaged applications and many infrastructure components are still heavily dependent on operating system-specific certifications and support packages. But it’s difficult to make the case that the operating system is as all-powerful as it once was, given the complete reversal of attitudes towards Ubuntu in the cloud era. Prior to the ascension of Amazon and other public cloud suppliers, large scale enterprise support for Ubuntu was near zero. Today, besides being far and away the most popular distribution on Amazon, Ubuntu is supported by those same enterprise stalwarts from HP to IBM. Nor has IaaS been the only factor in the ongoing disintermediation of the operating system; as discussed previously, PaaS is the new middleware, and middleware’s explicit mission has historically been to abstract the application from the operating system underneath it.

These developments imply that there is a shift at work in the overall market importance of the operating system (a shift that we have been expecting since 2010), which in turn helps explain how containers have become so popular so quickly. Unlike virtual machines, which replicate an entire operating system, containers act like a diff of two different images. Operating system components common to the two images are shared, leaving the container to house just the difference: little more than the application and any specific dependent libraries. Which means that containers are substantially lighter weight than full VMs. If applications are heavily operating system dependent and you run a mix of operating systems, containers will be problematic. If the operating system is a less important question, however, containers are a means of achieving much higher application density on a given instance versus virtual machines fully emulating an operating system.
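To make the density argument concrete, here is a toy back-of-the-envelope model – not Docker itself, and with entirely hypothetical application names and sizes – of why sharing the operating system layer pays off:

```python
# Toy model: a VM replicates a full OS alongside each app, while
# containers share one copy of the OS and each add only their "diff"
# (the app plus its specific libraries). Sizes are made up for
# illustration only.

BASE_OS_MB = 1024          # shared substrate: kernel, userland, etc.
APP_LAYERS_MB = {          # per-application unique content (hypothetical)
    "web": 50,
    "worker": 30,
    "cache": 20,
}

def vm_footprint(apps):
    """Each VM carries the entire OS plus its application."""
    return sum(BASE_OS_MB + size for size in apps.values())

def container_footprint(apps):
    """Containers share one OS copy; each adds only its diff."""
    return BASE_OS_MB + sum(apps.values())

print(vm_footprint(APP_LAYERS_MB))        # 3172 MB across three VMs
print(container_footprint(APP_LAYERS_MB)) # 1124 MB for the same three apps
```

Even with these invented numbers, the same three applications consume roughly a third of the space as containers, and the gap widens with every additional application stacked on the shared base.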

Taken in the aggregate, this is at least a partial explanation for the question of “why now?” As is typical with dramatic movements, Docker’s success is as much about context as the quality of the underlying technology – intending no disrespect to the Docker engineers, of course. Engineering is critical, it’s just that timing is usually more critical.

The most important question about Docker, however, isn’t “why now?” It is rather the one being asked more rarely today, by those struggling to understand where the often overlapping puzzle pieces fit. The explosion of Docker’s popularity raises a more fundamental question: what is the atomic unit of infrastructure moving forward? At one point in time, this was a server: applications were conceived of, and deployed to, a given physical machine. More recently, the base element of an infrastructure was a virtual recreation of that physical machine. Whether you defined that as Amazon did or VMware might was less important than the idea that an image resembling a server, from virtualized hardware and networking interfaces to a full instance of an operating system, was the base unit from which everything else was composed.

Containers generally and Docker specifically challenge that notion, treating the operating system and everything beneath it as a shared substrate, a universal foundation that’s not much more interesting than the raised floor of a datacenter. For containers, the base unit of construction is the application. That’s the only real unique element.

What this means is as yet undetermined. Users are for the most part years away from understanding this division, let alone digesting its implications. But vendors and projects alike should be – and in some cases are – beginning to critically evaluate the lens through which they view the world. Infrastructure players like VMware and the OpenStack ecosystem, for example, need to project forward the potential opportunities and threats presented by an application-centric as opposed to a VM-centric worldview, while Docker and others in similar orbits (e.g. Cloud Foundry) conversely need to consider how to traverse the comprehension gap between what users expect and what they get.

Google App Engine, Force.com and others, remember, tried to sublimate the underlying infrastructure in the first generation of PaaS offerings, and the result was a market dwarfed by IaaS – which not coincidentally looked a lot more like the physical infrastructure customers were used to. But as the Turkey Fallacy states, “it hasn’t happened so it won’t happen” is not the most sustainable defense imaginable. Just because PaaS struggled to get customers beyond thinking in terms of physical hardware doesn’t mean that Docker will as well.

In any event, expect to see players on both sides of the VM / app divide aggressively jockeying for position, as no one wants to be the one left without a chair when the music stops.

Author: "stephen o\'grady" Tags: "Containers, Open Source, Virtualization"
Date: Monday, 07 Jul 2014 23:10

Founded in 1998, VMware went public on August 14, 2007. Founded a year after VMware in 1999, Salesforce.com held its initial public offering three years earlier, on June 23, 2004. From a temporal standpoint, then, the two companies are peers. Which is one reason that it’s interesting to examine how they have performed to date, and how they might perform moving forward.

Another is the fact that they represent, at least for now, radically different approaches to the market. VMware, of course, is the dominant player in virtualization and markets adjacent to that space. Where Microsoft built its enormous value in part by selling licenses to the operating system upon which workloads of all shapes and sizes were run, VMware’s revenue has been driven primarily by the sales of software that makes operating systems like Windows virtual.

Salesforce.com, on the other hand, has been since its inception the canonical example for what eventually came to be known as Software-as-a-Service, the sale of software centrally hosted and consumed via a browser. Not only is Salesforce.com’s software – or service, as they might prefer it be referred to – consumed very differently than software from VMware, the licensing and revenue model of services-based offerings is quite distinct from the traditional perpetual license business.

It has been argued in this space many times previously (see here, for example) that the perpetual license model that has dominated the industry for decades is increasingly under pressure from a number of challengers ranging from open source competition to software sold, as in the case of Salesforce.com, as a service. In spite of the economic challenge inherent to running a business where your revenue is realized over a long period of time that competes with those that get all of their money up front, the bet here is that the latter model will increasingly give way to the former.

Which is why it’s interesting, and perhaps informative, to examine the relative performance, and market valuation, of two peers that have taken very different approaches. What does the market believe about the two models, and what does their performance tell us if anything?

It is necessary to acknowledge before continuing that in addition to the difference in terms of model, Salesforce and VMware are not peers from a categorical standpoint. The former is primarily an applications vendor, its investments in platforms such as Force.com or Heroku notwithstanding, while the latter is an infrastructure player, in spite of its dalliances with application plays such as Zimbra. While the comparison is hardly apples to apples, therefore, it is nevertheless instructive in that both vendors compete for a share of organizational IT spend. But keep that distinction in mind regardless.

As for the most basic metric, market capitalization, VMware is currently worth more than Salesforce: $42 billion to $35 billion as I write this. Even the most cursory examination of some basic financial metrics will explain why. Since it began reporting in 2003, Salesforce.com has on a net basis lost just over $350 million. VMware, for its part, has generated a net profit of almost $4 billion.

If we examine the quarterly gross profit margins of the two entities over time, these numbers make sense.

Apart from some initially comparable margins for Salesforce, VMware has consistently commanded a higher margin than its SaaS-based counterpart. This chart likely won’t surprise anyone who’s tracked the markets broadly or the companies specifically, but it’s interesting to note the fork in the 2010-2011 timeframe. Through 2010, the margins appeared to be on a path to converge; after, they diverged aggressively. Nor is this the only metric in which this pattern is visible.

We see the same trajectories in a plot of the quarterly net income, for example, but this is to be expected since this is in part a function of the profit generated.

The question is what changed in the 2010 timeframe, and one answer appears to be investments in scale. The following chart depicts the gross property, plant and equipment charges for both firms over the same timeframe.

Notice in particular the sharp spike beginning in 2010-2011 for Salesforce. After six years as a public entity, the company began investing in earnest, which undoubtedly had consequences partially reflected in the charts above. VMware’s expenditures here are interesting for their part, because the conventional wisdom is that it is services firms like Salesforce, Amazon, Google or, increasingly, Microsoft whose PP&E will reflect their necessary investments in the infrastructure to deliver services at scale. Granted, VMware’s vCloud Hybrid Service is intended to serve as a public infrastructure offering, but this was intended to be an “asset-light model” which presumably would not have commanded the same infrastructure investments. Nevertheless, VMware outpaced Salesforce until 2014.

The question, whether it’s from the perspective of an analyst or an investor, is what the returns have been on this dramatic increase in spending. Obviously in Salesforce’s case, its capital investments have dragged down its income and margins, while VMware’s dominant market position has allowed it to not only sustain its pricing but grow its profitability. But what about revenue growth?

One of the strongest arguments in favor of SaaS products is convenience; it’s far less complicated to sign up for an externally hosted service than it is to build and host your own. If convenience is indeed a driver for adoption, greater revenue growth is one potential outcome: if it’s easier to buy, it will be bought more. This is, in fact, a necessity for Salesforce: if you’re going to trade losses now for growth, you need the growth. To some extent, this is what we see when we compare the two companies.

The initial decline from 70+ percent growth for both companies is likely the inevitable product of simple math: the more you make, the harder it is to grow. While we can discount the first half of this chart, the second half is intriguing in that it reverses the pattern we have seen above. While VMware solidly outperformed Salesforce by a growing margin in profit and income, Salesforce, beginning about a year after its PP&E investments picked up, has grown its revenue at a higher rate than has VMware. Early in this period you could argue the rate differential was a function of revenue disparities, but the delta between the revenue numbers last quarter was less than 10%.

In general, none of these results should be surprising. VMware has successfully capitalized on a dominant position in a valuable market, and the financial results demonstrate that. Salesforce, as investors appear to have recognized, is clearly trading short term losses against longer term returns. While there is some evidence to suggest that Salesforce’s strategy is beginning to see results, and VMware is probably paying closer attention to its overall ability to grow revenue, it’s still very early days. It’s equally possible that one or both are poor representatives of their respective approaches. It will be interesting to monitor these numbers over time, however, to test how the two models continue to perform versus one another.

Author: "stephen o\'grady" Tags: "Business Models, Software-as-a-Service"
Date: Friday, 13 Jun 2014 17:20

As we settle into a roughly semi-annual schedule for our programming language rankings, it is now time for the second drop of the year. This being the second run since GitHub retired its own rankings, forcing us to replicate them by querying the GitHub archive, we are continuing to monitor the results for material differences between current and past rankings. While we’ve had slightly more movement than is typical, by and large the results have remained fairly consistent.

One important trend worth tracking, however, is the correlation between the GitHub and Stack Overflow rankings. This is the second consecutive period in which the relationship between how popular a language is on GitHub versus Stack Overflow has weakened; this run’s .74 is in fact the lowest observed correlation to date. Historically, the number has been closer to .80. With only two datapoints indicating a weakening – and given the fact that at nearly .75, the correlation remains strong – it is premature to speculate as to cause. But it will be interesting to monitor this relationship over time; should GitHub and Stack Overflow continue to drift apart in terms of programming language traction, it would be news.
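For the curious, a correlation like the one quoted above is straightforward to reproduce. The sketch below uses made-up rank data – not our actual GitHub and Stack Overflow datasets – to compute a Spearman-style rank correlation in plain Python:

```python
# Illustrative only: hypothetical language ranks on each site.
github_rank = {"JavaScript": 1, "Java": 2, "PHP": 3, "Python": 4, "Ruby": 5}
stack_rank  = {"Java": 1, "JavaScript": 2, "PHP": 4, "Python": 3, "Ruby": 5}

def pearson(xs, ys):
    """Plain Pearson correlation; applied to ranks, this is Spearman's rho."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

langs = sorted(github_rank)
rho = pearson([github_rank[l] for l in langs], [stack_rank[l] for l in langs])
print(round(rho, 2))  # → 0.8
```

A value of 1.0 would mean the two sites rank languages identically; the drift we describe is this number moving away from 1.0 over successive runs.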

For the time being, however, the focus will remain on the current rankings. Before we continue, please keep in mind the usual caveats.

  • To be included in this analysis, a language must be observable within both GitHub and Stack Overflow.
  • No claims are made here that these rankings are representative of general usage more broadly. They are nothing more or less than an examination of the correlation between two populations we believe to be predictive of future use, hence their value.
  • There are many potential communities that could be surveyed for this analysis. GitHub and Stack Overflow are used here first because of their size and second because of their public exposure of the data necessary for the analysis. We encourage, however, interested parties to perform their own analyses using other sources.
  • All numerical rankings should be taken with a grain of salt. We rank by numbers here strictly for the sake of interest. In general, the numerical ranking is substantially less relevant than the language’s tier or grouping. In many cases, one spot on the list is not distinguishable from the next. The separation between language tiers on the plot, however, is generally representative of substantial differences in relative popularity.
  • GitHub language rankings are based on raw lines of code, which means that repositories written in a given language that include a greater amount of code in a second language (e.g. JavaScript) will be read as the latter rather than the former.
  • In addition, the further down the rankings one goes, the less data available to rank languages by. Beyond the top tiers of languages, depending on the snapshot, the amount of data to assess is minute, and the actual placement of languages becomes less reliable the further down the list one proceeds.
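To illustrate the lines-of-code caveat above, consider a hypothetical repository (file names and counts invented here) attributed to whichever language contributes the most code:

```python
# A "Ruby" application that bundles a large JavaScript dependency.
repo_files = {
    "app.rb": ("Ruby", 350),
    "lib/util.rb": ("Ruby", 120),
    "vendor/jquery.js": ("JavaScript", 9000),  # vendored library
}

def dominant_language(files):
    """Attribute the repository to the language with the most lines of code."""
    totals = {}
    for _, (lang, loc) in files.items():
        totals[lang] = totals.get(lang, 0) + loc
    return max(totals, key=totals.get)

print(dominant_language(repo_files))  # → JavaScript, despite being a Ruby app
```

This is why vendored assets can skew per-repository language attribution, and in aggregate, the rankings derived from it.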


[Plot: programming language popularity, GitHub vs. Stack Overflow]

Besides the above plot, which can be difficult to parse even at full size, we offer the following numerical rankings. As will be observed, this run produced several ties which are reflected below.

1 Java / JavaScript
3 PHP
4 Python
5 C#
6 C++ / Ruby
8 CSS
9 C
10 Objective-C
11 Shell
12 Perl
13 R
14 Scala
15 Haskell
16 Matlab
17 Visual Basic
18 CoffeeScript
19 Clojure / Groovy

Most notable for advocates of either Java or JavaScript is the tie atop these rankings. This finding is not surprising in light of the fact that one or the other – most commonly JavaScript – has been atop our rankings as long as we have had them, with the loser invariably finishing in second place. For this run, however, the two languages find themselves in a statistical tie. While the actual placement is, as mentioned above, not particularly significant from an overall share perspective, the continued, sustained popularity of these two runtimes is notable.

Aside from that tie, the rest of the Top 10 is relatively stable. Python retook fourth place from C#, and CSS pushed back C and Objective-C, but these changes notwithstanding the elite performers in this ranking remain elite performers. PHP, as one example, remains rock steady in third behind the Java/JavaScript tandem, and aside from a slight decline from Ruby (5 in 2013, 7 today) little else has changed. Which means that the majority of the interesting activity occurred further down the spectrum. A few notes below on notable movements from selected languages.

  • R: Advocates of R will be pleased by the language’s fourth consecutive gain in the rankings. From 18 in January of 2013 to 13 in this run, the R language continues to rise. Astute observers might note by comparing plots that this is in part due to growth on GitHub; while R has always performed well on Stack Overflow due to the volume of questions and answers, it has tended to be under-represented on GitHub. This appears to be slowly changing, however, in spite of competition from Python, issues with the runtime itself and so on.
  • Go: Like R, Go is sustaining its upward trajectory in the rankings. It didn’t match its six-place jump from our last run, but the language moved up another spot and sits just outside the Top 20 at 21. While we caution against reading much into the actual placement in these rankings, where differences between spots can over-represent marginal differences in performance, we do track changes in trajectory closely. While its 21st spot, therefore, may not distinguish it materially from the languages directly above or below it, its trendline within these rankings does. Given the movement to date, as well as the qualitative evidence we see in terms of projects moving to Go from other alternatives, it is not unreasonable to expect Go to be a Top 20 language within the next six to twelve months.
  • Perl: Perl, on the other hand, is trending in the opposite direction. Its decline has been slow, to be fair, dropping from 10 only down to 12 in our latest rankings, but it’s one of the few Tier 1 languages that has experienced a decline with no offsetting growth since we have been conducting these rankings. While Perl was the glue that pulled together the early web, many believe the Perl 5 versus Perl 6 divide has fractured that userbase, and at the very least has throttled adoption. While the causative factors are debatable, however, the evidence – both quantitative and qualitative – points to a runtime that is less competitive and significant than it once was.
  • Julia/Rust: Two of the first quarter’s three languages to watch – Elixir didn’t demonstrate the same improvement – continued their rise. Each jumped five spots, from 62/63 to 57/58. This leaves them still well outside the second tier of languages, but they continue to climb in our rankings. For differing reasons, these two languages are proving to be popular subjects of investigation and experimentation, and it’s certainly possible that one or both could follow in Go’s footsteps and make their way up the rankings into the second tier of languages at a minimum.
  • Dart: Dart, Google’s potential replacement for JavaScript, is a language we receive periodic inquiries about, although not as high a volume of them as might be expected. It experienced no movement since our last ranking, placing 39th in both of our last two runs. And while solidly in the second tier at that score, it hasn’t demonstrated to date the same potential for rapid uptake that Go has – in all likelihood because its intended target, JavaScript, has sustained its overwhelming popularity.
  • Swift: Making its debut on our rankings in the wake of its announcement at WWDC is Swift, which checks in at 68 on our board. Depending on your perspective, this is either low for a language this significant or impressive for a language that is a few weeks old. Either way, it seems clear that – whatever its technical issues and limitations – Swift is a language that is going to be a lot more popular, and very soon. It might be cheating, but Swift is our language to watch this quarter.

Big picture, the takeaway from the rankings is that the language diversity explored most recently by my colleague remains the norm. While the Top 20 continues to be relatively static, we do see longer term trends adding new players (e.g. Go) to this mix. Whatever the resulting mix, however, it will ultimately be a reflection of developers’ desires to use the best tool for the job.

Author: "stephen o\'grady" Tags: "Programming Languages"
Date: Tuesday, 20 May 2014 20:54

[Image: server room with grass]

Following the departure of Steve Ballmer, one of the outgoing executive’s defenders pointed to Microsoft’s profit over his tenure relative to a number of other competitors. One of those was Salesforce.com, which compared negatively for the simple reason that it has not generated a net profit. This swipe was in keeping with the common industry criticism of other services based firms from Amazon to Workday. As far as balance sheets are concerned, services plays – be they infrastructure, platform, software or a combination of all three – are poor investments. Which in turn explains why the upward trajectory common to the share prices of firms of this type has generated talk of a bubble.

Recently, however, Andreessen Horowitz’s Preethi Kasireddy and Scott Kupor questioned in print and podcast form the mechanics of how SaaS firms in particular are being evaluated. The source will be an issue for some, undoubtedly, as venture capitalists have a long history of creatively interpreting financial metrics and macro-industry trends for their own benefit. Kasireddy and Kupor’s explanations, however, are simple, digestible and rooted in actual metrics as opposed to the “eyeballs” that fueled the industry’s last recreation of tulipmania.

The most obvious issue they highlight with services entities versus perpetual software models is revenue recognition. Traditional licenses are paid up front, which means that vendors can apply the entire sale to the quarter in which it was received, which a) provides a revenue jolt and b) helps offset the incurred expenses. Services firms, however, typically incur all of the costs of development and customer acquisition up front but are only able to recognize the revenue as the service is delivered. As they put it,

The customer often only pays for the service one month or year at a time — but the software business has to pay its full expenses immediately…

The key takeaway here is that in a young SaaS business, growth exacerbates cash flow — the faster it grows, the more up-front sales expense it incurs without the corresponding incoming cash from customer subscriptions fees.

The logical question, then, is this: if services are such a poor business model, why would anyone invest in companies built on them? According to Kasireddy and Kupor, the answer is essentially the ratio of customer lifetime value (LTV) to customer acquisition costs (CAC). Their argument ultimately can be reduced to LTV, which they calculate via some basic math involving the annual recurring revenue, gross margin, churn rate and discount rate, and then measure against CAC to produce a picture of the business’s productivity. The higher the multiple of a given customer’s lifetime value relative to the costs of acquiring same, obviously, the better the business.
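As a rough sketch – using invented figures and one common simplification of the LTV math, treating the customer as a margin-adjusted perpetuity discounted by churn plus the discount rate, rather than Kasireddy and Kupor’s exact formula – the calculation looks something like this:

```python
# Hypothetical figures for a single SaaS customer.
arr = 10_000.0        # annual recurring revenue per customer ($)
gross_margin = 0.80   # fraction of revenue kept after cost of service
churn = 0.10          # annual probability the customer cancels
discount_rate = 0.08  # annual discount rate on future cash flows
cac = 8_000.0         # cost to acquire the customer ($)

# Margin dollars each year, shrinking with churn and discounted to the present.
ltv = arr * gross_margin / (churn + discount_rate)
print(f"LTV = ${ltv:,.0f}, LTV/CAC = {ltv / cac:.1f}x")  # → LTV = $44,444, LTV/CAC = 5.6x
```

On these made-up numbers the customer returns several times their acquisition cost over their lifetime, which is the basic case for tolerating losses up front.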

Clearly businesses betting on services are doing so – at least in part; competitive pressures are another key driver – because they believe that while the benefits of upfront perpetual licensing are tempting, the more sustainable, higher margin approach over the longer term is subscriptions. Businesses like Adobe who have made this transition would not willingly leave the capital windfall on the table otherwise. Which means that while we need more market data to properly evaluate Kasireddy and Kupor’s valuation model over time, in particular their contention that services plays will be winner-take-all by default, it is difficult to argue the point that amortizing license fees over time allows vendors to extract a premium that is difficult to replicate with up front, windfall-style licensing. Even if this alternative model cannot justify current valuations, ultimately, it remains a compelling argument for why services based companies should not be evaluated in the same manner as their on premise counterparts.

But there is one other advantage to services based businesses that Kasireddy and Kupor did not cover, or even mention. If you listen to the podcast or read the linked piece, the word “data” is mentioned zero times. Which is an interesting omission, because from this vantage point data is one of the most crucial structural advantages of services based businesses. There are others, certainly: the two mention the R&D savings, for example, that are realized by supporting a single version of an application versus multiple versions over multiple platforms. But potentially far more important from my perspective is the inherent advantage IaaS, PaaS and SaaS vendors have in visibility. Services platforms operating at scale have the opportunity to monitor, to whatever level of detail they prefer, customer behaviors ranging from transactional velocity and collaboration rates to technology preferences, deployment patterns and seasonal consumption trends – literally anything. They can tell an organization how this data is trending over time, and they can compare a customer against baselines of all other customers, of customers in their industry, or of direct competitors.

Traditional vendors of on premise technologies sold under perpetual licenses need to ask permission to audit the infrastructure in any form, and to date customers have been reluctant to grant this permission widely, in part due to horrifically negative experiences with vendor licensing audit teams. By hosting the infrastructure and customers centrally, however, services based firms are granted virtually by default information that sellers of on premise solutions would only be able to extract from a subset of customers. There is a level of permission inherent in working off of remotely hosted applications: customers must accept the reality that literally every action, every behavior, every choice can (and should) be tracked for later usage.

How this data is ultimately used depends, of course, on the business. Companies like Google might leverage the data stored and tracked to serve you more heavily optimized ads or to decide whether or not to roll out more widely a selectively deployed new feature. Amazon might help guide technology choices by providing some transparency into which of a given operating system, database and so on was more widely used, and in what context. The Salesforces and Workdays of the world, meanwhile, can observe business practices in detail, compare them with those of other customers and then present those findings back to their customers. For a fee, of course.

Which is ultimately why data is an odd asset to ignore when discussing the valuation of services firms. Should they execute properly, vendors whose products are consumed as a service are strongly differentiated from software-only players. They effectively begin selling not just software, but an amalgam of software and the data-based insights gleaned from every other user of the product – a combination that would be difficult, if not impossible, for on premise software vendors to replicate. Given this ability to differentiate, it seems likely that services firms will increasingly command a premium, or at the very least introduce premium services on top of the base software experience they’re delivering. And over time, the data becomes a more and more significant barrier to entry, as Apple learned quite painfully.

This idea of leveraging software to generate data isn’t new, of course. We at RedMonk have been writing about it since at least 2007. For reasons that are not apparent, however, very few seem to be factoring it into their public valuations of services based businesses. Pieces like the above notwithstanding, we do not expect this to continue.

Author: "stephen o\'grady" Tags: "Cloud, Data, Economics"
Date: Thursday, 15 May 2014 18:42

[Image: no ifdefs]

Even though the UNIX system introduces a number of innovative programs and techniques, no single program or idea makes it work well. Instead, what makes it effective is the approach to programming, a philosophy of using the computer. Although that philosophy can’t be written down in a single sentence, at its heart is the idea that the power of a system comes more from the relationships among programs than from the programs themselves. Many UNIX programs do quite trivial things in isolation, but, combined with other programs, become general and useful tools.
The UNIX Programming Environment, Brian Kernighan and Rob Pike

“Is the ‘all in one’ story compelling or should we separate out a [redacted] LOB?” is a question we fielded from a RedMonk client this week, and it’s an increasingly common inquiry topic. For years, functional accretion – in which feature after feature is layered upon a basic application foundation – has been the norm. The advantage to this “all in one” approach is that buyers need only to make one decision, and refer to one seller for support. This simple choice, of course, has a cost: the jack of all trades is the master of none.

At RedMonk, we have been arguing for many years that the developer has evolved from virtual serf to de facto kingmaker. Accepting that, at least for the sake of argument, it is worth asking whether one of the unintended consequences of this transition may be a return to Unix-style philosophies.

The most obvious example of this in the enterprise market today is so-called microservices. Much like Unix programs, many services by themselves are trivial in isolation, but leveraged in concert can be tremendously powerful tools. This demands, of course, an Amazon-level commitment to services, such that every facet of a given infrastructure may be consumed independently and on demand – and this level of commitment is rare. But with even large incumbents increasingly focused on making their existing software portfolios available as services, the trend towards services broadly is real and clearly sustainable.

The trend towards microservices, which are much more granular in nature, is more recent and thus more difficult to project longer term (particularly given some of the costs), but certainly exploding in popularity. Sessions like Adrian Cockcroft’s “Migrating to Microservices” are regularly standing room only, with lines wrapping down two halls. The parallels between the Unix philosophy and microservices are obvious, in that both essentially are devoted to the idea of composable applications built from programs that do one thing well.

These types of services are difficult to sell to traditional IT buyers, who might not understand them well enough, would prefer to make a single decision or both. But developers understand the idea perfectly, and would prefer to choose a service that does what they need it to over one that may or may not do what they need but does ten things they don’t. It’s easy, then, to see microservices as the latest manifestation of the developer kingmaker.

It’s not as easy, however, to understand a similar trend in the consumer application space. In recent months, rather than continue trying to build a single application that serviced both file sharing and photo sharing needs, Dropbox split its application into the traditional Dropbox (Files) and the newly launched Carousel (Photos). Foursquare today released an application called Swarm, which essentially forks its business into two divisions: Foursquare, a Yelp competitor, and Swarm, a geo-based social network. Facebook, meanwhile, ripped out the messaging component of its core application in April because, as Mark Zuckerberg described it:

The reason why we’re doing that is we found that having it as a second-class thing inside the Facebook app makes it so there’s more friction to replying to messages, so we would rather have people be using a more focused experience for that.

Like enterprise tech, consumer technology has been trending towards all-in-one for some time, as pieces like the “Surprisingly Long List of Everything Smartphones Replaced” highlight. But if Dropbox, Foursquare and Facebook are any indication, it may be that a Unix philosophy renaissance is underway in the consumer space as well, even if the causative factors aren’t as obvious.

All of which means that our answer to the opening question should come as no surprise: we advised our client to separate out a new line of business. Particularly when developers are involved, it remains more effective to offer products that do one thing well, however trivial. As long as the Unix-ization of tech continues, you might consider doing the same.

Author: "stephen o\'grady" Tags: "Microservices"
Date: Wednesday, 07 May 2014 22:31

Among the predictions in this space for the year 2014 was the idea that disruption was coming to storage. Having looked at the numbers, this prediction may have been off: disruption had apparently already arrived. By my math, these are EMC’s revenue growth rates for the last four years for its Information Infrastructure business: 18.43% (2010), 17.92% (2011), 2.05% (2012), 3.48% (2013). While the Information Infrastructure includes a few different businesses, Information Storage – what EMC is best known for – is responsible for around 91% of the revenue for the Information Infrastructure reporting category. And Information Infrastructure, in turn, generates 77% of EMC’s total consolidated revenue – the rest is mostly VMware (22%).
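The arithmetic behind growth rates like these is simple year-over-year comparison; the sketch below uses hypothetical revenue figures, chosen only to echo a similar trajectory, not EMC’s actual reported numbers:

```python
# Hypothetical annual revenue figures (in $B) for a business line.
revenue = {2009: 10.2, 2010: 12.1, 2011: 14.2, 2012: 14.5, 2013: 15.0}

def yoy_growth(series):
    """Year-over-year growth, as a percentage, for each year after the first."""
    years = sorted(series)
    return {y: 100.0 * (series[y] - series[p]) / series[p]
            for p, y in zip(years, years[1:])}

for year, pct in yoy_growth(revenue).items():
    print(f"{year}: {pct:.2f}%")
```

On these invented numbers the pattern is the same one described above: healthy double-digit growth followed by an abrupt drop to low single digits.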

All of this tells us two things. One, that EMC has seen a multi-year downward trajectory in its ability to grow its storage business, and two, that storage is responsible for the majority of the company’s revenue. Put one and two together and it’s clear that the company has a problem.

How the company has reacted to these developments, meanwhile, can help observers gain a better understanding of what EMC believes are the causes to this under-performance. Based on the announcements at EMC World, it’s easy to sum up the company’s strategic response in one word: software. From ScaleIO to ViPR to the acquisition of DSSD and its team of ex-Solaris engineers, a lot of the really interesting news at EMC World was about software, which is an interesting shift for a hardware company. EMC is committed enough to its software strategy, in fact, that it’s willing to directly compete with its subsidiaries.

If it’s true that EMC is betting heavily on software to restore its hardware growth, the next logical question is whether this is the appropriate response. Based on what happened to the major commodity compute players – Dell has gone private, HP is charging for firmware and IBM left the market entirely – it’s difficult to argue for a different course of action. It seems unlikely that the optimal approach moving forward for EMC – or any other storage provider, for that matter – is going to be heavy hardware engineering. There are customers, particularly loyal EMC customers, that are hungry for hardware innovation and will continue to pay outsized margins for that gear moving forward. There are many more customers, however, willing to explore software abstractions layered on top of commodity hardware, otherwise known as software-defined storage. There’s a reason that EMC’s primary points of comparison were vendors like Amazon and Google rather than its traditional competitors.

Like its counterparts in the networking space who are coping with the implications of software-defined offerings in their space, EMC essentially had two choices: bury its head in the sand and pretend that the business is fine, or begin to tactically incorporate disruptive elements as part of a longer term strategy for adapting its business. Which is another way of saying that the company only really had one realistic choice, which to its credit was made: EMC is clearly adapting. Software-defined storage was a common topic of discussion at the company’s event this week, and while there are still areas where the embrace is awkward, the company clearly understands the challenge ahead and is taking steps to adjust its product lines and the models behind them. The transition to what it calls the “third platform” – EMC’s terminology for the cloud – will pose monumental challenges to the business longer term, but by betting on software EMC is investing in the area most likely to deliver differentiated value over time.

The biggest problem with the transition to the “third platform,” however, isn’t going to be its engineering response. As the company likes to point out, it is investing heavily in both M&A and traditional R&D, and with names like Bechtolsheim, Bonwick, Shapiro et al coming on board it’ll have the requisite brainpower available. But the problem with its current strategy is that it does little to prioritize convenience. As we’ve seen in the cloud compute segment, customers are increasingly willing to trade performance and features for speed and ready availability. And like most systems vendors, EMC is not currently built to service this type of demand directly; it will instead have to settle for an arms supplier-type role. Even in software, which is intrinsically simpler to make available than the hardware EMC has traditionally sold, the company keeps assets like ViPR locked up behind registration walls. In a market in which technology decisions are being made based more on what’s available than what’s good, that’s an issue.

The gist of all this, then, in the wake of EMC World is that the company is inarguably adapting to a market that’s rapidly changing around it, but has tough problems to solve in availability and convenience. The loyalty of EMC accounts is absolutely an asset, one that the company will need to rely on as customers make the “third platform” transition moving forward. But the company also needs to remember that technology decision making at those loyal EMC accounts has changed materially, and is increasingly advantaging players like Amazon at the expense of incumbents.

The focus on software engineering, therefore, is appropriate and welcome, but insufficient by itself to address the coming transition. Only a focus on reducing the friction of adoption, and improving developer engagement, can fix that.

Disclosure: EMC is a client, as are Amazon and VMware.

Author: "stephen o'grady" Tags: "Business Models, Cloud, Storage"
Date: Thursday, 27 Mar 2014 15:52

While the term SOA was lost to marketers years ago, the underlying concept may be in the process of making a comeback. Though the acronym itself has become a bad word everywhere but the most conservative enterprises and suppliers, constructing applications from services has clear and obvious benefits. In his instant classic post about his time at Amazon, Google’s Steve Yegge described Amazon’s journey towards an architecture composed of services this way:

So one day Jeff Bezos issued a mandate…His Big Mandate went something along these lines:

1) All teams will henceforth expose their data and functionality through service interfaces.

2) Teams must communicate with each other through these interfaces.

3) There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.

4) It doesn’t matter what technology they use. HTTP, Corba, Pubsub, custom protocols — doesn’t matter. Bezos doesn’t care.

5) All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.

6) Anyone who doesn’t do this will be fired.

Like Cortez’s soldiers, the Amazon employees got to work, if for no other reason than that they had no choice. The result, in part, is the Amazon you see today, the same one that effectively owns the market for public cloud services at present. Much as enterprises have historically written off Adrian Cockcroft’s Netflix lessons with statements like “it only works for ‘Unicorns’ like Netflix,” most have convinced themselves that the level of service-orientation that Amazon achieved is effectively impossible for them to replicate. Which is, to be fair, likely true absent the Damoclean incentive Bezos put in place at Amazon. What’s interesting, however, is that many of those same enterprises are likely headed towards increased levels of abstraction and service-orientation, whether they realize it or not.

The most obvious example of this trend at work is the unfortunately named (Mobile) Back-end-as-a-Service category of providers. From Built.io to Firebase to Kinvey to the dozen other providers in the space, one of the core value propositions is shortening the application development lifecycle by composing applications from a collection of services. Rather than building identity, location, and similar common services into the application from scratch, BaaS providers supply the necessary libraries to access externally hosted services. Which means that the application output of these providers is intrinsically service-oriented by design.
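The composition model these vendors are selling can be sketched in a few lines. To be clear, nothing below is a real provider SDK: the `IdentityService` and `LocationService` names and their methods are hypothetical stand-ins for the externally hosted services a BaaS client library would wrap, with in-memory data where a real implementation would make HTTP calls.

```python
# Hypothetical sketch of BaaS-style composition: the application owns no
# identity or location logic of its own; it delegates to hosted services
# behind small client interfaces.

class IdentityService:
    """Stand-in for a hosted identity API (e.g. token verification)."""
    def __init__(self, tokens):
        # A real SDK would call the provider over the network here.
        self._tokens = tokens

    def verify(self, token):
        # Returns a user id, or None if the token is unknown.
        return self._tokens.get(token)


class LocationService:
    """Stand-in for a hosted geolocation API."""
    def __init__(self, positions):
        self._positions = positions

    def locate(self, user_id):
        return self._positions.get(user_id, "unknown")


def nearby_greeting(token, identity, location):
    """Application code: composed entirely of service calls."""
    user = identity.verify(token)
    if user is None:
        return "please sign in"
    return f"hello {user}, you are near {location.locate(user)}"


identity = IdentityService({"tok-123": "alice"})
location = LocationService({"alice": "Portland"})
print(nearby_greeting("tok-123", identity, location))
```

The point of the pattern is the shape, not the stubs: because the application only ever touches service interfaces, swapping the in-memory stand-ins for a vendor’s hosted implementation changes nothing above those interfaces, which is why the output of a BaaS toolchain is service-oriented whether or not the developer ever says "SOA."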

Elsewhere, in the adjacent Platform-as-a-Service space, providers are essentially advancing the same concept. In building an application on Engine Yard or Heroku, for example, developers are not required to implement their own datastores or caching infrastructure, but rather may leverage them as services – whether that’s Hadoop, MongoDB, memcached, MySQL, PostgreSQL, Redis, or Riak. Even IBM is planning to make the bulk of its software catalog consumable as a service by the end of the year. Which is logical, because the differentiation for PaaS providers is likely to be above the platform itself, as it is in the open source operating system market.
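Concretely, on a PaaS like Heroku an attached datastore is typically exposed to the application as a connection URL in an environment variable (`DATABASE_URL`, `REDIS_URL`, and so on); the application consumes the service rather than operating it. A minimal standard-library sketch of that convention follows; the connection URL itself is made up for illustration.

```python
import os
from urllib.parse import urlparse

# PaaS convention: the platform injects the service's location into the
# environment. We seed a made-up value here so the sketch is self-contained.
os.environ.setdefault(
    "DATABASE_URL", "postgres://appuser:secret@db.example.com:5432/appdb"
)

# The application never configures or runs the datastore; it just parses
# the URL the platform handed it and connects.
parts = urlparse(os.environ["DATABASE_URL"])
config = {
    "host": parts.hostname,
    "port": parts.port,
    "user": parts.username,
    "password": parts.password,
    "dbname": parts.path.lstrip("/"),
}
print(config)
```

Everything operational about the datastore – provisioning, failover, upgrades – lives on the far side of that URL, which is exactly the service-orientation being described.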

Consider, on top of all of the above, the existing traction of traditional SaaS offerings, and the reality is that it’s getting harder to build applications that are not dependent in some way upon services. And for those applications that are not yet, vendors are likely to make it increasingly difficult to maintain that independence as they move into services as a hedge against macro-issues with the sales of standalone software.

There’s a reason, in other words, that micro-services are all the rage at the moment: services are how applications are being built today.

Author: "stephen o'grady" Tags: "Services, Software-as-a-Service"