The debate over whether SaaS requires a multi-tenant component has gone on for some time. Although strong arguments exist on both sides, I prefer to focus on an example that illustrates why, in my view, multi-tenancy is a must-have for SaaS.
No company launches a SaaS offering and then decides they do not want to maximize profits. Economies of scale play an important role in your ability to build an efficient product and become more profitable as you scale. A great example is a leading healthcare electronic medical records (EMR) company I worked with last year. Provided below is a snapshot of the current ASP model hosted within their own data center:
| Instances of Application | 1000 |
| Databases Under Management | 3000 |
| Time Required for Upgrades and Patches | 1000+ hours |
This company had the lofty goal of expanding from 1,000 hosted customers to nearly 10,000 in 12 months. Based on their estimates, even after custom work to reduce the per-customer footprint, their database count would grow to nearly 20,000 under management within 12 months. There also would not be enough hours in the year to upgrade all of their customers, given the timetables and processes in place.
They were at a crossroads, and needed to fundamentally change the way they do business. Keeping the status quo would become unprofitable, and continuing to host their ASP offering with a single-instance approach for each customer would ultimately become impossible. Customers wanted to rid themselves of costly infrastructure required to run their application suites on-premise, and were looking for a cloud solution.
In came Apprenda. The transition would not be easy. By introducing single-instance multi-tenancy, and some of the other great SaaS features the Apprenda platform provides, the new model for their hosted infrastructure looked like this:
| Instances of Application | 1 |
| Databases Under Management | 3 |
| Time Required for Upgrades and Patches | ~10 minutes |
This new scenario seems like a no-brainer. Even if they scaled to 10,000 customers by year end, the management numbers would stay constant, letting them scale the business seamlessly. The “elephant in the room” was the shift in the way they did business. Since all customers would now be using the same instance of the application, major per-tenant code customizations were out the window. This is not an Apprenda by-product; it is a change inherent to a true multi-tenant approach.
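To make the economics concrete, here is a minimal sketch, using illustrative numbers only (drawn from the tables above), of how the management footprint grows in each model as tenants are added:

```python
def single_tenant_footprint(tenants, dbs_per_tenant=3, upgrade_hours_per_instance=1.0):
    """Single-instance ASP model: one app instance and a set of databases per customer."""
    return {
        "app_instances": tenants,
        "databases": tenants * dbs_per_tenant,
        "upgrade_hours": tenants * upgrade_hours_per_instance,
    }

def multi_tenant_footprint(tenants, shared_dbs=3, upgrade_hours=0.2):
    """Multi-tenant model: one shared instance serves every customer, so the
    management footprint stays constant regardless of tenant count."""
    return {
        "app_instances": 1,
        "databases": shared_dbs,
        "upgrade_hours": upgrade_hours,
    }

# At 1,000 tenants the single-instance model already needs 3,000 databases;
# at 10,000 tenants the multi-tenant footprint is unchanged.
print(single_tenant_footprint(1000))
print(multi_tenant_footprint(10000))
```

The per-tenant rates here are placeholders; the point is the shape of the curves, linear versus flat, not the exact figures.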
What we did offer was the flexibility to give these major customers the choice to keep using a heavily customized version on-premises, or hosted at a premium (to compensate for management costs). Since the same code base was used both on-premises and on Apprenda, there was no need to build a new product.
Moving forward with new customers, they could offer the multi-tenant SaaS approach at lower cost and easily manage the operational and business aspects without the prior headaches. The economies of scale of the new SaaS business were astonishing. As they added more customers, the cost of goods sold actually decreased, since every customer utilized the same underlying infrastructure.
While I am sure the multi-tenancy debate will continue, this provides a real-world example of a company that could not continue to turn a profit unless it embraced multi-tenancy. In this case, a fundamental change in the way business was done was essential.
Over the past few years of working with hundreds of ISVs around the world, I have been privy to inside information from many leading companies and future market challengers. I have learned about the strategies and ambitions of many software vendors: where they are today and where they want to be tomorrow. One goal that tends to be common from vendor to vendor is SaaS as the end state. The true meaning of SaaS, or Software-as-a-Service, is as subjective as that of cloud. Ask ten different vendors how they define SaaS and you will most likely receive ten different definitions.
The truth though is SaaS is all or nothing.
To become a nimble, flexible, and competitive SaaS vendor requires major changes in the way your customers consume your application(s). Consumers are increasingly inclined to use software as an on-demand offering because the market is pushing them in that direction. Ridding themselves of the costly infrastructure and specialists required to run software is a big driver and is on the minds of many decision-makers, including CFOs. Like it or not, more and more of your customers are, or soon will be, looking for a SaaS offering. If it is not from you, they will find it elsewhere.
I have been involved in conversations and strategic discussions with some of the top names in software, some of the leading enterprises, and some of the future market leaders that you have never heard of yet. I do not consider myself a fortune-teller and have no crystal ball. However, I do have knowledge gathered over the years regarding the transition to SaaS that I feel is valuable to share. While the transition to SaaS is valuable and necessary, the time and resources needed to build in-house are often extremely underestimated. Therefore, I have compiled the top 6 issues that are often overlooked when companies look to build SaaS in-house and broken them out in the infographic below. These 6 issues are only the tip of the iceberg. If you are interested in learning more about transitioning your software business to SaaS, I recommend you check out our whitepaper, “Making the Move to SaaS.”
I recently wrote a piece that was published on Datamation that outlines 5 guidelines for implementing private PaaS in the Enterprise. Check it out here!
What do you think?
If you’d like to mingle with others in the Cloud space, the SaaSBlogs group on LinkedIn now has 4200+ members and is growing every day; make sure you’re not missing out and join today!
Lock-in is a problem as old as the software industry. In fact, there is arguably no way to avoid it entirely. If I choose a programming language, application server, or RDBMS, I end up writing apps that basically can’t run atop anything but that vendor’s (or community’s, in the case of open source) platform technology. Practically speaking, one makes a conscious choice to use that platform, knowing that lock-in is part of the game. I could spend time evaluating the tech, and assuming I did a decent amount of diligence, I would get enough comfort to move forward because the perceived benefits outweigh the lock-in risks. The reason I could make this judgment call is that outside of relatively infrequent releases, the evaluation I did today is likely to be valid tomorrow, because the underlying assumptions do not change that much over time. I would have little reason to believe that on a new release the vendor would drastically change the platform in a way severely out of whack with historical context, and I could even expect pricing to stay within a reasonable margin of what I had paid in the past. This “traditional” form of lock-in was very static in nature because of how fixed the underlying assumptions remained over time, and because of the ongoing contextual validity of the original decision to be locked in versus the current state of the platform.
Unfortunately, the negative potential of lock-in has increased in severity due to cloud, and more specifically, due to PaaS. Lock-in risk in PaaS is defined as the sum of two components: a static lock-in risk that is the same in nature (as described earlier) as the platforms of prior computing paradigms and dynamic lock-in risk that is dependent on a set of constantly changing assumptions. When one decides to build apps on a PaaS, they should be evaluating two things:
- How good is the ‘P’ part of PaaS – the platform itself? This is an evaluation of the actual runtime, APIs, frameworks, management capabilities, and engineering value I can extract when I write code targeting the platform. Additionally, how proprietary is all of this to the PaaS provider (i.e. how severe is the lock-in requirement to get access to this value? Is it just APIs, runtime behavior, etc. or a new, proprietary and non-standard language?)
- How good is the ‘S’ part of PaaS – that is, the service. This is an evaluation of the stability, cost, performance, and quality of the hosting capabilities of the PaaS.
Evaluating P is relatively easy because you can experiment with it and get a feel for the value and whether you are willing to trade that value for some lock-in. Additionally, P lock-in is much more static in nature, which means you likely don’t need to worry about significant changes in expectations. The S is scary. Hosting quality, service levels, performance, and price could be one thing today and something very different tomorrow. You might deploy to a PaaS and say “Wow, things are running so smoothly and fast, and the price is just right,” only to have the rug pulled out from under you without notice, or worse. If the P is high, non-standard lock-in and the S offers limited or no choice, you’re screwed.
Why does this happen? The short of it is that if a PaaS provider is the exclusive vendor of both P and S, any lock-in to P translates directly into lock-in at S. That is, my code can only run where my PaaS vendor tells me it can run. For most PaaS vendors, this means you can run it on the service provided by – wait for it – the PaaS vendor alone! This is some really dangerous stuff and a tremendous amount of business risk. Having mission-critical apps coupled to a hosting layer that needs to prove itself each minute of each day, and that has a very dynamic nature both in terms of quality and price, is not good. How do you protect against this? Some of us (Apprenda, CloudFoundry, and CumuLogic, among others) have decided that the P part of PaaS is the high-value layer, and that S should be commodity, flexible, and frankly irrelevant so long as it delivers on basic promises. This ‘Open PaaS’ model is one where, rather than having P coupled to a single S, you might have dozens or even hundreds of services that offer the platform, essentially giving the developer a choice of where to deploy apps while getting the full value of the platform: a “no lock-in” hosting model. This portable PaaS form factor is here to stay. Not only can one deploy to any service provider offering a (hopefully) differentiated instance of that particular PaaS, but the PaaS software is likely downloadable, so you can even run it on your laptop. This decoupling of the P from the S is what can bring lock-in down from the clouds (cheesy pun intended) and back to some sense of normalcy. Sure, you still need to commit to the PaaS, adopt its APIs, and grow dependent on its runtime capabilities, but as a customer of ours put it: “I’m OK committing to a platform; I’m not OK being locked into a service provider.”
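The decoupling argument can be sketched as a small interface exercise. The names below are invented for illustration; no real platform API is implied. The point is that when the platform contract (the P) is the thing an application targets, any number of services (the S) can implement it interchangeably:

```python
from abc import ABC, abstractmethod

class PlatformHost(ABC):
    """Hypothetical contract: the platform ('P') that any service ('S') can implement."""

    @abstractmethod
    def deploy(self, app_archive: bytes) -> str:
        ...

class HostedProvider(PlatformHost):
    """A public service provider running an instance of the platform."""
    def deploy(self, app_archive: bytes) -> str:
        return "deployed to a hosted instance of the platform"

class LaptopHost(PlatformHost):
    """The same platform software downloaded and run locally."""
    def deploy(self, app_archive: bytes) -> str:
        return "deployed to the platform running on your own machine"

def release(app_archive: bytes, host: PlatformHost) -> str:
    # The app commits only to the platform APIs, never to one service provider.
    return host.deploy(app_archive)

print(release(b"my-app-bits", HostedProvider()))
print(release(b"my-app-bits", LaptopHost()))
```

Swapping the `host` argument is the “no lock-in” hosting model in miniature: the platform dependency remains, but the service provider is a runtime choice.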
If you agree with the title, you probably already know where I’m going with this. If you don’t agree, I really hope to change your mind. There seems to be a misconception in the market that PaaS is dependent on the existence of IaaS. Typically we see “cloud stack” diagrams that look something like this:
I’ve even drawn this myself (and will continue to do so in the context of market education). The problem is that this diagram is intended to communicate a conceptual, “lay of the land” hierarchy that describes a cloud-only stack where all layers are dependent only on other cloud layers. Clearly, there are tons (dare I say even a majority) of SaaS vendors that do not fit this at all. Some have neither PaaS nor IaaS below them, and some have only IaaS below them. Some SaaS vendors didn’t have these options at their disposal when they built their SaaS offerings. If they were starting today, I would encourage them to use a PaaS layer for a number of reasons. PaaS is no different in that one could be built without depending on IaaS. The one difference between PaaS and SaaS, however, is that I wouldn’t encourage anyone to build a PaaS that explicitly depends on an IaaS layer.
PaaS is typically built in one of two ways:
- With infrastructure multi-tenancy, where each guest application is isolated from other guest applications by using virtual machines. Each application gets its own dedicated OS instances, so the PaaS isn’t really multi-tenant, but instead, relies on multi-tenancy one layer down. In this case, the PaaS is a fancy packaging system and VM template manager with some load balancer bells and whistles.
- With true PaaS multi-tenancy, where the PaaS manages pools of OS resources and co-locates applications on shared OS instances. This is typically achieved using something like process-level isolation to safely segregate different guest applications from one another. In this model, there is no inherent functional dependency on virtualization (IaaS or otherwise), because the PaaS does complex legwork to achieve its own higher-density multi-tenancy.
In architecture 1, there is an absolute dependency on either an IaaS or some hypervisor technology. For a variety of reasons, I do not consider this an optimal PaaS architecture. Architecture 2, which I do consider a real PaaS, is doing work inside of the OS rather than outside of the OS. It achieves multi-tenancy using a much different technique. This technique can be layered on any pool of OS resources, physical or virtual. Nothing about architecture 2 creates a hard dependency on virtualization or IaaS.
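As a toy illustration of the second architecture (this assumes nothing about the real Apprenda internals), the platform can co-locate guest apps on one shared OS by running each in its own process. A production platform would add OS-level controls such as dedicated users, cgroups, or job objects, but the process boundary alone shows isolation without any hypervisor:

```python
import subprocess
import sys

def launch_guest(app_code: str) -> subprocess.Popen:
    """Run a guest application as a separate OS process on a shared instance.

    The process boundary isolates tenants from one another; no virtual
    machine per tenant is involved.
    """
    return subprocess.Popen([sys.executable, "-c", app_code])

# Two 'tenants' share one OS instance; a crash in one cannot take down the other.
tenant_a = launch_guest("print('tenant A serving requests')")
tenant_b = launch_guest("raise RuntimeError('tenant B crashed')")
print(tenant_a.wait(), tenant_b.wait())  # tenant A exits 0; tenant B exits nonzero
```

The platform, not a hypervisor, owns placement: it decides which OS instance hosts which guest process, which is exactly why this design works identically on physical or virtual machines.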
If a PaaS is built using architecture 2 (which is what we’ve done with the Apprenda PaaS), it should purposely avoid acknowledging the existence of a hypervisor technology. In fact, it should assume that everything is a standard OS instance. This does three things:
- It promotes implementation behavior that prevents the PaaS from being locked into underlying expectations
- It ensures that the PaaS provider writes an appropriate systems architecture so that interaction with an IaaS/hypervisor layer can be bolted on as an optimization to ease management of the PaaS. For example, if I am operating the PaaS and I need to provision more capacity on the fly, it would be nice if the PaaS could detect it is running on EC2 and spawn new VMs to include as PaaS capacity, or do the same if it is running in a private cloud environment atop of virtual machines on <insert your favorite hypervisor here>.
- It allows a PaaS to be deployed in a reduced-complexity, lower-cost environment where no virtualization exists. Suppose that 10 years from now an enterprise or public PaaS provider achieves 100% penetration (all new apps going forward run only on the PaaS); why keep virtualization around? The apps don’t care about virtualization, and the PaaS provides the elasticity and insulation from the effects of the underlying hardware, just at a new layer.
The end result is a highly flexible PaaS environment that can be run on any environment (on-premises or in the cloud), and that can still provide integration tooling to optimize deployments on environments where the PaaS has the knowledge and ability to do so.
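The “bolt-on optimization” idea reduces to capability detection. The sketch below uses invented names; it only shows the shape of the design: the platform treats every node as a plain OS instance by default, and gains auto-provisioning only if it detects an IaaS underneath:

```python
from typing import Optional, Protocol

class CapacityProvider(Protocol):
    """Optional integration point: anything that can provision a new node."""
    def provision_node(self) -> str: ...

class Ec2Provider:
    """Illustrative adapter for when the platform detects it is on EC2."""
    def provision_node(self) -> str:
        return "spawned a new VM and joined it to the platform as capacity"

def detect_iaas() -> Optional[CapacityProvider]:
    # A real platform might probe cloud metadata endpoints here.
    # The default assumption is 'no IaaS': just a pool of plain OS instances.
    return None

def add_capacity(provider: Optional[CapacityProvider]) -> str:
    if provider is None:
        # No IaaS detected: capacity is added by an operator attaching a server.
        return "notify operator: attach another OS instance manually"
    # IaaS detected: the same operation becomes an automated optimization.
    return provider.provision_node()

print(add_capacity(detect_iaas()))
print(add_capacity(Ec2Provider()))
```

Note the dependency direction: the core `add_capacity` path works with no provider at all, and the IaaS adapter is strictly additive, which is the opposite of architecture 1, where the hypervisor is load-bearing.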
At CloudExpo 2011 in Santa Clara, there was an expert panel on PaaS that answered a question from the moderator implying that IaaS/virtualization had to exist in order for PaaS to exist. I decided to ask whether the panelists felt PaaS requires the existence of virtualization, and one of my favorite Cloud evangelists, James Watters from CloudFoundry, had a good answer, which I will paraphrase here: “no, but the dominant design principle applies and virtualization has enough inertia that it will be an intimate part of PaaS anyway”. This doesn’t mean that PaaS is dependent on IaaS/virtualization from an implementation point of view, and PaaS providers who have built that explicit dependency are either using virtualization for multi-tenancy or unnecessarily kneecapping flexibility. For the customers’ sake, leave virtualization down a layer, and stop letting it become a leaky abstraction beneath what should be a cleaner systems architecture. If you build PaaS without a dependency on virtualization, you can then introduce integrations to IaaS layers as an optimization rather than a requirement. No one likes lock-in, particularly in the context of an operating layer.
One-man bands are impressive; typically able to play harmonica, guitar, drums, and a bunch of other instruments all at the same time with some meaningful composure but, unfortunately, with very little harmony. Despite this seemingly impressive feat, you never see a one-man band on the Billboard Top 100 – never (and John Popper doesn’t count). The reason for this is simple: one-man bands are good at getting attention on the side of the street, but they simply don’t make good music, let alone music that can be compared to that of a band of specially trained musicians. Being an expert at say, the guitar, requires that you dedicate every ounce of training energy into being a master of that one instrument. It also means that either as part of a band or solo, you can probably make the Billboard Top 100. This level of depth and dedication means that you won’t be relegated to a sideshow alongside the likes of that guy that pours molten lead into his mouth and spits out a cooled, solid lead slug; you have the opportunity to become a music legend. Certainly, Eric Clapton put in much more effort in learning to play guitar than every one-man band guitarist out there, and probably more than a few of them combined.
Building a PaaS is no different than making music. You can one-man band the hell out of programming languages and stack runtimes, but you’re just going to be making noise that sounds OK and gets eyeballs on the street. What you’ve done doesn’t have durable, legendary potential. It seems that lately, every PaaS provider wants to add as many languages as possible as quickly as possible. A week doesn’t go by without a press release written in the stock format “<Insert Favorite PaaS Here> introduces game-changing, atom-smashing, world class support for <Insert Whatever Language the Other PaaS Said They Support Here>”. This is a recipe for disaster both for the market in general and for customers. There is an easy framework for understanding why this is bad business, and it all starts with being a real PaaS rather than some heavily diluted manifestation of PaaS.
Building a PaaS the right way means that you’ve either:
- Invented a brand new language and execution runtime, or
- Extended an existing runtime (JVM, CLR, etc.) into a new, higher-order runtime and systems architecture
In either case, the PaaS is doing some intimate, lower-level heavy lifting and likely building a host of “base services” that are highly dependent on the language and/or underlying runtime. Those base services are exposed as APIs, are part of provided frameworks, or are capabilities instrumented into guest applications of the PaaS. Building all of this is non-trivial and takes a long time, but it provides extremely rich, next-generation value to customers. This is the true meaning of “supporting” a language.
Much like the one-man band, when a PaaS starts laying claim to supporting its 37th language, it is juggling too many things to properly be an expert at any one, and its version of “I’m good at the guitar” is not Eric Clapton’s version of the same statement. This redefinition and implied level of expertise is a good parlor trick, but it can’t stand the test of time. PaaS providers achieve it by defining “support” as deploying and loosely managing apps of the newfangled language or runtime type, without providing any uniquely differentiated, high-quality value. When a PaaS provider says it supports a new language, it likely means it can move some pile of indistinguishable bits to a new underlying target application server and issue a “Website Start” command to whatever that target is (e.g., Tomcat or IIS). If that’s “next generation value,” then I’m in the wrong industry.
The huge downside to this multi-language craziness is that it creates a tremendous amount of market noise and distraction, and robs customers of time and value. Up front, a customer gets excited when they see “support” for so many languages. Some customers might not yet understand what the real potential and long-term value of PaaS is. They’ll be satisfied with that PaaS, at least for a while. Once they start building more complex cloud apps and realize that the PaaS does nothing to help them, because it isn’t a runtime, they walk away disillusioned. Customers with more up-front sophistication see this “support” as woefully inadequate out of the gate and begin to get frustrated with PaaS as a concept in general.
A PaaS releasing support for language after language is a bad smell. It typically (I can’t claim always) means that they don’t have a sufficiently valuable piece of machinery under the hood, and they are likely not offering a runtime model. If they did, supporting the N+1th language would be hard work, and I would hypothesize that each N+1th language would take more time to support than the Nth language (rationale is fodder for another post). Now, to temper my sentiment, I do believe it is possible to support multiple languages/runtimes, but it has to be done correctly. The result is likely being able to support a small number of languages, rather than all of them.
If you’re an observer, practitioner, or customer in the PaaS space, don’t get distracted by the one-man band on the side of the street while walking to the rock concert – that would be a terrible reason to reach the ticket counter and hear the words “Sorry, we’re sold out.” If you’re a PaaS vendor, please, do it the right way and stop being distracted by shiny objects: support very few languages very well, rather than all languages very poorly. And remember, you might hire a one-man band to work your 8-year-old’s birthday party, but people pay lots of money and wear their finest clothes and jewelry for one night out at the symphony.
Platform as a service (PaaS) has been hijacked. PaaS was supposed to be a critical bulwark protecting cloud computing from the frail models of earlier computing eras, but to many it has become a dumping ground for any tooling that helps with some part of managing and deploying applications on cloud infrastructure. It will take some time to explain, hence the long post. I started thinking about this after reading posts by two cloud evangelists for whom I have tremendous respect and who were venting their frustrations with the PaaS space: Krishnan Subramanian and Ben Kepes. They published posts titled “AppFog Announces Java Support – What The Heck Is Happening In PaaS Space?” and “OpenLogic Announces General Availability of CloudSwing PaaS“, respectively, that delivered well-deserved jabs at the PaaS space. Why were the jabs “well deserved”? Because PaaS is littered with “me too” undifferentiated offerings that focus on “user experience” and “deploying apps”, that’s why. I’m less interested in claiming that any given vendor suffers from this, and more interested in discussing these issues in PaaS at large. To understand this sentiment, we need to spend some time defining what PaaS should be and what sort of trap the market is falling into.
To define PaaS, let’s first define an arbitrary collection of “traditional” Operating System (OS) instances on real or emulated hardware, plus any networking equipment associated with those OS instances, as a resource pool. A PaaS can best be thought of as a computing platform that acts as a host to “guest” applications and that is layered atop a resource pool such that the details of individual members of that pool are abstracted away and presented as a single logical layer to those guest applications. Guest applications are not privy to, bound to, or dependent on the details of components beneath the PaaS layer, thereby creating a rigid boundary. This abstraction reduces the amount of direct control an application has over its host environment and, in exchange, provides certain functional guarantees and systems services. Furthermore, it could be argued that this computing platform should be accessible to the developers who write guest applications in a self-service or near self-service way. Up until this point, I think nearly anyone who considers themselves a student of cloud computing would agree. After this point, things seem to fall apart since:
- No one agrees on what PaaS should provide
- Those who do agree on what PaaS should provide tend to cluster around certain functional capabilities, where the clustering *seems* (I am not certain of this yet) to be motivated not by ideology but by a market-wide “easy win” mentality
- Creativity is hampered because other cloud models such as IaaS distract focus from making revolutionary progress on the more promising end game that is PaaS
The natural question anyone would have is “Sinclair, so what are the features and functions that define PaaS beyond the systems definition you gave earlier?” I’ll do my best to provide some direction, and then circle back to the title of this post. As I described earlier, PaaS defines a layer that sits above a pool of OS/network resources, but below guest applications, in such a way that details and control are abstracted away by the PaaS. A corollary is that, at a bare minimum, a PaaS *must* allow developers to deploy and manage applications in spite of that loss of control and detail. This should not be looked at as a value of PaaS, but rather a requirement to achieve some level of fidelity with the pre-PaaS workflows that were part tooling, part developer manual labor, and part luck. No PaaS vendor should be proud of this achievement - it’s the scholarly equivalent of getting a ‘D’ on a test and trying to spin the fact that you barely passed as “100% success at not failing.” Granted, that ‘D’ is worth something: a minimally implemented PaaS at least takes tedious, manual, time-intensive, error-prone application deployment and management processes and automates them in a way that is generally flawless. Things like deploying and scaling apps become trivial button-push operations on PaaS offerings, pushing application bits to servers and updating load balancers, among other minor things. Nonetheless, this is not the revolutionary model that cloud computing has promised; at least, it’s not what I consider “insanely great” and certainly wasn’t the model I envisioned as world-changing when I co-founded Apprenda.
Going back to the title of this post, all PaaS’ are NOT created equal. Most “PaaS” offerings’ feature sets amount to the basics I described earlier. You see, PaaS vendors took the low friction approach of building “platforms” that merely stand up stack instances for apps being deployed through them (notice I didn’t say to them; these PaaS offerings do not act as containers to guest applications, but rather as fancy networked installers) and deal with some network component setup. In fact, some do as little as templating VMs with frameworks and stack components so that apps can be deployed with dependencies in place. This has short term sizzle, but in the long term does not provide any meaningful world changing value. What we have is a categorical skewing that sets up a market to conceptually think of something grander when they hear PaaS, only to find something less interesting.
So why does a runtime need to exist, anyway? First, let’s level set on one thing: one of the major goals of cloud is to abstract away details of underlying resources. Period. Cloud forces an application-centric model (thanks goes to James Urquhart for articulating this), and as such, it needs to remove distractions created by irrelevant, low-level details. Much like VM-based runtime paradigms such as the JVM and CLR got rid of the need to deal with which integers were stored in which CPU registers, cloud should get rid of the need to worry about execution-time details of OSes, network routing, etc. Cloud has done two things: it has raised the ceiling for potential application architecture complexity AND opened the door to new types of previously unavailable value. Without going through a formal proof of sorts, it should be obvious that a PaaS that merely pushes bits onto servers and updates load balancers is incapable of supporting the execution needs of applications that must deal with abstract resource concepts and new levels of architectural complexity arising from network-distributed resources. It would be akin to thinking that a fancy C++ library could provide pari passu value to the JVM: it simply couldn’t. As a cloud application executes, it needs an underlying active system that is doing constant heavy lifting to make magic happen, not just waiting on the sidelines for the next app or infrastructure management event. A great PaaS will:
- Provide self-service access to developers
- Provide application management tools and capabilities
- Use a runtime model that sits underneath an application as it executes, but above the infrastructure with the intent of abstracting away non-strategic details
- Provide frameworks and APIs to access higher-order value
- Manage the infrastructure in a way that shields the application from as much impact as possible
The last piece of this book-of-a-post is helping identify what a PaaS runtime looks like. An anchoring theme for a PaaS runtime is active participation in the execution of guest application code. As such, you will typically see non-trivial PaaS offerings do some number of the following things:
- Implicit participation in application messaging at various tiers of typical application architectures. Additionally, the PaaS may provide declarative or imperative control to guest applications over application messaging.
- Use “resident code” to instrument behavior into executing application code. Much like traditional runtimes forcibly load libraries into your memory space, PaaS runtimes may load code and data that co-habit your application’s memory space.
- May use code generation to augment guest applications in a way that is orthogonal to the guest application’s functions, but in alignment with the PaaS runtime’s execution needs. This might be tough to understand without an example, and the only example I know of is what our Apprenda PaaS does. To simplify, consider web services deployed to the Apprenda PaaS. REST and SOAP web services go through a compile phase when they hit our hosting containers. This compile phase adds control endpoints for direct P2P-style communication with any instance of that SOAP or REST service, allowing the “node” to participate in direct control calls from the runtime fabric. The end result is that each node is an independent peer entity on the distributed system that is Apprenda, but is also “stitched in” to fabric behavior.
- Activate behavior autonomously based on what guest applications are doing. For example, on an application crash, detecting the actual error (where possible) and acting on it would be an example of this sort of autonomic management.
- Provide APIs and frameworks to the guest applications that trivialize access to complex architecture value. Think publish/subscribe, map/reduce, and message brokering models.
- Might influence or modify how requests and even threads behave in the application as the application executes.
- Provide general purpose APIs for runtime services from the platform.
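One hypothetical illustration of the “resident code” and autonomic-behavior ideas above (names invented for the sketch, not any real platform API): a runtime can wrap each guest request handler so that the platform participates in every call, catching crashes and reacting to them instead of leaving the app to fail silently:

```python
import functools

def platform_instrumented(handler):
    """Wrap a guest handler so the runtime sees every request.

    This mimics how a PaaS runtime can inject behavior (metering,
    crash detection, routing hooks) into guest code it hosts.
    """
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        try:
            return handler(*args, **kwargs)
        except Exception as exc:
            # Autonomic reaction: record the fault and return a controlled error
            # instead of letting the guest application's crash propagate raw.
            fault = f"runtime caught {type(exc).__name__}: {exc}"
            return {"status": 500, "detail": fault}
    return wrapper

@platform_instrumented
def guest_handler(order_id):
    """A guest application's request handler, unaware of the runtime around it."""
    if order_id < 0:
        raise ValueError("bad order id")
    return {"status": 200, "order": order_id}

print(guest_handler(7))    # normal path passes straight through
print(guest_handler(-1))   # the runtime intercepts the crash
```

A real runtime would do this at a lower level (load-time instrumentation or generated wrappers rather than a decorator), but the effect is the same: the platform is active in the execution path, not just in deployment.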
This list is definitely not exhaustive, but you get the picture. This is the stuff that makes cloud runtimes real and interesting, and this is what differentiates some PaaS offerings as “insanely great” versus others that, while not bad, don’t provide this level of value. This is the stuff the future is made of, and hopefully the market evolves quickly enough that Subramanian and Kepes can both agree that innovation and value abound.
Next week I’ll be travelling to Las Vegas to present at VMworld 2011. The session will focus on a reference architecture for auto-scaling the Apprenda Platform using vCloud Director and other VMware technologies while keeping your applications running without disruption. In the fast-evolving space of cloud computing, applications are becoming harder to develop and manage, yet expectations of uptime and reliability are higher than ever, so taking advantage of a PaaS (Platform as a Service) layer like Apprenda’s can help enterprise IT and ISVs excel in the cloud.
VMworld: August 29 – September 1
Title: Reference Architecture to Autoscale an Instance of Apprenda’s Application Fabric for .NET through VMware vCloud Director
Registration: Content Catalog
- Understanding Private PaaS Benefits
- Introduction to Apprenda’s PaaS
- Reference Architecture Breakdown
- Takeaway and Summary
Hope to see you all there, but if you are not coming, don’t miss the action: follow me on Twitter at @asultan.
It wasn’t curiosity. It was thinking that the “good old” ASP model would cut it. It was thinking that little by little, they’d get away from labor intensive provisioning, manual billing, and some day refactor the Rube Goldberg contraption that made their “hosted” model work. Sound familiar?
Software as a Service has quickly become the preferred method of application delivery and consumption. Why is it that while many ISVs claim to provide some flavor of SaaS today, few are doing it with the same cost-of-delivery profile and operational agility as SaaS leaders like Salesforce.com or Taleo?
Join Apprenda and Savvis on August 9th at 1:30PM EST for a webinar covering the most dangerous pitfalls that ISVs fall into time and time again. You’ll learn:
- The unprofitable truth about the ASP model
- Why multi-tenant infrastructure isn’t enough
- The real-world economics of SaaS leaders and laggards
- How to avoid building dozens of custom SaaS operations systems
- Key business and technical pitfalls when making an infrastructure choice
You’ll come away with everything you need to either “save the ship” or leapfrog your competitors with a SaaS strategy that rivals the best of the best.
Date: Tuesday, August 9, 2011
Time: 1:30 PM – 2:30 PM EDT
We hope you’ll join us!
The private/public cloud discussion is something that has been getting lots of attention. I think one of the big issues is that different perspectives are causing people to speak differently about the same thing, and confusion around what public vs. private really is (e.g., what is a private PaaS running on public IaaS?) adds fuel to the fire. Maybe a parable can help. One of my favorite (albeit overused) parables is that of the blind men and an elephant. The story is as follows: four blind men discover an elephant for the first time. Since they have never encountered an elephant before, they each touch the part of the elephant that they are closest to, feeling about so that they can assess what it is they are touching. One man grabs the elephant’s trunk, and determines that he is clearly touching a snake. Another grabs the tail, and concludes that it is a rope. The third, wrapping his arms around the leg, announces that it is clearly a tree. Last but not least, the fourth blind man runs his hands along the elephant’s side and determines that they have discovered a wall.
Clearly, they’re each interacting with the elephant, but are also each describing the elephant (based on their individual perspectives) in drastically different ways.
This is extremely relevant to today’s cloud computing discussion. Essentially, the problem we have is that cloud computing is our elephant, and although we are each interacting with cloud computing, we are suffering from the same problem as the four men in the parable; our “perspective blindness” causes us to base our understanding only on what we are capable of assessing, thereby failing to see cloud computing as a whole. This was extremely apparent after I wrote a post for GigaOM. Both in comments and on Twitter, I had people supporting and rejecting the notion of cloud technologies transplanted into private settings (such as deploying private IaaS or private PaaS). Some argued that public cloud provided X, Y, and Z while private didn’t. At the end of the day, both public and private cloud can provide significant value to the various constituents in enterprise IT. To understand why private cloud might make sense to some and not others, we need to understand the relationships between the various parties in enterprise IT in the context of cloud computing:
Granted, the diagram doesn’t capture the finer grained details of the relationships, but it does define a framework to work off of. Each constituent in enterprise IT has a “primary” role when considering the application as the first class citizen of the system:
- Application Development
- Infrastructure Management
Each constituent is analogous to one of the blind men touching the elephant that is cloud computing. Let’s look at this more specifically. If you are a business end user that needs to source a certain application X, you can either a) use a solution that the developers within your organization have built on some private cloud or b) use an external SaaS provider (which apparently is a growing trend in IT). From the business end user’s perspective, they don’t care if an app is running in a private cloud or public cloud, or is a custom solution coming from LOB developers within the enterprise or an external SaaS provider. They need an app for some specific business function to create value and solve problems. We can find lots of tangential reasoning as to why they should use public cloud only, but from a practical perspective, they just need to know that whatever app they are using fulfills all of their requirements (and constraints). If the app is provided by their own LOB developers and is deployed to the private cloud, that’s fine. If they are using a public SaaS app, that’s fine too. Some apps might provide additional value in a public form factor, but it shouldn’t be the deciding factor.
Similarly, from the perspective of enterprise developers who build custom apps within the enterprise, they want to know if they can quickly and easily deploy their apps and get functionality from an underlying stack. The current IT model is broken: each dev team builds an app a different way, and when it’s time to deploy, they contact IT and wait 30-60 days for a server to be provisioned, they manually configure the app, and they manually deal with the app’s lifecycle using the same cumbersome approach that was used to get it up and running. Cloud (particularly PaaS) presents a very agile model with blazing fast time to market, advanced architecture qualities, and little friction. Developers are simply looking to get past this antiquated approach; they might want to deploy to a public PaaS like Azure, but if central IT offered a private PaaS, they might prefer that because it’s less friction in terms of adoption, and from their perspective, they get the same major benefits: upload and deploy apps at a button click, get scalability for free, and get other benefits like distributed caching as services. From the developer’s perspective, they’d get what they need, regardless of the form factor. It gets even more difficult to parse if you consider that central IT could deploy a private PaaS to either their own private cloud, or to a public IaaS provider like EC2. Is that a public or private PaaS? Clearly, the IT team owns the Amazon account and is deploying their own PaaS layer to public infrastructure. From the developer’s perspective, it’s a private PaaS (as in owned by their organization); from central IT’s point of view, they are using public cloud. For me, this is a very real topological distinction since our customers can license SaaSGrid and deploy it to their own bare metal infrastructure and get a private PaaS, on their private virtualization cloud and get a private PaaS, or on something like EC2 and expose a PaaS to their internal developers.
We don’t necessarily care, but the lines are clearly blurry as to which is “more public”:
- Private PaaS on Private Cloud
- Private PaaS on Public IaaS
- Public PaaS
Is number 2 more public or more private? This highlights the absurdity of the debate that continues to come up in nearly every public/private conversation. At the end of the day: who cares? If the end users in an organization want a specific offering provided in a SaaS fashion: great. If they use an offering built by their LOB developers and deployed to their private PaaS on their private cloud, or their private PaaS deployed on public IaaS, they’d be just as happy.
In some instances, public cloud makes the most sense; in others, private cloud. Sometimes the real costs of adopting public cloud severely impact its ROI, so private cloud gives some of the benefits at lower cost, making private cloud ROI higher. In other cases, public cloud can be adopted with little friction and is the way to go. I think the sooner we recognize that the fierce debate over whether the elephant is a snake, a rope, a wall, or a tree is silly, and accept that our perspectives might influence our judgment, the sooner we can all focus on extracting value from cloud tech.
Do you agree that both public and private cloud are relevant? Are you passionate that cloud is public only or nothing? If so, why?
I recently wrote a piece that was published on GigaOM that uses a bus (as in vehicle) analogy to try and establish a conceptual framework for the public vs. private cloud discussion. Check it out here!
I like Phil Wainewright of ZDNet’s take on the Amazon incident last week. I like it partially because his post-chaos “seven lessons to learn from Amazon’s outage” reads oddly similar to my recent Transparency in the Cloud series. Mostly, though, I appreciate Phil’s approach because all of his lessons are targeted at the cloud services consumer.
Without a doubt, the blame for last week’s outage falls squarely on Amazon’s shoulders, but the responsibility to understand risk and adjust appropriately is in the hands of those that choose to use cloud-based services either to consume services, or to provide services to other companies.
One thing about centralizing services in the cloud is that it makes everything extremely visible. What was traditionally hidden behind private firewalls is now out in the open for everyone to see. I have a hunch that the aggregate downtime and subsequent hours spent by private IT departments on a regular basis far exceeds even the four days it took Amazon to clean up after this incident.
Which brings me to my point: There are two things that cost money during outages – 1) lost business, and 2) resolution efforts.
Lost business is a sunk cost, whether the outage is with your website hosting company, your cloud services provider, or inside your own private datacenter. The mere fact that you are in an outage situation means you are losing business. The difference is that in a private outage situation, you are also compounding the loss by shelling out the dollars to fix the problem. In this case, it was Amazon’s software (and/or hardware) that failed, let them foot the cleanup bill (and they happily will)!
Regarding the anti-cloud sentiment that seems to be pouring from mainstream outlets, I’ll borrow from CNN’s sensationalistic metaphor. Did we stop building ocean liners after the Titanic sank? Did everybody start crossing the Atlantic in tiny dinghies because they didn’t trust larger ships?
Smart people who specialized in the things that went wrong with the Titanic figured out how to make those things better and prevent disaster from happening again. Given that we all know there’s inherent value in cloud services, instead of throwing around anti-cloud sentiment, let’s all focus our energy on helping cloud providers like Amazon get better at it. We can do that by learning and applying our own lessons, and contributing our own technologies and talents.
Let’s face it. There’s a lot of hype around “the cloud.” Lots of promises, lots of claims, lots of vendors, and lots of lackluster results. All the while, software engineers and architects are getting sick of it.
If you’re a software engineer or architect, what does the cloud do for you? It’s elastic and infinitely scalable, so you just put your app up there and everything magically works, right? The cloud solves all of your scalability challenges, all of your app delivery challenges, and it just plain works, right?
Wrong. You’re the one responsible for building the software that the cloud exists to host and deliver, and you know full well that there’s a lot more to it than that.
What about onboarding new customers or business units to your app? The individual end-users – how do they get access, and to what are they entitled? What about charging for different features, or different transactions? What about managing the application lifecycle and rolling out updates? What about the underlying architecture needed to use the cloud in an intelligent way, to actually take advantage of the raw compute power at your disposal, rather than just throwing your app online like it’s the late ’90s again?
These are the types of things that software engineers and architects are thinking about.
Haven’t we been through this before?
There are many significant engineering challenges associated with building and delivering applications today. This is very similar to when we first started developing applications for the desktop PC. Back then, everyone wrote code to manage memory, to interface with specific hardware, etc. Then the desktop OS came along, and made all of that complex and time consuming (but CRITICAL) work a thing of the past.
While the challenges themselves were different, they were still challenges that were specific to the delivery method, rather than challenges associated with building the actual software functionality. Those challenges will always be there, because the passion to innovate and develop applications that help facilitate better business performance, and meet the needs of end users is what drives great engineers/organizations. HOWEVER, the challenges associated with the delivery method/paradigm go away in time, as layers of abstraction come about to solve those problems for us.
The “Cure for the Common Cloud” is Here
Now let’s get back to today. Shouldn’t we expect that all of these challenges associated with building and delivering next generation software applications in this new cloud era will become a thing of the past? Won’t we be able to focus on building great software again, and not worry about all of the complexities of the delivery method? Someday? Maybe?
Yes. We can today!
A large and increasing number of organizations and developers have discovered the “Cure for the Common Cloud”. They’ve found the abstraction layer that handles all of the complex engineering challenges associated with building and delivering applications today, and truly leveraging private or public cloud infrastructure in an intelligent way. They’ve found the one technology that decouples apps from infrastructure, developers from IT/Operations, and business execution from IT implementation.
The Cure for the Common Cloud is here. Do you have it?
Continuing on the theme of Parts I and II, this article provides two more questions that all consumers of B2B SaaS should ask their providers. While the questions in Part II were centered around providers’ ability to maintain speedy and reliable service, the remaining questions focus on their ability to be flexible in pricing and customization of their offering.
Question #1: How flexible is your pricing?
Ah yes, the custom contract. Dangling off the salesman’s tool belt since the dawn of selling. In the land of packaged software licensing, sales folks worked off of sales models that allowed them to play whack-a-mole until they had an offer suitable to win your business. In the subscription-based access world of online services, the process isn’t so simple. Oftentimes, the ability of a salesperson to meet your financial requirements in both pricing and billing terms is tethered to the service’s ability to meet your business needs. I know, it sounds silly, but allow me to use a real-world example that I have been telling people about for years.
In the book Founders at Work, 37Signals partner and creator of Ruby on Rails David Heinemeier Hansson dishes on the early days of 37Signals. Particularly of interest here is his story about how they had programmed the flagship service to bill customers on a yearly basis. This meant that upon signup, a new customer would be invoiced for their first year of service, gain access to the application, and subsequently be billed annually for renewed access. Just a few days before the launch of said application, their bank (read: creditor) notified them that due to lack of credit history, they were not allowed to bill on elongated terms. They had to collect money from customers more frequently. They delayed the launch for roughly a month while they re-engineered the application’s billing systems to allow for monthly billing cycles. Now, the 37Signals team is a smart team, and they were able to skirt disaster and launch an incredibly successful software as a service business.
The point of re-telling this story is not to warn service providers, but to motivate end-users to ask questions about how flexible a provider can be with their pricing and billing. Understand what your provider(s) can and cannot do with respect to charging you money. Learn about how the provider actually bills you. In a world where we’ve become accustomed to simply checking our bank accounts for charges by PayPal, make sure that your provider(s) have a way to work with you during good and bad times, and the many permutations of financial standing that happen in between.
Question #2: What if I don’t need everything you have to offer?
If you’re thinking that this sounds like the same question as #1, you caught me. Almost. Feature granularity is certainly a conversation about flexibility, but from our end-user perspective it’s not the same topic as pricing and/or billing. What I’m talking about here is flexibility in application functionality. It used to be that the license you purchased dictated the features you saw; the license literally contained the knobs and switches that controlled application features. Subscription-based software as a service might mean that a provider has to give up cutting custom license terms (ok, ok, see #1). In that event, you want to be sure that the provider hasn’t tossed those knobs and switches along with their licensing mechanism. Can the service be tuned to suit your needs? Can it be tuned to suit others’ needs without affecting your experience? Can pieces of functionality be turned on in a pinch? Can functionality be hidden from some users and not others? How quickly can new functionality be enabled?
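Those knobs and switches typically survive in SaaS as per-tenant feature flags: one shared application instance whose behavior is tuned per customer. A minimal sketch, with hypothetical tenant names and features:

```python
# Per-tenant feature flags: a single application instance, tuned per
# tenant without per-customer code branches. Tenants and features here
# are illustrative, not from any real product.
TENANT_FLAGS = {
    "acme":   {"advanced_reports": True,  "bulk_export": False},
    "globex": {"advanced_reports": False, "bulk_export": True},
}

DEFAULTS = {"advanced_reports": False, "bulk_export": False}

def feature_enabled(tenant, feature):
    """Tenant-specific override wins; otherwise fall back to defaults."""
    return TENANT_FLAGS.get(tenant, {}).get(feature, DEFAULTS.get(feature, False))

def render_nav(tenant):
    """Build a navigation menu that reflects the tenant's enabled features."""
    items = ["Dashboard"]
    if feature_enabled(tenant, "advanced_reports"):
        items.append("Advanced Reports")
    if feature_enabled(tenant, "bulk_export"):
        items.append("Bulk Export")
    return items
```

Flipping a flag answers the "turned on in a pinch" question: no deployment, no custom build, just a configuration change scoped to one tenant.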
This line of questioning serves only to avoid the dreaded “it’s not you, it’s me” conversation with a service provider who just can’t meet your needs. At least if you ask the questions ahead of time, you enter into the union knowing precisely what the service has to offer, and whether or not it can change.
So here we are, the software as a service end-users, ready to go forth and subscribe. I hope you feel a bit more empowered as you venture into the world of service-based software. Armed with the right questions, we have the power to shape how this all happens and get what we want out of the software companies-turned-service providers. After all, it’s in our best interest to help make every provider we encounter a superior deliverer of software. And don’t worry, they’re more scared of it than we are.
In Part I, we recognized that end-users of B2B software are in a unique position to drive the way internet-based services are delivered. This is important, because the data shows that most of us are going to at least start to transform the way we consume software in our businesses within the next couple of years.
We all know that answers lead to knowledge, and knowledge leads to power (Your first Jedi allusion of the week!). So how do we, as end-users, get the upper hand when it comes to software-as-a-service? We do this by asking the proper questions. To get you started, here are the first three questions you should ask every SaaS provider you encounter:
Question #1: How does your service scale?
Historically, software companies needed only to worry about how well their software could handle the load within the walls of the largest customer they had. For most deals, they could get away with telling us that the software is used by a token Fortune 500 company that is 4 gazillion times bigger than us. We’d extrapolate from numbers well above our heads and inevitably say “Well, if it works for them, it must work for smaller companies like mine.” On the contrary, in the software as a service world, the logic often works in reverse. Remember that our software vendors are now responsible for mustering the ping, power, and pipe to provide service to all of their customers as one large entity. In the aforementioned scenario, that makes most of us small fish in a big pond. If a service provider doesn’t have a viable approach to scalability, which means more than just “throwing servers at it,” then they are going to experience something called “buckling.” When a service provider experiences buckling, and their service slows to a crawl, or stops completely, it makes the fact that they just landed another 10,000-seat Fortune 500 deal not so impressive to the rest of us.
Ask them how they scale. Do they provision a new server for every customer? (if they do, say “welcome to 1999,” and then run.) Do they provision a new VM for every customer? If so, how many VMs of their particular software footprint can fit on a single physical server in their datacenter? Have they built scaling mechanics into their application or are they relying purely on infrastructure growth? All of these things play into that “buckling” thing I mentioned, and often times you can gauge if and when a provider will buckle simply by knowing the answers to these scalability questions. For more on this, see economies of scale.
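The back-of-the-envelope math behind those questions is worth making explicit. A minimal sketch, with purely assumed capacity numbers (instances-per-server density varies wildly in practice):

```python
import math

def servers_needed(customers, instances_per_customer, instances_per_server):
    """Capacity math for the scaling question: total app instances
    divided by how many instances fit on one server, rounded up."""
    total_instances = customers * instances_per_customer
    return math.ceil(total_instances / instances_per_server)

# Single-instance-per-customer model: 10,000 customers, one instance
# each, assuming (hypothetically) 20 instances fit per server.
asp = servers_needed(10_000, 1, 20)      # 500 servers

# Multi-tenant model: all customers share a handful of instances, so
# the baseline footprint is tiny and servers grow with load instead
# of customer count. Assume 3 shared instances on one server.
multi_tenant = servers_needed(1, 3, 3)   # 1 server, plus load-based growth
```

The provider that provisions per customer scales its footprint, its patching burden, and its "buckling" risk linearly with sales; the multi-tenant provider does not.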
Question #2: How do you release new versions of your service?
Releasing software on CDs is safe. It’s safe because there’s an entire protocol around the release and there’s time between when the software is ready and when the bulk of customers will install it. That means there’s ample time to test the software and collect feedback, and there’s even time to fix mistakes after the software has been released. In the world of online services, there are no “take backs.” When a service provider updates their services, chances are we will see these updates the very next time we click a button in our browser or refresh the page. That gives them very little room for error. To circumvent this situation, and make life a little more familiar, some service providers declare scheduled upgrades. They carve out little windows of time to make the service unavailable to anyone but themselves, and figure the fact that their customers are aware of this impending outage makes it ok. Calling timeout gives them the opportunity to deploy the update and test it in the privacy of their own datacenters, just like they used to before releasing a new CD. The problem here is that their timing will never be your timing, and although it may not happen often, eventually there will be a scheduled update that conflicts with your need to access the service. The good service providers will find a way to upgrade you without downtime. Find one of them.
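How do the good providers upgrade without downtime? One common technique (a generic sketch, not any particular vendor's mechanism) is a rolling upgrade: drain one node from the load balancer at a time, upgrade it, and return it to rotation, so capacity never drops to zero.

```python
def rolling_upgrade(pool, new_version):
    """Upgrade nodes one at a time. While a node is drained, the rest
    keep serving traffic, so capacity never falls below len(pool) - 1.
    Returns the upgraded pool and the serving-node count at each step."""
    serving_history = []
    for i, node in enumerate(pool):
        # The node being upgraded is out of rotation; everyone else serves.
        serving_history.append(len(pool) - 1)
        pool[i] = {"name": node["name"], "version": new_version}
    return pool, serving_history

# Hypothetical three-node web tier, all on version 1.0
pool = [{"name": f"web{i}", "version": "1.0"} for i in range(3)]
pool, history = rolling_upgrade(pool, "1.1")
# Every node ends on 1.1, and at least 2 of 3 nodes served at all times.
```

Contrast this with the "scheduled maintenance window" approach: here there is no window at all, only a brief per-node capacity dip.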
Question #3: How do you plan for disaster recovery?
Even the mighty Google is not impervious to the whims of server ghosts. Just ask the 40,000 or so Gmail users whose inboxes recently went *poof* in the night. But that’s Gmail, a free consumer-focused email service. Google rightfully offers little to no guarantee that this won’t happen to you or me tomorrow. This might be acceptable for our personal email accounts (what do you expect for free?), but would this type of “c’est la vie” approach to your business’s data fly with your CEO? Not mine. Therefore, it’s imperative that you know of your service provider’s disaster recovery and business continuity plans. Disaster recovery means a broad spectrum of things, so there are a lot of questions to ask. Start with a couple: If data is lost, how quickly can it be restored? Can another customer potentially do something that would result in an outage or data loss for me? If half of your datacenter falls into a spontaneous sink hole, how many customers would go with it? Hint on that last one, if the answer is more than zero, chances are their plan is not worth your dime.
There’s a lot more to the disaster recovery discussion, but I will leave it here for now, lest I spark a conversation that will have us exercising the disaster recovery plan for this very blog.
In Part III of Transparency In The Cloud, we’ll continue our list of important questions to ask all SaaS providers.
In case you didn’t know, the days of business-to-business (B2B) software-as-a-service are upon us. If you’re a software developer and you haven’t begun planning a SaaS offering, stop reading this article right now, go gather your team and get started. Seriously, you’re already late.
Ok, now that they’re gone, this article is for the rest of us: the B2B software end-users.
I have it on good authority that in the next couple of years most of us are going to throw away our piles of compact discs and DVDs and replace them with bandwidth. We’re going to say goodbye to license fees and free up some square footage by dismantling our servers. We’re then going to embrace the technologies that give us access to always-on, internet-based services which we will access from our new server-rooms-turned-corner-offices. The benefits are manifold: accessing software typically costs less than owning it, online services are accessible from any location and on a plethora of devices, and we don’t have to worry about things like hard drive failures when our documents aren’t actually stored on our hard drives.
The unfortunate side-effect of our herd-like flocking to internet-based services is that, by forfeiting our ownership of the servers and software that fuel our businesses, we put our destinies in the hands of companies that, put simply, are software companies, not service providers. They’ve spent many years honing the algorithms and interfaces that made us want their software in the first place, but unfortunately, those years were not spent learning how to offer that software as a service. After all, we bought the servers, we provided the power, and oftentimes we even installed the software ourselves. Believe it or not, sometimes we’re better at running software than the vendors who sold it to us.
Fortunately, some software companies are comprised of incredibly smart people who do amazing and innovative things. They also have amazing tools available to them to supplement what they lack in experience. I have every bit of faith (*cough* SaaSGrid) that with some hard work (*sniffle* SaaSGrid), and a bit of help (*yelling* SaaSGrid), our trusted software vendors will seamlessly make the transition from shrink-wrappers to world-class service providers, without us noticing as much as a blip on our B2B, software-consuming, radars.
As end-users, we have an obligation to ensure this whole thing goes smoothly. We need to hold our service providers accountable. One way to do that is by relentlessly asking questions. Interestingly, no one is more qualified than us to ask the appropriate questions, because we’re the ones who’ve been running software on-premises for 20 years. We know the scenarios and situations to avoid, and most of these scenarios translate into very good questions that each and every service provider should be able to answer in a way that not only gives you the warm and fuzzies but also makes technical sense. Remember, we’re banking our businesses on these companies’ ability to learn how to provide software-as-a-service. I’d at least like to know that they have a plan.
In part two of the Transparency In The Cloud series, we’ll start a list of questions that you should ask each and every SaaS vendor you approach. The questions are designed to help us guide the B2B SaaS transformation by making us all knowledgeable and empowered SaaS end-users.
The challenges of managing identity and entitlements have grown exponentially with the proliferation of software as a service, SaaS-like architectures, and cloud computing. Enterprises and ISVs alike struggle to provide application end-users with a seamless experience while overcoming the associated engineering and security challenges.
For SaaSBlogs readers who are interested, we are holding a joint webinar with Microsoft this Wednesday (March 30th) at 1:00PM EDT to dig into these challenges and the latest solutions available today that you may not be aware of yet.
We’ll be joined by Microsoft’s Eugenio Pace, Senior Program Manager – Patterns and Practices, and co-author of A Guide to Claims Based Identity and Access Control, along with Apprenda’s VP of Client Services, Matt Ammerman. The event will include a discussion and real world demonstration of how the Microsoft family of technologies is leading the charge and making these challenges a thing of the past. You’ll also get an inside look at how SaaSGrid enables drop in federated identity and claims for .NET applications.
If you are interested, you can register here: https://www3.gotomeeting.com/register/976980454
Date: Wednesday 3/30
Time: 1:00PM EDT
- Introduction to Claims Based Identity (Principals and Architecture)
- Key Problems Solved By Claims Based Identity
- Current Standards and New Powerful Tools
- Ground Breaking Drop-in Federated Identity and Claims Enablement for .NET applications via SaaSGrid (Live Demo)
The cloud computing industry has been in a state of technology- and rhetoric-driven flux ever since the term “cloud computing” was coined. Coming from both a software and venture capital background, I enjoy paying close attention to the often incongruent evolution of both our industry’s capabilities and its marketing claims. This disparity isn’t unique to the cloud industry. Most new markets experience this while the vernacular jargon gets socialized and standardized. Case in point: the term nanotechnology was actually coined to refer to the concept of self-replicating autonomous bots (a la Michael Crichton’s novel Prey); what the world got was a much broader and less grandiose set of mainly micro-scale innovations all lumped under that industry name.
Similarly, cloud computing was initially taken to mean perfectly abstracted, infinitely elastic computing scale on an on-demand basis. We don’t have this yet. What we have, and I’m referring specifically to IaaS since it is our basic building block, is easy-to-configure, easy-to-provision, on-demand, utility-priced virtual machines. This is a great progression for the software and IT professions, but it falls short of fulfilling the fanciful capabilities drawn in people’s minds. Cloud IaaS makes it easier and cheaper to do what we’ve been doing for years, but hasn’t profoundly changed the actual requirements, workflow, or operational hassles that developers and IT administrators face. As a Senior Enterprise Architect at a large enterprise remarked to me last year: “we’ve run the course with virtualization, we did that years ago.”
Making compute power available has been solved – the current challenge is to put that instant-on horsepower to work. To be clear, this is being done fairly well in some use cases: big data crunching (CG rendering, map-reduce) and consumer-facing websites come to mind because they have scale mechanics that rely on simple replication and balancing.
Business applications, however, are still beholden to the underlying topology of servers that supports them. The application itself and the IT system must be intrinsically wired together in order to function properly. For example, business applications often store large amounts of data, and will spread that data across multiple servers. If user number 523 logs in and her data is on DB server number 5, how do you fetch that data when she sends a request through UI server 3? Answer: the application has to know who she is, where her data is stored, and how to retrieve it. This enmeshing is necessary to solve many similar complexities, but it makes it extremely difficult to take advantage of today’s Cloud infrastructures without completely re-architecting and rebuilding applications from the ground up and adding multiple highly complex new systems.
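The wiring described above can be made concrete with a minimal Python sketch (purely illustrative, not any vendor’s code): the application itself must embed a shard map that knows which database server holds each user’s data, which is exactly the application-to-infrastructure bond that makes simple replication insufficient. All names and data here are hypothetical.

```python
# Assumed shard map: user id -> database server name. In a real system this
# lookup would itself live in a directory service or metadata database.
SHARD_MAP = {523: "db5", 524: "db2"}

# Stand-in for the actual database servers and the rows they hold.
DATABASES = {
    "db5": {523: {"name": "Alice", "orders": 12}},
    "db2": {524: {"name": "Bob", "orders": 3}},
}

def fetch_user_data(user_id):
    """Any UI server can call this, but topology knowledge is baked in."""
    shard = SHARD_MAP[user_id]         # the app must know WHERE the data lives
    return DATABASES[shard][user_id]   # ...and HOW to retrieve it

print(fetch_user_data(523))  # {'name': 'Alice', 'orders': 12}
```

Note that adding a freshly provisioned DB server does nothing by itself: the shard map and routing logic must change too, which is precisely why raw IaaS elasticity doesn’t automatically benefit applications built this way.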
In order for this current world of instant-on cloud servers to be truly impactful and revolutionary for developers and IT operators, we must continue to elevate them further away from the underlying mechanics and break the bonds between application and infrastructure. To do this, we need some middle tier that abstracts the application away from the underlying server topology and surrounds that application with the management, authentication, user-routing, and scaling mechanics needed to seamlessly take advantage of newly provisioned resources. This is why we are seeing a universal up-stack migration by industry leaders like Amazon, Microsoft, and SalesForce.com into the still-evolving realm of PaaS (Platform as a Service), where the original vision of throwing code into an autonomous external cloud of elastic computing power is starting to coalesce in a couple of offerings.
SalesForce.com’s “Force.com” platform, for instance, holds true to the “deploy and forget” ideal, but in return ties the developer to a restricted programming environment based on their own proprietary programming language. To counteract this, SalesForce.com has made bold moves: first partnering with VMware, the leading provider of virtualization technology (the underpinnings of the cloud movement), to create VMforce for Java, and then acquiring Heroku for the Ruby programming language. It won’t be long before these two additional platforms are able to take advantage of the value-added capabilities built into Force.com. The last few months have also seen the acquisition of Makara (another PHP PaaS) by Red Hat, giving this enterprise heavyweight a compelling Private PaaS story for their client base. Amazon, the standard in IaaS, has been building new up-stack capabilities in-house and recently released their Beanstalk service, which simplifies scaling Java applications. Not to be outdone, VMware last week announced the acquisition of WaveMaker, a Rapid Application Development tool that complements their SpringSource acquisition. It’s not hard to imagine a new RAD-PaaS offering with these capabilities.
Surveying this stream of movement, it can be hard to divine where this is all heading and where the competitors will bump into one another. It all comes down to the application developers and what up-stack capabilities you can offer them that improve one or more of: their build-time productivity and speed, the business value they can incorporate into their applications, or the ease and quality of ongoing application delivery. The following graphic gives a basic snapshot of where we are and how I see this evolving.
Webinar announcement: Accelerating The Road To SaaS – 5 Ways To Get To Market Before Your Competitors
For SaaSBlogs readers who are interested, we are holding a joint webinar tomorrow (Wednesday 3/23 at 11:00 EDT) that will cover 5 key lessons critical to the speed of transitioning to a SaaS delivery model. The webinar is co-hosted by our partner Tenzing, a leading IaaS provider for ISVs.
If you are interested, you can register here: https://www3.gotomeeting.com/register/897150934
Date: Wednesday 3/23
Time: 11:00 EDT
Whether your company is a SaaS startup or employs a more traditional on-premise software model, you need to find ways to accelerate and streamline your SaaS application development. The fact is, software as a service is exploding, with IDC estimating that by 2014, 85 percent of new software companies will be offering their product through a SaaS delivery mechanism.
This webinar explores how Apprenda’s SaaSGrid Application Delivery Fabric, combined with Tenzing’s Cloud infrastructure, delivers a true end-to-end SaaS delivery platform that enables ISVs to bring their SaaS offerings to market faster with the lowest ongoing cost of service delivery. Why reinvent the wheel when making the move to SaaS? If you are to survive and thrive, you need to leverage best-of-breed technology that will get you into market faster and with the right capabilities to unlock the efficiencies of scale needed for long-term success.
Join Will Childs, SaaS Practice Director at Tenzing, and Devon Watson, Director of Business Development at Apprenda, as they discuss their partnership and resulting platform that enables software vendors to dramatically accelerate SaaS delivery while reducing development cost and complexity.
Before I get started, this post IS NOT a post about why multi-tenancy is a good thing, why it’s better than virtualization, or anything of that nature. I had to get that out before starting – there are plenty of posts that deal with this topic (one, two, three, etc.). Instead, I want to tackle a different issue: the issue of what multi-tenancy means in a variety of contexts as well as how its positioning by vendors is leading to mass confusion.
No particular event has motivated this post; instead, this post is the result of a number of conversations and miscommunications. A while back, I started noticing some disturbing trends in the market, and more specifically, in vendor pitches. Let me use a real world example of what I mean. Frequently, the sales team at Apprenda ends up in the following type of conversation at some point in our sales cycle (clearly, this is distilled for brevity):
Prospect: “I saw that SaaSGrid offers multi-tenancy.”
Apprenda: “That’s right; SaaSGrid gives your application access to various types of multi-tenancy using the same code base and application assets. It’s actually amazingly unique and takes significant effort out of your R&D.”
Prospect: “Yeah, but doesn’t <insert your favorite PaaS/IaaS here> offer multi-tenancy?” or “Well, the folks over at <insert your favorite PaaS/IaaS here> said they offer multi-tenancy too.”
Apprenda: “That’s different. Those technologies are themselves multi-tenant. SaaSGrid is a server technology that allows your single-tenant application to be multi-tenant at all application tiers without the huge effort typically associated with going multi-tenant.”
Prospect: “Wait, but that’s exactly what I heard <insert your favorite PaaS/IaaS here> say. You’re saying that they don’t do the same thing as you.”
Apprenda: “Correct. When most, if not all, cloud platform vendors refer to multi-tenancy, they are referring to their own architectures, meaning they can efficiently pack their customers onto shared hardware/OS instances and offer you better service at a lower price point. What you’re asking is how you can be multi-tenant and offer the same to your customers. Others in the cloud can’t help you with that.”
Prospect: “OK, so using their service doesn’t make sense then?”
Apprenda: “That’s hard to answer, but it usually makes sense in conjunction with something like SaaSGrid. Clearly, being on something like EC2 (which is multi-tenant at the infrastructure tier) has advantages to you. Using SaaSGrid means you can really lower your internal cost of offering your service to your customers, and either put the savings to the bottom line or pass it along to your customers.”
Prospect: “What SaaSGrid does seems pretty magical now that I’ve cut through the marketing BS, how do you do it?”
Apprenda: “SaaSGrid is a runtime. When deployed to SaaSGrid, your application is transformed and new capabilities are instrumented into all tiers of your application. When running on SaaSGrid, these transformations and runtime instrumentations ‘inject’ SaaS architecture DNA into your non-SaaS web application.”
As you can see, this sort of conversation is a distraction. Whether it’s a conversation with an analyst, industry pundit, or potential customer, I’ve found that the most important thing to do upfront is to level-set on what both sides mean/understand when they say “multi-tenant.” Why so much confusion? I think it’s due to two factors:
- The marketing pitch offered up by vendors and how those marketing sound bites are contextualized
- The general overloading of the term “multi-tenant”
On the marketing side, vendors do a good job of highlighting multi-tenancy. The problem is that the lack of context around the “feature” of multi-tenancy causes significant miscommunication. From a marketing perspective, vendors are sucked into the Green Crystals Marketing described by Bob Warfield a couple of years ago. Most cloud vendors are touting that they are multi-tenant; they want you to understand that they have a cost-effective and safe mechanism to isolate their customers from one another. To understand this better, I’ve taken the liberty to copy and paste (with references, of course) some content related to multi-tenancy from various cloud vendors:
The AppFabric Container provides base-level application infrastructure such as automatically ensuring scale out, availability, multi-tenancy and sandboxing of your application components. (Microsoft, Windows Azure)
Cloud-enabling infrastructure to allow secure multi-tenant deployments, including fully integrated management, monitoring, metering and billing infrastructure (CloudBees)
If you are running numerous applications/application instances, XAP’s fine-grained multi-tenancy allows you to share them across all available machines, instead of running only one instance per machine. This allows you to support more users on each machine. (GigaSpaces)
There are a few others to use as examples, but this is fine for now. I’m not going to debate whether advertising that you are multi-tenant is an effective use of marketing real estate. All of these snippets of text highlight multi-tenancy, but what is unclear is the context. For example, the first two indicate multi-tenancy at the platform tier; that is, multiple, unrelated code assets can share common OS instances. While this is powerful in its own right, it’s easy to understand how someone reading this text might walk away thinking “Excellent. Our requirements for our new SaaS project call for multi-tenancy. I can check that off our list.” The fact is, while ambiguous in terms of presentation, these technologies do not endow your application with multi-tenancy; they let you run in a multi-tenant environment. If you’re looking to build a multi-tenant app, you still need to architect multi-tenancy (‘architect’ is a verb according to the Oxford English Dictionary). In the GigaSpaces case, although they refer to multi-tenancy in a way that lends itself to an “endowment” interpretation, it seems that they are focusing more on a grid approach of scaling the app to support more end users. While this is also valuable, it does not deal with segregation and isolation of logical groups of tenants (which is what multi-tenancy really is). At Apprenda, we even dealt with this definition problem in our FAQ. This leads me to the next issue: term overloading.
Multi-tenancy is valid in 3 common computing contexts:
- Infrastructure: This is multi-tenancy the way someone like Amazon might refer to multi-tenancy on EC2. In the IaaS context, multi-tenancy means that multiple OS instances can run on the same physical hardware through hypervisor technology.
- Platform: PaaS multi-tenancy means that, like a Heroku or a CloudBees, the platform can isolate code from different apps/vendors on the same OS instance (usually by commingling processes and databases on OS instances). This removes the need to allocate a whole VM per application stack component, improving efficiency.
- Application: SaaS multi-tenancy, at least at the highest level of isolation, means that single runtime stack component instances are shared across multiple customers. For example, a single database might commingle data rows for thousands of customers while preserving isolation and performance.
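To make the application-tier case concrete, here is a minimal, purely illustrative Python/SQLite sketch (not any vendor’s implementation) of the highest level of isolation described above: rows for multiple customers commingled in a single database table, with every query scoped by a hypothetical `tenant_id` column to preserve logical isolation.

```python
import sqlite3

# One shared database instance serving many customers (tenants).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?)",
    [(1, 100.0), (1, 250.0), (2, 75.0)],  # tenants 1 and 2 share one table
)

def invoices_for(tenant_id):
    # Every data access is filtered by tenant: this per-query scoping is
    # what the application architecture itself must enforce.
    rows = conn.execute(
        "SELECT amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()
    return [amount for (amount,) in rows]

print(invoices_for(1))  # [100.0, 250.0]
print(invoices_for(2))  # [75.0]
```

The key point: nothing in the infrastructure or platform tiers supplies that `WHERE tenant_id = ?` discipline for you, which is exactly why running on a multi-tenant IaaS/PaaS does not, by itself, make your application multi-tenant.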
Clearly, multi-tenancy means different things in all of these scenarios. There is nothing wrong with overloading, but it certainly doesn’t help the already high levels of confusion that exist around the word. If you’re an app developer in the Cloud looking to see what tech can help you, having a sense of clarity is most useful. Make sure you ask simple questions like:
- When you say ‘multi-tenant’, what do you mean?
- If multi-tenancy is a feature of your IaaS/PaaS, does that mean my app automatically becomes multi-tenant and I get to reap efficiencies from it?
- If I want my app to be multi-tenant on your IaaS/PaaS, will I have to still architect the app to be multi-tenant?
If you get answers of “I don’t know” or “No,” then clearly you’re on your own if you want to build a multi-tenant SaaS app from the ground up.
Do you feel multi-tenancy is thrown around too often by the wrong parties? Is multi-tenancy confusing to you?