Eucalyptus Acquired by HP — How Did That Happen?

by Sebastian Stadil on Sep 12, 2014 2:24:45 PM

On Thursday, HP and Eucalyptus surprised everyone in the cloud industry by announcing that HP was acquiring Eucalyptus Systems for an undisclosed amount, rumored to be somewhere below the 100-million-dollar mark.

Now, while that amount is nothing to be sneezed at (for comparison, Eucalyptus had raised $55 million in venture capital), it is well below the typical return of a successful startup. And for good reason: Eucalyptus wasn’t exactly a runaway success. In fact, it was definitely one of the underdogs (along with Apache CloudStack) in the open-source private cloud platform market.

What makes this acquisition surprising is that the leader in this market happens to be OpenStack, to which HP has so far committed significant resources, and which it supports through its HP Helion OpenStack distribution. In fact, HP was one of the most significant contributors to OpenStack’s upcoming “Juno” release (first by commit count; other metrics likewise show HP in a solid position).

So, why would HP acquire Eucalyptus Systems, a small competitor that, while not entirely unsuccessful, had so far failed to make a dent?

Broadly speaking, there are three main reasons why acquisitions happen:

  1. Acquiring market share, or killing a competitor

  2. Acquiring a product

  3. Acquiring a team

Considering Eucalyptus’ position in the market, option 1 seems unlikely. What about the others?

Did HP Acquire Eucalyptus For Its Product?

HP’s commitment to OpenStack makes it unlikely that it is acquiring Eucalyptus Systems to integrate the Eucalyptus private cloud platform into its own product portfolio.

Indeed, OpenStack is a mature and well-architected platform that not only significantly overlaps with Eucalyptus’ functionality (they are both private cloud platforms), but is also largely incompatible with it (Eucalyptus is a largely monolithic Java codebase, whereas OpenStack is a modular Python one).

So Was It For The Team?

The Eucalyptus team doesn’t seem like an obvious fit to work on HP’s product (if only because of the programming language differences), but would presumably bring significant experience in terms of AWS compatibility, which HP could use (AWS compatibility is Eucalyptus’ key differentiator in the private cloud market).

However, there are OpenStack startups that have built their business on enabling AWS compatibility for OpenStack (CloudScaling is one such example). If HP’s goal was to acquire AWS compatibility experience, one of those might have been a better buy than Eucalyptus Systems.

What About The Leadership?

With the acquisition, Eucalyptus Systems CEO Mårten Mickos goes on to head HP’s cloud business.

Mårten is a friend, so I know from firsthand experience that he’s a world-class executive. He’s a good speaker, an inspiring leader, and a visionary. Plus, he has a proven track record.

Considering that HP’s cloud business has been without a leader since the departure of Biri Singh, bringing Mårten on board could be just what the company needs to finally turn its cloud division around and become a solid contender in the space.

What’s Next?

At this point, HP’s leadership hasn’t revealed its plans, and it is doubtful that those plans even exist yet. Indeed, in the wake of the acquisition, HP VP of product Bill Hilf stated:

"There’s going to be a lot of strategic discussion we have to have about [the acquisition] — how and what makes the most sense over time".

So, perhaps HP’s strategy is to cross its fingers and hope that Mårten will know what to do with its cloud business. Will the acquisition be worth it for HP? Certainly, the price tag seems steep for a single executive, even one of Mårten’s caliber.

Finally, what will the acquisition entail for the Eucalyptus product? If the product wasn’t the motivator for the acquisition, which seems likely, then it’s unfortunate but probable that it will eventually be discontinued. Time will tell.


Topics: OpenStack, Opinion, Eucalyptus, Cloud News, Private Cloud

A Taxonomy of the Cloud Management Ecosystem

by Sebastian Stadil on Aug 20, 2014 8:30:00 AM

Cloud management tools form a constantly evolving landscape.

Just a few weeks ago, HashiCorp introduced Terraform. In June, Google introduced Kubernetes. In October of last year, Airbnb open-sourced SmartStack. Heat was introduced in OpenStack Havana in 2013. Even older tools like Chef, Salt, and Scalr (that’s us!) aren’t that old, and they are constantly changing as new releases come out.

And all in all, these tools make the same promise: manage your cloud (better). But does that mean they are one and the same? Not really. There is definitely some overlap, but these tools have strongly diverging focuses, and this post hopes to shed some light on what those are.

To make this more concrete, we’ll look at an example. Let’s assume you are a DevOps engineer, and have your favorite N-tier web application running in your cloud of choice, backed by your favorite database. How did you get there?

Cluster Discovery

If your application is to be backed by a database, it definitely needs to know how to connect to it (e.g. what is the IP of your database server?!). There are tons of options here (DNS is one), but if you are dealing with a reasonably complex (or redundant) deployment, the right choice is usually cluster discovery software.

Cluster discovery systems provide a centralized repository of configuration information. In our example, application servers would connect to the cluster discovery system in order to retrieve the IP of the database servers they should use, but this is of course applicable to many other use cases.

Now, of course, you still need to somehow hardcode connection details to access your cluster discovery system! For this reason, cluster discovery systems are expected to be highly-available and redundant, so that discovering the cluster discovery system doesn’t become a problem. This is of course something you should bear in mind when choosing one!

Popular cluster discovery systems include ZooKeeper, etcd, Consul, and SmartStack. It’s important to note that other pieces of software can play that role too, like Chef Server.
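To make this a little more tangible, here is a minimal sketch of what the application-server side of cluster discovery can look like, using Consul’s HTTP KV API (the key name and the local agent address are illustrative assumptions, not a recommendation for your setup):

import base64

import requests

# Ask the local Consul agent for the database IP stored under a
# hypothetical key. Consul returns a JSON list of matching entries,
# each carrying a base64-encoded value.
response = requests.get("http://localhost:8500/v1/kv/service/database/ip")
response.raise_for_status()

entry = response.json()[0]
database_ip = base64.b64decode(entry["Value"])

print("Connecting to the database at %s" % database_ip)

The same lookup could be performed against ZooKeeper, etcd, or SmartStack; the point is that connection details live in one highly-available place instead of being baked into every application server.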


New Scalr Cost Analytics is Released: Enterprise Cloud Cost Management

by Praneet Wadge on Aug 12, 2014 6:30:00 AM

Scalr is happy to announce the release of Scalr Cost Analytics!

Cloud has dramatically transformed the process of resource provisioning, shifting control away from IT and towards a self-service model for developers. The resulting complexity makes it difficult to accurately predict cloud spend, allocate budget, and find inefficiencies. This is further complicated when enterprises split their cloud capacity across public and private clouds.

Yet, with an ever-changing cloud landscape (e.g., Rackspace’s recent exit from the pure-play IaaS market), it’s all the more critical for enterprise leadership to have visibility into the financial impact and business value of their different cloud options.

Here at Scalr, we work with enterprises that are pioneering cloud adoption in their industry; our cloud management platform (CMP) helps enterprise DevOps teams efficiently build cloud-native infrastructure, and enables their IT counterparts to govern the usage of cloud resources.

We saw that Scalr users and their colleagues were dealing first hand with the problems caused by a lack of financial transparency into their cloud resources. As such, we built Scalr Cost Analytics: cost management tooling to restore visibility and control over cloud spend. This blog post details the major components of the tool’s initial release.

Why is Cloud Cost Management tooling needed?

Generally speaking, using cloud resources breaks the old model of fixed infrastructure costs, leading to two main problems.  

1. Cloud costs are hard to predict

Unlike traditional infrastructure capacity, which comes with a pre-determined price independent of actual usage, cloud’s pay-per-use model means that costs can vary extensively from one day to the next, as DevOps teams provision and tear down entire infrastructure clusters.

What’s more, self-service provisioning results in multiple departments or clients simultaneously tapping into the enterprise’s cloud, making it even more difficult for Finance to answer questions such as: “How much cloud budget should I allocate?”, “Who should I give it to?”, and “Will there be overspend?”

To solve this problem, and to keep up with real-time provisioning, Finance needs tooling that generates real-time cost reporting and predictions, and tracks budgets.  

2. Cloud costs are hard to understand

When cloud resources are provisioned and decommissioned over the course of just a few hours or days, it becomes difficult to keep track of which resource does what exactly, and to whom it should be charged back.

In fact, as far as Finance is concerned, cloud accounts tend to turn into buckets of instances and volumes that are difficult or confusing to reason about — and don’t even think about optimizing them! In turn, when IT receives an aggregate cloud bill at the end of the month, mapping capacity usage to specific users can be an arduous if not impossible task.

To solve this problem, and enable cloud cost optimization, what Finance and IT need is tooling that delivers a better understanding of the story behind each cloud resource: “Why was this instance provisioned?”, “Who owns this volume?”, and “Can we decommission this resource?”

In other words: Finance and IT need tooling that provides actionable cost data.

Surveying the existing landscape of cloud cost management tools

With these two challenges in mind, how can your enterprise use cloud in a financially responsible manner?

Several options have been available for cloud cost management, but each faces its own issues.

  • The Amazon Billing Management Console. Though vastly improved since its release in 2013, the console does not help answer “who” or “why” when it comes to spending the cloud budget. Furthermore, it is of course AWS-only.

  • Tools like Cloudability, CloudVertical, and Cloudyn. Though all have extensive features, the first two do not support private clouds, and CloudVertical does not provide budgeting capabilities. Cloudyn is a quality tool, but it is based on resource tagging, which is prone to human error (with misspelled or forgotten tags) and remains time-consuming for the DevOps engineers who provision cloud resources.

  • A competing CMP vendor offers a Cost Analytics product. However, its software focuses exclusively on drilling down into aggregated cost data through filters in a separate interface. With very limited bidirectional integration with their CMP (which is where users actually launch resources from!), it can be difficult to make their cost data actionable.

  • DIY: We won’t go there, as organizations that prefer to build over buy do so because they can, not always because they should.

But why is Scalr Cost Analytics different? What makes it unique is that it is built directly into the Scalr Cloud Management Platform (CMP) itself.

As a CMP, Scalr thoroughly understands which cloud resources are being utilized, in what amounts, by whom, and for what purpose. In turn, Scalr Cost Analytics is able to leverage the insight gained from this integration to provide Finance, IT, and DevOps with contextualized expenditure predictions and actionable cost data. 

What Can You Do With Scalr Cost Analytics?

Scalr Cost Analytics is designed to provide actionable cost insights and contextual tooling to the main cloud stakeholders in enterprises: DevOps, IT, and Finance.   

Finance can:

  • Model their existing accounting schema to categorize cloud costs
  • Allocate cloud budget to business units, projects, and teams with their own leads
  • Analyze cloud spend across multiple cloud platforms, down to individual server farms
  • Forecast weekly, monthly, quarterly and yearly costs and overspend

IT will be able to:

  • Break down spend by cloud server and resource types for projects, teams, and business units
  • Plan for capacity by matching spend against server count and usage
  • Pinpoint uncontrolled costs and apply lease management policies and quotas
  • Audit the cloud bill for errors and inconsistencies

DevOps will be able to:

  • View the costs of running their applications at design time BEFORE launching resources
  • Choose the most cost efficient server types
  • Understand the financial impact of usage spikes and dips

Topics: Announcements, Features, Multi-Cloud, Release, Cost, Cloud Management, DevOps, Private Cloud, Finance

Finding Your Way Around The OpenStack Ecosystem: 10 Names To Know

by Thomas Orozco on Aug 5, 2014 8:00:00 AM

Unlike competing open-source cloud platforms CloudStack and Eucalyptus, OpenStack isn’t an integrated piece of software. Instead, it’s a constellation of projects that can be combined to deploy a private cloud.

OpenStack’s unique architecture yields unparalleled flexibility: you can pick and choose only the pieces you need, so that your private cloud is perfectly tailored to your use case.

Unfortunately, this architecture also breeds a certain amount of complexity, and it’s not always easy to find one’s way around the OpenStack ecosystem. The OpenStack Wiki provides a solid reference, but it can be a bit disorienting unless you already know what you are looking for.

Fortunately, you’ve found your handy guide! This blog post introduces the 10 most important projects that make up the OpenStack ecosystem, the names you need to know if you intend to deal with OpenStack.

1. Nova

Also known as OpenStack Compute, Nova is arguably the most important component of the OpenStack private cloud platform.

Nova exposes a REST API that clients can leverage to provision, decommission, and otherwise manage the lifecycle of virtual machines. Behind the scenes, Nova relies on a hypervisor driver to send commands to your hypervisor (which could be Xen, KVM, VMware, or any of the hypervisors on OpenStack’s supported hypervisors list).

Given that Nova is an abstraction layer on top of your hypervisor and the driver controlling it, some features may or may not be available, depending on the underlying hypervisor’s capabilities, or the driver’s maturity (e.g. Docker is a supported hypervisor, but at this time you can’t do live migration with it).
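To give you a feel for that API from a client’s perspective, here is a minimal sketch of booting an instance with python-novaclient (the credentials, auth URL, image name, and flavor name below are all placeholders):

from novaclient import client

# Authenticate against your cloud's Keystone endpoint (placeholder values).
nova = client.Client("2", "myuser", "mypassword", "myproject",
                     "http://keystone.example.com:5000/v2.0")

# Pick an image and a flavor, then ask Nova to boot an instance.
image = nova.images.find(name="ubuntu-14.04")
flavor = nova.flavors.find(name="m1.small")
server = nova.servers.create(name="demo-server", image=image, flavor=flavor)

print("Booting %s (id: %s)" % (server.name, server.id))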

2. Swift

Also known as OpenStack Object Storage, Swift is one of the older OpenStack projects: just like Nova, it was present in OpenStack’s first integrated release, Austin.

Swift is functionally similar to Amazon’s Simple Storage Service (S3): it lets you store arbitrarily large data files, arrange them in containers (Swift’s equivalent of S3 buckets), and retrieve them at a later date, using a simple REST API. Not all applications can use object storage like Swift for their storage needs, but object storage remains a staple of successful cloud-native application design, as we covered in an earlier blog post.
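As a quick illustration, here is a minimal sketch of storing and retrieving an object with python-swiftclient (the endpoint, credentials, and names are placeholders):

import swiftclient

# Connect to Swift through Keystone (placeholder values).
conn = swiftclient.Connection(
    authurl="http://keystone.example.com:5000/v2.0",
    user="myuser",
    key="mypassword",
    tenant_name="myproject",
    auth_version="2",
)

# Create a container, upload an object, then read it back.
conn.put_container("backups")
conn.put_object("backups", "hello.txt", contents="Hello, Swift!")
headers, body = conn.get_object("backups", "hello.txt")
print(body)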

Note that there are alternatives to using Swift with your OpenStack deployment. One such alternative is Ceph.


Topics: OpenStack, Cloud Platform, Private Cloud, enterprise cloud

"The Security Group Does Not Exist" - Working with the AWS APIs at Scale

by Igor Savchenko on Jul 30, 2014 7:00:00 AM

At Scalr we provide SaaS and on-premises software to manage cloud infrastructure (and it’s open-source), so we end up making lots and lots of API calls to the AWS APIs.

What’s great about scale is that when you’re making thousands of API calls a day, events with a 0.1% probability of occurring tend to happen multiple times a day, sometimes resulting in surprise, confusion, and hair pulling! One of those low-probability events is the subject of today’s blog post.

Let’s start with a quick test, shall we? Can you spot what’s wrong with this code?

import boto.ec2
 
# Connect to EC2
conn = boto.ec2.connect_to_region("us-east-1")  # Credentials are in the environment!
 
# Create a new SG
security_group = conn.create_security_group("test-sg", "Just making a point!")
 
# Define SG rules
ip_rules = [("tcp", 80, "0.0.0.0/0"), ("tcp", 443, "0.0.0.0/0")]
 
# Update the SG with the rules
for protocol, port, network in ip_rules:
  security_group.authorize(protocol, port, port, network)

If you can’t: read on!

The AWS APIs Are Fundamentally Eventually Consistent

When you hit the AWS API to create a security group, or allocate an Elastic IP, the response you get usually includes an ID for the resource you just created — and when it doesn’t, you get an error message explaining what went wrong, so you can fix your API call.

Now, popular belief (and the AWS documentation) suggests that as soon as you have the resource ID, you can make further API requests against it, like adding security rules, or associating an EIP, or adding tags to an instance. And in most cases, this will work.

But the truth is that the AWS APIs are in fact eventually consistent: the mere fact that you have a resource ID does not guarantee that the underlying resource actually exists.

In practice, this means that once in a while, your API call will return an ID that you can’t use right away, because the resource doesn’t exist yet. And if you nonetheless try to use it, you’ll get an error message like:

The security group 'sg-xxxxxxxx' does not exist

The allocation ID 'eipalloc-xxxxxxxx' does not exist

The instance ID 'i-xxxxxxxx' does not exist

Of course, when you retry the call a few minutes later — after you’ve dug through your logs — the security group is there, and so is the Elastic IP, and you’re left wondering: “Oh AWS, why can’t I use the security group I just created?”
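The usual workaround is to treat these errors as retryable. Below is a minimal sketch of that pattern with boto, keyed off the InvalidGroup.NotFound error code from the example above (other resources return analogous codes); a production version would also want exponential backoff:

import time

from boto.exception import EC2ResponseError

def authorize_with_retry(security_group, protocol, port, network,
                         retries=5, delay=2):
    # Retry authorize() until the security group actually exists.
    for attempt in range(retries):
        try:
            return security_group.authorize(protocol, port, port, network)
        except EC2ResponseError as e:
            # We already have the SG's ID, but the resource hasn't
            # propagated yet: wait a little and try again.
            if e.error_code != "InvalidGroup.NotFound":
                raise
            time.sleep(delay)
    raise RuntimeError("Security group never became visible")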


Topics: API, Technical, Tips, Amazon
