Deciphering VMware's OpenStack Play

by Thomas Orozco on Sep 23, 2014 8:00:00 AM

A couple of weeks ago at VMworld, VMware announced that it was introducing “VMware OpenStack”, an integrated OpenStack release that is specifically designed (or configured?) to be deployed on a VMware virtualization layer (note: if you’re unfamiliar with what a virtualization or “resource layer” is in this context, we encourage you to review this paper).

Is This New Software?

Not really. VMware OpenStack is essentially a repackaging of existing functionality and software. Indeed, OpenStack support for VMware virtualization is not new:

  • For compute virtualization, OpenStack Nova already supports VMware vSphere

  • For network virtualization, OpenStack Neutron already supports VMware NSX

  • For storage virtualization, OpenStack Cinder and Glance already support VMware VSAN and vSphere storage

In other words, OpenStack already supported VMware virtualization software (and all of that support is open-source). VMware OpenStack is therefore not much more than an OpenStack distribution that comes pre-configured to integrate with VMware virtualization software.

Then What is VMware’s Value Add?

The central value proposition of VMware OpenStack is to let you easily deploy OpenStack on an existing VMware “Software Defined Data Center” (SDDC), i.e. a VMware-virtualized resource layer.

To that end, VMware provides an OpenStack installer (which ships as an OVF package). VMware argues it will let you trivially deploy OpenStack to an existing VMware infrastructure, configure it, and manage the controller services (to learn more, the SDDC2198 VMworld session includes a demo, and can be viewed online).

Note that, functionally, this is somewhat similar to what Mirantis provides with Mirantis Fuel.

Is It Still OpenStack When It’s VMware OpenStack?

Fundamentally, OpenStack’s core value proposition is to abstract away your virtualization infrastructure and present standardized, developer-friendly APIs.

Now, the APIs exposed by VMware OpenStack are the actual OpenStack APIs. So, yes, VMware OpenStack is in fact OpenStack. As such, it will be compatible with the ecosystem of tools that have been developed around OpenStack itself — including Cloud Management Platforms like Scalr.
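In practice, this means client code written against the OpenStack APIs should work unchanged. As a rough sketch, a Nova “list servers” call is just an authenticated HTTP GET; the endpoint URL and token below are placeholder values (a real deployment obtains both from Keystone authentication):

```python
# Sketch: the standard Nova "list servers" call is an HTTP GET carrying
# a Keystone token. The endpoint and token below are placeholder values.
def build_list_servers_request(compute_endpoint, auth_token):
    """Return the (url, headers) pair for a Nova GET /servers request."""
    url = compute_endpoint.rstrip("/") + "/servers"
    headers = {"X-Auth-Token": auth_token, "Accept": "application/json"}
    return url, headers

url, headers = build_list_servers_request(
    "http://controller:8774/v2/demo", "placeholder-token"
)
```

Because VMware OpenStack exposes these same endpoints, such a request would be identical whether the backing hypervisor is KVM or vSphere.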



Announcing Scalr Cloud Management Platform 5.0

by Sebastian Stadil on Sep 16, 2014 3:00:00 AM

The hardworking team at Scalr is proud to announce the release of Scalr 5.0. This release furthers Scalr’s focus on the enterprise, with an emphasis on IT management and control, and new integration capabilities.

As with previous releases, Scalr 5.0 is licensed under the Apache 2.0 License. If you are a new user, you can download this new release now. If you’re already using Scalr, then head for the upgrade instructions on the Scalr Wiki.

Highlights of this release include:

Enhanced Enterprise Capabilities

Cost Analytics

First and foremost, Scalr 5.0 introduces Cost Analytics. Cost Analytics leverages the substantial amounts of infrastructure metadata generated by Scalr (such as “Who launched this instance?” or “What is this machine used for?”), and enables IT departments and their finance counterparts to use that metadata to better understand their costs across public and private clouds.

Cost Analytics is the result of close and intense collaboration with Scalr enterprise customers, and we’re delighted to be able to bring this new feature to our community of open-source users.

Policy Enforcement with Global Orchestration

When we introduced Scalr 4.5 in December 2013, two highlights of the release were Governance and Role-Based Access Control (RBAC). Governance enabled IT to control which cloud resources should be exposed to end-users (e.g. to prevent dev workloads from being deployed to a production network), and RBAC enabled IT to control permissions on a per-user basis (e.g. to restrict root access to instances to a specific group of users).

With Scalr 5.0, we’re adding a third layer of IT control: Global Orchestration. Using Global Orchestration, IT departments can centrally define IT policies that will be enforced at the instance level. Example use cases include deploying a standard firewall or authentication policy across all of the organization’s cloud resources, or enforcing the presence of auditing software.
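As a rough sketch of the idea (this is not Scalr’s actual policy format, and the package names are made up), an “auditing software must be present” policy boils down to a check evaluated on every managed instance:

```python
# Hypothetical sketch of instance-level policy enforcement. The policy
# structure and package names are illustrative, not Scalr's actual format.
REQUIRED_SOFTWARE = {"auditd", "iptables"}  # example org-wide policy

def policy_violations(installed_packages):
    """Return the set of required packages missing from an instance."""
    return REQUIRED_SOFTWARE - set(installed_packages)

# An instance missing the auditing daemon would be flagged:
missing = policy_violations(["iptables", "nginx"])
```

The point of Global Orchestration is that IT defines such a policy once, centrally, and it is enforced on every instance rather than configured per-application.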

Compliant Agent Upgrade Schedule with Custom Scalarizr Repositories

Scalr relies on an agent, Scalarizr, to remotely perform automation tasks on managed instances. Those tasks include deploying applications, enforcing policies using Global Orchestration, and more.

Up until Scalr 5.0, the Scalarizr agent was deployed through Scalr-managed repositories, whose upgrade schedule was sometimes incompatible with enterprise IT change management policies. Starting with Scalr 5.0, IT departments can manage their own Scalarizr repositories, and therefore control their organization’s agent upgrade schedule.

Facilitated Integration with Webhooks

As a Cloud Management Platform, Scalr is central to the provisioning and management of the cloud infrastructure at organizations that deploy it. It is therefore natural that these organizations need to integrate Scalr with other systems, such as change management databases, audit systems, and more.

With Scalr 5.0, we’re introducing a compelling and flexible solution: Webhooks.

Webhooks are outbound notifications that are delivered by Scalr to external systems whenever infrastructure events are triggered, such as when an instance is launched or decommissioned. Webhooks are dispatched as standard HTTP JSON requests, so that integration developers feel right at home when using them.
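Because the notifications are plain HTTP and JSON, a receiver can be a few lines of code. A minimal sketch of the parsing side (the payload field names `eventName` and `serverId` are hypothetical; consult the Scalr documentation for the actual schema):

```python
import json

# Minimal webhook receiver logic: parse the JSON body a webhook would
# POST and route on the event name. Field names here are hypothetical.
def handle_webhook(raw_body):
    event = json.loads(raw_body)
    name = event.get("eventName", "unknown")
    if name == "HostUp":
        return "register instance " + event.get("serverId", "?")
    if name == "HostDown":
        return "deregister instance " + event.get("serverId", "?")
    return "ignored: " + name

result = handle_webhook('{"eventName": "HostUp", "serverId": "i-123"}')
```

In a real integration this function would sit behind an HTTP endpoint and update, say, a change management database or an audit log.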

DevOps Enhancements

Of course, we didn’t forget about our audience of DevOps end-users with Scalr 5.0. Besides a ton of bug fixes and enhancements (which were detailed on the Scalr Product Blog throughout the Scalr 5.0 development cycle), we’re introducing two major new features in this new release:



Eucalyptus Acquired by HP — How Did That Happen?

by Sebastian Stadil on Sep 12, 2014 2:24:45 PM

Thursday, HP and Eucalyptus surprised everyone in the cloud industry as they announced that HP was acquiring Eucalyptus Systems for an undisclosed amount, rumoured to be somewhere below the 100-million-dollar mark.

Now, while that amount is nothing to be sneezed at (comparatively, Eucalyptus had raised $55 million in venture capital), it is well below the typical return of a successful startup. And for good reason: Eucalyptus wasn’t exactly a runaway success. In fact, it was definitely one of the underdogs (along with Apache CloudStack) in the open-source private cloud platform market.

Now, what makes this acquisition surprising is the fact that the leader in this market happens to be OpenStack, to which HP has so far committed significant resources, and which it supports through its HP Helion OpenStack distribution. In fact, HP was one of the most significant contributors to OpenStack’s upcoming “Juno” release (1st by commit count; other metrics likewise show HP in a solid position).

So, why would HP acquire Eucalyptus Systems, a small competitor that had so far failed to make a dent (though it wasn’t totally unsuccessful either)?

Broadly speaking, there are three main reasons why acquisitions happen:

  1. Acquiring market share, or killing a competitor

  2. Acquiring a product

  3. Acquiring a team

Considering Eucalyptus’ position in the market, option 1 seems unlikely. What about the other two?

Did HP Acquire Eucalyptus For Its Product?

HP’s commitment to OpenStack makes it unlikely that it is acquiring Eucalyptus Systems to integrate the Eucalyptus private cloud platform into its own product portfolio.

Indeed, OpenStack is a mature and well-architected platform that not only significantly overlaps with Eucalyptus’ functionality (they are both private cloud platforms), but is also largely incompatible with it (Eucalyptus is a largely monolithic Java codebase, whereas OpenStack is a modular Python one).

So Was It For The Team?

The Eucalyptus team doesn’t seem like an obvious fit to work on HP’s product (if only because of the programming language differences), but would presumably bring significant experience in terms of AWS compatibility, which HP could use (AWS compatibility is Eucalyptus’ key differentiator in the private cloud market).

However, there are OpenStack startups that have built their business on enabling AWS compatibility for OpenStack (CloudScaling is one such example). If HP’s goal was to acquire AWS compatibility experience, one of those might have been a better buy than Eucalyptus Systems.

What About The Leadership?

With the acquisition, Eucalyptus Systems CEO Mårten Mickos goes on to head HP’s cloud business.

As a friend, I know from firsthand experience that Mårten is a world-class executive. He’s a good speaker, an inspiring leader, and a visionary. Plus, he does have a proven track record.

Considering that HP’s cloud business has been without a leader since the departure of Biri Singh, bringing Mårten on board could be just what the company needs to finally turn its cloud division around and become a solid contender in the space.

What’s Next?

At this point, HP’s leadership hasn’t revealed its plans, but it is doubtful that those plans even exist. Indeed, in the wake of the acquisition, HP VP of product Bill Hilf stated that:

"There’s going to be a lot of strategic discussion we have to have about [the acquisition] — how and what makes the most sense over time".

So, perhaps HP’s strategy is to cross fingers and hope that Mårten will know what to do with its cloud business. Will the acquisition be worth it for HP? Surely, the price tag seems steep for a single executive, even for one of Mårten’s caliber.

Finally, what will the acquisition entail for the Eucalyptus product? If the product wasn’t the motivator for the acquisition (which seems likely), then it’s both probable and unfortunate that it will eventually be discontinued. Time will tell.



A Taxonomy of the Cloud Management Ecosystem

by Sebastian Stadil on Aug 20, 2014 8:30:00 AM

Cloud management tools form a constantly evolving landscape.

Just a few weeks ago, Hashicorp introduced Terraform. In June, Google introduced Kubernetes. In October of last year, Airbnb open-sourced SmartStack. Heat was introduced in OpenStack Havana in 2013. Even older tools like Chef, Salt, and Scalr (that’s us!) aren’t that old, and they are constantly changing as new releases come out.

And all in all, these tools make the same promise: manage your cloud (better). But does that mean they are one and the same? Not really. There definitely is some overlap, but these tools have strongly diverging focuses, and this post hopes to shed some light on what those are.

To make this more concrete, we’ll look at an example. Let’s assume you are a DevOps engineer, and have your favorite N-tier web application running in your cloud of choice, backed by your favorite database. How did you get there?

Cluster Discovery

If your application is to be backed by a database, it definitely needs to know how to connect to it (e.g. what is the IP of your database server?!). There are tons of options here (DNS is one), but if you are dealing with a reasonably complex (or redundant) deployment, the right choice is usually cluster discovery software.

Cluster discovery systems provide a centralized repository of configuration information. In our example, application servers would connect to the cluster discovery system in order to retrieve the IP of the database servers they should use, but this is of course applicable to many other use cases.

Now, of course, you still need to somehow hardcode connection details to access your cluster discovery system! For this reason, cluster discovery systems are expected to be highly-available and redundant, so that discovering the cluster discovery system doesn’t become a problem. This is of course something you should bear in mind when choosing one!

Popular cluster discovery systems include: ZooKeeper, Etcd, Consul, and SmartStack. It’s important to note that there are other pieces of software that can play that role too, like Chef Server.
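Whatever the backend, the lookup pattern is the same: the application asks a well-known registry for the address of a service. A minimal sketch of that pattern, with the registry simulated by a dict (a real deployment would use a ZooKeeper, etcd, or Consul client over the network instead):

```python
# Sketch of the cluster-discovery lookup pattern. The registry is
# simulated with an in-memory dict; real systems (ZooKeeper, etcd,
# Consul) expose a similar key/value or path-based lookup remotely.
registry = {"/services/db/primary": "10.0.1.12:5432"}

def discover(registry, key, default=None):
    """Resolve a service address from the registry, with a fallback."""
    return registry.get(key, default)

db_addr = discover(registry, "/services/db/primary")
```

The `/services/db/primary` key and the address are made-up examples; the takeaway is that the application hardcodes only the registry location, never the database’s.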


New Scalr Cost Analytics is Released: Enterprise Cloud Cost Management

by Praneet Wadge on Aug 12, 2014 6:30:00 AM

Scalr is happy to announce the release of Scalr Cost Analytics!

Cloud has dramatically transformed the process of resource provisioning, shifting control away from IT and towards a self-service model for developers. The resulting complexity makes it difficult to accurately predict cloud spend, allocate budget, and find inefficiencies. This is further complicated when enterprises split their cloud capacity across public and private clouds.

Yet, with an ever-changing cloud landscape (e.g. Rackspace’s recent exit from the pure-play IaaS market), it’s all the more critical for enterprise leadership to have visibility into the financial impact and business value of their different cloud options.

Here at Scalr, we work with enterprises that are pioneering cloud adoption in their industry; our cloud management platform (CMP) helps enterprise DevOps teams efficiently build cloud-native infrastructure, and enables their IT counterparts to govern the usage of cloud resources.

We saw that Scalr users and their colleagues were dealing first-hand with the problems caused by a lack of financial transparency into their cloud resources. As such, we built Scalr Cost Analytics: cost management tooling that restores visibility and control over cloud spend. This blog post details the major components of the tool’s initial release.

Why is Cloud Cost Management tooling needed?

Generally speaking, using cloud resources breaks the old model of fixed infrastructure costs, leading to two main problems.  

1. Cloud Costs are hard to predict

Unlike traditional infrastructure capacity, which comes with a pre-determined price independent of actual usage, cloud’s pay-per-use model means that costs can vary extensively from one day to the next, as DevOps teams provision and tear down entire infrastructure clusters.

What’s more, self-service provisioning results in multiple departments or clients simultaneously tapping into the enterprise’s cloud, making it even more difficult for Finance to answer questions such as: “How much cloud budget should I allocate?”, “Who should I give it to?”, and “Will there be overspend?”

To solve this problem, and to keep up with real-time provisioning, Finance needs tooling that generates real-time cost reporting and predictions, and tracks budgets.  
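To make the contrast with fixed costs concrete, here is a toy sketch of the problem (the hourly rate and usage figures are made up):

```python
# Toy illustration of pay-per-use cost variability, with made-up numbers.
# Under pay-per-use, daily spend tracks daily usage instead of being fixed.
HOURLY_RATE = 0.10  # hypothetical $/instance-hour

def daily_cloud_cost(instance_hours):
    """Compute one day's spend from metered usage."""
    return instance_hours * HOURLY_RATE

# A week where a team spins a large test cluster up mid-week:
week = [240, 240, 1440, 1440, 240, 240, 240]  # instance-hours per day
costs = [daily_cloud_cost(h) for h in week]
# Daily spend jumps 6x mid-week; a fixed-capacity budget never behaves
# this way, which is why real-time reporting and budget tracking matter.
```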

2. Cloud costs are hard to understand

When cloud resources are provisioned and decommissioned over the course of just a few hours or days, it becomes difficult to keep track of what each resource does, and to whom it should be charged back.

In fact, as far as Finance is concerned, cloud accounts tend to turn into buckets of instances and volumes that are difficult or confusing to reason about — and don’t even think about optimizing them! In turn, when IT receives an aggregate cloud bill at the end of the month, mapping capacity usage to specific users can be an arduous if not impossible task.

To solve this problem, and enable cloud cost optimization, what Finance and IT need is tooling that delivers a better understanding of the story behind each cloud resource: “Why was this instance provisioned?", "Who owns this volume?", and "Can we decommission this resource?”

In other words: Finance and IT need tooling that provides actionable cost data.
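To sketch what “actionable” looks like in practice: once provisioning metadata is attached to each resource, chargeback reduces to a simple aggregation (the field names and figures below are illustrative, not Scalr’s actual schema):

```python
from collections import defaultdict

# Sketch of metadata-driven chargeback. Each resource carries the
# provisioning context a CMP records; field names are illustrative.
resources = [
    {"id": "i-1", "team": "payments", "cost": 12.50},
    {"id": "i-2", "team": "payments", "cost": 7.25},
    {"id": "v-9", "team": "analytics", "cost": 3.00},
]

def chargeback(resources):
    """Aggregate spend per team from per-resource metadata."""
    totals = defaultdict(float)
    for r in resources:
        totals[r["team"]] += r["cost"]
    return dict(totals)

totals = chargeback(resources)
```

Without that metadata, the same bill is just an undifferentiated list of instance IDs, which is exactly the “bucket of instances” problem described above.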

Surveying the existing landscape of cloud cost management tools

With these two challenges in mind, how can your enterprise use cloud in a financially responsible manner?

Several options are available for cloud cost management, but each faces its own issues.

  • The Amazon Billing Management Console. Though vastly improved from its release in 2013, the console does not help answer “who” or “why” when it comes to spending the cloud budget. Furthermore, it is of course AWS-only.

  • Tools like Cloudability, CloudVertical, and Cloudyn. Though all have extensive features, the first two do not support private clouds, and CloudVertical does not provide budgeting capabilities. Cloudyn is a quality tool, but it is based on resource tagging, which is prone to human error (with misspelled or forgotten tags), and remains time-consuming for the DevOps engineers that provision cloud resources.

  • A competing CMP vendor offers a Cost Analytics product. However, its software focuses exclusively on drilling down into aggregated cost data through filters in a separate interface. With very limited bidirectional integration with their CMP (which is where users actually launch resources from!), it can be difficult to make their cost data actionable.

  • DIY: We won’t go into this one, as organizations that prefer to build over buy do so because they can, not always because they should.

But why is Scalr Cost Analytics different? What makes Scalr Cost Analytics unique is that it is built directly into the Scalr Cloud Management Platform (CMP) itself.

As a CMP, Scalr thoroughly understands which cloud resources are being utilized, in what amounts, by whom, and for what purpose. In turn, Scalr Cost Analytics is able to leverage the insight gained from this integration to provide Finance, IT, and DevOps with contextualized expenditure predictions and actionable cost data. 

What Can You Do With Scalr Cost Analytics?

Scalr Cost Analytics is designed to provide actionable cost insights and contextual tooling to the main cloud stakeholders in enterprises: DevOps, IT, and Finance.   

Finance can:

  • Model their existing accounting schema to categorize cloud costs
  • Allocate cloud budget to business units, projects, and teams with their own leads
  • Analyze cloud spend across multiple cloud platforms, down to individual server farms
  • Forecast weekly, monthly, quarterly and yearly costs and overspend

IT will be able to:

  • Break down spend by cloud server and resource types for projects, teams, and business units
  • Plan for capacity with spend matched with server count and usage
  • Pinpoint uncontrolled costs and apply lease management policies and quotas
  • Audit the cloud bill for errors and inconsistencies

DevOps will be able to:

  • View the costs of running their applications at design time BEFORE launching resources
  • Choose the most cost efficient server types
  • Understand the financial impact of usage spikes and dips

