
A Taxonomy of the Cloud Management Ecosystem

by Sebastian Stadil on Aug 20, 2014 8:30:00 AM

Cloud management tools form a constantly evolving landscape.

Just a few weeks ago, Hashicorp introduced Terraform. In June, Google introduced Kubernetes. In October of last year, Airbnb open-sourced SmartStack. Heat was introduced in OpenStack Havana in 2013. Even older tools like Chef, Salt, and Scalr (that’s us!) aren’t that old, and they are constantly changing as new releases ship.

And all in all, these tools make the same promise: manage your cloud (better). But does that mean they are one and the same? Not really. There definitely is some overlap, but these tools have strongly diverging focuses, and this post hopes to shed some light on what those are.

To make this more concrete, we’ll look at an example. Let’s assume you are a DevOps engineer, and have your favorite N-tier web application running in your cloud of choice, backed by your favorite database. How did you get there?

Cluster Discovery

If your application is to be backed by a database, it definitely needs to know how to connect to it (e.g. what is the IP of your database server?!). There are tons of options here (DNS is one), but if you are dealing with a reasonably complex (or redundant) deployment, the right choice is usually cluster discovery software.

Cluster discovery systems provide a centralized repository of configuration information. In our example, application servers would connect to the cluster discovery system in order to retrieve the IP of the database servers they should use, but this is of course applicable to many other use cases.

Now, of course, you still need to somehow hardcode connection details to access your cluster discovery system! For this reason, cluster discovery systems are expected to be highly-available and redundant, so that discovering the cluster discovery system doesn’t become a problem. This is of course something you should bear in mind when choosing one!

Popular cluster discovery systems include: ZooKeeper, Etcd, Consul, and SmartStack. It’s important to note that there are other pieces of software that can play that role too, like Chef Server.
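To make the pattern concrete, here is a minimal sketch of how an application server might retrieve the database IP from etcd’s v2 HTTP key-value API. The key name, address, and the canned JSON response below are illustrative assumptions, not from a real deployment; a live client would GET the key over HTTP and then parse the node’s value:

```python
import json

# Hypothetical response body for GET http://127.0.0.1:4001/v2/keys/services/db/primary
# (etcd v2 wraps each key in a "node" object that carries its current value)
response_body = (
    '{"action": "get", "node": {"key": "/services/db/primary",'
    ' "value": "10.0.1.42", "modifiedIndex": 7, "createdIndex": 7}}'
)

def database_ip(etcd_response):
    """Extract the database server IP stored as the key's value."""
    return json.loads(etcd_response)["node"]["value"]

print(database_ip(response_body))  # 10.0.1.42
```

The application only hardcodes the address of the discovery system itself; everything else (here, the database IP) is looked up at runtime.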


New Scalr Cost Analytics is Released: Enterprise Cloud Cost Management

by Praneet Wadge on Aug 12, 2014 6:30:00 AM

Scalr is happy to announce the release of Scalr Cost Analytics!

Cloud has dramatically transformed the process of resource provisioning, shifting control away from IT and towards a self-service model for developers. The resulting complexity makes it difficult to accurately predict cloud spend, allocate budget, and find inefficiencies. This is further complicated when enterprises split their cloud capacity across public and private clouds.

Yet, with an ever-changing cloud landscape (ex. Rackspace's recent exit of the pure-play IaaS market), it’s all the more critical for enterprise leadership to have visibility into the financial impact and business value of their different cloud options.

Here at Scalr, we work with enterprises that are pioneering cloud adoption in their industry; our cloud management platform (CMP) helps enterprise DevOps teams efficiently build cloud-native infrastructure, and enables their IT counterparts to govern the usage of cloud resources.

We saw that Scalr users and their colleagues were dealing first hand with the problems caused by a lack of financial transparency into their cloud resources. As such, we built Scalr Cost Analytics: cost management tooling that restores visibility and control over cloud spend. This blog post details the major components of the tool’s initial release.

Why is Cloud Cost Management tooling needed?

Generally speaking, using cloud resources breaks the old model of fixed infrastructure costs, leading to two main problems.  

1. Cloud Costs are hard to predict

Unlike traditional infrastructure capacity, which comes with a pre-determined price independent of actual usage, cloud’s pay-per-use model means that costs can vary extensively from one day to the next, as DevOps teams provision and tear down entire infrastructure clusters.
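To illustrate how quickly pay-per-use spend can swing, here is a back-of-the-envelope sketch; the hourly rate and cluster sizes are made-up numbers, not real pricing:

```python
HOURLY_RATE = 0.28  # illustrative $/instance-hour for a mid-size instance type

def daily_cost(instance_count, hours=24, rate=HOURLY_RATE):
    """Pay-per-use spend for one day of a cluster of the given size."""
    return instance_count * hours * rate

# A 10-instance baseline vs. a 60-instance load-test cluster torn down the next day:
print(daily_cost(10))  # 67.2
print(daily_cost(60))  # 403.2
```

A single day of load testing here costs six times the baseline, and it disappears from the bill the next day, which is exactly the kind of swing that fixed-cost budgeting can’t anticipate.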

What’s more, self-service provisioning results in multiple departments or clients simultaneously tapping into the enterprise’s cloud, making it even more difficult for Finance to answer questions such as: “How much cloud budget should I allocate?”, “Who should I give it to?”, and “Will there be overspend?”

To solve this problem, and to keep up with real-time provisioning, Finance needs tooling that generates real-time cost reporting and predictions, and tracks budgets.  

2. Cloud costs are hard to understand

When cloud resources are provisioned and decommissioned over the course of just a few hours or days, it becomes difficult to keep track of which resource does what exactly, and to whom it should be charged back.

In fact, as far as Finance is concerned, cloud accounts tend to turn into buckets of instances and volumes that are difficult or confusing to reason about — and don’t even think about optimizing them!  In turn, when IT receives an aggregate cloud bill at the end of the month, mapping capacity usage to specific users can be an arduous if not impossible task.

To solve this problem, and enable cloud cost optimization, what Finance and IT need is tooling that delivers a better understanding of the story behind each cloud resource: “Why was this instance provisioned?”, “Who owns this volume?”, and “Can we decommission this resource?”

In other words: Finance and IT need tooling that provides actionable cost data.

Surveying the existing landscape of cloud cost management tools

With these two challenges in mind, how can your enterprise use cloud in a financially responsible manner?

Several options have been available for cloud cost management, but face their own issues.

  • The Amazon Billing Management Console. Though vastly improved from its release in 2013, the console does not help answer “who” or “why” when it comes to spending the cloud budget. Furthermore, it is of course AWS-only.

  • Tools like Cloudability, CloudVertical, and Cloudyn. Though all have extensive features, the first two do not support private clouds, and CloudVertical does not provide budgeting capabilities. Cloudyn is a quality tool, but it is based on resource tagging, which is prone to human error (with misspelled or forgotten tags), and remains time-consuming for the DevOps engineers that provision cloud resources.

  • A competing CMP vendor offers a Cost Analytics product. However, its software focuses exclusively on drilling down into aggregated cost data through filters in a separate interface. With very limited bidirectional integration with their CMP (which is where users are actually launching resources from!), it can be difficult to make their cost data actionable.

  • DIY: We won’t go there, as organizations that prefer to build over buy do so because they can, not always because they should.

But why is Scalr Cost Analytics different? What makes Scalr Cost Analytics unique is that it is built directly into the Scalr Cloud Management Platform (CMP) itself.

As a CMP, Scalr thoroughly understands which cloud resources are being utilized, in what amounts, by whom, and for what purpose. In turn, Scalr Cost Analytics is able to leverage the insight gained from this integration to provide Finance, IT, and DevOps with contextualized expenditure predictions and actionable cost data. 

What Can You Do With Scalr Cost Analytics?

Scalr Cost Analytics is designed to provide actionable cost insights and contextual tooling to the main cloud stakeholders in enterprises: DevOps, IT, and Finance.   

Finance can:

  • Model their existing accounting schema to categorize cloud costs
  • Allocate cloud budget to business units, projects, and teams with their own leads
  • Analyze cloud spend across multiple cloud platforms, down to individual server farms
  • Forecast weekly, monthly, quarterly and yearly costs and overspend

IT will be able to:

  • Break down spend by cloud server and resource types for projects, teams, and business units
  • Plan for capacity with spend matched with server count and usage
  • Pinpoint uncontrolled costs and apply lease management policies and quotas
  • Audit the cloud bill for errors and inconsistencies

DevOps will be able to:

  • View the costs of running their applications at design time, before launching resources
  • Choose the most cost efficient server types
  • Understand the financial impact of usage spikes and dips

Topics: Announcements, Features, Multi-Cloud, Release, Cost, Cloud Management, DevOps, Private Cloud, Finance

Finding Your Way Around The OpenStack Ecosystem: 10 Names To Know

by Thomas Orozco on Aug 5, 2014 8:00:00 AM

Unlike competing open-source cloud platforms CloudStack and Eucalyptus, OpenStack isn’t an integrated piece of software. Instead, it’s a constellation of projects that can be combined to deploy a private cloud.

OpenStack’s unique architecture yields unparalleled flexibility, as you can pick and choose only the pieces that you’ll need, so that your private cloud is perfectly tailored towards your use case.

Unfortunately, this architecture also breeds a certain amount of complexity, and it’s not always easy to find one’s way around the OpenStack ecosystem of projects. The OpenStack Wiki provides a solid reference, but it can be a bit disorienting unless you know what you are looking for.

Fortunately, you found your handy guide! This blog post introduces the 10 most important projects that make up the OpenStack ecosystem, and whose name you need to know if you intend to deal with OpenStack.

1. Nova

Also known as OpenStack Compute, Nova is arguably the most important component of the OpenStack private cloud platform.

Nova exposes a REST API that clients can leverage to provision, decommission, and otherwise manage the lifecycle of virtual machines. Behind the scenes, Nova relies on a hypervisor driver to send commands to your hypervisor (which could be Xen, KVM, VMware, or any of the hypervisors listed here: Supported Hypervisors in OpenStack).

Given that Nova is an abstraction layer on top of your hypervisor and the driver controlling it, some features may or may not be available, depending on the underlying hypervisor’s capabilities, or the driver’s maturity (e.g. Docker is a supported hypervisor, but at this time you can’t do live migration with it).
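To make the API shape concrete, here is a minimal sketch of the JSON body a client sends to Nova’s “create server” call (POST /v2/{tenant_id}/servers). The name, image UUID, and flavor ID below are placeholders, not values from a real cloud:

```python
import json

def boot_request(name, image_ref, flavor_ref):
    """Build the JSON body for Nova's 'create server' call
    (POST /v2/{tenant_id}/servers)."""
    return json.dumps({"server": {
        "name": name,          # display name for the new instance
        "imageRef": image_ref,  # UUID of the image to boot from
        "flavorRef": flavor_ref,  # ID of the flavor (CPU/RAM/disk shape)
    }})

# Placeholder identifiers; a real client would look these up via Glance and Nova.
body = boot_request("web-1", "70a599e0-31e7-49b7-b260-868f441e862b", "2")
```

Nova replies with the new server’s ID and status, and the hypervisor driver takes it from there.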

2. Swift

Also known as OpenStack Object Storage, Swift is one of the older OpenStack projects: just like Nova, it was present in OpenStack’s first integrated release, Austin.

Swift is functionally similar to Amazon’s Simple Storage Service (S3): it lets you store arbitrarily large data files, arrange them in buckets, and retrieve them at a later date, using a simple REST API. Not all applications can use object storage like Swift for their storage needs, but object storage remains a staple of successful cloud-native application design, as we covered in this earlier blog post.
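As a rough illustration of that REST API: a Swift object upload is just an authenticated PUT to {storage-url}/{container}/{object}. The helper below only assembles the request (the storage URL and token values are placeholders, not real credentials):

```python
def put_object_request(storage_url, container, obj, token):
    """Assemble the (method, url, headers) triple for a Swift object
    upload: PUT {storage_url}/{container}/{obj} with an auth token."""
    url = "%s/%s/%s" % (storage_url.rstrip("/"), container, obj)
    headers = {"X-Auth-Token": token}
    return ("PUT", url, headers)

# Placeholder account URL and token, as returned by the auth service.
method, url, headers = put_object_request(
    "https://swift.example.com/v1/AUTH_demo", "backups", "db-2014-08-05.tar.gz",
    "AUTH_tk_example")
```

Retrieving the object later is the same URL with GET, which is what makes object storage so easy to consume from cloud-native applications.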

Note that there are alternatives to using Swift with your OpenStack deployment. One such alternative is Ceph.


Topics: OpenStack, Cloud Platform, Private Cloud, enterprise cloud

"The Security Group Does Not Exist" - Working with the AWS APIs at Scale

by Igor Savchenko on Jul 30, 2014 7:00:00 AM

At Scalr we provide SaaS and on-premises software to manage cloud infrastructure (and it’s open-source), so we end up making lots and lots of API calls to the AWS APIs.

What’s great about scale is that when you’re making thousands of API calls a day, events with a 0.1% probability of occurring tend to happen multiple times a day, sometimes resulting in surprise, confusion, and hair pulling! One of those low-probability events is the subject of today’s blog post.

Let’s start with a quick test, shall we? Can you spot what’s wrong with this code?

import boto.ec2

# Connect to EC2 (credentials are in the environment!)
conn = boto.ec2.connect_to_region("us-east-1")

# Create a new SG
security_group = conn.create_security_group("test-sg", "Just making a point!")

# Define SG rules: open HTTP and HTTPS to the world
ip_rules = [("tcp", 80, "0.0.0.0/0"), ("tcp", 443, "0.0.0.0/0")]

# Update the SG with the rules
for protocol, port, network in ip_rules:
    security_group.authorize(protocol, port, port, network)

If you can’t: read on!

The AWS APIs Are Fundamentally Eventually Consistent

When you hit the AWS API to create a security group, or allocate an Elastic IP, the response you get usually includes an ID for the resource you just created — and when it doesn’t, you get an error message explaining what went wrong, so you can fix your API call.

Now, popular belief (and the AWS documentation) suggests that as soon as you have the resource ID, you can make further API requests against it, like adding security rules, or associating an EIP, or adding tags to an instance. And in most cases, this will work.

But the truth is that the AWS APIs are in fact eventually consistent: the mere fact that you have a resource ID does not guarantee that the underlying resource actually exists.

In practice, this means that once in a while, your API call will return an ID that you can’t use just yet, because the resource doesn’t exist. And if you nonetheless try to use it, you’ll get an error message, like:

The security group 'sg-xxxxxxxx' does not exist

The allocation ID 'eipalloc-xxxxxxxx' does not exist

The instance ID 'i-xxxxxxxx' does not exist

Of course, when you retry the call a few minutes later — after you’ve dug through your logs — the security group is there, and so is the Elastic IP, and you’re left wondering: “Oh AWS, why can’t I use the security group I just created?”
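A common mitigation for this kind of failure (a hedged sketch, not necessarily the exact approach the full post recommends) is to retry the follow-up call with exponential backoff until the resource has propagated:

```python
import time

def retry_on_eventual_consistency(call, attempts=5, delay=1.0):
    """Retry an API call that may fail because the resource it
    references hasn't propagated yet. Sketch only: real code should
    catch boto's specific error and match on its error code (e.g.
    InvalidGroup.NotFound) instead of every exception."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # still failing after all attempts: give up
            time.sleep(delay * (2 ** attempt))  # exponential backoff
```

With a wrapper like this, the `security_group.authorize(...)` loop from the quiz above would simply wait out the propagation delay instead of crashing on “The security group does not exist”.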


Topics: API, Technical, Tips, Amazon

Back To Basics: All the Wrong Reasons To Discard Cloud (And A Right One To Adopt It)

by Thomas Orozco on Jul 23, 2014 9:00:00 AM

Since Scalr is a Cloud Management company, we often blog about how Cloud Management can increase the value you get out of your cloud (in the IaaS sense of the term).

But how do you decide to use cloud in the first place? Our experience has been that the reasons for organizations to adopt or discard cloud are sometimes quite nebulous. This post seeks to provide clarity.

AWS is the Largest Cloud, but it isn’t the Only Cloud

Cloud infrastructure was invented by Amazon when the company launched its AWS (Amazon Web Services) offering. AWS’s value proposition was (and still is) characterized by:

  • Trading capital investments (CAPEX) for operational expenditure (OPEX), thanks to a pay-per-use billing model

  • Consolidating workloads and increasing utilization, thanks to virtualization that enables a vast range of diverse instance types

  • Providing self-service and instant access to infrastructure resources, thanks to a UI and an API that make it possible to provision instances in minutes, and thanks to a billing model based on hourly increments.

Evidently, these are tradeoffs, not straight up benefits. In fact, your organization might decide not to adopt cloud specifically because you have existing capital investments (CAPEX) you intend to leverage, or because your high-performance computing (HPC) workloads can’t afford to run on virtualized hardware.

But fortunately, you don’t have to throw the baby out with the bathwater! There are options that allow you to “unbundle” this value proposition.


Topics: API, AWS, Cloud Management, Private Cloud, cloud adoption
