Eucalyptus – cloud introduction and auto-scaling tutorial

In this article, I will show how to build a very simple auto-scaling system on a Eucalyptus cloud using the wonderful Eucalyptus FastStart image. Afterwards you will appreciate how easy and configurable Eucalyptus is when it comes to running customization scripts on systems that are booted dynamically by auto-scaling triggers (like low CPU, RAM, etc.).

A little history: last year (2014), HP acquired a company called Eucalyptus, which I must admit surprised me after spending so much time with OpenStack. So I tried to get an idea why this move happened and what the main differences are that immediately come to mind when comparing these two.

So let me walk with you through a first example exposure to Eucalyptus.



  1. Physical system with Intel VT-x or AMD-V virtualization support on the CPU
  2. Virtual server running in a hypervisor that supports nested virtualization (KVM or VMware)

The target requirements

1) Have a cloud system with the capability to deploy a server quickly
2) Test basic systems like load-balancing
3) Check the network forwarding inside the cloud
4) Demonstrate the auto-scaling system of Eucalyptus on an example server system

LAB IP setup

Dedicated VLAN or switch with IPs as follows:
– m0n0wall router
– my laptop system IP
– CentOS used for the embedded Eucalyptus deployment
– 55 : public IP range for instances
– 100 : private IP range for instances

My LAB is basically running only virtualized, using a VMware Workstation with two interfaces:

vmnet0 (host-only network) – CentOS 6 with Eucalyptus

The virtual Eucalyptus presentation server running CentOS, and a small virtual network on the vmnet0 interface


Step 1: Installing the package from Eucalyptus

First, this is the install log. I do not want to go over all the details, as there are many interactive prompts like this one that are simply too boring to note, but they display nicely.

However there are more interesting parts to check:

What will be interesting for us during the wizard is setting the public and private IP ranges; in my lab I used these:

Then, of course, on the question whether we want the optional load balancers, the answer should always be YES, as this is what we are interested in 🙂

Step 2: Install complete

After the installation is complete, you will see something like this after an hour of staring at your coffee:


Step 3: Running the tutorials… no, really, this thing has tutorials!

At this point I am genuinely surprised that this thing is actually user-friendly!

Step 3a: Listing tutorial

First, log in to Eucalyptus:

Then describe images:

Now let’s import an additional Fedora cloud image from the internet:

And install the image to the cloud with the following command:
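The exact wizard output is not reproduced here, but with euca2ools 3 the download-and-install step could look roughly like this (the image URL and bucket name are assumptions for illustration):

```shell
# Download a Fedora 20 cloud image (example URL, adjust to a current mirror)
curl -O https://download.fedoraproject.org/pub/fedora/linux/releases/20/Images/x86_64/Fedora-x86_64-20-20131211.1-sda.raw.xz
xz -d Fedora-x86_64-20-20131211.1-sda.raw.xz

# Bundle, upload and register the image in one step
euca-install-image -i Fedora-x86_64-20-20131211.1-sda.raw \
    -n Fedora20 -r x86_64 -b fedora-images --virtualization-type hvm
```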

If I check now in the webGUI, there is a new image available called Fedora20.

WebGUI NOTE: Access to the webGUI runs on port 8888, so I will use my , the account is “eucalyptus”, the username “admin” and the password “password”.

Eucalyptus WebGUI, new Fedora20 image loaded


Next, the tutorial will show you how to change this image from private to public (so that all cloud users can deploy it), which can be achieved with this command:
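A hedged sketch of that command, using the image ID that appears later in this tutorial:

```shell
# Grant launch permission to all users, making the image public
euca-modify-image-attribute -l -a all emi-0676ae2c
```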

REMARK: There is a bug in the tutorial and the command there was missing the image ID.

You can see again the images also with the euca-describe-images command.

Now the last part is launching an instance with the image; this can simply be done with this command:
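A minimal sketch of the launch, assuming the my-first-keypair keypair referenced later in this tutorial already exists:

```shell
# Launch one t1.small instance from the Fedora20 image
euca-run-instances emi-0676ae2c -k my-first-keypair -t t1.small
```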

REMARK: By default there is already one instance running since installation that is eating 2GB of RAM. So your second instance may fail with euca-run-instances: error (InsufficientInstanceCapacity): Not enough resources. If this happens, go to the Eucalyptus webGUI and terminate the default instance:

Terminate default instance running since install!


If you are doing this via the tutorial, you will get a nice extra output like this:

Step 3b: More tutorials are missing

So what to do next?

Step 4: With tutorials missing, let’s play independently

Now this is where the fun starts: we have Eucalyptus, we have an image, but not much to do, as the tutorials are not yet really finished (December 2014/January 2015). So let's try going independently and play around with Eucalyptus. I will not go into the API or AWS-style development in this tutorial; instead I will go for the auto-scaling feature.

But first, let's mess around and get a feeling for working with Eucalyptus a bit more, so let's list the basic commands for checking Eucalyptus without the webGUI:

Prerequisite: Log in to Eucalyptus, which inside the FastStart image you can do via the provided source file with this command:
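On the FastStart image the admin credentials are typically dropped in a file you can source (the path below is a common default; adjust if yours differs):

```shell
# Point euca2ools at your cloud by loading the admin credentials
source /root/eucarc
```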

euca-describe-images – shows all the system images loaded in the Eucalyptus storage

euca-describe-keypairs – shows all the keypairs that Eucalyptus has in storage (to use on the systems after launching an instance)

euca-describe-groups – shows the firewall rules per security group; currently only the default one exists

eulb-describe-lbs – shows the configured load-balancers

euscale-describe-launch-configs – describes the configuration scripts for instances

In addition, please keep these commands in mind, as they are the best commands for troubleshooting during this tutorial. I give no example output here because at this point in the tutorial they would mostly return empty results.





Step 5: Start preparations before auto-scaling (security groups)

Here we will create a security group called “Demo” that will allow basically the same things as the default group, but also port 443. So in total: ICMP, TCP/22, TCP/80, TCP/443.
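A sketch of the group creation with euca-create-group and euca-authorize (the description text is my own):

```shell
# Create the group and open ICMP plus TCP 22, 80 and 443 to everyone
euca-create-group -d "Demo security group" Demo
euca-authorize -P icmp -t -1:-1 -s 0.0.0.0/0 Demo
euca-authorize -P tcp -p 22  -s 0.0.0.0/0 Demo
euca-authorize -P tcp -p 80  -s 0.0.0.0/0 Demo
euca-authorize -P tcp -p 443 -s 0.0.0.0/0 Demo
```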

If we now look again at all the security groups, we will see both the default and the new one (you can also double-check via the webGUI):

Step 6: Create a load-balancer


Sometime in the future, you will probably need to troubleshoot the load-balancer, and for that you need SSH access to the load-balancer instance. The problem is that by default Eucalyptus doesn't give SSH keys to the load-balancer instances, so we need to take some steps to tell Eucalyptus to deploy these SSH keys where needed. So first generate a key with euca-create-keypair:
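For example (saving the private key under /root, matching the keypair name seen later on the load-balancer instance):

```shell
# Generate the keypair and protect the private key file
euca-create-keypair euca-admin-lb > /root/euca-admin-lb.pem
chmod 600 /root/euca-admin-lb.pem
```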

The cloud property ‘loadbalancing.loadbalancer_vm_keyname’ governs the keypair assignment, so we modify it like this:
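The modification can be done with euca-modify-property, roughly like this:

```shell
# Inject the euca-admin-lb keypair into future load-balancer VMs
euca-modify-property -p loadbalancing.loadbalancer_vm_keyname=euca-admin-lb
```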


To create a load-balancer, we will use the eulb-create-lb command. The parameters are very simple at this point, as we will only use HTTP load-balancing with default settings (more information about the settings can be found in the --help of the command, or in the Eucalyptus documentation).
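A sketch of the call, assuming a single availability zone named "default" (check yours with euca-describe-availability-zones):

```shell
# Plain HTTP load-balancing on port 80, front and back
eulb-create-lb DemoLB -z default \
    -l "lb-port=80, protocol=HTTP, instance-port=80, instance-protocol=HTTP"
```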

You can also again check the load-balancer with eulb-describe-lbs

Every load-balancer needs a health-checking mechanism; we can add one using this command:

The above command creates a load-balancer check that polls the URL /index.html every 15 seconds with a timeout of 30 seconds; two consecutive failures mark the server down, and two consecutive successful tests mark the server back up.
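Put together, the health check described above could be configured roughly like this:

```shell
# Poll /index.html every 15s; 30s timeout; 2 failures = down, 2 passes = up
eulb-configure-healthcheck DemoLB --target "HTTP:80/index.html" \
    --interval 15 --timeout 30 \
    --unhealthy-threshold 2 --healthy-threshold 2
```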

Step 7: Server configuration scripts after booting (in auto-scaling)

If we want to do the auto-scaling demo, the freshly booted servers need some way to prepare themselves for real work after boot. Because we are working with HTTP servers here, we need a small script that installs the Apache web server and configures a basic index.html webpage.

This is a script that we will use as part of a “launch-configuration” to do example configuration of a server instance after start:
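A minimal sketch of such a script, assuming the Fedora image uses yum and names the Apache package httpd (the hostname pattern matches what we will see on the instance later):

```shell
#!/bin/bash
# Install Apache and publish a page identifying this instance
yum -y install httpd

# Derive a hostname like instance-192-168-125-74 from the local IP
IP=$(ip -4 addr show eth0 | awk '/inet /{print $2}' | cut -d/ -f1)
hostname "instance-${IP//./-}"

echo "<html><body><h1>Hello from $(hostname)</h1></body></html>" \
    > /var/www/html/index.html

# Start Apache now and on every boot
service httpd start
chkconfig httpd on
```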

Take this script and save it as the file /root/demo-launch-configuration-script.sh

Now let's take this script and make it part of a DemoLC launch configuration in Eucalyptus with euscale-create-launch-config; we will use our Fedora20 image ID of emi-0676ae2c.
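A sketch of the launch-configuration creation, wiring in the script, keypair and security group from the previous steps:

```shell
# DemoLC: Fedora20 image, t1.small, our boot script as user-data
euscale-create-launch-config DemoLC \
    --image-id emi-0676ae2c --instance-type t1.small \
    --key my-first-keypair --group Demo \
    --user-data-file /root/demo-launch-configuration-script.sh
```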

Now have a look at the launch configuration with euscale-describe-launch-configs, where our DemoLC is visible:

Step 8: Creating the auto-scaling group


Before we go further I want to present something that was a problem for me when I first attempted to create this auto-scaling system. My problem was that despite having enough RAM in my Eucalyptus host (~8GB), I was not able to start more than 2 instances because of resource quotas, and the auto-scaling was simply failing quietly in the background. Therefore you should first manually check that you can create at least 3 instances in the dashboard/webGUI (the one running on port 8888).

You can either start creating new instances via the webGUI interface and wait until you hit this error:

Eucalyptus resource limit error after unsuccessful instance launch.


The problem was that I had enough RAM, definitely enough to run several t1.small instances (256MB RAM each), but something was blocking me. What I found out is that each Eucalyptus node (i.e. a server registered in the control system as capable of hosting instances) has quota limits that can be viewed with the euca-describe-availability-zones verbose command. This is what I got when I had my problems:

Notice the free and max columns; this is the maximum number of instances your Eucalyptus node will allow you to launch! And a maximum of 1 instance is definitely not enough for the auto-scaling tutorial we are running here. So here is how to extend this limit, but note that you are responsible for managing your own RAM limits when you do this.

EDIT the file /etc/eucalyptus/eucalyptus.conf and look for the parameter “MAX_CORES=0“. Increase the value, then restart the Eucalyptus processes with # service eucalyptus-cloud restart or # service eucalyptus-nc restart, or reboot.
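In short (config fragment plus restart; pick the restart matching your setup):

```shell
# /etc/eucalyptus/eucalyptus.conf
MAX_CORES=4

# then restart the Eucalyptus services
service eucalyptus-cloud restart
service eucalyptus-nc restart
```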

For example, I changed it to MAX_CORES=4, and as such I get the following availabilities in the cloud:


Now we are going to prepare an auto-scaling group that will drive the starting and shutdown of servers as needed. The command used is euscale-create-auto-scaling-group, and we will reference both the load-balancer DemoLB and the launch configuration DemoLC that we created in previous steps.
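A sketch of the group creation (the zone name "default" and the size limits are my choices for this demo):

```shell
# DemoASG: keep between 1 and 3 instances, start with 1
euscale-create-auto-scaling-group DemoASG \
    --launch-configuration DemoLC \
    --availability-zones default \
    --load-balancers DemoLB \
    --min-size 1 --max-size 3 --desired-capacity 1
```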

You can then verify the auto-scaling group's existence with the euscale-describe-auto-scaling-groups command as below:

Step 9: Creating scaling-policy for both increase and decrease of instance counts

With the following euscale-put-scaling-policy command we will define a policy for changing the scaling capacity; as the name suggests, in the second step we will make this policy trigger based on CPU alarms.
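A sketch of the scale-up policy (the policy name is my own; the command prints an ARN that the alarm in the next step will reference):

```shell
# Add one instance to DemoASG whenever this policy fires
euscale-put-scaling-policy DemoAddNodesPolicy \
    --auto-scaling-group DemoASG \
    --adjustment=1 --type ChangeInCapacity
```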

Now the second part is to create an alarm and monitor the CPU usage. For that we will use the euwatch-put-metric-alarm command, and at the end, in --alarm-actions, we will reference the auto-scaling policy from the previous command.
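A sketch of the scale-up alarm; replace the placeholder with the ARN printed by the previous command:

```shell
# Fire when average CPU in DemoASG stays at/above 50% for one 300s period
euwatch-put-metric-alarm DemoAddNodesAlarm \
    --metric-name CPUUtilization --namespace "AWS/EC2" \
    --statistic Average --period 300 --evaluation-periods 1 \
    --threshold 50 --comparison-operator GreaterThanOrEqualToThreshold \
    --dimensions "AutoScalingGroupName=DemoASG" \
    --alarm-actions <policy-ARN>
```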


The differences are:
– DemoDelNodesAlarm (changed name)
– --adjustment=-1 (to decrease the number of instances)
– --threshold 10 (to check when CPU utilization on the instance is below 10%)
– --comparison-operator LessThanOrEqualToThreshold (to check below the 10% CPU threshold)
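Applying those differences, the scale-down pair could look like this (the policy name is my own):

```shell
# Remove one instance from DemoASG when load is low
euscale-put-scaling-policy DemoDelNodesPolicy \
    --auto-scaling-group DemoASG \
    --adjustment=-1 --type ChangeInCapacity

# Fire when average CPU stays at/below 10% for one 300s period
euwatch-put-metric-alarm DemoDelNodesAlarm \
    --metric-name CPUUtilization --namespace "AWS/EC2" \
    --statistic Average --period 300 --evaluation-periods 1 \
    --threshold 10 --comparison-operator LessThanOrEqualToThreshold \
    --dimensions "AutoScalingGroupName=DemoASG" \
    --alarm-actions <scale-down-policy-ARN>
```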

Step 10: Creating a termination policy

One thing that we omitted in the previous scale-down policy is saying which instance should be terminated from the group of running instances. For now we will simply choose one of the preset options, called OldestLaunchConfiguration. During scale-down, this method shuts down the instance that has the oldest version of the configuration script from Step 7 (the expectation being that you will update these scripts over time).
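Setting it on the existing group could look like this:

```shell
# Terminate the instance with the oldest launch configuration first
euscale-update-auto-scaling-group DemoASG \
    --termination-policies "OldestLaunchConfiguration"
```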

REMARK: This method actually has one additional use-case. Imagine that you are doing an application update (for example, a new version of the webpage rolled out to the instances). For something like this you can modify the server configuration script from Step 7; then simply increasing the load will launch a new auto-scaled instance with the new webpage, and after a while, when the system scales the instance cluster back down, it will shut down specifically those servers running the oldest version of the server configuration script. This way you can technically do rolling updates across all your instances as a "trick".

Step 11: Verification that auto-scaling is running the first instance

Ok, so everything is configured; the auto-scaling group should have already created the initial instance. At this point I will show the webGUI view of the running instances, but I really recommend you re-run all the commands from Step 4 to give yourself the full view of how the auto-scaling and instance status look from the console commands perspective.

If you go to the webGUI and enter the “SCALING-GROUPS” view, you will see two groups exist. One is an internal system group for load-balancer resources, which is a result of your DemoLB, but you do not have to care about this; the second, however, is your DemoASG, and you should see the number of instances at 1! This is the view:

DemoASG showing the initial instance running!


Next we will check the details: select the gear icon and select View details.


In this view, select the “Instances” tab and you should see your auto-scaled instance ID i-db9ead12:

Detail of the initial instance ID


Now that we have our ID, let's check the instance details in the main “Instances” view (go back to the dashboard and select Running instances there):

Finding our auto-scaled instance via ID in the running instance list (note the IP address)


Ok, now we have an IP address, let's connect to it! If you followed my steps from the beginning, you should have the my-first-keypair.pem file in the /root directory, so you can use it to connect to the Fedora image like this:
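For example (Fedora cloud images usually log in as the "fedora" user; substitute the instance's public IP):

```shell
ssh -i /root/my-first-keypair.pem fedora@<instance-public-IP>
```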

Immediately notice that the hostname of the target system is “instance-192-168-125-74”, which means that our configuration script has worked!!! It may take some time to finish the whole configuration (like the Apache installation), but let's check whether the HTTP service is already running with the netstat command.
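For example, listening TCP sockets can be checked like this:

```shell
# Look for httpd bound to port 80
netstat -tlpn | grep ':80'
```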

As you can see, HTTP is running, so let's point our browser at it (using either the internal OR the external IP) and check what we find:


Access to instance working via (internal IP), including configuration script that configured a webpage!

Access to instance working via (external IP), including configuration script that configured a webpage!


Now you should also check access via the load-balancer. If everything works, you should reach the webpage via the load-balancer as well. First check the IP of the load-balancer via the webGUI: go to Running instances again and select the details of the running load-balancer instance.

Load-Balancer instance public and private IPs


So to test access, point your browser to the public IP of the load-balancer, and you should see access to one of the running instances, in this case the only one:

Access to instance web service VIA load-balancer with public IP


BONUS step: Troubleshooting the load-balancer, if needed

When I tried accessing the test webpage via the load-balancer for the first time, it was not working. After double-checking everything, I concluded that something must be wrong with the Eucalyptus load-balancers used in the auto-scaling. But how to troubleshoot this? Well, from the Eucalyptus system you can only check whether the load-balancer considers the server's HTTP service alive or not with the eulb-describe-instance-health command. This was precisely my problem: the server (despite running HTTP and the test page) was considered “OutOfService”.
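The check itself is a one-liner:

```shell
# Shows InService/OutOfService per registered instance
eulb-describe-instance-health DemoLB
```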

Ok, so we need to check the load-balancer operation, and for that we need to enter it. First list the instances and look for the load-balancer; in the webGUI you can find the load-balancer among the running instances, then select the detailed view:

load-balancer instance SSH access details


Notice the instance ID i-b5d6412a in the GUI; we can find this also in the console instances view:

Right behind the “running” word is the key pair that the load-balancer instance is using, which is of course the euca-admin-lb that we created in the optional section of Step 6. If you didn't do this, you probably see “0” instead of a key, which means there is no SSH keypair deployed in the load-balancer and you cannot connect to it now! However, if you have done the optional part of Step 6, you can now connect to the load-balancer with SSH like this:

Once inside the load-balancer, the main cause for me was that NTP was not synchronized.

Here are the logs: /var/log/load-balancer-servo/servo.log. The error that pointed me to NTP was:

Step 12: Verify the auto-scaling work with CPU stress tests

Now we have auto-scaling configured, with policies to increase and decrease the number of instances based on CPU load, so let's test it. Right now our group has a minimum of 1 running instance; let's try to push it to 2 by loading the CPU up a little.

To have a tool to push CPU usage up, install “stress” into the instance:

Now, have a look at the auto-scaling group in the webGUI; there is a default cooldown period in seconds between scaling events, therefore we must produce CPU usage above 50% for more than 300 seconds in order to trigger. And for that we use the stress tool like this (running from inside the instance):
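For example, to keep two CPU workers busy for 10 minutes, comfortably past the 300-second window:

```shell
stress --cpu 2 --timeout 600
```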

This will generate a CPU load inside the instance that should trigger a scaling event.

An alternative, if stress is not generating enough CPU load, is to use superPi or, on 64-bit Linux only, this version of the y-cruncher pi benchmark.

Watching the triggers and alarms status:

Specifically, if you want the history of the data that the alarms use as “input”, you can go directly for the CPUUtilization metric like this:

As a worst-case scenario, if you have a problem triggering the alarms, you can do it manually by setting the alarm state to “ALARM”:
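A sketch of the manual trigger (the alarm name must match the one you created earlier):

```shell
# Force the alarm into ALARM state so the scaling policy fires
euwatch-set-alarm-state DemoAddNodesAlarm \
    --state-value ALARM --state-reason "manual scaling test"
```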

If successful, you will see two INSTANCES, one old and one new, launched under the auto-scaling group:

Auto scaling group triggered INSTANCE increase to 2


The details of the two instances now running


In summary

Now that everything is finished and the auto-scaling is working, you technically have something like the diagram below. To test/verify, I encourage you to use all the commands presented during the tutorial (euca*, eulb*, euwatch*) to verify the functionality. I understand that there are probably many other questions here, specifically about the load-balancer internals, but that calls for actually starting to learn Eucalyptus for production deployment, which is beyond the scope of this quick introductory article. Feel free to check the external links below for more information on Eucalyptus (especially the administrator guide).

The final auto-scaling architecture at the end of this tutorial


External resources:

Eucalyptus Documentation – https://www.eucalyptus.com/docs/eucalyptus/4.1.0/index.html

Eucalyptus Administrator Guide – https://www.eucalyptus.com/docs/eucalyptus/4.1.0/admin-guide/index.html

To meet other people and get community support, join the #eucalyptus channel on the Freenode IRC network.

Peter Havrila
