Category Archives: Infrastructure

The kernel panics from 2014 on our OpenStack Hypervisors

Having a bit of time on this last day of the year, I decided to document two of the more challenging issues we ran into in 2014: in both cases, hypervisors spontaneously started to lock up or reboot.
Note that we are running Scientific Linux 6.5 and OpenStack Icehouse from RDO.

Issue 1: XFS kernel panic / deadlock

Symptom:

These lockups, sometimes resulting in a kernel panic, started to occur when we began running bigger ELK (Elasticsearch, Logstash, Kibana) instances on our hypervisors.
When a hypervisor or instance locked up, the following could be found in the hypervisor's kernel log:
XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)

Cause:

The message pointed to memory allocation issues. However, over 100 GB of free memory was available on the hypervisor.
With some help from the XFS IRC channel, the culprit was found:
the RAW backing file for the instance consisted of too many extents.
Because the data in the ELK instances grew in very small increments (compared to, for example, MySQL, which allocates big blocks of data at once), the RAW backing file ended up with lots and lots of extents.
Apparently XFS started to have trouble keeping track of all those extents
(even the xfs_bmap command took quite some time to complete…)
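
For reference, a quick way to get a feel for how fragmented a backing file is (the path below is only a placeholder for an instance disk):

    # each output line of xfs_bmap is one extent, so this roughly counts extents
    xfs_bmap /var/lib/nova/instances/<uuid>/disk | wc -l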

Solution:

We changed to pre-allocated RAW files for this instance type.
A pre-allocated file is allocated in one go, so it consists of just a few big extents instead of the many small extents you get when the file grows bit by bit.
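
A minimal sketch of what pre-allocation boils down to (file name and size are placeholders; nova's preallocate_images = space option can achieve something similar by fallocating the instance disks for you):

    # allocate the full size up front so XFS creates a few large extents
    fallocate -l 100G disk.raw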

Issue 2: Bridge/netfilter kernel panic

Symptom:

The issue started without any apparent reason: a random hypervisor rebooted once every few days. There was nothing to see in the kernel log, and since we reboot automatically after a kernel panic, no trace of the panic itself was left.

Cause:

Finding the cause required us to first capture some more information.
We enabled kernel crash dumps (just enable the kdump service; a sketch of the commands is at the end of this section) and then had a kernel log with the important part:
<4>RIP: 0010:[<ffffffffa048893d>]  [<ffffffffa048893d>] br_nf_pre_routing_finish+0x18d/0x350 [bridge]
<1>RIP  [<ffffffffa048893d>] br_nf_pre_routing_finish+0x18d/0x350 [bridge]

This points to a bridge/netfilter issue.
This specific message is not documented, but other people have reported kernel panics when information (e.g. bridge info) is missing from a packet.
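
For completeness, enabling the crash dumps on SL 6 comes down to roughly the following (a minimal sketch, assuming the kexec-tools package is installed and a crashkernel= reservation is configured for the kernel):

    # enable and start the kdump service so a panic leaves a crash dump behind
    chkconfig kdump on
    service kdump start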

Solution:

Dropping all traffic not specifically allowed by Neutron in the iptables FORWARD chain fixed the issue (an illustration of such a rule follows below). Although this theoretically should not have made a difference (no packets should ever hit such a rule), we have not had any reboots since the rule was applied.
Note that we do not use namespaces; it is quite possible that using namespaces also prevents this issue from happening.
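
As an illustration only (the exact rule depends on your own chain layout), a catch-all rule appended after the Neutron-managed rules in the FORWARD chain could look like:

    # drop whatever the Neutron rules above did not explicitly accept
    iptables -A FORWARD -j DROP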

OpenStack Operator tool: Novacek

About

As OpenStack operators, we missed some tooling for troubleshooting and status checking.
To fill this gap we created novacek, a Python tool that gives us this information:
https://github.com/spilgames/novacek
The tool uses the OpenStack libraries to gather the necessary information.

Examples

Typical things we use this tool for:

  • Get the status of all instances when you log in to a hypervisor
  • Mail all people who have instances on a hypervisor (very basic at the moment; it requires the email address in the tenant description field)
  • Check whether the hypervisor setup (VLANs) is correct (this also requires the easyovs library)

Development

Although the tool is fully functional, we still have some things we would like to add.
To make sure people are not duplicating this functionality, we chose to publish the tool at an early stage.
Feel free to extend and improve it by submitting a merge request.

Using Ceilometer with Graphite

At Spil Games we love OpenStack and we love metrics.
We tried to run Ceilometer in the past, but we experienced performance issues. Since we already use Graphite heavily to store metrics, we thought it would be a good idea to push Ceilometer metrics straight into Graphite. The data is sent directly from the compute node to the Graphite backend, so there are no bottlenecks. The quick and dirty proof-of-concept code provided here works great in our environment 😉 Note that this solution ONLY offers some compute graphs and does nothing more than that.

What you get:

Per-VM graphs, e.g. the CPU usage of all machines on a single hypervisor:

[screenshot from graphite.spilgames.com: per-VM CPU usage on one hypervisor]

Installation

This installation is tested on and based on SL 6 and the Icehouse RDO packages.
These steps need to be performed on every OpenStack hypervisor you want graphs from.

  • Install the openstack-ceilometer-compute package.
  • Configure ceilometer.conf:
    – Make sure to have the rabbitmq and keystone settings configured.
    – Add the graphite settings: prefix and append_hostname

Example:
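
A hypothetical ceilometer.conf fragment (the exact graphite option names and their section should be taken from the publisher code in the repo; hostnames and credentials are placeholders):

    [DEFAULT]
    rabbit_host = rabbit001.example.com
    rabbit_userid = ceilometer
    rabbit_password = secret
    # the usual keystone credentials for the compute agent go here as well
    # graphite settings mentioned above (naming and section are illustrative)
    prefix = ceilometer.
    append_hostname = true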

  • Add the graphite publisher to the Ceilometer entry points,
    e.g. in /usr/lib/python2.6/site-packages/ceilometer-2014.1.1-py2.6.egg-info/entry_points.txt (see the fragment below).
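    The relevant fragment of entry_points.txt would then look roughly like this (keep the existing entries; the module path and class name are assumptions and must match the publisher you install):

        [ceilometer.publisher]
        graphite = ceilometer.publisher.graphite:GraphitePublisher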

  • Clone our git repo for the example pipeline.yaml and graphite publisher.
  • Copy the pipeline.yaml to /etc/ceilometer/pipeline.yaml
    – Make sure you edit the publishers in the yaml so samples are sent to the correct Graphite server (see the fragment below).
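    The publisher URL in pipeline.yaml then points at your Graphite host; a hypothetical fragment (the scheme matches the entry point name, host and port are placeholders):

        publishers:
            - graphite://graphite001.example.com:2003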

  • Install the graphite publisher.
    – Copy graphite.py to /usr/lib/python2.6/site-packages/ceilometer/publisher/graphite.py (a simplified sketch of such a publisher is shown below).
  • Restart the openstack-ceilometer-compute agent.
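
For reference, a stripped-down illustration of what such a Graphite publisher can look like; this is not the code from our repo, just a sketch assuming the Icehouse publisher interface (PublisherBase.publish_samples) and Graphite's plaintext protocol:

    # Simplified sketch, NOT the publisher from the spilgames repo.
    import socket
    import time

    from ceilometer import publisher


    class GraphitePublisher(publisher.PublisherBase):
        """Push samples to Graphite's plaintext protocol (default port 2003)."""

        def __init__(self, parsed_url):
            # parsed pipeline.yaml URL, e.g. graphite://graphite001.example.com:2003
            self.host = parsed_url.hostname or 'localhost'
            self.port = parsed_url.port or 2003
            self.prefix = 'ceilometer.'  # cf. the prefix setting in ceilometer.conf

        def publish_samples(self, context, samples):
            now = int(time.time())
            lines = []
            for s in samples:
                # one metric path per sample: <prefix><resource_id>.<meter name>
                path = '%s%s.%s' % (self.prefix, s.resource_id, s.name)
                lines.append('%s %s %d\n' % (path, s.volume, now))
            sock = socket.create_connection((self.host, self.port))
            try:
                sock.sendall(''.join(lines))
            finally:
                sock.close()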

You should start seeing graphs now.

Getting it upstream

Since we already have some code, we decided to put it on the web.
There is a blueprint to get this functionality officially upstream, but there is still some discussion going on there.

 

OpenStack Swift & many small files

At Spil Games we have been running Swift for more than two years now, hosting over 400 million files with an average size of about 50 KB per object. With a replica count of three, that means 1.2 billion files are stored on the object servers. Generally speaking, Swift has turned out to be a solid object storage system. We did, however, run into some performance issues. Below we'll describe how we analyzed and solved them.

Continue reading

OpenStack with Open vSwitch and Quantum (Folsom) on Scientific Linux

We decided to run OpenStack on Scientific Linux 6, a Red Hat derivative. Although this was not a reference platform at the time (this was before Red Hat announced it would support OpenStack), we decided not to go with Ubuntu for various reasons. Existing knowledge of, and the availability of, an in-house infrastructure tailored to Red Hat based machines were the most important deciding factors.

With the availability of OpenStack packages through the EPEL repository, running OpenStack on Scientific Linux became a manageable endeavour. Installing was basically just a matter of following the install guide.

Continue reading