Thursday, February 18, 2016

Re-using arguments and environment variables for autotools' configure script

The thing I'm going to describe is by no means difficult; however, it's not very obvious, and I was actually pretty surprised that I only thought of it after so many years of working with autotools.

From time to time I find myself needing to re-run the configure script, either because I need to add or remove some option, or to run it under sh -x for debugging purposes, or under scan-build, for example.

Usually I have a number of arguments and environment variables defined, and I don't keep them in my head. I just hit ctrl-r, find the last execution of configure with all the stuff included, and continue from there. But if I don't touch it for a while, it falls out of the HISTFILE. In that case I have to go to the config.log file and copy/paste stuff from there. However, copy/pasting sucks and, moreover, config.log doesn't preserve the proper quoting.

Apparently, there's a better solution: the config.status script can provide everything that's needed.

$ ./config.status --config
'--with-hal' '--without-uml' '--without-polkit' 'CFLAGS=-g -I/usr/local/include' 'LDFLAGS=-L/usr/local/lib'
$

For example, to re-run the very same configuration under scan-build, just do:

$ eval scan-build ./configure `./config.status --config`
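The eval is what makes the quoting survive: config.status emits shell-quoted words, and without eval the shell would split 'CFLAGS=-g -I/usr/local/include' at the space and keep the literal quote characters. A quick sketch, with the args variable standing in for the config.status output:

```shell
# Stand-in for the output of `./config.status --config`:
args="'--with-hal' 'CFLAGS=-g -I/usr/local/include'"

# eval makes the shell re-parse the quoted words, so we get exactly
# two intact arguments instead of three mangled ones:
eval set -- $args
echo $#      # 2
echo "$2"    # CFLAGS=-g -I/usr/local/include
```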

So it's pretty neat.

More info on the config.status scripts is available in its official documentation.

Wednesday, February 3, 2016

ZFS on Linux support in libvirt

Some time ago I wrote about the ZFS storage driver for libvirt. At that time it worked on FreeBSD only, because ZFS on Linux had some minor limitations, mainly lacking a few command line flags for generating machine-readable output.

As of version 0.6.4, ZFS on Linux supports everything we need, and I pushed the related libvirt changes today.

It will be available in 1.3.2, but if you're interested, you're welcome to play around with it right now by checking out libvirt from git.

Again, most of the usage details are available in my older post. Also, feel free to poke me either on twitter or through email novel@freebds.org (note the intentional typo). It could take some time to get a reply though.

Friday, November 27, 2015

Mutt is Slow

Recently it started bothering me that Mutt is very slow.

I use it with IMAP and have a number of mailboxes there, usually one mailbox per mailing list. Each of these mailboxes contains a relatively large number of messages; for example, FreeBSD's svn-ports-all mailbox currently holds 170,000+ messages for me.

It takes quite some time to open such a mailbox. The first measurements showed approx. 80-90 seconds. And it's not just the first pass: it happens every time I open the mailbox.

I noticed that mutt was using bdb to store the header cache and decided to try another backend, Tokyo Cabinet. It improved things a little, but not significantly: down to about 60 seconds.

The other thing I tried was a per-mailbox header cache file instead of a common one. If you point header_cache in your ~/.muttrc at a directory, mutt uses per-mailbox cache files; if it points to a file, a single cache is used for everything. I didn't notice any performance change after this though.
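For reference, the directory variant looks like this in ~/.muttrc (the path itself is just an example; the directory has to exist):

```
# pointing header_cache at a directory gives one cache file per mailbox
set header_cache = ~/.mutt/cache/headers
```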

I was wondering if using some sort of memory-based filesystem would improve performance, but, unfortunately, the header cache size is approx. 2.5G and my total RAM is 4G, so it doesn't look like it'd make any sense.

I've made a call graph to get a better picture of what's going on:

Looking at this picture, it appears that the most time consuming operations of the mailbox opening are:

  • Iterating through IMAP server response (15.19% out of 74.64%)
  • Checking cache (33.32% out of 74.64%)
  • Sorting headers (16.6% out of 74.64%)

So it looks like the cache handling's contribution to the overall time is not as large as I thought, i.e. even if we removed that part completely, it would still take tens of seconds to open a mailbox, and that's no good.

So, actually, I have the impression that the current design, with its blunt fetching of all the messages, simply cannot cope with large amounts of data.

Probably it should do something like this:

  • Keep the cache only for the number of messages visible on the screen (maybe 3-5x that to be on the safe side, but still under 1000)
  • Display stuff from the cache right away, then pull the newer messages and rebuild the view
  • Lazy-load older messages as the user scrolls (not sure how doable that is with ncurses, though).

As even a PoC appears to be quite a time-consuming task, I'll probably look around for other terminal-based MUAs that can handle large amounts of email better.

Thursday, September 3, 2015

sriovmng: a tool to manage SR-IOV devices on Linux

For the last few months I've been working with SR-IOV devices on Linux, specifically with OpenStack. And while everything needed for day-to-day operations with SR-IOV devices can be done through sysfs, I have a hard time remembering all the proper paths there. So I decided to write a tool, called sriovmng, that would save me from direct sysfs operations.

This tool supports the following operations:

  • Listing all the SR-IOV interfaces
  • Querying information about a specific SR-IOV device: its device and vendor IDs, the number of VFs configured, and the VFs' PCI addresses
  • Querying an interface name by its PCI address and vice versa
  • Setting the number of VFs for a device
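Under the hood these operations map to the standard Linux SR-IOV sysfs files; a rough sketch (eth2 is a hypothetical interface name):

```shell
# List interfaces whose device exposes SR-IOV controls; prints
# nothing on a machine without SR-IOV hardware.
for f in /sys/class/net/*/device/sriov_totalvfs; do
    if [ -e "$f" ]; then echo "$f"; fi
done

# Querying and setting the number of VFs for the hypothetical eth2
# boils down to reads and writes like these:
#   cat /sys/class/net/eth2/device/sriov_numvfs
#   echo 4 > /sys/class/net/eth2/device/sriov_numvfs
```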

The tool is written in Python and is available here:

https://github.com/novel/sriovmng

It can be installed using python setup.py install. I haven't uploaded it to pypi yet because I need to add some unit tests, and I'd also appreciate somebody giving it some real testing. Please let me know if you'd like to test it and whether I can provide any help.

I have tested it myself with the ixgbe driver and it appears to work fine with this setup.

Sunday, June 7, 2015

OpenStack on FreeBSD/Xen Proof of Concept

In my previous post I described how to run libvirt/libxl on a FreeBSD Xen dom0 host. Today we're going a little further and running OpenStack on top of that.

Screenshot showing the Ubuntu guest running on OpenStack on the FreeBSD host.

Setup Details

I'm running a slightly modified OpenStack stable/kilo version. Everything is deployed on two hosts: controller and compute.

Controller

Controller host is running FreeBSD -CURRENT. It has the following components:

  • MySQL 5.5
  • RabbitMQ 3.5
  • glance
  • keystone through apache httpd 2.4 w/ mod_wsgi

Everything here is installed through FreeBSD ports (except glance and keystone) and doesn't require any modifications.

For glance I wrote rc.d scripts to have a convenient way of starting it:

(18:19) novel@kloomba:~ %> sudo service glance-api status
glance_api is running as pid 792.
(18:19) novel@kloomba:~ %> sudo service glance-registry status
glance_registry is running as pid 796.
(18:19) novel@kloomba:~ %>

Compute

Compute node is running the following:

  • libvirt from the git repo
  • nova-compute
  • nova-scheduler
  • nova-conductor
  • nova-network

This host is running FreeBSD -CURRENT as well. I also wrote rc.d scripts for the nova services, except nova-network and nova-compute, because I start those by hand and want to see their logs right on the screen.

Nova-network is running in FlatDHCP mode. For Nova I had to implement a FreeBSD version of linux_net.LinuxNetInterfaceDriver, which is responsible for bridge creation and for plugging devices into the bridge. It doesn't support vlans at this point though.

Additionally, I implemented NoopFirewallManager to be used instead of linux_net.IptablesManager, and modified nova to allow specifying the firewall driver to use.

A few more things I modified: I fixed the network.l3.NullL3 class's mismatched interface and changed virt.libvirt to use the 'phy' driver for disks in the libvirt domain XML.

And of course I had to disable a few things in nova.conf that obviously don't work on FreeBSD.

I hope to put everything together, upload the code to github, and create a wiki page documenting the deployment. It's definitely worth noting that things are very, very far from stable: there are tracebacks here and there, VMs sometimes fail to start, xenlight could for some reason start failing at VM startup, and so on. So if you're looking at this as a production tool, you should definitely forget about it; at this point it's just a thing to hack on.

Tuesday, June 2, 2015

libvirt/libxl on FreeBSD

A few months ago FreeBSD Xen dom0 support was announced. There's even a guide available on how to run it: http://wiki.xen.org/wiki/FreeBSD_Dom0.

I will not duplicate the stuff described in that document; I'll just suggest that if you're going to try it, it's probably better to use the emulators/xen port instead of compiling things manually from the git repo. And I'll share some bits that could save you some time.

X11 and Xen dom0

I wasn't able to make X11 work under dom0. When I run startx with x11/nvidia-driver enabled in xorg.conf, the kernel panics. I tried the integrated Intel Haswell video, but it's not supported by x11-drivers/xf86-video-intel. It does work with x11-drivers/xf86-video-vesa; however, the vesa driver locks the system up on shutdown, which triggers fsck on every next boot, and that's very annoying. Apparently this behavior is the same even when not running under Xen. I decided to stop wasting time trying to fix it and just started using the box in headless mode.

IOMMU

You should really not ignore the IOMMU requirement: check if your CPU supports it. If you boot the Xen kernel without IOMMU support, it will fail to boot and you'll have to perform some boot loader tricks to disable Xen and boot your system (i.e. do unload xen and unset xen_kernel). Just google your CPU name, e.g. 'i5-4690', and follow the link to ark.intel.com. Make sure it lists VT-d as supported under the 'Advanced Technologies' section, and make sure it's enabled in the BIOS as well.
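That is, escape to the loader prompt and do something like:

```
OK unload xen
OK unset xen_kernel
OK boot
```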

UEFI

At the time of writing (May / June 2015), Xen doesn't work with the UEFI loader.

xl cannot allocate memory

You will most likely have to modify your /etc/login.conf to set memorylocked=unlimited for your login class; otherwise the xl tool will fail with a 'cannot allocate memory' error.
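A minimal sketch of the change (which login class to edit depends on your setup; default is shown here, with the existing entries kept in place):

```
default:\
        :memorylocked=unlimited:\
        ...
```

After editing, rebuild the login capability database with cap_mkdb /etc/login.conf and log in again for the new limit to take effect.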

libvirt

It's very good that Xen provides the libxl toolkit. It should have been installed as a dependency of the emulators/xen port; the actual port that installs it is sysutils/xen-tools. As the libvirt Xen driver supports libxl, not much work was required to make it run on FreeBSD. I made only a minor change to disable some Linux-specific /proc checks inside libvirt, and pushed that to the 'master' branch of libvirt today.

If you want to test it, you'd need to checkout libvirt source code using git:

git clone git://libvirt.org/libvirt.git

and then run ./bootstrap. It will tell you if something it needs is not installed.

For my libxl test setup I configure libvirt this way:

./configure --without-polkit --with-libxl --without-xen --without-vmware --without-esx --without-bhyve CC=gcc48 CFLAGS=-I/usr/local/include LIBS=-L/usr/local/lib

The only really important part here is '--with-libxl'; the other flags are more or less specific to my setup. After configure, just run gmake and it should build fine. Now you can install everything and run the libvirtd daemon.

If everything went fine, you should be able to connect to it using:

virsh -c "xen://"

Now we can define some domains. Let's check these two examples:

The first one is for a simple pre-configured FreeBSD guest image. The second one defines a CDROM device and hard disk devices, and is set to boot from the CDROM to be able to install Linux. Both domains are configured to attach to the default libvirt network on the virbr0 bridge. Additionally, both domains have VNC enabled.
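A minimal sketch of what such a domain definition might look like; the name, path, and sizes here are made up, not the actual example:

```
<domain type='xen'>
  <name>freebsd-guest</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os>
    <type>hvm</type>
  </os>
  <devices>
    <!-- 'phy' disk driver, as mentioned in the OpenStack post above -->
    <disk type='file' device='disk'>
      <driver name='phy'/>
      <source file='/vms/freebsd-guest.img'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <!-- attaches to the default libvirt network (virbr0) -->
    <interface type='network'>
      <source network='default'/>
    </interface>
    <graphics type='vnc'/>
  </devices>
</domain>
```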

You can get a domain's VNC display number using the vncdisplay command in virsh and then connect to the VM with your favorite VNC client.

I've been using this setup for a couple of days and it works fine. However, more testers are welcome, if you're using it and have some issues please drop me an email to novel@`uname -s`.org or poke me on twitter.

Saturday, May 23, 2015

Serial Console setup

For many years I didn't bother setting up a serial console because for some reason I thought it was a troublesome process, especially on the hardware side of things: modern motherboards don't have serial ports on their backside, a serial port controller would probably need some driver that might not be available for FreeBSD, and so on. That's what I thought. Actually, everything is much simpler.

A couple of weeks ago I ended up in a situation where I really needed to know what was going on, and that was not possible without a serial console, so I decided to finally set one up.

The motherboards I have in my computers are:

  • Gigabyte H61M-S2V-B3
  • Asus B85M-E

As I have mentioned, they don't have COM ports on the rear panel. However, it turned out that they both have a serial module connector on the board itself. I don't know how common that is for desktop motherboards, but considering that I didn't pay attention to it when buying them and both turned out to have one, I think it's quite common.

Unfortunately, I wasn't able to find these COM modules sold separately, so I just bought a COM controller card that ships with two COM modules and detached the modules without using the actual controller.

I took the modules, plugged them into my motherboards, and connected the two machines with a null modem cable. Things started working as expected!

On the test server I have this configuration in /boot/loader.conf:

# console
boot_multicons="YES"
boot_serial="YES"
comconsole_speed="115200"
console="comconsole,vidconsole"

To connect to it from the other host I use:

screen /dev/cuau0 115200

The budget for this operation: ~$15 for the COM controller and ~$4 for the null modem cable from an electronics shop.