I've recently been asked to build a redundant mailstore, using two server-class machines running Ubuntu. The caveat, however, is that no additional hardware will be purchased, which rules out any external file storage, such as a SAN. I've been investigating the use of DRBD in a primary/primary configuration to mirror a block device between the two servers, with GFS2 layered on top, so that the filesystem can be mounted on both servers at once.
While a set-up like this is more complex and fragile than running DRBD in primary/secondary mode with ext4 and clustering scripts to ensure that the filesystem is only ever mounted on one server at a time, it's likely that GFS2 will be needed on the same two servers for another purpose in the near future, so it makes sense to use the same method of clustering for both.
The following guide details how to get this going on Ubuntu 10.04 LTS (Lucid). It won't work on any older release: the servers this is destined for were originally running 9.04 (Jaunty), but I've tested DRBD+GFS2 on that release, and there's a problem that prevents it from working. As far as I'm concerned, production servers shouldn't run non-LTS Ubuntu releases anyway, because the support lifecycle is far too short. This guide should also work for Debian 6.0 (Squeeze), although I haven't tested it yet.
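To give an idea of what the dual-primary DRBD side of this looks like, here is an illustrative resource definition for the DRBD 8.3 series shipped with Lucid. The hostnames, device paths and addresses are assumptions for the sake of the example, not taken from the guide:

```
# /etc/drbd.d/mailstore.res -- illustrative only; node names,
# backing disks and IP addresses will differ on your servers.
resource mailstore {
  protocol C;                  # synchronous replication; required for dual-primary
  startup {
    become-primary-on both;    # promote both nodes at start-up
  }
  net {
    allow-two-primaries;       # permit primary/primary operation
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.1:7789;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.2:7789;
    meta-disk internal;
  }
}
```

The after-sb (split-brain) policies shown are just one reasonable choice; with two primaries and no fencing, you'll want to think carefully about what recovery behaviour suits a mailstore.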
Ever had a situation where you need to rebuild a Debian or Ubuntu package on a regular basis, but it takes an incredibly long time because it's running automatic tests - tests that you don't need until your final build?
For many of these packages, there's a simple way to disable the tests: set the DEB_BUILD_OPTIONS environment variable to "nocheck" before you build the package:
apt-get source openldap
DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -rfakeroot
Not all packages support this, however, and some packages might use 'notest' instead.
There are a number of other values that can be used with DEB_BUILD_OPTIONS, too, if the package supports them:
noopt - turn off compiler optimisation
nodoc - don't build documentation
nostrip - don't strip debugging symbols from binaries
parallel=n - use n parallel processes to build the package
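These options can also be combined in a single space-separated value. The package and the parallel count here are just examples; pick values to suit your build machine:

```shell
# Skip the test suite, keep debugging symbols, and build with
# four parallel jobs (match this to your CPU core count).
export DEB_BUILD_OPTIONS="nocheck nostrip parallel=4"

# Then rebuild exactly as before:
# dpkg-buildpackage -rfakeroot
echo "$DEB_BUILD_OPTIONS"
```

As with nocheck on its own, whether each option has any effect depends on the individual package's debian/rules honouring it.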
QEMU is well known as a free alternative to VMware, allowing users to run a PC within a PC. What isn't so well known about QEMU is that, in addition to emulating the x86 architecture, it can emulate AMD64, SPARC, MIPS, PowerPC and ARM CPUs.
In the case of the ARM architecture, QEMU provides a convenient, if slow, environment for embedded-systems development.
This article describes the process involved in building a Debian/ARM server running under QEMU. It assumes that Debian is also being used as the host server.
Since QEMU's ARM emulator has no ability to emulate either IDE or SCSI disks, it will be necessary to install the server on an NFS-exported partition.
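Booting the finished system then looks something like the following. This is an illustrative invocation only: the board model, kernel image name, NFS server address and export path are all assumptions you'd substitute for your own:

```
# Boot a Debian/ARM root filesystem over NFS (paths/addresses are examples).
qemu-system-arm -M integratorcp -m 128 \
    -kernel zImage \
    -append "root=/dev/nfs nfsroot=192.168.0.1:/srv/debian-arm ip=dhcp console=ttyAMA0"
```

The kernel passed with -kernel must have NFS-root support and a driver for the emulated network card built in, since there's no local disk to load modules from.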
With the imminent release of Debian Sarge comes a new installer program, which is much more flexible than that of previous releases. Among the many new features are support for installation onto LVM, installation over an SSH console, and even the ability to install over an infra-red link. With these extra features, however, comes an added layer of complexity, and for the new user, accessing the less commonly used features is not always straightforward.
This article provides a step-by-step guide to installing Debian Sarge with a mirrored-disk (RAID1) configuration. The process can easily be extended to cover any RAID configuration that Debian supports (concatenated disks, RAID5, etc.).
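Once the installer has built the arrays, their health can be checked from the running system. The device name below is the conventional first md array, but yours may differ:

```
# Show all software RAID arrays and their sync status.
cat /proc/mdstat

# Detailed state of one array (/dev/md0 is assumed here).
mdadm --detail /dev/md0
```

On a freshly installed mirror it's normal to see the array resyncing in /proc/mdstat; the system is usable while this completes.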