Handling Cassandra concurrency issues, using Apache Zookeeper

One of the big problems with CassaFS, when I released the code a week and a half ago, was the potential for write races: multiple nodes trying to create the same file or directory at the same time, or writing to the same block at the same time, to name just a couple of the concurrency scenarios that could play out.

This is because Cassandra doesn’t have the ability to provide atomic writes when updating multiple rows. It can provide atomic writes across multiple columns in a single row, but I would need to redesign the schema of CassaFS to take advantage of this, and even then, there are still going to be a number of operations that need to alter multiple rows, so this is unlikely to help in the long run.

The upshot of this is that in order to do locking, some sort of external mechanism was going to be needed; preferably one with the ability to fail over to one or more other hosts.

After a bit of testing, Apache Zookeeper, described as a “Distributed Coordination Service for Distributed Applications”, seems like the perfect candidate for this. It’s easy to configure, the documentation (at least, for the Java interface) is excellent, and they provide plenty of examples to learn from. And best of all, being distributed, it isn’t a single point of failure.

Configuring Zookeeper to work across multiple servers was very simple – it was just a matter of adding the IP addresses and ports of all the servers to the Zookeeper configuration files.

Zookeeper also has a Python interface, but other than the inline pydoc documentation, there’s not a lot of explanation of how to use it. I’ve muddled through and put together code to allow locking, based upon the example given on the Zookeeper web pages; it’s shown below.

The Zookeeper namespace works rather like an in-memory filesystem: it’s a tree of directories/files (nodes). Watches can be set on nodes, which trigger notifications when a node changes; I’ve used this facility in the locking code to detect the removal of a node when a process releases its lock.

import zookeeper
from os.path import basename
from threading import Condition

cv = Condition()
servers = "127.0.0.1:2181"
zh = zookeeper.init(servers)

# An open, world-writable ACL; the C client exports this as
# ZOO_OPEN_ACL_UNSAFE, but the python binding makes us spell it out
ZOO_OPEN_ACL_UNSAFE = {"perms": 0x1f, "scheme": "world", "id": "anyone"}

# Watcher callback; the parameters are the zookeeper handle, the event
# type, the connection state, and the path of the node that changed
def notify(handle, event_type, state, path):
    cv.acquire()
    cv.notify()
    cv.release()

def get_lock(path):
    lockfile = zookeeper.create(zh, path + '/guid-lock-', 'lock',
                                [ZOO_OPEN_ACL_UNSAFE],
                                zookeeper.EPHEMERAL | zookeeper.SEQUENCE)

    while True:
        # Find the lock node with the next-lowest sequence number; if
        # there isn't one, we hold the lock. (Obviously this could be
        # done more efficiently, without sorting and reversing.)
        children = sorted(zookeeper.get_children(zh, path), reverse=True)

        predecessor = None
        for child in children:
            if child < basename(lockfile):
                predecessor = child
                break

        if predecessor is None:
            return lockfile

        cv.acquire()
        if zookeeper.exists(zh, path + '/' + predecessor, notify):
            # Process will wait here until notify() wakes it
            cv.wait()
        cv.release()

def drop_lock(lockfile):
    zookeeper.delete(zh, lockfile)

Using it is straightforward; just call get_lock() before the critical section of code, and then drop_lock() at the end:

def create(path):
    ...
    lockfile = get_lock(path)

    # critical code here

    drop_lock(lockfile)

In CassaFS, I’ve implemented this as a class, and then created subclasses to allow locking based upon path name, inode and individual blocks. It all works nicely, although as one would expect, it has slowed everything down quite a bit.
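CassaFS’s actual classes aren’t reproduced here, but the shape of the idea can be sketched as follows. The class names are mine, not CassaFS’s, and threading.Lock stands in for the Zookeeper calls so that the example is self-contained:

```python
# Sketch of per-resource lock classes: a base class owns the locking
# mechanics, and each subclass supplies its own lock directory. Here
# threading.Lock stands in for get_lock()/drop_lock() against Zookeeper.
import threading
from contextlib import contextmanager

class ZkLock(object):
    root = None          # subclasses set the lock directory
    _locks = {}          # stand-in for Zookeeper's lock nodes
    _guard = threading.Lock()

    @classmethod
    def _node(cls, key):
        # where the guid-lock- children would live for this resource
        return '%s/%s' % (cls.root, key)

    @contextmanager
    def held(self, key):
        with self._guard:
            lock = self._locks.setdefault(self._node(key), threading.Lock())
        lock.acquire()       # in CassaFS this would be get_lock()
        try:
            yield
        finally:
            lock.release()   # ...and this would be drop_lock()

class PathLock(ZkLock):
    root = '/locks/path'

class InodeLock(ZkLock):
    root = '/locks/inode'

class BlockLock(ZkLock):
    root = '/locks/block'
```

Usage is then just `with PathLock().held('testdir'): ...` around the critical section, with each subclass keeping its lock nodes under a different part of the Zookeeper tree.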

I used cluster-ssh to test CassaFS before and after I added the locks; beforehand, creating a single directory on four separate servers simultaneously would appear to succeed on all four; now, with locking, one server creates the directory, and the operation correctly fails on the remaining three.

For anyone on Ubuntu or Debian wanting a quickstart guide to getting Zookeeper up and running, and then testing it a bit, it’s just a matter of:

apt-get install zookeeper
/usr/share/zookeeper/bin/zkServer.sh start
/usr/share/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181
# now we're in the Zookeeper CLI, try creating and deleting a few nodes
ls /
create /node1 foo
get /node1
create /node1/node2 bar
create /node1/node3 foobar
ls /node1
delete /node1/node2
ls /node1
quit

CassaFS – a FUSE-based filesystem using Apache Cassandra as a backend.

A couple of weeks ago, I decided that I wanted to learn how to develop FUSE filesystems. The result of this is CassaFS, a network filesystem that uses the Apache Cassandra database as a backend.

For those who haven’t looked at Cassandra before, it’s a very cool concept. The data it holds can be distributed across multiple nodes automatically (“it just works!”), so to expand a system, it just needs more machines thrown at it. Naturally, to expand a system properly, you need to add extra nodes in the correct numbers, or retune your existing systems; but even just adding extra nodes, without thinking too hard about it, will work, just not efficiently. The trade-off, however, is consistency: when the system is configured to replicate data to multiple nodes, updates can take time to propagate through.

Now, I realise I am not the first person to try writing a Cassandra-based filesystem; there’s at least one other that I know of, but it hasn’t been worked on for a couple of years, and Cassandra has changed quite a bit in that time, so I have no idea whether it still works or not.

Getting your mind around Cassandra’s data model is rather tricky, especially if you’re from an RDBMS background. Cassandra is a NoSQL database, essentially a key-value system, and only the keys are indexed. This means you need to get used to denormalising data (ie, duplicating it in various parts of the database) in order to read it efficiently. The best way to design a database for Cassandra is to look carefully at what queries your software is going to make, because you’re going to need a column family for each of them.
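As a toy illustration of what denormalising means in practice (plain Python dicts standing in for column families, with made-up data): to answer both “which posts did this author write?” and “who posted on this day?” with a single key lookup each, the same fact gets written into two column families, each keyed for one query:

```python
# Toy illustration of denormalising for a key-value store: the same
# fact is stored twice, once per query pattern, because only keys are
# indexed. Dicts stand in for column families; the data is made up.
posts_by_author = {}   # key: author -> {post_id: title}
posts_by_day = {}      # key: date   -> {post_id: author}

def insert_post(author, date, post_id, title):
    # one logical insert means one write per column family
    posts_by_author.setdefault(author, {})[post_id] = title
    posts_by_day.setdefault(date, {})[post_id] = author

insert_post('alice', '2011-09-14', 'p1', 'Hello Cassandra')
insert_post('alice', '2011-09-15', 'p2', 'Hello again')

# Each query is now a single key lookup, at the cost of duplication:
print(posts_by_author['alice'])
print(posts_by_day['2011-09-14'])
```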

I hadn’t done any filesystem design before I started working on CassaFS, so I naively thought that I could use a file path as an index. This actually worked, for a while – I had three column families: one for inodes, which contained stat(2) data, one for directories, and one containing all the blocks of data:

Inode column family:

Key        Data
/          uid: 0, gid: 0, mode: 0755, … etc
/testfile  uid: 0, gid: 0, mode: 0644, … etc
/testdir   uid: 0, gid: 0, mode: 0755, … etc

Directory column family:

Key        Data
/          [('.', '/'), ('..', '/'), ('testfile', '/testfile'), ('testdir', '/testdir')]
/testdir   [('.', '/testdir'), ('..', '/')]

Block column family:

Key        Data
/testfile  [(0, BLOCK0DATA), (1, BLOCK1DATA), …]

Of course, this model failed as soon as I thought about implementing hard links, because there’s no way to have multiple directory entries pointing at a single inode if you’re indexing them by path name. So I replaced pathname indexes with random UUIDs, and then (naively, again) created a new Pathmap column family to map paths to UUIDs:

Inode column family:

Key                                   Data
9d194247-ac93-40ea-baa7-17a4c0c35cdf  uid: 0, gid: 0, mode: 0755, … etc
fc2fc152-9526-4e33-9df2-dba070e39c63  uid: 0, gid: 0, mode: 0644, … etc
74efdba6-57d4-4b73-94cc-74b34d452194  uid: 0, gid: 0, mode: 0755, … etc

Directory column family:

Key        Data
/          [('.', 9d194247-ac93-40ea-baa7-17a4c0c35cdf), ('..', 9d194247-ac93-40ea-baa7-17a4c0c35cdf), ('testfile', fc2fc152-9526-4e33-9df2-dba070e39c63), ('testdir', 74efdba6-57d4-4b73-94cc-74b34d452194)]
/testdir   [('.', 74efdba6-57d4-4b73-94cc-74b34d452194), ('..', 9d194247-ac93-40ea-baa7-17a4c0c35cdf)]

Block column family:

Key                                   Data
fc2fc152-9526-4e33-9df2-dba070e39c63  [(0, BLOCK0DATA), (1, BLOCK1DATA), …]

Pathmap column family:

Key        Data
/          9d194247-ac93-40ea-baa7-17a4c0c35cdf
/testfile  fc2fc152-9526-4e33-9df2-dba070e39c63

This enabled me to get hard links working very easily, just by adding extra directory and pathmap entries for them, pointing at existing inodes. I used this model for quite a while, and hadn’t noticed any problem with it, because I had forgotten to implement the rename() function (ie, for mv). It wasn’t until I tried building a Debian package from source on CassaFS that this omission caused a failure, and when I set about implementing rename(), I realised that mapping pathnames wasn’t going to work: renaming a directory would require updating the pathmap entry of every file underneath it.

At that point, I saw it would be necessary to traverse the directory tree on every file lookup to find its inode, starting from a root inode given the fixed UUID 00000000-0000-0000-0000-000000000000, so that it can always be found easily. This way, I could use UUIDs as the Directory column family index and do away with the Pathmap column family entirely.

Inode column family:

Key                                   Data
00000000-0000-0000-0000-000000000000  uid: 0, gid: 0, mode: 0755, … etc
fc2fc152-9526-4e33-9df2-dba070e39c63  uid: 0, gid: 0, mode: 0644, … etc
74efdba6-57d4-4b73-94cc-74b34d452194  uid: 0, gid: 0, mode: 0755, … etc

Directory column family:

Key                                   Data
00000000-0000-0000-0000-000000000000  [('.', 00000000-0000-0000-0000-000000000000), ('..', 00000000-0000-0000-0000-000000000000), ('testfile', fc2fc152-9526-4e33-9df2-dba070e39c63), ('testdir', 74efdba6-57d4-4b73-94cc-74b34d452194)]
74efdba6-57d4-4b73-94cc-74b34d452194  [('.', 74efdba6-57d4-4b73-94cc-74b34d452194), ('..', 00000000-0000-0000-0000-000000000000)]

Block column family:

Key                                   Data
fc2fc152-9526-4e33-9df2-dba070e39c63  [(0, BLOCK0DATA), (1, BLOCK1DATA), …]
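With this final schema, a lookup becomes a walk down the Directory column family, starting from the well-known root UUID. A minimal sketch, with a dict standing in for Cassandra (the UUIDs match the example tables above):

```python
# Sketch of resolving a path to an inode UUID by walking the Directory
# column family from the well-known root UUID. A dict stands in for
# Cassandra; the UUIDs match the example tables above.
ROOT = '00000000-0000-0000-0000-000000000000'

directory_cf = {
    ROOT: {'.': ROOT, '..': ROOT,
           'testfile': 'fc2fc152-9526-4e33-9df2-dba070e39c63',
           'testdir':  '74efdba6-57d4-4b73-94cc-74b34d452194'},
    '74efdba6-57d4-4b73-94cc-74b34d452194':
          {'.': '74efdba6-57d4-4b73-94cc-74b34d452194', '..': ROOT},
}

def lookup(path):
    """Return the inode UUID for path, or None if it doesn't exist."""
    uuid = ROOT
    for part in path.strip('/').split('/'):
        if not part:
            continue
        entries = directory_cf.get(uuid)
        if entries is None or part not in entries:
            return None
        uuid = entries[part]
    return uuid
```

Note that renaming a directory now only touches its old and new parents’ directory entries; nothing underneath it needs rewriting, which is exactly what the Pathmap scheme couldn’t offer.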

Yesterday, I discovered the Tuxera POSIX Test Suite, and tried it on CassaFS. At a rough estimate, it’s failing at least 25% of the tests, so there’s still plenty of work to do. At this stage, CassaFS is not useful for anything more than experimenting with Cassandra: getting a lot of data into it quickly, and trying out Cassandra’s distributed database abilities (although, since I have currently hardcoded 127.0.0.1:9160 into CassaFS, it will require some slight adjustment for the latter to actually work). You can even mount a single filesystem onto multiple servers and use them simultaneously – but I haven’t even begun to think about how I might implement file locking, so expect corruption if you have multiple processes working on a single file. Nor have I done any exception handling; this is software at a very, very early stage of development.

It’s all written in Python at present, so don’t expect it to be fast – although, that said, given that it’s talking to Cassandra, I’m not entirely sure how much of a performance boost will be gained from rewriting it in C. I’m still using the Cassandra Thrift interface (via Pycassa), despite Cassandra moving towards using CQL these days. I’m not sure what state Python CQL drivers are in, so for the moment, it was easier to continue using Pycassa, which is well tested.

For Debian and Ubuntu users, I have provided packages (currently i386 only because of python-thrift – I’ll get amd64 packages out next week) and it should be fairly simple to set up – quickstarter documentation here. Just beware of the many caveats that I’ve spelt out on that page. I’m hoping to get packages for RHEL6 working sometime soon, too.

Displaying caller-ID from a VOIP phone on the desktop.

I’ve had a Snom 300 VOIP phone for a few years now; it’s a nice little phone, and can even run Linux, although I haven’t ever tried doing so. At one point, I had it connected up to an elaborate Asterisk setup that was able to get rid of telemarketers and route calls automatically via my landline or VOIP line depending on whichever was the cheapest. These days, I no longer have the landline and don’t really want to run a PC all day long, so I’m just using the phone by itself through MyNetFone.

Unfortunately, the LCD display on it seems to have died; the vast majority of vertical pixel lines are displayed either very faintly, or not at all.

It’s probably not all that hard to fix, assuming that it’s just a matter of replacing the display itself and not hunting for a dead component, but I decided instead to have a look and see what the software offers to work around this – and discovered the Snom’s “Action URLs”.

Basically, the phone can make HTTP requests to configurable URLs when it receives one of a number of events – for example, on-hook, off-hook, call-forwarding … and incoming call, just to name a few. It can also pass various runtime variables to these; so for an incoming call, for example, you could add the caller-id to the url and then get a server to process this.

After a little bit of messing around, I hooked this into GNOME’s notification system, via the Bottle python web framework (which is probably overkill for something like this), and the end result is cidalert, a desktop caller-id notification system.
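The actual cidalert code is on Bitbucket; the core idea can be sketched with nothing but the standard library. The /incoming path and cid parameter here are made-up names for illustration (not necessarily what cidalert or the Snom use), and notify-send stands in for the GNOME notification bindings:

```python
# Minimal sketch of the cidalert idea, stdlib only: the phone's Action
# URL does an HTTP GET with the caller-id in the query string, and we
# hand it to the desktop notifier. The /incoming path and "cid"
# parameter are hypothetical names for this example.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def caller_from(path):
    """Extract the caller-id from a request path like /incoming?cid=123."""
    qs = parse_qs(urlparse(path).query)
    return qs.get('cid', ['unknown'])[0]

class ActionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cid = caller_from(self.path)
        # notify-send hands the message to the desktop notification daemon
        subprocess.call(['notify-send', 'Incoming call', cid])
        self.send_response(200)
        self.end_headers()

# To run it, point the phone's Action URL at this host and start:
#   HTTPServer(('0.0.0.0', 8080), ActionHandler).serve_forever()
```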

The source is up on Bitbucket, should anyone think of any cool features to add to it.

Pipe dream: format shifting books for free

I have, of late, been embarking on a huge program of minimalism. I have too much stuff. For the past twelve months, I have been getting rid of a lot of it, although probably not as ruthlessly as I’d like. Everything from old PC hardware, clothes, to computer and electronics magazines have been dumped in recycling bins. I do rather hope that the broken Mac SE/30 which I left out the front of my house, and then disappeared before the hard-waste collection came around, was turned into a fish bowl.

It’s amazing just how much useless paraphernalia is accumulated just from attending conferences. All my LCA t-shirts are going into a Brotherhood bin; I don’t wear them. It would be nice if, in future, LCA registration had a discount option without these. I realise that it probably wouldn’t come to more than about $5 saving, but it’s the principle of the matter – I don’t want resources wasted creating a t-shirt that I’m never going to wear. The same goes for the bags, although these tend to be of much higher quality, and I’ve really liked most of them, but it’s got to the stage where I have enough laptop bags and backpacks to last me a couple of lifetimes, and I just do not need any more.

I lived for fourteen months just travelling, with nothing more than a netbook and a backpack with a week’s worth of clothes. I’d like to get to the point where if I decide to disappear overseas again, I can rent the house out in a furnished state, and have just a small amount of personal possessions that can be left with family. I believe the economic rationalist side of politics would call this “labor mobility”, although I have no desire to pull up stumps and work in Western Australian mines, as they seem to expect everyone else to do, regardless of where their family and support network live.

One of the issues that I haven’t yet tackled is books. Last year, I bought a Kindle, and Amazon DRM annoyances aside (which can be easily worked around), I love it. I do not ever want to buy a hard-copy book again. I do, however, have a library of books that I would like to keep, but not in a form that takes up several cubic metres of space. Given that I’ve already paid for the books, it seems unreasonable to have to pay again for a digital version. Obviously, I could probably find digital versions of most of the books on torrent sites, but then if I were to ever be audited (and given that ACTA has provisions for searching laptops at borders, we can never be sure that such powers won’t be extended into homes) how can I prove that I actually owned the books, after I throw them out?

It’s a shame that Amazon (or someone) doesn’t provide a service where they take back second-hand books, provide a replacement digital copy and then resell the book to someone who does actually want a hard-copy, with a royalty to the author. Probably not cost-effective, I guess. But if there were some way to make it economically feasible, everyone would be a winner; I get to keep the content I paid for, the author gets another sale and a good book doesn’t get pulped.

O’Reilly have an interesting $5 ebook upgrade scheme, but it doesn’t cover all books, and I still bristle at the idea of paying more for an electronic copy of something that I already own.

The same goes for music. I have a CD collection, probably small by most standards, that nonetheless takes up space. It annoys me, because I haven’t played a CD in years, and have no interest in the cover art or reading the acknowledgements on the inserts. My two dedicated CD players – one a 15-year-old portable, the other a two-decade-old hifi-style component – are both scheduled to be given to my nearest charity shop, if they even want them. Unlike books, the CDs can easily be format-shifted, legally, but if I were to then throw out the physical media, I’d have no way of proving that I ever legitimately acquired them. The only thing I can think to do is sell them, at the heavily marked-down prices that second-hand music goes for, and then buy all the albums again from iTunes, which will likely cost more than the CDs sold for.

I do envy future generations. The idea of building up a physical pile of stuff that weighs you down is going to be totally unknown to them, at least from the point of view of books, music, movies and other media that is going completely digital. They’ll never have to waste time going through what I’m doing right now…

Etherwaker – GPL wake on lan client for Android

I’ve been playing around with Android application development quite a bit, over the last few months. The one thing I’ve built that’s actually quite usable has been the wake-on-lan client Etherwaker (because the world really needed another one of these, didn’t it?)

I’ve just put the Mercurial repository for it up on Bitbucket and released it under the GPL-3, for people to peruse or fork at their leisure.

Five second guide to fetching the source: hg clone ssh://hg@bitbucket.org/pdwerryhouse/etherwaker

If you can’t be bothered with all this, and just want to wake up your mythtv box from your bed, then it can be downloaded from the Android market.

Building a redundant mailstore with DRBD and GFS

I’ve recently been asked to build a redundant mailstore, using two server-class machines that are running Ubuntu. The caveat, however, is that no additional hardware will be purchased, so this rules out using any external filestorage, such as a SAN. I’ve been investigating the use of DRBD in a primary/primary configuration, to mirror a block device between the two servers, and then put GFS2 over the top of it, so that the filesystem can be mounted on both servers at once.

While a set-up like this is more complex and fragile than using ext4 and DRBD in primary/secondary mode and clustering scripts to ensure that the filesystem is only ever mounted on one server at a time, it’s likely that there will be a requirement for GFS on the same two servers for another purpose, in the near future, so it makes sense to use the same method of clustering for both.

The following guide details how to get this going on Ubuntu 10.04 LTS (lucid). It won’t work on any version older than this; the servers that this is destined for were originally running 9.04 (Jaunty), but I’ve tested DRBD+GFS on that release, and there’s a problem that prevents it from working. As far as I’m concerned, production servers should not be run on non-LTS Ubuntu releases anyway, because the support lifecycle is far too short. This guide should also work fine for Debian 6.0 (squeeze), although I haven’t tested it yet.

One thing to keep in mind – the Ubuntu package for gfs2-tools claims that “The GFS2 kernel modules themselves are highly experimental and *MUST NOT* be used in a production environment yet”. There’s a problem with this, however: the gfs2 module is available in the kernel in Ubuntu 10.04, but the original gfs isn’t there (it wasn’t ever there), and the redhat-cluster-source package which provides it doesn’t build. I’m inclined to say that the “experimental” warning is incorrect.

Firstly, install DRBD:

apt-get install drbd8-utils drbd8-source

We have to install the drbd8-source package in order to get the drbd kernel module. When drbd is started, it should automatically run dkms to build and install the module.

Now, the servers I’m using have their entire RAID already allocated to an LVM volume group named vg01, so I’m going to create a 60Gb logical volume within this volume group, to be used as the backing store for the DRBD block device on each. Obviously, this step isn’t compulsory, and the DRBD block devices can be put on a plain disk partition instead.

lvcreate -L 60G -n mailmirror vg01

After this, configure /etc/drbd.conf on both servers:

global {
  usage-count yes;
}

common {
  protocol C;
}
resource r0 {
  net {
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  syncer {
    verify-alg sha1;
  }
  startup {
    become-primary-on both;
  }
  on mail01 {
    device    /dev/drbd0;
    disk      /dev/vg01/mailmirror;
    address   10.50.0.11:7789;
    meta-disk internal;
  }
  on mail02 {
    device    /dev/drbd0;
    disk      /dev/vg01/mailmirror;
    address   10.50.0.12:7789;
    meta-disk internal;
  }
}

With this done, we can now set up the DRBD mirror, by running these commands on each server:

drbdadm create-md r0
modprobe drbd
drbdadm attach r0
drbdadm syncer r0
drbdadm connect r0

…and to start the replication between the two block devices, run the following on only one server:

drbdadm -- --overwrite-data-of-peer primary r0

By looking at /proc/drbd, we’ll be able to see the servers syncing. It’s likely that this will take a long time to complete, but the drbd device can still be used while that’s happening. One last thing we need to do is move it from primary/secondary mode into primary/primary mode, by running this on the other server:

drbdadm primary r0

So, now we want to create a GFS2 filesystem. There’s a catch here, however: GFS2 cannot sit directly on a DRBD block device. Instead, we need to put an LVM physical volume on the DRBD device, and then create a volume group and logical volume within that. Furthermore, because this is going on a cluster, we need to use clustered LVM and associated clustering software:

apt-get install cman clvm gfs2-tools

And then configure the cluster manager on each server. Put the following in /etc/cluster/cluster.conf:

<?xml version="1.0" ?>
<cluster alias="mailcluster" config_version="6" name="mailcluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <totem consensus="6000" token="3000"/>
        <clusternodes>
                <clusternode name="mail01" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="clusterfence" nodename="mail01"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="mail02" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="clusterfence" nodename="mail02"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices>
                <fencedevice agent="fence_manual" name="clusterfence"/>
        </fencedevices>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>

In the above, I’m using manual fencing, because at the moment, I don’t have any other method for fencing available to me. This should not be done in production; it needs a real fencing device, such as an out-of-band management card (eg, Dell DRAC, HP iLO) to kill power to the opposite node, if something is amiss. All that manual fencing does is write messages to syslog, saying that fencing is needed.

Without fencing, it’s possible to encounter a situation where the DRBD device might have stopped mirroring, yet the mail spool is still mounted on each server, with the mail daemon on each one writing to its GFS filesystem independently, and that would be a very difficult mess to clean up.

One other thing: there’s an Ubuntu-specific catch here – Ubuntu’s installer has this irritating habit of putting a host entry in /etc/hosts for the hostname with an IP address of 127.0.1.1. This will break the clustering, so remove the entry from both servers, and either make sure your DNS is set up correctly for the name that you’re using in your cluster interfaces, or add the correct addresses to the hosts file.
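For example, the offending line can be removed like this. The snippet below works on a copy of the file so nothing real is touched (on the servers, you’d edit /etc/hosts itself), and the hostnames and addresses are just the ones used earlier in this guide:

```shell
# Demonstrate stripping the installer's 127.0.1.1 entry, on a copy of
# the hosts file; on a real server, edit /etc/hosts itself.
cat > /tmp/hosts.demo <<'EOF'
127.0.0.1   localhost
127.0.1.1   mail01
10.50.0.11  mail01
10.50.0.12  mail02
EOF
sed -i '/^127\.0\.1\.1/d' /tmp/hosts.demo
cat /tmp/hosts.demo
```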

You can now start up clustering on both hosts:

/etc/init.d/cman start

Run cman_tool nodes, and if all is well, you’ll see:

Node  Sts   Inc   Joined               Name
   1   M    120   2011-09-14 10:53:32  mail01
   2   M    120   2011-09-14 10:53:32  mail02

We’ll need to make a couple of modifications to /etc/lvm/lvm.conf on both servers. Firstly, to make LVM use its built-in clustered locking:

locking_type = 3

…and secondly, to make it look for LVM signatures on the drbd device (in addition to local disks):

filter = ["a|sd.*|", "a|drbd.*|", "r|.*|"]

Now start up clvm:

/etc/init.d/clvm start

At this point, we can create the LVM physical volume on the drbd device. Because we now have a mirror running between the two servers, we only need to do this on one server:

pvcreate /dev/drbd0

Run pvscan on the other server, and we’ll be able to see that we have a new PV there.

Now, again, on only one server, create the volume group:

vgcreate mailmirror /dev/drbd0

Run vgscan on the other server, to see that the VG also appears there.

Next, we’ll create a logical volume for the GFS filesystem (I’m leaving 10Gb of space spare for a second GFS filesystem in the future):

lvcreate -L 50G -n spool mailmirror

And then lvscan on the other server should show the new LV.

The final step is to create the GFS2 filesystem:

mkfs.gfs2 -t mailcluster:mailspool -p lock_dlm -j 2 /dev/mailmirror/spool

mailcluster is the name of the cluster, as defined in /etc/cluster/cluster.conf, while mailspool is a unique name for this filesystem.

We can now mount this filesystem on both servers, with:

mount -t gfs2 /dev/mailmirror/spool /var/mail

That’s it! We now have a redundant mailstore. Before starting your mail daemon, however, I’d suggest changing its configuration to use maildir instead of mbox format, because having multiple servers writing to an mbox file is bound to cause corruption at some point.

Other recommended changes would be to alter the servers’ init scripts so that drbd is started before cman and clvm.

Paul Dwerryhouse is a freelance Open Source IT systems and software consultant, based in Australia. Follow him on twitter at http://twitter.com/pdwerryhouse/.

Why Victorians should not put Senator Conroy last

There has been quite a campaign to encourage people to put Senator Stephen Conroy last on the Victorian Senate ballot paper, in light of his never-ending attempts to filter the internet in Australia.

I can sympathise – several years ago, I was advising people to put Senator Richard Alston last on the same ballot paper, for similar reasons, and did so myself. I was wrong to do this.

By putting Senator Conroy last, you are effectively saying that his policies are worse than everyone else on the ballot paper. I am utterly against the filter, but, that said, there are plenty of issues just as serious, and there are some absolute nutcases standing for election for Victoria’s senate seats. Let me provide a few examples:

Family First are a group of extreme religious social conservatives, and most of their members belong to strange pentecostal sects. They too want a mandatory filter, but beyond that, they want to ban internet pornography entirely (good luck with that), they’re firmly against abortion and euthanasia, and they believe that “Small Business (are) the True Heroes of the Economy”, whatever that means. Now, I’m not saying that Family First are a front for whack-job churches like Hillsong and the Assembly of God, but whenever Senator Steven Fielding opens his mouth, I’m pretty sure he’s speaking in tongues. Their Queensland lead Senate candidate has, err, issues, and in the last election, the party demonstrated their lack of judgement by endorsing Pastor Danny Nalliah of Victoria’s-bushfires-were-an-act-of-retribution-from-God fame. Stephen Conroy may be a devout Catholic, but he’s not beyond ignoring stupid church doctrine and taking advantage of the NSW surrogacy laws, something which his own state doesn’t allow. He’s far better than the Family First nutters and should be put higher on the ballot paper than them.

The Citizens Electoral Council are a pack of Larouchite loons who should be put absolutely last on any sane human being’s ballot paper. Conroy is far preferable to them.

We all know who One Nation are, and what they stand for. The only reason I put them above the Citizens Electoral Council is that One Nation couldn’t organise a dinner in a room full of fish-and-chip shop owners. They’ve proved that they’re too incompetent to be dangerous. Nevertheless, they’re racist and extreme-right. Conroy is easily better than them.

The Liberal Party of Australia is a socially conservative party with an almost-dead small-l liberal faction. It is led by a man who, when health minister, pulled out all stops to keep RU486 banned in Australia. He believes that “climate change is crap” and is so creepy that he talks to the media about his daughters’ virginity. One of the Liberal Party’s Victorian candidates that is running for re-election is a former National Party member named Julian McGauran. The Age has an interesting article that refers to him. Definitely going below Conroy.

Obviously, there are plenty of good parties to put above Labor: the Greens, The Australian Sex Party and The Australian Democrats are all socially liberal parties. Stephen Mayne (of Crikey fame) is also running for the Senate, and while I disagree with a few things he’s said in the past, he’s shown himself to be honest and generally progressive.

But to put Senator Conroy last on your ballot paper is to say that he’s worse than a herd of far-right, bigoted religious fundamentalists, who want to interfere with your life. Despite his ridiculous stance on the filter, I don’t believe that he is as bad as them.

Voting in Stockholm

So, I’ve finished my mad dash from the north of Norway to Stockholm, in order to vote in one of only two locations in Scandinavia and the Baltics that Australia makes available (the other being Copenhagen). Australia typically only provides voting facilities in embassies, and as Norway, Finland, Estonia, Latvia and Lithuania only have honorary Australian consulates, there’s no opportunity to vote in any of those countries (unless, of course, you have a permanent address there, and thus can get a postal vote).

The voting process was all very straightforward – a room had been set up on the ground floor of the building which houses the embassy, so there was no need to pass through any faux-security measures in order to get in, unlike when I voted in The Hague back in 2001.

No identification was required, as is typical for Australian elections – it was just a matter of completing what was probably a postal vote envelope, and then filling out the ballot papers. The electoral officer then explained how to vote on each paper – the instructions were accurate, though I felt she emphasised a little too strongly that the Senate ballot paper was big, which I suspect caused a couple of people who followed me to vote above the line. That said, she did point out that all the group ticket preference allocations were available for people to read, if they wanted. I always vote below the line, so I didn’t have any need for this.

I was amazed, however, at a question from one of the other voters in the room: “This isn’t for local elections, is it?”. Seriously, I know I’m more attuned to politics than the average person, but a question like this is probably a good argument for compulsory civics lessons in schools. I find it somewhat unbelievable that state schools still brainwash children with religious education, but fail to teach them the basics of how our democracy works.

Arctic Circle

For the last two weeks, I’ve been drifting around northern Norway, spending a few days in the university town of Trondheim, before moving further north to Bodø and the Lofotens.

Trondheim sunset

I was lucky enough to arrive in Trondheim during the St. Olav festival, a week-long smörgåsbord (ok, that’s a Swedish term) of music and food, including a concert by one of Sweden’s biggest bands, Kent, who, surprisingly, have absolutely no profile in English-speaking countries whatsoever.

Å

My visit to the Lofoten islands included a couple of nights in a small fishing village with the simple, easy-to-spell name of Å, reached after a three-hour ferry ride from Bodø that left me feeling decidedly nauseous – although I’m not entirely sure if that was from the rough seas, or just the smell from the other passengers who had thrown up. Either way, I was glad to get back onto land.

The Lofotens would be, I imagine, a hiker’s ultimate dream. Huge dramatic peaks emerging from the sea, and unbelievable views from the top. I’m not anywhere close to being an experienced hiker or bushwalker, but I have been getting out and walking up quite a few of these mountains, and in one case, high enough that there was still some snow at the top. On a clear day, you can see for miles, and there’s virtually no sound other than the wind, and in some cases, running water.

I’ve found Norway to be particularly easy to travel in; almost everyone speaks English to some degree. Furthermore, Norwegian is very similar to both Swedish, which I took a short course in three years ago, and written Danish, which I’ve attempted to teach myself in the past, so reading signs, menus and travel websites isn’t too much of a problem. Being a Germanic language, Norwegian also shares quite a bit of vocabulary with German and Dutch (both of which I’ve had quite a bit of exposure to), as well as English itself – or at least the parts of it that weren’t bastardised by the Normans. Unfortunately, my attempts to try a bit of Norwegian don’t usually work too well, and I usually have to fall back to English.

One thing that is really fantastic here is the extent of good broadband internet access; I’ve been in tiny little towns, often with populations of one hundred or less, and it’s been clear from the wifi signals (and, admittedly, a little prodding of the open ones, on my part) that good broadband is widely available. There would be towns of similar size in Victoria that still have trouble getting a reliable dial-up connection. Mobile broadband also appears to be widespread, and not just from the former monopoly telco Telenor, but also from a second carrier, Netcom – and while the prices are, naturally, fairly expensive for an Australian, Netcom at least allows unlimited downloads for 20kr (AUD$3.60 / €2.50) per day, rather than capping or just pretending that it’s unlimited and then charging for excess usage (ie, more than 50Mb per day) like a certain telco in the Netherlands did to me.

I’m now in Narvik, a port city and part-time ski-resort, waiting for a bus to take me to my northernmost destination, Tromsø. I had originally planned to go further north to Nordkapp, but unfortunately the Australian election has put paid to that, and I have to get to Stockholm before August 21st, to vote.

Narvikfjelle summit

While the midnight sun has long passed, it still does not get completely dark at night; it’s possible to wander around at midnight and not require any artificial lighting at all. Two evenings ago, I walked up Narvik’s closest mountain, leaving at about 3.30pm and not reaching the summit until around 8pm – the sun was still high in the sky, and it was as bright as it had been in the middle of the day. It took me another two hours to walk back down again, and at 10pm, the sun was only just beginning to drop below the mountains to the west.

Scotland – Highlands tour

Wow. I really am inept at keeping this up-to-date.

Well, I’ll make the last month brief: Toronto (a week recovering from my travel so far); London – UK (two weeks recovering from my week in Toronto); Edinburgh (not surprisingly, recovering from London – I see a pattern developing here).

Following Edinburgh, I signed up with Macbackpackers for a five-day tour of Scotland’s Highlands and Isle of Skye. I don’t normally take tours, generally preferring to travel independently, but since I don’t want to drive, my options tend to be limited to cities and larger towns. I’d also had recommendations from friends about this company, so I decided that it would be a nice change.

And they certainly weren’t wrong; the tour was the most fun I’ve had during my trip so far. Our guide, a native highlander, was excellent. From the moment he entered the bus, he had the group (of around 21-22 people) laughing, and kept it up for the entire trip. His knowledge of the area and its history was first-rate, and he had an amazing gift for storytelling while keeping the bus on the road.

The tour is designed for people under 35, but they don’t enforce this, unlike many of the “youth tour” operators in Europe (who won’t let someone like me, two years older than the cutoff point, aboard); they’ll welcome anyone onto the tour, as long as you’re happy to keep up with the fairly vigorous program, such as walking up steep hills, swimming in the freezing Loch Ness and late, alcohol-fueled nights in pubs. And then 9am starts the next morning.

Swimming in Loch Ness

Accommodation is at the company’s many hostels, which range from utterly excellent (Castle Rock, Edinburgh) to fairly cramped and lacking sufficient numbers of showers, but otherwise clean and friendly (Inverness); however, you’re not obligated to stay in these – you can book hotels or B&Bs separately, if you prefer.

The first day took us north from Edinburgh, via Pitlochry, to Inverness, visiting Ruthven Barracks and the Culloden Moor Battlefield. Day two was onwards to Skye, with a stop in Ullapool for lunch, and a scenic drive south along the west coast.
Following this was a day doing a circuit of Skye, including a couple of walks through the highlands.

The fourth day was packed with a boat trip on Loch Ness, and yet more walking, this time through Glen Coe. The tour’s evening grand finale was a night of Ceilidh Dancing in Oban, which is great fun; essentially it’s barn-style dancing, sometimes with one partner, sometimes with multiple partners, to traditional Scottish music.

After that, we wound down with a tour of Oban’s whisky distillery and a visit to the National Wallace Monument… and then a relaxing drive back to Edinburgh.

Ceilidh Dancing in Oban

I didn’t know any of the people I was travelling with prior to the trip, but within just a few hours we all got along really well. It’s amazing how quickly people will bond, if you pack them into a bus, goad them to strip down to their underwear (or bathers, for those of us who are slightly more prepared), bribe them to get into a freezing lake, and follow it with a bottle of whisky (allegedly to warm them up, but frankly I think there was an ulterior motive).

Anyway, I now find myself back where I started: it’s taken me the best part of a week in Glasgow and Belfast to recover from this…

Standard disclaimer applies: I’m not affiliated with this company at all, but I really really enjoyed the tour, and highly recommend it.