A Rails Programmer’s Journey into Go (so far)

You might have heard discussion about a programming language called Go (or #golang on Twitter). It was invented by some noteworthy Google staffers 4 years ago to solve large-scale server-side development problems, such as:

  • monolithic, bloated code
  • long compilation & test times
  • varying code styles across different teams
  • concurrency is hard
  • needing C/C++ to write high-performance code
  • dependency management slows down deployment

Hold on… What is Go?

Go is

  • statically typed and compiled, like C/C++
  • somewhat Object-Oriented(tm)
  • minimalist compared to Ruby, Java or Scala
  • lightning fast if you’re coming from a dynamic language like Ruby or Python
  • supported by great out-of-the-box tooling (compiler, test runner, syntax formatter, race condition checker, profiler)
  • provided with an amazing standard library to solve 21st-century problems
  • built with concurrency as a core part of the language
  • compiled to a single static binary: copy-to-deploy.

Most importantly, Go is easy and quick to learn. It took me two weekends of hacking plus some midweek reading to grok it, including the concurrency.
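To give the concurrency claim some substance, here’s a minimal sketch of my own (not from any official material): two goroutines each sum half of a slice and hand their results back over a channel.

```go
package main

import "fmt"

// sum totals a slice in its own goroutine and delivers
// the result over the channel.
func sum(nums []int, out chan<- int) {
	total := 0
	for _, n := range nums {
		total += n
	}
	out <- total
}

func main() {
	nums := []int{1, 2, 3, 4, 5, 6}
	out := make(chan int)
	// Fan out: each half of the slice is summed concurrently.
	go sum(nums[:3], out)
	go sum(nums[3:], out)
	// Receiving from the channel blocks until both results arrive.
	a, b := <-out, <-out
	fmt.Println(a + b) // prints 21
}
```

That’s the whole concurrency toolkit in miniature: `go` starts a goroutine, channels synchronise and communicate.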

So how do I get started?

There’s a whole bunch of good videos out there. Below is a list I’d suggest you watch in the order listed, to give you a taste of the language.

Go: a simple programming environment

Good overview of Go and its tooling

 

A Tour of Go

Great explanation of Go’s concept of OO and interfaces.
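As a taster of what that video covers: Go interfaces are satisfied implicitly, with no “implements” keyword. A small sketch of my own:

```go
package main

import "fmt"

// Shape is satisfied implicitly: any type with an Area() method is a Shape.
type Shape interface {
	Area() float64
}

type Rect struct{ W, H float64 }

func (r Rect) Area() float64 { return r.W * r.H }

type Circle struct{ R float64 }

func (c Circle) Area() float64 { return 3.14159 * c.R * c.R }

func main() {
	// Rect and Circle never mention Shape, yet both slot into it.
	shapes := []Shape{Rect{W: 3, H: 4}, Circle{R: 1}}
	for _, s := range shapes {
		fmt.Printf("%.2f\n", s.Area()) // prints 12.00 then 3.14
	}
}
```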

 

Go Concurrency Patterns

Concurrency done easy. If you’ve ever done concurrency, this video will blow you away.

 

Go after 2yrs in Production

Impact of switching to Go on performance and cost savings.

OK, I’m convinced I want to get started…

Time to get your hands dirty… and take the interactive web-based tour at tour.golang.org

Once you’re bored with the tour and you want to start coding for real, follow these guides

  1. Install Go
  2. How to Write Go code
  3. Effective Go

That last link (Effective Go) will be your constant companion – bookmark it :-).

Should I buy a Go book?

Don’t bother: the online material is far better than any book you’ll find on Amazon.

What should my first project be?

Try building a client library for a popular web API service. This will let you play with different parts of Go and its standard library (goroutines, http, JSON marshalling).

What if I’m super-hardcore and like to TDD my code?

Use GoConvey, the best test runner on the planet.

 

Does Go have a Rails-like web framework?

No, but it has an excellent Sinatra-like framework. Check this blog post on Go web programming.


Building a Docker-based MySQL Server

This post continues my travels in learning Docker, with the intention of building a full-blown, distributable production Rails stack.

In this post I document my MySQL Server setup, at first without master/slave replication. I’ll leave master/slave replication for a separate blog post, using this one as the foundation.

I’ve learned with Docker that the trick in building a scalable database container is to separate software from the storage & config layers.

Installing and Configuring the Container

You could be up and running with a fully operational MySQL Server in a few minutes, assuming you’ve got an operational Docker host environment and have followed my recommendations on folder structure.

Download my MySQL Container into your Docker environment

docker pull caseblocks/mysql

Prepare the mapped folders needed by MySQL

mkdir -p /var/docker/mysql1/var/lib/mysql
mkdir -p /var/docker/mysql1/var/log/mysql
mkdir -p /var/docker/mysql1/var/run/mysqld
mkdir -p /var/docker/mysql1/etc-mysql/conf.d

Grab a copy of my my.cnf file.

wget https://gist.github.com/ijonas/6961052/raw/6330391b90e353bcabff418bd9f14a4b9bc1c517/my.cnf -O /var/docker/mysql1/etc-mysql/my.cnf

Now spin up the MySQL container in shell-mode, because we need to configure the database.

docker run -i -entrypoint "/bin/bash" -v /var/docker/mysql1/var:/var -v /var/docker/mysql1/etc-mysql:/etc/mysql -t caseblocks/mysql

And run these configuration steps:

chown mysql.mysql /var/run/mysqld/
mysql_install_db
/usr/bin/mysqld_safe &
sleep 5
echo "GRANT ALL ON *.* TO admin@'%' IDENTIFIED BY 'caseblocks' WITH GRANT OPTION; FLUSH PRIVILEGES;" | mysql

Change the username ‘admin’ and password ‘caseblocks’ to suit your own needs.

Now exit the shell-mode container and relaunch the container in daemon-mode.

docker run -d -v /var/docker/mysql1/var:/var -v /var/docker/mysql1/etc-mysql:/etc/mysql caseblocks/mysql

That’s it! You can find scripts to automate/repeat these steps via this gist.

Connecting to the MySQL Container

You can connect to the MySQL service using the same container image with a different entrypoint.

docker run -i -entrypoint="mysql" -t caseblocks/mysql -h 172.17.0.159 -uadmin -p

You can find out the IP address of the container using docker inspect and looking at the NetworkSettings part of the response.

How was the Container built?

The caseblocks/mysql container is of rather simple construction. The Dockerfile below, which builds the image, is your standard “install MySQL server”-type Dockerfile. All the important action happens during the configuration steps described above.

# MySQL service
#
# Version 0.0.1
FROM ubuntu
MAINTAINER support@caseblocks.com
RUN dpkg-divert --local --rename --add /sbin/initctl
RUN ln -s /bin/true /sbin/initctl
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y mysql-server
RUN apt-get clean
EXPOSE 3306
ENTRYPOINT ["/usr/sbin/mysqld"]

 

A Docker Container folder structure that’s flexible and scales

Docker doesn’t really impose a folder structure on how you organize your containers.

You typically want to separate your Docker development environment from your Docker runtime environment even if at the start they’re on the same host.

I store all my Docker container build files (Dockerfiles) as well as any supplementary scripts and config files in my docker folder off my home folder.

/home/me/docker
               /mysql
                  Dockerfile
                  my.cnf
                  prep-volumes.sh
                  mysql-setup.sh
               /mongodb
                  Dockerfile
                  mongodb.conf
               /redis
                  Dockerfile

The benefit of this approach is that you can manage the changes to these files via your favourite source code management system, e.g. Github.

My containerized databases have their data and logging disk volumes mapped into the container. Keeping the data & log files outwith the containers gives you lots of flexibility with regard to moving DBs between containers, as well as backup strategies.

My DB container data & log files are held below /var/docker on the Docker host:

/var/docker
  /mysql1
    /var
      /lib/mysql
      /log/mysql
      /run/mysqld
    /etc-mysql/conf.d
  /mysql2
    /var
      /lib/mysql
      /log/mysql
      /run/mysqld
    /etc-mysql/conf.d
  /mongodb1
    /data
  /mongodb2
    /data
  /mongodb3
    /data

In the example above, I’ve got MySQL1 and MySQL2 in a master/slave setup, and MongoDB1, MongoDB2, and MongoDB3 in a replica set, all containerized on the same host (not recommended for production ;-)).

This setup works for me. I’d be interested in finding out what other people are using.

Easy way to install Redis on Ubuntu 12.04

The steps below are the “easy” way to install Redis on an Ubuntu 12.04 system. Be aware that this method will install v2.2 of Redis, which is the version included in Ubuntu 12.04 at the time of writing.

If you want the latest and greatest Redis 2.6.x features, you’ll need to install from source, which is easy to do by following my other blog post – Install Redis 2.6 from source on Ubuntu 12.04 and running as a daemon.

Step 1 – Add necessary repos to your 12.04 system

Ubuntu 12.04 needs packages from the extended ‘universe’ to install Redis, so let’s add them with

echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list

Step 2 – Install Redis

Next, install Redis.

apt-get update
apt-get -y install redis-server

That’s it. You might want to tweak the Redis settings, in which case edit /etc/redis/redis.conf accordingly.

Good luck. Feel free to comment and ask questions.


Building a MongoDB Cluster using Docker Containers

This post explains the steps I’ve taken to build a cluster of 3 MongoDB servers, each of which is contained in a Docker container.

I’ve done this as an experiment in learning Docker, yet the resulting cluster can be used by anyone who wants to quickly bring up a MongoDB cluster to gain a deeper understanding of MongoDB replica sets etc.

Requirements & Assumptions

You need an operational Docker environment. I use an affordable VPS on Digital Ocean, running stock Ubuntu 12.04, into which I’ve installed Docker.

If you run into an AUFS-related error, follow the tips from this Stack Overflow post.

For the sake of brevity and sanity, my console output below assumes you’re logged in as root. If in doubt – sudo. :-)

Step 1 – Prepare the folder structure

I’ve documented my thoughts on how to best setup Docker’s folder structure in a separate post.

So let’s create the folders that will be mapped into the containers

mkdir -p /var/docker/mongodb1/data
mkdir -p /var/docker/mongodb2/data
mkdir -p /var/docker/mongodb3/data

Create a mongodb.conf file in the /home/me/docker/mongodb folder. The only thing worth changing is the replSet name, if you don’t like mine.

# mongodb.conf
dbpath=/data
logpath=/data/mongodb.log
logappend=true
replSet = cbrep

Copy the conf file into the data folders for our 3 MongoDB containers.

cp mongodb.conf /var/docker/mongodb1/data
cp mongodb.conf /var/docker/mongodb2/data
cp mongodb.conf /var/docker/mongodb3/data

Step 2 – Build the container

Download the Dockerfile from this gist or copy & paste it from below. It’s derived from the great blog post written by Arunoda Susiripala.

# MongoDB
#
# VERSION               0.0.1
#
# requires mongodb.conf @ https://gist.github.com/ijonas/6844358 

FROM ubuntu
MAINTAINER support@caseblocks.com

# make sure the package repository is up to date
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update

RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list

RUN apt-get update -y
RUN apt-get install -y mongodb-10gen=2.4.6

EXPOSE 27017
ENTRYPOINT ["mongod", "-f", "/data/mongodb.conf"]

Next step, build the container in the /home/me/docker/mongodb folder assuming that’s where you’ve placed your Dockerfile.

docker build -t yourname/mongodb .

Confirm the container was built by finding it in the image list with

docker images

Step 3 – Launch the containers

With all the prep done, you can launch the containers

docker run -d -v /var/docker/mongodb1/data:/data -t yourname/mongodb
docker run -d -v /var/docker/mongodb2/data:/data -t yourname/mongodb
docker run -d -v /var/docker/mongodb3/data:/data -t yourname/mongodb

Confirm they’re running with

docker ps

You should see similar output to

ID IMAGE COMMAND CREATED STATUS PORTS
5c249b3b5e21 caseblocks/mongodb:latest mongod -f /data/mong 12 hours ago Up 12 hours 49182->27017
dc33980b796a caseblocks/mongodb:latest mongod -f /data/mong 12 hours ago Up 12 hours 49181->27017
0bffc776aec2 caseblocks/mongodb:latest mongod -f /data/mong 12 hours ago Up 12 hours 49180->27017

Step 4 – Configuring the MongoDB cluster

You now need to find out the IP addresses of the 3 MongoDB containers, using each container id (which you can find in the docker ps output).

docker inspect 5c249b3b5e21

Amongst the big lump of JSON returned, you should see a NetworkSettings section

"NetworkSettings": {
    "IPAddress": "172.17.0.47",
    "IPPrefixLen": 16,
    "Gateway": "172.17.42.1",
    "Bridge": "docker0",
    "PortMapping": {
        "Tcp": {
            "27017": "49182"
        },
        "Udp": {}
     }
 },

Write down or copy and paste the IPAddress values.
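If you find yourself doing this often, the lookup is easy to script. Below is a Go sketch of my own that pulls the IPAddress out of inspect output shaped like the fragment above; note that on many Docker versions `docker inspect` actually wraps this object in a JSON array, so adjust accordingly.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// inspectEntry mirrors just the fields we need from `docker inspect`
// output (shape as shown above; field names may differ across versions).
type inspectEntry struct {
	NetworkSettings struct {
		IPAddress string
	}
}

// containerIP extracts the IP address from raw `docker inspect` JSON.
func containerIP(raw []byte) (string, error) {
	var e inspectEntry
	if err := json.Unmarshal(raw, &e); err != nil {
		return "", err
	}
	return e.NetworkSettings.IPAddress, nil
}

func main() {
	// Stand-in for the output of: docker inspect 5c249b3b5e21
	raw := []byte(`{"NetworkSettings": {"IPAddress": "172.17.0.47"}}`)
	ip, err := containerIP(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // prints 172.17.0.47
}
```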

Launch a mongo shell using a new container

docker run -i -t -entrypoint='mongo' yourname/mongodb 172.17.0.47/admin

Once in the shell execute the following MongoDB commands

rs.initiate()
rs.add('172.17.0.46')
rs.add('172.17.0.45')

Now you might have a problem with routing inside the cluster: the container from which you initiated the cluster will have used its hostname rather than its IP address. The other two containers need an IP address to route to, so let’s change that.

Retrieve the replicaset config

cfg = rs.conf()

You’ll get output similar to

{
  "_id" : "cbrep",
  "version" : 3,
  "members" : [
    {
      "_id" : 0,
      "host" : "5c249b3b5e21:27017"
    },
    {
      "_id" : 1,
      "host" : "172.17.0.46:27017"
    },
    {
      "_id" : 2,
      "host" : "172.17.0.45:27017"
    }
  ]
}

You’ll need to rewrite the first host entry in the members list above.

cfg.members[0].host = '172.17.0.47:27017'
rs.reconfig(cfg)

You should be all set now. You can confirm everything is running smoothly by checking the log files from your Docker host environment

tail /var/docker/mongodb1/data/mongodb.log
tail /var/docker/mongodb2/data/mongodb.log
tail /var/docker/mongodb3/data/mongodb.log

Feel free to leave comments and questions below. Good luck.

Install Redis 2.6 from source on Ubuntu 12.04 and running as a daemon

Update: if you want a quick and easy way to install Redis and can live with Redis 2.2, check out Easy way to install Redis on Ubuntu 12.04.

Documented here are the steps to get Redis 2.6.x running on Ubuntu 12.04 onwards using an init script (previous versions of Ubuntu should work too). The setup is intended for a developer desktop/laptop rather than production infrastructure.

As ever, first download and unzip Redis from here.

cd /tmp
wget http://redis.googlecode.com/files/redis-2.6.9.tar.gz
tar -zxf redis-2.6.9.tar.gz
cd redis-2.6.9
make
sudo make install

Your Redis binaries should now be located in /usr/local/bin.

To get an init script and Redis config working cleanly with this setup, download my init and config files from my Github ‘dotfiles’ repo. My init script and redis.conf are pretty standard – intended for general development purposes.

wget https://github.com/ijonas/dotfiles/raw/master/etc/init.d/redis-server
wget https://github.com/ijonas/dotfiles/raw/master/etc/redis.conf
sudo mv redis-server /etc/init.d/redis-server
sudo chmod +x /etc/init.d/redis-server
sudo mv redis.conf /etc/redis.conf

Before you can fire up the Redis server for the first time, you’ll need to add a redis user and prep a data and logging folder.

sudo mkdir -p /var/lib/redis
sudo mkdir -p /var/log/redis
sudo useradd --system --home-dir /var/lib/redis redis
sudo chown redis.redis /var/lib/redis
sudo chown redis.redis /var/log/redis

Also, you need to activate your Redis services init script by adding it to your system’s run-level configuration. That way the service will startup during the boot sequence and stop nicely during the OS’ shutdown procedure.

sudo update-rc.d redis-server defaults

You’re now ready to launch Redis server with

sudo /etc/init.d/redis-server start

Good luck!

Installing MongoDB 1.8.1 on Ubuntu 10.10 & 11.04 and running with an ‘init’ script.

Installing MongoDB 1.8.1, in my case as a developer database, is easy. This blog post just itemises all the steps so that you can pretty much blindly follow along. I’ll probably use these steps myself, as I seem to be doing this regularly ;-)

Download the 64bit Linux binaries from here and unzip the contents to /usr/local.

cd /tmp
wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-1.8.1.tgz
sudo tar -zxf /tmp/mongodb-linux-x86_64-1.8.1.tgz -C /usr/local

Setup some symbolic links.

sudo ln -s /usr/local/mongodb-linux-x86_64-1.8.1 /usr/local/mongodb
sudo ln -s /usr/local/mongodb/bin/bsondump /usr/local/bin/bsondump
sudo ln -s /usr/local/mongodb/bin/mongo /usr/local/bin/mongo
sudo ln -s /usr/local/mongodb/bin/mongod /usr/local/bin/mongod
sudo ln -s /usr/local/mongodb/bin/mongodump /usr/local/bin/mongodump
sudo ln -s /usr/local/mongodb/bin/mongoexport /usr/local/bin/mongoexport
sudo ln -s /usr/local/mongodb/bin/mongofiles /usr/local/bin/mongofiles
sudo ln -s /usr/local/mongodb/bin/mongoimport /usr/local/bin/mongoimport
sudo ln -s /usr/local/mongodb/bin/mongorestore /usr/local/bin/mongorestore
sudo ln -s /usr/local/mongodb/bin/mongos /usr/local/bin/mongos
sudo ln -s /usr/local/mongodb/bin/mongosniff /usr/local/bin/mongosniff
sudo ln -s /usr/local/mongodb/bin/mongostat /usr/local/bin/mongostat

The first “ln -s” above sets up a handy symbolic link between the versioned mongodb folder and its unversioned counterpart. When 10Gen release updates, say version 1.8.2, all you need to do is download, unzip, and link the ‘1.8.2 mongodb folder’ to the unversioned folder and, hey presto, everything should just work.

To get an init script working cleanly with this setup, download mine from my Github ‘dotfiles’ repo. Please note – my init script enables journaling and the REST interface (on line 51).

wget https://github.com/ijonas/dotfiles/raw/master/etc/init.d/mongod
sudo mv mongod /etc/init.d/mongod
sudo chmod +x /etc/init.d/mongod

You’ll need to add a mongodb user and prep some folders

sudo useradd mongodb
sudo mkdir -p /var/lib/mongodb
sudo mkdir -p /var/log/mongodb
sudo chown mongodb.mongodb /var/lib/mongodb
sudo chown mongodb.mongodb /var/log/mongodb

Also, you need to activate your MongoDB service’s init script by adding it to your system’s run-level configuration. That way the service will startup during the boot sequence and stop nicely during the OS’ shutdown procedure.

sudo update-rc.d mongod defaults

Lastly to launch MongoDB

/etc/init.d/mongod start

Good luck!

UPDATE: Since April 6, Ubuntu now has prefabbed packages containing MongoDB 1.8.1, maintained by 10Gen. See the instructions below.

Barbler: Integrating JRuby Warbler into Apache Buildr

After having used Apache Buildr for a week and extracting our Warbler code into a bona fide extension, I’m sharing it with the community under the fetching name Barbler.

Barbler integrates itself between the build and packaging stages of the Apache Buildr lifecycle and makes calls into Warbler to automate WAR-file creation. Warbler does a really good job of packaging standalone Rails apps. Unfortunately I needed something more integrated into our application build process: something that pulls in our Spring Framework-based Java code, Scala code, and Rails application and produces a single WAR-file containing all dependent libraries, Rails code, XML deployment descriptors and Java class files. Apache Buildr does everything other than the Rails packaging. Barbler steps in to provide that missing piece.

Create a barbler.rb file in your project folder (alongside your buildfile) and copy the following contents into it:

# Barbler 
# is an Apache Builder extension to integrate the JRuby Warbler gem.
# For tips on how to use Barbler checkout http://www.denofubiquity.com/ruby/barbler/
# 
# This code is licensed under the Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0.txt)
# Please distribute Barbler code with this code intact.
# (c) Ijonas Kisselbach 2009

require 'warbler'

module Barbler
  include Extension
  
  first_time do
    # Define task not specific to any project.
    desc 'Warbles Rails sourcecode tree into a staging folder'
    Project.local_task('warble' => 'build') { |name| "Warbling #{name}" }   
  end
  
  before_define do |project|    
    project.task('warble'=>project.task('build'))
    project.group ||= project.parent && project.parent.group || project.name
    project.version ||= project.parent && project.parent.version    
  end
  
  # To use this method in your project:
  #   warble(:rails => path_to(:rails), :tasks => [:app, :public])
  def warble(*args)
    options = args.pop
    rails_path = options[:rails]
    warble_tasks = options[:tasks]
    
    # Define the warble task for this particular project.
    Rake::Task.define_task 'warble' do |task|
      # get all the important components from the Rails GUI into the staging directory
      puts "Warbling #{rails_path}"
      Dir.chdir(rails_path) do
        warble_cfg = eval(File.open("config/warble.rb") {|f| f.read})
        Warbler::Task.new(:war, warble_cfg)
        warble_tasks.each {|task| Rake::Task["war:#{task}"].invoke}
      end
    end
  end
  
end

class Buildr::Project
  include Barbler
end

Load barbler.rb at the top of your buildfile, then define your warble task using the following line

warble(:rails => path_to(:rails), :tasks => [:app, :public])

whereby the first parameter is a path string to where your Rails code is located. If you locate your code in src/main/rails, you’d use

warble(:rails => path_to(:source, :main, :rails), :tasks => [:app, :public])

The second parameter is the list of Warbler tasks that you’d like to have executed. See the Warbler documentation for more help, or check out the Warbler source code – it’s very readable.

Integrating Warbler and Buildr into Scala, JRuby, Java and Rails bliss

At Vamosa we’re big fans of the Java Virtual Machine. It allows us to use the right tool for the job and deliver a high-quality, consistent product for our end-users, whilst still getting the most out of our developers. For years we were a .NET and Java shop: our GUI developers would work in Visual Studio, writing a C# application that talked to the Java backend via SOAP web services. In June 2008 we decided to abandon our .NET desktop GUI and redevelop and expand its functionality, delivered to the end-user’s browser using HTML+CSS+JavaScript from our Java backend.

We spent 7 months hacking away trying to get Google Web Toolkit to behave before abandoning ship a month ago and switching to Rails. We’d already had some success building an MRI-based RubyOnRails application called Vamosa Check and Fix. Our GUI developer pool was loving the ease of web development that comes with Rails, and really hated the total lack of productivity with GWT (worthy of a separate post).

Meanwhile I was experimenting with Scala – IMO the Java language reinvented for the 21st century. So there we were, steaming ahead with JRubyOnRails, old-skool Java Spring-based code, and sexy new Scala code. Three languages, one set of JVM byte code. So how do you build and package all this code?

Your options are:

  • Apache Maven – horrible for legacy projects that don’t build according to Maven doctrine.
  • Apache Ant + Ivy – might be an option to you.
  • Apache Buildr – JRuby-based build system

For us, Apache Buildr was the best fit because it’s a DSL based on Rake, which happily runs on JRuby. It provides the dependency management that kept us coming back to Maven (and quickly running away again). Being JRuby/Rake-based allows for tight integration with Warbler, the JRubyOnRails WAR-packaging gem. And lastly, there’s not a shred of XML in sight. It’s a DSL, so the buildfile has a nice declarative feel to it, yet can be modified quickly using standard Ruby syntax to provide branching and looping. All the other build systems use XML, and then try to retrofit branching and looping, e.g. using special XML elements.

Today we have all our source code in the following folder structure:

project
|-- src
|   |-- main
|   |   |-- java
|   |   |-- resources
|   |   |-- scala
|   |   `-- webapp
|   `-- test
|       |-- java
|       |-- resources
|       `-- scala
`-- rails
    |-- app
    |-- config
    |-- db
    |-- doc
    |-- lib
    |-- log
    |-- nbproject
    |-- public
    |-- script
    |-- test
    |-- tmp
    `-- vendor

and our Apache Buildr buildfile in the root of the project tree looks like this:

require 'buildr'
require 'buildr/scala'
require 'rubygems'
require 'warbler'

# define the version of the Vamosa product
VERSION_NUMBER = '3.0.0'

# define repositories from which artifacts can be downloaded
repositories.remote << 'http://www.ibiblio.org/maven2/'
repositories.remote << 'http://scala-tools.org/repo-releases'

# define artifacts that are not available from remote repositories
artifact("javax.jms:jms:jar:1.1").from(file("libs/javax.jms.jar"))

# define the artifacts that the project depends on
SCALA         = group('scala-library', 'scala-compiler', 'axiom-dom', :under=>'org.scala-lang', :version=>'2.7.5')
SCALATEST     = [ 'org.scala-tools.testing:specs:jar:1.5.0','org.scalatest:scalatest:jar:0.9.5']
XUNIT         = ["junit:junit:jar:4.4", "org.dbunit:dbunit:jar:2.2.3", "org.mockito:mockito-all:jar:1.7" ]
JDBC_DRIVERS  = ["mysql:mysql-connector-java:jar:5.1.6"]
HIBERNATE     = [ "org.hibernate:hibernate-core:jar:3.3.2.GA",
  "org.hibernate:hibernate-annotations:jar:3.4.0.GA",
  "org.hibernate:hibernate-commons-annotations:jar:3.3.0.ga",
  "org.hibernate:hibernate-search:jar:3.1.0.GA",
  "org.hibernate:hibernate-ehcache:jar:3.3.2.GA",
  "org.hibernate:jtidy-r8:jar:20060801",
  'c3p0:c3p0:jar:0.9.1.2',
  'commons-collections:commons-collections:jar:3.2.1',
  'commons-lang:commons-lang:jar:2.4',
  'net.sf.ehcache:ehcache:jar:1.6.2',
'javax.persistence:persistence-api:jar:1.0']
# DELETED FURTHER ARTIFACTS FOR SAKE OF BREVITY...

# now lets do some work
platforms = ["mysql", "oracle", "mssql", "db2"]
platform = "mysql"
desc 'Enterprise Content Governance Platform'
define 'ContentMigrator' do
  project.version = VERSION_NUMBER
  project.group = 'com.vamosa'
  manifest['Copyright'] = 'Vamosa Ltd. (C) 2003-2009'
  compile.options.target = '1.5'

  compile.with HIBERNATE, SPRING, COMMONS, LOGGING, CONTENT_PARSER, QUARTZ, J2EE_API, SCRIPTING, SOAP, JFREE_CHART, JAVASSIST, LUCENE, XALAN
  test.with XUNIT, SCALATEST
  test.using :scalatest

  # get all the important components from the Rails GUI into the staging directory
  Dir.chdir("rails") do
    puts "Changed current directory to: #{Dir.pwd}"
    warble_cfg = eval(File.open("config/warble.rb") {|f| f.read})
    Warbler::Task.new(:war, warble_cfg)
    Rake::Task['war:app'].invoke
    Rake::Task['war:public'].invoke
  end
  puts "Changed current directory to: #{Dir.pwd}"

  # package it up
  package(:war, :file => _("target/#{id}-#{VERSION_NUMBER}-#{platform}.war")).tap do |task|
    task.include 'war/*'
    task.include "src/main/resources/#{platform}.session-factory.xml", :as=>'WEB-INF/session-factory.xml'
    task.include 'src/main/resources/jboss.jms-context.xml', :as=>'WEB-INF/jms-context.xml'
  end
end

The key things we like about this setup are:

  1. Easily handling dependency artifacts like the Sun API jars locally. For example, we store javax.jms.jar in our Git source repo, in the project’s libs/ folder, and then point to it using artifact("javax.jms:jms:jar:1.1").from(file("libs/javax.jms.jar")).
  2. Integrating Warbler tasks and cherry-picking the ones you want to run: in our case just war:app & war:public, but not war:xml, because our web.xml is stored in src/main/webapp/WEB-INF instead.
  3. It’s Ruby, so we can use loops & branching, such as:
%w(mssql mysql oracle db2).each do |platform|
  package(:war, :file => _("target/#{id}-#{VERSION_NUMBER}-#{platform}.war")).tap do |task|
    task.include 'war/*'
    task.include "src/main/resources/#{platform}.session-factory.xml", :as=>'WEB-INF/session-factory.xml'
    task.include 'src/main/resources/jboss.jms-context.xml', :as=>'WEB-INF/jms-context.xml'
  end
end

Apache Buildr isn’t perfect. There are still some weird annoyances around resolving transitive dependencies, e.g. when hibernate.jar in turn depends on commons-logging.jar. But if you find yourself missing commons-logging.jar, it’s easily added.
If something doesn’t work the way you think it ought to, you can easily dig into Buildr’s very readable Ruby code, something I couldn’t do with either Maven or Ant, and either customise it or find a quick workaround. You don’t have this black-box barrier between your buildscript and its output.

UPDATE: A nicer way of integrating Warbler and Buildr can be achieved using my Buildr extension, Barbler.

JRuby-based Chat Server using Terracotta

Two technologies are currently capturing my imagination, JRuby and Terracotta. JRuby is simply for my purposes the most effective language to tackle most of my computing challenges. Terracotta allows me to take those problems and solve them on large clusters of cheap servers in clouds such as those provided by Amazon EC2.

Getting started with JRuby+Terracotta requires a bit of trial and error, as it’s not as well documented as good old Java+Terracotta. The only post you’re likely to find is one by Jonas Boner (see below). During subsequent revisions of both Terracotta and JRuby, the example stopped working. These files bring that example up to date for JRuby 1.3.1 and Terracotta 3.0.1.

You can download the revised source code from my github account. You will need installs of both JRuby and Terracotta with JRUBY_HOME and TC_HOME pointing to the base folders of both products respectively, e.g.

export JRUBY_HOME=$HOME/java/jruby-1.3.1
export TC_HOME=$HOME/java/terracotta-3.0.1

Once these environment variables have been set up, you can start a Terracotta server, followed by launching multiple clients, by typing:

./chat.sh

Background

The key to fixing the example was resolving the java.lang.NoClassDefFoundError: com/tc/object/event/DmiManager, caused by the references to com.tc.object.bytecode.Manager in the terracotta.rb file:

WRITE_LOCK = com.tc.object.bytecode.Manager::LOCK_TYPE_WRITE
READ_LOCK = com.tc.object.bytecode.Manager::LOCK_TYPE_READ
CONCURRENT_LOCK = com.tc.object.bytecode.Manager::LOCK_TYPE_CONCURRENT

Replace the above fragment with:

WRITE_LOCK = 2
READ_LOCK = 1
CONCURRENT_LOCK = 4

and the whole example springs to life. My next problem to solve is that of Rubifying the WorkManager examples from chapter 11 of the “Definitive Guide to Terracotta” book.

Useful Links