
Golang Docker Harness

tldr;

Sometimes you just need to call out to a real database instead of mocking it; to that end, I created this module to quickly create and dispose of docker containers for tests.

The Problem

Generally when writing unit or integration tests, you have some level of mocking. Eventually, however, you just have to test the SQL you’re executing to know that it does what you want, edge cases included, especially when you’re writing more complex SQL than simple CRUD operations.

I’ve had success at prior startups using a custom docker harness to launch, configure, manage, and clean up docker containers on request for tests. If the image is already local, I found it to be quick and rarely an issue, even across hundreds of tests.

Creating and managing a docker container, however, can be a bit of a pain in the butt. When a recent project required testing across multiple databases, I created a custom docker harness for reuse; I even built in a couple of “default” templates for databases like PostgreSQL and Redis. You can find it here.

The Solution

Let’s look at a simple example:

package main

import (
	"fmt"
	"time"

	harness "github.com/hlfshell/docker-harness"
)

func main() {
	container, err := harness.NewContainer(
		"TestContainer",
		"postgres",
		"latest",
		map[string]string{
			"3306": "",
		},
		map[string]string{
			"POSTGRES_USER":     "postgres",
			"POSTGRES_PASSWORD": "postgres",
			"POSTGRES_DB":       "postgres",
		},
	)
	if err != nil {
		panic(err)
	}

	err = container.Start()
	if err != nil {
		panic(err)
	}
	defer container.Cleanup()

	running, err := container.IsRunning()
	if err != nil {
		panic(err)
	} else if !running {
		fmt.Println("Container did not start properly")
	} else {
		fmt.Println("Container is running!")
	}

	fmt.Println("Container is listening on ports:", container.GetPorts())

	time.Sleep(3 * time.Second)
}

Here we’re creating a new PostgreSQL container (of tag latest), binding the database’s default port of 5432 to a random host port, checking that it’s running, and deferring a proper cleanup of the container after 3 seconds of runtime. Pretty simple.
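From there, connecting is just a matter of using whichever host port got bound. A minimal sketch, assuming GetPorts reports the host port mapped to container port "5432" and that you have a postgres driver such as github.com/lib/pq registered; the helper name and DSN details below are mine, not the library’s:

// Assumes imports: "database/sql", "fmt", and a registered postgres driver
// such as _ "github.com/lib/pq" (the driver choice is an assumption).
func connectDirectly(hostPort string) (*sql.DB, error) {
	// hostPort stands in for whatever GetPorts reports for container port "5432".
	dsn := fmt.Sprintf(
		"host=localhost port=%s user=postgres password=postgres dbname=postgres sslmode=disable",
		hostPort,
	)
	return sql.Open("postgres", dsn)
}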

A typical test would follow much of the same pattern:

func TestSomethingImportant(t *testing.T) {
    container, err := harness.NewContainer(...)
    require.Nil(t, err)
    require.NotNil(t, container)
    defer container.Cleanup()

    ...
}
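If several tests need the same container, the boilerplate factors out nicely into a small helper. Here’s a minimal sketch; the helper name, the *harness.Container return type, and the t.Cleanup wiring are my own guesses rather than anything the library prescribes:

// Assumes the same imports as above plus "testing".
func newTestPostgres(t *testing.T) *harness.Container {
    t.Helper()

    container, err := harness.NewContainer(
        t.Name(),
        "postgres",
        "latest",
        map[string]string{"5432": ""},
        map[string]string{
            "POSTGRES_USER":     "postgres",
            "POSTGRES_PASSWORD": "postgres",
            "POSTGRES_DB":       "postgres",
        },
    )
    require.Nil(t, err)
    require.Nil(t, container.Start())

    // Tear the container down automatically when the test (or subtest) ends.
    t.Cleanup(func() { container.Cleanup() })

    return container
}

Each test then just calls the helper and gets a disposable database, with teardown handled for it.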

Taking it a step further, I have some premade databases ready to go. These handle a few extra steps for you, such as opening a database connection and ensuring the database is pingable and actually ready (not just that the container is up).

func TestSomeSQLQuery(t *testing.T) {
    container, err := postgres.NewPostgres(
        t.Name(),
        "",
        "username",
        "super-secret-password",
        "database-name",
    )
    require.Nil(t, err)

    err = container.Create()
    require.Nil(t, err)
    defer container.Cleanup()

    db, err := container.ConnectWithTimeout(3 * time.Second)
    require.Nil(t, err)

    ...
}
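The elided body is whatever SQL you actually want to exercise. As a rough sketch, and assuming ConnectWithTimeout hands back a standard *sql.DB (worth confirming against the package), it might continue something like:

    // Hypothetical table and queries - exercise whatever SQL you care about.
    _, err = db.Exec(`CREATE TABLE widgets (id SERIAL PRIMARY KEY, name TEXT NOT NULL)`)
    require.Nil(t, err)

    _, err = db.Exec(`INSERT INTO widgets (name) VALUES ($1)`, "gizmo")
    require.Nil(t, err)

    var count int
    err = db.QueryRow(`SELECT COUNT(*) FROM widgets`).Scan(&count)
    require.Nil(t, err)
    require.Equal(t, 1, count)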

Here we’re creating the database, requesting a connection, and handling our test as expected. The cleanup process properly handles disconnecting the connection, removing the container, and so on. Since the create function internally ensures that the database is up, you can essentially set-and-forget it. Another example with Redis:

func TestSomeRedisQuery(t *testing.T) {
    container, err := redis.NewRedis(t.Name())
    require.Nil(t, err)

    err = container.Create()
    require.Nil(t, err)
    defer container.Cleanup()

    client, err := container.ConnectWithTimeout(3 * time.Second)
    require.Nil(t, err)
    
    ...
}

…same simplicity.

At the time of this writing I have pre-made setups for:

  • PostgreSQL
  • MySQL
  • Redis
  • Memcached

…and I have a need for, and thus plans to add, a few vector databases soon too.

Going bigger

Where this idea gets really useful is when you start chaining these together. Without getting into specifics, we had local servers on robots that phoned home, and we wanted to emulate tens of thousands of them for load testing. On the software side, these “robots” consisted of multiple services coordinating through a single cloud-centric service. It was simple to use this pattern to create a single function that builds a robot container, which in turn uses Go’s wonderful concurrency features to boot the required services simultaneously, delaying only where the dependency chain demands it. With this pattern it was easy to write deployment servers that emulate a massive fleet to hammer our test servers.
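As a very rough sketch of that shape (not the actual robot code; the subpackage import paths, names, and credentials here are all assumptions), booting a couple of dependencies concurrently with an errgroup looks something like this:

package main

import (
	"fmt"
	"time"

	"golang.org/x/sync/errgroup"

	// Import paths for the premade databases are assumptions on my part -
	// check the repository for the actual package layout.
	postgres "github.com/hlfshell/docker-harness/postgres"
	redis "github.com/hlfshell/docker-harness/redis"
)

// bootRobot emulates a single "robot": its datastore dependencies are
// created and readied concurrently, and anything that depends on them
// would start only once group.Wait() returns.
func bootRobot(name string) error {
	pg, err := postgres.NewPostgres(name+"-pg", "", "robot", "robot", "robot")
	if err != nil {
		return err
	}
	cache, err := redis.NewRedis(name + "-redis")
	if err != nil {
		return err
	}

	var group errgroup.Group
	group.Go(func() error {
		// Create ensures the database is actually up, not just the container.
		if err := pg.Create(); err != nil {
			return err
		}
		_, err := pg.ConnectWithTimeout(10 * time.Second)
		return err
	})
	group.Go(func() error {
		if err := cache.Create(); err != nil {
			return err
		}
		_, err := cache.ConnectWithTimeout(10 * time.Second)
		return err
	})
	return group.Wait()
}

func main() {
	// Boot a (tiny) fleet concurrently; scale the count up for load testing.
	var fleet errgroup.Group
	for i := 0; i < 10; i++ {
		name := fmt.Sprintf("robot-%d", i)
		fleet.Go(func() error { return bootRobot(name) })
	}
	if err := fleet.Wait(); err != nil {
		panic(err)
	}
}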

That’s it

Hope this proves useful for some of you!