An animation I made for Asteroid Day to continue my space-themed art streak! Thanks to my friend Stefanie for the suggestion! Please click that link and check out her absolutely stunning art, you won’t regret it.


Prints are available through RedBubble, INPRNT and coming soon to Displate.


Observatory Timelapse


Celestial 1

Only a few days after the last inspiring pixel dailies prompt, another one appeared that I couldn’t resist - “Celestial”. I had three ideas for it immediately but I only had time to do the one above on the day. I’m glad I spaced them out anyway because I think I achieved a lot with the extra time.


The second idea I worked on was originally only a planet-rise over some mountains, but it evolved quite a bit as I worked on it until it was about spaceships racing over a peaceful alien village.

I’m quite pleased with the palette and animation on this one.

Celestial 2

The final concept I only completed yesterday, a view of the night sky through some trees.


Prints are available through RedBubble (linked individually above), and Displate.


Celestial 1


Available from 2020-06-25:

Celestial 2

Cosmic Eye

Cosmic Eye

I got back into the pixel dailies a couple of days this week. I’ve never used them as an actual daily practice, but when I see a theme I like I jump in. This day, the theme was “Eye”, and I really like eyes.

Unfortunately I was pushed for time that day so the result is a bit rough - however, I think it gets the idea across, which is “eyes which are full of stars and made of stars and also stars were there”.

I started this one out with my drawing tablet, which I don’t usually use for pixel art and am not very good with. I’m trying to get away from using so many rigid straight lines, and to treat pixel art more like regular painting. I had to switch to the mouse towards the end for the finer details, but it’s a start.


Cosmic Eye

Devs - Spirituality as a Service


This post contains spoilers for the TV show “Devs”

I liked Devs a lot. It looks at the quasi-religious reverence in which tech entrepreneurs are held in some quarters (most notably amongst themselves, perhaps) and asks, what if this but literally? What if these people were literally gods, or creating a god?

The plot centres on a software engineer named Lily, whose boyfriend is murdered by their boss, Forest, after attempting to steal some code from the company they work for. The code in question is for the Devs system - a quantum simulator that extrapolates the past and future events of the entire universe from any sample of matter. Lily becomes suspicious of the circumstances of her boyfriend’s death, which is made to look like a suicide, and starts to dig around.

Unfortunately much of the plot, and particularly the climax, rests on a concept that I found hard to suspend my disbelief about (and I don’t mean the premise of the Devs system).

Several of the main characters are aware of future events, up to a certain point, thanks to their quantum computer’s simulations. They do not attempt to alter their behaviour in even the smallest way, even just to see if it is possible, instead slavishly repeating every word and action they’ve observed.

If it were just Forest, and the lead systems designer, Katie, who acted like this, it might be understood as a consequence of blind faith, or a wilful misunderstanding of causality because reality doesn’t suit their purposes. Forest is single-minded in his pursuit of this technology because he believes it can resurrect his dead daughter - Devs is his church, determinism is the creed, and anything that calls it into question is heresy.

But this notion is dispelled in a scene where a roomful of people are shown a simulation of a few seconds into the future, and mirror it exactly - apparently it is actually a feature of this universe that it is actively difficult to behave contrary to the prediction. I think the reality would be the opposite - it would actually be difficult not to act differently once you were aware of future events. I think you would do so instinctively, and accidentally. It wouldn’t be a violation of causality, because the simulation would also be a cause, with its own effects.

So this concept strains credibility, and works only on an allegorical level - the low-level developers are dazzled by a brief tech demo and its promises, while the higher-ups are simultaneously in thrall to their own hype and aware of the lies it is based on and the limits of their knowledge.

It also makes the climax of the show absurdly predictable. As soon as we hear that the simulation breaks down at a certain point, and it has something to do with Lily, we know that Lily is going to do something that contradicts the predictions of the simulation. None of the supposedly smart characters in the show demonstrate any awareness of this obvious fact, and it’s frustrating. It is only redeemed because seeing the climax coming reflects the characters’ foreknowledge of the future, in a way.


Overall, it’s interesting enough and well enough written that these problems are easy to look past. Some of the imagery is fantastic, such as the would-be god-developers working in a giant fractal computer floating in a vacuum, completely isolated from the world they’re trying to understand. It’s also a tonal masterpiece, full of haunting establishing shots, temple-like sets, and an unsettling soundtrack. Worth watching for that reason alone, to be honest.

Ludum Dare 46 Results

The Ludum Dare 46 results were published yesterday, and my game did quite well, placing 109th overall and 14th in the “Mood” category, as well as 120th and 121st in graphics and audio respectively. In the largest ever Ludum Dare, those are pretty decent placings I think, despite not breaking the top 100.

Category     Rating  Placing  Percentile
Overall       4.136      109        96th
Fun           3.523      819        77th
Theme         4.14       279        92nd
Innovation    3.86       247        93rd
Humor         3.656      365        89th
Graphics      4.477      120        96th
Audio         4.102      121        96th
Mood          4.523       14        99th


I always feel that the real competition in the Ludum Dare is against myself - just trying to do a little bit better and learn a bit more each time. As such, here’s some indication of my LD result trends over the years.

Ratings Graph Placings Graph Percentiles Graph

Nice upward trends! Note that I was only responsible for the art for “Claustrophobia” and “Rattendorf”, so I can only take partial credit for the overall and mood ratings of those.

The real learning experience this time around was on the audio. I’ve only done the audio for six of the nine Ludum Dares I’ve entered, so I left it out of the graphs above.

Ratings Graph

Looks like I really cranked it up a notch this time after coasting for a long while. Nice.

Moar Gophers

I haven’t decided yet if I’m going to take the game further. I quite like the concept and I certainly have some ideas for it. I’ll probably finish off my gopher renderer and phlog generator before I decide, and then I can do a devphlog for it :D

You can still play the jam version for now, if you missed it.


Overlooking the city

Gophers is my entry for Ludum Dare 46, the most recent of the bi-annual Ludum Dare game jams. It is a short adventure game about maintaining a gopher network in a post-apocalyptic world.

The basic concept is one I’ve been kicking around for a while as a sort of casual RPG/survival game about maintaining computer networks on scavenged technology, so it came to mind immediately when I saw the theme (“Keep it alive”).

I’ve been really interested lately in gopher and other low-overhead technologies, and what the internet would look like if the industries that sustain it collapsed. I’d previously envisioned a relatively cheerful solarpunk game about connecting distant sustainable communities, but I think it took on a much darker tone because of recent events.


I did all the art in Pyxel Edit as usual. My goal was to keep everything abstract and as high-contrast and readable as possible while still allowing for a nice parallax cityscape. I started with a mock-up of the exterior scene, and then essentially flipped the background and foreground colours from that for the bunker scene. I only used 7 colours in the end.

Bunker Scene

I put together a timelapse of the art so you can see the whole process:



The only reason I considered this a viable idea was that I had previously developed a cutscene graph editor plugin for Godot. It was untested in any game, but I thought it would give me enough of a leg up that I would have time for the art and writing.

Graph editor

So in effect, the “gopher network” in the game is actually a dialogue tree!

Actually using the editor in a game did reveal some issues with it, but nothing significant enough to prevent me from finishing - and now I have some ideas on what needs work before I use it for another game!

I also took some code from a previous game of mine for doing the menus and dealing with the settings. Every bit helps when you’re entering the jam solo.

One thing that really came together for me in this jam was using coroutines to manage sequences of events. I’ve always struggled to wrap my head around them previously for some reason, and would clumsily hook up signal handlers for every step. Using the yield statement in Godot made handling interactions much easier and quicker to write.

func _on_Terminal_clicked(walk_target, face_direction):
    # Wait for the player to finish walking to the terminal
    yield(_player, "arrived_at_destination")
    # Record where the player should reappear on returning to the bunker
    GameController.set_spawn_location("bunker", "terminal")
    GameController.set_spawn_direction("bunker", "right")
    # Wait for the fade to black before changing scenes
    yield(FadeMask, "fade_in_complete")
    # Switch to the browser scene
    yield(FadeMask, "fade_out_complete")

Sound Effects

The most exciting part of working on this game, for me, was doing the sound effects. I bought a fancy mic a while back (a Røde NT-USB) to do foley SFX rather than my usual SFXR beeps and boops, but this was the first chance I’ve had to try it out.

My foley kit, or part of it at least

For the Geiger counter sounds I ran my finger over the teeth of a comb. For the bunker door, I rubbed a hammer and a spanner together in various ways. For the dripping sound in the bunker, I just used an eyedropper to drip water into a glass. The footsteps are real footsteps that I recorded, and the cloth sounds when you’re walking around the exterior are me crinkling a vinyl jacket. It was a lot of fun to record all these and I don’t think I was even being all that creative. I couldn’t figure out how to do buzzing or flickering sounds for the electric light within the time I had though, unfortunately.

One big problem I encountered was that my apartment is apparently incredibly noisy, as am I. It was a windy day and the shutters on my window were banging constantly, my neighbours were going about their noisy lives, oblivious, and my body stubbornly refused to go without oxygen during the recordings. Noise reduction in Audacity helped a bit (make sure you record periods of “silence” to enable this), but there are definitely some extra environmental sounds in there. Thankfully I think they mostly just appear as mysterious underground reverb or get buried by other things. It’s something I’m definitely going to have to think about for next time.

I did a bunch of post-processing in Audacity to pick the best bits out of the recordings, and make things sound better. I had to reduce the pitch on the bunker door sound to make it sound heavier, for example.


I was so proud of the sound effects that I almost wasn’t going to do any music, but I’m glad I did. I got to it in the last few hours of the jam, so I had to keep it very simple. It’s mostly just the notes of a Dmin7 chord played in a few different arrangements on pad instruments, with some slow bass drums coming in and out. The title screen music layers a couple of different pads as well as a Rhodes doing sus4 arpeggios from each note of the chord.

I put everything together in LMMS. I spent a good chunk of time experimenting with different instruments so even though it’s really minimalistic it still took a while!

Abandoned Ideas

I had planned several other game elements, including the protagonist saying things to himself (or the player), and another type of interaction involving connecting cables and swapping out computer components.

A full game would probably have more complex survival elements instead of a simple timer, and would see you having to scavenge in the environment for computer equipment and other supplies.

We’ll see if anything like that comes to fruition in the future!

For All Mankind

Red Moon

This post contains spoilers for the TV show “For All Mankind”

“For All Mankind” is a strange show. It reimagines the space race of the late 1960s in such a way that the USA is the underdog, with the USSR beating them to the moon by a month. While NASA’s failures are compounded by the crash-landing of the Apollo 11 lander, the Soviets rack up another victory when they land the first woman on the moon. Eventually the Americans get their act together and land a woman on the moon as well, and from that point on the two superpowers are neck and neck in space.

The strange thing about this is the extent to which it reflects reality, but just displaces it in time. The USA were playing catch-up for much of the space race, with the USSR achieving all the important early milestones: first artificial satellite, first animal in orbit, first human. The moon landing has so overshadowed those achievements in the popular consciousness that it is the only conceivable starting point for an alternate history like this. By giving it to the USSR, the moon landing becomes Sputnik.

The USSR did achieve another first of particular relevance to this show: they put the first woman into orbit, in 1963. Though female cosmonauts were not a permanent feature of the Soviet space program, female astronauts were not a part of the US space program at all, and they didn’t put a woman into space until 20 years later.

Interestingly, though the fictional Soviet moon landing featured an actual cosmonaut (Alexei Leonov, who conducted the first spacewalk in 1965), the female cosmonaut is not Valentina Tereshkova, the first woman in space, nor any of the women in her program, but a completely fictional character. The show has no problem giving a nod to Mercury 13 candidate Jerrie Cobb in the form of fictional Molly Cobb, but the Soviet women receive no such acknowledgement.

It’s not all bad. The premise feels like it is asking us to celebrate the USA for an egalitarianism that it never possessed, but the drama doesn’t necessarily reflect that. The women face opposition and scepticism as to their abilities - maybe not to the extent that they would have in reality, but it’s there. Gay characters have to live their lives in secret without any attempt to pretend that it could have been otherwise. America’s continued participation in the space race is unequivocally driven by militarism and suspicion. The Soviet cosmonauts even get a few humanising moments, but they are ultimately cast as a sinister other.

It is sad that even now, nearly three decades on from its collapse, the Soviet Union can only ever be condemned for its failures, never acknowledged for its accomplishments. I suppose this show goes further than most in that regard, but it maintains an unquestionably American perspective, with fictional Soviet victories serving merely to encourage America on to even greater heights. It would be nice to see something from the other side some time.

Embedding SVGs in Pelican

In my inaugural post I mentioned that one problem I had encountered while designing this blog was styling the SVG icons. I had grabbed a bunch of the individual icon files from Font Awesome, but because of the way SVGs, CSS and HTML interact, I wasn’t able to colour them directly using CSS color or fill properties, and instead had to use filter properties (which I calculated using this tool, so it wasn’t too much of a hardship).

I also didn’t particularly like that retrieving the icons involved numerous separate requests, nor the visible “pop-in” in Firefox that resulted from having them referenced as external files. The files are tiny, with the request overhead often as large as or larger than the files themselves.

A further advantage that I was missing out on by not using Font Awesome as intended was that I couldn’t use their handy <i> tag shortcuts for specifying the icons to use.

Now, I have taken steps towards solving all of these many problems!

Just use Font Awesome normally you weirdo

Let’s back up a sec and talk about why I didn’t just use Font Awesome as intended in the first place (yes, tl;dr: it is probably because I’m a weirdo).

Font Awesome has two ways that it can work: Web Fonts + CSS, or SVG + JavaScript. The former would involve retrieving an additional CSS file or two, as well as a couple of web fonts. The web font for the solid collection alone is 79.4KB - larger than anything else on this website. The JavaScript that would be required for the other method would likely be approaching 1MB in size - larger than this entire website so far! I want a lean, fast-loading, low-power website, and these approaches seem entirely at odds with those goals.

It also struck me as odd to be statically generating a site, yet also having the client browser swapping in SVG images. I’ve nothing against JavaScript, but clearly this is work that can be done in advance!

Doesn’t caching solve this problem?

Well… maybe? In some cases? But not necessarily.

The average size of an icon in Font Awesome’s “solid” collection is 660B. A visitor would have to encounter over 1500 such embedded icons before downloading the JavaScript and caching it would be cheaper. The Web Fonts are much better, with caching the separate files becoming worthwhile after only 214 icons. That’s about 5 views of this blog’s index page, or 15 individual posts.

As such, if somebody reads 16 posts on this blog, they will have transferred more data than they would have if I’d used the Font Awesome web fonts. However, if 15 people read one post each and never visit again, the embedded approach comes out way ahead. So it very much depends on the traffic profile of the site, and I don’t think this site is one that people will be checking in on daily.
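The break-even arithmetic above can be sketched in a few lines. Note the assumptions: the ~1MB JavaScript figure is an estimate, and the ~141KB web-font figure is simply the total implied by the 214-icon break-even (214 × 660B).

```python
import math

AVG_ICON_BYTES = 660  # mean size of a "solid" collection icon, as above

def break_even(cached_bytes, icon_bytes=AVG_ICON_BYTES):
    # Number of embedded icons at which downloading a shared,
    # cacheable asset once becomes the cheaper option
    return math.ceil(cached_bytes / icon_bytes)

print(break_even(1_000_000))  # SVG+JS bundle (~1MB estimate): 1516 icons
print(break_even(141_240))    # web fonts + CSS (~141KB assumed): 214 icons
```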

Embedding also offers other advantages, such as reducing initial load times.


My solution is a Pelican plugin that post-processes the generated HTML files and embeds any SVGs it finds, whether specified as <img> tags or <i> tags.

It also, crucially, sets the fill attribute of any SVG paths to currentColor, which causes the fill colour to be taken from the current CSS text colour.

Taking the plugin beyond being merely a static implementation of Font Awesome, it also supports embedding of arbitrary SVG files. This can be achieved either by using <i> tags with the class pi to search a custom icon set, or through <img> tags where the SVG file is referenced by URL.
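In miniature, and ignoring the real plugin’s <i> tag handling and icon sets, the embedding step looks something like this. To be clear, embed_svgs and its icons mapping are hypothetical stand-ins for illustration, not the plugin’s actual API, and this sketch only rewrites existing fill attributes rather than adding missing ones.

```python
import re

def embed_svgs(html, icons):
    """Replace <img> tags that reference .svg files with inline SVG markup,
    pointing fill attributes at currentColor so the CSS text colour applies.
    `icons` maps file names to SVG source strings."""
    def replace(match):
        svg = icons.get(match.group(1))
        if svg is None:
            return match.group(0)  # leave unknown references untouched
        # Take the fill colour from the surrounding text colour
        return re.sub(r'fill="[^"]*"', 'fill="currentColor"', svg)
    return re.sub(r'<img[^>]*src="([^"]+\.svg)"[^>]*/?>', replace, html)
```

Running it over `<p><img src="icon.svg" alt=""></p>` with a matching icon swaps the tag for the inline SVG, with its paths filled by currentColor.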


The plugin probably has loads of rough edges at the moment. I haven’t at all tested if it supports Font Awesome’s more advanced behaviour, or even investigated how those features work, so there is a lot to be done there.

I may explore an approach that would combine the advantages of static generation with the advantages of a separate, cacheable SVG file. My initial thoughts on how to approach this plugin were to combine any referenced SVGs into a single file, and then reference them in the HTML using an SVG <use> tag. I need to learn a lot more about SVGs to know if that’s even feasible.

I also want to try to support other icon frameworks that support a similar <i> tag shortcut, such as Fork Awesome and Friconix.

In the meantime, it’s serving my purposes already on this site.

Runtime Class Modification

Python is probably my favourite language, so I was excited some years ago when a project appeared on Kickstarter to develop a Python runtime for microcontrollers, and an associated microcontroller board.

However, writing Python for a microcontroller does have some constraints that aren’t really a factor when writing Python for other environments. With maybe only 100KB of RAM to work with, keeping code size as low as possible is essential.

When I wrote a package to support the TI tmp102 temperature sensor, I initially included all the required functionality in a single importable class. It used 15KB of RAM after import, which does leave space for other code, but since some of the functionality is mutually exclusive I knew I could probably do better.

This post is about what I ended up with and how it works.

Importable Features

The core functionality of the package can be leveraged by importing the Tmp102 class and creating an instance. This leaves the sensor in its default configuration, in which it performs a reading 4 times per second and makes the most recent available to your code on request. The details of initialising the object are explained in the documentation if you actually want to use the module, so I won’t go into them again here.

from machine import I2C
from tmp102 import Tmp102
bus = I2C(1)
sensor = Tmp102(bus, 0x48)

That’s all well and good, but what if you want to make use of some of the more advanced features of the sensor, such as controlling the rate at which it takes readings (the “conversion rate”)? Such features are structured as importable modules which add the required functionality into the Tmp102 class. The CONVERSION_RATE_1HZ constant in the example below, as well as other relevant code, are added to the class when the conversionrate module is imported.

from tmp102 import Tmp102
import tmp102.conversionrate
sensor = Tmp102(
    bus,
    0x48,
    # Keyword name assumed from the package documentation
    conversion_rate=Tmp102.CONVERSION_RATE_1HZ
)

If you don’t need to change the conversion rate in your project, then the code to do so is never loaded. If you do need this or other features, all the functionality is still exposed through a single, easy-to-use class.


The package is structured like this:

+-- __init__.py
+-- _tmp102.py
+-- alert.py
+-- conversionrate.py
+-- convertors.py
+-- extendedmode.py
+-- oneshot.py
+-- shutdown.py

The base Tmp102 class is defined in _tmp102.py, along with some private functions and constants.



REGISTER_TEMP = 0x00  # Temperature register pointer, per the TMP102 datasheet

def _set_bit(b, mask):
    return b | mask

def _clear_bit(b, mask):
    return b & ~mask

def _set_bit_for_boolean(b, mask, val):
    if val:
        return _set_bit(b, mask)
    return _clear_bit(b, mask)

class Tmp102(object):

    def __init__(self, bus, address, temperature_convertor=None, **kwargs):
        self.bus = bus
        self.address = address
        self.temperature_convertor = temperature_convertor
        # The register defaults to the temperature.
        self._last_write_register = REGISTER_TEMP
        self._extended_mode = False

To hide the private stuff from users of the package, the __init__.py imports the Tmp102 class and then removes the _tmp102 module from the namespace.

from tmp102._tmp102 import Tmp102

del _tmp102

The interesting stuff happens in the feature sub-modules. Each feature module defines an _extend_class function which modifies the Tmp102 class. Since importing a module runs it, this function can be called and then deleted to keep the namespace nice and clean - the module will actually be empty once imported. This pattern should be familiar to JavaScript developers!

def _extend_class():
    # Modify Tmp102 here - check the next code block!
    pass

_extend_class()
del _extend_class

Let’s take a look at the oneshot module, which adds functionality to the Tmp102 class to allow the sensor to be polled as necessary instead of constantly performing readings - very useful if you want to save power.

def _extend_class():
    from tmp102._tmp102 import Tmp102
    from tmp102._tmp102 import _set_bit_for_boolean
    import tmp102.shutdown

    SHUTDOWN_BIT = 0x01
    ONE_SHOT_BIT = 0x80

    def initiate_conversion(self):
        """Initiate a one-shot conversion."""
        current_config = self._get_config()
        if not current_config[0] & SHUTDOWN_BIT:
            raise RuntimeError("Device must be shut down to initiate one-shot conversion")
        new_config = bytearray(current_config)
        # Setting the one-shot bit triggers a single reading
        new_config[0] = _set_bit_for_boolean(
            new_config[0], ONE_SHOT_BIT, True)
        self._set_config(new_config)
    Tmp102.initiate_conversion = initiate_conversion

    def _conversion_ready(self):
        current_config = self._get_config()
        return (current_config[0] & ONE_SHOT_BIT) == ONE_SHOT_BIT
    Tmp102.conversion_ready = property(_conversion_ready)

So what’s going on here? First, the Tmp102 class and any required functions are imported. Since it was imported in the package’s __init__, the class is already defined. Importing the private functions and constants in a function like this keeps them out of the global namespace.

from tmp102._tmp102 import Tmp102
from tmp102._tmp102 import _set_bit_for_boolean

The oneshot module depends on the functionality from the shutdown module, so it is imported next.

import tmp102.shutdown

Next, a couple of constants are defined. Through the magic of closures, these will only be available to the methods defined in this module.


The rest of the function defines a method and a property which are added to the class by simply assigning them to attributes. These will be available to any instances of the class, exactly as if they were included in the class definition.

def initiate_conversion(self):
    """Initiate a one-shot conversion."""
    current_config = self._get_config()
    if not current_config[0] & SHUTDOWN_BIT:
        raise RuntimeError("Device must be shut down to initiate one-shot conversion")
    new_config = bytearray(current_config)
    new_config[0] = _set_bit_for_boolean(
        new_config[0], ONE_SHOT_BIT, True)
    self._set_config(new_config)
Tmp102.initiate_conversion = initiate_conversion

def _conversion_ready(self):
    current_config = self._get_config()
    return (current_config[0] & ONE_SHOT_BIT) == ONE_SHOT_BIT
Tmp102.conversion_ready = property(_conversion_ready)

The other feature modules follow the same pattern.
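The whole pattern can be demonstrated in miniature with plain Python, independent of the sensor code: a closure-captured constant, a method, and a property attached to a class after its definition behave exactly as if they had been declared inside it.

```python
class Widget:
    pass

def _extend_class():
    SECRET = 42  # captured by the closures below; never enters a namespace

    def reveal(self):
        return SECRET
    Widget.reveal = reveal

    def _has_secret(self):
        return True
    Widget.has_secret = property(_has_secret)

_extend_class()
del _extend_class

w = Widget()
print(w.reveal())                 # 42
print(w.has_secret)               # True
print(hasattr(Widget, "SECRET"))  # False - the constant stayed private
```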


Importing the base Tmp102 class uses about 3.53KB of RAM - quite a saving if that is all you need. The feature modules vary between 0.8KB and 4KB, or thereabouts. Importing them all uses 13.44KB, but it is unlikely that they would all be required in any given application.


I thought of this approach as “monkey-patching” for a long time - the last refuge of the desperate and the damned - but I’m not sure that it is really, because the modifications are all being made internally to the package. It is definitely outside the norm for Python, but it achieved the goal of reducing RAM usage while maintaining a clean API.

Self-Fulfilling Prophecies

Don't Panic

We see it in every crisis - somebody posts a picture on social media of a bare shelf or a rumour goes around that the shops are running out of something (such as, to pick a good completely at random, toilet paper), and suddenly the shelves are emptying everywhere, and it seems to make sense to secure a stockpile.

It starts as an irrational fear, but it is reified by the seemingly rational self interests of individual consumers. It makes sense, on an individual level, to buy extra because everybody else is, or might be. The expectation of shortages leads to shortages, just as the expectation of economic growth helps create growth, and the fear of a crash leads to or worsens a crash, as everybody tries to get off the merry-go-round at the same time.

Market economies amplify and feed off our emotions and impulses in the face of incomplete information. We’re not generally privy to the details of the stocks and supply chains of any given good. If we were, we could determine whether a perceived shortage is real and how long it might be expected to last, and act accordingly. Even better than obtaining and acting on such information individually - which could still lead to panic buying in the event of an actual shortage - would be to evaluate and respond to the situation collectively, to ensure that everybody can get a reasonable share of goods even in the event of a shortage.

Markets don’t offer any mechanism for collective reasoning or action. The best a market can offer is price-gouging, where massive price increases dissuade all but the most desperate until everybody comes to their senses. Thankfully, retailers in societies that haven’t completely devolved into neoliberal hellscapes tend to opt for rationing instead. Nobody wants to be seen to be a profiteer by a community that they are going to want to continue to serve after the crisis has passed.

It’s unfortunate that we have to be reliant on the reputational concerns of retailers to ensure the provision of essential goods in a crisis. The expectation of shortages leads to shortages, but somehow the certainty of occasional crises doesn’t lead to distributed production, resilient supply chains, or emergency stockpiles. Our economy’s blinkered focus on short-term profits and fetishisation of “efficiency” doesn’t allow for this kind of thinking.