Devs - Spirituality as a Service

Subtle

This post contains spoilers for the TV show “Devs”

I liked Devs a lot. It looks at the quasi-religious reverence in which tech entrepreneurs are held in some quarters (most notably amongst themselves, perhaps) and asks, what if this but literally? What if these people were literally gods, or creating a god?

The plot centres on a software engineer named Lily, whose boyfriend is murdered by their boss, Forest, after being caught attempting to steal code from the company they work for. The code in question is for the Devs system - a quantum simulator that extrapolates the past and future events of the entire universe from any sample of matter. Lily becomes suspicious of the circumstances of her boyfriend’s death, which is made to look like a suicide, and starts to dig around.

Unfortunately much of the plot, and particularly the climax, rests on a concept that I found hard to suspend my disbelief about (and I don’t mean the premise of the Devs system).

Several of the main characters are aware of future events, up to a certain point, thanks to their quantum computer’s simulations. They do not attempt to alter their behaviour in even the smallest way, even just to see if it is possible, instead slavishly repeating every word and action they’ve observed.

If it were just Forest, and the lead systems designer, Katie, who acted like this, it might be understood as a consequence of blind faith, or a wilful misunderstanding of causality because reality doesn’t suit their purposes. Forest is single-minded in his pursuit of this technology because he believes it can resurrect his dead daughter - Devs is his church, determinism is the creed, and anything that calls it into question is heresy.

But this notion is dispelled in a scene where a roomful of people are shown a simulation of a few seconds into the future, and mirror it exactly - apparently it is actually a feature of this universe that it is actively difficult to behave contrary to the prediction. I think the reality would be the opposite - it would actually be difficult not to act differently once you were aware of future events. I think you would do so instinctively, and accidentally. It wouldn’t be a violation of causality, because the simulation would also be a cause, with its own effects.

So this concept strains credibility, and works only on an allegorical level - the low-level developers are dazzled by a brief tech demo and its promises, while the higher-ups are simultaneously in thrall to their own hype and aware of the lies it is based on and the limits of their knowledge.

It also makes the climax of the show absurdly predictable. As soon as we hear that the simulation breaks down at a certain point, and it has something to do with Lily, we know that Lily is going to do something that contradicts the predictions of the simulation. None of the supposedly smart characters in the show demonstrate any awareness of this obvious fact, and it’s frustrating. It is only redeemed because seeing the climax coming reflects the characters’ foreknowledge of the future, in a way.

Lily doing some reflecting

Overall, it’s interesting enough and well enough written that these problems are easy to look past. Some of the imagery is fantastic, such as the would-be god-developers working in a giant fractal computer floating in a vacuum, completely isolated from the world they’re trying to understand. It’s also a tonal masterpiece, full of haunting establishing shots, temple-like sets, and an unsettling soundtrack. Worth watching for that reason alone, to be honest.

Ludum Dare 46 Results

The Ludum Dare 46 results were published yesterday, and my game did quite well, placing 109th overall and 14th in the “Mood” category, as well as 120th and 121st in graphics and audio respectively. In the largest ever Ludum Dare, those are pretty decent placings I think, even if I didn’t quite break the top 100 overall.

Category     Rating   Placing   Percentile
Overall      4.136    109       96th
Fun          3.523    819       77th
Theme        4.14     279       92nd
Innovation   3.86     247       93rd
Humor        3.656    365       89th
Graphics     4.477    120       96th
Audio        4.102    121       96th
Mood         4.523    14        99th

Graphs

I always feel that the real competition in the Ludum Dare is against myself - just trying to do a little bit better and learn a bit more each time. As such, here’s some indication of my LD result trends over the years.

Ratings Graph Placings Graph Percentiles Graph

Nice upward trends! Note that I was only responsible for the art for “Claustrophobia” and “Rattendorf”, so I can only take partial credit for the overall and mood ratings of those.

The real learning experience this time around was on the audio. I’ve only done the audio for six of the nine Ludum Dares I’ve entered, so I left it out of the graphs above.

Ratings Graph

Looks like I really cranked it up a notch this time after coasting for a long while. Nice.

Moar Gophers

I haven’t decided yet if I’m going to take the game further. I quite like the concept and I certainly have some ideas for it. I’ll probably finish off my gopher renderer and phlog generator before I decide, and then I can do a devphlog for it :D

You can still play the jam version for now, if you missed it.

Gophers

Overlooking the city

Gophers is my entry for Ludum Dare 46, the most recent of the twice-yearly Ludum Dare game jams. It is a short adventure game about maintaining a gopher network in a post-apocalyptic world.

The basic concept is one I’ve been kicking around for a while as a sort of casual RPG/survival game about maintaining computer networks on scavenged technology, so it came to mind immediately when I saw the theme (“Keep it alive”).

I’ve been really interested lately in gopher and other low-overhead technologies, and what the internet would look like if the industries that sustain it collapsed. I’d previously envisioned a relatively cheerful solarpunk game about connecting distant sustainable communities, but I think it took on a much darker tone because of recent events.

Art

I did all the art in Pyxel Edit as usual. My goal was to keep everything abstract and as high-contrast and readable as possible while still allowing for a nice parallax cityscape. I started with a mock-up of the exterior scene, and then essentially flipped the background and foreground colours from that for the bunker scene. I only used 7 colours in the end.

Bunker Scene

I put together a timelapse of the art so you can see the whole process:

Gophers

Code

The only reason I considered this a viable idea was that I had previously developed a cutscene graph editor plugin for Godot. It was untested in any game, but I thought it would give me enough of a leg up that I would have time for the art and writing.

Graph editor

So in effect, the “gopher network” in the game is actually a dialogue tree!

Actually using the editor in a game did reveal some issues with it, but nothing significant enough to prevent me from finishing - and now I have some ideas on what needs work before I use it for another game!

I also took some code from a previous game of mine for doing the menus and dealing with the settings. Every bit helps when you’re entering the jam solo.

One thing that really came together for me in this jam was using coroutines to manage sequences of events. I’ve always struggled to wrap my head around them for some reason, and would clumsily hook up signal handlers for every step. Using the yield statement in Godot made handling interactions much easier and quicker to write.

func _on_Terminal_clicked(walk_target, face_direction):
    _player.set_destination(walk_target)
    yield(_player, "arrived_at_destination")
    _player.face(face_direction)
    GameController.set_spawn_location("bunker", "terminal")
    GameController.set_spawn_direction("bunker", "right")
    FadeMask.fade_in()
    yield(FadeMask, "fade_in_complete")
    # Switch to the browser scene
    self.get_tree().change_scene("res://browser/Browser.tscn")
    FadeMask.fade_out()
    yield(FadeMask, "fade_out_complete")

Sound Effects

The most exciting part of working on this game, for me, was doing the sound effects. I bought a fancy mic a while back (a Røde NT-USB) to do foley SFX rather than my usual SFXR beeps and boops, but this was the first chance I’ve had to try it out.

My foley kit, or part of it at least

For the Geiger counter sounds I ran my finger over the teeth of a comb. For the bunker door, I rubbed a hammer and a spanner together in various ways. For the dripping sound in the bunker, I just used an eyedropper to drip drops into a glass of water. The footsteps are real footsteps that I recorded, and the cloth sounds when you’re walking around the exterior are me crinkling a vinyl jacket. It was a lot of fun to record all these and I don’t think I was even being all that creative. I couldn’t figure out how to do buzzing or flickering sounds for the electric light within the time I had though, unfortunately.

One big problem I encountered was that my apartment is apparently incredibly noisy, as am I. It was a windy day and the shutters on my window were banging constantly, my neighbours were going about their noisy lives, oblivious, and my body stubbornly refused to go without oxygen during the recordings. Noise reduction in Audacity helped a bit (make sure you record periods of “silence” to enable this), but there are definitely some extra environmental sounds in there. Thankfully I think they mostly just appear as mysterious underground reverb or get buried by other things. It’s something I’m definitely going to have to think about for next time.

I did a bunch of post-processing in Audacity to pick the best bits out of the recordings, and make things sound better. I had to reduce the pitch on the bunker door sound to make it sound heavier, for example.

Music

I was so proud of the sound effects that I almost wasn’t going to do any music, but I’m glad I did. I got to it in the last few hours of the jam, so I had to keep it very simple. It’s mostly just the notes of a Dmin7 chord played in a few different arrangements on pad instruments, with some slow bass drums coming in and out. The title screen music layers a couple of different pads as well as a Rhodes doing sus4 arpeggios from each note of the chord.

I put everything together in LMMS. I spent a good chunk of time experimenting with different instruments so even though it’s really minimalistic it still took a while!

Abandoned Ideas

I had planned several other game elements, including the protagonist saying things to himself (or the player), and another type of interaction involving connecting cables and swapping out computer components.

A full game would probably have more complex survival elements instead of a simple timer, and would see you having to scavenge in the environment for computer equipment and other supplies.

We’ll see if anything like that comes to fruition in the future!

For All Mankind

Red Moon

This post contains spoilers for the TV show “For All Mankind”

“For All Mankind” is a strange show. It reimagines the space race of the late 1960s in such a way that the USA is the underdog, with the USSR beating them to the moon by a month. While NASA’s failures are compounded by the crash-landing of the Apollo 11 lander, the Soviets rack up another victory when they land the first woman on the moon. Eventually the Americans get their act together and land a woman on the moon as well, and from that point on the two superpowers are neck and neck in space.

The strange thing about this is the extent to which it reflects reality, but just displaces it in time. The USA were playing catch-up for much of the space race, with the USSR achieving all the important early milestones: first artificial satellite, first animal in orbit, first human. The moon landing has so overshadowed those achievements in the popular consciousness that it is the only conceivable starting point for an alternate history like this. By giving it to the USSR, the moon landing becomes Sputnik.

The USSR did achieve another first of particular relevance to this show: they put the first woman into orbit, in 1963. Though female cosmonauts were not a permanent feature of the Soviet space program, female astronauts were not a part of the US space program at all, and they didn’t put a woman into space until 20 years later.

Interestingly, though the fictional Soviet moon landing featured an actual cosmonaut (Alexei Leonov, who conducted the first spacewalk in 1965), the female cosmonaut is not Valentina Tereshkova, the first woman in space, nor any of the women in her program, but a completely fictional character. The show has no problem giving a nod to Mercury 13 candidate Jerrie Cobb in the form of fictional Molly Cobb, but the Soviet women receive no such acknowledgement.

It’s not all bad. The premise feels like it is asking us to celebrate the USA for an egalitarianism that it never possessed, but the drama doesn’t necessarily reflect that. The women face opposition and scepticism as to their abilities - maybe not to the extent that they would have in reality, but it’s there. Gay characters have to live their lives in secret without any attempt to pretend that it could have been otherwise. America’s continued participation in the space race is unequivocally driven by militarism and suspicion. The Soviet cosmonauts even get a few humanising moments, but they are ultimately cast as a sinister other.

It is sad that even now, nearly three decades on from its collapse, the Soviet Union can only ever be condemned for its failures, never acknowledged for its accomplishments. I suppose this show goes further than most in that regard, but it maintains an unquestionably American perspective, with fictional Soviet victories serving merely to encourage America on to even greater heights. It would be nice to see something from the other side some time.

Embedding SVGs in Pelican

In my inaugural post I mentioned that one problem I had encountered while designing this blog was styling the SVG icons. I had grabbed a bunch of the individual icon files from Font Awesome, but because of the way SVGs, CSS and HTML interact, I wasn’t able to colour them directly using CSS color or fill properties, and instead had to use filter properties (which I calculated using this tool, so it wasn’t too much of a hardship).

I also didn’t particularly like that retrieving the icons involved numerous separate requests, nor the visible “pop-in” in Firefox that resulted from having them referenced as external files. The files are tiny, with the request overhead often as large as or larger than the files themselves.

A further advantage that I was missing out on by not using Font Awesome as intended was that I couldn’t use their handy <i> tag shortcuts for specifying the icons to use.

Now, I have taken steps towards solving all of these many problems!

Just use Font Awesome normally you weirdo

Let’s back up a sec and talk about why I didn’t just use Font Awesome as intended in the first place (yes, tl;dr: it is probably because I’m a weirdo).

Font Awesome has two ways that it can work: Web Fonts + CSS, or SVG + JavaScript. The former would involve retrieving an additional CSS file or two, as well as a couple of web fonts. The web font for the solid collection alone is 79.4KB - larger than anything else on this website. The JavaScript that would be required for the other method would likely be approaching 1MB in size - larger than this entire website so far! I want a lean, fast-loading, low-power website, and these approaches seem entirely at odds with those goals.

It also struck me as odd to be statically generating a site, yet also having the client browser swapping in SVG images. I’ve nothing against JavaScript, but clearly this is work that can be done in advance!

Doesn’t caching solve this problem?

Well… maybe? In some cases? But not necessarily.

The average size of an icon in Font Awesome’s “solid” collection is 660B. A visitor would have to encounter over 1500 such embedded icons before downloading the JavaScript and caching it would be cheaper. The Web Fonts are much better, with caching the separate files becoming worthwhile after only 214 icons. That’s about 5 views of this blog’s index page, or 15 individual posts.

As such, if somebody reads 16 posts on this blog, they will have transferred more data than they would have if I’d used the Font Awesome web fonts. However, if 15 people read one post each and never visit again, the embedded approach comes out way ahead. So it very much depends on the traffic profile of the site, and I don’t think this site is one that people will be checking in on daily.
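
For the curious, the break-even arithmetic is nothing fancy - here it is as a quick Python sketch, using the rough sizes quoted above rather than exact measurements.

avg_icon = 660                 # average "solid" icon size, in bytes
js_bundle = 1_000_000          # the SVG + JavaScript approach, "approaching 1MB"

# Embedded icons needed before caching the JavaScript bundle would be cheaper
print(js_bundle // avg_icon)   # ~1515, i.e. "over 1500"

# Working the same sum backwards, a break-even of 214 icons implies the
# web fonts plus CSS together weigh in at roughly:
print(214 * avg_icon)          # ~141KB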

Embedding also offers other advantages, such as reducing initial load times.

Solutions

My solution is a Pelican plugin that post-processes the generated HTML files and embeds any SVGs it finds, whether specified as <img> tags or <i> tags.

It also, crucially, sets the fill attribute of any SVG paths to currentColor, which causes the fill colour to be taken from the current CSS text colour.

Taking the plugin beyond being merely a static implementation of Font Awesome, it also supports embedding of arbitrary SVG files. This can be achieved either by using <i> tags with the class pi to search a custom icon set, or through <img> tags where the SVG file is referenced by URL.
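
To give a flavour of how the embedding step works, here is a heavily simplified sketch using BeautifulSoup. The paths and function name are illustrative rather than the plugin’s actual API, and a real implementation would want an XML-aware parser so that attributes like viewBox survive intact.

from pathlib import Path
from bs4 import BeautifulSoup

ICON_DIR = Path("theme/static/icons")  # wherever the individual SVG files live

def embed_svg_icons(html_file):
    soup = BeautifulSoup(Path(html_file).read_text(), "html.parser")
    for img in soup.find_all("img"):
        src = img.get("src", "")
        if not src.endswith(".svg"):
            continue
        # Parse the referenced icon file and pull out its <svg> element
        icon = BeautifulSoup(
            (ICON_DIR / Path(src).name).read_text(), "html.parser"
        ).svg
        # Take the fill colour from the surrounding CSS text colour
        for path in icon.find_all("path"):
            path["fill"] = "currentColor"
        img.replace_with(icon)
    Path(html_file).write_text(str(soup))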

Future

The plugin probably has loads of rough edges at the moment. I haven’t tested at all whether it supports Font Awesome’s more advanced behaviour, or even investigated how those features work, so there is a lot to be done there.

I may explore an approach that would combine the advantages of static generation with the advantages of a separate, cacheable SVG file. My initial thoughts on how to approach this plugin were to combine any referenced SVGs into a single file, and then reference them in the HTML using an SVG <use> tag. I need to learn a lot more about SVGs to know if that’s even feasible.
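
For what it’s worth, a first pass at building such a spritesheet seems doable with nothing more than the standard library - here is a rough sketch of the kind of thing I have in mind, with illustrative names and no claim to handle every SVG quirk.

import xml.etree.ElementTree as ET
from pathlib import Path

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

def build_spritesheet(icon_dir, output_file):
    sheet = ET.Element(f"{{{SVG_NS}}}svg", {"style": "display:none"})
    for icon in sorted(Path(icon_dir).glob("*.svg")):
        root = ET.parse(icon).getroot()
        symbol = ET.SubElement(sheet, f"{{{SVG_NS}}}symbol", {
            "id": icon.stem,
            # Fall back to a square viewBox if the icon doesn't declare one
            "viewBox": root.get("viewBox", "0 0 512 512"),
        })
        # Copy the icon's child elements (usually a single path) into the symbol
        symbol.extend(list(root))
    ET.ElementTree(sheet).write(output_file)

Each icon could then be referenced with <use href="sprites.svg#icon-name"/> (or xlink:href for older browsers), so the sheet is fetched and cached just once.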

I also want to try to support other icon frameworks that support a similar <i> tag shortcut, such as Fork Awesome and Friconix.

In the meantime, it’s serving my purposes already on this site.

Runtime Class Modification

Python is probably my favourite language, so I was excited some years ago when a project appeared on Kickstarter to develop a Python runtime for microcontrollers, and an associated microcontroller board.

However, writing Python for a microcontroller does have some constraints that aren’t really a factor when writing Python for other environments. With maybe only 100KB of RAM to work with, keeping code size as low as possible is essential.

When I wrote a package to support the TI tmp102 temperature sensor, I initially included all the required functionality in a single importable class. It used 15KB of RAM after import, which does leave space for other code, but since some of the functionality is mutually exclusive I knew I could probably do better.

This post is about what I ended up with and how it works.

Importable Features

The core functionality of the package can be leveraged by importing the Tmp102 class and creating an instance. This leaves the sensor in its default configuration, in which it performs a reading 4 times per second and makes the most recent available to your code on request. The details of initialising the object are explained in the documentation if you actually want to use the module, so I won’t go into them again here.

from machine import I2C
from tmp102 import Tmp102
bus = I2C(1)
sensor = Tmp102(bus, 0x48)
print(sensor.temperature)

That’s all well and good, but what if you want to make use of some of the more advanced features of the sensor, such as controlling the rate at which it takes readings (the “conversion rate”)? Such features are structured as importable modules which add the required functionality into the Tmp102 class. The CONVERSION_RATE_1HZ constant in the example below, as well as other relevant code, are added to the class when the conversionrate module is imported.

from tmp102 import Tmp102
import tmp102.conversionrate
sensor = Tmp102(
    bus,
    0x48,
    conversion_rate=Tmp102.CONVERSION_RATE_1HZ
)

If you don’t need to change the conversion rate in your project then the code to do so is never loaded. If you do need this or other features, all the functionality is still exposed through a single easy to use class.

How?

The package is structured like this:

tmp102
+-- __init__.py
+-- _tmp102.py
+-- alert.py
+-- conversionrate.py
+-- convertors.py
+-- extendedmode.py
+-- oneshot.py
+-- shutdown.py

The base Tmp102 class is defined in _tmp102.py, along with some private functions and constants.

REGISTER_TEMP = 0
REGISTER_CONFIG = 1

EXTENDED_MODE_BIT = 0x10

def _set_bit(b, mask):
    return b | mask

def _clear_bit(b, mask):
    return b & ~mask

def _set_bit_for_boolean(b, mask, val):
    if val:
        return _set_bit(b, mask)
    else:
        return _clear_bit(b, mask)


class Tmp102(object):

    def __init__(self, bus, address, temperature_convertor=None, **kwargs):
        self.bus = bus
        self.address = address
        self.temperature_convertor = temperature_convertor
        # The register defaults to the temperature.
        self._last_write_register = REGISTER_TEMP
        self._extended_mode = False
        .
        .
        .

To hide the private stuff from users of the package, the __init__.py imports the Tmp102 class and then removes the _tmp102 module from the namespace.

from tmp102._tmp102 import Tmp102

del _tmp102

The interesting stuff happens in the feature sub-modules. Each feature module defines an _extend_class function which modifies the Tmp102 class. Since importing a module runs it, this function can be called and then deleted to keep the namespace nice and clean - the module will actually be empty once imported. This pattern should be familiar to JavaScript developers!

def _extend_class():
    # Modify Tmp102 here - Check the next code block!
    pass

_extend_class()
del _extend_class

Let’s take a look at the oneshot module, which adds functionality to the Tmp102 class to allow the sensor to be polled as necessary instead of constantly performing readings - very useful if you want to save power.

def _extend_class():
    from tmp102._tmp102 import Tmp102
    from tmp102._tmp102 import _set_bit_for_boolean
    import tmp102.shutdown

    SHUTDOWN_BIT = 0x01
    ONE_SHOT_BIT = 0x80

    def initiate_conversion(self):
        """
        Initiate a one-shot conversion.
        """
        current_config = self._get_config()
        if not current_config[0] & SHUTDOWN_BIT:
            raise RuntimeError("Device must be shut down to initiate one-shot conversion")
        new_config = bytearray(current_config)
        new_config[0] = _set_bit_for_boolean(
            new_config[0],
            ONE_SHOT_BIT,
            True
        )
        self._set_config(new_config)
    Tmp102.initiate_conversion = initiate_conversion

    def _conversion_ready(self):
        current_config = self._get_config()
        return (current_config[0] & ONE_SHOT_BIT) == ONE_SHOT_BIT
    Tmp102.conversion_ready = property(_conversion_ready)

So what’s going on here? First, the Tmp102 class and any required functions are imported. Since it was imported in the package’s __init__ the class is already defined. Importing the private functions and constants in a function like this keeps them out of the global namespace.

from tmp102._tmp102 import Tmp102
from tmp102._tmp102 import _set_bit_for_boolean

The oneshot module depends on the functionality from the shutdown module, so it is imported next.

import tmp102.shutdown

Next, a couple of constants are defined. Through the magic of closure, these will only be available to the methods defined in this module.

SHUTDOWN_BIT = 0x01
ONE_SHOT_BIT = 0x80

The rest of the function defines a method and a property which are added to the class by simply assigning them to attributes. These will be available to any instances of the class, exactly as if they were included in the class definition.

def initiate_conversion(self):
    """
    Initiate a one-shot conversion.
    """
    current_config = self._get_config()
    if not current_config[0] & SHUTDOWN_BIT:
        raise RuntimeError("Device must be shut down to initiate one-shot conversion")
    new_config = bytearray(current_config)
    new_config[0] = _set_bit_for_boolean(
        new_config[0],
        ONE_SHOT_BIT,
        True
    )
    self._set_config(new_config)
Tmp102.initiate_conversion = initiate_conversion

def _conversion_ready(self):
    current_config = self._get_config()
    return (current_config[0] & ONE_SHOT_BIT) == ONE_SHOT_BIT
Tmp102.conversion_ready = property(_conversion_ready)

The other feature modules follow the same pattern.

Savings

Importing the base Tmp102 class uses about 3.53KB of RAM - quite a saving if that is all you need. The feature modules vary between 0.8KB and 4KB, or thereabouts. Importing them all uses 13.44KB, but it is unlikely that they would all be required in any given application.
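
In case you’re wondering where figures like these come from, one way to estimate the cost of an import on the board is with MicroPython’s gc module. This is just a sketch of the general approach (mem_free is MicroPython-specific), not necessarily how the numbers above were produced, and results will vary between firmware builds.

import gc

gc.collect()
before = gc.mem_free()

from tmp102 import Tmp102      # plus whichever feature modules are being measured

gc.collect()
after = gc.mem_free()
print("RAM used by import: {} bytes".format(before - after))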

Conclusion

I thought of this approach as “monkey-patching” for a long time - the last refuge of the desperate and the damned - but I’m not sure that it really is, since the modifications are all made internally to the package. It is definitely outside the norm for Python, but it achieved the goal of reducing RAM usage while maintaining a clean API.

Self-Fulfilling Prophecies

Don't Panic

We see it in every crisis - somebody posts a picture on social media of a bare shelf or a rumour goes around that the shops are running out of something (such as, to pick a good completely at random, toilet paper), and suddenly the shelves are emptying everywhere, and it seems to make sense to secure a stockpile.

It starts as an irrational fear, but it is reified by the seemingly rational self interests of individual consumers. It makes sense, on an individual level, to buy extra because everybody else is, or might be. The expectation of shortages leads to shortages, just as the expectation of economic growth helps create growth, and the fear of a crash leads to or worsens a crash, as everybody tries to get off the merry-go-round at the same time.

Market economies amplify and feed off our emotions and impulses in the face of incomplete information. We’re not generally privy to the details of the stocks and supply chains of any given good. If we were, we could determine whether a perceived shortage is real and how long it might be expected to last, and act accordingly. Even better than obtaining and acting on such information individually - which could still lead to panic buying in the event of an actual shortage - would be to evaluate and respond to the situation collectively, to ensure that everybody can get a reasonable share of goods even in the event of a shortage.

Markets don’t offer any mechanism for collective reasoning or action. The best a market can offer is price-gouging, where massive price increases dissuade all but the most desperate until everybody comes to their senses. Thankfully, retailers in societies that haven’t completely devolved into neoliberal hellscapes tend to opt for rationing instead. Nobody wants to be seen to be a profiteer by a community that they are going to want to continue to serve after the crisis has passed.

It’s unfortunate that we have to be reliant on the reputational concerns of retailers to ensure the provision of essential goods in a crisis. The expectation of shortages leads to shortages, but somehow the certainty of occasional crises doesn’t lead to distributed production, resilient supply chains, or emergency stockpiles. Our economy’s blinkered focus on short-term profits and fetishisation of “efficiency” doesn’t allow for this kind of thinking.

Catnip

I put together an animation over the last few weeks of a cat trying to catch gliders on a ’70s-styled computer. Check it out on YouTube:

Catnip

I actually only set out to draw the computer, and I’m not sure at what point the cat entered the picture, but I’m glad it did! My goal wasn’t to depict any specific vintage computer, but to create a somewhat implausible one from imagination. I did look at a bunch of references for ideas on what to include, mostly from oldcomputers.net.

Music

I had originally planned to create a patch in VCVRack to accompany the animation, but I struggled to come up with something that felt right for it. Several attempts in LMMS also failed, so I ended up putting something together in beepbox.

Bad Idioms

Human languages are full to the brim with idioms - figurative ways of saying things that native speakers trot out without even thinking about them. Often, when translated literally into another language, the result is utter nonsense. For example, the phrase “tomar el pelo” in Spanish translates literally to English as “to take the hair”, but the idiomatic way to say the same thing in English would be “to pull (someone’s) leg”. The same thing is roughly true of programming languages, with different languages having their own idiomatic or expected ways of achieving the same ends.

I recently made the mistake, after a period of writing Python code, of applying one of Python’s idioms to C#. The task at hand was to check if a dictionary of lists already contained a particular key, and if not, add a new list for that key. The C# way to do this would probably be to check for the existence of the key first, then decide what to do - or even better, use the TryGetValue method of the dictionary to assign the value to a variable. This is known as “Look Before You Leap”.

// Check for the existence of the key first...
List<object> l;
if (dict.ContainsKey(objectType))
{
    l = dict[objectType];
}
else
{
    l = new List<object>();
    dict.Add(objectType, l);
}

// ...or, better, use TryGetValue.
List<object> l;
if (!dict.TryGetValue(objectType, out l))
{
    l = new List<object>();
    dict.Add(objectType, l);
}

But instead of doing either of those things, I applied a more pythonic idiom - that of “Easier to Ask Forgiveness than Permission” (EAFP) - and just tried retrieving the value, catching the KeyNotFoundException if it wasn’t there:

List<object> l;
try
{
    l = dict[objectType];
}
catch (KeyNotFoundException ex)
{
    l = new List<object>();
    dict.Add(objectType, l);
}

This turned an operation that should have taken milliseconds into one that was taking seconds, introducing a perceptible delay into my application.

Curious to know to exactly what extent performance differed between the above choices, and whether EAFP really would have been the better choice in Python, I decided to throw together some benchmark tests.

Python

import timeit

setup = """
d = {
    'a': [1, 2, 3,],
    'b': [4, 5, 6,],
    'c': [7, 8, 9,],
}
"""

test_except = """
try:
    v = d['d']
except KeyError:
    v = []
    d['d'] = v

del d['d']
"""

test_check = """
if 'd' in d:
    v = d['d']
else:
    v = []
    d['d'] = v

del d['d']
"""

print(timeit.timeit(setup=setup, stmt=test_except, number=1000000))
print(timeit.timeit(setup=setup, stmt=test_check, number=1000000))

This gave results of 0.46 seconds for a million EAFP operations, and about 0.08 seconds for a million LBYL operations, with everything else, I hope, being equal between the two tests. If the new key is not deleted every time (so that only the first check fails), the EAFP operation becomes marginally faster than the alternative (0.026 vs 0.037 seconds) on most runs.

C#

Dictionary<string, List<string>> dict = new Dictionary<string, List<string>>()
{
    { "a", new List<string>() },
    { "b", new List<string>() },
    { "c", new List<string>() }
};

DateTime exceptStart = DateTime.UtcNow;
for (int i = 0; i < 1000; i++)
{
    List<string> v;
    try
    {
        v = dict["d"];
    }
    catch (KeyNotFoundException ex)
    {
        v = new List<string>();
        dict.Add("d", v);
    }
    dict.Remove("d");
}
TimeSpan exceptResult = DateTime.UtcNow - exceptStart;

DateTime tryGetStart = DateTime.UtcNow;
for (int i = 0; i < 1000000; i++)
{
    List<string> v;
    if (!dict.TryGetValue("d", out v))
    {
        v = new List<string>();
        dict.Add("d", v);
    }
    dict.Remove("d");
}
TimeSpan tryGetResult = DateTime.UtcNow - tryGetStart;

DateTime checkStart = DateTime.UtcNow;
for (int i = 0; i < 1000000; i++)
{
    List<string> v;
    if (!dict.ContainsKey("d"))
    {
        v = new List<string>();
        dict.Add("d", v);
    }
    else
    {
        v = dict["d"];
    }
    dict.Remove("d");
}
TimeSpan checkResult = DateTime.UtcNow - checkStart;

Console.WriteLine("Except: {0}", exceptResult.TotalSeconds);
Console.WriteLine("TryGet: {0:f10}", tryGetResult.TotalSeconds);
Console.WriteLine("Check: {0:f10}", checkResult.TotalSeconds);
Console.ReadKey(true);

Note that the EAFP test here is only performed a thousand times - because even running it that many times takes around 15 entire seconds! The two LBYL tests are nothing in comparison, executing a million times in around 0.05 seconds. This is a much bigger difference than I would have expected.

Conclusion

The performance of a single operation like this doesn’t necessarily say a lot about the real-world performance of any given application, but I think it is probably best to stick to the idioms of the language you’re working in - and in C#, that means only throwing exceptions in exceptional circumstances. In Python, there may be circumstances where it would be better to “Look Before You Leap” as well, but the difference in performance is probably not large enough to matter in most cases.

Remember Blogs?

I’ve read a lot of articles recently (here’s one) lamenting the state of the web. Once distributed, egalitarian, ungovernable, and fast, now centralised, intentionally manipulative, and bloated both technically and conceptually. Even when you manage to fight your way through the popups demanding your attention or personal information, often what is underneath is not worth the effort - more likely a vehicle for advertising than for insight.

It’s also incredibly power-hungry. It’s hard to pin down exactly how power-hungry, but the internet as a whole could account for up to 10% of global energy use. A good chunk of that is streaming video and music, which is a topic for another day, but of the power consumed in serving the web, some of it is related to actual valuable content that people want to see, and some of it is related to the trends described above. The latter is waste. At least bloated JavaScript and CSS frameworks can be cached, but advertising has to be constantly served anew.

So, anyway, all this to say… I’ve decided to start a blog.

The Tech

My technical goals for this website are for it to be…

  • Lightweight & fast to load - I set up a WordPress site recently, on the best hosting I can afford. It is not lightweight or fast to load.
  • Content focused - Read one thing or read them all, but I’m sure you can only read one article at a time.
  • Nice to look at - Apparently it doesn’t take much. Also going for consistent branding between all my sites and profiles.
  • Responsive - Readable on phones as well as desktops!
  • Easy to deploy - I don’t have time to configure and maintain a teetering stack of back-end technology, and if I have to move to different hosting at some point, I want it to be a simple task.
  • Easy to update - If writing posts is a chore, I won’t ever do it.
  • Hackable - Created using technologies that I’m somewhat familiar with, so that it is feasible for me to modify or extend if I want/need to.

I decided almost immediately that a statically-generated site was going to be the best way to achieve most of those goals. I’m a big fan of Python, so although hackability could be achieved by a JavaScript or C# based generator, I checked out the Python ones first, and found plenty of viable options. I settled on Pelican because it’s…

  • Popular - It seems to be one of the more popular Python generators.
  • Blog-oriented - Some generators are geared towards documentation or are intended as replacements for content management systems, but that’s not what I’m doing.
  • Supports Markdown - I’m sure reST is fine, but I already have to use Markdown elsewhere so I’d rather stick with that.
  • Easy to update - Just create a new Markdown file and run a command to rebuild.
  • Extensible - It includes a plugin system to modify the output.

I also decided to hand-craft my own theme, and to avoid a CSS framework. I love the look of Bootstrap, and how quick it is to get started with, but it’s over 200KB and a lot of that is undoubtedly unnecessary for my needs. The spirit of the exercise is bare-bones and DIY!

The Theme

The first step in hand-crafting a theme was… to find an existing theme to copy! Atilla was the closest to the style I was after, so I took a copy of that and gutted it of CSS and JavaScript and other elements that didn’t meet my needs. Then I started building the CSS back up while trying to keep it as minimal as possible. It may not implement every feature supported by Pelican, but you can find it on my GitHub if it seems like something you could adapt for your own needs.

One departure that I made from the standard Pelican configuration was to have the social media links be taken from a collection of tuples with three elements, so that I could specify both an icon and a title to use.

# Custom social list that includes icons
SOCIAL_ICONS = (('Twitter', 'twitter.svg', 'https://twitter.com/http_your_heart'),
                ('Mastodon', 'mastodon.svg', 'https://mastodon.art/@hyperlinkyourheart'),
                ('Instagram', 'instagram.svg', 'https://www.instagram.com/hyperlinkyourheart/'),
                ('YouTube', 'youtube.svg', 'https://www.youtube.com/channel/UCc_O9Hp5UfQ-IHswi1H54Zg'),
                ('Twitch', 'twitch.svg', 'https://www.twitch.tv/hyperlinkyourheart'),
                ('Itch', 'itchio.svg', 'https://hyperlinkyourheart.itch.io/'),
                ('GitHub', 'github.svg', 'https://github.com/khoulihan'),
                ('Atom Feed', 'rss.svg', '/feeds/all.atom.xml'),)

I like that I can just throw custom configuration into the config file and then make use of it in the templates. However, it probably makes the theme less generally useful.

As it stands currently, loading this post requires less than 30KB to be transferred.

Plugins

Currently, the only plugin I’m using is the css-html-js-minify plugin that is available in the pelican-plugins repository. I haven’t found anything I need to write my own plugin to handle yet, but I’m sure I will get to it.
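
For when I do get to it: a Pelican plugin is just a module with a register() function that connects handlers to Pelican’s signals. A minimal skeleton might look something like this (the handler itself is only illustrative):

from pelican import signals

def report_output(pelican_object):
    # Runs once the whole site has been generated.
    print("Site written to", pelican_object.settings["OUTPUT_PATH"])

def register():
    signals.finalized.connect(report_output)

The module then just needs to be listed in the PLUGINS setting in pelicanconf.py.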

One problem that needs solving is that the SVG icons are a big nuisance, because it doesn’t seem to be possible to change their colour without using the CSS filter property, which is not nearly as convenient as just setting the colour directly. In order to do that, using the fill property, I would have to embed the SVGs, or reference them as symbols in a <use> tag within an <svg> tag. The individual icon files (from FontAwesome) aren’t set up like that, and I didn’t want to use their spritesheet because it is rather large.

What I might do in the future is write a plugin to compile the individual files into a single spritesheet of symbols, then find and replace any references to them with appropriate <svg> tags. Essentially this will be doing the job that the FontAwesome toolkit usually does in the browser.

The Content

Uuuh… I’ll get back to you on that. Things I like, things I do, that sort of thing.

Feedback

There are a couple of different strategies for allowing comments on a static site - I’m not going to attempt any of them for now, and perhaps never will! If you have any feedback or thoughts there are many ways to reach me, such as Mastodon or Twitter, and I think that’s just fine.