COVID Movies

Well it happened - I finally caught the ‘rona. Actually I think I had it quite early on, but it was a mild dose. This time I was basically insensible for two weeks and maybe even still not fully recovered two months later*. Quite the nasty bug.

To pass the time while I was couch-bound, I watched various comfort movies and movies I hadn’t seen since I was a kid.

Eternal Sunshine of the Spotless Mind

One of my all-time favourites. I appreciate it for its examination of flawed characters and their very real and relatable relationship. I appreciate its message that even bad experiences sometimes have lessons to teach us, and that if we don’t integrate them we will be doomed to repeat them.

Screenshot from Eternal Sunshine of the Spotless Mind - Joel is in the scanning chair, on a New York street
Where is my mind?

Much of the movie resembles the recollection of memory, wandering back and forth through time, finding and losing details and emotional attachments as it goes. The largely practical effects (or so I understand) contribute excellently to this surreal quality, with literally disappearing details, spatial and temporal distortions, and even a sense of things being just on the tip of the mind, yet unreachable. It was particularly enjoyable in my delirious fever state.

The Wizard

“I love the Power Glove. It’s so bad.”

This glorified ad for the NES is still a good time. I was obsessed with the Californian dinosaur statues after watching this movie when I was a kid.

One notably frustrating aspect of watching this movie now is how terrible all three finalists are at SMB3. Right at the beginning of the second level there’s a Koopa coming towards them, and they all stand still waiting for it. One of them even has the Tanooki suit, but waits patiently for the Koopa to come into range to jump on its head instead of just spin attacking it. They also all missed the power-up box on the ground in the same level, which they could have hit with the shell. Just awful to watch.

Screenshot from The Wizard where the main character is waiting to jump on a koopa in SMB3 instead of just spin attacking them, and also a Jackie Chan WTF meme
What are you doooing??

Flight of the Navigator

I used to love this one when I was a kid, but I barely remembered it - I didn’t even remember the “time-travel” aspect, or as one character exclaimed: “light-speed theory!” Yeah ok champ.

Screenshot of the meal robot, R.A.L.F.
The real hero of this movie was R.A.L.F.

It’s quite a poignant story until the titular navigator boards the ship, at which point everything is explained by Max (the ship’s computer) and the plot is basically over. The final third of the movie is just flying around, cute alien creatures, zany antics, and an increasingly annoying Max. It’s fun and looks great, but I’m not sure I would say it holds up for adult viewing.

One thing that definitely doesn’t hold up is Sarah Jessica Parker’s adult character flirting with a 12-year-old. Women are only supposed to flirt with robots, aliens and weird anthropomorphic ducks!

Screenshot of Sarah Jessica Parker's character stroking the main character's face
This is an inappropriate flirtation Ms. In The City

WarGames

Still amazing. Probably my favourite aspect of this movie is its realistic, no-nonsense portrayal of hacking. There’s no dumb made-up jargon or MacGuffins, no self-conscious cyberpunk mythologising, just a kid with a modem studying a target to guess a password, with his only motive being to learn and have fun. It’s so… pure, so unencumbered.

Screenshot of Matthew Broderick hacking
Just a boy and his IMSAI 8080.

The performances reflect that as well, with Ally Sheedy and Matthew Broderick really capturing the goofy, carefree energy of youth at the beginning of the movie.

Screenshot of Matthew Broderick smiling
A man with a face like this could get away with killing two women through reckless driving while holidaying in Northern Ireland in 1987.

One thing that has always bugged me is that WOPR/Joshua isn’t actually playing the game he’s supposed to be - if he were, he would only make moves as America. Instead he fakes a series of Soviet attacks, which should be David’s prerogative, seemingly to goad a real-life response.

Thankfully, whatever his motive, he ultimately fails, and instead learns a lesson about the futility of nuclear war in an incredibly powerful climax.

Two screenshots of the big WOPR screen showing Ireland getting nuked
Was it really necessary to nuke Ireland twice?

Star Trek: Generations

I understand some people hate this movie, but I love it. It’s the only movie that really captures TNG-era Trek at all, and it was incredibly exciting to see “my” Trek on the big screen.

Screenshot of Data saying that he hates the drink that Guinan has had him try
Star Trek fans getting a taste of their first Next Generation movie.

Probably my favourite aspect of this movie is Data’s subplot of acquiring emotions and coming to terms with them. Sure, a lot of it is just comic relief (good comic relief though), but it also feels like a culmination of the character’s arc from the series.

Two shots of the Enterprise D saucer section crashing
It's just like a long episode of the TV show!

Star Trek: First Contact

There are aspects of this movie that I like - I like the Borg, I like Picard’s relationship to the Borg, I like seeing the pivotal moment of first contact, and Zefram Cochrane’s warp ship looks awesome.

However… there is much more about it that I don’t like.

I think this was the introduction of the Borg queen, and while Alice Krige is amazing in the role, I think the existence of a queen changes the nature of the Borg somewhat - from something truly alien to something much more prosaic and uninteresting - and I don’t appreciate that.

Screenshot of Geordi
Also the vibes are way off on visor-less Geordi.

Much worse than that are the changes to the character of Picard, who went from being a diplomat and a scholar to an action hero.

Screenshot of Picard looking incredibly muscular
The diplomacy of big biceps.

Also the Borg time-travel plan as a last resort doesn’t make any sense - if they have the means and will to assimilate Earth in the past, they could do the time travelling from anywhere; there’s no reason to barrel into the Sol system and only do it if their cube is destroyed. And they could just try again. But then, it’s a common problem in Star Trek that things that are really easy to do one moment are impossible, or forgotten about, or work completely differently in the next.

Star Trek IV: The Voyage Home

This movie mostly consists of the Enterprise crew bumbling around San Francisco in the ’80s, trying (and often failing) to grasp the cultural differences and blend in. It is ridiculous and hilarious, and I love it!

Shot of the Enterprise crew on a street in San Francisco in the 1980s
Look at these adorable goofs.

One thing this movie does really well is make the characters feel like they are from a very different time than the one they are visiting. Other Star Trek time-travel stories, such as the Voyager crew’s visit to 1990s LA, Sisko and Bashir’s visit to San Francisco in 2024, or the Enterprise D crew’s visit to 19th-century San Francisco (boy, they sure visit San Francisco a lot), don’t quite capture the gulf of time in the same way - they generally come across as a little too savvy.

Another great aspect is how few fucks they give about telling people they are from the future, or even sharing technology with them. I bet the boys from the Department of Temporal Investigations had a heck of a time sorting this mess out!

Screenshot of Scotty entering the recipe for transparent aluminium into a Mac Plus
Apparently they had telepathic keyboards in the 80s.

It’s also brilliant that there’s no real villain except for our society’s indifference to environmental degradation.

Shot of the Klingon ship hovering over a whaling ship
Or maybe the villain is just... whalers?

Star Trek VI: The Undiscovered Country

I think this was the first Star Trek movie I got to see in the cinema. I was definitely too young to understand the political allusions at the time, but I seem to remember enjoying it anyway. The themes are classic Star Trek stuff - overcoming prejudices, working to secure peace, etc.

Screenshot of the Klingon character Chang, who has an eye patch
A somewhat less abstract enemy than the previous Star Trek movie on this list...

One striking aspect of this movie is how differently the Klingons are portrayed compared to The Next Generation, particularly its early seasons. In this movie they are erudite, sophisticated, and almost Romulan in their sneakiness. In TNG’s early seasons I guess we mostly just see Worf, but he is almost bestial, without the temperance and nuance of character that he would develop later. It’s interesting that the two portrayals overlapped in release despite this apparent disconnect.

Screenshot of the Enterprise crew and the Klingon crew they are escorting having dinner together
Contrast this with scenes of Klingons dining in TNG...

Back to the Futures

I used to watch these once a year, but it has been a while since I’ve watched any of them, and even longer since I’ve watched all three in a row.

Screenshot of Marty sidling away from the Doc as the Delorean is about to speed towards them, and the Doc looking at him accusingly
How dare you?

I had some really weird misconceptions about these movies when I was a kid. In particular, I thought that every era had versions of the same characters - not family members who happen to look remarkably the same, but actually incarnations of the same characters. For example I thought 1885’s Doc was a different Doc than 1985’s Doc - and that Seamus was actually “another Marty”. It’s hard to even grasp now what I was thinking with my undeveloped child brain, but somehow I also remember it really clearly.

Of course, now that I understand it fully, half the fun of a movie like this is poking holes in the time-travel logic. Obviously I’m not saying anything new here, but a couple of the things that stood out to me on this viewing were:

  • The way “artifacts” from other timelines change is ridiculous. If the future has been changed, why would they ever be in an intermediate state where things are fading away?
  • Marty has a picture of himself and his siblings, and after he changes the future they all fade away because they were never born - but why would the picture even have been taken then?
  • Similarly, Jennifer takes a fax from the future indicating that future Marty will be fired. After the timeline is changed, the contents of the paper are erased. Why would she have even taken the paper in this new timeline? It’s a blank piece of paper, what are you doing Jennifer??
  • Also LOL when she holds it up to the Doc at the end of III and says “It erased - what does it mean?” He has no context for what this piece of paper is!
  • The same thing happens with the picture of the headstone in the third movie. It changes between showing different names, and then in the end it becomes a photo of an empty patch of ground. Why would they have taken a photo of an empty patch of ground?
  • Also hilarious is when newspapers are shown changing from one story to another, very conveniently about the same person!
  • There’s a big question of why changes affect objects, but not people. Why does Marty not remember being raised by his new, cool parents, or that he has his dream truck? Why does he not remember his dad dying and his mom marrying Biff when that becomes the new reality? The only effect we see on Marty is when he is about to fade out of existence when it seems like his parents aren’t going to get together - but he should also become a different person, like his siblings did.
  • The oft-criticised scenes where Marty appears to inspire black men to achieve and create are really bizarre. His other actions change the future, but Mayor Wilson was already mayor before the time-travel antics, and Chuck Berry had already made his music. They can’t be time loops if Marty never changes along with everything else.

The other half of the fun is, as it has always been, Christopher Lloyd’s wonderful face:

Various shots of Doc Brown expressing emotion
Such range.

Despite having seen these movies so many times, this was the first time I realised that Marty’s future daughter in BTTF2 was also played by Michael J. Fox! And I still didn’t even catch it when she was on screen, I only noticed it in the credits!

Shot of Marty's future daughter coming down the stairs
I still don't see it to be honest.

The Warriors

I’m not sure what age I was when I saw this before - probably too young - and all I really remembered about it was that there was a bunch of outrageously costumed gangs running around, and the bit at the end when they make it back to Coney Island.

I get the impression that the violence depicted was shocking and realistic for the time, but now it seems quite tame - just some light, bloodless brawling.

One thing I can say after watching this now is that we don’t have enough mime gangs or Star Trek alien baseball gangs around these days. I think crime would be a lot more fun if people injected some theatrics into it.

A couple of shots of the baseball gang who look like the Star Trek aliens with the half black and half white faces.
They could be straight out of DS9.
Shot of the mime gang coming out of the subway
Uh-oh I hope they don't mime beating me up!

The copy I watched this time was the director’s cut, so I got comic-book style transitions and a voice-over opening which apparently weren’t in the initial theatrical release. The transitions in particular suited it well because the gang uniforms are so cartoonish.

Short Circuit

The impact this movie had on me as a kid was to make me want to be friends with robots - and to be Ally Sheedy’s boyfriend.

Screenshot of Johnny 5 and Stephanie dancing
I'm jealous of both of them.

As with the character Max in Flight of the Navigator, Johnny 5 becomes increasingly annoying as the movie goes on, though he remains somewhat more endearing. His repeated insistence that he is alive and that he wants to remain so, and doesn’t want to be involved in hurting any other creatures, was a powerful message to receive as a kid.

Unlike in Flight of the Navigator, the plot continues all the way through to a fantastic fake-out downer-upper ending.

The Rocketeer

Shot of the rocketeer flying towards a biplane
Outta my way, I'm a rocketman!

A barely controlled rocket-powered superhero foils the plans of some horrible Nazis - what more could you want from a movie? How about the gorgeous Jennifer Connelly also kicking ass?

Shot of Jennifer Connelly's character smashing something over a bad guy's head
Smashing!

The action set-pieces in this movie lack the impact of those from more recent superhero movies, but it’s also somewhat refreshing that he’s just an ordinary guy who can take an ordinary amount of abuse. Also the art-deco Iron Man aesthetic is amazing.

Except… how does he not burn his arse? Forget the helmet, what he needs are some thermally insulated pants!

Screenshot of the Hollywoodland sign becoming the Hollywood sign when a Nazi drops on the "land" part and explodes
Did you know that the sign used to say Hollywoodland until they blew it up for this movie?

Note: Once I was feeling better I got into other things and never finished off this post. It has actually been over six months now since I originally intended to post it, and therefore the opening paragraph is a bunch of lies. I’m finishing it off now because I think I have COVID again for the second (or maybe third) time, though less severe!

From Hell’s Heart

Demon character from my game bound in a circle of protection
What have they done to my boy??

From Hell’s Heart is my entry for Ludum Dare 55, the first Ludum Dare I have participated in since I made Out of Gas for Ludum Dare 48 in 2021.

I was not thrilled with it overall (and neither was anybody else - but we’ll get to that), but it was good to get into working on actual gameplay programming again after several years of only working on tooling, and it has inspired me to do a lot more gamedev work since.

The Concept

I was quite disappointed by the theme. I had an idea for the theme “It Spreads”, which was one of the final round themes, and which I was very invested in. I was going to make a zombie shooter where the weapons were all spreadables like jam and Nutella, and I had a lot of fun art and gameplay ideas for it.

So when the theme of “Summoning” was announced I was disappointed. But, the theme is rarely the one I would prefer, so I got to brainstorming and came up with the idea of “reverse-Doom”, which evolved into something not explicitly Doom related, but where you play as a demon fighting soldiers nonetheless. For a brief moment I was even considering calling it “What if Doom but you’re the Demon”.

The thing I liked about this concept was that it allowed me to address the theme in two ways - you are a demon who has been summoned, and you can summon other demons to help you. However, the fact that it is perfectly playable and beatable without ever summoning your demon friends diminishes this somewhat!

One of my hopes for this jam was to try out my dialogue graph editor, Digression, in another game, and unfortunately there was no place for it with this concept. I used it heavily in Out of Gas and Gophers in the past, but it has been fleshed out significantly since then and is ready to be put through its paces. Oh well.

Art

I couldn’t get Pyxel Edit to run on my new computer when setting up for the jam (it is Windows only, and I have run it using Wine in the past), so I decided to branch out and try Pixelorama - a pixel art editor built in Godot. Although my muscle memory from Pyxel Edit tripped me up constantly in minor ways, it was a good experience overall and I think I will continue using it going forward, as I always prefer open-source and native applications when they are available. The only thing I really found lacking was the way it handles the grid/tiles: there are no tools for quickly copying and manipulating tiles, and the grid is specified in the global settings instead of per-file, even though one grid is unlikely to suit different files even in the same project.

I also had to adapt the way I managed spritesheets, as Pyxel Edit allows you to define multiple animations in a single file, with the frames being tiles, while Pixelorama will only animate the entire file as a single unit. As such, each character’s sprites are spread across multiple files. In Godot this means swapping the sprite texture for each animation, but that didn’t seem to cause any problems.
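
As a minimal sketch of what I mean - the node names, file paths and dictionary here are hypothetical - the texture just gets swapped whenever a different animation is played:

extends CharacterBody2D

# Hypothetical setup: each animation's frames live in their own
# spritesheet file, so the Sprite2D's texture is swapped whenever
# a different animation is played.
@onready var _sprite: Sprite2D = $Sprite2D
@onready var _player: AnimationPlayer = $AnimationPlayer

var _sheets = {
    "idle": preload("res://art/player_idle.png"),
    "run": preload("res://art/player_run.png"),
}

func play_animation(anim_name: String) -> void:
    _sprite.texture = _sheets[anim_name]
    _player.play(anim_name)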

I made some big mistakes with the level art early on, initially designing it at twice the appropriate resolution for the characters I had in mind. I was able to take the basic design ideas and downscale them into the final wall and floor tiles easily enough, but it still wasted a lot of time. I designed them for an auto-tiling approach which I developed a while ago, where a tool script populates the floor tiles when you update the walls.

The initial oversized art I designed. The walls are about 5 times the height of the characters.
Humongous walls, and some variants of the player character.

I based the character shapes roughly on the design I came up with for Guerrilla Gardening for Ludum Dare 41, which is very round and fun, and I’ve found works really well for shooter games. It also features outlines for the characters which means there’s less risk of ending up with characters that don’t show up well against the background!

I was quite pleased with the art overall, though when I designed the wall and floor variants for the opening “cutscene” I realised that the game is far too red overall, as the characters looked much better against the bluer background!

Screenshot of part of the opening scene, with blue-grey crates against blue-grey walls and floor
The crates had much less contrast though.

I am determined, next time I do a jam solo, not to do full animations for each character or game element, and instead use the animation features of Godot to bounce, squash and spin things to bring them to life. This will give me much more time to create a variety of characters and objects, which I think would contribute more to a jam game than full run-cycle animations. The only place I used this kind of technique in this game was for the spinning and pulsing of the portal, and I think it was quite effective. People do some great things with this kind of animation and I am missing out.
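
Something like this is all it takes - a rough sketch, with made-up numbers, of a portal sprite that spins and pulses:

extends Sprite2D

func _ready() -> void:
    # Pulse the scale up and down forever
    var tween = create_tween().set_loops()
    tween.tween_property(self, "scale", Vector2(1.1, 1.1), 0.5)
    tween.tween_property(self, "scale", Vector2(0.9, 0.9), 0.5)

func _process(delta: float) -> void:
    # Spin at a quarter turn per second
    rotation += TAU * 0.25 * delta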

Music

Music is an element of jam games that I am always trying to find a new and better approach to, as it is something I enjoy a lot but am not particularly skilled at. In the past I have used BeepBox, LMMS and Bosca Ceoil with varying degrees of success.

In the week prior to the jam I came across an application called Helio which I thought looked incredibly promising. It looked like it might be as easy to use as Bosca Ceoil but with a wider variety of possible instruments, so I was excited to try it out.

Another type of tool that I am always looking to bring into the fold is modular synth emulators. I have tried VCV Rack in the past but never managed to use it for a game, and I recently came across a fork of it called Cardinal and had been practicing with that. I discovered that I could use Cardinal as a plugin for Helio and create instruments in it to control using Helio, so I planned to either do that or create a patch to generate all the music for the game, depending on what idea I was working on. The concept for From Hell’s Heart didn’t really seem to call for techno or ambient music, so I ended up trying the former.

Unfortunately, things did not go smoothly at all with creating the music. I found Helio to be lacking some essential features, and its menu system confusing. It crashed constantly, forgetting instrument settings each time and sometimes changing the volume of notes or moving them around arbitrarily. I think the interface with Cardinal may have been responsible for some of this, and I ultimately abandoned trying to use it for any instruments in favour of a selection of sound fonts. There was no way to preview the sound fonts in Helio, so I ended up previewing them on the command line using fluidsynth, but not all of the ones I found that I liked worked when loaded into instruments in Helio. It was just a frustrating mess all around.

Most baffling of the missing features in Helio: I couldn’t find a way to create more than one pattern for an instrument, so I ended up having to use multiple tracks of the same instrument whenever I wanted a different pattern.

It also lacks a central mixer where effects can be applied. Instead you can add effects on a per-instrument basis. I tried to do this in Cardinal as well, but again abandoned that as time dragged on and the application kept crashing.

So overall it was a bit of a nightmare, took five precious hours, and the track ended up being overly repetitive, with just one melody repeated over and over with the only variation being different instruments coming in and out. I hated it when I was done, but it grew on me a bit when I put it in the game. It did kind of have the mood I was aiming for.

Every time I try to make music in Linux I feel like I am missing out on some brilliant workflow that ties a variety of different applications together with their different specialities, but I am just failing to grasp how it all works, and this time was no different. Back to LMMS next time I think…

Sound Effects

Somewhere I did get to use Cardinal was the sound effects! I mostly used the Audible Instruments synth modules with a variety of different settings, and used Audacity to record different notes being played for variety. Some of the results I was really pleased with (the demon snarls), others much less so (the enemy voices, which sounded like robots saying random words), but overall it worked pretty well. I would have preferred to do some foley work but there wasn’t time, and it was a step up from SFXR at least.

Code

Since my long-time project, Just a Robot, is a shooter, I had a fair bit of base code to plunder for this jam. I don’t think I did or learned anything particularly interesting this time around, but the base code did include a technique for showing a character silhouette when they are obscured by a wall, which is interesting enough to share, and some automatic configuration of floor and mask tilemaps that allows rooms to be banged out quickly with little possibility of error.

The trick to the silhouettes is to create a mask of the parts of the walls that should obscure game entities using a BackBufferCopy node, and then check that mask in a shader on any sprite that should be obscured. Objects in the game do not need to be children of the TileMap, and in fact the walls and floors are separate TileMaps.

Screenshot of Godot editor scene tree showing mask TileMap and BackBufferCopy
Yes I used nodes as "folders" here, bad very bad.

The mask TileMap is populated automatically in the editor using a script like this:

@tool
extends TileMap

@export var copy_map: NodePath
@export var copy_from_layer: int = 0
@export var copy_to_layer: int = 0
@export var refresh_frequency: int = 10

@onready var _copy_map_node = get_node_or_null(copy_map)

var time = 0

func _process(_delta):
    if Engine.is_editor_hint():
        _tool_process()

func _tool_process():
    # An exported NodePath is never null, so check for an empty path instead
    if not copy_map.is_empty() and refresh_frequency > 0:
        _copy_map_node = get_node(copy_map)
        var current_ticks = Time.get_ticks_msec()
        # refresh_frequency is in seconds
        if time == 0 or current_ticks - time > refresh_frequency * 1000:
            time = current_ticks
            _copy_map()

func _copy_map():
    var cells = _copy_map_node.get_used_cells(copy_from_layer)
    self.clear_layer(copy_to_layer)
    for cell in cells:
        if _copy_map_node.get_cell_source_id(copy_from_layer, cell) != 0:
            self.set_cell(
                copy_to_layer,
                cell,
                _copy_map_node.get_cell_source_id(copy_from_layer, cell),
                _copy_map_node.get_cell_atlas_coords(copy_from_layer, cell),
                _copy_map_node.get_cell_alternative_tile(copy_from_layer, cell)
            )
    self.fix_invalid_tiles()

The mask tileset just has 100% red everywhere that should be obscured, so the shader checks for red in the screen texture and, depending on a uniform, either hides the sprite or displays a grey silhouette colour instead of the sprite’s texture (leaving the alpha intact) wherever it finds it:

shader_type canvas_item;

uniform bool hide_when_occluded = true;
uniform sampler2D SCREEN_TEXTURE : hint_screen_texture, filter_linear_mipmap;

void fragment() {
    vec4 mask = textureLod(SCREEN_TEXTURE, SCREEN_UV, 0.0);
    if (mask.a > 0.0) {
        if (mask.r > 0.9) {
            if (hide_when_occluded) {
                COLOR.a = 0.0;
            } else {
                COLOR.rgb = vec3(0.2, 0.2, 0.2);
            }
        }
    }
}

It turned out that people really hated the tight corridors and enemies being obscured though, so I probably won’t be using this technique again!

A similar script to the one that populates the mask also populates a TileMap with floor tiles based on the wall tiles. The floor tiles are a different size than the wall tiles, however, so it is a bit longer and more involved. In the future I think I would prefer to just put floor and wall tiles on different layers of the same TileMap, or as TileMapLayers or whatever is being introduced in Godot 4.3…
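
As a hedged sketch of the extra coordinate juggling involved (not the actual jam script - the scale factor and tile IDs are placeholders): if the floor tiles were half the size of the wall tiles, each wall cell would map to a 2x2 block of floor cells:

@tool
extends TileMap

const SCALE = 2  # wall tile size / floor tile size (placeholder)
@export var floor_layer: int = 0
@export var floor_source_id: int = 1
@export var floor_atlas_coords: Vector2i = Vector2i(0, 0)

func _fill_floor_for_wall_cell(wall_cell: Vector2i) -> void:
    # Each wall cell covers a SCALE x SCALE block of floor cells
    for y in SCALE:
        for x in SCALE:
            set_cell(
                floor_layer,
                wall_cell * SCALE + Vector2i(x, y),
                floor_source_id,
                floor_atlas_coords
            )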

Another neat thing I did in the editor was to have spawn points for portals and enemies draw a line to the thing they are associated with, or an obvious warning if they are not configured correctly. This helped me ensure that everything was set up properly when rushing through the room designs in the last few hours of the jam! I am trying to get into the habit of adding this sort of tooling to everything that might benefit from it.

Screenshot of Godot editor showing lines from spawn points to associated entities
Lines, lines, everywhere are lines

@tool
extends Marker2D


@export var destination: GameRoomSpawnPoint


func _process(_delta):
    if Engine.is_editor_hint():
        _tool_process()

func _tool_process():
    queue_redraw()


func _draw():
    if Engine.is_editor_hint():
        if destination != null:
            _draw_to_destination()
        else:
            _draw_warning()


func _draw_warning():
    self.draw_circle(Vector2(0, 0), 15.0, Color.RED)


func _draw_to_destination():
    self.draw_circle(Vector2(0, 0), 15.0, Color.GREEN)
    # Draw in this node's local space; to_local() accounts for the node's
    # full global transform, which global_position - position does not
    self.draw_line(
        Vector2(0, 0),
        to_local(destination.global_position),
        Color.GREEN,
        1.0
    )

I also took from the base code a node-based state machine for enemy AI. This turned out to be a bit confusing and inflexible, and when I tried to introduce some new behaviour on the final day I ended up making the game crash constantly, and had to roll back. I’m still unclear on exactly what went wrong there - I was getting null references due to the summoned allies despawning, even with null checks before referencing them. It was probably really obvious but I was exhausted by then. In any case there were other problems with the state machine I developed and I have started investigating the Godot plugin Beehave as an alternative for the future.

Something that did not work very well was the enemy navigation and avoidance. I completely misunderstood how the avoidance system was supposed to be used, so it was actually not in play at all. In experiments since the jam I have got it working to some extent, but it seems like it does not work very well anyway, with agents just grinding to a halt in many circumstances even after avoiding an obstacle! The navigation itself would have worked much better with one small settings change, as I discovered later: setting path_postprocessing on the NavigationAgent2D to “edgecentered”. As I had it for the jam, navigation agents were always getting stuck on wall corners…
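
For anyone hitting the same problem, the change is just this (assuming a NavigationAgent2D child node; it can equally be set in the inspector):

# "Edge centered" post-processing keeps path points on the middles of
# navigation polygon edges instead of hugging corners, so agents stop
# snagging on walls
@onready var _nav_agent: NavigationAgent2D = $NavigationAgent2D

func _ready() -> void:
    _nav_agent.path_postprocessing = \
        NavigationPathQueryParameters2D.PATH_POSTPROCESSING_EDGECENTERED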

Results

My game got lots of nice comments, and also a lot of complaints about out-of-bounds glitches, enemies being obscured by the walls, and the game being too easy.

I was uninspired by the theme this time around and my skills were somewhat rusty in every area, and the game was generic and buggy as a result, so I was unsurprised by the negative comments. I did think it looked good and was reasonably juicy and sounded alright. I wasn’t expecting great overall ratings, but I did think it would do ok in the graphics category at least. However, the ratings were quite bad across the board, at least compared to previous jams!

Category     Rating   Placing   Percentile
Overall      3.571    574       65th
Fun          3.63     418       75th
Theme        3.413    792       52nd
Innovation   2.739    1080      35th
Humor        2.609    823       50th
Graphics     3.935    428       74th
Audio        3.619    343       79th
Mood         3.457    686       59th

This game did worse in almost every category in percentile terms than my Ludum Dare 38 entry, which I didn’t even complete - it was just an opening scene with a short dialogue and no gameplay. Quite a drop from my “Gophers” peak!

Ratings Graph
Percentile Graph

Post-jam and Take-away

I won’t make my usual promise to work on a post-jam version of this game, both because it’s not interesting enough to be worth it, and because I never keep that promise anyway!

Instead, I have been inspired to start working on Just a Robot again since the jam, trying to get combat and enemy behaviour working the way I have always envisioned them, to see if it would actually be fun. The idea is to make it a cover-based shooter where the enemies’ abilities are more or less close to the player’s, and combat feels weighty and dangerous in a way that is distinct from bullet-hell shooters.

I successfully implemented a cover system for the player, but when I went to enable the enemies to use it and switch to using Beehave for their AI I quickly discovered that the way I had built them so far was too centralised and inheritance based, so after watching a few tutorials and considering things a bit I decided to start again mostly from scratch with a more composition-based approach. This is working really well for the player so far and I’m nearly back at the same point as I was previously.

I also thought about and experimented with tiling approaches a bit, creating both a very small scale autotiling tileset with multiple terrains, and a mockup of a more organic looking tileset with a wide variety of designs, angled walls and floor sections and the like. I am tired of creating boxy, uninteresting levels. I was initially inspired in my gamedev journey by the art and design of Hyper Light Drifter, and I want to get back to achieving something of that look and sense of verticality - maybe not in jam games, but in Just a Robot.

Complex tiles mockup
Too bright for this game, but moving in the right direction I think

I started on a greybox version of the above to actually use in designing the game, but it’s not quite finished yet.

I Accidentally Visual Scripting

For some months now I’ve been trying to make some progress on the game I’ve been “working on” for probably the better part of a decade, Just a Robot (yep, 7 years ago was the first post there, and it was already in progress for a few years before that!). I haven’t really been working on it for most of that time, though it has always been in the back of my mind.

Variations on the character art over the years, including sprites and a portrait
All I actually seem to do is redesign the art...

While I have re-implemented some basic gunplay mechanics, and experimented with auto-tiling and the like, my main task has been improving the editor for cutscene graphs that I designed previously (and which I’ve mentioned a few times before) based on my experience of using it in a couple of jam games, and the anticipated requirements for this game.

I am also hoping that it will be something other people might find useful, and that I can release in the Godot asset library. As such I try to do things in a generalised and user-friendly way, with nothing that would tie it specifically to my game, and no rough edges that I would happily ignore myself but would be embarrassed for other people to have to deal with.

As a result, it’s taking quite a while!

One thing I’ve noticed is that as I try to maintain flexibility I seem to be implementing tiny haphazard visual scripting systems for calculating values and defining conditions. Of course, the graph editing is itself a type of visual scripting, but that was both what I was expecting to be designing, and has well-established UI conventions in the engine. I don’t think there are any conventions or controls for defining values and conditions in the way that I am.

Variable Changes

In the initial incarnation of the graph editor, it was possible to set the value of variables, and branch based on the value of variables, but the way it was implemented was… not good.

  • Variable names were entered as strings. This seemed like a problem to me - as the scope of a project grew I anticipated that it would become harder and harder to keep track of what variables were being used, and if they were entered correctly everywhere.
  • Values were also always strings - if you wanted a boolean, just enter “true” or “false”!
  • There was no scoping, just one global pool of variables for all graphs.
  • You could only assign constant values to variables. No incrementing or decrementing, arbitrary calculations, or using the values of other variables.
  • You could only compare to constant values for branching.

Screenshot of early version of the editor, showing string variable name and value fields
This Naomi be Wolf

My initial changes to improve this situation were to introduce scoping and type definitions for variables. Variables could be scoped to the graph, to the area (i.e. “level” or “room” or however the game wanted to define it), or be global. They could also be boolean, int, float, or string. The UI would change to reflect the type, so you would get a checkbox for booleans and a numbox for numbers.

This was better, but still kind of awful, because everywhere a variable was used the scope and type would have to be selected again! Unlike in code there was no way to declare a variable once and then use it elsewhere with its type and scope already known.

Screenshot of one of the graphs from my game "Out of Gas", showing scoped boolean variables with values set by checkboxes
Teenagers up to no good as usual

So next… I implemented variable declarations, more or less! These are defined in the project, and anywhere that a variable is required in a graph it can be selected from a searchable dialog.

Dialog for defining a variable
Defining a variable
Variable set node with the variable selection control and a boolean value
Variable Set node
Dialog for finding a variable
Selecting a variable
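
Conceptually, a declaration doesn’t need to be anything fancy - something along the lines of this simplified resource (illustrative only, not the plugin’s actual class):

class_name VariableDeclaration
extends Resource

enum Scope { GRAPH, AREA, GLOBAL }

@export var variable_name: String = ""
@export var scope: Scope = Scope.GRAPH
@export_enum("Boolean", "Integer", "Float", "String") var type: int = 0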

Now I felt like I was getting places, though the problem of only being able to set and compare constants remains, and is what I’m currently tackling. But more about that in a minute.

Choice Conditions

Another feature of the initial version of the graph editor was the ability to make dialogue choices only conditionally available to the player - for example making a choice dependent on having encountered a particular character or completed a particular quest. This was of course based on the comparison of a variable to a string constant, so it had all the same deficiencies as everything else about the variables, as well as a few of its own:

  • Only one variable could be used for each choice.
  • The only comparison available was equality, no greater or less than or anything like that.
  • There was no possibility of negation.

The minimal improvement would be to allow a comparison of a single variable with a selectable operator, and a constant value. But what if the choice depends on the value of multiple variables? What if you want to make different choices available if the player has encountered a character but not completed a quest, than if they have done both? This could maybe be achieved using branching nodes to set a third variable, but that seemed like jumping through a lot of unnecessary hoops. I wanted to be able to define complex conditions directly on the choices.

My solution was to add a dialog where an arbitrarily complex condition can be defined - though currently only with constants on the right side of any operator.

A screenshot of the condition definition dialog alongside the node that it was invoked from
The Farnsworth Condition

As you can see above, the condition is structured as a tree with boolean operators grouping the results of comparison operators. The whole condition is summarised as its (frankly much more understandable) equivalent GDScript.
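
For example, a condition tree along these lines (the variables are hypothetical):

AND
├── met_farnsworth == true
└── OR
    ├── quests_completed >= 3
    └── reputation > 10.0

would be summarised as met_farnsworth and (quests_completed >= 3 or reputation > 10.0).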

So mission mostly accomplished - it was now possible to define conditions on choices with an arbitrary number of clauses. However, it was starting to seem like a lot of UI complexity to achieve something that is quite simple to do in code - almost like a type of visual scripting…

Visualising Values

Now it’s time to tackle the other side of the equation - the values to set or compare the chosen variables to.

For setting variables, I wanted it to be possible to increment or decrement values as well as setting static ones… But why not also allow values to be multiplied or divided? What if you want to set a variable to 2x another variable, or the result of a more complex calculation? These don’t seem like particularly likely requirements for my game, but why reduce the flexibility by only allowing a handful of fixed operations?

For comparisons, I initially only considered allowing a choice between a constant or another variable. But would these not also benefit from more flexibility? What if you want to check if a variable is greater than half another variable? If setting variables was as flexible as described above, this could be achieved by setting a temporary variable, but that seems unnecessarily roundabout and annoying. Since the task is the same in both situations (obtain a value for the right side of an operator), it seemed prudent to create one control that would cover both.

Another factor is that most of the above concerns only apply to integer and float variables. Booleans have another set of operators that might be applied to them. Strings have a more restricted set of operators, but there are a variety of functions that you might want to apply, or methods that you might want to call on them - to_lower, rstrip, replacen, etc. In fact, the same might be true of integers and floats…

With all that in mind the requirements have become quite complex - I’m faced with implementing a small but significant subset of GDScript in a GUI!

The design I have so far allows for multiple variables or constants to have operators to be applied between them, grouped by brackets if necessary, and for a selection of appropriate (for the type) functions to be called:

Screenshot of the design of the proposed value calculation control in the designer
That's a really long and weird way to write 1...

One thing that annoys me about this is that it’s much the same structure as the conditions (a tree), but it uses completely different controls and looks and works completely differently. It’s going to look quite strange when they are side by side in the conditions dialog… However, the tree control probably suits the conditions better because their elements need to be selectable, and I think the “prefix notation” also suits a tree better, while it would be unfamiliar to most people for mathematical operators.

I can’t really tell at this point if this is at all intuitive or if there’s an obviously much simpler solution that I’ve overlooked. But it certainly makes me long to be able to just enter the values as code, regardless of their composition.

Could I Uuuh… Do That?

I have been thinking about what would be involved in allowing the user to enter conditions and values as the GDScript code they would likely already be familiar with.

Godot includes an Expression class which can be used to parse and execute arbitrary expressions, so conceivably that could be used to run whatever the user inputs, making most of GDScript’s expression syntax available to them. It can even parse the text for errors without running it, which is all I would want to do in the editor anyway. One likely difficulty is that the referenced variables would not actually be GDScript variables at runtime, but entries in one of several dictionaries, so I might have to write my own parser anyway to pick those out and replace them. I’m not sure I really want to do that.
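
As a quick sketch of how that might look (the variable names are hypothetical, and in reality the input values would be pulled out of the variable dictionaries):

func validate_and_run(user_code: String) -> void:
    var expression = Expression.new()
    # Parse without executing - this alone would do for editor validation
    var error = expression.parse(user_code, ["quests_completed", "reputation"])
    if error != OK:
        push_error(expression.get_error_text())
        return
    # At runtime, bind values for the named inputs and evaluate
    var result = expression.execute([3, 12.5])
    if not expression.has_execute_failed():
        print(result)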

If that didn’t work out, the alternative would be to parse the input myself (even more parsing!) and convert it into the same resource structures that I’m intending to have the UI create. This would likely be much more limited in which language constructs and operations it could support - and that might cause confusion or frustration for the user.

And, there are reasons why allowing these things to be defined as code might not be a good idea anyway:

  • I don’t really like the idea of having to switch from a GUI way of doing things to a code way of doing things when the rest of the plugin is very much GUI driven.
  • I don’t like the fact that the predefined variables would not be selectable, negating some of their utility. Or at least, to make them selectable I would have to implement some sort of auto-completion on top of everything else, which might be beyond me!
  • Some people use C# with Godot, and it’s unclear if the Expression class can parse C# - I suspect it can’t.
  • Using the Expression class would likely allow arbitrary GDScript to be entered and executed during graph processing. That might be too much flexibility! If I want to allow that it will be its own node type.

One other GUI option I can think of would be to allow value calculations and conditions to be defined using their own type of graphs. This might have the advantage of using well established UI conventions and existing controls. On the other hand, a graph editor is not ideal for defining a tree, and it would probably appear even more unnecessarily complex and confusing for the undoubtedly most common use cases of setting variables to constant values and defining simple conditions based on single variables. It would also not be any less work for me!

Conclusion

UI is hard

All My Yield()s, Gone!

In a previous post I described a way to use the state object returned by a yield() call to control the traversal of a graph - specifically, a graph describing a cutscene or dialogue - where some nodes in the graph require waiting on input from the user or some other event before proceeding.

In Godot 4 the yield function was replaced with the await keyword. This has the same basic purpose: to suspend execution of the current function and return to the caller, to be resumed at a later time. However, it does not return the state object that yield did, so there is no built-in way to resume the function from the caller (that I can see, anyway).

Fortunately, it is not difficult to recreate the functionality. The first thing we need to do is define a very simple class that we can include instances of in the signals from the graph controller:

class ProceedSignal:
    signal ready_to_proceed(choice)

    func proceed(choice: int = -1):
        ready_to_proceed.emit(choice)

This class includes a signal and a method that the consumer of the graph can call to tell the graph controller that it can proceed to the next node - similar to the resume() method on the coroutine state object in Godot 3.

The _await_response function, which previously yielded to create a resumable coroutine state, now just returns a new instance of this class. It could alternatively just be created directly where the function is called:

func _await_response():
    return ProceedSignal.new()

In process_cutscene() we now await calls to process node types that require waiting on the consumer, rather than yielding them:

func process_cutscene(cutscene):
    _graph_stack = []
    _local_store = {}
    _current_graph = cutscene
    _current_node = _current_graph.root_node

    ...

    while _current_node != null:

        if _current_node is DialogueTextNode:
            await _process_dialogue_node()
        elif _current_node is BranchNode:
            _process_branch_node()
        ...

And when processing such a node, we just create the ProceedSignal object, emit the relevant signal with it, and then await the ready_to_proceed signal from it:

func _process_dialogue_node():

    ...

    text = _current_node.text

    var character_name = null
    var variant_name = null
    if _current_node.character != null:
        character_name = _current_node.character.character_name
    if _current_node.character_variant != null:
        variant_name = _current_node.character_variant.variant_name

    var process = _await_response()
    call_deferred(
        "_emit_dialogue_signal",
        text,
        character_name,
        variant_name,
        process
    )
    await process.ready_to_proceed

    _current_node = _get_node_by_id(_current_node.next)

Nothing much changes from the consumer’s point of view, it just needs to store the object and then call the proceed() method when it’s ready:

func _on_cutscene_controller_dialogue_display_requested(
    text,
    character_name,
    character_variant,
    process
):
    # Hang on to the process object so we can tell the cutscene controller
    # to continue when we're ready to proceed
    _current_process = process

    ...


func _on_dialogue_display_continue_clicked():
    DialogueDisplay.hide()
    _current_process.proceed()

That’s all the changes required for this project! Of course, the coroutine state object also had a property indicating if it was resumable or not, is_valid. It would not be difficult at all to reproduce this by simply adding such a property (perhaps behind a setter that would make it read-only except internally), and setting it to false once proceed() is called.
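
A minimal sketch of how that might look:

class ProceedSignal:
    signal ready_to_proceed(choice)

    var _is_valid = true

    # Read-only from outside: there is no setter, and only proceed()
    # clears the backing variable
    var is_valid: bool:
        get:
            return _is_valid

    func proceed(choice: int = -1):
        if not _is_valid:
            return
        _is_valid = false
        ready_to_proceed.emit(choice)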

It could also be expanded to allow more complex communication between the coroutine and the consumer, or to make a long-running coroutine cancellable. The controller can only await one type of signal to continue, but you could have it pass different instructions when resuming, e.g. you could give it stop() and proceed() methods. Additional properties on the signal object could be used to pass back other data without having to pass it in the signal at all.

enum ProceedSignalType {
    STOP,
    PROCEED
}

class ProceedSignal:
    signal ready_to_proceed(signal_type)

    var consumer_state

    func stop():
        ready_to_proceed.emit(ProceedSignalType.STOP)

    func proceed():
        ready_to_proceed.emit(ProceedSignalType.PROCEED)

Cutscene Graph Editor Status

The cutscene graph editor has been upgraded to support Godot 4, at a new home. I’ve also added a bunch of minor features, such as multi-node deletion, copy & paste and duplication support, and support for dragging from an output port to create a new node.

I’m now working on improving some parts of the tool that were lacking in flexibility. My current task is improving the addition of conditions to the choice and random nodes, which previously only allowed a single variable to be compared for equality to determine if a branch should be considered. The new system moves the condition specification UI out of the nodes themselves and into a dialog box, and allows any number of variables to be evaluated using a variety of different operators.

The Farnsworth Condition

Future plans include new ways of defining and interacting with sub-graphs, more flexible ways of manipulating variables, built-in variables and meta-data, and better ways of defining characters.

Update: Pure Signals

It occurred to me after posting this that there is an even easier way to achieve this - as long as you don’t need to keep any state in the signalling object. Because Godot 4 allows you to pass signals and callables as values, you could just pass the signal to proceed on along with the signal that initiates the action. When the consumer is ready to proceed, it can simply call emit() on it.
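
A sketch of that approach, reusing the names from earlier (the proceed signal here is just another signal declared on the controller):

# On the graph controller: declare a second signal purely as the resume
# channel, pass it with the request, and await it
signal dialogue_display_requested(text, proceed)
signal proceed_requested

func _process_dialogue_node():

    ...

    dialogue_display_requested.emit(text, proceed_requested)
    await proceed_requested

    _current_node = _get_node_by_id(_current_node.next)

# On the consumer side, store the passed signal and emit it when ready:
func _on_cutscene_controller_dialogue_display_requested(text, proceed):
    _current_process_proceed_signal = proceed

func _on_dialogue_display_continue_clicked():
    DialogueDisplay.hide()
    _current_process_proceed_signal.emit()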

I might still prefer the other way of doing it because _current_process.proceed() reads a little better than _current_process_proceed_signal.emit().

A Culture of Conspiracy

I read “A Culture of Conspiracy” by Professor Michael Barkun a few years ago. In it, he describes several broad categories of conspiracist belief, and traces the development of a variety of beliefs from their origins through to the time of writing as they branch and mutate and recombine.

I started this post not too long after reading it but I’ve let it languish for several years now. I hope I’m not misrepresenting its contents due to my failing memory, but I would recommend you read it yourself and find out - it’s definitely worthwhile, and I’m only picking out a few bits of it to talk about that were most interesting to me personally.

Conspiracy Types

Barkun identifies three types of conspiracy theories based on their scope:

  1. Event conspiracies - limited to a specific event, such as the assassination of JFK.
  2. Systemic conspiracies - concern the plans of a specific organisation or group with broad goals, such as taking over the world.
  3. Superconspiracies - this type of conspiracy links together various other conspiracies of the event and systemic varieties in a hierarchical manner (e.g. the CIA assassinated JFK, but the CIA are a tool of the Illuminati and his assassination served their purposes, but really the Illuminati are in thrall to Satan etc.)

Mostly this makes me wonder whether systemic or event conspiracies ever really exist on their own in anybody’s mind anymore, because believers all seem to espouse and nod along with such a mish-mash of ideas. He does note that superconspiracies have been on the rise since the 1980s. But is something like QAnon even separable from the web of conspiracist ideas in which it seems to be embedded?

Stigmatised Knowledge

Barkun identifies the origin of conspiracist thinking in stigmatised knowledge:

By stigmatized knowledge I mean claims to truth that the claimants regard as verified despite the marginalization of those claims by the institutions that conventionally distinguish between knowledge and error.

Stigmatised knowledge is not exclusive to conspiracism, but it is inevitably a feature of it, and I think it leads to it readily even when it initially exists without it. For example a believer in a discredited alternative medical treatment might not initially believe in any specific conspiracy in connection to it, but eventually they will have to explain why it is not accepted into the mainstream, and an easy answer is that it is being suppressed by a conspiracy of insiders who benefit from it not being adopted.

With this concept in mind it is easy to understand the origin of “pipelines” into conspiracism from any pseudoscientific field, fringe religious movement, or even political ideologies that see their righteousness as obvious and their victory as inevitable. When their distorted worldview meets reality and they have to explain their failures, conspiracism is right there. Once they understand one piece of stigmatised knowledge as being suppressed by a conspiracy, it’s a small step to accepting other such claims.

Fact is Fiction, Fiction is Fact

The commonsense distinction between fact and fiction melts away in the conspiracist world. More than that, the two exchange places, so that in striking ways conspiracists often claim first that what the world at large regards as fact is actually fiction, and second that what seems to be fiction is really fact.

Anybody who has spent any time listening to conspiracists will recognise the truth of this statement right away.

First of all, all conspiracy theories necessarily involve claims that one or more generally accepted truths are actually lies intended to pacify or deceive the sheep. Often any information coming from an institution is dismissed without consideration, because it is a given that institutions - be they governments, universities or the “mainstream media” - are “in on it”.

With fact rendered fictional, it becomes very easy to point to fiction to fill gaps in their evidence. Sometimes this takes the form of taking fictional sources as literal accounts, and Barkun describes some of these instances. Other times fictional stories are said to contain encoded messages or to be for the purpose of softening up the masses to accept some coming revelation or societal change - nothing can ever be somebody’s neat idea for a sci-fi concept, or allegory, or their opinion of where society is or is going, unplanned.

In some accounts, I believe, describing their literal plans in fiction is thought to serve an occult purpose for the conspirators, much like is claimed about symbols on currency or on buildings (e.g. Denver airport). Advertising their plans in a way that will only be understood by an enlightened few is somehow a part of bringing them to fruition. This is how Alex Jones interprets H.G. Wells’ “The Time Machine” when he talks about Eloi and Morlocks, saying “it’s all right there folks” - and similarly for the other pop cultural works he references, of course.

Emergency Management

One of the things I was most eager to learn about from this book was FEMA camp conspiracy theories. I find these theories amongst the most frustrating (and amusing) because of how they look past the very real historical precedents of concentration camps, and the present day realities of mass incarceration and political repression in the United States and elsewhere, and focus instead on a long running conspiracy that is always just on the cusp of rounding up those troublesome “patriots”.

Of course, the longer this conspiracy has been in the milieu the more absurd it becomes. Barkun identifies the origins of this theory as a pamphlet by a man named William Pabst, written sometime prior to 1979. Pabst warns: “your country and way of life [will be] replaced by a system in which you will be a slave in a concentration camp”.

As such, more recent incarnations of this theory imply that the US government (acting on behalf of some hidden puppet-masters, perhaps) has been building and maintaining a network of secret camps for over 40 years without ever putting their nefarious plans into motion!

Historical instances of the use of internment and concentration camps by governments are of course very real. However, they have never required such extensive periods of preparation. When the British government decided to round up Irish Nationalists in Northern Ireland, they built temporary structures in the weeks prior to doing so, and more permanent structures over the next few years after that. The United States forced 120,000 Japanese Americans into camps during WWII, first in hastily converted racetracks and fairgrounds, and then in more permanent facilities built over a few months in 1942. Even the horrifying machinery of the Nazis did not require decades to construct, instead comprising a mix of repurposed buildings of many types, and camps newly constructed during the course of the war - a system that imprisoned and exterminated millions.

The Speed of Lies

When this book was first published in 2003 it had already been updated from the largely completed manuscript to include chapters concerning the explosion of conspiracism after the 9/11 attacks. The second edition, published in 2013, which is the one I read, had been updated with chapters about birtherism and millenarian conspiracies about the year 2012.

In a testament to the veracity of the saying that “a lie can travel half-way around the world while the truth is putting its shoes on”, many of the conspiracies considered in the book, even the later additions, seem quaint and out-of-date from the vantage point of 2023. Of course any book on the constantly shifting, slippery world of conspiracism will be out-of-date (in some ways) within a few years of coming out.

Nonetheless, the analysis is still useful for understanding how conspiracist ideas are created and disseminated. Indeed, no amount of time or lack of confirmation will kill many conspiracies - the reason I was so focused on FEMA camp conspiracies in this post is that somebody told me just a few years ago that Hillary Clinton would have put everybody in camps, and similar rhetoric arose even more recently when the language around COVID-19 mitigation measures was claimed to be intended to “make us feel like we’re in prison” - a FEMA camp of the mind I guess.

The only writing of Barkun’s that I’ve read concerning more recent developments in the conspiracy sphere is an article in “Foreign Policy” about QAnon which examines the efforts of its adherents to cope with its failed prophecies. As far as I’m aware QAnon is still going strong despite its predictive failures.

QAnon has been described by some as a “big tent” conspiracy theory because of its ability to adapt and incorporate new claims. However, it’s hardly unique in that regard - NWO conspiracy theories and many others have been interpreting events through their particular lenses and adapting and incorporating new claims for decades. QAnon might be unique in terms of its longevity despite having made specific, dated predictions that failed to come to pass, but to me it seems more like a systemic conspiracy that conspiracists have been rolling into their own long-existing superconspiracies. It only seems like QAnon is the “big tent” because it broke so spectacularly into the mainstream. As it breaks down under the weight of its failures it seems like it is adapting to include other theories, but it is actually the other theories that are absorbing it into themselves and trying to salvage the parts of it that are useful.

More Conspiracism

I started listening to the Knowledge Fight podcast during Alex Jones’ defamation trials to get the scoop on developments, and I haven’t been able to stop listening since. Dan and Jordan’s analysis of Jones’ bullshit is excellent, and it’s a great way of keeping up with what he’s saying. It’s also incredibly entertaining.

I have been meaning to check out the QAnon Anonymous podcast as well for a while to get a more general view, but I haven’t gotten around to it yet.

Syncthing Update

In a previous instalment, I described how I used Syncthing to sync my notes in Joplin on my laptop to Joplin on my phone. Unfortunately that arrangement didn’t last long - updates to different versions of Joplin on different devices resulted in incompatible versions of the notes database being synced, and at one point the Android version became unable to export to the filesystem at all. I never got to the bottom of that, but I had moved to using Logseq for most of my notes on the computer anyway, so it didn’t really matter much.

I also mostly fell out of using Syncthing, since I no longer required it for its primary purpose. However I got a Framework laptop recently and had the need to sync my Logseq graphs to it, as well as my music collection, work files, etc. It was such a joy to get it set up and watch files start to zip across to the new machine that I once again had to sing the praises of this amazing piece of software.

Some shares I have set up so far:

  • The camera roll on my phone - send only so that remote devices can’t add or delete photos. It’s great to have photos zip across to whichever computer I’m using without having to involve Google Photos.
  • The Default sync folder - why not? If some random file is needed everywhere it can go in there. It always throws me a bit that these create a common shared folder rather than each device’s folder being its own thing, but it’s cool, it’s fine, I’ll get used to it.
  • My Logseq graph folders - I set these up to back up any changes just in case, because Syncthing will not perform merges. However I’m not really that worried about it because I will only work on one machine at a time, and if the sync runs regularly it shouldn’t be a problem.
  • My music collection - I set this up to ignore *.zip and *.part for uh… reasons (there’s a sketch of the ignore file after this list). One of the nuisances of a collection of downloaded music is ensuring that every new acquisition gets to every device where you might want to listen to it before you want to listen to it. Well, problem solved! And no more shaming from Spotify for streaming the same album on repeat for several months! (It was Manu Chao)
  • Several shares specifically between two particular devices e.g. huge desktop replacement laptop to the Framework, for when I want to share files specifically between those but not my phone.
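
The ignore patterns mentioned above live in a plain text file called .stignore in the root of the shared folder (lines starting with // are comments). Mine for the music share is just a sketch along these lines:

// .stignore - keep archives and in-progress downloads from syncing
*.zip
*.part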

I’m considering syncing some specific work folders as well so I can more easily untether from my desk, but… haven’t decided yet.

The Ancaps

Screenshot of a couple embracing in front of a bonfire, into which anarcho-capitalists are throwing books produced by a government
What if we kissed at the Anarchist book burning?

Much has been said already about the fact that HBO’s documentary series “The Anarchists” is not really about anarchists, and by people far more capable of making the argument than I. Nonetheless, I do have some thoughts on that and other aspects of the documentary.

My overall impression of the documentary is that it is philosophically vacuous and insincere. “Anarchism” is defined superficially by the characters in what is essentially an examination of interpersonal drama. The history of Anarchism proper, and its inherent conflict with capitalism, is not explored, but neither, really, is “anarcho”-capitalism or the ideas behind it, on their own terms or otherwise. The community is simply mined for drama and spectacle. The doc’s main propaganda points lie in its implicit association of “freedom” with laissez-faire capitalism, and in the appropriation of the word “Anarchism” and anarchist symbols by the right, a long-running project.

The Inherent Contradictions of Ancapitalism

Though the documentary has little interest in examining them, the cracks and contradictions in the ideology do show through.

I think it is safe to say that everybody who enthusiastically embraces an extreme capitalist ideology thinks that they are, or will be, the boss of whatever enterprise they are involved in. Of course this produces tension when it turns out that somebody else’s property rights, combined with the lack of any critique of property or the hierarchies it produces, make subordinates of people who consider themselves entitled to be in charge.

The Anarchapulco conference that the documentary focuses on was not organised in a non-hierarchical manner from its conception because anarcho-capitalism does not renounce all hierarchies, only the existence of the state. Jeff Berwick, the founder and apparent “owner” of the conference behaves throughout as if organising the conference is something that an employee should be doing on his behalf, with his own role limited to giving a keynote, receiving adulation, and partying.

The first such employee we are introduced to is Nathan Freeman, who apparently had a leading role in organising the conference for several years after attending the initial one. It seems to me that Freeman thought he and Berwick were partners in the endeavour. Berwick obviously saw things differently, and replaced Freeman in 2019 with an outsider.

Tragically, it seems like Freeman couldn’t cope with this humiliation, and essentially drank himself to death. Berwick didn’t even offer condolences to his family, because he’s an enormous piece of shit.

There are a number of aspects to the circumstances of his death that an honest examination of anarcho-capitalism would interrogate. He fell victim to a crypto scam shortly before becoming sick. What is the anarcho-capitalist perspective on this kind of crime? What does history tell us about private money and its effects on society? It’s glossed over as an unfortunate, unavoidable risk of “freedom”. He had no insurance, and his family had to rely on charity to pay his medical bills. What is the anarcho-capitalist perspective on the provision of healthcare? The question is not even asked.

Interestingly, John and Lily, the young couple who flee to Mexico after being arrested on drug trafficking charges, do form a critique of the hierarchical, commercial nature of the Anarchapulco conference, and start their own alternative conference called Anarchaforko. It’s a bit unclear to what extent this is organised at all, rather than just being people showing up and doing whatever, but it seems to work, and I would love to hear more about how this fits with their apparent objectivist leanings. But of course we get nothing like that.

Screenshot of Lily Forester post on Facebook: "This conference was supposed to be for ancaps by ancaps!"
Well that's yer problem right there...

Stateless in Mexico

Probably the funniest aspect of the documentary for me is that the participants seem to think that Mexico is “more anarchist” than the US, just based on the general vibes. Mexico, of course, does have a state, and I don’t have any reason to think that it is “less of” a state than that of the US.

I think this sense of “anarchiness” is probably the result of a few different factors. Many of the ancap immigrants are relatively wealthy, and apparently speak little or no Spanish. They are essentially just squatting on top of Mexican society, with no real connections to it, and using their wealth to extract what they need from it. The Mexican state protects them, as states generally protect the wealthy. They have little negative contact with it, and don’t hear about other people’s negative interactions with it as they would in the US, because they don’t speak the language. They’re just living in a little fantasy colonialist bubble.

Some members of the community are not so well off, and they do have negative experiences with the Mexican state, ranging from dealing with bureaucracy to being pursued, threatened and arrested by the police.

Although Lily Forester is a member of the latter group, it is her concluding statement on the existence of the state that best sums up the general attitude:

I just want to be left alone, like, a state can exist if it’s going to leave me alone.

On a personal level I can relate, especially given what she went through, but it’s a far cry from the moral clarity of this Fannie Lou Hamer quote:

Nobody’s free until everybody’s free.

Any meaningful conception of freedom can’t ignore that other people are subject to repression or exploitation, but that is exactly what these ancaps constantly do - the Mexican state is fine because I’m a rich foreigner and it leaves me alone, capitalist hierarchies are fine because I’m on top of them.

M’Aidez!

As I mentioned above, the primary participants in the documentary fall roughly into two groups - one comprised of relatively wealthy entrepreneurs like Berwick and the Freemans, and the other of struggling working class people like Lily Forester and John Galton, Jason Henza, and Paul Propert.

Though both groups are motivated by more-or-less the same ideology (Henza claims that he and Forester are not ancaps, but I don’t really see much distinction between anarcho-capitalism and voluntaryism or agorism myself), the difference between their circumstances is stark. The wealthy run their businesses from their lavish properties while the rest do odd jobs, deal drugs, and otherwise hustle to survive while living in marginal circumstances. As Thaddeus Russell notes:

It’s very easy to escape governments, banks and states if you’re already a Bitcoin millionaire. If you’re like John and Lily, you’ve got no resources, nothing, it’s hard, it turns out, and dangerous, in fact, to be an anarchist in Mexico.

The tension between these two groups is discussed at several points. The drug dealing and other illegal activity (like the theft of a Bitcoin ATM) are an inconvenience for the wealthy, and the unhinged Paul Propert is a potentially deadly threat to everybody, but they have no solutions. Everybody is just on their own to fend for themselves.

The documentary explores the backgrounds of Galton, Forester and Propert in some detail and finds a variety of broken homes, substance abuse problems, and other traumas. Like the characters themselves, it doesn’t seem to consider for a moment that the source of these traumas is the very social system that they cling to so tightly.

Nonetheless the clearest critique the documentary has for the anarcho-capitalist project is the lack of solidarity and support that those lacking means, and in dangerous circumstances, receive from the community, and what this would imply for an anarcho-capitalist society. Erika Harris, who ends up feeling alienated from the community and leaving Acapulco for Belize, makes this plea for mutual aid after John Galton is murdered, and Lily and Jason are on the run:

There’s an emergency among us, how will we respond? With shelter, with safehouses, with passage over borders if necessary … We need each other to get this done. I mean, we need each other just to move one inch forward.

Unfortunately, her plea seems to have fallen on deaf ears.

Jeff Berwick setting a printout of an American flag on fire with a 100 Bolivar note
Vuvuzela iPhone Death to America

Nimpressions

Python is my go-to language for personal projects, and even client projects when I can get away with it (though usually those are Windows based and within the .Net ecosystem, so I stick with C#). However, it often gives me pause to be using one of the slowest and least energy efficient languages available - I might do another post about that, but suffice it to say that it doesn’t align with my values to needlessly waste resources.

The ideal would be a language that’s as easy to write as Python, but as fast and energy efficient as C, or close to it. Well, recently I came across a language that claims to be both of those things: Nim.

I put together a simple command line application (named Luz) in Nim this week in order to try it out. Appropriately enough given my reason for trying Nim, it just shows the current electricity rate band, and optionally a chart, because where I live there are two peak periods during the day when it is better not to do anything power-intensive. I went on to make a start on a very simple Gemini server called Sparkle, which is still a WIP. Here are some of my thoughts on the experience as a mediocre developer with some Python and C# experience.

Luz in action
Going from bad to worse

choosenim

Nim has a tool for installing its toolchain and switching between different versions of the compiler, similar to pyenv. Unfortunately it didn’t work for me on Pop!_OS 22.04, because the version of libssl the OS ships is too new for it. I was able to install the Nim compiler manually easily enough by just downloading the tarball and copying the contents to an appropriate location, and then adding the bin directory to my path. There was an install script in the tarball but it didn’t copy everything for some reason.
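
For reference, the manual install amounted to something like this - the version number and target directory are illustrative, not gospel:

# unpack the release tarball from nim-lang.org
tar xf nim-1.6.8-linux_x64.tar.xz -C ~/.local/share/

# make the toolchain available (add to your shell profile to make it stick)
export PATH="$HOME/.local/share/nim-1.6.8/bin:$PATH"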

Not a great start, and I’m not sure what I’m missing out on by not using choosenim, but I can figure that out later if I continue using the language.

Typing

Static typing is something I’m well used to from C# of course, but I don’t engage with Python’s type hinting at all. There is type inference in many situations, and many familiar collection types such as sets, tables, sequences and tuples, which are as convenient to instantiate as their Python equivalents, though of course you can’t mix unrelated types within them (aside from tuples, and why would you do that anyway, you monster). Mostly it is just convenient to know at compile time where there are type mismatches, rather than hearing about them at runtime or just getting weird behaviour.
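
For illustration, instantiating those types looks roughly like this (the values are made up):

import std/[sets, tables]

let langs = @["Python", "C#", "Nim"]              # seq[string]
let rates = {"day": 0.35, "peak": 0.47}.toTable   # Table[string, float]
let years = toHashSet([2022, 2023])               # HashSet[int]
let band = (name: "peak", index: 2)               # tuples can mix types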

Nim is only very minimally object-oriented. There is inheritance, but not multiple-inheritance, mixins, or anything resembling the interfaces or traits of other languages. This is probably one of the most concerning aspects of the language for me. It seems like it will inevitably lead to repeated code at some point if procedures can’t accept abstract interfaces as input instead of concrete types.

On the other hand I try to steer away from an object-oriented style in Python unless it really makes sense for the problem I’m working on. In Luz, the classes I created were little more than structs, with no inheritance required, and that’s perfectly sufficient for many problems.

There are also apparently libraries that create a means to specify interfaces using meta-programming, but that’s not something I’ve explored yet.

import std/[options, tables, times]

type
  Holiday = ref object
    date: DateTime
    localName: string
    name: string
    countryCode: string
    fixed: bool
    global: bool
    counties: Option[seq[string]]
    launchYear: Option[int]


var holidays = initTable[int, seq[Holiday]]()


proc isHoliday*(d: DateTime): bool =
  result = false
  # This will occur if API key was not provided
  if not holidays.hasKey(d.year):
    return result
  for h in holidays[d.year]:
    # global indicates that the holiday applies to the whole country
    if h.global:
      if h.date.yearday == d.yearday:
        result = true
        break

Uniform Function Call Syntax

This is really neat - any procedure or function can be called as if it is a method of the type of its first parameter.

import std/[asyncdispatch, asyncnet, strformat]

# StatusCode is an enum defined elsewhere in Sparkle

proc sendErrorResponse(
  requestSocket: AsyncSocket,
  code: StatusCode,
  meta: string
) {.async.} =
  await requestSocket.send(&"{ord(code)} {meta}\r\L")


proc processRequest(requestSocket: AsyncSocket) {.async.} =
  ...
  # These calls are equivalent
  await requestSocket.sendErrorResponse(
    StatusCode.notFound,
    "Not Found"
  )
  await sendErrorResponse(
    requestSocket,
    StatusCode.notFound,
    "Not Found"
  )

This means that any type can be “extended” in a sense just by writing procedures with that type as the first parameter, no need for sub-classing or a special extension method syntax.
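
As a trivial illustration (my own, not from Luz or Sparkle), this effectively adds a “method” to the built-in string type:

import std/strutils

# any proc whose first parameter is a string can be called like a string method
proc shouted(s: string): string =
  s.toUpperAscii() & "!!!"

echo "hello".shouted()   # HELLO!!!
echo shouted("hello")    # exactly equivalent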

Blocks

One neat little feature is that you can open a new code block anywhere, with or without a name, and as well as being visually separated from the code around it, it will have its own scope. A break statement will break out of that block, but not the containing one.

I didn’t find much use for this in either of the projects I’ve worked on so far, but it’s definitely something I can see being useful for longer procedures and certain control-flow situations.
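
Here’s a contrived sketch of what that looks like, using a named block to escape a nested loop:

block search:
  for i in 1 .. 10:
    for j in 1 .. 10:
      if i * j == 42:
        echo i, " x ", j
        break search   # exits the whole named block, not just the inner loop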

Closures

Nim supports passing around references to procedures, which allows for a number of neat constructs, including closures. The below procedure creates a closure that animates a spinner when called in a loop while waiting for an IO operation to conclude. It contains everything it needs, including a constant.

import std/[strformat, terminal, times]

proc getDisplayProgressClosure(): proc() =
  const phases = ["🮪", "🮫", "🮭", "🮬"]
  var lastTime = now()
  var phase = 0
  var initial = true

  proc displayProgress() =
    let elapsed = now() - lastTime
    if elapsed.inMilliseconds > 100 or initial:
      lastTime = now()
      if not initial:
        erasePrevious  # helper defined elsewhere in Luz (erases the previously printed line)
      initial = false
      styledEcho(
        fgGreen,
        &"{phases[phase]}",
        fgCyan,
        " Retrieving holidays..."
      )
      inc(phase)
      if phase > phases.high: phase = 0

  result = displayProgress

Templates & Compile Time Execution

One of the most exciting features of Nim, for me, is the ability to execute code at compile time, and otherwise manipulate the final state of the code.

For example to embed a file in a binary in C# you have to set a property against the file in the IDE (or maybe in the project file) to make it an embedded resource, and then do some reflection to pull it back out at runtime. In Nim, you can just call readFile and assign the result to a constant.

const DEFAULT_BANDS = readFile "./config/bands.json"
const DEFAULT_CONFIG = readFile "./config/luz.toml"

There is also a compile-time branching statement, when. This is similar to the pre-processor #if in C#, or #ifdef in C, but it fits more naturally with the rest of the code.
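
For example (a generic sketch, not from Luz):

when defined(windows):
  const configRoot = "AppData"
else:
  const configRoot = ".config"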

Templates allow you to insert specified code in other parts of the codebase, with substitutions, before compilation. One use for this is as an alternative to short procedures, so the code gets inlined, saving a function call.
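
A minimal made-up example - the call site is replaced by the template body before compilation, so there’s no call overhead:

template square(x: untyped): untyped = x * x

echo square(7)   # compiles as `echo 7 * 7`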

I feel like I’m only at the start of getting my head around this feature. I thought it might be a good way to output variations of a procedure for operating on different types, but I’m not sure the result is readable or concise enough to be worthwhile:

template createGetSetting(
  valueType: untyped,
  argValueTypeGet: untyped,
  envValueTypeGet: untyped,
  confValueTypeGet: untyped
) =
  proc getSetting(
    args: Table[string, Value],
    arg: string,
    conf: TomlValueRef,
    confSection: string,
    confKey: string,
    env: string,
    default: valueType
  ): (valueType, ConfigVariableSource) =

    result = (default, ConfigVariableSource.Default)
    if arg in args:
      if args[arg].kind != vkNone:
        return (
          argValueTypeGet(args[arg]),
          ConfigVariableSource.CommandLine
        )

    let envStr = getEnv(env, "")
    if envStr != "":
      return (
        envValueTypeGet(envStr),
        ConfigVariableSource.Environment
      )

    result = (
      conf[confSection][confKey].confValueTypeGet(),
      ConfigVariableSource.ConfigFile
    )


proc splitOnComma(val: string): seq[string] =
  result = val.split(',')


proc getStringSequence(value: TomlValueRef): seq[string] =
  let values = value.getElems()
  result = @[]
  for v in values:
    result.add v.getStr()


proc parseIntArg(val: Value): int =
  result = parseInt($val)


createGetSetting(string, `$`, `$`, getStr)
createGetSetting(int, parseIntArg, parseInt, getInt)
createGetSetting(bool, toBool, parseBool, getBool)
createGetSetting(seq[string], `@`, splitOnComma, getStringSequence)

The result of the above code is four different procedures called getSetting which look for a setting in the command line arguments, an environment variable, or a config file, and return it as the expected type.

Even though the above code is a mess and I’m probably going to rethink it, I will say this - writing the template was surprisingly intuitive.

Nim’s meta-programming features become even more powerful with macros and pragmas, but I haven’t really gotten into them yet so I can’t say much about them.

Standard Library

There’s some pretty great stuff in the standard library, including very easy to use asynchronous http and networking libraries, and parsers for a variety of text-based file formats. Everything seems to be appropriately cross-platform as well. I haven’t got much else to say about it!
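
As a taster, fetching a page with std/httpclient is about this terse (synchronous variant shown, URL is just an example):

import std/httpclient

let client = newHttpClient()
echo client.getContent("https://example.com")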

Python Modules

Something I’m always looking out for in a language is the ability to write Python modules in it. There seem to be a couple of Nim libraries for doing this, both based on an underlying nimpy library. They both look incredibly easy to use, but notably the support for exporting Python classes in nimpy seems to be experimental. It is also a bit unclear how it deals with Python objects as parameters of procedures rather than basic types.
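
Going by nimpy’s documentation, exposing a procedure is as simple as tagging it with a pragma - this is an untested sketch:

# mymodule.nim - compile with: nim c --app:lib --out:mymodule.so mymodule.nim
import nimpy

proc greet(name: string): string {.exportpy.} =
  "Hello from Nim, " & name & "!"

From Python it should then just be a matter of import mymodule; mymodule.greet("world").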

My only point of comparison is Cython, which is a really cool project that compiles Python code to C, and includes an optional extended syntax for optimisation, which is essentially writing C code but with a Python-like syntax. As cool as this is, I think the breadth of options is confusing, and when you get down to writing optimised routines things start to break in a very unhelpful C-like way - i.e. successful compiles and unceremonious runtime segfaults.

I much prefer the idea of writing modules in a language that is its own thing, and with Nim being as easy to write as it is, I’m excited to try it for this purpose.

Conclusion

I didn’t perform even rudimentary benchmarks, but I think it’s safe to assume that anything written in Nim will be faster than the equivalent Python code. Luz runs instantaneously, and Sparkle responds to requests almost instantaneously as well. Neither of them are doing anything that I wouldn’t expect Python to do at an acceptable speed under the same circumstances, however.

One thing I have noticed about Nim benchmarks is that they are generally performed with the -d:danger compiler flag, which disables all runtime checks. This is done in the name of “fairness” in comparison with C, but it doesn’t really seem fair to me if the norm for the language in production is -d:release.
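
For context, the difference is just a flag at compile time (file name illustrative):

# optimised build; runtime checks (bounds, overflow, nil) stay on
nim c -d:release luz.nim

# optimised build with all runtime checks disabled
nim c -d:danger luz.nim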

I definitely found Nim very natural to develop in. Unlike Rust, which I also tried (and failed) to learn recently, most of the concepts were already familiar to me from other languages, and the syntax was also very familiar. I often found myself writing correct Nim code the first time, and where I made mistakes they were flagged during compilation in a way that was easy to understand. Runtime errors are also handled relatively gracefully - no segfaults even though Nim compiles to C, like Cython does.

Overall, a very interesting language that I look forward to doing more with.

Sparkle in action
It's called Sparkle because it's barely there...

Recent Movie Watchings

I’ve watched a lot of movies recently that I have a bit to say about, but not enough for a big standalone dissection like I gave Wrong Turn and Ready Player One, so I’m just throwing them all together here.

Kimi

Kimi is a 2022 psychological thriller about an agoraphobic woman, Angela, who works from home for a smart speaker company - creators of the eponymous “Kimi” - listening to supposedly anonymised audio clips that the speaker’s AI couldn’t understand. On one of the clips she hears what she believes to be an assault in the background, and when her employers are reluctant to investigate she has to (gulp)… leave her apartment!

Screenshot from Kimi, of Angela out and about and wearing a mask
First movie I've seen set during the pandemic!

The main thing that I really liked about this movie was the portrayal of her struggle to leave her apartment, and the paradoxical sense of claustrophobia when she does. I felt much the same at one point in my life and it rang true to me.

On the other hand, when it gets down to thriller time, the action is quite repetitive and pointless. She gets captured, escapes, gets captured again almost straight away, escapes again right outside her building, and then there is somebody waiting for her in her apartment anyway. Boring. It gets better from there, but too late.

One thing I really didn’t like was the role of the smart speaker, Kimi. Although the plot early on does highlight a lack of privacy and data protection when Angela is able to find out whose speaker recorded the clips, and obtain further recordings, this is undermined by the plot being fundamentally about solving a murder thanks to the speaker’s ubiquitous surveillance. It then takes on a heroic role at the climax when Angela is able to outwit several hired goons by ordering it to do various things like cut the lights and play music and so on. Overall, I would say the movie comes down on the side of being pro corporate surveillance.

Mary Shelley

This 2017 historical drama is about the life of Mary Shelley and the sources of inspiration for her novel Frankenstein. Turns out men are the real monster??

Screenshot from Mary Shelley, of Mary (played by Elle Fanning) in a bonnet
[Stares motherfuckerly]

I enjoyed this one a lot. I read up about her a bit after watching it and it seems like it was a bit loose with some of the details of her life (like how many children she had, and when they died), but what am I, a Mary Shelley scholar?

Like Frankenstein, it explores the theme of men’s irresponsibility towards the procreative act, and neglect of their progeny, but more explicitly, and as such it’s a great complement to the book. Interestingly, the male characters don’t really seem to get it, and focus on the idea that Frankenstein is about Mary alone feeling neglected, rather than a more general lack of responsibility on their part. She doesn’t correct them.

The Death of Stalin

The Death of Stalin is a political black comedy from 2017 about the aftermath of Stalin’s death. I found it pretty funny, but it was also deeply weird to hear a bunch of undisguised American and British accents from characters in a movie set in the Soviet Union. Probably it would have been worse if they put on stereotypical Russian accents, of course, but Cockney Stalin?

Screenshot from The Death of Stalin, of Stalin laughing right before he has a stroke
Cockney Stalin?? Ridiculous!

As usual, I would probably prefer to see something from post-Soviet creators examining their own history, through a satirical lens or otherwise.

The Batman

The Batman is the latest in the saga of the Bat-men, this time starring Bobby Battinson. I think it might be my new favourite Batman movie, though I didn’t see the Ben Affleck one so I am not qualified to declare it the objectively best Batman movie.

The movie leans heavily into noir and gothic aesthetics, and imagines Bruce Wayne as a moody orphan who is uninterested in much outside of being a bat - including the effect his inherited wealth is having on society. Having become, under his father’s watch, a sort of slush fund for corruption, Bruce Wayne’s wealth is the underlying cause of much of the violence that Batman seeks to combat alongside his friends in the police.

Screenshot from The Batman, of emo Bruce Wayne
Emomelon Wayne

His main adversary is the Riddler, portrayed here as a vigilante serial killer with shades of Seven’s John Doe and the Zodiac killer. While Batman is beating up common criminals and thugs, the Riddler targets the powerful and corrupt, and as such it’s hard to identify the villainy in his actions for much of the movie (aside from the fact that he’s, y’know, doing murders and all that). The general public certainly see him as a hero. Meanwhile, he sees himself and Batman as partners, playing off each other in a common crusade to clean up the city (and who else could, but the only two men smart enough to appreciate a good riddle). It isn’t until his plan to “wipe the scum off the streets” by flooding the city is revealed that we see his contempt for the innocent as well as the guilty.

Unfortunately the overall politics of the movie could probably be summed up as “we just need more good billionaires”. Bruce comes to realise that his vast wealth comes with responsibilities, and it seems like he’s going to do some philanthropy alongside his nightly costumed kickpunching. I guess we’ll find out in the sequel if enlightened liberal capitalism is the solution to capitalism’s problems.

I didn’t even realise that Colin Farrell was in this until I saw his name in the credits. He’s completely unrecognisable as the Penguin.

Choose or Die

Choose or Die is a 2022 horror thriller about a cursed retro video game. This seemed like a fun premise, but unfortunately the movie as a whole was fucking crap.

Screenshot from Choose or Die, of Kayla and Isaac standing in front of Isaac's car, looking concerned
There was a pixel art sequence leading up to this scene, out of nowhere

My main fault with it is that the game (named CURS>R) has apparently boundless powers to reshape reality to its whims, and that the choices it presents players with are seemingly arbitrary, and differ wildly in terms of their consequences. For example, the first choice the main character, Kayla, is given is between coffee and cake in a diner, with apparently no negative consequences. Another character’s first choice is between eating a computer or eating their own arm - both potentially fatal, one would think. For one of the “levels” of the game, Kayla is asked to choose between a blue door or a red one, with no other information. It reminded me of the first text-based video game I wrote when I was 7, which was just a collection of random scenarios where every path ultimately ended with the player being eaten by a tiger.

The climax sees Kayla facing off with a previous player (who we are introduced to in the opening scene, but learn very little about). At this point a moral is shoehorned in about white male entitlement in videogaming - which would be a fine theme if it wasn’t introduced so late and handled so clumsily.

I did like the grungy ’80s aesthetic, and that it seemed almost self-aware about how played out that kind of nostalgia is at this point. Also Asa Butterfield is great as a basement-dwelling retro video gaming obsessive. I do love me some Asa Butterfield…

Screenshot from Choose or Die, of Isaac (played by Asa Butterfield)
Buttery good

Sim-Universe

I just done watched Thought Slime’s video about the simulation argument (actually many months ago by the time I’m publishing this), and it’s a topic about which I’ve had some thoughts myself, so I thought maybe it was time to write some of them down.

Like comrade Slime, I think that it’s an interesting thought experiment, but a lot of what is said about it is poorly thought through at best. It’s particularly frustrating when Nick Bostrom’s argument is held up as “proof” of the “certainty” that we are living in a simulation, alongside arguments and assertions that completely contradict it. The argument itself doesn’t claim to be proof of any such thing - it presents three possibilities based on premises about which we have almost no information.

Why would we simulate?

One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears.

This is Nick’s description of what futuristic super-computing civilisations would do with their computational power, but he doesn’t really get into why they might do this. Into this absence people pour all sorts of ideas. A common one is that we are equivalent to NPCs in a video-game. A related one is that we exist so that the simulators can pop in and out of our minds and ride us around for some reason - historical educational purposes perhaps, or the thrill of slumming it in the stupid-ages.

These are interesting concepts for science-fiction, but I don’t find them compelling as claims about the reality of our world. Video-games are indeed able to present more visually convincing realities than in the past, but they don’t do that by simulating entire physical universes in minute detail. They might run physics simulations for a variety of things in the vicinity of the player, but beyond the bare minimum necessary to convince, their worlds are hollow, simplified facades, and anything not relevant to the context of the current gameplay simply doesn’t exist. Similarly, what would it add to a player’s experience to have NPCs living lives outside of that context and having inner lives?

Nick Bostrom actually gets into some of the mechanisms that could be used to reduce the computational requirements of a simulation:

If the environment is included in the simulation, this will require additional computing power – how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible… But in order to get a realistic simulation of human experience, much less is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities.

Distant astronomical objects can have highly compressed representations: verisimilitude need extend to the narrow band of properties that we can observe from our planet or solar system spacecraft. On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated, but microscopic phenomena could likely be filled in ad hoc. What you see through an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world.

The implicit assumption here is that the simulation is being made convincing for the benefit of the simulated minds (i.e. us), which always run at full resolution. Video-games are not run for the entertainment of NPCs however. If simulations are being run for the amusement of posthuman “players”, and they are interested in reducing the computational requirements, as Nick assumes, why would they not prune the most computationally expensive component - simulated human minds that are not immediately relevant to the player’s current experience? Would they even need to simulate fully conscious humans at all to provide convincing NPCs to players?

Nick does suggest something akin to such pruning in his original argument:

In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or “shadow-people” – humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious.

However, it is again expressed as if the purpose of the simulation is solely to fool its unwitting inhabitant(s), with no proposed utility for the creators of the simulation.

I submit to you that if you are experiencing a private and mundane moment right now, and are conscious of it, you are probably not a character simulated on some posthuman equivalent of a PlayStation.

A more reasonable suggestion, to my mind, is that we would run such simulations in order to study our own civilisation at different stages of development, or to see how civilisations might develop under different circumstances. Would these simulations even require fully conscious simulated participants in order to be useful? Would they need to simulate the full lives of everybody who has ever lived? Or would they drastically reduce the number of minds needing to be simulated by cutting out all the boring parts? Would there really even be anything to be learned from such simulations?

This lack of clarity about why a posthuman civilisation would run ancestor simulations is at the heart of a lot of my issues with the argument. Without that understanding, we can’t really say whether such a civilisation would run them or not, or how many, or what their parameters would be. It’s just sort of assumed that they probably will because it would be a cool thing to be able to do, and some people say they would do it right now if it were possible. But that’s an easy thing to say when it’s impossible, and you don’t have to worry about the ethical concerns or the resources involved.

Another type of simulation we might run is of universes with different physical laws, but as the quotes above about simplifying the simulations suggest, these would have a different set of priorities, and wouldn’t really qualify as “ancestor simulations”. Whether they would even result in conscious entities would probably depend on the parameters of the simulation - they wouldn’t be the goal. If we take seriously the suggestion that we live in this kind of simulation, we can’t even assume that the simulators are anything like us, not even in their remote past, or that the simulating universe resembles ours in any way - so how can we possibly speculate about their motives, or what is computationally possible in their universe?

Simulations Within Simulations

One of the silliest suggestions that some people seem to take seriously is that the posthuman civilisation in the base reality would run simulations beyond the point where the simulated civilisations would be running their own simulations, with those simulations running further simulations, and so on.

Nick likens this scenario to running code in a virtual machine:

It may be possible for simulated civilizations to become posthuman. They may then run their own ancestor-simulations on powerful computers they build in their simulated universe. Such computers would be “virtual machines”, a familiar concept in computer science. (Java script web-applets, for instance, run on a virtual machine – a simulated computer – inside your desktop.)

His example is terrible, but the basic assertion is correct: a computer can simulate another computer in various ways, with varying levels of overhead. In the best case, code running in the virtual machine runs directly on the host hardware with no translation necessary. Obviously, this doesn’t add any processing power - software running in the host has to share its resources with the software running in the virtual machine.

Now, let’s think through this scenario a little bit.

Say you are a posthuman civilisation that has converted an entire planet into a giant computer. All the computation you decide to do is running on this computer. For some reason, you decide to run an ancestor simulation of your quite recent past, such that the simulated universe is on the cusp of achieving their own planet-computer. All of the computation of that universe would actually be running on your computer, alongside all the existing computation of your civilisation, and all the other work required for the simulation, all the fake stars and physics and advanced posthuman minds. Then you let them run their own simulation of their own recent past - now you have to support the load of three civilisations with planet-sized computers on only one actual physical planet-sized computer. And then four, and then five, and on and on.

A little while ago we were talking about cutting corners to save resources and focus on running our ancestors’ minds, and now here we are supporting an infinite regress of posthuman computers for no obvious purpose. There wouldn’t be any shortcuts here - if a computer 10 levels down wants to compute a hash or calculate millions of primes, you would actually have to do the work or they would know.

There are two possible workarounds/objections to this that I can think of:

  1. Simulations could be run slower than the host reality to make room for them (a quick back-of-the-envelope sum after this list shows how the numbers could work out). Would a time-dilated simulation be useful? I guess that depends on what you’re running it for!
  2. Posthuman level simulations would only be allowed to develop once the host reality had converted enough matter to pure computer that supporting them was not a burden. In other words, the simulations would always have to lag behind by some significant amount.
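
To see how the numbers could work out for the first option, here’s a rough sum of my own (not Bostrom’s): if every simulation runs at a fraction r < 1 of its host’s speed, then the simulation k levels down costs about r^k of the base computer’s capacity, and even an infinite tower needs only

1 + r + r² + r³ + … = 1/(1 − r)

times the compute of a single civilisation - at r = ½, the whole endless stack costs just twice what the base level’s own computation does.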

Fair enough, I guess that would do it - if keeping the simulations going is really important, you might always dedicate a proportional amount of your ever increasing computational resources to them. I do come back to the why though - would a simulation of a posthuman-level civilisation be a fun game for posthumans? Would there be anything to learn from it that you didn’t document when you were going through that phase?

One consideration that counts against the multi-level hypothesis is that the computational cost for the basement-level simulators would be very great. Simulating even a single posthuman civilization might be prohibitively expensive. If so, then we should expect our simulation to be terminated when we are about to become posthuman.

Oh, well. Better to return to monke then, lest techno-god smite us for our arrogance.

If God Did Not Exist…

One possibility for why a posthuman civilization might choose not to run ancestor simulations is that doing so would raise some thorny ethical concerns. Take it away Nick:

One can speculate that advanced civilizations all develop along a trajectory that leads to the recognition of an ethical prohibition against running ancestor-simulations because of the suffering that is inflicted on the inhabitants of the simulation.

Yes I think that might be likely… wait, what are you…

However, from our present point of view, it is not clear that creating a human race is immoral.

Ooof. It’s not just creating a human race that we’re talking about here, it’s creating a human race and trapping them in a false reality for our own edification or amusement, and in some hypothetical scenarios, instantly terminating billions of them when they reach a certain level of development. I think most people today would baulk at the prospect of treating even a single person like that, much less generation after generation of unwitting playthings.

Even worse are the moral implications for us, today, of taking some of Nick’s proposals seriously. In relation to the idea that many minds might be simulated only partially some amount of the time in order to save resources (discussed above), he suggests that it would also be a way for the simulators to avoid inflicting suffering:

There is also the possibility of simulators abridging certain parts of the mental lives of simulated beings and giving them false memories of the sort of experiences that they would typically have had during the omitted interval. If so, one can consider the following (farfetched) solution to the problem of evil: that there is no suffering in the world and all memories of suffering are illusions. Of course, this hypothesis can be seriously entertained only at those times when you are not currently suffering.

You weren’t traumatised, you see, you just have a false memory of trauma. And no need to worry about the consequences if you feel compelled to abuse, murder or rape: those are just zombie shadow-people you’re hurting, and they don’t really feel pain! Nothing is real and nothing matters!

But wait! Maybe our simulators will take it upon themselves to reward or punish us for our behaviour in their simulation (without informing us that they will do so, or on what basis), and dedicate ludicrous amounts of resources to simulating all the minds they have ever simulated, indefinitely, in an afterlife:

Further rumination on these themes could climax in a naturalistic theogony that would study the structure of this hierarchy, and the constraints imposed on its inhabitants by the possibility that their actions on their own level may affect the treatment they receive from dwellers of deeper levels. For example, if nobody can be sure that they are at the basement-level, then everybody would have to consider the possibility that their actions will be rewarded or punished, based perhaps on moral criteria, by their simulators. An afterlife would be a real possibility.

It genuinely disturbs me that there are people who are only good because they believe there is some force outside the universe that will reward them for it, or punish them for misbehaviour - and, even worse, people who would take on the role of cosmic arbiter themselves if given the chance.

Postsingular Posthumans

Inevitably, discussions about the simulation argument are little more than speculation based on almost no information. The kind of civilisation that would be capable of running such simulations would be one that has passed through a technological singularity - a point at which technological progress becomes so rapid that its path is impossible to predict. In fact the simulation argument requires that a civilisation has achieved the ability to simulate a human-equivalent mind - an Artificial General Intelligence - widely considered to be the invention that will instigate the singularity, since such an intelligence would probably be able to improve itself at an exponential rate.

We have zero examples of a post-singularity, posthuman civilisation, and only one example of a human-level civilisation, on which to base our speculations. What will super-intelligent posthumans value? Almost by definition such a civilisation would be beyond our comprehension.

The simulation argument seems mostly, to me, to be an attempt to imagine God in a way that is appealing to 21st century techies. I’m inclined to think that such a god, like all others, is not just unknowable, but non-existent.