Nimpressions

Python is my go-to language for personal projects, and even client projects when I can get away with it (though usually those are Windows-based and within the .NET ecosystem, so I stick with C#). However, it often gives me pause to be using one of the slowest and least energy-efficient languages available - I might do another post about that, but suffice it to say that it doesn’t align with my values to needlessly waste resources.

The ideal would be a language that’s as easy to write as Python, but as fast and energy efficient as C, or close to it. Well, recently I came across a language that claims to be both of those things: Nim.

I put together a simple command line application (named Luz) in Nim this week in order to try it out. Appropriately enough given my reason for trying Nim, it just shows the current electricity rate band, and optionally a chart, because where I live there are two peak periods during the day when it is better not to do anything power-intensive. I went on to make a start on a very simple Gemini server called Sparkle, which is still a WIP. Here are some of my thoughts on the experience as a mediocre developer with some Python and C# experience.

Luz in action

choosenim

Nim has a tool for installing its toolchain and switching between different versions of the compiler, similar to pyenv. Unfortunately it didn’t work for me on Pop!_OS 22.04 due to it having too new a version of libssl. I was able to install the Nim compiler manually easily enough by just downloading the tarball and copying the contents to an appropriate location, and then adding the bin directory to my path. There was an install script in the tarball but it didn’t copy everything for some reason.

Not a great start, and I’m not sure what I’m missing out on by not using choosenim, but I can figure that out later if I continue using the language.

Typing

Static typing is something I’m well used to from C# of course, but I don’t engage with Python’s type hinting at all. There is type inference in many situations, and many familiar collection types such as sets, tables, sequences and tuples, which are as convenient to instantiate as their Python equivalents, though of course you can’t mix unrelated types within them (aside from tuples; and why would you do that anyway, you monster). Mostly it is just convenient to know at compile time where there are type mismatches, rather than hearing about them at runtime or just getting weird behaviour.
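For instance (a contrived sketch of my own, not code from Luz), the literal constructors feel much like Python’s:

```nim
# Contrived example: literal constructors with inferred types.
import std/[sets, tables]

let numbers = @[1, 2, 3]                    # seq[int]
let flags = toHashSet(["on", "off"])        # HashSet[string]
let ages = {"ada": 36, "alan": 41}.toTable  # Table[string, int]
let person = (name: "Ada", born: 1815)      # tuples can mix types
```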

Nim is only very minimally object-oriented. There is inheritance, but no multiple inheritance, mixins, or anything resembling the interfaces or traits of other languages. This is probably one of the most concerning aspects of the language for me. It seems like it will inevitably lead to repeated code at some point if procedures can’t accept abstract interfaces as input instead of concrete types.

On the other hand I try to steer away from an object-oriented style in Python unless it really makes sense for the problem I’m working on. In Luz, the classes I created were little more than structs, with no inheritance required, and that’s perfectly sufficient for many problems.

There are also apparently libraries that create a means to specify interfaces using meta-programming, but that’s not something I’ve explored yet.

type
  Holiday = ref object
    date: DateTime
    localName: string
    name: string
    countryCode: string
    fixed: bool
    global: bool
    counties: Option[seq[string]]
    launchYear: Option[int]


var holidays = initTable[int, seq[Holiday]]()


proc isHoliday*(d: DateTime): bool =
  result = false
  # This will occur if API key was not provided
  if not holidays.hasKey(d.year):
    return result
  for y, h in holidays[d.year]:
    # global indicates that the holiday applies to the whole country
    if h.global:
      if h.date.yearday == d.yearday:
        result = true
        break

Uniform Function Call Syntax

This is really neat - any procedure or function can be called as if it is a method of the type of its first parameter.

proc sendErrorResponse(
  requestSocket: AsyncSocket,
  code: StatusCode,
  meta: string
) {.async.} =
  await requestSocket.send(&"{ord(code)} {meta}\r\L")


proc processRequest(requestSocket: AsyncSocket) {.async.} =
  ...
  # These calls are equivalent
  await requestSocket.sendErrorResponse(
    StatusCode.notFound,
    "Not Found"
  )
  await sendErrorResponse(
    requestSocket,
    StatusCode.notFound,
    "Not Found"
  )

This means that any type can be “extended” in a sense just by writing procedures with that type as the first parameter, no need for sub-classing or a special extension method syntax.

Blocks

One neat little feature is that you can open a new code block anywhere, with or without a name, and as well as being visually separated from the surrounding code, it will have its own scope. A break statement will break out of that block, but not the containing one.

I didn’t find much use for this in either of the projects I’ve worked on so far, but it’s definitely something I can see being useful for longer procedures and certain control-flow situations.
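A contrived sketch of my own showing how a named block behaves:

```nim
block search:
  for row in 0 ..< 3:
    for col in 0 ..< 3:
      if row == 1 and col == 2:
        echo "found at ", row, ",", col
        break search  # exits the whole named block, not just the inner loop

# An anonymous block just introduces a fresh scope:
block:
  let temp = 42  # not visible once the block ends
  echo temp
```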

Closures

Nim supports passing around references to procedures, which allows for a number of neat constructs, including closures. The below procedure creates a closure that animates a spinner when called in a loop while waiting for an IO operation to conclude. It contains everything it needs, including a constant.

proc getDisplayProgressClosure(): proc() =
  const phases = ["🮪", "🮫", "🮭", "🮬"]
  var lastTime = now()
  var phase = 0
  var initial = true

  proc displayProgress() =
    let elapsed = now() - lastTime
    if elapsed.inMilliseconds > 100 or initial:
      lastTime = now()
      if not initial:
        erasePrevious
      initial = false
      styledEcho(
        fgGreen,
        &"{phases[phase]}",
        fgCyan,
        " Retrieving holidays..."
      )
      inc(phase)
      if phase > phases.high: phase = 0

  result = displayProgress

Templates & Compile Time Execution

One of the most exciting features of Nim, for me, is the ability to execute code at compile time, and otherwise manipulate the final state of the code.

For example, to embed a file in a binary in C# you have to set a property against the file in the IDE (or maybe in the project file) to make it an embedded resource, and then do some reflection to pull it back out at runtime. In Nim, you can just call readFile and assign the result to a constant.

const DEFAULT_BANDS = readFile "./config/bands.json"
const DEFAULT_CONFIG = readFile "./config/luz.toml"

There is also a compile-time branching statement, when. This is similar to the pre-processor #if in C#, or #ifdef in C, but it fits more naturally with the rest of the code.
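A small sketch of my own (the paths are made up for illustration):

```nim
# Only one branch survives into the compiled binary.
when defined(windows):
  const ConfigPath = "luz.toml"           # hypothetical, resolved elsewhere
elif defined(posix):
  const ConfigPath = "/etc/luz/luz.toml"  # hypothetical path
else:
  {.error: "unsupported platform".}
```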

Templates allow you to insert specified code in other parts of the codebase, with substitutions, before compilation. One use for this is as an alternative to short procedures, so the code gets inlined, saving a function call.
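A trivial sketch of a template standing in for a short procedure:

```nim
# The body is substituted at each call site before compilation,
# so there is no function call at runtime.
template square(x: untyped): untyped = x * x

echo square(7)  # expands to echo 7 * 7
```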

I feel like I’m only at the start of getting my head around this feature. I thought it might be a good way to output variations of a procedure for operating on different types, but I’m not sure the result is readable or concise enough to be worthwhile:

template createGetSetting(
  valueType: untyped,
  argValueTypeGet: untyped,
  envValueTypeGet: untyped,
  confValueTypeGet: untyped
) =
  proc getSetting(
    args: Table[string, Value],
    arg: string,
    conf: TomlValueRef,
    confSection: string,
    confKey: string,
    env: string,
    default: valueType
  ): (valueType, ConfigVariableSource) =

    result = (default, ConfigVariableSource.Default)
    if arg in args:
      if args[arg].kind != vkNone:
        return (
          argValueTypeGet(args[arg]),
          ConfigVariableSource.CommandLine
        )

    let envStr = getEnv(env, "")
    if envStr != "":
      return (
        envValueTypeGet(envStr),
        ConfigVariableSource.Environment
      )

    result = (
      conf[confSection][confKey].confValueTypeGet(),
      ConfigVariableSource.ConfigFile
    )


proc splitOnComma(val: string): seq[string] =
  result = val.split(',')


proc getStringSequence(value: TomlValueRef): seq[string] =
  let values = value.getElems()
  result = @[]
  for v in values:
    result.add v.getStr()


proc parseIntArg(val: Value): int =
  result = parseInt($val)


createGetSetting(string, `$`, `$`, getStr)
createGetSetting(int, parseIntArg, parseInt, getInt)
createGetSetting(bool, toBool, parseBool, getBool)
createGetSetting(seq[string], `@`, splitOnComma, getStringSequence)

The result of the above code is four different procedures called getSetting which look for a setting in the command line arguments, an environment variable, or a config file, and return it as the expected type.

Even though the above code is a mess and I’m probably going to rethink it, I will say this - writing the template was surprisingly intuitive.

Nim’s meta-programming features become even more powerful with macros and pragmas, but I haven’t really gotten into them yet so I can’t say much about them.

Standard Library

There’s some pretty great stuff in the standard library, including very easy to use asynchronous http and networking libraries, and parsers for a variety of text-based file formats. Everything seems to be appropriately cross-platform as well. I haven’t got much else to say about it!

Python Modules

Something I’m always looking out for in a language is the ability to write Python modules in it. There seem to be a couple of Nim libraries for doing this, both based on an underlying nimpy library. They both look incredibly easy to use, but notably the support for exporting Python classes in nimpy seems to be experimental. It is also a bit unclear how it deals with Python objects as parameters of procedures rather than basic types.
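Based on the nimpy documentation, exporting a simple procedure looks roughly like this - I haven’t built this myself, so treat it as a sketch:

```nim
# greeter.nim - built as a shared library, e.g.:
#   nim c --app:lib --out:greeter.so greeter.nim
import nimpy

proc greet(name: string): string {.exportpy.} =
  "Hello from Nim, " & name
```

The resulting greeter.so should then be importable from Python as an ordinary module (import greeter; greeter.greet("world")).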

My only point of comparison is Cython, which is a really cool project that compiles Python code to C, and includes an optional extended syntax for optimisation, which is essentially writing C code but with a Python-like syntax. As cool as this is, I think the breadth of options is confusing, and when you get down to writing optimised routines things start to break in a very unhelpful, C-like way - i.e. successful compiles and unceremonious runtime segfaults.

I much prefer the idea of writing modules in a language that is its own thing, and with Nim being as easy to write as it is, I’m excited to try it for this purpose.

Conclusion

I didn’t perform even rudimentary benchmarks, but I think it’s safe to assume that anything written in Nim will be faster than the equivalent Python code. Luz runs instantaneously, and Sparkle responds to requests almost instantaneously as well. Neither of them are doing anything that I wouldn’t expect Python to do at an acceptable speed under the same circumstances, however.

One thing about Nim benchmarks that I have seen is that they are generally performed with the -d:danger compiler flag, which disables all runtime checks. This is done in the name of “fairness” in comparison with C, but it doesn’t really seem fair to me if the norm for the language in production is -d:release.

I definitely found Nim very natural to develop in. Unlike Rust, which I also tried (and failed) to learn recently, most of the concepts were already familiar to me from other languages, and so was the syntax. I often found myself writing correct Nim code the first time, and where I made mistakes they were flagged during compilation in a way that was easy to understand. Runtime errors are also handled relatively gracefully - no segfaults, even though Nim compiles to C, like Cython does.

Overall, a very interesting language that I look forward to doing more with.

Sparkle in action

Recent Movie Watchings

I’ve watched a lot of movies recently that I have a bit to say about, but not enough for a big standalone post dissecting them like the ones for Wrong Turn and Ready Player One, so I’m just throwing them all together here.

Kimi

Kimi is a 2022 psychological thriller about an agoraphobic woman, Angela, who works from home for a smart speaker company - creators of the eponymous “Kimi” - listening to supposedly anonymised audio clips that the speaker’s AI couldn’t understand. On one of the clips she hears what she believes to be an assault in the background, and when her employers are reluctant to investigate she has to (gulp)… leave her apartment!

Screenshot from Kimi, of Angela out and about and wearing a mask

The main thing that I really liked about this movie was the portrayal of her struggle to leave her apartment, and the paradoxical sense of claustrophobia when she does. I felt much the same at one point in my life and it rang true to me.

On the other hand, when it gets down to thriller time, the action is quite repetitive and pointless. She gets captured, escapes, captured again almost straight away, escapes again right outside her building, and then there is somebody waiting for her in her apartment anyway. Boring. It gets better from there, but too late.

One thing I really didn’t like was the role of the smart speaker, Kimi. Although the plot early on does highlight a lack of privacy and data protection when Angela is able to find out whose speaker recorded the clips, and obtain further recordings, this is undermined by the plot being fundamentally about solving a murder thanks to the speaker’s ubiquitous surveillance. It then takes on a heroic role at the climax when Angela is able to outwit several hired goons by ordering it to do various things like cut the lights and play music and so on. Overall, I would say the movie comes down on the side of being pro corporate surveillance.

Mary Shelley

This 2017 historical drama is about the life of Mary Shelley and the sources of inspiration for her novel Frankenstein. Turns out men are the real monster??

Screenshot from Mary Shelley, of Mary (played by Elle Fanning) in a bonnet

I enjoyed this one a lot. I read up about her a bit after watching it and it seems like it was a bit loose with some of the details of her life (like how many children she had, and when they died), but what am I, a Mary Shelley scholar?

Like Frankenstein, it explores the theme of men’s irresponsibility towards the procreative act, and neglect of their progeny, but more explicitly, and as such it’s a great complement to the book. Interestingly, the male characters don’t really seem to get it, and focus on the idea that Frankenstein is about Mary alone feeling neglected, rather than a more general lack of responsibility on their part. She doesn’t correct them.

The Death of Stalin

The Death of Stalin is a political black comedy from 2017 about the aftermath of Stalin’s death. I found it pretty funny, but it was also deeply weird to hear a bunch of undisguised American and British accents from characters in a movie set in the Soviet Union. Probably it would have been worse if they put on stereotypical Russian accents, of course, but Cockney Stalin?

Screenshot from The Death of Stalin, of Stalin laughing right before he has a stroke

As usual, I would probably prefer to see something from post-Soviet creators examining their own history, through a satirical lens or otherwise.

The Batman

The Batman is the latest in the saga of the Bat-men, this time starring Bobby Battinson. I think it might be my new favourite Batman movie, though I didn’t see the Ben Affleck one so I am not qualified to declare it the objectively best Batman movie.

The movie leans heavily into noir and gothic aesthetics, and imagines Bruce Wayne as a moody orphan who is uninterested in much outside of being a bat - including the effect his inherited wealth is having on society. Having become, under his father’s watch, a sort of slush fund for corruption, Bruce Wayne’s wealth is the underlying cause of much of the violence that Batman seeks to combat alongside his friends in the police.

Screenshot from The Batman, of emo Bruce Wayne

His main adversary is the Riddler, portrayed here as a vigilante serial killer with shades of Seven’s John Doe and the Zodiac killer. While Batman is beating up common criminals and thugs, the Riddler targets the powerful and corrupt, and as such it’s hard to identify the villainy in his actions for much of the movie (aside from the fact that he’s, y’know, doing murders and all that). The general public certainly see him as a hero. Meanwhile, he sees himself and Batman as partners, playing off each other in a common crusade to clean up the city (and who else could, but the only two men smart enough to appreciate a good riddle). It isn’t until his plan to “wipe the scum off the streets” by flooding the city is revealed that we see his contempt for the innocent as well as the guilty.

Unfortunately the overall politics of the movie could probably be summed up as “we just need more good billionaires”. Bruce comes to realise that his vast wealth comes with responsibilities, and it seems like he’s going to do some philanthropy alongside his nightly costumed kickpunching. I guess we’ll find out in the sequel if enlightened liberal capitalism is the solution to capitalism’s problems.

I didn’t even realise that Colin Farrell was in this until I saw his name in the credits. He’s completely unrecognisable as the Penguin.

Choose or Die

Choose or Die is a 2022 horror thriller about a cursed retro video game. This seemed like a fun premise, but unfortunately the movie as a whole was fucking crap.

Screenshot from Choose or Die, of Kayla and Isaac standing in front of Isaac's car, looking concerned

The main fault I find with it is that the game (named CURS>R) has apparently boundless powers to reshape reality to its whims, and that the choices it presents players with are seemingly arbitrary, and differ wildly in terms of their consequences. For example, the first choice the main character, Kayla, is given is between coffee and cake in a diner, with apparently no negative consequences. Another character’s first choice is between eating a computer or eating their own arm - both potentially fatal, one would think. For one of the “levels” of the game, Kayla is asked to choose between a blue door or a red one, with no other information. It reminded me of the first text-based video game I wrote when I was 7, which was just a collection of random scenarios where every path ultimately ended with the player being eaten by a tiger.

The climax sees Kayla facing off with a previous player (who we are introduced to in the opening scene, but learn very little about). At this point a moral is shoehorned in about white male entitlement in videogaming - which would be a fine theme if it wasn’t introduced so late and handled so clumsily.

I did like the grungy ’80s aesthetic, and that it seemed almost self-aware about how played out that kind of nostalgia is at this point. Also, Asa Butterfield is great as a basement-dwelling retro video gaming obsessive. I do love me some Asa Butterfield…

Screenshot from Choose or Die, of Isaac (played by Asa Butterfield)

Sim-Universe

I just done watched Thought Slime’s video about the simulation argument (many months ago by the time I’m actually publishing this), and it’s a topic about which I’ve had some thoughts myself, so I thought maybe it was time to write some of them down.

Like comrade Slime, I think that it’s an interesting thought experiment, but a lot of what is said about it is poorly thought through at best. It’s particularly frustrating when Nick Bostrom’s argument is held up as “proof” of the “certainty” that we are living in a simulation, alongside arguments and assertions that completely contradict it. The argument itself doesn’t claim to be proof of any such thing - it presents three possibilities based on premises about which we have almost no information.

Why would we simulate?

One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears.

This is Nick’s description of what futuristic super-computing civilisations would do with their computational power, but he doesn’t really get into why they might do this. Into this absence people pour all sorts of ideas. A common one is that we are equivalent to NPCs in a video-game. A related one is that we exist so that the simulators can pop in and out of our minds and ride us around for some reason - historical educational purposes perhaps, or the thrill of slumming it in the stupid-ages.

These are interesting concepts for science-fiction, but I don’t find them compelling as claims about the reality of our world. Video-games are indeed able to present more visually convincing realities than in the past, but they don’t do that by simulating entire physical universes in minute detail. They might run physics simulations for a variety of things in the vicinity of the player - beyond the bare minimum necessary to convince, they are hollow, simplified facades, and anything not relevant to the context of the current gameplay is non-existent. Similarly, what would it add to a player’s experience to have NPCs living lives outside of that context and having inner lives?

Nick Bostrom actually gets into some of the mechanisms that could be used to reduce the computational requirements of a simulation:

If the environment is included in the simulation, this will require additional computing power – how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible… But in order to get a realistic simulation of human experience, much less is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities.

Distant astronomical objects can have highly compressed representations: verisimilitude need extend to the narrow band of properties that we can observe from our planet or solar system spacecraft. On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated, but microscopic phenomena could likely be filled in ad hoc. What you see through an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world

The implicit assumption here is that the simulation is being made convincing for the benefit of the simulated minds (i.e. us), which always run at full resolution. Video-games are not run for the entertainment of NPCs however. If simulations are being run for the amusement of posthuman “players”, and they are interested in reducing the computational requirements, as Nick assumes, why would they not prune the most computationally expensive component - simulated human minds that are not immediately relevant to the player’s current experience? Would they even need to simulate fully conscious humans at all to provide convincing NPCs to players?

Nick does suggest something akin to such pruning in his original argument:

In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or “shadow-people” – humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious.

However, it is again expressed as if the purpose of the simulation is solely to fool its unwitting inhabitant(s), with no proposed utility for the creators of the simulation.

I submit to you that if you are experiencing a private and mundane moment right now, and are conscious of it, you are probably not a character simulated on some posthuman equivalent of a PlayStation.

A more reasonable suggestion, to my mind, is that we would run such simulations in order to study our own civilisation at different stages of development, or to see how civilisations might develop under different circumstances. Would these simulations even require fully conscious simulated participants in order to be useful? Would they need to simulate the full lives of everybody who has ever lived? Or would they drastically reduce the number of minds needing to be simulated by cutting out all the boring parts? Would there really even be anything to be learned from such simulations?

This lack of clarity about why a posthuman civilisation would run ancestor simulations is at the heart of a lot of my issues with the argument. Without that understanding, we can’t really say whether such a civilisation would run them or not, or how many, or what their parameters would be. It’s just sort of assumed that they probably will because it would be a cool thing to be able to do, and some people say they would do it right now if it were possible. But that’s an easy thing to say when it’s impossible, and you don’t have to worry about the ethical concerns or the resources involved.

Another type of simulation we might run is of universes with different physical laws, but as the quotes above about simplifying the simulations suggest, these would have a different set of priorities, and wouldn’t really qualify as “ancestor simulations”. Whether they would even result in conscious entities would probably depend on the parameters of the simulation - they wouldn’t be the goal. If we take seriously the suggestion that we live in this kind of simulation, we can’t even assume that the simulators are anything like us, not even in their remote past, or that the simulating universe resembles ours in any way - so how can we possibly speculate about their motives, or about what is computationally possible in their universe?

Simulations Within Simulations

One of the silliest suggestions that some people seem to take seriously is that the posthuman civilisation in the base reality would run simulations beyond the point where the simulated civilisations would be running their own simulations, with those simulations running further simulations, and so on.

Nick likens this scenario to running code in a virtual machine:

It may be possible for simulated civilizations to become posthuman. They may then run their own ancestor-simulations on powerful computers they build in their simulated universe. Such computers would be “virtual machines”, a familiar concept in computer science. (Java script web-applets, for instance, run on a virtual machine – a simulated computer – inside your desktop.)

His example is terrible, but the basic assertion is correct: a computer can simulate another computer in various ways, with varying levels of overhead. In the best case, code running in the virtual machine runs directly on the host hardware with no translation necessary. Obviously, this doesn’t add any processing power - software running in the host has to share its resources with the software running in the virtual machine.

Now, let’s think through this scenario a little bit.

Say you are a posthuman civilisation that has converted an entire planet into a giant computer. All the computation you decide to do is running on this computer. For some reason, you decide to run an ancestor simulation of your quite recent past, such that the simulated universe is on the cusp of achieving their own planet-computer. All of the computation of that universe would actually be running on your computer, alongside all the existing computation of your civilisation, and all the other work required for the simulation, all the fake stars and physics and advanced posthuman minds. Then you let them run their own simulation of their own recent past - now you have to support the load of three civilisations with planet-sized computers on only one actual physical planet-sized computer. And then four, and then five, and on and on.

A little while ago we were talking about cutting corners to save resources and focus on running our ancestors’ minds, and now here we are supporting an infinite regress of posthuman computers for no obvious purpose. There wouldn’t be any shortcuts here - if a computer 10 levels down wants to compute a hash or calculate millions of primes, you would actually have to do the work or they would know.

There are two possible workarounds/objections to this that I can think of:

  1. Simulations could be run slower than the host reality to leave room for them in the host’s resources. Would a time-dilated simulation be useful? I guess that depends on what you’re running it for!
  2. Posthuman level simulations would only be allowed to develop once the host reality had converted enough matter to pure computer that supporting them was not a burden. In other words, the simulations would always have to lag behind by some significant amount.

Fair enough, I guess that would do it: if keeping the simulations going is really important, you might always dedicate a proportional amount of your ever-increasing computational resources to them. I do come back to the why, though - would a simulation of a posthuman-level civilisation be a fun game for posthumans? Would there be anything to learn from it that you didn’t document when you were going through that phase?

One consideration that counts against the multi-level hypothesis is that the computational cost for the basement-level simulators would be very great. Simulating even a single posthuman civilization might be prohibitively expensive. If so, then we should expect our simulation to be terminated when we are about to become posthuman

Oh, well. Better to return to monke then, lest techno-god smite us for our arrogance.

If God Did Not Exist…

One possibility for why a posthuman civilization might choose not to run ancestor simulations is that doing so would raise some thorny ethical concerns. Take it away Nick:

One can speculate that advanced civilizations all develop along a trajectory that leads to the recognition of an ethical prohibition against running ancestor-simulations because of the suffering that is inflicted on the inhabitants of the simulation

Yes I think that might be likely… wait, what are you…

However, from our present point of view, it is not clear that creating a human race is immoral

Ooof. It’s not just creating a human race that we’re talking about here, it’s creating a human race and trapping them in a false reality for our own edification or amusement, and in some hypothetical scenarios, instantly terminating billions of them when they reach a certain level of development. I think most people today would baulk at the prospect of treating even a single person like that, much less generation after generation of unwitting playthings.

Even worse are the moral implications for us, today, of taking some of Nick’s proposals seriously. In relation to the idea that many minds might be simulated only partially some amount of the time in order to save resources (discussed above), he suggests that it would also be a way for the simulators to avoid inflicting suffering:

There is also the possibility of simulators abridging certain parts of the mental lives of simulated beings and giving them false memories of the sort of experiences that they would typically have had during the omitted interval. If so, one can consider the following (farfetched) solution to the problem of evil: that there is no suffering in the world and all memories of suffering are illusions. Of course, this hypothesis can be seriously entertained only at those times when you are not currently suffering.

You weren’t traumatised, you see, you just have a false memory of trauma. And no need to worry about the consequences if you feel compelled to abuse, murder or rape: those are just zombie shadow-people you’re hurting, and they don’t really feel pain! Nothing is real and nothing matters!

But wait! Maybe our simulators will take it upon themselves to reward or punish us for our behaviour in their simulation (without informing us that they will do so, or on what basis), and dedicate ludicrous amounts of resources to simulating all the minds they have ever simulated, indefinitely, in an afterlife:

Further rumination on these themes could climax in a naturalistic theogony that would study the structure of this hierarchy, and the constraints imposed on its inhabitants by the possibility that their actions on their own level may affect the treatment they receive from dwellers of deeper levels. For example, if nobody can be sure that they are at the basement-level, then everybody would have to consider the possibility that their actions will be rewarded or punished, based perhaps on moral criteria, by their simulators. An afterlife would be a real possibility.

It genuinely disturbs me that there are people who are only good because they believe there is some force outside the universe that will reward them for it, or punish them for misbehaviour - and, even worse, people who would take on the role of cosmic arbiter themselves if given the chance.

Postsingular Posthumans

Inevitably, discussions about the simulation argument are little more than speculation based on almost no information. The kind of civilisation that would be capable of running such simulations would be one that has passed through a technological singularity - a point at which technological progress becomes so rapid that its path is impossible to predict. In fact the simulation argument requires that a civilisation has achieved the ability to simulate a human-equivalent mind - an Artificial General Intelligence - widely considered to be the invention that will instigate the singularity, since such an intelligence would probably be able to improve itself at an exponential rate.

We have zero examples of a post-singularity, posthuman civilisation, and only one example of a human-level civilisation, on which to base our speculations. What will super-intelligent posthumans value? Almost by definition such a civilisation would be beyond our comprehension.

The simulation argument seems mostly, to me, to be an attempt to imagine God in a way that is appealing to 21st century techies. I’m inclined to think that such a god, like all others, is not just unknowable, but non-existent.

Joplin & Syncthing

I’ve been using Evernote for quite a few years now for keeping work and personal notes, despite never really being happy with it. Aside from the fact of entrusting my data to a private company, most of my issues with it were minor nuisances, and momentum kept me using it because I didn’t see a good alternative that seemed impressive enough to be worth the hassle of migration. I considered writing my own alternative many times, but of course there were even greater barriers to that!

A few weeks ago I decided to finally take the plunge and try out Joplin, a FOSS note-taking desktop and mobile app.

Joplin

Migrating to Joplin was relatively easy, as it can import Evernote’s “ENEX” export format. Unfortunately Evernote made me jump through a few hoops to create these files - the web app, which I usually use, wouldn’t do it, and the export had to be done notebook by notebook.

Joplin did a fairly good job of converting the formatting to its native Markdown, but my notes were a mess anyway so it hardly mattered. The main thing that seemed to go wrong was headers being converted to bold text instead of actual header lines. I also had to reorganise the notebook hierarchy since the notebooks were exported and imported individually. With that done I was about where I had been with Evernote, albeit only on my main computer.

Joplin includes both a Markdown editor and a WYSIWYG editor. I haven’t tried the WYSIWYG editor because I like writing in Markdown these days, and I’m hoping using it will result in more structured notes than I’ve been keeping in the past. The default layout has the Markdown and rendered output side by side which I do find a little strange - I can never decide which side I should be reading. However, there is a button in the top right corner of the window to switch between dedicated editing, reading and side-by-side modes.

Sync That Thing

Joplin can sync between instances using all of the main cloud storage services as well as its own cloud offering. However, the method that appealed to me, because it keeps my notes out of anybody else’s hands, was to use Syncthing.

Syncthing is a P2P file synchronisation protocol and app that supports all the operating systems that I use (not iOS), and doesn’t require any complex network configuration.

Joplin is actually unaware of Syncthing - to use it, you need to select the “File system” sync target and point it to a folder. It will periodically export changes to this folder and import any changes it finds there.

Joplin sync settings

Syncthing is managed via a web interface on localhost port 8384 by default. There are two main tasks to perform here - connecting your devices, and sharing your notes folder.

Devices in Syncthing are identified by quite unwieldy SHA-256 hashes, but it provides a number of ways to simplify exchanging these. Devices on the LAN are listed in the add device dialog, and if you’re using it on mobile there is an option to scan a QR code for the device you’re connecting. Devices have to grant permission to other devices that add them, and once they do you can choose which folders to share.

Add device

Adding a folder is just a matter of entering its path on the file system and giving it a name. You can select existing remote devices to share with on the “Sharing” tab, or share it later. Devices have to approve shared folders too, and the receiving device chooses a target location at that point.

Add folder

Syncthing on Android actually has two user interfaces, a native one and the same web UI as is available on desktop, which is a bit confusing. I found I had to drop down to the web UI to approve remote device and folder connections.

The Joplin configuration should be basically the same on any devices you want to sync - just choose the “File system” target and point to the synced notes folder. On desktop there is an option to clear the local notes and take everything fresh from the sync target, but the Android app seems to be missing this. As such, you might end up with multiple copies of Joplin’s initial documentation notes.

Snags

There are a few things to watch out for, and a few things that I personally find a bit confusing.

The first problem I encountered was due to my hesitation about where to have my phone’s photos stored on my laptop. I accepted the share to one location initially, and when I later deleted and recreated the share in another location I somehow orphaned 33 files. My phone is still stuck at 99% synced as a result.

One thing that appears strange to me is that you can share a folder from one device to a second, and then from the second to a third, without the third ever being aware of the first. I’m not sure if there are any consequences to that setup, or if it is functionally the same as having all the devices aware of each other.

On the Joplin side, the format of the sync repository occasionally needs to be updated for new versions of the software, at which point it becomes unusable by older versions. It remains to be seen how much of an issue this will be - my main worry is that I will update one device beyond what is available on another, or that I will be forced to update at an inconvenient time.

I have had one newly created note fail to sync to my phone so far, though it went to another device, and notes created subsequently synced to it no problem. This may have been the result of one of the issues described above, but I haven’t figured it out yet.

The final thing to be aware of is that Syncthing won’t try to resolve conflicts between files, instead choosing and renaming a “loser” when conflicts occur. I’m not sure what Joplin will make of the renamed files, but it’s something to be aware of if you’re moving between devices and possibly updating the same note before it can be synced.

Beyond Notes

Syncthing has actually been a revelation for me. As well as my notes I’ve been using it to sync photos from my phone to my laptop (previously I was relying on Google Photos), and for sending miscellaneous files from my laptop to my phone (previously Google Drive’s job). I’ve also been using it to send video files to my phone, something I wasn’t even bothering with before.

It feels great to be able to cut Google out of the loop as well as Evernote, and so far it has been working away well in the background without me having to think much about it after the initial setup.

Out of Road

Out of Road

I don’t remember where I got the inspiration for this one. I guess I’ve been thinking a lot about climate change recently, what with the great crypto/NFT debates earlier in the year and recent extreme weather events and wildfires. It seems particularly timely given the recent IPCC report.

In terms of technique, I used a 3D render as a base, which is something I’ve done before, but this time I used somebody else’s CC licensed model because cars are kinda complex.

“Ford Mustang Mach 1” by BaldGuyMartin is licensed under Creative Commons Attribution.

Timelapse

Out of Road

Wrong Turn… into Wokeness

Beyond woke, yet the unintended result is the victims are punished for being woke. Skip it. This is not a Wrong Turn movie.

David J, May 02, 2021 - 1.5/5 stars on rottentomatoes.com

Content warning: homophobia, misogyny, racism, spoilers for the movie Wrong Turn (2021)

It seemed pretty clear to me after watching Wrong Turn that it had a conservative message. I haven’t seen the previous movies in the franchise, but I understand they are based on some unfair stereotypes about Appalachian people, so it seems a fair enough twist even if it doesn’t resonate with me personally. When I looked at the audience reviews on Rotten Tomatoes, however, I discovered that a good many people who seemed like they would be on-board with that perspective instead understood it to be “woke” propaganda.

The Message

In Wrong Turn (2021), a young, white, American woman, Jen, throws off the shackles of a guaranteed prominent position in her father’s construction business to hike the Appalachian trail with her boyfriend and their friends. Despite warnings from the locals, they stray off the well-worn path, and fall victim to a primitivist cult known as The Foundation.

It’s clear from the start that Jen is the character that the audience are supposed to relate to. She’s torn between the path her father has laid out for her and the ideals of her friends, and is never shown to have taken those ideals to heart personally. She is more down-to-earth and capable than the other characters - she is the one that has to change the tyre when they get a flat en-route, for example, and later she is the only one able to think on her feet in high-pressure situations.

A woman changing a tyre is obviously peak woke

Her boyfriend, Darius, is an idealistic, politically active black man who expresses socialist and environmentalist ideas. Their friend group are well educated young urbanites, and include a gay couple - pretty much every review of the movie describes them as “diverse”. They’re incredibly cringey to be honest - when an aggressive local accuses them of never having done a day’s work in their lives they actually start bragging about their educational achievements and white-collar jobs - all except our hero, Jen, who is just a little lost in life.

Wokevengers assemble!

Not long after setting out they divert from the trail to find a civil-war fort that one of them is interested in. They quickly become lost, and fall victim to various traps before finally encountering, and murdering, a member of the Foundation. Shortly thereafter they are captured and taken to the Foundation’s camp.

Finally we learn what the villains of the piece are all about - not inbred hill people as expected, but an egalitarian, primitivist, socialist cult whose leader has an impeccable hipster coiffure. The surviving friends are put on trial, and their every defense is twisted back on them - they are the intruders! They rushed to judgement based on appearances! They murdered someone in cold blood! They need to respect the Foundation’s culture! Such hypocrites!

King of the hipsters

Far from being inbred, this cult thrives on the recruitment of wayward travelers who they either brainwash into accepting their ideology, or blind with a hot poker and leave to fumble in a dark cave. Really subtle stuff.

Jen and Darius are the only survivors of the trial, and only because Jen convinces the group’s leader, Venable, that they could be useful members of the community. Jen, apparently considering herself to have no relevant skills, is only able to offer herself as Venable’s wife.

As I mentioned earlier, Jen is actually the only member of the friend group that is portrayed as having any degree of competence or skill relevant to life in the real world. The rest of them are useless, out-of-touch, and varying degrees of obnoxious. But in the woke socialist utopia of the Foundation, she is only valued for her body.

As a result of her relationship with Venable, Jen ends up pregnant. Of course she never considers a termination.

It’s true that “woke” people are amongst the victims in this movie, but more importantly it is “wokeness” that is the monster, leading good All-American girls like Jen off the conservative middle-class path and into a life of bondage, exploitation and sin. She’s the character we’re supposed to find relatable - the rest of them deserve their fate because of their embrace of “woke ideology”, and their deaths are likely intended to be entertaining for that reason. They are so in tune with the villains that Darius chooses to stay with them instead of taking the opportunity to escape.

Let me have another go at summarising what I think this movie is trying to express.

In Wrong Turn (2021), a young, white, American woman is led astray by her “woke” boyfriend and her “woke” friends. While seeking to dig up civil war history that is best left buried, they encounter the logical end-point of “woke” ideology made manifest, and it abuses them horrifically. Jen escapes thanks to the savvy and skills instilled in her by her conservative upbringing, the refusal of her father to abandon his search for her, and the kindness of the misunderstood locals. She returns to her middle-class path through life by working in the family business, and violently rejects further attempts to lead her back to the horrors of “wokeness”.

In short, this is a conservative movie espousing conservative ideals. I disagree with David J, quoted above - the characters are intentionally punished for being “woke”.

“Hollyweird is dying a slow death”

Let’s take a look at some of the audience reviews on Rotten Tomatoes from people who seem to have missed the point.

Its just more hand fisted political bs with pretty crappy character development.

Gage S, Apr 14, 2021 - 1.5/5 stars

I guess you could interpret “hand fisted political bs” to be referring to the conservative political messaging. I choose not to.

“Woke” America is destroying our culture Absolutely irredeemable

James A, Mar 14, 2021 - 0.5/5 stars

It might seem like James gets it, if not for the 0.5 stars.

I would have given this a four, if it wasn’t for the “over the top” libtardation seen in the movie that Hollyweird so much loves these days. The mixed race couple, the gay couple, the Arab guy, the Asian guy, Black guy wearing “Black Owned” T-Shirt, The racist Sheriff, White-Guilt guy gets mad, calling Confederate monument “Racism” like a White-Knight. Pretty embarrassing stuff. Original from 2003 was better, this wasn’t bad. Hollyweird is dying a slow death, stuff like this in movies is asinine…

Davis H, Mar 06, 2021 - 3/5 stars

It’s like this guy stopped watching a third of the way through. Also a pretty explicit example of how the mere existence of characters who are not straight or white is unacceptable political content to some people.

Another woke joke bad movie

michael b, Mar 07, 2021 - 0.5/5 stars

Joke’s on you, buddy.

This next one’s pretty gross and misogynistic, and you won’t miss much if you choose to skip it.

Revolves around a queer braindead friend group, which just so happens to have every single race in it. Due to the fact that the whore with the least amount of brain cells becomes a rambo bitch and survives till the end makes me not able to give this more than 4 stars. Giving this 4 stars because you get to see a dumbass friend group suffer. Also in the foundations court room they never mentioned how the foundation drew first blood with the gay dude getting a big log to the face for the last time.

the b, Jun 10, 2021 - 2/5 stars

I had to highlight this one because they understood at least part of what the movie was about - watching “woke” straw-people being punished. I can’t give this review more than 4 stars however because of their views on Jen - I guess because she has sex she must be a “dumb whore”. 2/5 stars.

Final Observation

What strikes me about these reviews is that they reveal how the movie reinforces a conservative (or more broadly right-wing) worldview regardless of whether the viewer actually understands the messaging. Either the messaging is understood and received as intended, or the mere presence of POC and gay characters reinforces a perception of a liberal Hollywood elite pushing a “woke” agenda.

Gemini Launch!

In the olden-times, before the Web became basically synonymous with the Internet itself in many people’s minds, there was another, competing hypertext protocol: Gopher.

I say “was”, but of course Gopher never really went away - it was kept alive by enthusiasts, and in recent years there has been a resurgence of interest in it as a sort of haven from the ubiquitous surveillance and relentless commercialisation of the Web.

I’ve long been interested in Gopher (I even made a game about it), and have intended to start a phlog for a while without ever going ahead with it. Something about it always just seemed a little bit awkward and off-putting. I was torn between using Gophermap (i.e. menu) files for everything, or using plain text for posts and sacrificing any hypertextuality. I was torn between finding the need to wrap text to be cool and retro, or a hassle that results in an inferior experience for both creating and consuming content.

Gemini is a new protocol which takes inspiration from both Gopher and the Web, and from a certain perspective, improves on both.

When I heard about Gemini I didn’t really get it at first. I thought it was just Gopher with SSL, which is nice, but I figured I’d get set up on Gopher first and then consider a Gemini mirror. A few days ago I saw a screenshot of the Lagrange browser on Mastodon and started to look into it a bit more. When I realised just how many issues of both Gopher and the web it addresses, I was hooked! I spent several days after that setting up a capsule (the Gemini equivalent of a “site”).

My Gemlog in Lagrange

Static Generation

After experimenting with a Gemini server for a bit and creating a few static text/gemini files, I decided that I wanted to statically generate my gemlog the same way that I do my blog. I expected to have to write something from scratch to do this, but after some experimentation I was able to get the Pelican static site generator (which I also use for my blog) to both read and output .gmi files. It does take a bit of configuration however, and I had to monkeypatch a couple of methods in Pelican.

Unfortunately this means that it is only guaranteed to work with the current version of Pelican, 4.6.0, and could break at any time. Nonetheless, the plugin is available on GitHub if you want to try it out.

Gemini Reader

The first thing required was a custom “Reader” that can handle .gmi files instead of the usual Markdown or reStructuredText files. It’s simple enough - it just parses the file up to the first blank line as metadata, and the rest of the content is returned unmodified, since we are also going to output the same format.

from pelican.readers import BaseReader


class GeminiReader(BaseReader):
    enabled = True

    file_extensions = ['gmi', 'gemini']

    def read(self, filename):
        metadata = {}
        content = ""
        with open(filename, mode='r') as f:
            end_of_meta = False
            while not end_of_meta:
                current = f.readline()
                if current == '\n' or current == '':
                    end_of_meta = True
                    continue
                current = current.strip()
                # Split on the first ': ' only, so values can contain colons.
                key, _, value = current.partition(': ')
                metadata[key.lower()] = value
            # After the first blank line, the rest is content.
            content = f.read()

        parsed = {}
        for key, value in metadata.items():
            parsed[key] = self.process_metadata(key, value)

        return content, parsed
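To make the header-splitting concrete, here’s the same logic as a standalone function with no Pelican dependency, run against a made-up post (the `parse_gmi` name and the sample content are just for illustration):

```python
import io

def parse_gmi(text):
    # Same approach as the reader: lines up to the first blank line are
    # 'Key: value' metadata, everything after is returned as-is.
    metadata = {}
    f = io.StringIO(text)
    while True:
        line = f.readline()
        if line == '\n' or line == '':
            break
        key, value = line.strip().split(': ', 1)
        metadata[key.lower()] = value
    return metadata, f.read()

post = "Title: Hello Gemini\nDate: 2021-06-23\n\n# Hello\n\nFirst post."
meta, body = parse_gmi(post)
assert meta == {'title': 'Hello Gemini', 'date': '2021-06-23'}
assert body == "# Hello\n\nFirst post."
```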

Handling Internal Links

Pelican has a mechanism for linking to content internal to the site: you start the URL with {static} or {filename}, and it replaces those with the appropriate paths during generation. However, this didn’t work with the Gemini link syntax - the replacement is based on a regular expression that assumes the placeholder will be found in an attribute of an HTML element.

I couldn’t find any setting or hook in the plugin system to alter this regular expression. There is a setting to customise the part that specifies the braces, so you could change the placeholders to ¿¿static?? or something if you like, as long as it is still found in HTML. It seemed like my only option was to replace the method where the problem regex pattern is defined, and use something that matches Gemini links instead.

import re

def _get_intrasite_link_regex(self):
    # Replacement for the HTML-oriented original: match Gemini '=> ' link
    # lines instead of attribute values, keeping the expected group names.
    intrasite_link_regex = self.settings['INTRASITE_LINK_REGEX']
    regex = r"(?P<markup>=> )(?P<quote>)(?P<path>{}(?P<value>[\S]*))".format(intrasite_link_regex)
    return re.compile(regex)

You’ll notice this also has to include a “quote” group because that was present in the HTML version and was expected elsewhere - here it will always be an empty string.
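As a quick sanity check, here’s the pattern matching a Gemini link line. This assumes Pelican’s default `INTRASITE_LINK_REGEX` value; if you’ve customised that setting, substitute your own:

```python
import re

# Pelican's default INTRASITE_LINK_REGEX (an assumption here - check
# your settings if you've changed it).
INTRASITE_LINK_REGEX = '[{|](?P<what>.*?)[|}]'

regex = r"(?P<markup>=> )(?P<quote>)(?P<path>{}(?P<value>[\S]*))".format(
    INTRASITE_LINK_REGEX)
pattern = re.compile(regex)

m = pattern.search('=> {static}/images/logo.png My logo')
assert m.group('what') == 'static'
assert m.group('path') == '{static}/images/logo.png'
assert m.group('quote') == ''  # always empty for Gemini links
```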

Unfortunately, the problems didn’t end there. I found that the placeholders were removed, but not replaced with the absolute URL of the capsule. This turned out to be because urllib is used to join the URL components, and it doesn’t recognise the gemini protocol. To get around this I had to replace another method, and make a call to a wrapper around urllib.parse.urljoin.

from urllib.parse import urljoin

def _urljoin(base, url, *args, **kwargs):
    # urllib doesn't recognise gemini:// as a joinable scheme, so
    # temporarily swap it for https:// and swap back afterwards.
    is_gemini = base.startswith('gemini://')
    if is_gemini:
        base = base.replace('gemini://', 'https://')
    result = urljoin(base, url, *args, **kwargs)
    if is_gemini:
        result = result.replace('https://', 'gemini://')
    return result
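The underlying issue is easy to demonstrate: urllib.parse doesn’t list gemini among the schemes that support relative joining, so it hands back the relative reference untouched. The scheme swap sidesteps that (the example URLs are made up):

```python
from urllib.parse import urljoin

# 'gemini' isn't in urllib's uses_relative list, so the join is a no-op:
assert urljoin('gemini://example.com/gemlog/', 'index.gmi') == 'index.gmi'

# The scheme-swap trick used by the wrapper:
base = 'gemini://example.com/gemlog/'.replace('gemini://', 'https://')
joined = urljoin(base, 'index.gmi').replace('https://', 'gemini://')
assert joined == 'gemini://example.com/gemlog/index.gmi'
```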

Gemini Output

Pelican uses Jinja2 for its templating, which is happy to work with any type of text file, so creating .gmi templates wasn’t an issue. Handily, there is a setting to look for templates with extensions other than .html.

THEME = 'themes/hypergem'
TEMPLATE_EXTENSIONS = ['.gmi', '.gemini']

To get Pelican to output files with a .gmi extension instead of .html, there are a bunch of settings for the different parts of the site. A single “extension” setting like for the templates would be nice, but whatchagonnado? I took the opportunity to customise the article location and file names as well.

# These settings are required to output files as .gmi instead of .html
ARTICLE_URL = 'articles/{date:%Y}-{date:%m}-{date:%d}-{slug}.gmi'
ARTICLE_SAVE_AS = ARTICLE_URL

DRAFT_URL = 'drafts/{slug}.gmi'
DRAFT_SAVE_AS = DRAFT_URL

PAGE_URL = 'pages/{slug}.gmi'
PAGE_SAVE_AS = PAGE_URL

DRAFT_PAGE_URL = 'drafts/pages/{slug}.gmi'
DRAFT_PAGE_SAVE_AS = DRAFT_PAGE_URL

AUTHOR_URL = 'author/{slug}.gmi'
AUTHOR_SAVE_AS = AUTHOR_URL

CATEGORY_URL = 'category/{slug}.gmi'
CATEGORY_SAVE_AS = CATEGORY_URL

TAG_URL = 'tag/{slug}.gmi'
TAG_SAVE_AS = TAG_URL

ARCHIVES_SAVE_AS = 'archives.gmi'
AUTHORS_SAVE_AS = 'authors.gmi'
CATEGORIES_SAVE_AS = 'categories.gmi'
TAGS_SAVE_AS = 'tags.gmi'

Theme

I haven’t got much to say about this. I wanted the article links to be a bit more descriptive than just the date and title, so I did something similar to what medusae.space does and included the article summary, the category, and the tags.

It’s close to general purpose but not quite - I added a custom SITELOGO setting that is used on the index page with an ASCII art version of my logo generated using ascii-generator.site, and there is also a custom template for the custom landing page. The index is renamed using a setting, and another page is renamed to index.gmi to take its place. This is so if I want to add content that isn’t generated by Pelican, I have the scope to do so.

INDEX_SAVE_AS = 'gemlog.gmi'
Title: Hyperlink Your Heart
Date: 2021-06-23 22:59
Slug: index
Authors: Kevin Houlihan
Summary: Capsule index
URL: index.gmi
save_as: index.gmi
Template: capsule_intro
Status: hidden

Hosting

I’m serving the capsule using Jetforce from a first generation Raspberry Pi which I had lying around and haven’t done anything with in a while. There was nothing really involved in setting it up beyond what is described in the documentation, except that I installed it in a virtualenv.

I also took steps to make sure it is running as a dedicated user with no permissions to anything else on the system.

Real professional operation

Future

I’m not sure what’s next, but I’m excited! I might discuss with the Pelican crew if there are any ways around the issues I encountered that I might have overlooked, or if it could be adapted to be more suited to non-HTML output. If not, maybe a Gemini fork is in order. I have no idea if there are further issues with it beyond the functionality that I’ve used.

I have quite a few posts to port over from my blog yet, and I need to get some image optimisation happening there like I have here. Besides that, I guess all I have to do is get to know the community!

Visit My Capsule (and Beyond)

If you’re already familiar with Gemini please check out my capsule.

If you’re not, well, I still encourage you to visit, but I should probably give you some guidance on getting started.

If you just want to dip your toes you can browse the Gemini network using a HTTP proxy (here’s one, and another). For what I would consider the “full experience” you will need a dedicated browser. I’ve been using Lagrange, and highly recommend it, but there are a whole bunch of others if that doesn’t suit you. Many of them also support Gopher, which makes browsing both into a seamless experience outside of the modern Web.

When you want to move beyond my capsule, here are some others I recommend:

I leave you with my anxious young poppy, Jennifer, on AstroBotany:

            O
            |
           \o
            |o
           \/
.  , _ . ., l, _ ., _ .  
^      '        `    '

name  : "Jennifer"
stage : anxious young poppy
age   : 2 days
rate  : 1st generation (x1.0)
score : 326788
water : |██████████| 100%
bonus : |          | 2%

iRehabilitation

I’ve been plagued by temptation lately to buy a Pinebook Pro. My current laptop is really a desktop replacement, a beast that can hardly last an hour untethered from a power socket. It’s usually not worth the hassle of extracting it from its tangled nest of cables when I want to compute elsewhere, and that’s fine - it’s the workhorse. But as a result, the idea of a light, efficient laptop is alluring, especially when it’s one that runs Linux.

Out with the New

However, I don’t really like to buy new devices without good reason. I already have another laptop that meets the criteria of being relatively light and portable - the MacBook Pro that served as my main work machine between 2015 and 2019 when my wife and I were floating around Ireland, France and Spain and living out of the back of our car. I have been using it as a more portable option already on occasion, but it has a few annoying problems:

  • The battery isn’t in great shape, so while it’s a lot better than my main laptop, it’s nothing like what the Pinebook Pro promises.
  • The OS is outdated. It constantly demands that I update to a newer version of MacOS, but I don’t want to. Apparently it could run the latest version, but I don’t trust Apple to preserve the usability of old devices.
  • It runs MacOS. MacOS is fine - it’s a Unix, it’s not Windows… but it still has a lot of little annoyances, it’s proprietary, and to be honest, I’m bored with it.

Basically it’s lost its shine, and isn’t fun to use anymore.

Happier times house-sitting in Belfast

A Cunning Plan

One of Linux’s oft-heralded killer use-cases is in giving old hardware new life. I’ve never really used it for that explicit purpose - whenever I use Linux, it’s just because I prefer to use Linux, even if it happens to be on old or low-powered machines. This one isn’t exactly an ancient artifact, but I thought maybe installing a Linux distro with a lightweight desktop environment would help stretch the battery life and make it feel a bit snappier, more like a new machine.

The two distros I considered were ElementaryOS and Xubuntu. I’m not sure how lightweight Elementary’s DE is, but I liked the look of it, so I decided to try it out with an eye to maybe using it as my main OS some day.

First impressions were great - it only used 700MB of RAM after booting (compared to nearly 4GB for MacOS!), and the degree of visual flair and polish was incredibly impressive. Unfortunately a couple of things put me off - when I cut the CPU frequency to 800MHz I began to experience occasional lag, and at one point the shell crashed with no way to recover it!

I didn’t have any experiences like that with Xubuntu. I ran it for a whole day from a USB stick, installed a bunch of software, worked on a blog post, and had no issues - so that was that decision made! It’s definitely not as visually impressive as Elementary, but I’d rather a responsive system than a pretty one in this case.

Installation

Installation was pretty smooth, especially compared to installing Linux on PowerPC Macs back in the day. The only snag was with the proprietary wireless driver. This was easily enough installed using the “Additional Drivers” settings dialog when running from the USB stick, but after installation the required driver was no longer available. Having no other means to connect to a network, this was a serious problem!

Solving this involved a couple of steps. First I enabled the “CDROM” source in the Software & Updates settings, under the “Other Software” tab. This caused the driver to become available in the Additional Drivers dialog, but it wouldn’t install. The problem was that the live USB stick was mounted somewhere under /media/kevin, but apt expected it to be mounted at /media/cdrom, which didn’t even exist. Unmounting the USB stick and running the following commands sorted it out, and allowed me to connect to the WiFi to install and upgrade other packages.

sudo mkdir /media/cdrom
sudo mount /dev/sdb1 /media/cdrom
sudo apt install bcmwl-kernel-source

Results

Unfortunately the results were not quite what I’d hoped. I performed a test where I played a movie and music on a loop under both OSes, and Xubuntu was down to 10% battery in 1 hour 46 minutes, while Mac OS took 2 hours and 28 minutes to reach the same level. This was with all cores throttled to 800MHz under Xubuntu, and MacOS doing whatever it does naturally to save energy, but full screen and keyboard backlight brightness on both.
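Put another way, the gap works out at roughly a 40% longer runtime for MacOS in this (admittedly rough) test:

```python
# Minutes to reach 10% battery in each run:
xubuntu = 1 * 60 + 46   # 106 minutes
macos = 2 * 60 + 28     # 148 minutes

advantage = macos / xubuntu
assert round(advantage, 2) == 1.4  # MacOS lasted ~40% longer
```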

While Xubuntu runs great, and is much more pleasant to use for me, it doesn’t seem to achieve the same battery life under similar loads. I’m torn now between the user experience I prefer under Xubuntu and the superior energy efficiency of MacOS… Or perhaps buying a Pinebook after all!

Cheers on a game-jam well done

Recent Art & Portfolio

Once again I have become neglectful of updating this blog with my artwork, so let’s do a roundup of the last uh… 7 months?!?! and maybe I’ll try to get it back on track from here on. Though I do have a nice portfolio site now that I have been keeping up to date, so if you really like my art you could be following that as well. More on that below.

Socialist Revolutionaries Past & Future

Last October I was trying to get back into the development of my game Just a Robot, and started as I always do by completely re-imagining its entire look. In this case I was inspired by the look of Soviet propaganda and Anarchist woodcuts.

Robot Propaganda

I followed that up in December with this tribute to the luxurious moustache of Irish revolutionary socialist James Connolly.

James Connolly

Spaaaace

For the New Year I committed myself to working on more space and sci-fi themed stuff. I started with this depiction of a space station roughly based on the ISS.

Space station

Also in January, a spaceship approaching Mercury, with some tricky perspective.

Mercury approach

In February I did another piece loosely inspired by Soviet propaganda, on a theme that turned out to be somewhat controversial - the idea of billionaires fleeing into space and leaving the rest of us to our fates. Some people took it as a literal prediction of future events, but I think of it in more allegorical terms. Capital is ruining the natural and social environment without any sense of responsibility to the rest of us, and its masters can escape the consequences of their actions without literally leaving the planet - depicting them doing so is just a good way of describing the situation, in my opinion.

Flight of the Billionaires

In March I brought it back to Earth and explored the same theme from another perspective, more or less.

Left Behind

I started a piece in April for Cosmonautics day, but I didn’t get it finished until June - the capsule and final stage of Vostok 1 in orbit. I did a game jam and an oil painting in the meantime though!

Vostok 1

And that pretty much brings us up to date!

Portfolio

My portfolio site, which went live in July 2020, is another statically-generated site based on Pelican, but focused on image galleries instead of blog entries (using the standard gallery plugin). I wanted a central place to put my art that wasn’t a social media platform, and that would display it optimally. I’ve linked to it a few times from here despite never actually mentioning it.

It is inspired by the ideas of Matej Jan on displaying art on the internet, and attempts to display my pieces at the best integer scaling to fit in the browser window, on a background with appropriate contrast, and with no distractions.

Of course, I also wanted to keep it as small and responsive as possible, so it does this with about 3kB of JavaScript, 11kB of CSS, and a selection of minimal SVG backgrounds. Because all the art is pixel art, transferring it at 1x resolution and resizing it in the browser (as discussed in recent posts) keeps things extremely compact, with the entire portfolio currently only amounting to 390kB. The portfolio does a much better job of displaying the art than this blog does, though - here the images are just resized to max width without attempting integer scaling.
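The scaling itself is done by that small bit of JavaScript, but the underlying calculation is simple enough to sketch. Here is the idea in Python - the function name and the 1x floor are my own assumptions, not the site’s actual code:

```python
def best_integer_scale(img_w, img_h, view_w, view_h):
    """Largest integer multiple of the 1x artwork that still fits the viewport."""
    scale = min(view_w // img_w, view_h // img_h)
    # Never go below 1x, even if the window is smaller than the artwork.
    return max(1, scale)
```

For example, a 320x180 pixel-art piece in a 1920x1080 window would be shown at 6x, so every source pixel maps to a crisp 6x6 block instead of a blurry fractional scale.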

I think it looks real nice and that my art looks real nice on it, so go check it out!

Image Optimisation

In the last instalment of my epic blogging saga I recounted my discovery that the index page of this site had grown to over 1.7MB of content when loaded fresh, largely due to the images. One of my goals for this site was that it be lean - fast to load and energy efficient - and it was not meeting that goal at all. Clearly avoiding Javascript and CSS frameworks was not enough!

I immediately started thinking about how to improve the situation. Though I had previously ruled out the approach used by Low-tech magazine’s solar-powered website because I didn’t think it would fit my aesthetic, I decided to see if dithering coloured, rather than monochrome, PNGs would work better for me, and improve on the size and quality of an appropriately-sized 75% quality JPEG.

PNG Thunderdome

The one to beat - baseline 75% JPEG, Size: 16.1kB

melanie-pointing_alpha_baseline.jpg

Using Pillow and the same hitherdither library that Low-tech magazine used for their site, I iterated on a script that output hundreds of compressed variations of a given image using different dithering algorithms and parameters.
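I won’t reproduce the whole script, but the brute-force sweep it performed might look something like this sketch. The parameter grid, helper names, and filename scheme here are my own guesses (though they mirror the variant filenames shown below); the actual hitherdither/Pillow dithering call would live inside the loop:

```python
import itertools

# Hypothetical parameter grid, roughly matching the variants named in this post.
PALETTE_SIZES = [16, 32, 64]
ORDERS = [2, 4, 8]                    # Bayer matrix order (2x2, 4x4, 8x8)
THRESHOLDS = [(8, 1, 8), (2, 1, 2)]   # per-channel divisors applied to 256

def variant_name(stem, palette, dither, order, thresh):
    """Build a descriptive output filename for one parameter combination."""
    t = "-".join(str(x) for x in thresh)
    return f"{stem}_pal{palette}_dith{dither}_order{order}_thresh{t}.png"

def sweep(stem):
    # One output file per combination; the dithering and save calls
    # (hitherdither + Pillow) would go here in the real script.
    for palette, order, thresh in itertools.product(PALETTE_SIZES, ORDERS, THRESHOLDS):
        yield variant_name(stem, palette, "bayer", order, thresh)
```

Even this small grid yields 18 variants per image, so sweeping a few algorithms and sizes quickly produces the “hundreds of compressed variations” mentioned above.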

The best results I found were with the Bayer ordered dithering algorithm with a 32 colour palette, a 2x2 matrix, and an image size half the expected display size. This produced a result that was relatively readable, and reminiscent of pixel art. The size savings varied by image - sometimes up to 10kB, but often only 2-3kB as for this example. These savings are modest compared to the loss in detail, and I think this approach could only be considered because of the unique aesthetic it produces.

Palette: 32, Dither: bayer, Threshold: 256/8-256-256/8, Order: 2, Size: 13.9kB

melanie-pointing_alpha_halved_pal32_dithbayer_order2_thresh8-1-8.png

The three “threshold” parameters expected by hitherdither were a bit of a mystery to me. Through trial and error I found that some produced much smaller images, but unfortunately not at a level of quality that I found acceptable. Lower palette sizes also resulted in savings, but below 32 colours they started to look too abstract and unreadable to me.

Palette: 32, Dither: bayer, Threshold: 256/2-256-256/2, Order: 2, Size: 9.6kB

melanie-pointing_alpha_halved_pal32_dithbayer_order2_thresh2-1-2.png

A Challenger Appears

I was about ready to commit to this approach and start converting all the images when my wife reminded me that WEBP exists! After converting a few of my test images to WEBP it was clear that it had my dithered PNGs beat - half the size of the JPEG with no visible loss of quality.

80% quality WEBP, Size: 9.9kB

melanie-pointing_alpha_baseline.webp

Apparently support for WEBP is pretty good these days, but there are a couple of annoying outliers - Safari only supports it on Big Sur, and IE11 still exists, as I’m sure it always will.

As such, I decided I should probably try to fall back gracefully to a JPEG or PNG where WEBP isn’t supported. This can be achieved using <picture> and <source> elements, which let the browser choose the format it likes best.

<picture>
    <source type="image/webp" srcset="{optimal image url}"/>
    <source type="image/jpeg" srcset="{compatible image url}"/>
    <img src="{compatible image url}"/>
</picture>

Let’s Automate

The above HTML snippet presents a problem - my posts are not written in HTML but in Markdown, and processed by Pelican into HTML, and that process just results in an <img> tag by default.

I threw together a quick Pelican plugin to post-process the generated HTML and replace any <img> tags with <picture> tags, if the referenced images could be replaced with WEBPs. It also processes the referenced images to create scaled JPEG/PNG versions as well as the WEBP version, so I don’t have to do any of that manually either.
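The plugin itself isn’t shown here, but the core substitution could be sketched along these lines. This is a regex-based stand-in for illustration only - the pattern and function names are mine, and the real plugin may well parse the HTML properly rather than regex it:

```python
import re

# Matches simple generated tags like <img src="images/cat.jpg" alt="cat"/>.
IMG_RE = re.compile(r'<img src="([^"]+)\.(png|jpg)"([^>]*?)/?>')

def to_picture(match):
    """Wrap one <img> in a <picture> that offers a WEBP first."""
    base, ext, rest = match.groups()
    mime = "jpeg" if ext == "jpg" else "png"
    return (
        '<picture>'
        f'<source type="image/webp" srcset="{base}.webp"/>'
        f'<source type="image/{mime}" srcset="{base}.{ext}"/>'
        f'<img src="{base}.{ext}"{rest}/>'
        '</picture>'
    )

def rewrite_images(html):
    return IMG_RE.sub(to_picture, html)

# In a real plugin this would be wired up through Pelican's signals so it
# runs over the generated content; the exact hook is omitted here.
```

The browser walks the <source> elements in order and takes the first format it supports, so WEBP-capable browsers get the small file while Safari on older macOS and IE11 quietly fall back to the JPEG or PNG.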

Results

The index of the blog is now 778kB at the time of writing - reduced by more than half! I also replaced a particularly large and troublesome GIF with a static image, and converted some PNGs to JPEGs. Converting them first yields greater savings, because the plugin converts PNGs to lossless WEBPs but JPEGs to lossy ones.

The plugin is actually not even working fully yet - for some reason it is missing some <img> tags on the index pages, leaving them serving their original unprocessed files.

I also haven’t done anything about the pixel art images, which if served at their original resolution could be a significant saving.

So in short, huge progress, but still much scope for improvement!

I am almost sorry I’m not going to end up using these dithered PNGs though…

Tracer and Chun-Li

Update 01/06/2021

I fixed the problem with the plugin and all images are now optimised except the few in this post that are flagged to be skipped. I also swapped out all the pixel art images and gifs for 1x resolution versions. These are just being scaled up by CSS, with reasonable results.

The index page is now 539kB, the whole site is just over 1MB, and it is in the 80th percentile for energy usage according to websitecarbon.com (for whatever that’s worth). The link above showed the 75th or 76th percentile or thereabouts at the time of posting, but will show 80th now.