The USS Quad Damage

What is democracy, anyway?

I don't mean the literal definition; I mean the working theory in most people's heads

A lot of people are saying “democracy is dead”. Sometimes they mention “in the US”, but often enough you just have to figure that out via context. This is important because the people saying it aren’t necessarily American. It’s one of those strangely specific phrases, which twigs me. You can often source a quote from the way it’s phrased. It could be a specific article on the internet which could then source the material from a press release.

Not saying that’s what’s happening here. “Democracy is dead” sounds like one of those phrases which could be decades old and is only being re-used now. Still, why are so many people saying this thing? And why now? It’s like a year after the elections. I always thought democracy = elections.

This is about the best I could get from Cathy Wilcox (who has better things to do than to talk to me, and I suspect would’ve blocked me had I pushed further):

I’m saying a democracy needs checks, balances and separations of power to ensure it is not abused. They need to be seen to operate!

For the sake of simplicity I’m going to attribute this quote to basically everyone who’s been saying “democracy is dead” of late, but this was obviously a pithy statement by a single person. I’ll just try and extrapolate. Based on the timing, I can only guess this was in relation to the removal of James Comey as head of the FBI. But what does this actually mean? What are the “checks and balances” they’re talking about? What are the “separations of power”? Finally, what is “being seen to operate”?

As for “separation of power”: a little before this, courts overturned Trump’s Muslim ban, which would suggest the separation of powers is working just fine. Perhaps the separation meant is of the executive (Comey, of the FBI) from the legislative, but considering some of the people making this claim are in countries with the Westminster system, there’s no real separation between executive and legislative branches anyway.

Perhaps this is a euphemism in the same way people said “the USA's democracy was hacked by Russia”, which I would’ve interpreted to mean that they literally interfered in the voting, which they clearly did not. However, whilst I can see the... metaphor... with the Russian hacking I don’t really see it with the separation of powers.

“Checks and balances” seems to allude to the same thing, but I suppose more specifically to the idea that Trump will likely install a “friendlier” FBI head to avoid scrutiny. It’s true that this is a flagrant violation of “rules” or “norms”, but, much like Trump’s tax returns, these things are mostly common practice rather than hard rules the president is bound by. If anything, the shocking revelation should be that, as it turns out, the president is bound by surprisingly few rules, not that this particular president is violating them.

The real reason this rings hollow to me, though, is that the president sits at a level where he is accountable only to the people. It’s not as if a common police officer can arrest the US president. Ultimately, accountability needs to come from the legislature (all the representatives who got elected), and, failing that, Americans literally have the Second Amendment for such an occasion. The current US president may be freely ignoring norms, but this is not the same thing as the military dictatorships that happened in Burma or Pakistan.

But it’s the final bit of the quote which kind of scares me. Again, I’m generalising fairly heavily from the quote, but I think this is the driving force behind the “democracy is dead” crowd: The separations of power must be “being seen to operate”. That is, I think a lot of people are OK with the president having way too much power, as long as he’s not seen to abuse it. That is to say, instead of attempting to limit the president’s power under all circumstances, many prefer the president have that power, but not really abuse it in a way the majority is not OK with.

Whilst the subtext may well be “Kick Trump out, and let Hillary become president because most people voted for her”, these people are also in effect saying “If most people agreed with Trump, the Muslim ban was probably fine”. It’s possible (likely, given the amount of tea-leaf-reading I have to do here) that my interpretation is wrong, but I also lack another, more reasonable, interpretation.

I’m not saying democracy is in particularly great health. The US specifically has massive issues with its first-past-the-post system, voluntary voting, and widespread gerrymandering of electorates (not that anyone was complaining about any of this when everyone widely expected Clinton to win). In addition, democracy in general has some fairly longstanding issues.

One thing which I worry about is Tweedism. Here’s Lawrence Lessig talking about it being a “democracy crisis”:

The solution, which he mentions in the talk, is relatively easy (engage with democracy), but it is not simple. However, problems with democracy go a fair way back, to Socrates, whose problem, and by extension its solution (teach kids how to vote in school), is simple but not easy. Here’s the slightly baitey “Socrates hated democracy” by The School of Life:

Overall, when I think of democracy and the problems it faces, I look to those two videos to inform me. I worry about those who say democracy is dead when this is just a manifestation of problems which have been there, sometimes for a very long time.

Review: Read Only Memories

I review a game which leans on a gimmick to see if there's anything underneath it

Read Only Memories is badly written.

I say that first and foremost because the game really tries to put its best foot forward, from the gorgeous pixel art to the snazzy sound and excellent voice work (especially in the trailer). However, even from the get-go, the game falls a little flat.

The trailer promises a futuristic cyberpunk crime adventure on “the mean streets of Neo San Francisco”, where “the world is on the cusp of a new form of intelligence”. I couldn’t wait to discover “the biggest secret of them all”.

The fancy VCR post-processing in the trailer is not present in the game. I expected this, but was still a touch disappointed that they didn’t include it. Most of the game is also not voice-acted. Again, something to be expected, but disappointing, especially given that they could have voice-acted a few small scenes to give the characters a lot more flair.

But ultimately, none of that matters. The writing is bad. It’s so tedious I had to play this game in several sittings. I just got bored. How this game managed to be dull and derivative in a cyberpunk universe is probably the biggest “achievement” it has to its name. I mean this at every level, from the flavour text to the characters to the dialogue to the structure of the story to the universe. It’s all bad.

The flavour text probably had the smallest part to play but was my biggest peeve. The way the game works, you select an item, then you can select an action for it. Some items will have several actions (“Talk”, “Act”, “Look”, etc.) and others just one or two (only “Look”, for instance). For some reason, the game will often give you the “Act” option for several of the characters, and upon selecting it, your sidekick robot chides you with a “they probably won’t appreciate that” or a “don’t make me ashamed to be associated with you”.

That’s it. Unlike other adventure games, where you’re given a little joke as a reward for trying something stupid, there’s just a shitty piece of dialogue asking “why did you click that?”. The real question, of course, is “Why did the developers put that option in there?”. The game was essentially training me not to try things. This might be part of why people are calling this a “walking simulator”, despite the fact that there are branching narratives and several endings.

It's so tedious I had to play this game in several sittings.

There aren’t any “real” puzzles in this game. From the get-go, you get a very strong indication as to what you should do next. The only thing you’re essentially judged on is how nice you are to the characters in the game. Incidentally, I was nice to everyone, but more because I was pretty sure I would otherwise be proselytised at by the grating, annoying characters, and I took the path of least resistance, as opposed to actually empathising with those I was being nice to.

The characters and dialogue are derivative and boring. They may as well have been talking about filling out status reports and spreadsheets. Even games which have been poorly translated from Japanese (which, to some extent, this game tries to emulate) have better characters and dialogue than this game. Aside from the various annoying “vocal tics” that characters have, you could essentially take dialogue from one character and have a different character say it, and the game would continue to make sense. As much as the images in the game exude character, the dialogue serves to undermine it.

Worse, there’s no character “development” to speak of: no problems of theirs which you solve, no aspirations, no personality traits. In their efforts to make these characters appear more “mysterious”, the writers have actually erased the “characters” themselves. What’s worse is that stuff does actually happen to these characters. The way the story plays out is just so unsatisfying that this character “catharsis” is a total letdown, whether it’s a sapient robot having to deal with death (and then being suddenly cheerful again) or a hacker who wants revenge for her sister’s death (and... changes a vocal tic?). Lexi, a police officer you meet at the beginning of the game, probably has the most satisfying of the character arcs, but her relationship to you is so tenuous that you’re left wondering why you care.

The story also doesn’t go anywhere, and all the threads connect up haphazardly (or not at all). Spoiler alert: you discover almost none of what you set out to discover by the end of the game. From the start, you are tasked with finding “Hayden”, an old friend who has disappeared in mysterious circumstances. You later find out that he’s dead (expected) but not who killed him. One of your “leads” is a “rogue AI” who destroyed a Frozen Yoghurt stand, but you never find out whether there was such an AI, or what exactly destroyed the stand. In fact, it’s never really made clear why that’s a “lead” in the first place. The only thing that’s “explained” is who killed a bunch of (as far as I can tell, incidental) characters you talk to while on a wild goose chase which doesn’t yield any results. Quite literally, in the end the antagonist, an android, admits that, in retrospect, killing those people made no sense.

That’s what you’re left with at the end of the story: You do a bunch of essentially random stuff and a bunch of people die for no reason, and you don’t even figure out what you set out to discover at the beginning of the game.

The universe itself has some interesting elements, but overall it has been created by people who had the words “CYBERPUNK ADVENTURE” tattooed in their minds when they started, and they couldn’t shake themselves free of being completely derivative. Nothing in this universe says anything that hasn’t been said elsewhere, or needed to be said at all. It’s a universe full of fan service and Mary Sues, and, worst of all, the lone genius who invented AI. Nobody needs this, and nobody will remember it.

This game is very pretty, with nice music, and good voice acting when it exists, but its writing is awful.

Oh yeah the gimmick! Everyone’s gay, and the game punishes you based on how nice you are to them. It’s literally the only way you can affect the world.

Verdict: Made me want to become a men’s rights activist.

Steam Machines metareview

I take a look at how reviewers approach the Steam Machine, a hybrid of PC and console, and what they miss in the bigger picture.

Depending on who you are, a Steam Machine offers some benefits which other review sites have not talked about

What surprises me about reviews of Steam Machines is just how disconnected from the real world these reviews are when it comes to the value proposition of the Steam Machine. Ars' review claims it is worse than a Steam box with Windows on it:

If you’ve never had a gaming PC and are considering the new Steam Machines as a console competitor, the 1,500+ games in the SteamOS library can be seen as a pretty strong launch lineup. They can also be seen as merely a form of warmed-over, limited backward compatibility with a much larger Windows-based PC gaming library. This leads directly to the biggest problem with the SteamOS ecosystem as a whole: it doesn’t offer much of anything over the existing Windows-based gaming world.

However, Engadget look at it the opposite way:

Truth be told, I didn’t expect a lot from the Alienware Steam Machine when I first turned it on. To me, it was just a collection of things I’d seen before. SteamOS' TV-friendly interface has existed for years as the desktop app’s “Big Picture” mode. Almost every version of the Steam Controller I touched over the years felt like an awkward prototype. Not even the hardware was new to me — the Alpha came close to mimicking the feel of a game console, but the illusion was incomplete. I couldn’t imagine it all coming together into one cohesive whole, but it does. I almost can’t believe it.

Having read that, it shouldn’t come as a big surprise that Ars found the Steam Machine wanting because it had a smaller collection of games than a Windows PC with Steam Big Picture mode, and Engadget found it wanting because it was more expensive than a console, but not more performant (yet).

What I don’t understand is, why is it so hard to see both of these worlds at the same time? I actually think the value of a Steam Machine is at this junction between the PC and the console, and you have to magic-eye both these worlds to see where the possibilities lie for gamers. Ironically, I think regular gamers can see the value far more clearly, though perhaps only tacitly.

Depending on who you are, a Steam Machine offers some benefits which other review sites have not talked about (to wit: Playing a large library of games on a new “console”, having a console-like experience on PC games, not having to pay again for games you already own, and streaming from a PC to the living room). This is going to involve three personas: People who play consoles but don’t touch PCs, people who want to play games but don’t want to pay top dollar for them, and people who want to play with their friends.

The first group, people who play consoles but don’t touch PCs, was probably best represented in Engadget’s review. That’s why, despite the criticism, I actually found it a fair review: it was clearly coming from a persona which made sense. Even the downsides make sense to that persona: AAA games did not perform better than their console counterparts, and the Steam Machine is substantially more expensive than those consoles.

To them I would say: this is absolutely true, but as Steam Machines succeed (and I don’t think this is an “if”), developers will put more effort into PC releases, which will improve their performance relative to consoles. In addition, unlike consoles, which lack backwards compatibility, Steam Machines are by their very nature backwards compatible. As newer Steam Machines are released, gamers can keep playing their old library. Console gamers, by contrast, have games for consoles that are too old: they keep complex TV connectors around for out-of-date consoles so they can enjoy those old games, and a pile of consoles which are not backwards compatible. With a Steam Machine, they’re not just paying for the hardware; they’re paying for an insurance policy on their old games.

Next, consider Valve’s strategy of a “Steam exclusive”. Valve have gone on the record to state that they will not make any “SteamOS exclusive games”, and I think that’s true, but think about it for a moment. Right now, an indie game is already sort of a timed exclusive for PC: making a game for Xbox or PlayStation is expensive, and all of these experiments come to PC first. However, the first thing indie devs do when they become successful is release on Xbox and PlayStation, because they know that’s where a chunk of the gamers are. It’s not money they can pass up. However, once Steam Machines become successful enough to cater to console gamers, indie devs might just not bother. Why go through the expense of rewriting games for a whole other platform?

I believe this is something console manufacturers are aware of, which is why the latest round of consoles (PS4, XBOne) have a very similar architecture to a PC. They want porting to be simple, not for AAA devs, but for knock-it-out-of-the-park indie devs. This is an attempt to bring the PC to the console gamers. Steam Machines are a way to get console gamers to the PC.

Second, consider “gamers”: people with PCs but also a number of consoles. The Ars review claims that the Steam Machine is inferior to a Windows machine running Big Picture. In reality, gamers probably have a living-room setup without a keyboard and mouse attached, and the constant fiddling that comes with a Windows PC running Steam Big Picture takes away from the experience. Gamers with a bunch of consoles and a PC tend to think of the consoles as a bit of quick fun and the PC as a time investment. Being able to “fiddle” and play a Windows PC on your TV isn’t really all that great. Being able to play the games available on your PC (even a limited library) without the time investment is a game changer. Even your saves carry over.

Third, consider Free To Play. The new generation of consoles have apparently “embraced” F2P, not that you can really tell. PC and F2P is like milk and... something that goes with milk. There are a ton of F2P games on Steam, and if you buy a Steam Machine, whether or not you’re a console gamer, whether or not you have a big library of pre-existing Steam games, you will be able to play F2P games. From your living room.

I’m going to say this again in big blinking lights, because I have no idea how no review of the Steam Machine has made this connection yet: you can play Dota 2 from your sofa. You can watch Dota 2 replays from your sofa. You can watch your friends’ livestreams of Dota 2 from your sofa. How many people just said “shut up and take my money”?

Finally, consider the systemic reason: many people have consoles because their gaming group has friends who simply will not get a PC.

Let me go off on a tangent for a second. People often get quizzical when gamers violently defend their console choice. This is often explained by a psychological phenomenon: humans try to justify their choices. However, there’s a much, much stronger reason: cultural capital.

See, to a rich person who can afford all the consoles, the “console war” is a waste of time. But imagine you can only afford one. You don’t want to be the poor sod who got a Wii U when all your friends got an XBOne. Even if games weren’t mostly multiplayer nowadays, the mere fact that all your friends are talking about an Xbox game you cannot play means you cannot add to the conversation, which means you are instantly ostracised from the group. It is extremely important that all of your friends get the same console; hence the console wars.

If you have a diverse group of gaming friends, you also have a diverse bunch of hardware. That means some or all of the consoles, and a PC. Notably though, even though some of your friends will travel between XBox and Playstation, (and some in the same position as you will transition between PC and consoles), very few of the console gamers will come to PC. They just aren’t that technically inclined. A console-like PC, one that truly makes it a no-brainer to play a PC game, is a game changer for the PC gamer, because it means a console gamer can finally play a game with them, or share a cultural experience from the PC space with them.

I think these things are all “obvious” to gamers. I think they can feel the value of a Steam Machine, even if they cannot put it into words. The social pressures of gamers in groups, the opportunities for cross-play, and the ability to play a few dyed-in-the-wool PC games all mean that Steam Machines are a much better idea than the reviews let on. It’s just a pity that the world of reviewers is so far removed from the experience of players.

Ad blocking is morally right

Think about how ad blocking works, and it's easy to see why the moral quandaries lie with the ad and content companies, not with the individual.

If these companies cannot be moral, why does all the moral responsibility lie with the viewers?

Ad blocking only works because ads are served from an entirely different site than the content. That’s how the browser can tell whether a given resource should be shown or not. Simple ad blockers keep a blacklist of advertising sites whose requests they reject. Privacy tools like Privacy Badger go further and ignore everything not from the originating site.
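As a minimal sketch (all the domain names here are made up), both approaches boil down to comparing the host a resource is requested from against either a blacklist or the page’s own origin:

```python
from urllib.parse import urlparse

# Hypothetical blacklist of known ad-serving domains.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def should_block(request_url: str, page_url: str, strict: bool = False) -> bool:
    """Decide whether to block a sub-request made while rendering a page.

    Blacklist mode: block only domains on the blacklist.
    Strict (Privacy Badger-like) mode: block every third-party request,
    i.e. anything not served from the originating site.
    """
    request_host = urlparse(request_url).hostname
    page_host = urlparse(page_url).hostname
    if request_host in BLOCKLIST:
        return True
    if strict and request_host != page_host:
        return True
    return False

# A script loaded from a known ad server is blocked either way:
print(should_block("https://ads.example.com/banner.js",
                   "https://news.example.org/article"))          # True
# In strict mode, any third-party resource is blocked:
print(should_block("https://cdn.example.io/lib.js",
                   "https://news.example.org/article", strict=True))  # True
```

Real blockers do more than this, of course: they match against registrable domains rather than exact hostnames and ship large, community-maintained filter lists instead of a hand-rolled set. But the decision itself really is this simple, and it only works because the request leaves the content site’s domain.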

Many content sites claim this is a moral wrong. Blocking ads is depriving them of income. However, in order to hold this view companies must ignore their own moral duties, and we can see they have failed at every level.

The first question one should ask is: why not serve the ads from the same site as the content? After all, this would mean ad blocking would stop working entirely. There are many answers to this.

Firstly, ad companies don’t trust content companies. If a content company had to show an ad, it might simply not show it. If the content company shows the ad, it also holds the information about how many click-throughs and purchases resulted from the ad. Ad companies simply do not trust content providers to report this information accurately, as there is a direct economic incentive to lie.

Secondly, ad companies want to gather more information on their users. If a content company served the ads, the content company would have the data, not the ad company. Moreover, users can only be aggregated across sites if the ad company directly sets and reads cookies. Ads work the way they do in order to invade your privacy better.
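A toy simulation makes that mechanism concrete (the classes and domain names are invented for illustration): because the same ad host is embedded on many unrelated sites, a single cookie is enough to stitch a user’s browsing history together.

```python
import itertools

class AdServer:
    """Hypothetical third-party ad host embedded on many content sites."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.profiles = {}  # tracking cookie -> pages the user was seen on

    def serve_ad(self, cookie, referring_page):
        if cookie is None:
            # First visit anywhere: assign a fresh tracking cookie.
            cookie = f"uid-{next(self._ids)}"
        # Every later request carries the cookie back, letting the ad
        # company aggregate the user's history across unrelated sites.
        self.profiles.setdefault(cookie, []).append(referring_page)
        return cookie

class Browser:
    def __init__(self):
        self.cookies = {}  # third-party host -> cookie value

    def visit(self, page, ad_server, ad_host="ads.example.com"):
        # Rendering the page triggers a request to the embedded ad host,
        # sending along any cookie previously set by that host.
        self.cookies[ad_host] = ad_server.serve_ad(
            self.cookies.get(ad_host), page)

ads = AdServer()
me = Browser()
me.visit("https://news.example.org/politics", ads)
me.visit("https://shop.example.io/shoes", ads)
# The ad company now holds a cross-site profile neither content site has:
print(ads.profiles)
```

Neither the news site nor the shop ever sees the other’s traffic; only the ad host, sitting on both pages, can join the two visits into one profile. Served first-party, the ads would lose exactly this capability.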

Thirdly, content companies do not trust ad companies to run arbitrary code on their servers, so they prefer to ship that code to the browser. Instead of integrating with the ad company at the server end, content companies universally integrate at the browser end, worsening the user experience. Worse, they end up authorising a third party to execute arbitrary code on a user’s PC.

The justification for this sort of behaviour is basically “it’s cutthroat business”, but where’s the morality in that? If these companies cannot be moral, why does all the moral responsibility lie with the viewers? In the end, ad blocking is sensible, and works only against the most heinous intrusions into one’s computer. Unfortunately, that intrusion is now “industry best practice”.

Using an ad blocker is morally right. Don’t listen to the content companies.

The waning of Twitter

I don't think Twitter has long to go, at least for keeping my interest

So, reading Twitter is like eating chips

If you noticed (and I won’t blame you if you didn’t) my blog has been down for
about a year now. Having a blog is a bit quaint today, with modern social
networking if you’re one of the plebs, or Medium if you’re a wannabe writer, or
whatever else the kids are using these days.

While I was redesigning this thing, one of my aims was to be able to put
“tweets”, images, and other things on this blog, the aim being to act like "my
end" of a social network. Just hook up an RSS reader, and it would (roughly)
look like a dodgy facebook. I’d feel much more at ease about it, because this
is on infrastructure that I control, there’s no strange analytics going on, and
I could be more responsible to anyone reading it, too.

But as the year has worn on, so has Twitter. The service is a shadow of its
former self, and the people on it are also falling into patterns of predictable
behaviour. If the medium is the message, then Twitter as a medium is proving to
be dull, and ageing poorly.

In part this is because only so many messages can fit into 140 characters. The
types of conversation that can be carried on in this manner are limited, the
types of publishing it can sustain are limited, and it’s a medium with a
use-by date built in; the timeliness of a tweet is everything. Most of all,
though, it’s those exemplary tweets, the ones you feel the service was meant
for, which are truly disappointing. They end up looking like the cute, pithy
sayings that go on bumper stickers, and have about the same caloric content as
chips.

So, reading Twitter is like eating chips: you want “just one more”, and so you
end up consuming many. But the effect of reading Twitter is also like eating
chips. You feel like you haven’t learned anything new; conversations end up
devoid of meaning, either total agreement or confusion and frustration at not
being able to express yourself within that brutal 140-character limit. It
doesn’t feel like the sort of thing you’d want to keep around for posterity.

If anything, the actual text is increasingly full of metadata, and the
“content” is something on other sites, in Twitter’s images, or in Vines. The
tweet is littered with links, hashtags, people’s names, and you can hardly read
it. Click the expand button, however, and there’s the actual content, some text
that someone wrote, some article on the New York Times, whatever. Ironically,
it even kills the “chips” feel. You’re not even reading a tweet any more,
you’re reading a reference to content far away.

And wow is that content truly disappointing. With RSS readers you can tweak and
curate what you read so that only interesting opinions show up, you can at
least contemplate filtering away opinions you don’t care about or data that’s
not of interest to you. On Twitter, it just all gets thrown your way. Things
“other people” read or care about. Things on god-awful news sites with great
writing and not much else. Opinions abound. Data is often lacking.

One of the more recent ideas hovering around in "reading things on the
internet" is that the world has a lot of data now, and really the value is in
being able to trim that data down. The “firehose” of information is worthless
to look at in person. What is more important is dividing it up into chunks,
throwing those chunks away when useless, and having the rest join together
into a cohesive mass.

So that’s where my attention is going to go: Can I make this site one cohesive
mass, and can I take other data on the internet that I care about and turn that
into something that other people find valuable? Ideally, this place will
eventually look a lot more like Aristotle Pagaltzis’ excellent
plasmasturm, and a lot less like Twitter. Wish me luck.

Social Privilege and Online Trolls

I try and alienate myself from everyone, ever.

I’ve generally stayed away from this topic, not only because it is fraught, but also because it leaves me “on my own” as it were. No “sides” of the argument agree with me, and my opinion antagonises both of them, probably needlessly. Further, the opinion doesn’t have the memetic quality that people could latch onto. It’s hard to pick up, hard to agree with, and in the end not even really that important.

This is a dramatisation, but the reality is not far: On day one, two minor radio celebrities play a “prank” on an unsuspecting member of the public, or interview someone with a terrible secret, or carelessly lambast someone with a mental health issue. Day two, they apologise profusely and talk about how no one could have foreseen the consequences. Day three, they complain about internet trolls and how they are victimising everyone (including the hosts!), and these terrible people need to be taught a lesson.

Alternatively, an internet celebrity gets criticised, perhaps legitimately, and starts blocking and reporting the critics for “abuse”, “trolling”, or “internet bullying”. Then they post public or private data of those doing the “abusing”, telling their readers to have at it.

The pattern is the same: use celebrity to avoid or deflect scrutiny, use celebrity to stir up outrage at your detractors, then paint yourself as a victim, somehow. I’m seeing it more and more often, a kind of brute-forcing of celebrity power. What makes it worse is that often these are people whose views I share or whom I otherwise respect.

Now, calling out trolling on the internet is not, by itself, a bad thing. It’s the combined pattern of behaviour, where “abusive” is a code word for “disagrees with me”, and where the supposed abuse sits comfortably alongside tacit incitement against the “abusers”.

Long story short, I saw two people doing what I thought was exactly that, and I called them out on it. I was wrong. Basically, they were railing against abusive behaviour on the internet in general, and weren’t trying to take advantage of their status. Unfortunately, the attitude does empower those who are abusing their status on the internet.

It’s a difficult topic, because this sort of thing does affect internet “celebrities” more than most of us. They have a large community around them, and getting their attention means getting the attention of everyone who listens to them. For an internet bully or troll, they are a high-value target. For most of us, a troll or an internet bully isn’t unheard of, but they tend to be someone we know personally.

I guess what I’m saying is, this is mostly a problem for celebrities, and a problem that is directly offset by the power they wield through others on social media. Insofar as it applies to normal people, the scale of the issue has been far overblown because of the effect it has on the powerful.

This usually results in knee-jerk reactions from politicians, who start yammering on about “real name policies” on the internet, which really amount to total internet surveillance. While this sort of thing hurts internet users in general, it greatly aids in shutting down dissent against the powerful, and that includes internet celebrities, even minor ones.

Having said that, I was reactionary. And I probably didn’t say what I wanted to say the way I wanted to say it. What I wanted to say was more like the above. It probably doesn’t occur to most people when they do things that benefit them but hurt others. In the same way as how white privilege is invisible, so is social privilege. This is not the sort of thing we want to entrench in our social mores.

Waving the flag

On what it means to be a pirate

I care not for your laws, for I am a giant.

I’ve been having this feeling that we’ve been doing Pirate – The Movement™ wrong in Australia. I’ve never been able to put my finger on it, either, which is frustrating because you can feel pirateyness more than you can define it. There’s been a long discussion on The Why of Pirate Party Australia which is an attempt to distill what it means to be a pirate. I wrote my thoughts at the time:

A YouTube clip linked in IRC talking about the PPAU described us as “humanists”, and that rang especially true to me: Pirates have faith in people

It still rings true, but the problem is this: Lots of people have faith in people, but most of them are not Pirates. I failed to reason about this correctly even though I literally linked Amelia Andersdotter’s insightful post at the beginning of the post. Hers is an important post in the discussion for what it means to be a Pirate, I think.

I keep having this sense that the Pirate Party is clouding my thoughts about the Pirate movement, which existed long before the political party, even if not in those terms. Back in the demoscene, if someone started making video games, they were called “sellouts”. I wonder if Future Crew, makers of Futuremark, started making benchmarking software instead of video games because they didn’t want to be considered “sellouts”. It might sound like a random thought but it’s a salient one for chasing our identity.

I keep having this thought: We are defined by our weapons. Most people might not realise this but the act of being a Pirate is an act of intellectual crunk and bombast. Whether it is cracking into software, making intros or demos, or even breaking into online systems, it’s meant to assert dominance. It’s meant to be intimidating. It says "I care not for your laws, for I am a giant."

But it goes further than that. The fundamental act of Piracy is one of social engineering. It actually says: "We care not for your laws, for we are giants." The Pirate Bay is a perfect example of this: It is technology that empowers all of us. It is one of the most trafficked websites on the internet, it is high performance, and it’s still running, despite years of trying to shut it down via various means.

And this is the second part of the meaning of dominance. It doesn’t just dominate in a technical sense, it dominates in a social sense. The Pirate Party is a product of that dominance. It says “we exist, we cannot be ignored, and everyone is on our side”.

A Pirate’s primary fuel is creativity and vast swathes of spare time. Pirate Party Australia has been quite creative, and has put in prodigious amounts of time, all things considered. But we’re still acting a lot like a political party. Don’t get me wrong, we absolutely need to do this, and when it comes to actual policy, we’re a force to be reckoned with. Not only do we have a vast number of well researched policies, but we’ve also made a large number of submissions to various reviews on several policies. As far as it goes, we cannot be accused of not taking part in the process. However, that isn’t very “Piratey”.

When we talk about recruiting, we also talk in terms very much like a political party: Going into universities and recruiting students, setting up societies, and the general political fare. However, the people we need aren’t 20, they’re probably closer to 15. And we don’t want them politicking, we want them creating. More than that, we want them to create intellectual crunk and bombast. Because this is how we get to a stage where we are giants, and where we cannot be ignored.

So in the end, have I figured out the why of being a Pirate? I don’t think so. For all the catharsis I’m still lost. All the “advice” here is purely speculative. But I feel like I’m getting closer. The Party is a veneer. It speaks of the movement, but not for the movement. In Australia, it feels like we’ve mostly been taking advantage of the movement rather than contributing significantly in our own right. I think that if we focus on the latter, we’ll eventually see a much stronger Pirate movement in this country, and therefore a stronger Pirate Party. And maybe that new generation of Pirates contributing to the movement can finally answer that question: What does it mean to be a pirate?

On Javascript

A while back I decided to dive headfirst into Javascript with node.js, knockoutjs, and couchdb. I reflect on my experience.

My app, infonom, is about halfway to MVP. I decided to go all-in on javascript, mainly to force myself to learn the language and how to use it in a decent context. Honestly, there’s a lot to like. I can see how a lot of people are into it. While I personally can’t survive without the velvet rope of type safety, I did enjoy the sort of flexibility that prototype based languages give you. It’s certainly an odd kind of flexibility, but also a good kind.

Prototypes still confuse me, mostly because of what properties are copied and what properties are mirrored, but it’s a fairly powerful concept. The idea that your “class” hierarchy is based on live objects that have the opportunity to be changed and updated can enable a bunch of tricks that generally require far more impressive language features. You can mimic a class-style system, or traits, or some functional semantics, all by using and abusing the idea of prototypes. In the end I’ve learnt enough about Javascript to make peace with it.
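To make the copied-versus-mirrored distinction concrete, here’s a minimal sketch (the names are invented for illustration): properties you assign land on the object itself, while everything else is looked up live on the prototype chain, so changing the prototype after the fact changes behaviour everywhere.

```javascript
// The prototype is a live object, not a copied template.
const animal = {
  speak() { return this.name + ' makes a sound'; }
};

const dog = Object.create(animal); // dog's prototype IS the animal object
dog.name = 'rex';                  // "copied": an own property of dog

console.log(dog.speak());          // 'rex makes a sound' (mirrored from animal)

// Change the prototype later; dog sees the update immediately.
animal.speak = function () { return this.name + ' barks'; };
console.log(dog.speak());          // 'rex barks'
```

Only `name` lives on `dog` itself; `speak` is never copied, which is exactly why the late change shows through.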

But I realised very quickly that I hated each of the three technologies I used: knockoutjs, node.js, and couchdb. But hold on because there’s a surprise twist coming.

Let’s start with node. There are a few reasons I like Node. One is that it makes it very hard to block: the code you must write to block is annoying to write. The second is that it “solves” multithreading in a novel way: by not having any. If you want your code to scale, it must work in a way that’s parallel, and if you can “distribute” your code across several cores, you can distribute it across several machines. With normal multithreading there’s a huge discrepancy between running on many cores on a single machine and switching to running the service across multiple machines. Finally, it makes coding fun, at first: you’re writing node and refreshing your browser and it’s all very quick and easy.

But it has problems. Incidentally, the node haters are just plain wrong. Without getting too distracted: Python runs slower than js, every language with blocking somehow has very slow frameworks (I wonder why), Node makes concurrency easy across computers, and “sharing code between front-end and back-end” is actually supposed to mean “I can decide later where I want this to run”, which you can’t do in, say, Java.

Where was I? Oh yeah! It does have problems! The first is that it’s javascript, which is an awful language. I know, it’s got redeeming features, but that’s like saying “smoking helps you lose weight”. It also quickly scales to the point where it’s no longer fun. The language is no fun, the structure is no fun, the libraries are shit, and you start to wonder what the hell you’re doing in the baby pool. I could just as easily do this in another language by removing all of the frameworks. It all starts to grate. The language grates, the environment grates, and the benefit you had at the beginning, “quick”, fades into the background.

In short, I’m looking for a way out of Node.

Now, onto knockout. I’m actually a big fan of MVVM. I think this is how applications should work, especially on the web. But after thinking about the zen of knockout, I’m finding myself increasingly at odds with the framework. The thing is, knockout treats JS as the view-model and HTML as the view, but HTML isn’t the view! If anything, it’s literally the view-model! It’s the representation of the model for the view (the browser). So, in a way, knockout is really just an elaborate translation layer between javascript and HTML. I still think there’s a kernel of knockout that’s valuable, but unfortunately it’s not the code.
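That kernel of knockout that I do think is valuable is the observable: a value that tells its subscribers when it changes. Here’s a toy version to show the idea; this is a sketch of the concept, not knockout’s actual code.

```javascript
// A toy observable: read with obs(), write with obs(x), and subscribers
// (stand-ins for DOM bindings) are notified on every write.
function observable(initial) {
  let value = initial;
  const subscribers = [];
  const obs = function (newValue) {
    if (arguments.length === 0) return value; // read
    value = newValue;                         // write
    subscribers.forEach((fn) => fn(value));   // notify the "view"
  };
  obs.subscribe = (fn) => subscribers.push(fn);
  return obs;
}

const name = observable('world');
const rendered = [];
name.subscribe((v) => rendered.push('hello ' + v));
name('pirates');
console.log(rendered); // [ 'hello pirates' ]
```

Swap the `rendered.push` subscriber for something that writes into the DOM and you have the skeleton of knockout’s data-binding.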

In short, maybe I’m looking for something more akin to d3.js instead of knockout.

Finally, let’s talk about CouchDB. CouchDB is a document-oriented DB written in Erlang. It has multi-master replication and effectively “solves” CAP in a particularly elegant way. However, it commonly uses Javascript for its map-reduce functions. It’s also no fun par excellence. Unlike “non-scalable” databases like SQL (or Mongo), Couch literally asks you to solve all the scalability problems up-front. It really does your head in. You want to make a simple website, and you have to start considering copy-on-write and merge conflicts. There’s also the temptation of doing micro-optimisations to lower the number of REST calls you make to Couch (talking to Couch is done over a REST interface, in JSON).
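For the curious, this is roughly the shape of those Javascript map functions. The document fields are invented, and runView is only a crude stand-in for what Couch does server-side (the real thing also handles reduce, incremental updates, and so on).

```javascript
// A CouchDB-style map function: called once per document, building view
// rows by calling emit(key, value). Field names here are made up.
function map(doc) {
  if (doc.type === 'article') {
    emit(doc.author, 1); // one row per article, keyed by author
  }
}

// A crude stand-in for the view machinery, just to run the map function.
function runView(docs, mapFn) {
  const rows = [];
  global.emit = (key, value) => rows.push({ key: key, value: value });
  docs.forEach(mapFn);
  return rows;
}

const rows = runView(
  [{ type: 'article', author: 'amelia' }, { type: 'comment', author: 'bob' }],
  map
);
console.log(rows); // [ { key: 'amelia', value: 1 } ]
```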

I can’t tell you how irritating it is trying to figure out how your data model is going to work when in reality you actually don’t care just yet and you promise to think about it later. However, you also know better and you won’t think about it later and it’s actually a good thing Couch is forcing you to do this. The good news is that once you’re done, you’re done. Start whacking the data in and... as they say... relax. Though it’s relaxing after like a year of constipation.

In short, I still like Couch but man I’ll really consider an intermediate data model before I use persistence in future apps.

So there you have it. Fuck node off, change knockout to be completely different, and keep Couchdb. In conclusion: Javascript!

Dynamic languages vs static languages

I talk a little about the benefits of so-called dynamic programming languages.

We can look at programs in two different ways, firstly as the space inside a program, and secondly as the space between two programs. It has always been the case that more open systems concentrate more on the space between applications, and more closed systems concentrate on the space inside an application. We can see large monolithic pieces of proprietary software such as Adobe’s Photoshop have a lot of functionality built in, but do not communicate with applications outside themselves very well. On the other hand, small applications on unix systems, such as ‘cat’ or ‘cut’, can chain and connect to each other. If we want a dynamic and flexible operating environment, we must think deeply about the spaces between applications.

Importantly, though, we aren’t talking only about applications, but about services, too. In fact, all data shared between applications, whether via files on a filesystem, via a protocol, or via an API, shares a common thread: the data is either self-describing or it is not. Note that in reality, there’s no such thing as “self-describing data”. Mostly, the data is in a strict format, but that format might contain a description of the format. Notably, formats such as XML are strongly self-describing, formats such as JSON are loosely self-describing, and formats such as an IP packet are not at all self-describing. Also of note, even a binary format such as ASN.1 can be considered self-describing.

There are also grey areas such as protobuffers (or maybe even bson?), where a standardised externalised description will construct code that forms a generator / parser combination for non self-describing formats. Broadly, though, we may split formats into “self-describing” and “not self-describing”. Part of the reason is actually design intent. The design intent of a “not self-describing” format is to minimise the error space for a packet of information. A packet may, for instance, have a CRC which allows for rejection of the entire packet wholesale. On the other hand, a “size” field of a packet will be described as “n + 1”, so that a value of “0” is still considered valid. This is not being cheap on bits! If a value within a packet is invalid, what is a parser to do? In order to prevent this conundrum, all combinations of values in a packet should be valid, save for errors which allow an entire packet to be rejected.
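A tiny worked example of that design intent, using an invented packet layout: the length field carries “n + 1” semantics, so a stored 0 still means one byte of payload and no value of the field is ever invalid, while a checksum lets the parser reject the whole packet rather than agonise over individual fields.

```javascript
// Hypothetical packet: [len][payload...][checksum], where the stored len
// byte means "len + 1 payload bytes", so 0 is still a valid value: there
// are no dead states in the field.
function parsePacket(buf) {
  const payloadLen = buf[0] + 1;                      // the "n + 1" trick
  const payload = buf.slice(1, 1 + payloadLen);
  const checksum = buf[1 + payloadLen];
  const sum = payload.reduce((a, b) => (a + b) & 0xff, 0);
  if (sum !== checksum) return null;                  // reject wholesale
  return payload;                                     // nothing left to "validate"
}

const good = Buffer.from([1, 10, 20, 30]);            // 2 payload bytes, sum 30
console.log(parsePacket(good));                       // <Buffer 0a 14>
const bad = Buffer.from([1, 10, 20, 99]);             // bad checksum
console.log(parsePacket(bad));                        // null
```

Notice the parser has exactly one failure mode: the packet as a whole is good or it is gone.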

By the same token, a “packet” of information can be defined as a piece of information which can be safely rejected. What “safely” means here is a little broad, but note that this doesn’t just apply for streaming protocols where both applications are present. A “packet-based” file format is quite common from compression formats to video codecs. The key is the safety and predictability of the parser: The parser simply doesn’t have many error cases to consider based on errors in the packet.

On the other hand, the design goal of a self-describing format is coping with change. These formats generally sit only loosely on top of a packet format; XML or JSON, for instance, is just a long String in a particular encoding. Unlike the non self-describing format, the self-describing format leans heavily on both the program and the parser to ensure validity. An XML or JSON parser, for instance, has many checks it can perform to ensure the validity of the message before it is even accepted, and even here there are many error cases that create grey areas for a potential parser.

For instance, if there are invalid characters, what does the parser do? What if there is a formatting error half way through a stream? What if the text validates but not under a more strict interpretation? What if the text validates but does not match the schema? What if all of that is true but there are other logic errors in the correctly formatted interpretation of the data? All of these are configurable options that a parser is typically initialised with. Even after all of that, there are various error conditions and corner cases to consider.
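JSON.parse makes a handy illustration of the first two questions versus the schema question: formatting errors are the parser’s problem and fail up-front, while schema-level errors sail straight through and remain yours.

```javascript
// A formatting error: the parser rejects it before any data reaches us.
let error;
try {
  JSON.parse('{"size": 3,}'); // trailing comma is invalid JSON
} catch (e) {
  error = e;                  // SyntaxError
}

// Validates as JSON, but violates the schema we had in mind: the parser
// can't help with this one.
const ok = JSON.parse('{"size": "three"}');
console.log(typeof ok.size);  // 'string', where we expected a number
```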

What does this have to do with statically typed and dynamically typed programming languages? Well, I’m about to argue that self-describing formats are most attuned to dynamically typed languages, and non self-describing formats are most attuned to statically typed languages.

This mostly has to do with the internalisation and externalisation of type information. In statically typed languages, as in non self-describing data, the type information is externalised. For data, by definition type information is externalised when it is not in the data. For programs, almost by definition, the closer type information gets to runtime, the more “dynamically typed” the language is. The whole point of a statically typed language is that you know the types at compile time.

There are, of course, various grey areas. Java has a runtime and reflection, but is “statically typed”. In my view, though, the runtime makes it dynamically typed when using reflection. Indeed, when you look at the error cases when using, say, Spring, it very much seems like a “dynamically typed language”.

On the other side of the coin, dynamic languages must keep all type information at runtime. This is as much about verifying type information that isn’t available at compile time as it is about determining the types of objects at runtime. While we are aware of Python and Ruby being fully dynamic languages with a full runtime, even languages like C++ and Java have RTTI and reflection. The easy way to think about this is that C++ and Java will throw Exceptions at runtime because the type information is incorrect, just like a dynamic language. In the same way, self-describing data describes its own type information within the data. To some degree, you do not need to know the structure of the data or what it contains.

Obviously, it is possible to use internalised data sources with statically typed languages and externalised data sources with dynamically typed languages, but it’s not an easy fit. For externalised data sources in dynamic languages, there’s a lengthy decomposition into their component parts, whereas in statically typed languages, it’s usually no more difficult than defining the data structure in the first place. Similarly, internalised data structures require complex parsers into static data structures in statically typed languages, whereas a dynamically typed language may not even have to care about the structures and types. It just inspects and alters them as if they were first class objects (in the case of JSON, they actually are).
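A small sketch of that “inspects and alters them as if they were first class objects” point (names invented): this helper renames one key on any object and passes everything else through untouched, with no schema or data classes in sight.

```javascript
// Rename one key, pass the rest through unexamined.
function renameKey(obj, from, to) {
  const out = { ...obj };
  out[to] = out[from];
  delete out[from];
  return out;
}

const msg = { user: 'amelia', payload: { anything: ['goes', 'here'] } };
const renamed = renameKey(msg, 'user', 'author');
console.log(renamed.author);  // 'amelia'
// renamed.payload came through untouched, whatever its shape was.
```

In a statically typed language this one-liner tends to demand a type for `msg`, a type for the result, and a mapping between them.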

The links go deeper though. In a static language and an externalised data format, you get the same guarantees: nothing has changed, by definition, since you compiled the code. The data format is the same, and so is the code interpreting it. Nothing can go wrong save for very specific error conditions. You effectively get static typing not only over the program, but also over the data it operates on. Contrast this with an internalised data format in a static language. All of a sudden you have error conditions everywhere which aren’t in predictable places. You may have noticed that static languages tend to have parsers that are very strict. The reason is purely to offer some clarity over how the program can fail. Having a statically typed program that can take a loosely formatted internalised data format and not explode (such as a browser) is no mean feat.

In the dynamic landscape, however, not only can this loosely formatted data be accepted, it can be passed directly into functions which carry out the actual computation. Even those functions need not know everything about the data structure. A module might be moving objects around or re-organising them, but it really doesn’t care what’s inside. As long as the structure is broadly the same, the code will continue to work. Even if the structure has changed completely, if the atoms remain intact then functions can operate over those atoms, keeping the structure the same. Even if both of those change, a dynamic language can introspect and deal with changes fairly elegantly.

This is where the idea of dynamic languages just being unityped static languages sort of falls down. If that were true, you couldn’t add two strings and two numbers as distinct operations. Once a value has been bound as an integer, it can be added, but importantly, if the language doesn’t know what the type of some data is, it doesn’t matter. As long as the transformations on that data don’t mess with the data the program doesn’t know about, the code just keeps on working. You can grab a bunch of XML data and pass it through a bunch of functions that do an addition operation, and the functions don’t need to know what data they’re adding, because that forms part of the internalised description of the data. Is it an integer XML attribute? The integers get added. Is it a String? They get concatenated. Is there other data or other data structures nearby that the code doesn’t understand at all? Doesn’t matter; it still executes correctly.
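The point about addition is easy to demonstrate: the `+` below never learns what it is adding, because the values carry their own types, so the same traversal sums numbers and concatenates strings.

```javascript
// The same code adds integers and concatenates strings: the type lives
// in the data, not in the function.
function addAll(values) {
  return values.reduce((a, b) => a + b);
}

console.log(addAll([1, 2, 3]));      // 6
console.log(addAll(['foo', 'bar'])); // 'foobar'
```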

And this is what I’m getting at: in a world with many small apps, dynamic apps are king. This is why most scripts that keep a computer running nicely were written in Shell, then Perl, and now Python. These applications are effectively wiring data between applications. They need to know very little about the data, even though they may need to manipulate it before passing it on. Want to write a log parser? Python is probably far easier, more flexible, and more useful. Want to take a bunch of deep and complex JSON and parse out a simple calculation? Maybe Javascript is just the ticket. Having dynamic languages as high level co-ordinators of other applications is probably a very good idea.
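As a trivial example of that wiring role (the log format here is invented): pull one field out of a line and pass it on, knowing almost nothing about the rest of the line.

```javascript
// Extract the level from a log line without caring about the rest of it.
const line = '2017-06-01 12:00:01 ERROR disk full on /dev/sda1';
const level = line.split(' ')[2];
console.log(level); // 'ERROR'
```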

It feels like people treat static or dynamic languages as a religion of sorts, but the fact of the matter is, these languages are more indicative of the kinds of problems we’re solving as opposed to the way in which we’re solving them. Static languages treat data as ingots of steel, and that’s good for when you want data that’s got to take a beating. Dynamic languages treat data like putty, and that’s good for when you need it to fit in that damn hole. In the end we need both kinds of languages sitting next to each other if we’re going to be able to process data correctly and flexibly, and we should be able to understand how to apply each to have the most powerful code.

In the end, the argument over dynamic and static languages is really an argument over which kind of data structures the program should be processing: data where the structure is expressed internally to the data structure, or data where the structure is expressed externally to the data structure. Ultimately, if we want the most flexible software, we need to know when to use which kind of data structure, and how to pass these data structures between programs in a landscape where the code is always changing. I feel that having static and dynamic languages co-operate in a multi-process environment will yield better and more flexible architectures than with a language monoculture.

Monads for Java programmers

I try and translate the value of monads from the wild and crazy world of Functional Programming to the rough and tumble world of Java programming.

I’ve had about three goes at understanding monads. The first alone, the second via some category theory training at work, and the third via this tutorial. While each of the three approaches has helped me understand what’s going on, the monad tutorial is probably the best bang for buck. It’s quick and easy to understand.

One thing I’m aware of is that everyone who has tried to learn category theory has written a monad tutorial. There’s a joke going around that it only demonstrates their understanding of Monads, and doesn’t help anyone else. I’m going to attempt to buck that trend by explaining monads by using Java, and the terminology of traditional OO programming.

So firstly, the thing to understand is that Monads are a design pattern. Ultimately, what they are about is solving a design problem, in the same way as Interfaces, Factories, or Builders solve design problems. The difference is that usually these design patterns are about layering software, and abstracting responsibilities. Monads, however, are about abstracting side-effects. I think programming as a discipline has seen how important that is, so let’s have a look at how Monads solve this problem:

interface Monad<I> {
    <O> Function<Monad<I>, Monad<O>> wrapFunction(Function<I, Monad<O>> fun);
    Monad<I> wrapValue(I val);
}

Before I continue, I’m doing my best to represent the Monad pattern in a “natural” way in Java. In languages with more advanced type systems, the same “code” above will actually be far more powerful, but even in Java-land, this is quite a useful pattern. Also, the Function definition is from Guava. I hope you’re familiar with it.

The above Interface will look odd to start with. After all, why have these “wrap” functions on an interface? What’s the interface for? How do you use it? How does one even remotely abstract side-effects with this? The answer is: Category Theory. At this point I wave my hands about and say “WOOOO” and everyone’s really impressed with how smart I am. Seriously, though, the theory is clever trickery but when you actually work with it in practice it’s pretty straightforward.

People talk about Monads as “boxes”, and that’s an apt metaphor, but be careful, these are mathsey-boxes, so the metaphor will break down easily (there’s no unwrap!), and the value of them is not in the wrapping and unwrapping anyway. Note that these don’t work at all like the Proxy or the Facade design patterns, which you can kind of think of as “boxes”.

In order to demonstrate its use, I’m going to create a UselessMonad:

class UselessMonad<I> implements Monad<I> {
    I val;

    I unwrapVal() {
        return this.val;
    }

    <O> Function<UselessMonad<I>, UselessMonad<O>> wrapFunction(final Function<I, UselessMonad<O>> fun) {
        return new Function<UselessMonad<I>, UselessMonad<O>>() {
            public UselessMonad<O> apply(UselessMonad<I> a) {
                return fun.apply(a.unwrapVal());
            }
        };
    }

    UselessMonad<I> wrapValue(I a) {
        return new UselessMonad<I>(a); // Pretend there's a constructor.
    }
}

Note: I’ve skipped public final etc. etc. for brevity. Note also that the Interface types are now UselessMonad. I did that so that the type signatures are clear in the Monad interface above, but you should know how to change things so that the cast isn’t required.

OK, so this basically wraps a function and a value so that they are “boxed” in a UselessMonad. So far, this should look... odd... but not “difficult”. You should hopefully also notice that you can do this:

UselessMonad<String> v = new UselessMonad<String>();

Function<String, UselessMonad<String>> sayHello = ...;

String val = v.wrapFunction(sayHello).apply(v.wrapValue("world")).unwrapVal();

OK, first let’s talk about the v jiggery-pokery I’ve done. In Java, you can only put an interface on an instance. Unfortunately, we want those interface methods to be static. Since we can’t do that, this is a hack to get around it.

Next, I want to talk about the slightly strange signature of sayHello. I mean, why would a function like that return a UselessMonad? And also, what if a function just returned a String and not a UselessMonad? Well, you can just construct a Function from another Function!

Function<I, UselessMonad<O>> uselessOf(final Function<I, O> fun) {
    return new Function<I, UselessMonad<O>>() {
        public UselessMonad<O> apply(I a) {
            return v.wrapValue(fun.apply(a));
        }
    };
}
To put it into words: you can take a Function that takes an I and returns an O, and create another Function that takes an I and returns a UselessMonad<O>, by calling wrapValue() on the result of the original function. Before we continue down the rabbit hole, I just want to make it clear that it’s easy to generate functions that return the “Monad” version of a value from “normal” functions.

OK, so what does that wrapFunction line way up above actually do? Well, it does the equivalent of the following:

String val = sayHello.apply("world").unwrapVal();

So why write all that wrapping and unwrapping cruft if all you want to do is apply a Function? Well, the magic trick above is that the input and output of the function are both boxes! What this means is that you can write:

Function<String, String> howYaDoin = ...;

String val = v.wrapFunction(uselessOf(howYaDoin)).apply(
        v.wrapFunction(sayHello).apply(v.wrapValue("world"))).unwrapVal();

I’ve thrown in a uselessOf in there for good measure. Hopefully it doesn’t make things too confusing. Nice, right? I mean, it looks like a dog’s breakfast, but the good thing is that you can just keep wrapping functions till the cows come home. This is the whole point of the Monad! You go to all this trouble of functions that return functions and wrapping functions and the crazy signatures just so you can do this wrapping and applying and wrapping and applying.

But how does this help abstract away side effects?

Well, imagine your Monad wasn’t actually useless. Imagine it was something like Guava’s Optional:

class Optional<I> implements Monad<I> {
    // Imagine the rest of Optional code here.
    <O> Function<Monad<I>, Monad<O>> wrapFunction(final Function<I, Monad<O>> fun) {
        return new Function<Monad<I>, Monad<O>>() {
            public Monad<O> apply(Monad<I> val) {
                if (((Optional<I>) val).isPresent()) {
                    return fun.apply(((Optional<I>) val).get());
                }
                return absent();
            }
        };
    }

    Monad<I> wrapValue(I val) {
        return Optional.of(val);
    }
}

This means you can call code like:

Optional<String> val = (Optional<String>) v.wrapFunction(optionalOf(howYaDoin)).apply(
        v.wrapFunction(maybeSayHello).apply(v.wrapValue("world")));

Now we can see the Monad in action: it allows you to chain up commands that return an Optional without having to constantly check for null. Even though the howYaDoin function doesn’t expect a null as its input, and even though the maybeSayHello function might return absent(), it all just works. What’s even better is that this isn’t just true of Optional: things like Futures, Lists, Logging, Transactions, etc. can all be written and composed in this way. A few more things to note here: the “side-effects” of what’s Optional and what’s not are written in a completely type-safe way, so you can’t accidentally do something silly like pass an Optional value somewhere it’s not expected. There’s also a clear separation of concerns between the side-effects of potentially failing functions and the actual main flow of the code. This is the problem that Monads solve.

Another important note: You might, as you’re writing Functions and things, get to a point where you have a Monad<Monad<I>>. Another important property of Monads is that you can take a nested structure of monads and create a single one:

Monad<I> flatten(Monad<Monad<I>> m) {
    return m.wrapFunction(new Function<Monad<I>, Monad<I>>() {
        public Monad<I> apply(Monad<I> a) {
            return a;
        }
    }).apply(m);
}
Again, you need to liberally sprinkle more generics on there, but the idea is correct. This is slightly magical as well, but basically it takes a function that just returns the value it’s given, wraps it so that the input will be a Monad<Monad<I>> and the result a Monad<I>, and applies that to the nested monad.

That’s all kind of neat, but for the Optional case, why not just do:

Optional<String> val = Optional.of("world").transform(maybeSayHello).transform(howYaDoin);

The short answer is: Yeah, that’s probably the smarter way. In languages like Haskell the Monad has significantly more power than in Java, so Monad is clearly the more attractive solution there. However, in Java, Guava’s chaining approach is pretty good. There are cases where the Monad way will result in less boilerplate code, but it’s probably not worth the added complexity. This is especially true if you look at all those wrappings occurring everywhere!

In conclusion, hopefully now you have a hands-on understanding of what a Monad is and where you might want to use it. It has limited usefulness in Java, but nevertheless is a very powerful design pattern. Keep in mind that this tutorial is not meant to cover the category theory and all the requirements of the design pattern (such as the Monad Laws, which you must not break), nor does any of this code compile. It’s probably worth writing this code in a way that is type-safe and compiles, and having a play with Monads. They are a very powerful pattern and once you get the hang of them you’ll see uses for them all over the place.