
Rethinking an electricity grid

The more I think about it, the stranger the idea of an electricity grid seems to me.
by Sunny Kalsi on March 15, 2014, 5:53 p.m.

There are two kinds of energy grids people generally talk about. The first is the traditional kind — large quantities of power delivered from a faraway power station to your home. The second is the “smart grid”, where there are smaller power generators everywhere and they “share” power around. For both of these grids, an argument about “base load power” kind of wafts in and out.

The more I think about it, the stranger the idea of any kind of energy grid sounds. I say this having been in India and knowing just how annoying it is to lose electricity for several hours a day. In fact, considering Indian “grid-connected” electricity is so unreliable, it makes me wonder why people bother connecting to it at all. Think about the things in your house that require electricity.

Firstly, let’s get heating out of the way. The vast majority of household energy is wasted on heating. I say “wasted” because heat is literally the easiest and most efficient thing to generate. In contrast, electricity is really, really hard to generate — almost all generators “throw away” a lot of heat in order to produce a precious bit of electricity (a typical thermal power station converts only around a third of its fuel’s energy into electricity). To go through all that work just to turn it back into heat is a level of waste bordering on insanity.

Cooling is a different matter, but often a huge amount of cooling happens due to bad home design. Either the house is inefficient at keeping hot air out, or it is bad at holding thermal mass where it needs to be. Before air conditioners, people often used water features to keep an area cool. This can still be seen in public places — water keeps the area nearby cool. Similar ideas can be used alongside a good thermal mass to keep an area cool. Other than a small 10–20W load to keep air moving inside a house (50–100Wh per day, assuming around five hours of use), cooling should not be “a thing”.

What I’m saying is that heating water, cooking food, and heating rooms should be achieved directly from fuel — gas, coal, wood, or other easily transportable sources. For most households, this will probably cut electricity usage by between two thirds and four fifths. If you have a gas connection, I’m saying you should use it immediately, and then consider the rest of this article.

A second common use of electricity today is lighting. In the past, each lightbulb was around 100W. That figure seems shocking to me now, but that’s how it was. Today, all the lights in my entire house would total 100W, and I’d estimate I only use about half of that in general usage. If I had “smart” lighting which would dim (but not switch off) when no one was around, I’d halve that usage again. That’s 25W to light a house at night. Considering a house would only see about 4 hours of light usage, that’s 100Wh of energy per day to power the lighting in your house.

However, you should probably use even less than this. If you think about how lighting affects sleep, your lights should start to dim as the night wears on, gently lulling you to sleep. Ideally you would “want” to use closer to 50Wh for a healthy lifestyle, but where you draw the line will differ from person to person. Either way, a small lead-acid battery will probably suffice for that sort of usage.

Another major use of electricity is in the kitchen. Even if you don’t use electricity for heating, chances are you’ll still want to use a microwave oven, blenders, toasters, and food processors. What’s worse, these are often very high power devices — 600W and 1200W are common ratings. Thankfully, you often only use them for a minute or so at a time: a 1200W microwave running for two minutes draws only 40Wh. Overall, another 50Wh per day ought to be enough.

Then there’s the laundry, which could draw similar power but for two hours. On the one hand, this might be around 2kWh; on the other hand, you can probably wait a day or so to use it. Vacuum cleaners have the same usage patterns and power draw. If you were using solar, for example, you could just wait for a sunny day to run the washing machine or vacuum, and you could also “redirect” power from other uses to the washing machine. The important thing here is that you don’t need to store power for the washing machine — just use whatever power generation you’ve already got set up.

A reasonably sized fridge, however, would use close to 1–2kWh every day, and this is non-negotiable. Even in a country with a lot of sun, you not only need to keep a fridge working all day, you also need to ensure that you never end up in a situation where the fridge loses power. This would mean a 10–20L lead-acid battery to keep a fridge going (lead-acid stores very roughly 100Wh per litre).

However, think about a fridge: its design, what it contains, and why. The freezer is necessary, but it’s also often fairly efficient and small. A better fridge would be designed like a chest, with doors pointing upwards and sliding open, so the cold air doesn’t fall out every time you open it. Ice-cream drawers are a bit like that. Finally, the things you keep in a fridge are often fresh food and leftovers. Judicious use of space and control over your food could mean your fridge uses as little as 500Wh per day, which is a far more comfortable 5L of lead-acid battery.

This brings us to an area that’s undergoing a huge amount of change — computing and entertainment. In the past, TVs (even LCDs) were huge power wasters, often drawing 100–200W. Today, not only are TVs using far less power, TVs are kind of becoming obsolete. Entertainment is far more “personal” today — people use tablets, laptops, and phones. Even newer consoles are hugely more efficient than those of the recent past — the PS4 uses around 100W of power compared to the PS3’s 300W. Hopefully, if trends like the PS Play take off, we’ll have consoles which absolutely sip power, maybe 5 or 10W. In addition, computers are increasingly laptops, which are by necessity lower power.

Even more interestingly, many of these devices have battery backup, and there’s a glut of them. If you don’t have the power to run a TV, you can move to your laptop. With this in mind, anywhere between 100–200Wh would be well and truly sufficient to entertain yourself for a night — with a little bit of self-control.

So let’s figure this out in terms of supply power and storage power. I’m going with my “ideal” scenario; the “worst case” is an exercise for the reader. For supply power, the system needs to deliver around 1kW to run a vacuum cleaner or washing machine (or daytime cooking). This is only for a couple of hours per day; the rest of the time is spent storing energy. This works out to around 6–8kWh of generation per day, which is a fairly modest solar installation.

The biggest draw is the fridge — 500Wh per day. Then 50Wh for light, 50Wh for cooling, 50Wh for night-time kitchen power usage, and 100Wh for entertainment. This adds up to 750Wh; let’s make it 1kWh. This can be done with around 10L of lead-acid batteries. It’s a lot, but not unheard of — hybrid cars already have batteries in that ballpark. You could double that number and get roughly the worst-case scenario, or a little more breathing room.
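
To make the arithmetic easy to check, here is the storage budget as a small Python sketch. The 100Wh-per-litre figure for lead-acid is my own rough assumption; real batteries vary.

    # Daily storage budget for the "ideal" scenario, in watt-hours.
    budget_wh = {
        "fridge": 500,
        "lighting": 50,
        "cooling fan": 50,
        "night-time kitchen": 50,
        "entertainment": 100,
    }

    total_wh = sum(budget_wh.values())
    print(total_wh)  # 750 Wh; round up to 1000 Wh for headroom

    # Assumption: lead-acid stores very roughly 100 Wh per litre.
    WH_PER_LITRE = 100
    print(1000 / WH_PER_LITRE)  # ~10 L of battery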

In the end, all of this can be had for between $5–10k, with a short payback period and minimal ongoing costs. We can also expect these costs to come down fairly quickly over time. Importantly, it can mean a house or community gains a significant amount of flexibility when deciding how to set up its housing. This is doubly true if, instead of a gas pipeline, gas cylinders, wood, or coal are used. It can also help prevent bushfires, since overhead power lines are a known ignition source.

The real reason I’m thinking about this is because I’m going through the math of the earlier idea of “a fireproof house”, but also because I’m thinking about this from a policy perspective — is it completely foolhardy to put money into electricity distribution networks and large power stations? Shouldn’t this money instead be put into a completely distributed way of generating power? Is the idea of an electricity “network” completely overrated?

Three Strikes law in Australia

I take a look at Senator George Brandis’ address at the opening of the Australian Digital Alliance “Fair Use for the Future” forum.
by Sunny Kalsi on March 1, 2014, 12:19 p.m.

He seems to have completely forgotten Macaulay’s principles that he outlined at the beginning of his own speech.

Senator George Brandis, the Australian Attorney General, is one of the key figures in drafting up more modern copyright laws. He gave a speech in mid-February looking at copyright reform. In it, he advocates for having a “graduated response scheme” and against having fair use laws in Australia. A “graduated response scheme” is also known as a “three strikes” scheme, whereby copyright holders can accuse a person of violating their copyright. If a person gets accused three times, their internet connection is terminated.

One of the most astonishing things about the speech in my view is how far the principles Brandis outlines lie from the practical solutions he advocates. For instance, he talks about how copyright grants a monopoly:

... Lord Macaulay’s speech is justly recalled for its anticipation of the fundamental principles that underpin the modern system of copyright protection

His central insight is to remind us that copyright is a monopoly – a necessary monopoly – but a monopoly nonetheless.

He then goes on to talk about the value of copyright. People often assume that pirates do not value creative works (I speak as a pirate here, though the opinion is purely my own). In fact, it is the opposite: pirates often value the work far more highly than Brandis does here. Indeed, we think of creative works as priceless cultural artefacts, caged by copyright. This is where the idea of “information wants to be free” comes from. If you are an artist and have used copyright, you have trapped your work to perform for your benefit.

Brandis then goes on to reiterate that the key idea is balance:

Of course, as Lord Macaulay noted all those years ago, copyright is a monopoly and, as we all know, monopolies are presumptively a bad thing.

The challenge for us today is how to balance the benefits for creators against a range of other public interests including the interests of users, educators and other important public goods.

However, in spite of issuing the reminder himself, he seems to forget this important point almost immediately:

I remain to be persuaded that [the ‘fair use’ extension] is the best direction for Australian law, but nevertheless I will bring an open and inquiring mind to the debate.

It’s unclear why Brandis does not like fair use. I could assume it causes some problematic legislation due to esoteric reasoning that only lawyers care about. However, the Australian Law Reform Commission advocated for fair use, and I can see no historical problems with fair use protections. Indeed, it is an important and necessary restriction on the copyright monopoly — it creates the “balance” between granting a powerful monopoly and ensuring that holders cannot take advantage of their position. This and other exceptions are a necessary part of copyright because they form the “responsibility” half of the liberty creators are granted by having a monopoly.

Unfortunately, he goes further:

The illegal downloading of Australian films online is a form of theft. I say Australian films, but of course the illegal downloading of any protected content is a form of theft.

This bears repeating: technically, Brandis is not correct — copyright infringement is not theft in law, since nothing is taken from the owner’s possession. His is a point of moral outrage: that we should be as morally outraged by violating a copyright monopoly as we are by theft. However, pirates do not agree, and there’s plenty of evidence that creative works do not follow the ordinary laws of economics, as much as we may want them to. To quote Lord Macaulay, as quoted by George Brandis:

the least objectionable way of remunerating them is by means of copyright

Macaulay is not saying that copyright is a good solution, rather that it is the least bad one. To take an idea that is delicate and requires balance, and to shoehorn a naive argument onto it, is worrying — especially when it is used to stir up feelings rather than rational discourse.

Perhaps stirred by these feelings, Brandis continues:

However, the High Court’s decision of 2012 in the iiNet case changed the position. The Government will be considering possible mechanisms to provide a ‘legal incentive’ for an internet service provider to cooperate with copyright owners in preventing infringement on their systems and networks.

This may include looking carefully at the merits of a scheme whereby ISPs are required to issue graduated warnings to consumers who are using websites to facilitate piracy.

He seems to have completely forgotten Macaulay’s principles that he outlined at the beginning of his own speech. What Brandis is suggesting, in case the duplicitous wording makes it unclear, is that ISPs violate your privacy: in the least worst case by forwarding accusations from third parties to you, or in the worst case by actually handing your details over to third parties. He identifies these third parties as “copyright owners”, but that assumes there is such a thing as someone who isn’t a copyright owner. Anything we write, any pictures we take, any creative work we make or share, is automatically under copyright in Australia. Presumably Brandis means certain privileged “copyright owners”, as if they can somehow be trusted with the private information of every Australian on the internet.

It is where he mentions graduated response that it becomes clear he has lost all sense of the idea of “balance”. Let’s completely ignore the fact that graduated response schemes have never worked, wherever they’ve been tried. A graduated response scheme generally means that a “copyright owner” is allowed to accuse someone of copyright violation three times, after which that person is banned from using the internet — access to which the UN now considers a human right.

These schemes are meant to be progressive, in that they treat copyright violation like speeding — there is a perceived balance between wilfulness, the severity of the offence, and the severity of the penalty. However, this isn’t like getting a speeding ticket. The police do not issue the infringements; they are issued by private institutions. This means there is a far less even-handed account of the events that took place. The police are at least neutral, but private institutions have economic incentives to hand out infringements.

The courts are also rarely involved in arbitration. When issued a speeding ticket, I can go before a real judge to argue my case. In graduated response schemes, there is often a separate method of arbitration that is heavily skewed towards the copyright owner: the accused is often presumed guilty, or some other important part of the judicial process is missing.

Finally, the penalty — losing the internet — is also far too severe. Driving is not considered a human right, whereas the internet is. Losing internet access shouldn’t be compared to losing your licence, but to losing your right to freedom of assembly.

Even considering graduated response shows that Brandis has completely forgotten the core principles he outlined in his own speech: that copyright is a monopoly; that monopolies are presumptively a bad thing; that copyright is the least bad solution to a complex problem; and that it needs to be handled deftly.

Vim vs Emacs for real!

I talk about the age old battle between vim and emacs, and specifically where vim loses out, and how to help.
by Sunny Kalsi on Feb. 18, 2014, 9:06 p.m.

If there’s one thing about Emacs that is its “killer feature”, it’s the consistency of vision from start to end. The reason people call it an “operating system” is that it has abstractions so powerful that you can just build and build. Nothing else comes close to being as fully featured. We’ve seen things like Eclipse and IDEA being good in a “special purpose” sense, but only because they’re effectively supported by corporations.

By contrast, vim is a mess. I say this as a vim guy and a maybe-intermediate vim user. It has one amazing, seductive trick up its sleeve, and that is the power of its grammar and movement. It’s a text meta-editor, and that makes it immensely powerful. I suspect that many vim users are drawn in simply by this, and will sit somewhere between beginner and expert without ever really getting past feeling guilty for not being ninja enough for vim.

Unfortunately the power of the grammar sits alongside the immensely clunky and frankly random layout, command sprawl, and variety of input methods. Vim is hard to use because it carries with it a boatload of legacy.

I’m not like that. I’m also not about minimising keystrokes like many vim users — I’m all about Actions Per Second. I’d rather press more keys, if I can do it more comfortably and more quickly, than have to remember more keypresses. To that end, I’ve started to think more deeply about vim’s central grammar, its usefulness, and how it can be improved.

Stage 1: More consistency

Most of the grammar is set up so that a lowercase letter does a thing in the forward direction, and its uppercase version does the same thing in the backward direction. A classic example is ‘/’ to search forward and ‘?’ (shift-/) to search backwards. There are some exceptions to this rule, and I’d like to remap the keys so they make more sense. (A sketch of a few of these remaps in Vim script follows the list.)

  • Shift ‘i’, ‘a’, and ‘A’: ‘i’ inserts before the current character and ‘a’ inserts after it. It would be more consistent for ‘I’ to go before the current character and ‘i’ to go after. ‘I’ currently inserts at the beginning of the line; it would be better if ‘A’ prepended at the beginning of the line and ‘a’ appended at the end of the line.
  • Move ‘b’ to ‘W’: ‘w’ moves to the next word, ‘b’ moves to the previous word. Instead, ‘w’ would move to the next word, and ‘W’ would move to the previous word. Remove the concept of “WORD”, which is used by the old ‘W’, ‘E’, and ‘B’. ‘b’ is now unbound.
  • Shuffle ‘{’, ‘}’, ‘[’, ‘]’, ‘(’, ‘)’ around: Currently, ‘}’ goes to the end of the paragraph and ‘{’ goes to the beginning. ‘[’ and ‘]’ deal with sections in GROFF files, and there’s some strange mumbo jumbo with, of all things, opening and closing braces. Instead, just remove whatever ‘[’ and ‘]’ do, make ‘[’ and ‘{’ go to the end and beginning of a sentence respectively, and ‘]’ and ‘}’ do the same for paragraphs.
  • Switch ‘@’ and ‘Q’: Currently, ‘q’ records a macro, ‘Q’ enters ex-mode, and ‘@’ plays macros back. Instead, ‘Q’ should play back macros, and ‘@’ should enter ex-mode.
  • Move ‘*’ and ‘#’ to the now-unbound ‘b’ and ‘B’: These search forwards and backwards for the identifier under the cursor.
  • Move ‘^’ to ‘H’, ‘$’ to ‘L’, and ‘0’ to ‘'’: The ‘H’ and ‘L’ keys effectively take you to a random spot in the file, based on your screen real-estate, so they should be unbound. Instead, move the much more useful functionality of going to the beginning and end of a line onto these uppercase cursor keys. The quote character ‘'’ currently goes to the beginning of the line of a mark, which seems too special purpose to keep around.
  • Ctrl-F/B to ‘J’ and ‘K’, move ‘J’ to ‘Z’: ‘K’ currently looks up help for the word under the cursor. Seriously, what the even? ‘J’ joins lines and can be quite useful; we’ll move this to ‘Z’, which is currently for quitting vim (and surely everyone would agree that quitting is not something you need to do that often). Ctrl-F/B page down and up. This makes the lowercase ‘arrow’ keys move a single character, and the uppercase ‘arrow’ keys move to the extremities.
  • Ctrl-R to ‘U’: ‘U’ currently means “undo line”, which is confusing in a world with a full undo stack. Instead, it should just redo.
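
A minimal sketch of a few of these remaps, as lines in a ~/.vimrc. I’m using ‘noremap’ so each right-hand side keeps its original meaning; this is illustrative, not a complete or conflict-free remapping.

    " 'i' inserts after the current character, 'I' before it;
    " 'a' appends at the end of the line, 'A' prepends at the start.
    nnoremap i a
    nnoremap I i
    nnoremap a A
    nnoremap A I

    " 'w'/'W' move forward/back a word; the WORD motions go away.
    nnoremap W b

    " 'Q' plays back a macro (it still prompts for a register); '@' enters ex-mode.
    nnoremap Q @
    nnoremap @ Q

    " 'b'/'B' search forwards/backwards for the identifier under the cursor.
    nnoremap b *
    nnoremap B #

    " Line extremities on the uppercase 'arrow' keys; paging on J/K.
    nnoremap H ^
    nnoremap L $
    nnoremap J <C-f>
    nnoremap K <C-b>

    " 'U' is redo.
    nnoremap U <C-r>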

All the shift-number keys are now unbound except for ‘!’, ‘@’, ‘%’, and ‘&’. “Lost” functionality from the list above could potentially be restored by shoving it up there, but I doubt anyone would miss any of it.

Stage 2: Simplify and cleanup

Here, we remove some special-purpose or non-forward-thinking commands from the grammar. This is a little more “optional”, because there’s a certain amount of “unused functionality isn’t hurting anyone”, but I’m aiming for maximum simplicity. Controversially, I’m also proposing the removal of ‘g’ and ‘z’ as extra-command prefixes. The idea is that you’re pressing an extra character anyway, so why not just go all the way with ‘:’?

  • Remove ‘~’: I love toggle-case, but it’s not internationalizable, and it’s simply too special-case to abide.
  • Move ‘%’ to ‘s’, move ‘&’ to ‘S’: I can accept ex-mode and filtering being heavy-weight enough to live on the number line, but matching braces and repeating a search/replace are invaluable. I don’t really know anyone who uses ‘s’ (substitute character) or ‘S’ (substitute line).
  • Move ‘`’ to ‘M’: ‘M’ is another strange cursor-movement command based on screen size. Instead we should make it go to a mark, since ‘m’ is the mark key.
  • Unbind ‘z’ and ‘g’: The idea being, if you’re already hitting an extra character, you may as well go all the way and hit ‘:’.

Stage 3: No ‘Ctrl’ key combos

We already have commands, shift versions of those commands, and a handful of Ctrl commands. This is a lot to remember. Instead, the idea is to remove all Ctrl commands and make the Ctrl key a temporary push into command mode: hitting Ctrl-x would be the same as typing ESC, x, i. This allows entering commands and movements while in insert mode (see the sketch after this list).

  • Ctrl-V to ‘0’: ‘0’ (start of line) has been moved to ‘'’, so it’s freed up for block visual mode.
  • i_Ctrl-N/P to ‘g’: Ctrl-N/P in insert mode do any-word completion. The idea is to use ‘g’ to initiate the completion, then select using the ‘j’ and ‘k’ keys. This is a little more annoying than at present, but not hugely. It means you can do any-word completion in either command mode or insert mode, and it functions like a mini-buffer/register. It also allows plugins to integrate more tightly with the experience of ‘g’.
  • ‘z’ for an “action” command: This would be a new idea for vim — perform an action based on the text under the cursor. This could be template substitution, auto-fixing bugs, etc. The actions would show up like choices in the Ctrl-N list. Like ‘g’, ‘z’ could be used outside insert mode.
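
Vim can already approximate the “temporary push into command mode” idea by hand; a sketch, for a single key:

    " One-shot normal-mode command from insert mode, done manually:
    " Ctrl-x behaves like <Esc> x i (delete a character, keep typing),
    " modulo the usual off-by-one cursor shuffle around <Esc> and i.
    inoremap <C-x> <Esc>xi

    " Vim's built-in generalisation of the same idea: <C-o> runs exactly one
    " normal-mode command, then drops you back into insert mode. For example,
    " typing <C-o>x while inserting deletes the character under the cursor.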

Stage 4: And nothing else

The idea is not to pollute the namespace. This would mean using only the functionality given in, or implied by, the commands as defined. The idea is to work like surround.vim, where we extend the grammar but follow all its rules, such as using “cs” to mean “change surround”. Of course, the ex-cmd line ‘:’ would still be open slather. File formats could redefine the meanings of words, sentences, and paragraphs, as well as provide good autoformatting. Plugins could “hook into” ‘g’ autocomplete or ‘z’ actions. And that’s it.

So there we have it, a simple, consistent grammar for vim in the 21st century. No cruft, clean extension points, and no (real) loss of power from a powerful movement and command system.

Retina on the PC

I talk about my new monitor
by Sunny Kalsi on Jan. 15, 2014, 10:39 p.m.

One thing that might be surprising is that in books it is unusual to see a font above 10pt for regular type. Most books are set at 8pt, “large” type is usually 9pt, and very rarely you’ll see text at 10pt. Headings can be 12 or 14pt. By contrast, displays regularly use 16pt for regular type, and 24 or even 48pt for headings. Part of the reason is that our eyes are typically further from a screen than from a book.

But that’s not the only reason. Screens, modern LCD screens in particular, have an awful pixel density, measured in PPI (pixels per inch). This is partly because modern LCDs have a fixed matrix of dots and you “pay” for each one, unlike CRTs, whose single cathode ray can be made to go pretty much anywhere on the screen. Partly, though, it is driven by consumer demand: “bigger is better” has meant we’ve been paying for screens that are increasingly blurry.

Font rendering on this kind of screen is awful, to the extent that 8pt text is extremely difficult to read, and even 10pt text can be a struggle. Even for a PPI-conscious user, the best monitors still on the market only reach about 100PPI. Contrast that with a printer, which renders at 600DPI (dots per inch). The problem is exacerbated on LCD displays because they have sharp pixel edges, compared to CRTs, whose “blurry” edges do some extra signal processing for us for free.

The math is actually rather shocking. When you look at Shannon’s sampling theorem and how small pixels need to be to be pleasant to read at “book” sizes, you need close to double the resolution that monitors commonly have today. Another convention which has come about recently is the “retina” screen — more correctly, in this context, a retina threshold: a pixel density and minimum viewing distance at which the sampling limit matches what our eyes can resolve (roughly one pixel per arcminute of vision). A 1080P monitor that’s merely 21” would have to be 84 centimetres away from you to be “retina”. At a more comfortable 45cm distance, you would need roughly 200PPI.
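
Here’s that arithmetic as a small Python sketch, using the common one-arcminute threshold (one arcminute is about 1/3438 of a radian):

    import math

    def ppi(h_px, v_px, diagonal_inches):
        """Pixel density from resolution and diagonal size."""
        return math.hypot(h_px, v_px) / diagonal_inches

    def retina_distance_cm(density):
        """Distance at which one pixel subtends one arcminute."""
        return 3438 / density * 2.54

    print(ppi(1920, 1080, 21))                      # ~105 PPI for a 21" 1080P panel
    print(retina_distance_cm(ppi(1920, 1080, 21)))  # ~84 cm
    print(3438 / (45 / 2.54))                       # ~194 PPI needed at 45 cm
    print(ppi(3840, 2160, 23.8))                    # ~185 PPI (see below)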

The only monitor to have that pixel density was the IBM T220/221 and it was discontinued several years ago. This monitor has a cult following, to the extent that it has held its value quite well for several years. There was simply no other way to get a monitor like this.

Until now. In November 2013, Dell launched the UP2414Q which, although not quite at the pixel density of the T221, comes closer to matching it than any other PC monitor I’ve been able to find. It also hits a magical sweet spot of numbers: it’s a 4K monitor; it’s 23.8”, which is ideal for a desktop monitor; and it manages a comparatively massive pixel density of 185PPI. It’s also an IPS display with 100% sRGB coverage, factory calibrated.

While some have the attitude that this sort of monitor will somehow become massively cheaper, I’ve been desperate for one for long enough, and the price was suitably low, that I took the plunge and bought one. In my book, people still see bigger as better, and may not see the value of paying more for a 24” monitor than for a 27” one, so odds are about even that this monitor will itself become a relic with a cult following, like the IBM T221.

So how does it look? What are the caveats? Does it work on Linux? Was it worth it? Do (Linux) apps even look proper on it? I’ll try and tackle these questions presently.

Looks

How does it look? The answer is “good”. It’s 185PPI, so it sits just on the brink of being retina at a comfortable PC viewing distance. I’m not one to buy factory-calibrated monitors, so the colour accuracy is something I’m not used to, and it’s really quite nice. The thing is, we’ve been fairly spoilt by tablet and phone displays, and this monitor looks like a phone display from 5 years ago. If you’ve seen the 15” retina MacBook Pro display, the colours look like that, though the pixel density is slightly (but noticeably) worse. The original Nexus 7 is probably also a good comparison for screen quality.

It’s also much, much larger than a retina MacBook or a Nexus 7.

Hardware Caveats and Notes

One of the biggest caveats with this (or any) 4K display is how you drive the thing. HDMI and DVI won’t work: at this resolution they’re only good for a refresh rate of 30Hz. Before you think “it’s OK, I won’t be doing any gaming”, it’s really not OK. 30Hz feels… wrong. Your mouse feels drunk, the screen updates slowly when you drag windows, and you won’t be able to do your work.

Because HDMI won’t work, you need DisplayPort — but not all video cards can even drive 4K, and not all of them can drive it at 60Hz. For Nvidia, you need a Kepler-based card. AMD fares slightly better, but check! You also need DisplayPort 1.2; DP 1.1a won’t be enough. Again, this is something to check.

DisplayPort 1.2 can only do a 60Hz refresh under what is known as MST (Multi Stream Transport) mode. What’s happening is that DisplayPort is (roughly) a multiplexed HDMI-like stream, and a single stream isn’t capable of pushing out 4K at 60Hz. To work around this, the monitor pretends to be two screens — the left half and the right half — driven by two such streams. This means that not only does your video card need to support this mode, your driver needs to as well. Apparently this will be fixed with DP 1.3 and HDMI 2.0, but for the Dell UP2414Q, MST will be an unfortunate reality for me, forever.

I had my fair share of driver issues on my Linux box, and it appears that Windows has the same issues, manifested in a different form. Most notably, coming back from suspend will sometimes work and sometimes not; I have to switch the monitor off and on again (same as on Windows, apparently). There are also strange screen-tearing issues between the two half-screens, and gaming is apparently not an option.

While I’m talking about monitor caveats, here’s another thing to remember: because most OSes are built for very low-DPI monitors, they do various acrobatics to make small fonts readable, notably font hinting and sub-pixel rendering. At high DPI, these actually make things look worse, so be sure to switch both of these “features” off. Do keep anti-aliasing, though.

Software Caveats and Notes

If you have multiple monitors, the other monitor’s PPI will be completely different. This causes a number of issues no matter your situation. If you’re using Win7 or Linux (with a spanning display), you have a single fixed PPI across all your monitors. This will mean your low-PPI monitor is “zoomed in” by a factor of roughly two. This is just crazy; all the elements are comically large. And since a 1080P screen has only a quarter of the pixels of a 4K screen, at 2× scaling the 4K panel gives you roughly the usable area of a 1080P screen — the extra screen real-estate is negligible.

If you have separate PPIs, as on Windows 8 or Linux (with separated displays), the low-PPI display will be blurry. With a higher PPI, it becomes much more comfortable to read text at a smaller size; if you take advantage of that smaller size but then move the window to the low-PPI display, it becomes completely unreadable. 8pt font goes from being comfortable to being an impressionistic painting. This is pretty frustrating.

If you have two 4K displays, god help you. Whether hardware can even drive dual DP1.2 displays is beyond me, but it would need to be a crazy good video card. Then there are the various software and hardware bugs that only you will be there to find. Frankly, you would be better off waiting until 2015, when DP 1.3 comes out and a single “link” can drive 4K.

How does the software look?

One of the good parts is that while Linux has various issues with scaling, it generally deals with it better than Windows. But in a similar way to how Windows programs each deal with scaling in their own esoteric ways, Linux has DPI settings split across many different places.

First is the X display setting for PPI, which some programs use and others don’t, because it’s “unreliable”. Then, Gnome and KDE each have a “scaling” value, which they use to re-scale their UI elements. Firefox has one too. Finally, some programs just use pixels directly. Luckily, these are all configurable, and once you set them they will generally not need to be changed.
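
As a concrete sketch, these are the sorts of knobs involved, with values for a 185PPI panel. The exact names vary by desktop and version, so treat these as illustrative rather than a recipe:

    ! ~/.Xresources — the X-level DPI hint, picked up by Xft-aware apps
    Xft.dpi: 185

    # GNOME's scaling value, via gsettings
    gsettings set org.gnome.desktop.interface scaling-factor 2

    # Firefox — in about:config, the ratio of CSS pixels to device pixels
    layout.css.devPixelsPerPx = 2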

Firefox renders the web well at the high DPI setting. Strangely, some of the icons in the UI will be blurry, even if they’re SVG (the webpages themselves are fine, though). Hopefully the GTK3 build of Firefox will fix that. GTK mostly deals with the scaling perfectly. I had a strange issue with Nautilus not scaling, but since all the Gnome UI elements are gigantic to begin with, 4K actually fixes them up somewhat.

One thing I was surprised at is how much our hind-brain thinks about the screen as a grid of pixels. When all of the elements are scaled but still crisp, your brain feels a bit lost. Things still line up, but because of the crispness your brain doesn’t “sense” that they’re on the same pixel line, and you think something is wrong when it isn’t. The location of your mouse also feels a lot more fluid, and the acceleration values are different (because they’re based on pixels), which confuses movement.

Overall, though, it looks great, especially for text-oriented apps like gvim. Writing on this thing is gorgeous. The text is sharp — the contrast and weight look far more natural, especially in italic and bold text. Serif fonts actually look better on this monitor than sans (although both look better than on a ye olde monitor). Reading on this thing is also great. Pictures look much the same, but certain moiré effects and aliasing are definitely noticeably lower. To some extent this isn’t revelatory, because we’re used to this kind of detail from phones and tablets. It’s the sheer size of the thing that’s awesome. This display is 8 megapixels, which is the same as a Canon 350D or a Galaxy S3. You can look at a picture taken on a Canon 350D without scaling (it’s slightly too tall, but also slightly too narrow). That’s pretty special.

So, that’s what it’s like owning a retina 4K monitor. It’s not perfect, but it is kind of special.

Sea Monsters and OSGi

You are my white whale, OSGi! I'd do my Captain Ahab yell as I pierced thee with a polearm of some sort, but I haven't actually read the book.
by Sunny Kalsi on Jan. 8, 2014, 10:22 p.m.

OSGi apps tend to have that feel of clunkiness.

If you’ve encountered OSGi anywhere, chances are you hated it with a burning, fiery passion, whether you were a software developer or a user. After using it for a while and wanting the hurting to stop, I came upon an epiphany and… well, I’ll let you uncover the aftermath for yourself:

Perhaps the problem isn't too much OSGI, but too little…

— Sunny Kalsi (@thesunnyk) January 7, 2014

Let me explain both the epiphany and “the problem with OSGi”, as well as the initial developer experience with it, because most people haven’t even heard of OSGi.

Read this before you start the adventure

You’ll likely encounter OSGi when you’re trying to achieve a relatively modest goal in a heavily pluggable Java application (like Eclipse). You’ll notice a sea of undocumented interfaces peppered throughout various packages that will, supposedly, enable you to achieve your goal. When you’ve finished dry-retching — because there’s really nothing else left in your stomach — you’ll just think pragmatically, write your code, and try to test your application (or plugin, or whatever it is). People have to work in abattoirs; your life isn’t that bad.

The unit tests will go well enough; everything’s an interface, so you can mock things to your heart’s content. Unfortunately it isn’t clear how the interfaces are actually supposed to work, so you’ve guessed a bit — but you’ll figure that out when you run the thing, right?

You start the (whatever it is) up or, more likely, do some jiggery-pokery. This gives you a ClassNotFoundException, even though the classes are clearly there and the ClassLoader knows they’re there, so you restart. Now you still get a ClassNotFoundException, but for different classes. After messing around for a bit it starts to work, and you can’t figure out why — but oh well, at least you can make progress.

Then about an hour later while testing something unrelated, you get ClassNotFoundException again.

A week later, either a package somewhere has been updated, or you get a report from the “field”, or something minor and insignificant changes. Now a thing is not working. There are no error messages — and sometimes there are, but it’s hard to tell, because the behemoth of an app kind of sort of throws error messages like they were candy and the app was trying to attract children.

Several years later, after therapy, you try to clear your head and figure out what the hell you were even thinking. In trying to make progress, you forgot to do what you would normally do in this situation: read the documentation. You figure OSGi sucks, but you’ll try to figure out how it ticks so you can defeat it.

What is OSGi?

Firstly, think about Java. Operating systems are often designed to allow individual processes to dynamically load code and run it — this is what shared and dynamically linked libraries are. However, once a process is running, it’s quite difficult to add a new library into it (a system for doing this is called a “component model”). The JVM has no built-in component model either. OSGi “solves” this problem in a fairly simple (for the user) way: Java JARs get a new manifest (or three) with some data about what packages they import and export, there’s machinery for starting up services and long-running tasks… and… that’s it.
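
For the unfamiliar, here is roughly what that looks like. The bundle and class names below are made up for illustration; the manifest headers and the BundleActivator interface are standard OSGi:

    // MANIFEST.MF additions that turn a plain JAR into a bundle:
    //   Bundle-SymbolicName: com.example.greeter          <- hypothetical
    //   Bundle-Activator: com.example.greeter.Activator
    //   Export-Package: com.example.greeter.api
    //   Import-Package: org.osgi.framework
    package com.example.greeter;

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class Activator implements BundleActivator {
        // Called by the framework when the bundle is started at runtime.
        public void start(BundleContext context) {
            System.out.println("greeter bundle started");
        }

        // Called when the bundle is stopped or the framework shuts down.
        public void stop(BundleContext context) {
            System.out.println("greeter bundle stopped");
        }
    }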

Suddenly, OSGi doesn’t seem like a monster after all. I mean, you can see where all the problems come from! Because OSGi bundles may not be loaded, or because your manifest fails to import all the correct packages, or because some other JAR fails to export the correct things, your code can compile fine and still fail at runtime with a ClassNotFoundException. Also, because interfaces are dynamically loaded, you could have the wrong version of an interface, and then strange things will happen.

Worse, you cannot find out until runtime. A whole class of compile-time problems — an interface method has been changed or removed, a bundle can’t be found for some reason even though the JAR is right there — have become runtime problems.

This is why I don’t understand Neil Bartlett’s comment (maybe there’s something I still don’t understand):

@charlesofarrell @thesunnyk When OSGi is properly used, you just don't get ClassNotFoundException.

— Neil Bartlett (@nbartlett) January 7, 2014

OSGi apps tend to have that feel of clunkiness, which you notice as a user as well as a developer. Nothing feels tightly integrated; nothing feels friction-free. But looking at it from the other side, you can almost pity OSGi, because once you know its weaknesses, you can cater for them. OSGi is a problem, but that problem has a flipside — the dynamic loading. There’s just no other way of achieving that without hitting the same set of problems. OSGi is almost not the issue; the issue is who OSGi hangs out with.

A Toxic Scene

The problem is, you never get OSGi all by itself. You get a menagerie of tools, each toxic on its own, combining to create a concoction so vile that I’ve… lost interest in this metaphor. Anyway, one of the major problems is dependency injection.

The issue with dependency injection is that it’s “viral”, which people originally argued was a good thing, but now I’m not so sure. In any case, the problem is that everything is finely sliced into interfaces, and these interfaces tend to find their way into your component’s exports, and then into other components’ imports. That means that if you ever change an interface — one you probably consider internal — something is going to break, and there’s no way to even test for it, much less check for it at compile time.

On the other side of that equation, it’s always possible, and very enticing, to use an interface you can see to do a thing you want to do, even though whoever wrote the other component hasn’t “officially” let you see it (which would matter more if there were documentation for the “official” stuff). Now you have a dependency on a version of some code which will likely change under your nose, right as you’ve forgotten you were even using it.

Secondly, there are usually XML files everywhere. What they define is hard to know, the value they provide is dubious, they’re never edited outside the development team (even though that was their original design intent), and they again serve to convert a whole bunch of compile-time issues into runtime issues. Error in the XML file? Runtime problem! String that’s supposed to be an enum but doesn’t match? Runtime! Class with a misspelled name or package? Runtime!
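
For instance, a Declarative Services component definition looks something like this (the component and class names are hypothetical). Misspell the implementation class and nothing complains until the bundle is activated:

    <!-- OSGI-INF/greeter.xml -->
    <scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0"
                   name="com.example.greeter">
      <!-- A typo here is a runtime failure, not a compile error. -->
      <implementation class="com.example.greeter.GreeterImpl"/>
      <service>
        <provide interface="com.example.greeter.api.Greeter"/>
      </service>
    </scr:component>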

There’s also often a panoply of huge and useless services tagging along for the ride. OSGi necessarily makes application startup slower, but often it’s the other things starting up that make the developer experience awful. This huge startup time makes you ever more desperate to never restart the thing, so you rely ever more heavily on OSGi.

The Epiphany

The epiphany is that everyone hates OSGi, and this is precisely the thing that causes this toxic cycle. Running away from the pain just makes it worse. The only way out of this madness is to embrace OSGi, get rid of its friends, and start building components only when they’re needed.

This means bundling components in a “static” way rather than via OSGi. It means exporting interfaces sparingly — a very small touch area, small APIs which communicate efficiently and flexibly. It means considering what really constitutes a service and what’s really just a library. In the end, if you have 5 or 10 components in your application instead of 500 or 1000, the classes of pain that can bite you are reduced by orders of magnitude, and OSGi starts to pay for itself in spades.

The problem isn’t too much OSGi, it’s too little.