
On Javascript

A while back I decided to dive headfirst into Javascript with node.js, knockoutjs, and couchdb. I reflect on my experience.
by Sunny Kalsi on June 17, 2014, 11:01 p.m.

My app, infonom, is about halfway to MVP. I decided to go all-in on javascript, mainly to force myself to learn the language, and how to use it in a decent context. Honestly, there’s a lot to like. I can see how a lot of people are into it. While I personally can’t survive without the velvet rope of type safety, I did enjoy the sort of flexibility that prototype-based languages give you. It’s certainly an odd kind of flexibility, but also a good kind.

Prototypes still confuse me, mostly because of what properties are copied and what properties are mirrored, but it’s a fairly powerful concept. The idea that your “class” hierarchy is based on live objects that have the opportunity to be changed and updated can enable a bunch of tricks that generally require far more impressive language features. You can mimic a class-based system, or traits, or some functional semantics, all by using and abusing the idea of prototypes. In the end I’ve learnt enough about Javascript to make peace with it.

But I realised very quickly that I hated each of the three technologies I used: knockoutjs, node.js, and couchdb. But hold on because there’s a surprise twist coming.

Let’s start with node. There are a couple of reasons I like Node. One is that it makes it very hard to block: the code you must write in order to block is annoying. The second is that it “solves” multithreading in a novel way: by not having any. If you want your code to scale, it must work in a way that’s parallel. If you can “distribute” your code across several cores, you can distribute it across several machines. In normal multithreading there’s a huge discontinuity between running on many cores on a single machine and switching to having the service go across multiple machines. Finally, it makes coding fun at first. You’re writing node and refreshing your browser and it’s all very quick and easy.

But it has problems. Incidentally, the node haters are just plain wrong. Without getting too distracted: Python runs slower than js, every language with blocking somehow has very slow frameworks (I wonder why), Node makes concurrency easy across computers, and “sharing code between front-end and back-end” is actually supposed to mean “I can decide later where I want this to run”, which you can’t do in, say, Java.

Where was I? Oh yeah! It does have problems! The first is that it’s javascript, which is an awful language. I know, it’s got redeeming features, but that’s like saying “smoking helps you lose weight”. It also quickly scales to the point where it’s no longer fun. The language is no fun, the structure is no fun, the libraries are shit, and you start to wonder what the hell you’re doing in the baby pool. I could just as easily do this in another language by simply removing all of the frameworks. It just all starts to grate. The language grates, the environment grates, and the benefit you had at the beginning, speed, fades into the background.

In short, I’m looking for a way out of Node.

Now, onto knockout. I’m actually a big fan of MVVM. I think this is how applications should work, especially on the web. But after thinking about the zen of knockout, I’m finding myself increasingly at odds with the framework. The thing is, knockout treats JS as the view-model and HTML as the view, but HTML isn’t the view! If anything, it’s literally the view-model! It’s the representation of the model for the view (the browser). So, in a way, knockout is really just an elaborate translation layer between javascript and HTML. I still think there’s a kernel of knockout that’s valuable, but unfortunately it’s not the code.

In short, maybe I’m looking for something more akin to d3.js instead of knockout.

Finally, let’s talk about CouchDB. Couchdb is a document-oriented DB written in Erlang. It has multi-master replication and effectively “solves” CAP in a particularly elegant way. However, it commonly uses Javascript for its map-reduce functions. It’s also the epitome of no fun. Unlike “non-scalable” databases like SQL databases (or Mongo), Couch literally asks you to solve all the scalability problems up-front. It really does your head in. You want to make a simple website, and you have to start considering copy-on-write and merge conflicts. There’s also the temptation of doing micro-optimisations to lower the number of REST calls you make to Couch (you talk to Couch over a REST interface, in JSON).

I can’t tell you how irritating it is trying to figure out how your data model is going to work when in reality you actually don’t care just yet and you promise to think about it later. However, you also know better and you won’t think about it later and it’s actually a good thing Couch is forcing you to do this. The good news is that once you’re done, you’re done. Start whacking the data in and… as they say… relax. Though it’s relaxing after like a year of constipation.

In short, I still like Couch but man I’ll really consider an intermediate data model before I use persistence in future apps.

So there you have it. Fuck node off, change knockout to be completely different, and keep Couchdb. In conclusion: Javascript!

Dynamic languages vs static languages

I talk a little about the benefits of so-called dynamic programming languages.
by Sunny Kalsi on May 28, 2014, 11:25 p.m.

We can look at programs in two different ways: firstly as the space inside a program, and secondly as the space between two programs. It has always been the case that more open systems concentrate on the space between applications, and more closed systems concentrate on the space inside an application. Large monolithic pieces of proprietary software such as Adobe’s Photoshop have a lot of functionality built in, but do not communicate well with applications outside themselves. On the other hand, small applications on unix systems, such as 'cat' or 'cut', can chain and connect to each other. If we want a dynamic and flexible operating environment, we must think deeply about the spaces between applications.

Importantly, though, we aren’t talking only about applications, but services, too. In fact, all data shared between applications, be it via files on a filesystem, or via a protocol, or via an API, shares a common thread: the data is either self-describing or it is not. Note that in reality, there’s no such thing as “self-describing data”. Mostly, the data is in a strict format, but that format might contain a description of the format. Notably, formats such as XML are strongly self-describing, formats such as JSON are loosely self-describing, and formats such as an IP packet are not at all self-describing. Also of note, even a binary format such as ASN.1 can be considered self-describing.

There are also grey areas such as protocol buffers (or maybe even BSON?), where a standardised external description is used to generate code that forms a generator / parser combination for non self-describing formats. Broadly, though, we may split formats into “self-describing” and “not self-describing”. Part of the reason is actually design intent. The design intent of a “not self-describing” format is to minimise the error space for a packet of information. A packet may, for instance, have a CRC which allows for rejection of the entire packet wholesale. On the other hand, a “size” field of a packet will be defined as “n + 1”, so that a value of “0” is still considered valid. This is not being cheap on bits! If a value within a packet is invalid, what is a parser to do? In order to prevent this conundrum, all combinations of values in a packet should be valid, save for errors which allow an entire packet to be rejected.
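To make that design intent concrete, here’s a small sketch in Java (the packet layout, class, and method names are mine, purely illustrative): a one-byte size field interpreted as “n + 1”, so every bit pattern is a valid length, and a trailing CRC32 that lets the parser reject a whole packet wholesale rather than reason about errors inside it.

```java
import java.util.zip.CRC32;

public class PacketDemo {
    // Decode a one-byte "n + 1" size field: a stored 0 means length 1,
    // so every possible bit pattern in the field is a valid length.
    static int decodeSize(byte rawField) {
        return (rawField & 0xFF) + 1;
    }

    // Append the CRC32 of body as 4 big-endian bytes, "sealing" the packet.
    static byte[] seal(byte[] body) {
        CRC32 crc = new CRC32();
        crc.update(body, 0, body.length);
        long v = crc.getValue();
        byte[] out = new byte[body.length + 4];
        System.arraycopy(body, 0, out, 0, body.length);
        for (int i = 0; i < 4; i++) {
            out[body.length + i] = (byte) (v >>> (8 * (3 - i)));
        }
        return out;
    }

    // Accept the packet only if the trailing CRC32 matches everything
    // before it; any corruption rejects the entire packet wholesale.
    static boolean accept(byte[] packet) {
        if (packet.length < 5) return false;
        CRC32 crc = new CRC32();
        crc.update(packet, 0, packet.length - 4);
        long stored = 0;
        for (int i = packet.length - 4; i < packet.length; i++) {
            stored = (stored << 8) | (packet[i] & 0xFF);
        }
        return crc.getValue() == stored;
    }
}
```

The parser never has to decide what to do about a bad size field, because there is no bad size field; the only error case is “throw the whole packet away”.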

By the same token, a “packet” of information can be defined as a piece of information which can be safely rejected. What “safely” means here is a little broad, but note that this doesn’t just apply for streaming protocols where both applications are present. A “packet-based” file format is quite common from compression formats to video codecs. The key is the safety and predictability of the parser: The parser simply doesn’t have many error cases to consider based on errors in the packet.

On the other hand, the design goal of a self-describing format is coping with change. These formats generally sit only loosely on top of a packet format; XML or JSON, for instance, is just a long String in a particular encoding. Unlike the non self-describing format, the self-describing format leans heavily on both the program and the parser to ensure validity. An XML or JSON parser, for instance, has many checks that can be performed to ensure the validity of the message before it is even accepted, and even here there are many error cases that create grey areas for a potential parser.

For instance, if there are invalid characters, what does the parser do? What if there is a formatting error halfway through a stream? What if the text validates but not under a stricter interpretation? What if the text validates but does not match the schema? What if all of that is true but there are other logic errors in the correctly formatted interpretation of the data? All of these are usually configurable options that a parser is initialised with. Even after all of that, there are various error conditions and corner cases to consider.

What does this have to do with statically typed and dynamically typed programming languages? Well, I’m about to argue that self-describing formats are most attuned to dynamically typed languages, and non self-describing formats are most attuned to statically typed languages.

This mostly has to do with the internalisation and externalisation of type information. In statically typed languages, as in non self-describing data, the type information is externalised. For data, by definition type information is externalised when it is not in the data. For programs, almost by definition, the closer type information gets to runtime, the more “dynamically typed” the language is. The whole point of a statically typed language is that you know the types at compile time.

There are, of course, various grey areas. Java has a runtime and reflection, but is “statically typed”. In my view, though, the runtime makes it dynamically typed when using reflection. Indeed, when you look at the error cases when using, say, Spring, it very much seems like a “dynamically typed language”.

On the other side of the coin, dynamic languages must keep all type information at runtime. This is as much about verifying type information that isn’t available at compile time as it is about determining the types of objects at runtime. While we are aware of Python and Ruby being fully dynamic languages with a full runtime, even languages like C++ and Java have RTTI and reflection. The easy way to think about this is that C++ and Java will throw Exceptions at runtime because the type information is incorrect, just like a dynamic language. In the same way, self-describing data describes its own type information within the data. To some degree, you do not need to know the structure of the data or what it contains.
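The Java half of that is easy to demonstrate (a tiny illustrative sketch, names mine): the cast below compiles fine, and only fails when the runtime consults its type information, which is exactly the failure mode of a dynamic language.

```java
public class RuntimeTypeDemo {
    // Report the runtime type of a value, the way a dynamic language
    // would inspect it. This is reflection at its simplest.
    static String runtimeType(Object o) {
        return o.getClass().getSimpleName();
    }

    // This cast passes the compiler for any Object; the check happens
    // at runtime, using Java's RTTI, and throws ClassCastException.
    static Integer forceInteger(Object o) {
        return (Integer) o;
    }
}
```

Statically, `forceInteger` is perfectly well typed; the type error only exists at runtime, just as it would in Python or Ruby.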

Obviously, it is possible to use internalised data sources with statically typed languages and externalised data sources with dynamically typed languages, but it’s not an easy fit. For externalised data sources in dynamic languages, there’s a lengthy decomposition into their component parts, whereas in statically typed languages, it’s usually no more difficult than defining the data structure in the first place. Similarly, internalised data structures require complex parsers into static data structures in statically typed languages, whereas a dynamically typed language may not even have to care about the structures and types. It just inspects and alters them as if they were first class objects (in the case of JSON, they actually are).
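A rough Java sketch of the contrast (class and field names are mine, not any real library): the “dynamic” half treats a JSON-ish document as nested maps and never declares its shape, while the “static” half has to marshal the same data into a declared type before the rest of the program can touch it.

```java
import java.util.Map;

public class InternalisedDemo {
    // "Dynamic" style: the document is just maps; code inspects it
    // without a declared schema, and unknown fields simply ride along.
    static Object field(Map<String, Object> doc, String key) {
        return doc.get(key);
    }

    // "Static" style: a declared type the data must be parsed into.
    static class User {
        final String name;
        User(String name) { this.name = name; }
    }

    // The explicit marshalling step a static structure demands.
    static User parseUser(Map<String, Object> doc) {
        return new User((String) field(doc, "name"));
    }
}
```

Note that the map-based code keeps working if the document grows new fields, while `User` has to change every time the shape changes.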

The links go deeper though. With a static language and an externalised data format, you get the same guarantees: nothing has changed, by definition, since you compiled the code. The data format is the same, and so is the code interpreting it. Nothing can go wrong save for very specific error conditions. You effectively get static typing not only over the program, but also over the data it operates on. Contrast that with an internalised data format in a static language. All of a sudden you have error conditions everywhere, and they aren’t in predictable places. You may have noticed that static languages tend to have parsers that are very strict. The reason is purely to offer some clarity over how the program can fail. Having a statically typed program that can take a loosely formatted internalised data format and not explode (such as a browser) is no mean feat.

In the dynamic landscape, however, not only can this loosely formatted data be accepted, it can be passed directly into functions which carry out the actual computation. Even those functions need not know everything about the data structure. A module might be moving objects around or re-organising them, but it really doesn’t care what’s inside. As long as the structure is broadly the same, the code will continue to work. Even if the structure has changed completely, if the atoms remain intact then functions can operate over those atoms, keeping the structure the same. Even if both of those change, a dynamic language can introspect and deal with changes fairly elegantly.

This is where the idea of dynamic languages just being unityped static languages sort of falls down. If that were true, you couldn’t treat adding two strings and adding two numbers as distinct operations. Once a value has been bound as an integer, it can be added, but importantly, if the language doesn’t know what the type of some data is, it doesn’t matter. As long as the transformations on that data don’t mess with the data the program doesn’t know about, the code just keeps on working. You can grab a bunch of XML data and pass it through a bunch of functions that do an addition operation, and the functions don’t need to know what data they’re adding, because that forms part of the internalised description of the data. Is it an integer XML attribute? The integers get added. Is it a String? They get concatenated. Is there other data or other data structures nearby that the code doesn’t understand at all? Doesn’t matter; it’s executed correctly.
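That dispatch can be sketched in a few lines of Java (purely illustrative): the “add” below decides what to do from the runtime types of its arguments, the way a dynamic language’s + operator does, and everything it doesn’t recognise falls back to string concatenation.

```java
public class AddDemo {
    // Choose the operation from the runtime type: integers get added,
    // everything else gets concatenated as strings.
    static Object add(Object a, Object b) {
        if (a instanceof Integer && b instanceof Integer) {
            return (Integer) a + (Integer) b;
        }
        return String.valueOf(a) + String.valueOf(b);
    }
}
```

The caller never declares whether it is doing arithmetic or concatenation; that decision lives in the data.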

And this is what I’m getting at: in a world with many small apps, dynamic apps are king. This is why most scripts that keep a computer running nicely were written in Shell, then Perl, and now Python. These applications are effectively wiring data between applications. They need to know very little about the data, even though they may need to manipulate it before passing it on. Want to write a log parser? Python is probably far easier, more flexible, and more useful. Want to take a bunch of deep and complex JSON and parse out a simple calculation? Maybe Javascript is just the ticket. Having dynamic languages as high level co-ordinators of other applications is probably a very good idea.

It feels like people treat static or dynamic languages as a religion of sorts, but the fact of the matter is, these languages are more indicative of the kinds of problems we’re solving as opposed to the way in which we’re solving them. Static languages treat data as ingots of steel, and that’s good for when you want data that’s got to take a beating. Dynamic languages treat data like putty, and that’s good for when you need it to fit in that damn hole. In the end we need both kinds of languages sitting next to each other if we’re going to be able to process data correctly and flexibly, and we should be able to understand how to apply each to have the most powerful code.

In the end, the argument over dynamic and static languages is really an argument over which kind of data structures the program should be processing: data where the structure is expressed internally to the data structure, or data where the structure is expressed externally to the data structure. Ultimately, if we want the most flexible software, we need to know when to use which kind of data structure, and how to pass these data structures between programs in a landscape where the code is always changing. I feel that having static and dynamic languages co-operate in a multi-process environment will yield better and more flexible architectures than with a language monoculture.

Monads for Java programmers

I try and translate the value of monads from the wild and crazy world of Functional Programming to the rough and tumble world of Java programming.
by Sunny Kalsi on April 29, 2014, 1:07 a.m.

I’ve had about three goes at understanding monads. The first alone, the second via some category theory training at work, and the third via this tutorial. While each of the three approaches has helped me understand what’s going on, the monad tutorial is probably the best bang for buck. It’s quick and easy to understand.

One thing I’m aware of is that everyone who has tried to learn category theory has written a monad tutorial. There’s a joke going around that it only demonstrates their understanding of Monads, and doesn’t help anyone else. I’m going to attempt to buck that trend by explaining monads by using Java, and the terminology of traditional OO programming.

So firstly, the thing to understand is that Monads are a design pattern. Ultimately, what they are about is solving a design problem, in the same way as Interfaces, Factories, or Builders solve design problems. The difference is that usually these design patterns are about layering software, and abstracting responsibilities. Monads, however, are about abstracting side-effects. I think programming as a discipline has seen how important that is, so let’s have a look at how Monads solve this problem:

interface Monad<I>
{
    <O> Function<Monad<I>, Monad<O>> wrapFunction(Function<I, Monad<O>> fun);
    Monad<I> wrapValue(I val);
}

Before I continue, I’m doing my best to represent the Monad pattern in a “natural” way in Java. In languages with more advanced type systems, the same “code” above will actually be far more powerful, but even in Java-land, this is quite a useful pattern. Also, the Function definition is from Guava. I hope you’re familiar with it.

The above Interface will look odd to start with. After all, why have these “wrap” functions on an interface? What’s the interface for? How do you use it? How does one even remotely abstract side-effects with this? The answer is: Category Theory. At this point I wave my hands about and say “WOOOO” and everyone’s really impressed with how smart I am. Seriously, though, the theory is clever trickery, but when you actually work with it in practice it’s pretty straightforward.

People talk about Monads as “boxes”, and that’s an apt metaphor, but be careful, these are mathsey-boxes, so the metaphor will break down easily (there’s no unwrap!), and the value of them is not in the wrapping and unwrapping anyway. Note that these don’t work at all like the Proxy or the Facade design patterns, which you can kind of think of as “boxes”.

In order to demonstrate its use, I’m going to create a UselessMonad:

class UselessMonad<I> implements Monad<I>
{
    I val;

    I unwrapVal() {
        return this.val;
    }

    <O> Function<UselessMonad<I>, UselessMonad<O>> wrapFunction(Function<I, UselessMonad<O>> fun) {
        return new Function<UselessMonad<I>, UselessMonad<O>>() {
            UselessMonad<O> apply(UselessMonad<I> a) {
                return fun.apply(a.val);
            }
        };
    }

    UselessMonad<I> wrapValue(I a) {
        return new UselessMonad<I>(a); // Pretend there's a constructor.
    }
}

Note: I’ve skipped public final etc. etc. for brevity. Note also that the Interface types are now UselessMonad. I did that so that the type signatures are clear in the Monad interface above, but you should know how to change things so that the cast isn’t required.

OK, so this basically wraps a function and a value so that they are “boxed” in a UselessMonad. So far, this should look… odd… but not “difficult”. You should hopefully also notice that you can do this:

UselessMonad<String> v = new UselessMonad<String>();

Function<String, UselessMonad<String>> sayHello = ...;

String val = v.wrapFunction(sayHello).apply(v.wrapValue("world")).unwrapVal();

OK, first let’s talk about the v jiggery-pokery I’ve done. In Java, you can only put an interface on an instance. Unfortunately, we want those interface methods to be static. Since we can’t do that, this is a hack to get around it.

Next, I want to talk about the slightly strange signature of sayHello. I mean, why would a function like that return a UselessMonad? And also, what if a function just returned a String and not a UselessMonad? Well, you can just construct a Function from another Function!

Function<I, UselessMonad<O>> uselessOf(Function<I, O> fun) {
    return new Function<I, UselessMonad<O>>() {
        UselessMonad<O> apply(I a) {
            // Same hack as v above: an instance just to reach wrapValue().
            return new UselessMonad<O>().wrapValue(fun.apply(a));
        }
    };
}

To put it into words: you can take a Function that takes an I and returns an O, and create another Function that takes an I and returns a UselessMonad<O>, by calling wrapValue() on the result of the function. Before we continue down the rabbit hole, I just want to make it clear that it’s easy to generate functions that return the “Monad” version of a value from “normal” functions.

OK, so what does that wrapFunction line way up above actually do? Well, it does the equivalent of the following:

String val = sayHello.apply("world").unwrapVal();

So why write all that wrapping and unwrapping cruft if all you want to do is apply a Function? Well, the magic trick above is that the input and output of the function are both boxes! What this means is that you can write:

Function<String, String> howYaDoin = ...;

String val = v.wrapFunction(uselessOf(howYaDoin)).apply(
    v.wrapFunction(sayHello).apply(v.wrapValue("world"))).unwrapVal();

I’ve thrown in a uselessOf in there for good measure. Hopefully it doesn’t make things too confusing. Nice, right? I mean, it looks like a dog’s breakfast, but the good thing is that you can just keep wrapping functions till the cows come home. This is the whole point of the Monad! You go to all this trouble of functions that return functions and wrapping functions and the crazy signatures just so you can do this wrapping and applying and wrapping and applying.

But how does this help abstract away side effects?

Well, imagine your Monad wasn’t actually useless. Imagine it was something like Guava’s Optional:

class Optional<I> implements Monad<I>
{
    // Imagine the rest of Optional code here.
    <O> Function<Monad<I>, Monad<O>> wrapFunction(Function<I, Monad<O>> fun) {
        return new Function<Monad<I>, Monad<O>>() {
            Monad<O> apply(Monad<I> val) {
                Optional<I> opt = (Optional<I>) val; // Pretend this cast is safe.
                if (opt.isPresent()) {
                    return fun.apply(opt.get());
                }
                return absent();
            }
        };
    }

    Monad<I> wrapValue(I val) {
        return Optional.of(val);
    }
}

This means you can call code like:

Optional<String> val = v.wrapFunction(optionalOf(howYaDoin)).apply(
    v.wrapFunction(maybeSayHello).apply(v.wrapValue("world")));

Now, we can see the Monad in action: it allows you to chain up commands that return an Optional without having to constantly check for null, even though the howYaDoin function doesn’t expect a null as its input, and even though the maybeSayHello function might return absent(), it all just works. What’s even better is that this isn’t just true of Optional, but things like Futures, Lists, Logging, Transactions, etc. can all be written and composed in this way. A few more things to note here are that the “side-effects” of what’s Optional and what’s not are written in a completely type-safe way. You can’t accidentally do something silly like pass an Optional value somewhere where it’s not expected. There’s also a clear separation of concerns between the side-effects of potentially failing functions, and the actual main flow of the code. This is the problem that Monads solve.

Another important note: You might, as you’re writing Functions and things, get to a point where you have a Monad<Monad<I>>. Another important property of Monads is that you can take a nested structure of monads and create a single one:

Monad<I> flatten(Monad<Monad<I>> m) {
    return m.wrapFunction(new Function<Monad<I>, Monad<I>>() {
        Monad<I> apply(Monad<I> a) {
            return a;
        }
    }).apply(m);
}

Again, you need to liberally sprinkle more generics on there, but the idea is correct. This is slightly magical as well, but basically it takes a function that just returns the value that it’s given, wraps it so the input will be a Monad<Monad<I>>, and will return a Monad<I>.

That’s all kind of neat, but for the Optional case, why not just do:

Optional<String> val = Optional.of("world").transform(maybeSayHello).transform(howYaDoin);

The short answer is: Yeah, that’s probably the smarter way. In languages like Haskell the Monad has significantly more power than in Java, so Monad is clearly the more attractive solution there. However, in Java, Guava’s chaining approach is pretty good. There are cases where the Monad way will result in less boilerplate code, but it’s probably not worth the added complexity. This is especially true if you look at all those wrappings occurring everywhere!
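For what it’s worth, the java.util.Optional added in Java 8 (released just before this post) bakes the same pattern into the standard library: flatMap is wrapFunction-then-apply in disguise, of is wrapValue, and map lifts a “normal” function the way uselessOf does. A small sketch (maybeGreet is a made-up example function):

```java
import java.util.Optional;

public class OptionalChain {
    // A function that may "fail" by returning empty, like maybeSayHello.
    static Optional<String> maybeGreet(String s) {
        return s.isEmpty() ? Optional.empty() : Optional.of("hello " + s);
    }

    static String run(String input) {
        return Optional.of(input)
                .flatMap(OptionalChain::maybeGreet) // monadic bind: may short-circuit
                .map(String::toUpperCase)           // lifts a plain function
                .orElse("absent");
    }
}
```

The chain never checks for null anywhere: if `maybeGreet` comes back empty, the `map` is silently skipped and you fall through to `orElse`, which is exactly the side-effect abstraction described above.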

In conclusion, hopefully now you have a hands-on understanding of what a Monad is and where you might want to use it. It has limited usefulness in Java, but nevertheless is a very powerful design pattern. Keep in mind that this tutorial is not meant to cover the category theory and all the requirements of the design pattern (such as the Monad Laws, which you must not break), nor does any of this code compile. It’s probably worth writing this code in a way that is type-safe and compiles, and having a play with Monads. They are a very powerful pattern and once you get the hang of them you’ll see uses for them all over the place.

On Linux and Desktops

I talk about what Linux really means in the modern day.
by Sunny Kalsi on April 27, 2014, 4:32 p.m.

I generally try and stay away from the whole “linux on the desktop” argument. It very rarely makes any sense. Unfortunately, with the release of Ubuntu 14.04 a lot of people are yammering on about “Linux on the desktop”, and unfortunately they’re all concluding with the same old tired cliches. What’s worse is that the obvious rebuttal, while on point, is unsatisfying.

It sounds like making excuses, whereas the real issue is a fundamental misunderstanding of what “Linux” actually is. There are various complaints about how you will lose data if you install it, how it won’t run Windows apps, how it takes “tinkering” to get it to work because you have to maybe go into the computer’s BIOS. What? That’s like saying putting petrol in the car is inconvenient because you have to open the petrol tank. Before self-service in service stations, I’m sure that would’ve been counter-intuitive. Today, most people manage.

For cars, we have mechanics that take care of most of the day-to-day running of the vehicle, if not refilling the petrol. They will manage other disposable items like tyres, spark plugs, oil sumps, belts, hoses, the list goes on. There’s no similar “job” for computers. It’s usually done by the resident nerd. Someone who manages to be proficient enough to get the user up and running again. Those guys are tinkerers, and Linux is most definitely up their alley, but that’s a tautology. For those that don’t care, they don’t even know what Linux is (or Windows or Mac OS, for that matter, only that they might need to pay for it, like they pay for AOL).

But in the end, what is someone using when they are using “Linux”? This isn’t some meditation on “Linux” vs “GNU/Linux”, I mean really. If someone is using Mac OS X, are they using “Linux”? Some might say “oh no, it’s just another Unix”, but it’s most definitely not. Mac OS uses CUPS and Samba, for instance, and that’s not coming from some Posix heritage, it’s coming straight from the open source community. It might not be under some organisational banner like “GNU” or “Apache”, but it forms part of the core operating system, as we understand it in the modern age.

The thing with “Linux” as a banner is, it is almost defined by its otherness. Is BSD “Linux”? Is it more Linux than Mac OS X is Linux? Someone running OpenBSD might be offended at being labelled a “Linux” user, but to the metaphysical grandma, they are the same. Windows is Windows, Mac is Mac, Linux is “Other”. It doesn’t matter if you’re running Debian Hurd ARM with a Mach microkernel running mostly BSD software, you’re the “other”, the “Linux guy”.

In a practical sense, there’s a huge number of “Linux guys” who actually run Mac OS with some extra GNU packages. There’s also a fair number who run Windows with Cygwin on it. The “oddity”, if anything, are the people who run Visual Studio toolchains in a traditional Windows environment. For a software developer, if you’re not using Linux, you’re missing out. Today, I believe that’s true even for ex-pats running Mac OS. Linux can be a little janky on some hardware, but Mac OS is more janky in the software development experience. In fact, on Mac hardware, Linux would probably fare pretty well.

But I digress. Really, do you need to be running the Linux Kernel specifically in order to run Linux? Do you need to be running the GNU toolchain? Do you need to be running GTK or Qt? Do you need to be using X or Wayland or Mir? Today, almost everyone uses a huge chunk of open source every day, whether they’re technical folk or not, and “Running Linux” isn’t really a thing. Today, you can run a fully FOSS stack, top to bottom, and the most negative thing someone can say about it is that you might have to format your drive or enter the BIOS.

But at the end of the day, does that even matter for the metaphysical grandma? I’ve had people say to me “Oh hey I’m interested in trying out that Linux thing. Does it run on Windows?” Today, I’m not even sure any more. Does it?

App Stores are dumb

I think people talk about an app store as a "must have thing" for phones, but I've never been sold on it.
by Sunny Kalsi on April 22, 2014, 2:22 p.m.

My attitude can be summed up with that silly beer drinking app. That’s pretty much the first iPhone app I saw, and it’s been par for the course ever since. I can’t actually think of a good and useful app from the app store that doesn’t just do what a phone already does. Let’s start with the iPhone, since it typifies my point completely.

I’d like to note that I’m not talking about tablets here. Aside from the fact that I don’t understand what they’re for, I also see their use cases as fairly different.

Firstly, games are by far the most popular (and profitable) thing on app stores. I feel like that’s true because we don’t have a different device to use when on the train. I think games kind of neatly fill that gap. However, this isn’t really a “use case”, or “killer app”. It’s entertainment or passing time, but I don’t really see it as adding systemically to what a phone really is.

Another popular “category” of apps is photo taking and management apps. These are actually quite decent, but single use. They also don’t offer anything substantially different to one another. Why use the Instagram app over the Flickr app? Because they’re owned by different companies, I suppose. Really, wouldn’t it be preferable to have an app that uploads to both and is a superior picture taking app? I’m going to say video taking and management apps like Youtube or Vine are in this category as well. I don’t see why a phone couldn’t simply come with the best of these apps.

Finally, there are “messaging” apps. That is, apps that replicate the functionality of a phone, but on a separate (usually proprietary) network from the phone network you’re on. This is basically a market workaround for the fact that phone companies are arseholes who just won’t give you what you want, and so are the messaging companies who make the apps. This could all be integrated neatly on the backend, but instead you personally have to know that some friends use WhatsApp, others use Facebook, and others prefer SMS.

And… that’s it? Well, there are mapping apps, which tell you where you are, where other things are, and how to get to those other things. I think this is the closest thing you get to a “killer app” on the phone. There are also music streaming and playback apps. That’s undeniably useful. Health and fitness tracking apps, too, can be quite useful.

But I’m starting to struggle now. Even email and calendar apps aren’t as useful today as my PalmPilot was several years ago. On top of that, most people seem to stay with whatever the phone installs by default. The only other apps I can find are just rehashes of websites — things like eBay or Amazon.

I think I used to see Android as different, because of its ability to have apps interconnect seamlessly. I thought “apps” could be used and connected together like LEGO. Of course, there’s no money in that. Apps today on Android are much like apps on the iPhone — small islands unto themselves, and about as useful.

I want to make it clear here that I’m not talking about apps, I’m talking about app stores. The question I keep running up against is: why? Clearly, an app store makes a lot of money for Apple, and probably at least a little for Google, but what value does it add for a user? Not much, I think. You hear a lot about how Android users don’t spend much while iOS users spend more, but that’s about spending habits; it doesn’t address the core issue: why have an app store at all?

Really, if a phone came with the best apps for each of the categories I’ve mentioned, pre-installed, why would I need an app store? What could I possibly do that I couldn’t do without an app store? Because the thing is, I could do a lot more without an app store! I could have a fully integrated experience! If a single messaging app could handle Twitter, Skype, phone calls, video calls, Viber, Facebook, et cetera, wouldn’t that be great? At one point, Android was going for that with its “People” app (you can still see people’s most recent Facebook and Twitter updates in that app). The idea was that you could “plug in” different accounts and have a seamless experience across them all.

The same goes for emails and calendars. All I want is an integrated experience that matches what I could do ten-odd years ago. Also photos. Many newer phones come with specialised camera hardware, but manufacturers can’t go all out because they need to support legacy applications. If cameras could simply integrate with backend services like Flickr, Instagram, and so on, a phone manufacturer could tinker a lot more with the hardware. The same goes for sound and video apps, fitness apps, whatever! Each of these should really be a single app that plugs into backend services. No need for an app store.

That dream is dead, but why aren’t companies like HTC aggressively pursuing this end goal? HTC already pre-bundles a bunch of great third-party apps and integrates them through BlinkFeed. Find an external way to update those apps and install new functionality, make first-party apps that are best in class, remove the app store, and finally regain control of your own destiny! Heck, work with telcos to actually collate and organise these services and provide something of real value to consumers instead of shafting them!

In short, I don’t understand app stores. I don’t see their value for users, I don’t see them solving new problems, I don’t see the value for phone manufacturers, I don’t see the value for app creators, and I don’t know why they exist, except to enrich their platform owners. Instead, we should focus on good integration, seamless and frequent updates, strong applications, and a consistent interface.