I talk a little about what it means to be "Gnome" and the way Free GUI apps should be designed.
June 14, 2013, 11:02 p.m.
We can see that "applications" actually start coming together like LEGO blocks.
If you think about modern operating system design, most of the action is in the mobile space, and the moves in mobile space are clear: Applications are installed by the user, not by root. The phone does not trust the user. The user does not trust the apps (although they still manage to steal all the users’ data), and the apps do not trust each other. Everything is kept piecemeal, separate, isolated. There are gates and checkpoints. Sharing data is hard.
This has actually been part of the “dream” of OS design and research — good separation between applications — for some time now, but shifting a paradigm like this requires a whole new playing field, which smart phones provide. The reasoning here is simple: applications are proprietary, so they cannot be trusted. The operating system needs to provide a solid base, so the user cannot be trusted to make changes.
If we look at Ubuntu Phone, it has the same attitude. It is building a “solid base” on top of which applications are installed, none of which trust each other, or the user.
But free software isn’t like proprietary software. The Unix principle of “do one thing and do it well” marries perfectly with free software, where you can read the source code, compile it yourself, and connect it together. When writing free software, part of the thinking should be how everyone can work together towards a common goal, and how we can take advantage of the freedom. This freedom engenders trust — trust between the applications, between the user and the operating system — which dramatically influences the design of the entire system.
Think about a distribution like Debian: if we use free software, many people have put this software together, are working together, and the functionality of the software is known end to end, even if you don’t have everything installed. The contents of the “app store” are similarly created and shared by trusted parties, and their work is transparent.
Think about Gnome for a second, and what it means. Originally it was the GNU Network Object Model Environment; really, the purpose was for it to get apps to talk to each other. As time went on, Gnome realised that it was becoming a bloated framework that did too much, and began to split its work up into small libraries and services. These are used everywhere, even if you’re not in a Gnome environment! Ultimately, their role is really to be the “common language” that UI apps use to communicate with one another. Since we’re in an environment of trust, there may not be any code involved here!
I’ve been thinking recently: what would Gnome 4.0 look like?
- Firstly, most of the “actual work” should continue to be done in a library and/or a service. Luckily this already happens — with libraries like GStreamer, or library/services like Telepathy. This ensures that the UI is simply a facade, and that a particular task could equally be performed with a similar CLI tool.
- An application should no longer be a monolithic thing, rather an Application Fragment (AF). Each fragment runs in its own process.
- Application Fragments should have inputs and outputs. An AF can “ask for” an input to be filled, or can ask for an output to go somewhere. We call this “share”.
- There should be an Application Container (AC), which manages AFs, but it might not know specifically which fragments it is managing. The Gnome Shell as it currently stands should be an AC. An AF might also be an AC; a terminal might fit either case.
- These inputs and outputs should be satisfied by putting messages on a bus, such as DBus. The (nearest parent) AC should look up the MIME type (or similar) of the message, and try to fill it with an AF it knows about. The actual data should be streamed using message passing.
- An AF should also have a “context” which it gets initialised with. This should likely just be a YAML or JSON stream fed into stdin. Before exiting, the AF should emit a context which can be used for the next invocation (if nothing is emitted, then the existing context is kept). Several contexts might exist, including a “null” context where no data is fed into stdin. Several AFs might be running with different contexts. A minimal sketch of this lifecycle follows the list.
- The AC controls the contexts of the AFs appropriately.
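To make the context idea a little more concrete, here is a minimal sketch of an AF’s lifecycle under the JSON-on-stdin convention described above. Nothing here is an existing Gnome interface; the fragment, the context keys, and the convention itself are all hypothetical illustration.

```python
#!/usr/bin/env python3
# Hypothetical sketch of an Application Fragment's context handling.
# Assumes the parent AC feeds a JSON context on stdin and reads a new
# context from stdout when the fragment exits; none of these names or
# conventions are a real Gnome API.
import json
import sys


def main():
    raw = sys.stdin.read()
    # A "null" context: the AC started us with nothing on stdin.
    context = json.loads(raw) if raw.strip() else {}

    # The fragment's actual work would happen here, e.g. a "save file"
    # fragment remembering which directory it was last pointed at.
    last_dir = context.get("last_directory", "~")

    # ... present the UI, let the user act on last_dir ...

    # Emit an updated context for the next invocation. If we print
    # nothing, the AC keeps the existing context.
    json.dump({"last_directory": last_dir}, sys.stdout)


if __name__ == "__main__":
    main()
```

Run with an empty stdin this behaves like the “null” context; fed the previous run’s output, it picks up where it left off.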
Let’s look at some examples:
- The “save file dialog” is simply an application fragment which handles generic file output. A separate AF would just “share” data to a “save file” (a toy sketch of this routing follows these examples).
- The “settings” of an AF might be an input. A settings AF could output settings data for an AF. It would only need to know the schema of the settings, not the individual application in question!
- A terminal application might output an image, which the terminal emulator can handle, or perhaps it gets percolated up to the Shell. In this way we can implement something similar to what TermKit was attempting.
- An image could be viewed with one AF, but edited using another AF. Granted this would mean tight integration between the two AFs (very specialised sharing of data, or sharing a very specific message stream), but nevertheless this would basically be an image viewer normally in which you could temporarily “inject” an editor.
- A tiling AC which contains a traditional-style “application”: a file browser in the left window, an editor in the right, and additional tools as flexible AFs. A text search box could work just as well in a text editor as in a document editor.
- Clipboard as an AF. You select data and “share” it straight to another app, or you can “store in clipboard”.
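And as a rough illustration of the “share” routing mentioned above, here is a toy dispatcher in which an AC maps a MIME type to a fragment it knows about. The registry, the handler, and the in-process function calls are all invented for illustration; a real AC would presumably launch separate AF processes and stream the data over a bus such as DBus.

```python
#!/usr/bin/env python3
# Toy sketch of an Application Container routing a "share" by MIME type.
# The handler below stands in for a real Application Fragment; a real AC
# would spawn a separate process and pass the data over a bus like DBus.


def save_file_fragment(data: bytes, context: dict) -> dict:
    """Hypothetical 'save file' AF: write the data, return a new context."""
    directory = context.get("last_directory", ".")
    path = f"{directory}/shared-output.txt"
    with open(path, "wb") as f:
        f.write(data)
    return {"last_directory": directory}


# Which fragment handles which kind of shared output.
FRAGMENT_REGISTRY = {
    "text/plain": save_file_fragment,
}


def share(mime_type: str, data: bytes, context: dict) -> dict:
    """The AC's side of 'share': find a fragment that takes this MIME type."""
    handler = FRAGMENT_REGISTRY.get(mime_type)
    if handler is None:
        # No local handler: a real AC would percolate the request up to
        # its parent container instead of failing outright.
        raise LookupError(f"no fragment registered for {mime_type}")
    return handler(data, context)


if __name__ == "__main__":
    print(share("text/plain", b"hello, fragments", {"last_directory": "."}))
```

The point of the sketch is only the shape of the interaction: the sharing AF never needs to know which fragment ends up handling the data, only that its container can find one.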
We can see that “applications” actually start coming together like LEGO blocks: small, self-contained ideas that do one thing and do it well. They trust their ecosystem to provide the other parts they are after. They trust other applications to communicate with them properly and not be malicious, and they trust the user. It’s a virtuous cycle in which applications gain functionality that multiplies exponentially and organically.
How might we present applications? Maybe a bit like the cards that Google is using: These are small, interconnected, and solve that problem of “doing one thing well”. In an earlier Gnome 3 review I suggested that nature ought to be a theme. This still makes some sense, with “vines” connecting AFs together. Perhaps these are like fruits or leaves that come off a tree trunk. You sort of expand that tree as you launch more AFs, or “prune” branches as tasks get completed. Maybe completed tasks get turned into branches, and later you can go back and change what you did. New tasks are like saplings.
Whatever it is, we should be aiming for authenticity with the framework we have as free software developers. The current model of copying proprietary software does no favours to the movement, nor does it take advantage of the places where the movement excels. As I think forward to what Gnome 4 could be, I’m hopeful that it will emphasise the ideas of trust and sharing that make the free software community unique.
I talk about how the Surface Pro is the true incarnation of "Post PC" compared to, say, the iPad.
May 30, 2013, 8:41 a.m.
Talking about a computer is like talking about an engine. By itself it does nothing. There’s an old joke where someone is playing with a computer and his partner asks “What does it do?”; “It… err… computes”. I dislike the concept of “Post PC”, and much prefer Anandtech’s view of “Good enough computing”. It takes certain tasks that some people used to do with powerful PCs and converts them into tasks that people can do with less powerful phones and tablets.
Things like email. This is why there’s been the rise of the derisive idea that you need a “real PC” to do “real work”. Many managers have been able to supplant their PCs because they used them purely for communication. However, I think the main shift has been that many people who used to use nothing (perhaps pen and paper) can now use a tablet or phone to do their “computing” tasks.
However, past a certain level of complexity you need a certain amount of computing power; there’s no getting around that. Whether it is gaming, complex calculations, or handling large amounts of data, at a certain point a tablet alone won’t cut it. To some extent the computation could happen in a server farm somewhere, but it would be naive to suggest there are no use cases where a powerful local machine improves the user experience.
If you’re a developer in today’s workflow, you need a powerful machine. Sometimes this machine will simulate a complex environment with many servers doing many calculations. You need this for yourself so you can test behaviour adequately, and be confident that your code is working as expected. In this scenario, an iPad is insufficient. However, the more I think about it, the more I start to think of the Surface Pro as the ideal tool.
Today, many developers will swear by their kilowatt desktop PCs. However, many more are shifting to increasingly powerful laptops like the MacBook Pro. The portability is a compromise, though: the keyboard sucks, the touchpad sucks, and the form factor is inflexible for actually carrying around while it’s on. I’m not Mac-bashing here; any laptop has this problem. Likely, you’ll dock your MBP and have it connected to a second screen, a keyboard, and a mouse.
However, what you really want is just the screen.
When you are at your desk, you have a high-quality mechanical keyboard and an ergonomic mouse. You plug your Surface Pro in; it has a full-HD screen and can drive a second one. When you need to go to a meeting or move desks, you take your Surface and can jot down notes with the Wacom pen and do minor typing with the on-screen keyboard. A workplace usually has desks, so carrying it around is not a big deal.
You can also temporarily “move desks” fairly easily by taking your keyboard along. It’s only marginally more annoying than a laptop and you get a full machine — you won’t have a mouse, but you have a touchscreen, and you won’t have a second screen, but your screen is a full-HD display. This is a full laptop without the useless stuff like a built-in keyboard, and instead you get some useful tablet functionality. Unlike the iPad, the Surface Pro actually obviates the need for your laptop or desktop.
And that’s truly Post PC!
An idea has come to me over time and all at once: we've been thinking about the GPL backwards this entire time. I will try to go through what the problems are and where the solutions lie.
May 27, 2013, 11:31 p.m.
GPL software should then focus on creating a holistic end user experience with that GPL guarantee
Unlike MIT-style licenses, which are very permissive, the GPL attempts to secure guarantees for the end user. However, such guarantees are often very hard to deliver, and can be of limited value. Further, the license, operating at the “source code” level, is too far removed from an end user for them to consider it valuable. In addition, the “viral” nature of the GPL makes it difficult for businesses to co-operate in the creation of more software.
My proposal is in a couple of parts, and comes from some observations in history:
- Write library code under a permissive license like BSD/MIT. This allows industry to participate, and doesn’t hurt the Libre movement. Instead, use a “guarantee” or branding to reassure a user that all the binaries are in fact built directly from the source, and that the source is available. In the early days of BSD/GPL, a lot of people thought it was OK to re-license BSD code as GPL, since the GPL is extra guarantees “on top of” BSD. Obviously, this was naive. However, the intent was to show the user that their code was not tampered with, and was present in its full form.
- Write end-user code in a GPL style license, and again add that branding as end-user guarantee. The GPL will need to be modified so that linked libraries need only be “guaranteed” (i.e. are formed with BSD style licenses and are not encumbered and the source code is included) as opposed to being fully viral. This will also remove the requirement for an “LGPL”. The thing the user is looking for is the guarantee of the software as presented, not that the software cannot be encumbered in future. It will also be impossible to encumber GPL software, even though it will potentially be linked against non-GPL software, because the guarantee ensures that the source code is available.
GPL software should then focus on creating a holistic end-user experience with that GPL guarantee, using either GPL or BSD-style licenses for libraries. The guarantee pushes the deliverable of “freedom” front and center for users. It also makes the GPL less viral, since it can be linked against BSD-style licensed code. However, none of the guarantee is weakened: the software is completely free-as-in-speech. Any use of proprietary software will taint the guarantee at the library layer, thus disallowing its use for the GPLed end-user software.
The GPL is clearly not working, and the reason is that it is too onerous on developers and does not clearly outline the benefit to users. Branding user software with a GPL “guarantee” will more clearly communicate that benefit. Making the GPL “connect” to BSD-style licenses (with the guarantee) will reduce the fear of the license for businesses and individual developers, and will also engender greater co-operation and leverage more BSD-licensed software.
I talk about the new Flickr model and how it isn't quite how Flickr advertise it, nor is it quite how people are perceiving it.
May 22, 2013, 2:11 p.m.
It's not "extra space" -- Flickr would still have to provide that; rather a change of model.
People talk about Flickr as now being ad-supported as opposed to freemium, and see the one terabyte of space as a brand-new feature, with the newer, more expensive offerings sort of tacked on. However, I disagree, and think you need to look at it from Flickr’s perspective.
Flickr has always stored an indefinite amount of data (terabytes, even), but it has never shown you that it is doing so. Instead, this was mediated with a small upload quota of 20-ish megs per month. There was no limit on image size, only on the quota. Flickr would also always keep an unlimited number of pictures; however, if you didn’t go for the pro edition, you could only see 200 of them. Then, whenever you went pro, your quota would become unlimited and you could see all your pictures, even the ones which went past the 200-picture limit while you were a “free” customer.
This has some problems for storage, namely that if a customer goes pro for a few months, uploads gigabytes, then goes back to free, you still have to keep all their data around indefinitely. However, most free customers would only use 20 megabytes a month and it would be ages before their data usage mattered.
I actually think this model worked best for Flickr’s community. I would only upload original images, which amounted to a handful a month, but I would be really careful when selecting which image went onto Flickr. The quota became a way of “managing” your photo uploads: high quality, consistent, a slow drip. It was like This Is My Jam, but for pictures. It also made me an aspirational pro user: I could imagine wanting all my pics on Flickr, or wanting to upload more than my 20 megs, or maybe people would start to buy my pictures and I would want all of them to be available.
However, when Flickr was bought by Yahoo, they really wanted MOAR PICTURES, which meant the limit was increased from 20 to 200 megs, but the maximum quality was pushed way down. As you can imagine, this broke the model for the customers (the aspirational, self-curated community) but was just fine for Yahoo. However, Flickr was also keeping every single picture; the uploads were still unlimited, but the user was limited to seeing only 200 pictures.
In this light the new announcement makes sense. It’s not “extra space” — Flickr would still have to provide that; rather, it’s a change of model. The high-quality uploads are back, but instead of unlimited space and limited upload bandwidth, we get a huge (but limited!) pot we can throw our pics in. It takes the aspirational pro user and makes them a free user. However, this is really a compromise, because Yahoo still want all your pictures. They want this to be a cloud storage facility for pictures, and they’ll pay for it with advertising.
However, they also know their customers well. Pro users would likely be whales: they’d put a huge strain on the system, but would also likely be willing to shell out a huge amount. This is why the prices have gone up. A lot of the people who want the old Flickr back actually want the aspirational model back. They’re not paying customers, but they still want the option of becoming those paying customers with unlimited everything, and to keep their shelter as slow, curating uploaders.
In the end, Yahoo will probably do well from this. However, the Flickr we all know and love, where professional photographers mingled with people working on their art (the true community of Flickr), is gone. I don’t know if people will treat Flickr like they treat Gmail, but if so, good on Flickr for knowing what works.
I talk about caring for a random person on the internet.
May 9, 2013, 11:13 p.m.
Allie Brosh recently updated her blog for the first time since the end of 2011, when she talked about going into depression. The post immediately before that one was from some six months earlier, talking about how she was going to write a book. I’ve talked about the asymmetrical nature of the relationship between an ordinary person and a celebrity before, but in the case of Allie Brosh, that relationship rings especially true.
For me, and probably for many others (a few thousand, judging by the comments), there’s a chunk of our brains dedicated to Allie Brosh as it would be to a distant friend, even though she holds no such part for us — I’m not blaming her, that’s the nature of this sort of relationship. Every now and then I’d find myself thinking how the book was going, and after the end of 2011, how Allie Brosh was going.
Don’t get me wrong, I worry and think of my real friends more, but on the same scale as you would think of a close acquaintance, I worried for her, too. I’m not the only one, as there was a Reddit post asking if anyone knew how Allie was doing after her post on depression. Allie responded saying she was doing OK but still struggling. The post today was something of a watershed moment. Every few months I’d remember and want to know how everything was with Allie, but to no avail.
Allie hits close to home for me because of the way she eloquently, and entertainingly, describes her life. Whether it is moments from her childhood or her recent adult life, she gives you a sense of being there, in the drama and chaos of the moment. The way she tells her stories, you don’t feel like a fly on the wall so much as a member of the family. The hyperbole isn’t the kind you get from repeated re-tellings of a story; rather, it’s the way memory and closeness remove the detail but keep the essential portions, making them more vivid, hyper-real. They embed themselves in your mind. You understand her life, her dog, her attitude.
But we don’t understand her at all. She publishes what she chooses to. To some extent it is for entertainment, but she likely also has a side to herself she wouldn’t show on the internet. The depression just hints at a person we’ll never know. When watching TV or even reading things on the internet I try and keep the asymmetrical nature of the relationship in mind, but in the case of Allie Brosh I’ve failed. I care. And maybe this just makes me a pleb, but I hope she’s OK. For my sake.