The Elegant Chaos Blog

December 20, 2014

Having the BBC make its Panorama program all about Apple is tabloid sensationalism, and it sends out completely the wrong message.

It’s saying to companies who attempt, in however ineffectual a way, to do the right thing: “stick your head above the parapet, and we’ll monitor you twice as hard as all of your competitors, even though they are engaged in exactly the same practices”.

The exploitation of labour in poorer countries is a massive problem, but it’s also what the current free trade, free market economies of the richer countries are largely based on.

Our cheap consumer goods don’t appear like magic from nowhere. These things only cost what they do because the people who make them were paid a pittance to work too long in dangerous conditions, and the materials they were made of were sourced in a similarly dubious way.

I’m not saying that it’s somehow easy for us as individual consumers to combat this, but it’s completely hypocritical for anyone who owns a computer, TV, mobile phone, etc. to single out individual companies without examining the conditions in which their own possessions were produced.

The way to solve this problem is to regulate. We need to place the burden of proof on manufacturers and sellers to show that the wares they are peddling have been produced ethically and sustainably. We then need to make it illegal to import and sell things that don’t meet the standard.

That is something that can only be done by governments, and it can only be done if the political will is there to do it.

Which comes back to voters.

Which means you.

more...

A nice quote from the developers of SQLite, who have apparently got some serious performance improvements in their latest version:

The 50% faster number above is not about better query plans. This is 50% faster at the low-level grunt work of moving bits on and off disk and searching b-trees. We have achieved this by incorporating hundreds of micro-optimizations. Each micro-optimization might improve the performance by as little as 0.05%. If we get one that improves performance by 0.25%, that is considered a huge win. Each of these optimizations is unmeasurable on a real-world system (we have to use cachegrind to get repeatable run-times) but if you do enough of them, they add up.

 

I have from time to time been accused of being too obsessed with seemingly trivial performance issues when writing everyday code.

There is an orthodoxy based around the concept of premature optimization which seems to encourage some people to believe that performance should be wilfully ignored when writing code, and only dealt with in an isolated step, using tools like Instruments, once all the dust has settled.

Even then, there is a tendency to focus on the low-hanging fruit - the top few methods that show up in a profile - and to ignore the rest, or throw up one’s hands at the prospect of improving them. Small (or not so small) overheads that show up in all the code, such as those associated with message passing, memory allocation, and that sort of thing, can easily get overlooked. Similarly, aiming for a particular programming style or idiom can lead us to overlook the fact that the choices it imposes have consequences, like lots of dynamic allocation, or lots of memory copying, or lots of synchronisation, or hitting memory in the wrong order and screwing the cache.

I’m not saying that the basic premise of the premature optimization argument is wrong - far from it. It does make sense not to waste massive amounts of time doing insanely complex optimizations too early, on the wrong code. It does make sense to use tools to guide you, rather than guessing. It does make sense also to write clean code that makes your intent obvious.

Most of the time, in any case, the biggest improvements come from picking the correct algorithms, rather than from twiddling individual lines of your code.

What numbers like the ones quoted above show, though, is that a large number of small improvements to performance can have a massive impact in aggregate. You shouldn’t obfuscate your code unnecessarily or obsessively, but if there are two ways to achieve the same aim, both of which are clean and easy to understand, and one of them is obviously more efficient in speed or space, then you’d be a fool not to choose it.
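
To make that concrete, here’s a trivial (and entirely invented) Swift example of the sort of choice I mean:

    // Hypothetical example: checking whether any item in a list is overdue.
    struct Item {
        let name: String
        let isOverdue: Bool
    }

    let items = [Item(name: "report", isOverdue: false),
                 Item(name: "invoice", isOverdue: true)]

    // Both of these are clean and easy to read, and both answer the same question.
    // The first builds an intermediate array of every overdue item just to count it;
    // the second stops at the first match and allocates nothing extra.
    let hasOverdue = items.filter { $0.isOverdue }.count > 0
    let hasOverdueCheaper = items.contains { $0.isOverdue }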

You can only make an informed decision about which implementation to choose if you have some basic awareness of performance and the implications of your choices. Aiming for a consistent style (functional, object-oriented, whatever) probably makes a lot of sense if it cleans up your code base and makes the whole thing easier to understand; but only if you acknowledge the impact it has.

It’s not wise to just defer even thinking about all this stuff until some mythical optimization phase later. 

It definitely is wise to gain a basic understanding of how the building blocks of your language work, and roughly how the things on which you build are implemented, and to make decisions accordingly.

In case you’re wondering, the title of this post comes from the following quote by Herb Sutter:

Definition: Premature pessimization is when you write code that is slower than it needs to be, usually by asking for unnecessary extra work, when equivalently complex code would be faster and should just naturally flow out of your fingers.
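
A made-up Swift illustration of the kind of thing Sutter is describing: both lines below find the lowest score, and neither is harder to write or read than the other, but the first asks for a pile of work it doesn’t need.

    let scores = [42, 7, 19, 3, 88]

    // Sorts (and copies) the whole array just to look at one element.
    let lowest = scores.sorted().first

    // Same intent, equivalently simple, a single pass and no copy.
    let lowestCheaper = scores.min()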

more...

October 27, 2014

A cautionary tale from Christoffer Lernö: To Swift and back again

Doesn’t inspire a great deal of confidence.

The big thing that worries me about Swift is the way that Apple’s culture (cult?) of secrecy meant that it was developed in a virtual vacuum, and not even widely dog-fooded within Apple.

Writing a new programming language is a bit like doing a cover of Stairway to Heaven. It’s the kind of thing every aspiring programmer wants to do, and the results usually range from mildly embarrassing to excruciatingly bad. To pull it off, you have to be fucking good.

There’s no denying the pedigree of Swift’s authors, but I actually think that a better name for the language might have been Hubris.

more...

A bug report from Tom Harrington (via Michael Tsai): Code Signing Is Flaky and Unreliable

I’m not sure flaky and unreliable is quite how I’d describe it.

I’d describe it as: unnecessarily complex, overly bureaucratic, badly supported, incompletely documented and subject to random change at any point without notice.

Other than that, it’s really good.

more...

October 26, 2014

Optionals are one of the more interesting things in Swift, if you’re coming from a C/C++/Objective-C background.

I think I grok them now, and this is my attempt to explain it - to myself and others. Not so much what they are, but why they are.

Put crudely, they’re a way of saying “my variable (or parameter) can either contain an object of type X, or nothing”.

Coming from pointer based languages, we’re used to representing this as just a pointer to an X, with the pointer being nil (or NULL, or zero) to indicate the “nothing” case.
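
In Swift, that distinction is baked straight into the type. A trivial sketch (the names are invented):

    // A plain String must always hold a value; the compiler won't let it be nil.
    let username: String = "sam"

    // A String? (an optional String) can hold either a value or nothing at all -
    // the role a nil pointer plays in C, C++ or Objective-C.
    var nickname: String? = nil
    nickname = "sammy"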

Formalising this concept at first seems a little esoteric. It’s not though. 

Coming from pointer based languages, we’re probably also used to the idea that pointers can be dangerous. Quite apart from the scenario where they end up accidentally pointing to random memory, the fact that they can be nil means we have to either:

  • Make sure that they aren’t, by checking for nil everywhere.
  • Impose an entirely voluntary convention that says “yeah, well, I know that in theory I might pass you nil here, but I promise I never will”. We then have to hope that everyone gets the memo.

In the first case, things are potentially safer, but we can start eating into performance. The overhead is tiny in any one instance, but in a complex web of nested calls over a big object graph, it can conceivably add up. More importantly perhaps, we also add a bit of cognitive baggage that we have to mentally strip away when reading / modifying / reasoning about the code. We have to think about the case where we can’t do what we wanted to do because we were given a nil. We get used to doing this and it becomes second nature, but it’s still there.

In the second situation, things get a bit more hairy. We may think we’re in total control - we might even be right - but we don’t really have any way to verify this. Nor does the compiler in all but the simplest of cases. We can use assertions to try to check our convention is being adhered to, but then we’re back to adding some mental baggage.
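
Translated into Swift terms, the two options look roughly like this (a sketch with made-up names, not any real API):

    struct Person { let name: String }

    // Option one: check every time. Safe, but the check - and the decision about
    // what to do when there's nothing there - is paid on every call.
    func displayName(for person: Person?) -> String {
        guard let person = person else { return "(unknown)" }
        return person.name
    }

    // Option two: trust the convention that nobody will ever pass nil here.
    // The force unwrap crashes the moment someone doesn't get the memo.
    func displayNameTrusting(for person: Person?) -> String {
        return person!.name
    }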

Optionals don’t entirely solve any of this, but they seem to me to do two important things.

  • They make it easier to represent this situation, in a way that tells the users of the code, and tells the compiler, and lets both know not to make assumptions.
  • By existing, they allow the non-optional variant to exist. In other words, they allow you to formally express “no, really, this will always be an object of type X*”.

What’s so great about this is that it makes it possible for you to dump a lot of that mental baggage, for a lot of the code. When it makes no sense to deal with the “nothing” scenario, you don’t have to any more. You *require* there to be an object, and leave it up to the caller to deal with ensuring that requirement is met, or not calling you if it can’t be met.
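
For example (again, just a sketch with invented names), a function that takes a non-optional has nothing to check inside it, and the caller has to sort out the “maybe nothing” case before it can even make the call:

    struct Account { let balance: Double }

    // Stand-in for some lookup that may or may not find anything.
    func loadAccount() -> Account? {
        return Account(balance: 12.5)
    }

    // Requires an Account. Inside, there is no "what if there isn't one?" case.
    func summary(of account: Account) -> String {
        return "Balance: \(account.balance)"
    }

    // The caller owns the problem: it has to produce a real Account (or decide
    // not to make the call) before this will even compile.
    if let account = loadAccount() {
        print(summary(of: account))
    }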

All of this will seem blindingly obvious to anyone who is used to this concept from another language, but what wasn’t completely clear to me at first was some of the implications.

What it took me a while to get was that if we encounter an API or some source of objects that is giving us an optional X, we probably want to turn it into a non-optional X as soon as we can. We want to push the “deal with the thing not existing” code out as far to the edges of our call graph as we can, and validate things as early as we can, and as close to the source of the objects as we can. This gives us a better chance to deal with unexpected situations early, before they become bad, and means that the bulk of the code doesn’t have to.
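
Roughly, the shape I have in mind looks like this (names invented): the optional is dealt with once, at the edge, and everything further in works with plain non-optional values:

    struct User { let name: String }

    // Deep inside the call graph: no nil to think about at all.
    func greeting(for user: User) -> String {
        return "Hello, \(user.name)"
    }

    // At the edge, where the optional first appears (a lookup, a parse, a network
    // response), the "nothing" case is handled once, as early as possible.
    func handleLogin(lookup: () -> User?) {
        guard let user = lookup() else {
            print("no such user")        // handled here, and nowhere else
            return
        }
        print(greeting(for: user))       // everything from here on is nil-free
    }

    handleLogin(lookup: { User(name: "sam") })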

I think that the loss of the mental baggage in the rest of the code will actually be substantial in many cases, and will be a great aid to productivity (plus a small aid to efficiency).

I may be way off course here, mind you. If I am, please correct me!

 [Update: I’ve been saying “object of type X”, but of course really I should just be saying “type X” I think. They don’t have to be objects].

[*The funny thing is, it’s kind of like a better, more dynamic version of reference parameters in C++, and I had completely got used to that back when I did a lot of C++, and always tried to use const references rather than pointers when I could. It’s been a while since I’ve had to think in those terms though, and I’d rather got out of the habit :)]

more...