The Elegant Chaos Blog

October 27, 2014

A cautionary tale from Christoffer Lernö: To Swift and back again

Doesn’t inspire a great deal of confidence.

The big thing that worries me about Swift is the way that Apple’s culture (cult?) of secrecy meant that it was developed in a virtual vacuum, and not even widely dog-fooded within Apple.

Writing a new programming language is a bit like doing a cover of Stairway to Heaven. It’s the kind of thing every aspiring programmer wants to do, and the results usually range from mildly embarrassing to excruciatingly bad. To pull it off, you have to be fucking good.

There’s no denying the pedigree of Swift’s authors, but I actually think that a better name for the language might have been Hubris.

more...

A bug report from Tom Harrington (via Michael Tsai): Code Signing Is Flaky and Unreliable

I’m not sure “flaky and unreliable” is quite how I’d describe it.

I’d describe it as: unnecessarily complex, overly bureaucratic, badly supported, incompletely documented and subject to random change at any point without notice.

Other than that, it’s really good.

more...

October 26, 2014

Optionals are one of the more interesting things in Swift, if you’re coming from a C/C++/Objective-C background.

I think I grok them now, and this is my attempt to explain them - to myself and others. Not so much what they are, but why they are.

Put crudely, they’re a way of saying “my variable (or parameter) can either contain an object of type X, or nothing”.

Coming from pointer-based languages, we’re used to representing this as just a pointer to an X, with the pointer being nil (or NULL, or zero) to indicate the “nothing” case.
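
For concreteness, here’s roughly how those two things look in Swift (the variable names are just mine, for illustration):

    // An optional String: either it holds a String value, or it holds nothing (nil).
    var nickname: String? = nil     // nothing yet
    nickname = "Sam"                // now it holds a value

    // A non-optional String: the compiler guarantees there is always a value here.
    var name: String = "Samuel"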

Formalising this concept at first seems a little esoteric. It’s not though. 

Coming from pointer-based languages, we’re probably also used to the idea that pointers can be dangerous. Quite apart from the scenario where they end up accidentally pointing to random memory, the fact that they can be nil means we have to either:

  • Make sure that they aren't, by checking for nil at every point of use
  • Impose an entirely voluntary convention that says “yeah, well, I know that in theory I might pass you nil here, but I promise I never will”. We then have to hope that everyone gets the memo.

In the first case, things are potentially safer, but we can start eating into performance. The overhead is tiny in any one instance, but in a complex web of nested calls over a big object graph, it can conceivably add up. More importantly perhaps, we also add a bit of cognitive baggage that we have to mentally strip away when reading / modifying / reasoning about the code. We have to think about the case where we can’t do what we wanted to do because we were given a nil. We get used to doing this and it becomes second nature, but it’s still there.

In the second situation, things get a bit more hairy. We may think we’re in total control - we might even be right - but we don’t really have any way to verify this. Nor does the compiler in all but the simplest of cases. We can use assertions to try to check our convention is being adhered to, but then we’re back to adding some mental baggage.
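
Translated into Swift terms (with made-up names, and Swift 1.x’s println, since that’s what we have at the time of writing), the two approaches look something like this:

    struct Account { let balance: Double }

    // Case one: check at every point of use. Safe, but every call site carries
    // the "what if it's nil?" baggage.
    func printBalance(account: Account?) {
        if let account = account {
            println("Balance: \(account.balance)")
        } else {
            println("No account given")
        }
    }

    // Case two: rely on the voluntary convention ("I promise I'll never pass nil")
    // and force-unwrap. Nothing verifies the promise; break it and this traps at runtime.
    func trustingPrintBalance(account: Account?) {
        println("Balance: \(account!.balance)")
    }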

Optionals don’t entirely solve any of this, but they seem to me to do two important things.

  • They make it easier to represent this situation, in a way that tells both the users of the code and the compiler about it, so that neither makes assumptions.
  • By existing, they allow the non-optional variant to exist. In other words, they allow you to formally express “no, really, this will always be an object of type X*”.

What’s so great about this is that it makes it possible for you to dump a lot of that mental baggage, for a lot of the code. When it makes no sense to deal with the “nothing” scenario, you don’t have to any more. You *require* there to be an object, and leave it up to the caller to deal with ensuring that requirement is met, or not calling you if it can’t be met.
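
As a sketch of what that looks like (again, the names are invented for the example):

    struct User { let name: String }

    // The non-optional parameter formally says "there will always be a User here".
    // Inside the function there is nothing to check and no nil case to reason about.
    func greet(user: User) {
        println("Hello, \(user.name)")
    }

    // The caller owns the "nothing" case: unwrap first, or don't call at all.
    let maybeUser: User? = User(name: "Sam")   // imagine this came from a lookup that returns User?
    if let user = maybeUser {
        greet(user)
    }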

All of this will seem blindingly obvious to anyone who is used to this concept from another language, but what wasn’t completely clear to me at first was some of the implications.

What it took me a while to get was that if we encounter an API or some other source of objects that gives us an optional X, we probably want to turn it into a non-optional X as soon as we can. We want to push the “deal with the thing not existing” code out as far towards the edges of our call graph as we can, validating things as early as possible and as close to the source of the objects as possible. This gives us more chance to deal with unexpected situations early, before they become bad, and means that the bulk of the code doesn’t have to deal with them at all.
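
A small sketch of the idea, with invented names, where the optional is dealt with once at the edge and everything further in takes non-optionals:

    struct Settings { let serverURL: String }

    // At the edge: where optionals arrive (parsing, lookups, external data), unwrap
    // and validate as early, and as close to the source, as possible.
    func loadSettings(values: [String: String]) -> Settings? {
        if let url = values["serverURL"] {
            return Settings(serverURL: url)
        }
        return nil
    }

    // Everything inward of that edge takes non-optionals and never thinks about nil again.
    func connect(settings: Settings) {
        println("Connecting to \(settings.serverURL)")
    }

    if let settings = loadSettings(["serverURL": "https://example.com"]) {
        connect(settings)
    } else {
        println("Bad configuration")   // the unexpected case, handled once, at the edge
    }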

I think that the loss of the mental baggage in the rest of the code will actually be substantial in many cases, and will be a great aid to productivity (plus a small aid to efficiency).

I may be way off course here, mind you. If I am, please correct me!

 [Update: I’ve been saying “object of type X”, but of course really I should just be saying “type X” I think. They don’t have to be objects].

[*The funny thing is, it’s kind of like a better, more dynamic version of reference parameters in C++, and I had completely got used to that back when I did a lot of C++, and always tried to use const references rather than pointers when I could. It’s been a while since I’ve had to think in those terms though, and I’d rather got out of the habit :)]

more...

October 26, 2014

I really don’t agree with this post: No Single Swift Style by Jeremy Sherman.

His conclusion (that we won’t end up with one idiomatic style) may be correct, but I really don’t think it would be a good thing.

In fact I think it would be a pretty damning indictment of the language if it were seen to be “too big to be confined to having a single style”.

It would certainly be grist to the mill for people who argue that Swift is too much of a bastard child formed from many disparate influences. I’m not totally sure I buy into that as a criticism - I don’t think a synthesis of old stuff to form something new is necessarily a bad thing at all - but I certainly do think that not all choice is good. Just because we can do things in ten different ways in a given language, it doesn’t mean we should. I rather hope that we do develop some best practice, and that it falls naturally out of the best features of the language (whatever they turn out to be).

 

more...

October 26, 2014

Interesting blog post from Justin Williams on mistakes he made when taking on Glassboard.

He talks about some technical issues, but it seems to me that the mistakes made were essentially business decisions, including quite possibly the decision to take it on in the first place.

I was always a bit dubious about how Glassboard could ever make anyone any money, so I was doubly surprised when, after the original developers had failed to do so, Justin decided to give it a try.

There are situations where taking over someone else's product is a really good way to hit the ground running - for example if it’s got no monthly overheads and clear potential for new features and/or growth of the user base.

This seemed to be the exact opposite in all regards. It clearly had a big server infrastructure cost, was in a fairly crowded market, and had a USP that seemed dubious to me at the best of times.

Still - we only learn by making mistakes, so fair play to Justin for taking one for the team!

more...