The Point Of Optionals?
October 26, 2014

Optionals are one of the more interesting things in Swift, if you’re coming from a C/C++/Objective-C background.

I think I grok them now, and this is my attempt to explain them - to myself and others. Not so much what they are, but why they are.

Put crudely, they’re a way of saying “my variable (or parameter) can either contain an object of type X, or nothing”.
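
In Swift, that looks something like this (a minimal sketch; the names are just for illustration):

```swift
// An optional String: it holds either a String, or nothing (nil).
var nickname: String? = nil
nickname = "Sam"

// A plain String: the compiler guarantees it always holds a value.
let name: String = "Samuel"
print(name, nickname ?? "no nickname")
```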

Coming from pointer-based languages, we’re used to representing this as just a pointer to an X, with the pointer being nil (or NULL, or zero) to indicate the “nothing” case.

Formalising this concept at first seems a little esoteric. It’s not though. 

Coming from pointer-based languages, we’re probably also used to the idea that pointers can be dangerous. Quite apart from the scenario where they end up accidentally pointing to random memory, the fact that they can be nil means we have to either:

- check for nil at every point where we use them, or
- adopt a convention that certain pointers are never nil, and trust everyone to stick to it.

In the first case, things are potentially safer, but we can start eating into performance. The overhead is tiny in any one instance, but in a complex web of nested calls over a big object graph, it can conceivably add up. More importantly perhaps, we also add a bit of cognitive baggage that we have to mentally strip away when reading / modifying / reasoning about the code. We have to think about the case where we can’t do what we wanted to do because we were given a nil. We get used to doing this and it becomes second nature, but it’s still there.

In the second situation, things get a bit more hairy. We may think we’re in total control - we might even be right - but we don’t really have any way to verify it. Nor does the compiler, in all but the simplest of cases. We can use assertions to check that our convention is being adhered to, but then we’re back to adding mental baggage.
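
To put the two approaches in Swift terms, here’s a made-up sketch (findUser is invented for illustration):

```swift
// A hypothetical lookup that may or may not find something.
func findUser(named name: String) -> String? {
    let users = ["sam": "Sam Deane"]
    return users[name]
}

// Approach one: check at every use. Safe, but this is the baggage
// we end up carrying through all of the calling code.
if let user = findUser(named: "sam") {
    print("found \(user)")
}

// Approach two: trust the convention and assume it's there.
// At least here the assumption is written down, and getting it
// wrong traps immediately instead of scribbling on memory.
let user = findUser(named: "sam")!
print("definitely found \(user)")
```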

Optionals don’t entirely solve any of this, but they seem to me to do two important things. First, they make the “might be nothing” case explicit in the type system, so the compiler forces us to deal with it instead of leaving it to convention. Second, they let us declare that a value can *never* be nothing - a non-optional - and have the compiler enforce that guarantee too.

What’s so great about this is that it makes it possible for you to dump a lot of that mental baggage, for a lot of the code. When it makes no sense to deal with the “nothing” scenario, you don’t have to any more. You *require* there to be an object, and leave it up to the caller to deal with ensuring that requirement is met, or not calling you if it can’t be met.
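
For instance, here’s a hedged sketch (greet and the names are invented) of a function that requires a real value and leaves the unwrapping to its caller:

```swift
// greet requires a real String; the "nothing" case simply can't
// occur inside it, so there's nothing to check.
func greet(_ name: String) {
    print("Hello, \(name)!")
}

let maybeName: String? = "Sam"

// The caller owns the problem: unwrap first, or don't call at all.
if let name = maybeName {
    greet(name)
}

// Passing the optional straight through won't even compile:
// greet(maybeName)  // error: String? is not a String
```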

All of this will seem blindingly obvious to anyone who is used to this concept from another language, but what wasn’t completely clear to me at first was some of the implications.

What it took me a while to get was that if we encounter an API or some other source of objects that gives us an optional X, we should probably turn it into a non-optional X as soon as we can. We want to push the “deal with the thing not existing” code out to the edges of our call graph, validating things as early as possible and as close to the source of the objects as possible. This gives us a better chance of handling unexpected situations before they turn bad, and means the bulk of the code doesn’t have to think about them at all.
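
A sketch of the pattern (loadConfiguration, start, and run are all invented for illustration): the optional gets unwrapped once, at the edge, and everything downstream takes a non-optional.

```swift
// At the edge: some outside source hands us an optional...
func loadConfiguration() -> [String: String]? {
    // Imagine this reads a file or hits the network, and can fail.
    return ["server": "example.com"]
}

// ...so we convert it to a non-optional immediately, in one place.
func start() {
    guard let config = loadConfiguration() else {
        print("couldn't load configuration; giving up early")
        return
    }
    // From here down, everything works with a real value; the
    // "nothing" case has already been dealt with at the edge.
    run(with: config)
}

func run(with config: [String: String]) {
    let server = config["server"] ?? "unknown server"
    print("connecting to \(server)")
}

start()
```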

I think that the reduction in mental baggage across the rest of the code will actually be substantial in many cases, and will be a great aid to productivity (plus a small aid to efficiency).

I may be way off course here, mind you. If I am, please correct me!

 [Update: I’ve been saying “object of type X”, but of course really I should just be saying “type X” I think. They don’t have to be objects].

[*The funny thing is, it’s kind of like a better, more dynamic version of reference parameters in C++, and I had completely got used to that back when I did a lot of C++, and always tried to use const references rather than pointers when I could. It’s been a while since I’ve had to think in those terms though, and I’d rather got out of the habit :)]
