The Elegant Chaos Blog
Various thoughts and ramblings from the world of Elegant Chaos.

October 26, 2014

Optionals are one of the more interesting things in Swift, if you’re coming from a C/C++/Objective-C background.

I think I grok them now, and this is my attempt to explain it - to myself and others. Not so much what they are, but why they are.

Put crudely, they’re a way of saying “my variable (or parameter) can either contain an object of type X, or nothing”.

Coming from pointer based languages, we’re used to representing this as just a pointer to an X, with the pointer being nil (or NULL, or zero) to indicate the “nothing” case.
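A minimal sketch of what that looks like in Swift (the `User` type here is just a hypothetical stand-in for “X”):

```swift
// A hypothetical type standing in for "X".
struct User {
    let name: String
}

// The optional type User? says explicitly: a User, or nothing.
var maybeUser: User? = nil        // allowed: the "nothing" case
maybeUser = User(name: "Sam")     // allowed: an actual value

// A plain (non-optional) User can never be nil; the compiler
// rejects the "nothing" case at compile time:
// var definiteUser: User = nil   // error: 'nil' cannot initialize type 'User'
```

Unlike a pointer that merely happens to be nil, the “can be nothing” possibility is part of the type itself, so it can’t be forgotten by accident.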

Formalising this concept at first seems a little esoteric. It’s not though. 

Coming from pointer based languages, we’re probably also used to the idea that pointers can be dangerous. Quite apart from the scenario where they end up accidentally pointing to random memory, the fact that they can be nil means we have to either:

  • Check, everywhere we use them, that they aren’t nil
  • Impose an entirely voluntary convention that says “yeah, well, I know that in theory I might pass you nil here, but I promise I never will”. We then have to hope that everyone gets the memo.

In the first case, things are potentially safer, but we can start eating into performance. The overhead is tiny in any one instance, but in a complex web of nested calls over a big object graph, it can conceivably add up. More importantly perhaps, we also add a bit of cognitive baggage that we have to mentally strip away when reading / modifying / reasoning about the code. We have to think about the case where we can’t do what we wanted to do because we were given a nil. We get used to doing this and it becomes second nature, but it’s still there.

In the second situation, things get a bit more hairy. We may think we’re in total control - we might even be right - but we don’t really have any way to verify this. Nor does the compiler in all but the simplest of cases. We can use assertions to try to check our convention is being adhered to, but then we’re back to adding some mental baggage.

Optionals don’t entirely solve any of this, but they seem to me to do two important things.

  • They make it easier to represent this situation, in a way that tells the users of the code, and tells the compiler, and lets both know not to make assumptions.
  • By existing, they allow the non-optional variant to exist. In other words, they allow you to formally express “no, really, this will always be an object of type X*”.

What’s so great about this is that it makes it possible for you to dump a lot of that mental baggage, for a lot of the code. When it makes no sense to deal with the “nothing” scenario, you don’t have to any more. You *require* there to be an object, and leave it up to the caller to deal with ensuring that requirement is met, or not calling you if it can’t be met.
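A small sketch of what “requiring there to be an object” looks like in practice (the `Account` type and `describe` function are hypothetical, just for illustration):

```swift
struct Account {
    let balance: Int
}

// This function *requires* an Account. No nil check is needed
// inside: the compiler guarantees the parameter always has a value.
func describe(_ account: Account) -> String {
    return "Balance: \(account.balance)"
}

// A caller holding an optional must resolve the "nothing" case
// before calling; it can't just pass the optional through.
let stored: Account? = Account(balance: 42)
if let account = stored {
    print(describe(account))   // only runs when a value exists
}
```

The burden of handling “nothing” has moved to the caller, where it belongs, and the body of `describe` carries no baggage at all.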

All of this will seem blindingly obvious to anyone who is used to this concept from another language, but what wasn’t completely clear to me at first was some of the implications.

What it took me a while to get was that if we encounter an API or some source of objects that is giving us an optional X, we should probably want to turn it into a non-optional X as soon as we can. We want to push the “deal with the thing not existing” code out to the edges of our call graph, validating things as early as possible, and as close to the source of the objects as possible. This gives us more chance to deal with unexpected situations early, before they become bad, and means that the bulk of the code doesn’t have to worry about them at all.
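This “unwrap at the edges” idea can be sketched with `guard` (the function names and the validation rule here are my own invented example, not from any particular API):

```swift
// Hypothetical "edge" function: it receives possibly-missing input
// (e.g. from parsing or a network call) and validates it once,
// as close to the source as possible.
func process(rawInput: String?) -> String {
    guard let input = rawInput, !input.isEmpty else {
        return "no input"       // handle the "nothing" case here, once
    }
    // From this point on, input is a plain non-optional String.
    return core(input)
}

// The "bulk of the code": no optional handling needed at all.
func core(_ input: String) -> String {
    return input.uppercased()
}
```

Everything downstream of the `guard` works with a guaranteed value, so the nil-handling baggage lives in exactly one place at the boundary.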

I think that the loss of the mental baggage in the rest of the code will actually be substantial in many cases, and will be a great aid to productivity (plus a small aid to efficiency).

I may be way off course here, mind you. If I am, please correct me!

 [Update: I’ve been saying “object of type X”, but of course really I should just be saying “type X” I think. They don’t have to be objects].

[*The funny thing is, it’s kind of like a better, more dynamic version of reference parameters in C++, and I had completely got used to that back when I did a lot of C++, and always tried to use const references rather than pointers when I could. It’s been a while since I’ve had to think in those terms though, and I’d rather got out of the habit :)]


October 26, 2014

I really don’t agree with this post: No Single Swift Style by Jeremy Sherman.

His conclusion (that we won’t end up with one idiomatic style) may be correct, but I really don’t think it would be a good thing.

In fact I think it would be a pretty damning indictment of the language if it was seen to be “too big to be confined to having a single style”.

It would certainly be grist to the mill for people who argue that Swift is too much of a bastard child formed from many disparate influences. I’m not totally sure I buy into that as a criticism - I don’t think a synthesis of old stuff to form something new is necessarily a bad thing at all - but I certainly do think that not all choice is good. Just because we can do things in ten different ways in a given language, it doesn’t mean we should. I rather hope that we do develop some best practice, and that it falls naturally out of the best features of the language (whatever they turn out to be).



October 26, 2014

Interesting blog post from Justin Williams on mistakes he made when taking on Glassboard.

He talks about some technical issues, but it seems to me that the mistakes made were essentially business decisions, including quite possibly the decision to take it on in the first place.

I was always a bit dubious about how Glassboard could ever make anyone any money, so I was doubly surprised when, having had the original developers fail to do so, Justin then decided to give it a try.

There are situations where taking over someone else's product is a really good way to hit the ground running - for example if it’s got no monthly overheads and clear potential for new features and/or growth of the user base.

This seemed to be the exact opposite in all regards. It clearly had a big server infrastructure cost, was in a fairly crowded market, and had a USP that seemed dubious to me at the best of times.

Still - we only learn by making mistakes, so fair play to Justin for taking one for the team!


Marcus Zarra’s NSConference talk today - about how reinventing the wheel is ok, maybe even to be encouraged - was thought provoking. In fact, I think it was downright provocative. Which is great :)

I recognise a lot of the sentiment behind what he was saying, and have undoubtedly had that same urge to reinvent on many occasions.

I generally feel that it’s useful to understand at least one level below the one at which you are working. If you’re using Objective-C, understand C. If you’re using C, understand assembler. If you’re using an open source framework, understand how it works or the technology on which it is implemented.

And yes, it’s hard to really understand a coding problem until you’ve tackled it yourself.

Also it’s quite true that generic code is never going to be as efficient as a bespoke solution.

However…

When you use an open source library, you are typically using code that has shipped in existing products. In multiple existing products. Products that have provided the code with hours/days/months of testing with real world data. Is that worth something? Hell yes.

There is code out there which is broken, unused, or just plain rubbish. Undeniable. That’s an argument for choosing carefully, it’s not an argument for not using other people’s code.

The fact that it’s hard to understand a problem until you’ve tackled it yourself makes it tempting to rewrite. Very tempting. I have ripped out and rewritten plenty of other people’s code in my time. Many’s the time I have simplified “unnecessarily complicated code”: “what an idiot this guy is, look at all this crazy shit he’s doing…”.

That same fact is also why you really shouldn’t, most of the time. Unless you have the luxury of enough hours to rewrite the same thing more than once on the same project, or the luxury of doing the same thing repeatedly on each project. When you rewrite, you will fuck up. The fuck up rate will probably never get to zero, but it will almost certainly only reach an acceptable level after two or more attempts.

It’s uncanny how often that “weird shit” that you ripped out near the beginning makes a quiet return (with the variable names changed to protect the innocent), when the subtlety of your understanding of the problem finally catches up with that of the person who wrote the original version.

When we use someone else’s code, the trade off we’re making is to exchange that deeper understanding that we would have gained by doing it ourselves for the luxury of not having to go down all the blind alleys first.

When it comes to the risk that comes with using someone else’s code, I agree with Marcus that it’s always good to understand it.

But if you’re selling your client on the idea of doing an upfront rewrite of something now on the basis that there’s a 20% chance that they might conceivably need to do it later… well, that’s nice work if you can get it, but 80% of the time it’s stuff that they didn’t need to pay for. Sure, the cost of that 20% scenario might be much worse than the up-front rewrite, but there are other ways to mitigate that worst case.

Marcus seemed to be describing a situation where you’re basically doing similar projects again and again. In that case I can also see the attraction of starting from scratch each time. Like a master craftsperson essentially making the same chair over and over again, that zen-like quest for perfection is seductive.

Most of us don’t live in that world though.

I’m not saying that we write code once and let it ossify for ever more, or that we grab the first thing off Github that looks vaguely ok and never question it again. There’s a hell of a lot of room in between that and what Marcus seemed to be advocating though.

