So, NSConference 2012 has finished. I wanted it to go on for another week, but alas it was impossible, and that many cooked breakfasts probably wouldn’t have been advisable.
I won’t go for a full-on summary of the whole thing - a few other folks have been doing a good job of that.
Here’s a quick take on my initial thoughts though. After further reflection I may have more, or may revise these ones.
The Good
- Seeing old friends
- Making new friends
- Some great inspirational talks
- Lots of valuable Nuggets Of Information (tm)
- Tom bringing me my favourite beer
The Un-Good*
- Not enough technical content for me this year
- Some (undeniably brilliant) speakers essentially doing the same talk as last year
- Quality of the blitz talks varied wildly, and I think I preferred them being elsewhere
- The conference clearly needs more QR codes
(*this may be a bad translation)
Overall though - another fantastic experience. Many thanks to Scotty and the whole team for looking after us all so well.
Daniel Pasco just blogged an article named Radar Or GTFO.
The gist of his argument is that it’s all very well moaning about Xcode 4, but unless you file Radar bugs on it, you don’t have a leg to stand on.
Since I’ve recently been moaning about Xcode 4, I have a view on this. Not that I have anything against Daniel, but from the title of this blog post, you can probably guess what it is.
Funnily enough, a few weeks ago, I was also moaning about Radar.
You can read the post if you want the full details, but my basic point about Radar is that it’s a crock of shit for anyone outside of Apple. It’s an impenetrable black hole - bugs go in, but little or nothing ever comes out. It takes a lot of time to make a well formed bug report, yet Apple won’t even let you know whether or not they know about an issue unless you go through all of that effort.
If you are lucky enough to have someone publicise a radar number that you can file a duplicate on, you can avoid some of this work, but even then it’s far harder than it ought to be to just post a “me too” report. You can’t even see the content of the bug report, so you have to take someone’s word that it’s actually the issue you are complaining about (Open Radar helps a little here, but it’s run by us, not by Apple; it’s entirely voluntary and therefore very incomplete, and using it is potentially even more work).
I have every sympathy for Michael Jurewitz (developer tools evangelist at Apple), and the Xcode team - it’s not their fault that they are stuck behind The Great Wall of Cupertino. Unless Apple give some serious love to the bug reporting process though, it is they who don’t have a leg to stand on.
Coincidentally, John Gruber made a tangential but highly relevant post on this the other day.
Apple’s resorting to moaning about a lack of bugs filed in Radar is either them being deliberately obtuse, or it exposes a fundamental misunderstanding of human psychology (something that Apple usually can’t be accused of).
People do things (or fail to do things) for a reason. If we’re not filing bugs, it’s because the bug reporting process is fundamentally broken. Fix that, and we’ll drown Apple in bug reports.
Then maybe I can have a development environment that doesn’t crash on me at least once per hour.
I’ve been using Xcode for a good long time (since before it was called Xcode).
It has many detractors, and though I understand why people get frustrated with it, on balance I’m not one of them these days. I say on balance because there are undoubtedly things about it that drive me mad. That has been equally true in the past, though, of Visual Studio, Eclipse, Metrowerks, Think C, etc.
I think a lot of the criticism of Xcode is unjust.
A lot of it comes from switchers who are actually saying “I don’t understand this because it doesn’t look like insert-my-favourite-IDE-here”. Well, no shit! Different it is, but different is not necessarily worse.
A lot of it also comes from the kind of programmers who don’t really understand build systems, don’t like the fact that making large software projects is complicated, and would really like it if someone else just made it all work. These guys are probably happy with a simple project (maybe taken from a sample and hacked about a bit), but as soon as they have to make a change to a build setting, or work with a multi-target, multi-project setup, and something goes wrong, they get snarky and blame the IDE for being crap. These guys would probably not be much happier with anything else, but for whatever reason they’re using Xcode, so they bitch about Xcode.
So, anyway, hopefully we’ve established that I quite like Xcode. Xcode 4, in particular, is a big improvement in lots of ways over Xcode 3.
Except. Except.. ah..
WHAT THE FUCK HAPPENED TO XCODE 4.3?
On the face of it, not a lot has changed. Except for one really big thing, which is that almost every file that it uses has moved to a new location on disk, inside the application bundle. But that shouldn’t affect us, right? All we generally do is launch the application, and a few other tools, and some command line utilities, and - well, it shouldn’t affect us too badly, surely?
Well, you’d hope not, but you can see how it might flush out certain incorrect assumptions about where things are relative to each other.
Whether that’s the reason, or it’s something else, this version appears to be really, really crashtastic! It crashes when I do stuff. It crashes when I don’t do stuff (seriously, it crashes when I just leave my computer sometimes). It throws up random modal alerts with obscure error messages. It gets confused and refuses to build projects that it happily builds if you approach them from a different direction. Worst of all, it ignores previous settings about where I want my curly brackets god dammit!
It may be that I’m alone in this, and there’s something strange about my setup. I do admittedly have four different generations of Xcode installed on my machine to cope with the different requirements of various clients.
I doubt that’s it though. The word I’m hearing from all the other developers that I know suggests that my experience is pretty typical.
To me this raises some concerns about Apple’s internal QA for their tools department. How exactly did this thing end up going out in this state? The first beta of it was actually quite solid, but the last one and the actual release have been woefully unstable.
What concerns me more though is that Xcode is one great big monolithic lump of closed-source code. It may be modular under the hood (there’s clearly some sort of plug-in architecture), but it’s clearly not modular in a very rugged sense, since Apple only ever seem to want to distribute releases of the whole damn thing in one big lump. If that policy is based on the danger of instability if pieces are updated individually, that’s bad in what it hints about the true state of the code. It’s also bad because, frankly, that policy ain’t working.
To be fair, the decision to keep the releases monolithic may be based more on the complexity of supporting lots of versions of Xcode. If all the components were changing all the time, that could get messy, I grant you, but if that’s the case then they really, really need to make sure that official releases are stable.
Because new releases don’t come along that often.
Betas aren’t that rare (indeed, I have one right now, which is even crashier than 4.3), but betas don’t always help. For everything that is fixed, a couple of other things are either broken or in a state of temporary flux.
I don’t have any simple solutions, but I really hope that this is a blip, since currently, working with Xcode is a profoundly unpleasant experience for me.
Actually, I do have a couple of solutions, but I doubt that Apple will go for them.
One is to open source the bastard. How I would love to be able to just dive in and fix the bugs that are biting me. If there’s one thing that really ought to make sense to open source, it’s a development environment, given the skill set of the user base!
Assuming that’s not going to happen, the next thing I’d really like to see is for Apple to split up the release cycles a little bit.
Fair enough, maybe we can’t have a crazy free-for-all where I can download the latest beta of IDEGit.ideplugin and install it individually, but why on earth do I have to wait for iOS 5.1 to be released before someone will fix some bugs in Xcode’s text editor? It’s ridiculous. No reason at all, except for the monolithic release model.
It would be great if at least the release cycle of the editing environment, the compiler tool chain, the SDKs and the ancillary tools were independent, so that the IDE could be updated far more frequently.
There are signs that they’re attempting to go that way, but I’d really like to see them pursue it more aggressively. In the meantime, I live in hope of a new, more stable, beta.
Following on from my post on parameterised unit tests, another little utility hidden away in ECUnitTests is some support for running the run loop in unit tests.
Why would you want to do that, I hear you ask?
The answer is: because you want to do something asynchronous in a unit test, which relies on the run loop. An example would be using NSURLConnection to download something.
The Problem
Each unit test typically runs as a single, discrete test method.
If you need to use something like NSURLConnection that relies on the run loop to post notifications or call delegate methods, then you have a problem doing so in a unit test method. The asynchronous stuff won’t run until after your test method has exited (if at all), at which point it’s too late to test the results.
The Solution
The solution is simply to set things up, call your asynchronous method, then run the run loop until something flags that it’s time to stop, and finally test your results.
This is actually very simple. All you need is to set up some sort of boolean variable which you can test to see if it’s time to exit the loop.
You then set it to false, and enter a loop like this:
exitRunLoop = NO;
while (!exitRunLoop)
{
    [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.1]];
}
In order that you don’t get stuck in this loop forever, you need a way for one of your delegate methods or completion blocks to set the value of exitRunLoop to YES.
The easiest way to do this is to make exitRunLoop a property on the test class, and make sure that the test class is also the delegate (or accessible from the delegate or completion block).
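For example (a sketch, not code taken from ECUnitTests), NSURLConnection delegate methods on the test class might flip the flag like this, assuming exitRunLoop is a BOOL property as described above:

```objc
// Sketch: delegate methods on the test class that end the wait loop.
// exitRunLoop is assumed to be a BOOL property on the test class.
- (void)connectionDidFinishLoading:(NSURLConnection*)connection
{
    self.exitRunLoop = YES; // the loop above will now terminate
}

- (void)connection:(NSURLConnection*)connection didFailWithError:(NSError*)error
{
    self.exitRunLoop = YES; // exit on failure too, so the test can report it
}
```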
To simplify things, ECTestCase handles this for you, and provides two methods that you can call:
- (void)runUntilTimeToExit;
- (void)timeToExitRunLoop;
If you inherit from ECTestCase in your test class, you can simply call the first one of these when your test needs to wait for something to happen, and call the second one when the thing has happened and you want to stop waiting.
You can see an example of this in ECUnitTests.
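As a rough sketch (this isn’t the actual ECUnitTests example, and the receivedData property is a hypothetical place to stash the result), a test using these helpers might look like:

```objc
// Sketch: a test that waits for an NSURLConnection to finish.
// Assumes the class inherits from ECTestCase and acts as the delegate;
// receivedData is a hypothetical property for holding the downloaded data.
- (void)testDownload
{
    NSURLRequest* request = [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/"]];
    [[[NSURLConnection alloc] initWithRequest:request delegate:self] autorelease];
    [self runUntilTimeToExit]; // blocks here, pumping the run loop
    STAssertNotNil(self.receivedData, @"expected some data to arrive");
}

- (void)connection:(NSURLConnection*)connection didReceiveData:(NSData*)data
{
    self.receivedData = data;
}

- (void)connectionDidFinishLoading:(NSURLConnection*)connection
{
    [self timeToExitRunLoop]; // stop waiting; the test method resumes
}
```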
Normal Tests
If you’ve used the Objective-C unit testing framework (OCUnit, also known as SenTestKit), or for that matter any other xUnit testing framework, you’ll be familiar with the basic way it goes:
- define a class that inherits from SenTestCase
- add some test methods which take no arguments, return no results, and have names prefixed by “test”
- at runtime SenTest makes a test suite for each SenTestCase subclass it finds
- it adds each test method that it finds to the suite
- it then runs each suite in turn, running each test once, and reporting the results
Parameterised Tests
The normal tests work great if you want to run each test once, but what if you have a set of test data and you want to run each test multiple times, applying each item of test data to it in turn?
The naive approach is to define lots of test methods that just call onto another helper method supplying a different argument each time. Something like this:
- (void)testXWithDataA { [self helperX:@"A"]; }
- (void)testXWithDataB { [self helperX:@"B"]; }
That gets tired quickly, and it doesn’t allow for a dynamic amount of test data determined at runtime.
What you really want in this case is to add the following abilities to SenTest:
- the ability to define parameterised test methods using a similar naming convention to the normal ones
- the ability to define a class method which returns a dictionary of test data
- have SenTest make a sub-suite for each parameterised method it finds
- have the sub-suite contain a test for each data item
- iterate the suites and tests as usual, applying the relevant data item to each test in turn
SenTestKit is very flexible, but it’s also a bit obscure in the way it’s written, so it’s not immediately apparent how to achieve these goals. After a bit of investigation though it turns out to be pretty simple, and I’ve figured it out so that you don’t have to!
You can find the following classes as part of a small open source module I’ve created, called ECUnitTests. This module includes some other utilities too, but the thing I’m concentrating on in this blog post is the ECParameterisedTestCase class.
ECParameterisedTestCase: How To Use It
- inherit from ECParameterisedTestCase instead of SenTestCase
- define test methods which are named parameterisedTestXYZ instead of testXYZ (they still take no parameters)
- either: define a class method called parameterisedTestData which returns a dictionary containing data
- or: create a plist with the name of your test class, which will contain the data
At runtime the data method will be called to obtain the data. The names of each key should be single words which describe the data. The values can be anything you like - whatever the test methods are expecting.
To simplify the amount of modification to SenTest, the test methods still take no parameters. Instead, to obtain the test data, each test method uses the parameterisedTestDataItem property.
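For example, a parameterised test method might pull its data item out like this (a hypothetical sketch; the data values are assumed to be strings):

```objc
// Sketch: a parameterised test method reading its data item.
- (void)parameterisedTestRoundTrip
{
    NSString* value = self.parameterisedTestDataItem;
    NSData* encoded = [value dataUsingEncoding:NSUTF8StringEncoding];
    NSString* decoded = [[[NSString alloc] initWithData:encoded encoding:NSUTF8StringEncoding] autorelease];
    STAssertEqualObjects(decoded, value, @"round trip failed for %@", value);
}
```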
ECParameterisedTestCase: How It Works
To make its test suites, SenTestKit calls a class method called defaultTestSuite on each SenTestCase subclass that it finds.
The default version of this makes a suite based on finding methods called testXYZ, but it’s easy enough to do something else.
Here’s our version:
+ (id)defaultTestSuite
{
    SenTestSuite* result = nil;
    NSDictionary* data = [self parameterisedTestData];
    if (data)
    {
        result = [[SenTestSuite alloc] initWithName:NSStringFromClass(self)];
        unsigned int methodCount;
        Method* methods = class_copyMethodList([self class], &methodCount);
        for (unsigned int n = 0; n < methodCount; ++n)
        {
            SEL selector = method_getName(methods[n]);
            NSString* name = NSStringFromSelector(selector);
            if ([name rangeOfString:@"parameterisedTest"].location == 0)
            {
                SenTestSuite* subSuite = [[SenTestSuite alloc] initWithName:name];
                for (NSString* testName in data)
                {
                    NSDictionary* testData = [data objectForKey:testName];
                    [subSuite addTest:[self testCaseWithSelector:selector param:testData name:testName]];
                }
                [result addTest:subSuite];
                [subSuite release];
            }
        }
        free(methods); // class_copyMethodList returns a buffer we must free
    }
    return [result autorelease];
}
Most of this is self explanatory, but some key things to note are:
- parameterisedTestData is a class method which returns a dictionary containing the data
- the method as a whole returns one SenTestSuite object
- that SenTestSuite object contains more SenTestSuite objects, which in turn contain the actual tests
To make things simple, we want to use the existing SenTestKit mechanism to invoke the test methods. Since SenTestKit expects test methods not to have any parameters, we need another way of passing the test data to each method. Each test invocation creates an instance of our class, and we do this creation at the point we build the test suite, so the simple answer is just to add a property to the test class. We can set this property value when we make the test instance, and the test method can extract the data from the instance when it runs.
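A sketch of the testCaseWithSelector:param:name: factory used in defaultTestSuite might look like this (the real ECUnitTests implementation may differ, and parameterisedTestName is an assumed property used for reporting):

```objc
// Sketch: attach the data item (and a display name) to a new test instance.
// SenTestCase already provides +testCaseWithSelector: to create the instance.
+ (id)testCaseWithSelector:(SEL)selector param:(id)param name:(NSString*)name
{
    ECParameterisedTestCase* test = [self testCaseWithSelector:selector];
    test.parameterisedTestDataItem = param;
    test.parameterisedTestName = name; // assumed: used when reporting the test
    return test;
}
```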
Obtaining Test Data
To obtain the test data, we’ve added a method parameterisedTestData that we expect the test class to implement.
This method returns a dictionary rather than an array, so that we can use the keys as test names, and the values as the actual data. Having names for the data is useful because of the way SenTestKit reports the results.
Typically it reports each test as [SuiteName testName], taking these names from the class and method. Since we’re going to use the name of the test method for each of our suites, we really need another name to use for each test. This is where the dictionary key comes in.
Where the test data comes from is of course up to you and the kind of tests you are trying to perform. There is a simple scenario though, which is that we want to load it from a plist that we provide along with the test class.
Since we need a default implementation of the method anyway, we can cater for this simple case automatically. We look for a plist with the same name as the test class. If we find it, we load it, and return the top level object from it (expecting it to be an NSDictionary).
+ (NSDictionary*)parameterisedTestData
{
    NSURL* plist = [[NSBundle bundleForClass:[self class]] URLForResource:NSStringFromClass([self class]) withExtension:@"plist"];
    NSDictionary* result = [NSDictionary dictionaryWithContentsOfURL:plist];
    return result;
}
An Example
Here’s a fully worked example.
ExampleTests.m:
@interface ExampleTests : ECParameterisedTestCase
@end
@implementation ExampleTests

- (void)parameterisedTestOne
{
    STFail(@"test one with data %@", self.parameterisedTestDataItem);
}

- (void)parameterisedTestTwo
{
    STFail(@"test two with data %@", self.parameterisedTestDataItem);
}

@end
ExampleTests.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>NameA</key>
    <string>ValueA</string>
    <key>NameB</key>
    <string>ValueB</string>
</dict>
</plist>
When run, this should produce something like the following log output (simplified for clarity here):
Run test suite ExampleTests
Run test suite parameterisedTestOne
test NameA failed: test one with data ValueA
test NameB failed: test one with data ValueB
Run test suite parameterisedTestTwo
test NameA failed: test two with data ValueA
test NameB failed: test two with data ValueB