I’ve been pondering what language to use to write Masterpiece, and have been vacillating over the last few days.
Swift is the future!
Swift – Apple’s fancy new programming language destined to replace Objective-C – is designed to make some of the more heinous errors one can make with Objective-C totally impossible. That’s very, very nice, and with the clear message from Apple that this is the future of iOS/Mac development, it seems to make sense to start any new projects in Swift, rather than the decades-old Objective-C.
So: Swift is the natural choice, right?
Hold up. Swift is still very young.
But – and this is a massive “but” – Swift is very new, and although Apple report they’ve been using it to make their own products, I contend it’s as-yet largely untested. Reports abound of bugs in the developer tools: nonsensical error messages, crashes, and assorted other compiler and editor problems. Just a quick search turns up all kinds of trouble:
- http://footle.org/2014/11/04/misleading-swift-compiler-errors/
- https://twitter.com/SteveStreza/status/542830670301515776
- https://twitter.com/jnpdx/status/542493939152859138
- https://twitter.com/tewha/status/542150178476548097
- https://twitter.com/hyperjeff/status/542096016762494976
- https://twitter.com/DamienPetrilli/status/541964343420944384
- http://stackoverflow.com/questions/27371000/xcode6-swift-type-inference-bug
- https://discussions.apple.com/thread/6526114?start=0&tstart=0
- http://finalize.com/2014/10/08/xcode-simulator-bug-with-swift-to-objective-c-call-passing-cmtime-structure/
Good god! No doubt these problems will sort themselves out over the next few years, but who wants to be the guinea-pig responsible for finding and reporting them?
Not me. I wanna make stuff.
There’s one other big factor that I think has settled the matter in Objective-C’s favour, once and for all.
Swift and realtime audio is going to be hard.
I’m writing an audio app, and a pretty damn complex one. See, when you’re writing code for realtime audio, it’s absolutely critical you write code that’s fast, doesn’t allocate or release memory, and doesn’t wait on any locks. If you don’t get that right, you open yourself up to the risk of a nasty little thing called priority inversion – where the high-priority audio thread ends up blocked waiting on a lower-priority thread – which, with audio, means glitches, because your code can’t provide audio to the hardware fast enough.
At the heart of the Objective-C runtime is all kinds of stuff that can’t take place on an audio thread without risking glitches: locks, memory allocations and the like. That means you really shouldn’t write audio code in Objective-C, but should drop down to plain C where it’s safe.
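To make that concrete, here’s a minimal sketch – my own illustration, not code from Masterpiece – of the kind of structure realtime audio code in plain C leans on: a single-producer, single-consumer ring buffer. All storage is preallocated, and the audio thread never locks, never mallocs, and never touches the Objective-C runtime. The names (`AudioRing`, `ring_read`, etc.) are invented for this example.

```c
#include <stdatomic.h>
#include <string.h>

#define RING_CAPACITY 1024  /* power of two, so we can mask instead of mod */

typedef struct {
    float buffer[RING_CAPACITY];  /* storage preallocated up front */
    atomic_size_t head;           /* next write position (producer only) */
    atomic_size_t tail;           /* next read position (consumer only) */
} AudioRing;

static void ring_init(AudioRing *r) {
    memset(r->buffer, 0, sizeof(r->buffer));
    atomic_init(&r->head, 0);
    atomic_init(&r->tail, 0);
}

/* Called from a normal thread: copy samples in, if there's room. */
static size_t ring_write(AudioRing *r, const float *src, size_t count) {
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    size_t space = RING_CAPACITY - (head - tail);
    if (count > space) count = space;
    for (size_t i = 0; i < count; i++)
        r->buffer[(head + i) & (RING_CAPACITY - 1)] = src[i];
    atomic_store_explicit(&r->head, head + count, memory_order_release);
    return count;
}

/* Called from the realtime audio thread: no locks, no allocation. */
static size_t ring_read(AudioRing *r, float *dst, size_t count) {
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    size_t available = head - tail;
    if (count > available) count = available;
    for (size_t i = 0; i < count; i++)
        dst[i] = r->buffer[(tail + i) & (RING_CAPACITY - 1)];
    atomic_store_explicit(&r->tail, tail + count, memory_order_release);
    return count;
}
```

None of this is hard in C, but every line of it depends on knowing exactly what the compiler and runtime will do – which is the guarantee the Objective-C runtime (and, as far as anyone can tell, Swift’s) doesn’t give you.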
And you know what? The same thing almost certainly goes for Swift. At the very least, how Swift’s runtime behaves on a realtime thread is untried and totally undocumented, which isn’t a good sign.
But the lovely thing about Objective-C is that you can write C code right alongside Objective-C code, and directly access properties of Objective-C classes, with no additional cost. As far as I can tell, that’s really, really messy to do in Swift.
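Here’s the sort of thing I mean – a hand-rolled sketch, not code from Masterpiece, with all the names (`MyAudioEngine`, `_gain`) invented for illustration: a plain C render callback living in the same .m file as an Objective-C class, reaching straight into the object’s instance variable. That read is just a memory access, with no message sends, no retain/release and no locks on the audio thread.

```objc
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Hypothetical engine class; the ivar is @public so the C callback can reach it.
@interface MyAudioEngine : NSObject {
    @public
    float _gain;  // a plain C float: safe to read from the render thread
}
@end

// Plain C, runtime-free: no objc_msgSend anywhere on the audio thread.
static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {
    MyAudioEngine *engine = (__bridge MyAudioEngine *)inRefCon;
    float gain = engine->_gain;  // direct ivar access: just a memory read
    for (UInt32 b = 0; b < ioData->mNumberBuffers; b++) {
        float *samples = (float *)ioData->mBuffers[b].mData;
        for (UInt32 i = 0; i < inNumberFrames; i++)
            samples[i] *= gain;
    }
    return noErr;
}

@implementation MyAudioEngine
@end
```

One caveat worth knowing: a *property* access (`engine.gain`) goes through objc_msgSend, so it’s the direct ivar access you want on the audio thread, not the property.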
So, that’s that. Objective-C it is.