And this isn't good. But two points: 1. Your telco doesn't advertise secure end-to-end encryption. 2. Cellular protocols are (relatively) hard to intercept, /and/ interception is prohibited by a whole bunch of FCC regulations. iMessage, on the other hand, will happily use an insecure Wi-Fi access point.
Worrying about eavesdropping is derailing your otherwise fine train of argument.
Security against eavesdropping comes down to one and only one factor: the eavesdropper cannot distinguish the bytes transmitted by iMessage from a stream of random bytes.
This is a solved problem, and getting it right in practice comes down to the simple rule, "Don't try to implement a crypto scheme. Use an existing library."
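For a feel of what that looks like in practice, here's a minimal sketch in Python using the `cryptography` package's Fernet recipe (not anything Apple actually uses, just an illustration of leaning on a vetted library instead of rolling your own):

```python
# Illustration only -- not Apple's scheme. The point is that a vetted
# library gives you authenticated encryption in a couple of lines.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # 256 bits of key material, base64-encoded
f = Fernet(key)

token = f.encrypt(b"hello over an untrusted Wi-Fi access point")
# Without the key, the ciphertext body is computationally
# indistinguishable from random bytes, and the token can't be
# modified or forged without detection.
assert f.decrypt(token) == b"hello over an untrusted Wi-Fi access point"
```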
That's so easy to do nowadays that Apple would have to be breathtakingly incompetent to get that aspect of the security equation wrong.
Do you mean "no, they do not have to be incompetent to get it wrong" or "I agree, but incompetence is the norm, so it would not surprise me if they erred there"?
Those aren't mutually exclusive. It's actually quite easy to get this stuff wrong (and I'd argue it's almost guaranteed that some part of it is wrong, though hopefully not a part that makes the whole thing fall apart).
To build something like iMessage, there are basically three discrete levels (this is a little "handwave-y", but I think conceptually accurate):
You have the underlying cryptographic primitives. This is what people spend hours arguing about on the internet, but it's actually probably the least of what you should be worried about when designing a system that uses cryptography.
Any good system should be using sound primitives, but the primitives don't do very much by themselves, so you need to combine them into something usable (and by "you", I mean whoever wrote the library you're using, which hopefully is one that's been through a lot of analysis).
So now you've got a cryptographic system (usually composed of several primitives, hopefully all of them sound); but even this system doesn't actually do what you need it to do. It's usually just a function you call to perform some operation as part of the larger thing you're building.
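To make the layering concrete, here's a rough Python sketch (emphatically not iMessage's actual construction) of primitives being composed into a small system: a KDF plus an AEAD glued into one "encrypt a message under a shared secret" operation.

```python
# Illustrative layering only, not Apple's design: two primitives (HKDF and
# AES-GCM) composed into the kind of small operation a protocol layer calls.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_message(shared_secret: bytes, plaintext: bytes) -> bytes:
    # Primitive 1: a KDF turns the raw shared secret into a uniform key.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"example message key").derive(shared_secret)
    # Primitive 2: an AEAD gives confidentiality plus integrity.
    nonce = os.urandom(12)        # must be unique per message under this key
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # Framing, key agreement, identity, retries, etc. are all still on you --
    # that's the "larger thing you're building".
    return nonce + ciphertext
```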
So for something like iMessage to be sound, Apple had to do the following (either explicitly, or implicitly based on what libraries they chose):
1.) Pick a bunch of primitives (which isn't hard, and if there turn out to be problems, a lot more things are screwed than just your application)
2.) Pick a library (which is a little more difficult, and security problems with libraries still get discovered all the time, but let's also assume that they did their homework and both chose and integrated sound ones)
3.) Write the actual application to be secure (which is surprisingly difficult to do right, even when you work for a company with a mountain of cash)
They most likely actually started with number 2 (which would have dictated number 1).
I would argue that the likelihood of vulnerabilities in number 3 is astronomically high (but again, hopefully not ones so severe as to render the system pointless), moderately high in number 2 (just look at how many vulnerabilities are still discovered in OpenSSL, for example), and probably low in number 1 (which is the part they probably didn't even pick directly, as it's generally determined by the library).
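As a sketch of what a "number 3" failure can look like (hypothetical Python, nothing to do with Apple's code): the primitive and the library are both fine, but the application reuses a nonce, which quietly breaks AES-GCM's guarantees.

```python
# The primitive (AES-GCM) and the library are sound; the application bug is
# reusing a nonce under the same key, which lets an attacker XOR ciphertexts
# together and forge messages. The library cannot protect you from this.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)

def send_broken(plaintext: bytes) -> bytes:
    nonce = b"\x00" * 12          # BUG: constant nonce reused for every message
    return AESGCM(key).encrypt(nonce, plaintext, None)

def send_fixed(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)        # fresh random nonce per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)
```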
iMessage isn't the most complicated piece of software ever written, but it still provides a decent amount of functionality, and you'd be surprised how easy it is to screw up systems that use certificates (again, not because of problems with the primitives).
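For one concrete illustration of the certificate point (plain Python TLS here, nothing Apple ships), compare how little it takes to quietly gut the verification:

```python
# Two TLS client setups: the first silently accepts any certificate from
# anyone (a man-in-the-middle hole), the second actually verifies the chain
# and the hostname. Both "work" in testing, which is how the first one ships.
import socket
import ssl

broken_ctx = ssl.create_default_context()
broken_ctx.check_hostname = False        # BUG: disables hostname checking
broken_ctx.verify_mode = ssl.CERT_NONE   # BUG: disables chain verification

sound_ctx = ssl.create_default_context() # verifies chain and hostname by default

with socket.create_connection(("example.com", 443)) as sock:
    with sound_ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version(), tls.getpeercert()["subject"])
```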
Just to be absolutely clear: if Thomas Ptacek says this is a problem, then it's absolutely a serious concern and I was a complete idiot for saying it wasn't.
It's actually very easy to get this wrong, especially on the key- and trust-management side. The crypto algorithms themselves are the simple bit, provided you don't try to invent your own.
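A tiny sketch of that key- and trust-management point (the directory and pinning here are hypothetical, not how iMessage works): the encryption call is the easy part; deciding which public key you should be encrypting to is where it goes wrong.

```python
# Hypothetical illustration: if the client blindly trusts whatever public key
# the directory server returns, the server (or anyone who compromises it) can
# substitute its own key. Pinning or an out-of-band fingerprint check is the
# part people forget.
import hashlib

def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()[:16]

def key_for(recipient: str, directory: dict, pinned: dict) -> bytes:
    key = directory[recipient]            # whatever the key server handed back
    if recipient in pinned and pinned[recipient] != fingerprint(key):
        raise ValueError(f"key for {recipient} does not match pinned fingerprint")
    return key
```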