> Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster. The adage is named after Niklaus Wirth, who discussed it in his 1995 article "A Plea for Lean Software".[1][2]
We could build better languages. But it takes insane resources to compete with the existing ecosystems: the IDEs, the Pandas-style libraries, the format-on-save tooling.
I've been a professional C++ developer in the past, and one of the great things is the tooling and the sheer knowledge of my former C++ colleagues. They know what happens in the kernel, they know the performance optimisation tricks, they know a lot, because the knowledge they gather has a longer lifespan.
Ask a JS developer and they will tell you about all the web frameworks that came before React and their quirks...
Software is getting slower due to poor algorithm choice, not due to the one-off factor of maybe 2 that you pay by switching to a compiled but memory-safe language. If C++ was a sensible choice in 1995, then 18 months later you would have got the same performance out of a memory-safe language; we're now 10 cycles of Moore's law down the road, our hardware is 1024x faster, and our languages are certainly not 1024x slower (unless you're using a scripting language). It's more about code that's accidentally quadratic, and that kind of error is, if anything, easier to make in C++ than in a more concise language where it's easier to see what you're doing.
C++ developers may be smart people because you have to be smart to do anything in C++ - imagine if the same brainpower that's being spent tracking memory usage and pointer/reference distinctions could be put into your actual application logic instead.
I'd argue that the focus on performance is what drove the developers to C++ in the first place. So they will know both the algorithms and the low-level details.
If performance is on your mind constantly, why wouldn't you choose the one with the least restrictions on what you can achieve?
It's not like those people would find joy in being locked into the JVM instruction set.
I agree with you that algorithm choice is what's most relevant.
However: I'd argue that accidentally quadratic algorithms are easier to hide in a concise language. Writing out a quadratic loop explicitly takes space, and that space alone might make people pay more attention than some subtle implicit language construct. Either way, the most common source of unintended quadratic (or higher) behavior is helper functions and library calls.
The other thing to keep in mind when it comes to algorithms is that cache behavior and therefore memory layout matters a lot for performance on modern hardware. Managed languages really stand in the way of optimizing memory layout, which can be a systematic performance disadvantage compared to C++.
I do hope we get some more innovation in the design space occupied by Rust, where you get fairly explicit control over memory layout, but still have statically checked memory safety guarantees.
> I'd argue that accidentally quadratic algorithms are easier to hide in a concise language. Writing out a quadratic loop explicitly takes space, and that space alone might make people pay more attention than some subtle implicit language construct. Either way, the most common source of unintended quadratic (or higher) behavior is helper functions and library calls.
I disagree. When every loop is full of cruft around setting up the iterators, it's easy to drift past what's actually happening. In a language where looping over a list takes a single syntactic token, it's a lot more obvious when you've nested several such loops.
> The other thing to keep in mind when it comes to algorithms is that cache behavior and therefore memory layout matters a lot for performance on modern hardware. Managed languages really stand in the way of optimizing memory layout, which can be a systematic performance disadvantage compared to C++.
C++ doesn't really make cache behaviour clear either though. I agree that we need better tooling for handling those aspects of high-performance code, but they actually need to come from somewhere lower-level than C++.
Nested loops are obvious in most languages, including C++ -- unless you happen to work with people who don't indent their code properly, but then you have bigger problems than the choice of language.
The real problems tend to arise when the quadratic behaviour comes not from nested loops but from library calls. The canonical example of this is building up a string with successive string concatenation in C.
As for cache behaviour, C++ allows you to control memory layout, which is really what's required there, while most managed languages don't give you that control at all.
> Nested loops are obvious in most languages, including C++ -- unless you happen to work with people who don't indent their code properly, but then you have bigger problems than the choice of language.
We live in a fallen world. In a large enterprise codebase there will almost certainly be parts that aren't indented correctly. And even if everything is indented perfectly, the sheer amount of stuff in a C++ codebase makes everything far, far less obvious.
Drivel. You can't have it both ways. It's easy to see what you're doing in C++ because you have to do it! It's the whole point of the language, and apparently the source of bugs.
Correct me if I'm wrong, but I doubt you've ever written a program in modern C++?
"Modern" C++ is the No True Scotsman of programming languages, so you define it clearly and then I'll tell you whether I've written any. But I've written C++, including professionally. I expect to write some at work tomorrow, in fact.
It's not easy to see what algorithms you're using in a C++ codebase, because most of the lines of code are taken up micromanaging details that are broadly irrelevant. Yes, C++ makes it easy to tell whether you're using 8 bytes or 16 in this one datastructure. But you drown in a sea of those details and lose track of whether you're creating 10 or 10,000 instances of it.
I'd define modern as using RAII extensively and using C++11 at least.
As for algorithms, I honestly don't know what you mean. They're all documented online with their respective big-O running times[1]. If you're talking about making unintended copies of things, then yes, C++ does expect you to know what's going on... it's the whole point of the language. If that's too much for you then don't use it, but that doesn't make it a bad language (I'm not denying it has some hair-pulling moments). Use std::move() when appropriate.
In modern C++ the reference count is always 1 or 0 for about 80% of the code, so there is no need to actually maintain a count. Another 15% needs a count, which I agree is slower than GC done right. The final 5% has cycles and cannot be handled by reference counting.
Not really. Only shared_ptr uses ref counting, unique_ptr doesn't, and looking at our code base (highly networking orientated) we only use shared_ptr once. You could, in theory, use shared_ptr everywhere, but then you're not using the language properly and may as well resort to Java or similar.