From the perspective of an assembler, C, shell, and AWK programmer, Java has never been, nor will it ever be, ready for anything. We use it at work (we run thousands of Java applications, written externally and in house), and it is a total piece of shit: 300 GB of RAM consumption on average(!!!), CPU intensive, slow, not backwards compatible (try changing the Java runtime environment under an application and see how well that works for you). From my experience and point of view, Java is overcomplicated and unintuitive, even for the simplest of tasks. Furthermore, I deeply regret that Java was one of my core computer science courses at university (way back when Java was all the rage). Personally, I would rather drop dead than use Java, and if they tried to force me to write code in it again, I would quit on the spot!
If I had it my way, using Java would be a criminal offense carrying a minimum prison sentence of 20 years, and if I ever have my own company and I catch someone using Java, I will fire them on the spot with so much gusto! It will be so awesome.
`gcc` still has bugs. `clang` still has bugs. There was one recently on HN where one pass in clang caused undefined behavior in another. Are these not yet ready for prime time?
Furthermore, this is a known bug, and probably not high priority, since it's pretty rare to stack-allocate something huge and then use it in a way that could cause a vulnerability (instead of just crashing). The fix would replace the segfault with a detected stack overflow, which also crashes the program.
> `gcc` still has bugs. `clang` still has bugs. There was one recently on HN where one pass in clang caused undefined behavior in another. Are these not yet ready for prime time?
I thought Rust was a language billing itself as safe. GCC and Clang are compilers, they have not mounted a campaign to bill themselves as safe, nor are they trying to bill themselves as the best thing since sliced bread.
In this case, it's not actually that much of a bug: the segfault is entirely controlled. Threads' stacks are allocated with a guard page at the end that is marked as inaccessible at the OS level, so any access (read or write) will make the OS kill the process, via a segfault.
This particular type of segfault differs from typical segfaults in other languages because it is guaranteed to happen and guaranteed to be a segfault that kills the program. Segfaults in C code often hint at more serious bugs (e.g. use after free) that can be exploited for remote code execution etc.
The bug is that the stack overflow detection isn't perfect: if writes to particularly large stack frames happen in just the wrong way, a program can write beyond the guard page instead of being killed. This is obviously unfortunate, and the fix is ensuring LLVM supports stack probes on all platforms (it currently has them only on Windows).
Yes, and this is a compiler bug. It is not a bug in the language, and it can be fixed once LLVM gets stack probes for platforms other than Windows (on which it already works). On Windows with rustc you can't cause this issue currently IIRC.
(It can be fixed in rustc itself by inserting stack probes on large stack allocations, but it won't be able to catch cases where LLVM moves things around and in the process creates a new large stack allocation; i.e. after tons of inlining)
Ultimately, it's very hard to weaponize a segfault caused by a stack overflow (and triggering a guard-page-skipping overflow in the first place is rare), so as far as practical aspects are concerned, this doesn't really matter.
Like everyone has said so far, it's a bug, a bug which can and will be fixed.
All software has bugs. Of course, you can certainly decide the severity of each bug for your use-case, but you can find stuff that's just as nasty on any compiler's issue tracker.
AFAIK this could've been fixed in LLVM a long time ago, which would also help C code compiled with clang, although I believe stack probes are opt-in there.
We can't fix it fully in rustc because we don't know the stack size, which can grow with aggressive inlining, for example.
I suppose we could summarily probe allocas we know are larger than the page size, which would solve this particular situation (one large variable), but it's not a panacea.