Increasingly, I don't see clojure as primarily a matter of the clojure-on-jvm implementation, but as a distinct and (generally) hosted dialect of lisp. Clojurescript is a clojure. Janet is a clojure. ClojureCLR is a clojure. Fennel is a clojure. Babashka is a clojure. Ferret is (kind of) a clojure. Maybe someday we'll have a bare-metal clojure. So far as I can tell, the lisp landscape (land of lisp?) has changed from common-lisps and schemes, to common-lisps, schemes, and clojures.
IMHO. Maybe I exaggerate. Anyway, the author, from my perspective, has decided that clojure IS in fact for him, but clojure on the JVM is not, so he chooses to use a different one.
Idk why, but this really rubs me the wrong way. It feels like marketing-speak. Clojure _is_ its own thing, and so are the other languages (Janet, Fennel etc) that you seem to be dismissing as some sort of Clojure spin-offs. Aside from all being lisp-like languages, they have little in common that sets them apart from modern programming languages in general.
I've been told this a couple times. I doubt I'd thrive in that world though, unless I was selling something I actually gave a damn about.
IDK, is guile a scheme? guile is its own thing, but it's definitely a scheme. What about racket? Racket has a whole bunch of stuff that mit-scheme doesn't. But it's still kind of in the scheme family. That's the parallel I was drawing. That, and the statement Hickey made in his History of Clojure that Clojurescript _was_ clojure, not some kind of spinoff.
I'm certainly not detracting from Janet and Fennel, but they _are_ quite close to clojure proper. There is a clear lineage.
I would generalize your observation as a logical consequence of Greenspun's tenth rule and conclude that we will eventually get a hosted lisp for any sufficiently relevant programming language.
I think you are right, emotionally speaking. A lot of people, the author of the blogpost included, started with clojure, so now they might consider most lisps clojure-y (enough).
However, the broader lisp community writes code very differently from typical clojure. Different idioms, slight syntax variations, completely different tooling and ecosystems.
I can agree with you that for 'us' it certainly feels emotionally true. But if you go into other lisps (scheme, racket, etc.) you would notice huge differences, differences which may feel even stronger than the java vs c++ divide.
This article is incredibly light on specific details about what exactly about the JVM is problematic and just vaguely hints at it not being open-source enough (despite OpenJDK having been a thing for ages), something that I consider to be more ideological in nature than pragmatic.
As someone who has written Ruby for years and more recently switched over to the JVM, I don't see the big deal. Both ecosystems have their advantages and disadvantages. The fact that so much of Ruby's ecosystem is built on C has its downsides too, e.g. compilation errors on "bundle install" because your version of nokogiri doesn't work with whatever C libraries are installed on your system anymore, or a frequent need to install new ruby versions which can be rather slow when you have to compile them from scratch.
I took a different path to Clojure/ClojureScript, coming from a dynamic language background. Java was foreign to me too, but perhaps in a fundamentally different way. That said, I think the experience is probably very similar so I’ll share mine as it relates (but I’m not speaking for the author).
Clojure (the JVM runtime variant) is not compiled to JVM bytecode. It’s compiled to Java implementations. This detail isn’t very hidden, and few affordances are made (or were when I worked in Clojure) to accommodate the indirection when something goes wrong. When you get an error, you’re now walking down this stack:
0. Debugging the build (which has all of these steps before you might even run code)
1. Debugging your own code
2. Debugging Clojure source written in Clojure
3. Debugging the Java implementation of Clojure
4. Debugging Java’s runtime behavior
Most often, Clojure the language doesn’t do anything to make step 1 or 3 any easier. It’s just a call stack from whatever .clj source into a Java stack with no context whatsoever.
This is incredibly hard to debug, especially because a lot of other important details are hidden. For instance:
(let [foo {…}] …)
Calls one of several kinds of HashMap implementations, and may call many others in the course of processing foo. All of these dump you into Clojure, the runtime.
ClojureScript is fundamentally the same, except:
- It’s got a whole additional runtime to loop through
- The cljs maintainer is (or was when I was writing cljs) actively hostile to offers to contribute changes which would make this less painful. Literally closed as wontfix, rejecting offers to contribute
Nearly 100% of these experiences are leaky language host marginalia (i.e. type errors caught in an unhelpful place at runtime). And I wanna be clear: I love the concept of Clojure and don't regret any of the time I spent working in it (though I do regret some time with cljs). But the probability of encountering incredibly confusing host-language stuff very quickly approaches 1 if you're developing actual things for real users. And then it gets really "hope you have a good mental map in memory" really fast.
I don't understand your complaint. Are you regularly finding bugs in the runtime and in the core libraries? If you compiled to bytecode, how would that make anything easier?
Or are you just annoyed that stacks include Java frames? In CIDER you can make it display only frames from your own code, or only Clojure frames. Then you don't need to see Java frames, and it'll look equivalent to being compiled to bytecode...
> Are you regularly finding bugs in the runtime and in the core libraries?
When I did work in Clojure/Script, I would regularly find bugs I couldn't understand without stepping through core libraries and the runtime to understand them. Often they were my own bugs, sometimes upstream bugs or just non-ideal behavior. Hiding the stack frames would have been a hindrance in these cases.
Back in the day when I wrote Java for a living, it was an all-too-common pattern that development was done on OpenJDK until we hit some mysterious problem. The problem then disappeared when we switched to the Oracle JDK. Other problems are that Java's "everything is an object" and "generic parameters are lost at run time" ideas poison every other language too, and garbage collection never works as intended.
I used to have similar complaints about Java until I sat down and actually read the JVM and language specs and read some JEPs. The truth is, the Java maintainers have made a lot of good design decisions over the years, and they've been incredibly careful when it comes to trade-offs. Generic erasure, for instance, was necessary to prevent major backwards-compatibility, break-the-world problems with the massive existing ecosystem of running Java programs. Also, it's not an idea that "poisons" other languages. It'd be pretty uneconomical to preserve generics in the runtime representations of any language; C++ doesn't do this either, and its heterogeneous approach (as opposed to Java's homogeneous approach) has its own set of drawbacks (bloated code size, for instance).
In actuality, Java is pretty great and a lot of the pain I experienced using it was not because there was something inherently worse about Java’s design decisions but rather because I had made the (very incorrect) assumption that one could simply port over a lot of the unstated assumptions we make about the semantics of other languages when coding in Java, which isn’t true.
> Generic erasure for instance, was necessary to prevent major backwards compatibility and break the world problems with the massive existing ecosystem of running java programs.
Isn't that the crux of the general criticism of Java's generics though? That it's ugly and deeply limited precisely because it was designed around a hard requirement of backward-compatibility?
The biggest complaint is erasure. I've only run into erasure issues a handful of times in my decade of programming Java. Most of the time you just change your design and it's fine.
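For what it's worth, the handful of times erasure does bite, the usual "change your design" is the class-token idiom: pass an explicit `Class<T>` so the type survives to runtime. A minimal sketch (the names are my own, just for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    // Illegal under erasure: T has no runtime representation, so e.g.
    //   o instanceof List<T>
    // won't compile.

    // The common workaround: pass a Class token so the type
    // information is available at runtime.
    static <T> List<T> filterByType(List<?> input, Class<T> type) {
        List<T> out = new ArrayList<>();
        for (Object o : input) {
            if (type.isInstance(o)) {
                out.add(type.cast(o));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Object> mixed = List.of("a", 1, "b", 2.0);
        List<String> strings = filterByType(mixed, String.class);
        System.out.println(strings); // [a, b]
    }
}
```

Slightly more ceremony than a reified-generics language, but in practice it covers most of the cases where erasure would otherwise block you.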
> Also, it's not an idea that "poisons" other languages. It'd be pretty uneconomical to preserve generics in the runtime representations of any language; C++ doesn't do this either, and its heterogeneous approach (as opposed to Java's homogeneous approach) has its own set of drawbacks (bloated code size, for instance)
I'm not sure what to answer, except that we live in different realities and in my reality Java is pretty terrible even if a lot of effort has been put into it.
Java language design decisions affect JVM design, and JVM design affects other languages that run on it. Additionally, it seems that JVM languages justify their existence by FFI to Java, so the Java worldview leaks to other languages that way.
Hey, I mean fair enough. I can definitely respect that answer. Mind you I’m not trying to argue that Java should win any awards from an ahistorical perspective, however recognizing that the team had to make decisions in the context of maintaining a solid experience for an already existing large userbase made me appreciate their decisions more. I still wouldn’t choose Java as my favorite or goto language, but it’s not so bad as to warrant some of the disparagement it often receives.
In Ruby, everything is an object too (actually even more so than in Java) and in dynamically typed languages, generic type parameters of course don't exist, so I don't understand how those things matter to Clojure or (J)Ruby. Garbage collection pauses may be slow on the JVM, but Java is still miles faster than Ruby (idk about clojure).
To be honest, I don't really understand Ruby either. It's like a combination of the performance of Python, the compatibility of C, the transparency of any mystery dependency + build system, and yet another syntax on top of it.
For example: you can monkeypatch anything (including _Object_) at runtime. You can use this to make 1 + 1 = 3 if you desired, or to add a method to all instances of any class currently instantiated. Want your database to respond to .fuckoff()? No problem...
This kind of stuff is also how rspec (used to?) actually run tests.
It's cool, but also pretty insane.
irb(main):001:0> class Object
irb(main):002:1> def fuckoff
irb(main):003:2> "lol"
irb(main):004:2> end
irb(main):005:1> end
=> :fuckoff
irb(main):006:0> 5.fuckoff
=> "lol"
irb(main):007:0> "Hello World".fuckoff
=> "lol"
irb(main):008:0>
Yes, the nice thing is that you can use metaprogramming to create really powerful and readable DSLs.
The bad thing is that this is often abused horribly, I blame the Rails ecosystem with its "just add this gem to the Gemfile and it will magically change your application" attitude.
At least Kotlin (probably Scala too) requires you to explicitly import the corresponding method though.
In Ruby, however, there is no concept of an import. You can require other files but this is transitive. So if something anywhere in your chain of dependencies is defining a new method on String, or Object, it's available everywhere. Plus, in Rails usually you don't even have to require anything, as the framework magic does that for you.
The fact that there is no proper import/module system is easily one of my most disliked features about Ruby, it makes it incredibly hard to understand where things come from.
Also, Ruby doesn't only allow you to add new behaviour to existing classes. It also allows overriding existing methods. And that suffers from the exact same problems, i.e. it could be in some transitive dependency, and suddenly your strings will be doing things you didn't expect them to do...
clojure on the jvm has much better performance than ruby, but for my small side projects, that doesn’t matter.
I guess I was trying to describe the frustration of switching languages and ecosystems while also trying to ship; I thought I could do it, but I couldn't.
I suppose I would have had a much better time and stayed in C-land (and gotten much better performance) if I had tried golang instead of clojure.
If you have deadlines and can't afford being "bad" at something for a while then, yes, learning new technologies or ecosystems might not be a wise choice. I think that's independent of what those technologies are.
I switched to the JVM because I was annoyed enough at the things I had to deal with in Ruby that learning how to do things in a new way was worth it for me in the long run.
> I suppose I would have had a much better time and stayed in C-land (and gotten much better performance) if I had tried golang instead of clojure.
But Go is a completely different language than Clojure? By what criterion are you deciding which language to focus on? Mostly I'd think that people who choose Clojure do so because they want to use a homoiconic, dynamically typed, functional programming language and not something that looks a lot like a cleaned-up C.
These days I care less about how the language looks and more about the development/deployment experience, and how much memory it’s going to take on a VPS, since that’s the primary thing driving VPS costs.
Ruby (and rails) for web applications is very hard to beat ergonomically.
Go also feels this way to me, the development experience is very simple, deployment is even easier than ruby/rack (assuming no docker), and you get the added bonus of using a lot less memory on the server.
Saying garbage collection never works as intended somehow elides all the very large scale systems built on garbage collected languages including Java. GC is like most things, it has trade offs and you have to understand them to get the best out of it. It’s not impossible to achieve, though.
I have to admit that I have never built a large scale system, but with small to medium scale systems I have always ended up tuning GC parameters and looking for stray references. And of course there is always some piece of code that depends on finalizers to close some resource, so not doing enough GC is a problem just as much as doing too much of it. In my current project (C#) there's the additional problem that there are situations where GC just isn't allowed, but some language features create objects behind the scenes without any warning. So yes, it's not impossible to achieve, but it's a lot more difficult than dealing with reference counting or C++ smart pointers.
> I have always ended up tuning GC parameters and looking for stray references.
I've been doing java for years and the key I've learned about the JVM is to stop tuning GC parameters.
The JVM has incredibly good heuristics, provided you let them work. The most you should generally do is pick the algorithm and the heap size (maybe the pause time); doing more should only be done if you've got a large amount of evidence (GC logs) to support such changes. Way too many people shoot themselves in the foot trying to hand-craft GC settings.
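Concretely, "pick the algorithm and the heap size (maybe the pause time)" amounts to a couple of flags, with everything else left at the defaults (the values here are illustrative and `app.jar` is a placeholder):

```shell
# Choose the collector, cap the heap, state a pause-time goal,
# and let the JVM's heuristics handle the rest.
java -XX:+UseG1GC -Xmx4g -XX:MaxGCPauseMillis=200 -jar app.jar
```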
Beyond that, I've found that application changes are almost always more effective than GC changes. Use the JVMs fantastic telemetry tools (Java flight recorder) and find out what's really going on with your application. Again, grab the data first, optimize, and then remeasure to see what's next.
I've managed plenty of apps with 30+gb heaps and 0 tuning.
I wish I could say the same. Most times it ended up one of two ways: Either the app is stuck doing GC, or it crashes with OutOfMemoryError: too many this or that handles, open pipes or something else.
I don't know what kinds of things you're doing on the JVM, but the only situations where I've seen these kinds of errors were either a) batch jobs on massive amounts of data (which yeah... it figures that they could OOM, not sure how you'd prevent that), or b) a serious error in application logic.
In general, I don't really encounter these problems and neither do most other people who write for the JVM.
> too many this or that handles, open pipes or something else.
I'm sorry, but that sort of message is a pretty strong indicator of programmer error.
Are you using try-with-resources to properly close out things?
Are you creating threads in an unbounded fashion? (new Thread)
Are you keeping references to objects for longer than needs be?
Have you used Java Flight Recorder to identify the root cause of your memory pressure, or used Eclipse MAT to find large, long-lived allocations? (Or both?)
Are you using finalizers? Those have been STRONGLY discouraged (and for good reason) for decades now.
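For anyone following along, the try-with-resources idiom those questions point at looks like this. A tiny self-contained sketch (using a `StringReader` as a stand-in for a real file or socket):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class ResourceDemo {
    public static void main(String[] args) throws IOException {
        // close() runs deterministically when the block exits,
        // whether normally or via an exception -- no finalizer involved.
        try (BufferedReader reader = new BufferedReader(new StringReader("hello\nworld"))) {
            System.out.println(reader.readLine()); // hello
        }
        // reader is already closed here; reading from it would throw IOException.
    }
}
```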
No. I am the poor programmer whose manager forced me to use a library their nephew's dog or someone wrote that does all of those. Except using finalizers for closing pipes for subprocesses; that was the Java standard library itself. And keeping references longer than needed, that is every library that implements clever caching, ever. Regarding try-with-resources, RAII was invented for a reason. Java did not copy it, because they seemed to seriously believe that GC is a good substitute.
One of the many features of Java is that it is advertised as being easy enough that any idiot can write it, and managers then hire idiots.
> No. I am the poor programmer whose manager forced me to use a library their nephew's dog or someone wrote that does all of those
My condolences, that sucks. I'd still suggest using the profiling tools I've suggested to find these issues and create change requests to said libs. People will think you're a wizard if you simply learn how to use Java's telemetry. It's an invaluable skill.
> Except using finalize for closing pipes for subprocesses, that was by Java standard library itself
Java uses finalizers as a last resort, not a first line. Finalizers are due to be removed from the language so don't count on them being around for long.
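(For completeness: the replacement the JDK docs point to is `java.lang.ref.Cleaner`, added in Java 9, and it's meant as a safety net behind an explicit `close()`, not a substitute for one. A rough sketch; `NativeHandle` is a made-up name for illustration:)

```java
import java.lang.ref.Cleaner;

public class CleanerDemo {
    private static final Cleaner CLEANER = Cleaner.create();

    // The cleaning action must NOT capture the owning object,
    // or that object can never become unreachable.
    static class NativeHandle implements Runnable {
        static volatile boolean released = false;

        @Override
        public void run() {
            released = true; // pretend we free a native resource here
        }
    }

    static class Resource implements AutoCloseable {
        private final Cleaner.Cleanable cleanable;

        Resource() {
            // Registered as a safety net in case close() is forgotten
            // and the object becomes unreachable.
            this.cleanable = CLEANER.register(this, new NativeHandle());
        }

        @Override
        public void close() {
            cleanable.clean(); // deterministic path; runs the action at most once
        }
    }

    public static void main(String[] args) {
        try (Resource r = new Resource()) {
            // use the resource
        }
        System.out.println(NativeHandle.released); // true -- freed via close()
    }
}
```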
> And keeping references longer than needed, that is every library that implements clever caching, ever.
Sure, caching is legitimate when the cost of creating exceeds the cost of long lived objects. Have you measured to see if the caches you have are improperly sized? Have you made pull requests/change requests to said libraries to fix it if it is?
> Regarding try-with-resources, RAII was invented for a reason. Java did not copy it, because they seemed to seriously believe that GC is a good substitute.
RAII was invented for a reason, but few languages have it. The languages that don't have it generally don't because tracking the lifetime of objects in a GCed language can be extremely difficult. Consider, for example, how Java would handle a destructor for an object allocated in one thread and shared with multiple other threads.
Do you know how C++/Rust/D solve that problem? Through a system of reference counted smart pointers which have to be handled very delicately. Java allows such objects to be freely moved.
There are pros and cons to GCs (like most language features) so instead of just defaulting to "man, GCed languages suck because the finalizers aren't called when I want them to be!" perhaps it's a better route to learn the idiomatic way of handling resources and use that?
> People will think you're a wizard if you simply learn how to use Java's telemetry. It's an invaluable skill.
People who want to hire me to fix their Java app already think I am a wizard. The problem is, they stop believing me the moment I tell them that Java is not a good language, and completely stop listening when I say that I will not work with it as long as I have other options.
> Java uses finalizers as a last resort, not a first line.
At the time it was the only option. Or, reading the JDK source revealed the undocumented feature that destroy() closes the pipes too.
> Have you measured to see if the caches you have are improperly sized? Have you made pull requests/change requests to said libraries to fix it if it is?
Wasn't the point of using the JVM and pre-made libraries that they would work out of the box? Anyway, I have found that terrible libraries are terrible because their authors either genuinely want them that way or because they do not recognize good code when they see it. If you have managed an open source project you should know that pull requests out of nowhere are mostly pure nonsense and tend to be ignored.
> Do you know how C++/Rust/D solve that problem?
I do, thank you for asking. As far as I understand, Java does not allow moving objects any more than copying a shared pointer or moving unique one does, but it pretends that objects have no ownership.
> perhaps it's a better route to learn the idiomatic way of handling resources and use that?
It does not solve the problem. The actual solution would be more like "magically make the CTO's nephew and all authors of the zillion libraries the app depends on learn the idiomatic way of handling resources and then rewrite everything using those".
> People who want to hire me to fix their Java app already think I am a wizard. The problem is, they stop believing me the moment I tell them that Java is not a good language, and completely stop listening when I say that I will not work with it as long as I have other options.
If people hire you to "fix their Java app" and you then subsequently bury your head in the sand and insist that this cannot be fixed (or you will not fix it) without rewriting it in another language - either you have been lying to them, or they were not listening to you. Regardless, you were probably not a good hire.
It seems that you're trying to write Java as if it were C++. That will not work, so stop doing that.
> "magically make the CTO's nephew and all authors of the zillion libraries the app depends on learn the idiomatic way of handling resources and then rewrite everything using those"
Crap code is crap code and exists in every language. I'm not sure what your point is. Most battle-tested libraries in Java properly use try-with-resources. If you work for a crap startup that uses crappy libraries, that's not a problem of the language itself. All this criticism of "the CTO's nephew" is a deflection of any conversation about the benefits or drawbacks of individual languages, since you just try to measure a language by its worst users.
> If people hire you to "fix their Java app" and you then subsequently bury your head in the sand and insist that this cannot be fixed (or you will not fix it)
To be honest, most of these cases have been what could be called bait and switch. I think it tells something about the language that they can't find employees if they are honest about using it. Other times it had been because recruiters just decided that I'm a Java person and can't work with other technologies.
> It seems that you're trying to write Java as if it were C++. That will not work, so stop doing that.
If you can read my thoughts, James Randi has a million dollars for you. I have only contempt, and no, you didn't get it right.
How do you expect RAII to work in any GC language? It would be non-deterministic. The point of try-with-resources is that it is deterministic and you know when something is closed out. Every GC language has implemented this.
I would say that RAII is enforced try-with-resources without the extra syntax, and releasing memory when resources are released, instead of relying on a garbage collector, is a nice performance optimization. C# with "using var" is on the right track, though. Just make RAII the default and explicit close/dispose the special case, add syntax for type-deduced immutable values, and I'd be happy.
There are footguns in every language. There are certainly more footguns in C, which you're trying to advocate for, if I understand correctly (although I'm really not sure what your point is).
FWIW, I haven't ever seen anyone use finalize() in Java. I don't doubt that it happens occasionally, but you can't blame a language for some of its users being incompetent.
I'm not advocating C for anything except low-level embedded but even there I think we could do better. I'm not sure what would be the optimal language for writing web backends but I'm convinced that we could do better than Java. I could even go as far as claiming that C# is a lot like Java with some of the worst parts fixed.
One of Java's weaknesses is that they _didn't_ move to everything is an object. Instead we still have all the primitive types (except unsigned, but that's another topic!) and they autobox to and from their object wrappers. I'm assuming moving to objects was considered and discarded for performance reasons. Now I guess we're getting inline classes soon? Which will solve the performance problem, but only decades later.
I think by object he means "reference type" - the only good thing about Java's primitives is that at least it has some value types somewhere!
Almost every decision in Java feels like it was chosen to waste memory. Even using integers smaller than int is hard, because it makes you write explicit casts everywhere, which doesn't prevent bugs, but then makes overflow wrap silently, which encourages them.
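A small sketch of what that looks like in practice: `short` arithmetic promotes to `int`, so the cast back is mandatory, and the cast truncates silently:

```java
public class NarrowIntDemo {
    public static void main(String[] args) {
        short a = 30_000;
        short b = 10_000;
        // short + short is int arithmetic, so the compiler demands a cast...
        short sum = (short) (a + b);
        // ...and that cast truncates to 16 bits without any warning:
        System.out.println(a + b); // 40000
        System.out.println(sum);   // -25536
    }
}
```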
I'm not quite sure why my stuff keeps hitting the front page, but I'm here for it.
I've since abandoned janet for web app side projects and gone back to ruby entirely. But yeah, I still love lisp; it just makes more sense to align my work programming language with my side projects, so I can ship my side projects faster. At least that's the thought now.
As the submitter of this story, I can tell you about why this one landed there. I found the link via the discussion on the previous one (https://news.ycombinator.com/item?id=30917772) and submitted it shortly afterwards (https://news.ycombinator.com/item?id=30929823). It didn't end up on the frontpage though, but I got invited to submit it to the "second chance pool" (https://news.ycombinator.com/invited) where submissions get exposed to the frontpage for X hours and if they manage to stay up, they stay there naturally. Seems like people enjoy your writing enough to make it stay for at least some moments :)
As a Clojure fanatic, I too enjoy your writing and hoped to stir up some interesting discussions around Clojure's weak points by submitting this story.
Fair enough, I usually start with the title and try to let the rest flow from there, but it's definitely the JVM that was getting me down, not clojure specifically.
I think the JVM route is smart for bootstrapping a new language since it gives you an existing ecosystem to leverage. However, at this point, I'd love to see a native Clojure.
Rich Hickey was already using a native lisp; then he wrote a couple of bridges between it and JVM/CLR before deciding running a lisp directly on the runtime VM was the way to go.
OP doesn't need or want the JVM or Java libraries, and he wants to stay in the world of Free software that C and Ruby provide.
He analyzed his needs and wants.
This is a good example of what "right tool for the job" analysis looks like.
Though honestly, if OP wasn't in "java land" (his phrase), I'm not sure why Clojure was even on his radar.
I greatly respect people who reject the hype and just go with the language that fits their needs, even when it isn't hyped: languages like C, Ruby or Python provide plenty of what you'll need.
> Though honestly, if OP wasn't in "java land" (his phrase), I'm not sure why Clojure was even on his radar
Clojure is really appealing as a language and has enough of a following that it doesn’t seem like a risky bet. It also looks pretty self contained. It’s only when you need to push it that you find yourself with confusing JVM stack traces and all the nuances of that ecosystem. When you do you start to figure out the whole thing is not quite as elegant and simple as it first appeared.
Syntax errors like unbalanced brackets/parenthesis/curly braces? Those should definitely give you "pure-clojure" errors as the parsers can't even read the source at that point. Only time you should be getting JVM stack traces is runtime errors.
Yes, this is pretty much it. It's funny the presentation specifically mentions clojure for the "new and shiny."
Today it would be something else, like rust or zig or something.
But yeah it's difficult for me to build my tools and ship finished web applications at the same time, it's compounded by learning a new language.
This is something I've struggled with for a long time. I still struggle with this, the urge to create a new web framework when I would have shipped much faster if I had just used some boring old one for example.
clojure in and of itself is a very pleasant language, and was an exciting new lisp when it came out. i remember going through the same journey as the author, venturing into jvm land just because i wanted to use clojure, and eventually deciding i preferred the unix/c ecosystem.
Not gonna lie, I think the jvm integration is a big win for clojure as a language. The fact that you can pull in and interop with established jvm tools and libraries makes using a niche language like clojure a possibility for professional development.
C libraries are wonderful tools when you need a low-level library to run hot, but if you need, I don't know, a database adapter or a queueing library and you don't want to write one, I'd much prefer pulling something out of maven than trying to integrate a C library. There's just a lot of infrastructure the jvm gives you pretty easily.
If you're just messing around with fun side projects, there's nothing wrong with using something you enjoy for no other reason than your own personal enjoyment.
> If you're just messing around with fun side projects, there's nothing wrong with using something you enjoy for no other reason than your own personal enjoyment.
That was my thought too. If it's a side project, I guess you should just always use Python because you will never need anything else. But I already kinda know Python and it doesn't interest me much, so I write my side projects in whatever flavor of the day has caught my fancy at the moment.
Python has lots of third party library support. You can basically do pretty much anything.
I had to write some proof of concept code that talked ISO-TP on the CAN bus and guess which language other than C had libraries for CAN and ISO-TP? Python. Fortunately we found a serial to CAN gateway and I was able to do it in Ruby over the serial port. Soon we'll have to rewrite part of it in C/C++ on some embedded board, probably ESP32 because it has CAN and WiFi built in. But I think it's perfectly doable in micropython as well. ESP32 CAN bus support for micropython is currently being worked on.
I think the "C-derived open-source world" is a good way of putting it.
The java-world is an inscrutable place of gigantic IDEs, maddening dependency graphs, perverse build systems. Everything is just-so-poorly-compiled enough that it's a nightmare to work with. It sucks all the joy out of programming in my experience.
I mean, it's kind of farcical to complain about Java build systems and compare that to C, which has literally the worst build system/dependency management tooling. Even with vcpkg and conan (which are infinitely better than what existed before), building projects with 4-5 dependencies (abseil, boost, folly, and range-v3, say) requires understanding why builds break complaining about iterator traits under C++20 on Clang 13. It's insane. How can Java possibly be worse than that? It may not be as nice as rust or python (even python can be pretty insane for dependency management, actually), but with Maven I could add 3/4 lines to my project, and in my IDE the dependency would "just work" and I could even navigate to the source code of the dependency.
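For reference, those "3/4 lines" in Maven are one `<dependency>` stanza in the pom.xml (the coordinates here are just an example):

```xml
<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>33.0.0-jre</version> <!-- version illustrative -->
</dependency>
```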
And it's not like this is a slow language. Java is like 80-90% as fast as C for most use-cases if you know what you're doing, and it isn't brain-dead in its handling of threads.
I'm a C++/Python developer, by the way, although I've had day jobs writing Java too. I honestly think this is way off base; it's a great language which has constantly evolved, and the JVM is a fantastic piece of technology. Rich Hickey was super smart in targeting the JVM, it was a great decision, and to my mind OP hasn't a clue of what he's talking about.
No one's comparing Java and C. The phrase "C-derived" here includes, e.g., python/php/ruby/javascript/etc., i.e., all those languages whose targeted VM is extensible with C.
The idea is that there's a "cluster of C-based ecosystems" of languages whose open-source philosophy, programming culture, etc. "is just fun", in contrast with the JVM ecosystem, which, I think, just isn't.
So you’re comparing the maven ecosystem to that of the maddening dependency graph of node?! The place where the _maintainer_ of your dependency may or may not introduce a zero day just to make a political statement?
Or that of Python where the package managers _globally_ install dependencies and you need virtual environment shenanigans to not break everything???
FWIW, there are Python package managers that don't install dependencies globally, such as Poetry (and before that Pipenv, although I really wouldn't recommend it). It's a shame most people (and projects) still use requirements.txt, though.
I will say, as far as my limited understanding of the JVM goes, it seems like container tech can take care of any system dependencies that you might need for a complex C++ build.
It also seems that the JVM's original purpose of running the same code across operating systems/architectures has been superseded by containers, better cross-compilation (LLVM), and possibly wasm as well.
Maybe at some point in the future, the JVM will mostly be relegated to legacy systems and most new software (assuming AI isn’t writing all future software) will target wasm or require some container runtime.
You can't complain about the complexities of the JVM and then propose containers, which are incredibly complex themselves, as a solution.
I mean sure, most modern JVM apps deploy to containers, but developing in container images is quite another thing and requires you to understand and debug a lot of issues (e.g. mounting, caching, networking) that you'd like not to care about (especially if you're not particularly an infrastructure/ops person).
Hah, I feel pretty much the opposite. For me, Java is simple and logical, and its IDEs help tremendously.
C (and C++), on the other hand, is a trash fire of macros obfuscating code, header files (WTF they're still here in 2022?), libraries which are hard to import, barely working IDEs (IntelliSense in latest Visual Studio takes 30+ seconds to update itself after every change in my small C++20 pet project), people redefining primitive data types in a hundred different ways, unusable compiler error messages etc. Every time I have to work with C++, I'm getting angrier by the hour - and my codebase is only a simple personal project!
Java never made much sense to me, so I tried to read the JVM spec a few times to avoid poking at random. That seems to be the difference here: the whole platform is spec'd, not the JavaScript/C+make+libtool kind of blur and chaos.
Funny how the author complains about Clojure because it compiles to JVM bytecode, and then chooses another LISP compiled to... another kind of bytecode?
> The Janet language is implemented on top of an abstract machine (AM).
Greenspun's tenth rule comes to mind.
For comparison, the JVM has been developed for almost 30 years now, hundreds of thousands of hours have been invested in it, and it runs on virtually every platform on the planet. It has advanced JIT and GC capabilities, and can compile to native.
Even Clojure has been around for about 15 years now, and it has successfully carved its niche in the Java world.
Janet, on the other hand... is a 4-year-old hobby project? No disrespect, but they are not in the same league.
Just a side note: it is probably the easiest thing in the world to implement your own LISP - it is a fun thing to do, you can start very small and get something working within hours. So if you are into it, you should probably roll your own too!
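To illustrate how small a starting point can be, here is a toy evaluator sketched in Clojure (everything here is illustrative, not taken from any particular project): it handles numbers, symbols, `if`, `fn`, and function application, and fits in a dozen lines.

```clojure
;; A toy lisp evaluator. The environment is just a map from symbols
;; to values, which doubles as the lookup function.
(defn tiny-eval [expr env]
  (cond
    (number? expr) expr
    (symbol? expr) (env expr)
    (seq? expr)
    (let [[op & args] expr]
      (case op
        ;; special form: (if test then else)
        if (if (tiny-eval (first args) env)
             (tiny-eval (second args) env)
             (tiny-eval (nth args 2) env))
        ;; special form: (fn [params...] body), closing over env
        fn (let [[params body] args]
             (fn [& vals]
               (tiny-eval body (merge env (zipmap params vals)))))
        ;; otherwise: evaluate operator and operands, then apply
        (apply (tiny-eval op env)
               (map #(tiny-eval % env) args))))))

;; Seed the environment with host arithmetic and try it out:
(tiny-eval '((fn [x] (+ x 1)) 41) {'+ +})
;; => 42
```

From there you add quoting, a reader, and macros, and you have rolled your own lisp. It really is a fun afternoon.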
The author does not complain that it is compiled to bytecode; he complains that the language is tied to the Java environment, and prefers one that is tied to C.
He prefers this because of the investment he already has in that environment, as well as its philosophy and practice (programming in the Java world is assuredly something else).
When you start with "no disrespect", what follows is, with high probability, disrespectful. Did he claim Janet was in the same league? He starts by clearly mentioning that this is in the scope of his side projects, so enjoyment trumps marketability.
I think the takeaway here is that any language or ecosystem has an unspoken culture, set of habits and norms, set of assumed background knowledge, and general "look and feel" to it that matter to the productivity and overall satisfaction that any given developer will get, based on their personality and past experience. We often ignore these aspects when doing something like picking a language, but they really do make a difference.
I think the biggest point is that it's hard to find developers in a specialty niche; even if it allegedly offers productivity gains, those are negated by increased costs.
Clojure was cool, and I used it mainly for Datomic, but the gains really don't justify relying on public Slack channels and mailing lists to hire and find devs.
I really see this from an economic point of view. I could be mistaken, based on my own experience dealing with Clojure and Datomic in particular, but I'm just not sure the gain is there anymore, as other databases built on conventional tech stacks increasingly close the gap.
It describes how no single technical improvement can deliver an order-of-magnitude gain in software development efficiency, so the ways available to improve development by an order of magnitude are learning to use already-built components, shrinking feedback loops so we can be sure we are building the right thing, and growing the profession.
This submission isn't titled correctly despite it being the title of the article. The author has an issue with the JVM, not Clojure, and then chooses a dialect of Clojure (I assume because they like Clojure and it is for them).
It would be nice, though, to understand what the author believes the issues with the JVM are, since they aren't expounded upon in the article.
The problems with the JVM are more cultural or political and not necessarily technical, although at the time clojure on the jvm did require quite a bit of memory and the cold startup times weren’t great for a simple, “watch this file in development and restart the server” workflow.
> “watch this file in development and restart the server” workflow
Fortunately, no one in the Clojure ecosystem works like that, thanks to the REPL :) You fire up the server, send code straight from the editor to the server and evaluate everything on the fly, no restarts needed.
If you do the whole "restart server" process every time you change `handle-request`, you lose the state of `current-count`, but if you instead just change the function and "send" the new one to an already-running server, the state of `current-count` stays the same, even though the new function definition is being used.
Now imagine it with more complex state, or with things that require multiple pieces of state to do something. Reloading the server would mean you have to manipulate the state into the right "shape" before you can test each change, while if you can retain your state even after changing the code, you avoid all that.
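A minimal sketch of the kind of setup being described (the `handle-request` and `current-count` names come from the comment above; a Ring-style handler shape is assumed):

```clojure
;; defonce binds current-count only the first time it is evaluated,
;; so re-sending this file to the REPL does not reset the counter.
(defonce current-count (atom 0))

;; Redefining this function from the editor swaps in the new behavior
;; while current-count keeps its accumulated value.
(defn handle-request [req]
  (swap! current-count inc)
  {:status 200
   :body   (str "request #" @current-count)})
```

Evaluate a changed `handle-request` in the running process and the next request still sees the old count; a full restart would have reset it to 0.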
This is why I'm still stuck in Clojure and unable to go back to writing code in other languages. The productivity gain is simply too high to throw away by not using Clojure (or other languages that are driven by the same ideas).
Both the .NET runtime and the JVM provide this functionality for all the languages they support. It requires a bit of work for the languages to actually enable it in a proper workflow, but C#, Java, Kotlin, Visual Basic, ... all support it (which I must say makes even testing a joy: test not succeeding? Step through, and if something goes wrong, just change the code, press F8, and it restarts at the function call or sometimes even at the basic block you changed). I believe Visual Studio supports this even for C++ without a .NET runtime.
Microsoft calls it "edit and continue". Not sure what Oracle/IBM calls it, but it works.
It's not guaranteed to work and there's edge cases where it silently corrupts stuff. Function pointers of course don't update, for example. The Clojure way is cleaner.
There have been tech demos showing that LLVM can do this for C++ compiled to machine code. I don't know if it's really working anywhere, but it certainly can be made to work if someone puts in enough effort.
I once established a debugging connection from Eclipse to a telephony server running on the JVM that was actively serving calls, and live-replaced the dialplan-handling code, stopping a DDoS without making things worse. It even works for that case. Not something I'd advise doing, but I love that it can actually work...
If your long-term state depends upon ephemeral, non-persisted data, then you're hosed if you ever have to restart anyway. If anything brings that server down, you lose current-count as well, whether it's to change out the definition of handle-request or a power outage. So either current-count is not a useful long-term value worth persisting, or the above program is broken from the start.
In the former case (it's useless) the stop-the-world-and-restart approach doesn't hurt anything, in the latter you would need to fix the program anyways.
Yeah, I mean, obviously atoms are not used for long-term state storage; I'm not suggesting you replace your database with an atom. But they're useful for a lot of things that don't need long-term storage. And even more importantly, the example demonstrates how you can use the stateful REPL to avoid restarting the full server on every change and still keep state around.
The downsides of Clojure on the JVM, which I had kind of forgotten about until today, like long stack traces and heavy memory use, kind of outweigh the upside of the REPL.
Not sure how it is better than Ruby. But maybe the author wants something fresh to play with - that I can totally relate to. Good luck with the new journey anyway!
Minor point: Proprietary UNIX OSs aren't artifacts of the seventies. OSs like AIX and Solaris live on. A former employer sells enterprise software for both of those environments to this day. Banks, especially, seem to love AIX in my experience.
Huh. I genuinely didn't know Solaris -- the closed-source commercial version, rather than offshoots like Illumos and OpenIndiana -- was still kicking around, but sure enough there's an "Oracle Solaris 11" page.
Turned 30 last year (basically). Well-funded inertia might float it for another 30. I'll bet even joking about EOLing it could trigger flop sweats in some enterprise technical executives.
Kind of mind boggling. You figure "surely there must have been a point at which being that change-averse bit them in the ass hard enough to modernize their approach" but... guess not.
In fact, one large banking system I worked on in the late aughts completely revamped their megalith of a back-end for a major component of their system and the only two things they didn't change were using AIX and Oracle.
I wouldn't bother with rust under any circumstance.
And in any case, I'm wondering about compiler-writing resources, not compilers.
I daresay you didn't even read what I wrote and just typed "hurr, Rust is good." I've seen a million comments like this, and it's largely the reason I'll never bother.
I think you didn't read what I wrote. Did I say "Rust is good"? I meant that the Rust compiler contains the logic you are searching for. And you need to learn to be polite to people who are trying to help you.
They're really pretty different, so it depends on your goals and your environment. If you're scripting something Lua-scriptable like OpenResty/HAProxy, or you want to piggyback off the Lua ecosystem of modules, you're looking at Fennel as your choice. If you want more built-in batteries and you aren't tied to Lua, you may prefer Janet, although the ecosystem is young. If you like Clojure but don't want to use regular Clojure, I guess you'd prefer Babashka. Of the three I've not used Babashka, so I'm not sure what the sell is there. Is it just an alternative runtime, I guess?