As you mentioned Java, it’s interesting to note that it has had similar problems throughout its history: logging (it has now settled on slf4j, but you still find libraries using something else), commons (first Apache Commons, now Guava), JSON (it has settled on Jackson, but Gson and json-simple are not uncommon to see), nullability annotations (first with unofficial distributions of JSR-305, which never became official, then the Checker Framework, and lately everything migrating to JSpecify). All this basic stuff needs to be provided by the language to prevent this fragmentation and these quasi de facto standard libraries from appearing.
The traditional approach in Java has been to let those things happen in third-party space, then form an expert group to standardise a shared API for them. That was done fairly successfully with XML parsers and ORM. It doesn't always work, as your examples show - there was an attempt with logging, but it was done badly; JSR-305 ran aground; etc. But I think it's a much better approach than the JDK maintainers trying to get it right the first time.
But this fragmentation is what's needed to make good software. If you put things in the standard library, you're just adding a +1 to the fragmented landscape, because, for instance, it will never be specialized enough to cover all use cases, so people will still use their own libraries. C++, for example, has three dozen distinct implementations of hash maps precisely because one cannot fit all cases.
It could also be argued that putting a specific executor model into the standard library would make the problem worse, because it would give library crates license to use it without considering alternatives, since it is standard. At least today, taking a dependency on a specific runtime is a well-known boondoggle.
Not only that, but there kind of is a de facto standard (tokio), which is pretty much the default unless you're in a specific, resource-constrained use case.
Commons is something that is gradually being migrated into the JDK, at least the parts deemed necessary for most projects. I don't use Apache Commons or Guava at all in Java (now at 25 or 26, depending on the project). There are still some libs that depend on them, but I would argue that most use them out of inertia rather than actual need.
As for slf4j, I still don't see any justification for an abstraction layer on top of logging. I have never, ever migrated from one logger to another, and even if I did need to, it would be easy, as most loggers are very similar.
E.g. that's why I decided to use log4j2 in my latest project.
The logging implementation should be an application-level decision. By coding against a facade like slf4j, a library lets the application plug in whatever logging implementation it wants. That’s why libraries should use it.
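A toy sketch of that facade idea in plain Java (the `Log` interface and class names here are made up for illustration; the real thing is `org.slf4j.Logger` plus a binding such as Logback or Log4j 2):

```java
import java.util.ArrayList;
import java.util.List;

// The facade: the library depends only on this interface,
// never on a concrete logging implementation.
interface Log {
    void info(String msg);
}

// A hypothetical library class, coded against the facade only.
class PaymentLibrary {
    private final Log log;
    PaymentLibrary(Log log) { this.log = log; }
    int charge(int cents) {
        log.info("charging " + cents + " cents");
        return cents;
    }
}

public class Main {
    public static void main(String[] args) {
        // The application binds the implementation. Here it is just a
        // list collector; in real slf4j it would be Logback, Log4j 2, etc.
        List<String> lines = new ArrayList<>();
        Log binding = lines::add;
        PaymentLibrary lib = new PaymentLibrary(binding);
        lib.charge(500);
        System.out.println(lines.get(0)); // prints "charging 500 cents"
    }
}
```

The point is that the library compiles without knowing any concrete logger, so two libraries using the facade never force conflicting logging stacks on the application.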
You are right, but recently “vibe coding” has become a demeaning term for AI-assisted code among anti-AI people. It’s interesting to see how quickly words evolve on the internet as they spread to different demographics.
> move fast and break things and move at a pace that guarantees everything is rock solid.
Zig is famous for taking the former path! Anyone using Zig for a few years knows every release breaks things, and they are still making huge changes which I would classify as “moving fast”, like the recent IO changes!
That’s funny because it’s exactly, literally the same. The difference is it’s not deterministic. That may be a problem but it’s still a higher level language, just a much higher level language than anything before.
I assume you're some sort of programmer, and I genuinely wonder how anyone can, in good faith, downplay non-determinism and ambiguity when talking about a programming language.
High-level languages can certainly yield inefficient code when compiled, or maybe different code among different compilers, but they're always meant to allow their users to know exactly what to expect from what they put together in their programs. I've always considered this a hard fact, I simply cannot wrap my head around working in a way that forces me to abandon this basic assumption.
The language spec may be, but an implementation is never ambiguous. When you encounter undefined behavior in the spec, that’s when you look at your compiler or interpreter docs.
So by your logic all the PMs, managers and customers are programmers, right? After all, there’s a human compiler that takes their input and produces a program?
They are programmers when they write a prompt and get runnable code as a result, yes. But not if they ask a human to write the code, because if you have an intermediate, manual step between the text and the running code, you don’t have an automated process, and hence it’s no longer even an application, let alone a “compiler”.
Why does it matter if a human or a machine is responsible for turning the prompt into code?
If there's a black box which I can send C code into one side of and get faithful machine code out the other, I'd call that box a "compiler". I wouldn't rename it if I later find out that there are little elves inside doing the translation.
RSS 2.0 is kind of an unspecified mess, and at least 15 years ago, if you wanted to be compatible with the majority of content, you needed some weird heuristics to detect which interpretation of the spec a given feed was using (lol).
And Dave Winer was strongly against ever clarifying the spec, and that’s part of what led to Atom.
I use the Code tab in the Claude Desktop app and find it a superior experience, since everything you expect from a desktop app works: copy/paste, undo/redo, automatic formatting of text as you type it, multiline input, etc., which just don’t work in the TUI. It requires preparing the environment a little so that when Claude runs commands it has the same access as a terminal, but I got it working easily enough.
All of those: copy/paste, undo/redo, multiline input work for me in my TUI. I wonder if different environments have different behaviors. (Not sure about automatic formatting of text. I usually format my text manually.)
It works differently and in a much more limited manner. Can you select text in the middle of a prompt and delete it? Can you paste anywhere? How do you enter multiline input? That cannot work unless explicitly handled by the app; terminals don’t have a general way to do it in a TUI. The formatting I am talking about is color highlighting and block code snippets. If you have a terminal that can do that, do let me know!
You are talking about bad programmers who are at least able to fool their managers for several years. The people OP is talking about could not even do that and most likely would have dropped out in the first week of trying to program full time, since they just don’t have the aptitude and patience to get unblocked after their first compilation error. Now they can go very far with an LLM.
Thing is, it's not how incompetent they are, but the opportunism itself. The property I mentioned pulls in opportunists regardless of their competence. So eventually, if you work in a field like this, you end up surrounded by them. There are always _some_ around you, of course, everywhere - but over time, certain fields have pulled in so many of them that they became suffocating to anyone who isn't one. And if you think you can interview your way out of this - an opportunist will often have an easier time passing a harsh interview process than someone who cares.
IT isn't the only one - finance and law have had this issue forever, AFAIK - but now I'd rather be in a field that's _actively repellent_ to them.
Yes, but I don’t know how someone familiar with a JetBrains IDE can claim that only Lisp has that feature. I love Common Lisp and SLIME, but most of what it can do, I can also do in Java with the IDE. Change a method definition while it’s running and then restart the method? No problem. Run any code within the context of the running method? Yes, Java can do it. Change local variables’ values in the middle of a method? Easy!
The Lisp REPL is still superior because it comes with more stuff, like DECOMPILE, INSPECT, and so on, which can only exist because the compiler is part of the language even at runtime - which can also be a problem in sensitive domains. But in Java you can do all those things through the IDE, so the distance between what is possible in Lisp and in a language with good IDE support like Java or Kotlin is now negligible, in my opinion.
I've frequently said that Java + JRebel gets the closest to the Common Lisp + SLIME experience (closer than Python), but as you say, the Lisp experience is still superior; the Java ecosystem has yet to close the gap*.

The widest part of that gap I'd mention is not having the condition system built into Java (though I'm aware people have tried to make a comparable one as a library); lacking it degrades the debugging experience considerably (even though simple step-debugging is typically more pleasant than in Lisp). IntelliJ's drop-frame feature isn't good enough.

The other problem is needing Java + something. What you get with just a regular JVM running under your IDE is no better than what other languages offer (if they offer anything) as their cute hotswap/hotpatch feature, and it comes with big limitations (like no changing method signatures, no adding/removing methods or properties, or changes only applying to new objects). Once you're doing something non-trivial, especially if you're trying to incrementally develop your program rather than just debug one specific problem, you'll have to restart.

In contrast, Common Lisp has its disassemble, describe, inspect, compile, fmakunbound, ... all being functions callable at runtime, and update-instance-for-redefined-class is part of the standard language too. Support for live reloading of everything is baked into the language rather than being a hack on top; SLIME is just a convenient way of working with it. It's still convenient to restart the program occasionally, but few things force you to.
Unfortunately JRebel has killed their free tier, so I'd now point unwilling-to-pay programmers to something like https://github.com/JetBrains/JetBrainsRuntime which is IntelliJ/Eclipse/whatever-independent. I haven't tried it myself yet though... Given they only address the biggest class reloading concerns, I doubt it's actually comparable to JRebel for business-world Java. JRebel handles among other things dynamic reloading from XML changes and reinitializing autowired Spring beans that other classes use for dependencies.
*Caveat, I've been out of the professional Java grind for a while, I'd be pleasantly surprised if some new version that's come out contradicts me.
The other motivation for me is to drastically reduce boilerplate code. I can’t believe people here are saying they never use macros, they are so good for this that avoiding them sounds to me like a skill issue! Overuse can damage readability, sure, but so can pretending macros are not an option.
Operatives do that for me, better than macros. Parent is correct that macros are compile time, which gives them a performance advantage over operatives - but IMO, they're not better ergonomically. I find operatives simpler, cleaner and more powerful.
Operatives are based on FEXPRs from older Lisps - they're basically a function-like form, but one whose operands are not implicitly reduced at the time of the call.
(foo (+ 2 3) (* 3 4))
($bar (+ 2 3) (* 3 4))
`foo` is a function: when it is combined with its arguments, it receives the values 5 and 12.
`$bar` however, receives its operands verbatim. It receives (+ 2 3) as its first operand and (* 3 4) as its second - unevaluated.
The operative/FEXPR body decides how to evaluate the operands - if at all.
The difference between an operative/FEXPR and a macro is that macros are second-class objects which must appear by name - we cannot assign them to variables, pass them to functions, or return them from functions. Operatives and FEXPRs are first-class objects that can be treated like any other value.
The difference between FEXPRs and operatives has to do with scoping and environments. FEXPRs predate Scheme, from when Lisps were dynamically scoped. This meant we could get unpredictable behavior, so-called "spooky action at a distance". They were problematic and were almost entirely abandoned in the 1980s.
Shutt introduced Operatives as a more hygienic version - based on statically scoped Scheme. Instead of the operative being able to mutate the dynamic environment arbitrarily, there are limitations. The first part of this is that environments are made into first-class objects - so we can assign them to a symbol and pass them around. The final part is that an operative receives a reference to the dynamic environment of its caller - which we bind to a symbol using the operative constructor, `$vau`.
($vau (operands) dynamic-env . body)
Compare to:
($lambda (arguments) . body)
So operatives are called in the same way a function is called - but the operands are not reduced, and the environment is passed implicitly.
The body can decide to evaluate the operands using the environment of the caller, essentially behaving as if the caller had evaluated them:
(eval operands dynamic-env)
But it can choose other evaluation strategies for the operands - such as evaluating them in a custom-created environment, which we can make with (make-environment) or ($bindings->environment).
This also allows the operative to mutate the environment of its caller - but only the locals of that environment. The parent environments cannot be mutated through the reference `dynamic-env`.
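Putting those pieces together, here is a sketch of a user-defined conditional in Kernel (assuming the report's `$if`, `$sequence` and `#inert`; `$my-when` is a made-up name):

```scheme
($define! $my-when
  ($vau (test . body) env
    ($if (eval test env)                  ; test runs in the caller's environment
         (eval (cons $sequence body) env) ; body forms run in order, last value returned
         #inert)))                        ; no branch taken: return the inert object

($my-when #t (+ 1 2) (+ 3 4)) ; => 7
```

Unlike a macro, `$my-when` is an ordinary value: it can be stored in a list or passed to another combiner just like a function.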
Technically, `$lambda` is not primitive in Kernel, though it is the main constructor of applicatives (functions). The primitive constructor is called `wrap`, and it takes another combiner (an operative or applicative) as its parameter. Wrapping a combiner simply forces the evaluation of its arguments when it is called - so functions are just wrappers around operatives, and the underlying operative of any function can be extracted with `unwrap`.
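The Kernel report itself derives `$lambda` this way; condensed, it looks roughly like this (`#ignore` says the constructed operative discards its caller's environment, which is what makes the result a pure function):

```scheme
($define! $lambda
  ($vau (formals . body) env
    (wrap (eval (list* $vau formals #ignore body)
                env))))
```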
There's a lot more to them. They're conceptually quite simple in terms of implementation, but they have enormous potential use cases that are unexplored.
Read more on the Kernel page[1]. In particular, the Kernel report[2]. There's also a formal calculus describing them, called the vau calculus[3].
Hmm, this sounds like exactly the opposite of what I was talking about. It delays execution rather than promoting execution to compile time.
What I had expected you to talk about was some way of getting the compile time execution of macros by a sufficiently smart compiler that could do extensive partial evaluation at compile time, including crossing procedure boundaries. Of course that's antithetical to the Lisp philosophy of allowing dynamic redefinition of functions and such.
In Common Lisp, macros can also be used to implement a kind of aspect-oriented programming, using the macroexpand hook (`*macroexpand-hook*`). This hook enables macroexpansion to be modified dynamically at compile time without changing the source code.
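A minimal sketch of that hook (`*macroexpand-hook*` is standard Common Lisp; its default value is `funcall`, and the hook receives the expander function, the form, and the environment):

```lisp
;; Install a hook that logs every macroexpansion before delegating
;; to the normal expander function.
(setf *macroexpand-hook*
      (lambda (expander form env)
        (format t "~&expanding: ~S~%" form)
        (funcall expander form env)))

(defmacro twice (x) `(* 2 ,x))
(macroexpand-1 '(twice 5)) ; logs the form, returns (* 2 5)
```

A real aspect-oriented use would inspect `form` and rewrite or instrument selected expansions instead of just logging them.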