Is that something people want to get rid of? Back when I did some clojurescript people were pretty proud of being able to have it used automatically. What's the plan to get the same benefits? Or is the argument that the benefits aren't significant 15ish years on?
I would say the community is pretty evenly split between people who hate it, and people who find it practical. I don't see many people really championing it or being proud of it these days.
Well, technically I think most of the community is indifferent. But from the discourse about the topic, I feel like I see a pretty even split.
Last I checked, it needed the JVM (parts of the library are in Java). Given the many JS minifiers and optimizers (tree shaking etc.) available in JS itself in 2026, I don't know why we need this huge overhead.
And for those of us devs, they never really went anywhere. vim was the most popular editor on HN 15 and 10 years ago, still very popular 5 years ago, still popular today... and that's just an editor; all the other tools like top and its descendants never went away. I'll believe "TUIs are back" or on some kind of rise when I notice my non-developer friends and family using them for anything. The most dominant UI today is the mobile app, and that's not changing. Limited to professional use (i.e. doing work for someone) rather than all use, TUIs aren't touching either web apps or native GUIs either.
And Unreal Engine 5 needs the Agility SDK, creating problems where games wouldn't run if your Windows version wasn't new enough. (Same as the typically encountered glibc problem of the user having an older version than the build needs, really.) (I think most of those particular issues were "solved" now with Win10 being EOL, so the developers just wash their hands of it and say "upgrade". Or use Linux and Steam, where no thanks to MS or the gamedevs themselves, games old and new can just work.)
Dependency hell comes for everyone, win32 may be stable but the broader ecosystem for Windows is little better than anything else. I say little because at least MS does still commit to a lot of backwards compatibility and ensuring some very old DLLs are still part of new Windows 11 installs.
As another comment notes, some older Humble Bundle Linux builds just don't work anymore on modern systems; some of those fail just because they assumed a particular libjpeg or libxml or whatever would be part of the base distro install and be around indefinitely. Bad assumption. But fixable the same way as missing DLLs from Windows builds.
If Dotcl does have good performance, it would be interesting to try running Coalton on top of it too. Coalton syntax is probably not unusual if you are familiar with OCaml and F#: https://github.com/coalton-lang/coalton (Though I'd expect the performance of the typical use case of running on top of SBCL to still be better.)
From the same project there's the recently released mine editor that's trying to be a friendlier gateway into trying Common Lisp (and/or Coalton) than emacs: https://coalton-lang.github.io/mine/ Time-to-first-SHOUTING still comes as soon as you start a REPL, though -- it tells you that your package (namespace) is CL-USER. I sort of think it's one of those things that grows on you, or at least stops being annoying after a while (until you need to deal with certain foreign function interfaces, anyway), and it's an interesting possible convention to use SHOUT-CASE in docstrings to call out specific parameters or other function names instead of some @param, \param, @link, or what have you.
Re that last: FWIW, in Emacs Lisp (which is case-preserving and mostly lowercase by convention, without the legacy symbol case behavior of CL), docstring convention is to use single quotes for most literals and to use all-caps to mean the value of a local symbol—usually a function argument, but sometimes a variable introduced in running text for describing the structure of data or such. Last I checked, CL wasn't as consistent across projects, but I tend to carry the Emacs convention there when not conforming to a different local style, and wonder sometimes who would have their monocle pop off to see it…
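A minimal illustration of that Emacs Lisp convention (the function and everything in it are made up for the example):

```elisp
(defun my-clamp (value low high)
  "Clamp VALUE to the inclusive range [LOW, HIGH].
Return LOW when VALUE is below it, HIGH when VALUE is above it,
and VALUE otherwise.  See also `min' and `max'."
  (min high (max low value)))
```

The all-caps VALUE/LOW/HIGH refer to the arguments, while the `min'-style quoting marks other symbols -- and Emacs's help buffers turn the quoted ones into links.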
Being paused in the debugger is per-thread. If the server's using a thread-per-request model, and you're stopped in the request, then other requests can proceed just fine. If some of those requests also trigger the debugger, they'll pause and have to wait; they won't interrupt your current debugging view. Extra care should be taken in any sort of production debugging, of course. (At a Java BigCo, production debugging was technically allowed but required multiple signoffs, the engineer wasn't the one in control but had to direct someone else, there were lots of barriers to prevent looking at arbitrary customer data, and of course it was still limited to what you can do with a standard JVM restarted in debug mode. (Mainly setting breakpoints and walking stack traces.))
But the nicest part is that once you connect to the production application, apart from network lag it's no different than if you were developing and debugging locally on similarly specced hardware to the server: you have all the same tools. Many of the broader activities around "debugging" don't need to happen in a paused thread that was entered with an explicit breakpoint or error; they can happen in a separate thread entirely. You connect, then you can start inspecting (even modifying) any global state, you can define new variables, you can inspect objects, you can define new functions to test hypotheses, redefine existing functions... if you want all requests to pause until you're done, you can make it so. Or if you want to temporarily redirect all requests to some maintenance page, you can make that so instead.

A simple thing I like doing sometimes when developing locally (and I could do it on a production binary too) is to define some (namespaced) global variable and redefine a singly-dispatched method to set it to the self object (possibly conditionally), and once I have it I might redefine the method again to have that bit commented out just so I know it won't change underneath me. Alternatively I can (and sometimes do) instead set this where the object is created. Then I have a nice variable independent of any stack frames that I can inspect, pass to other method calls, change properties of, whatever, at my leisure without really impacting the rest of the program's running operation.

Another neat trick is being able to dynamically add/remove inherited mixin superclasses to some class, and when you do that it automatically impacts all existing objects of that class as well. Mixin classes are characterized by having aspect-oriented methods associated with them; you can define custom :before, :after, or :around methods independent of the primary method that gets called for some object.
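The mixin trick can be sketched like this in Common Lisp (all the class and method names here are made up; the redefinitions are exactly the kind of thing you'd type at a connected REPL):

```lisp
;; A class already live in the running image, with some primary method.
(defclass request-handler () ())

(defgeneric handle (handler request))

(defmethod handle ((h request-handler) request)
  (format t "handling ~a~%" request))

;; The mixin class itself is empty; it exists to carry auxiliary methods.
(defclass logging-mixin () ())

(defmethod handle :around ((h logging-mixin) request)
  (format t ">> entering handle~%")
  (prog1 (call-next-method)
    (format t "<< leaving handle~%")))

;; Later, at the REPL: redefine the class with the mixin spliced into its
;; superclasses.  Existing REQUEST-HANDLER instances are updated
;; automatically and start going through the :around method.
(defclass request-handler (logging-mixin) ())

;; Removing it again is just another redefinition:
;; (defclass request-handler () ())
```

The :around method wraps the primary one without touching its source, which is what makes adding and removing the mixin at runtime so low-risk.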
What makes you think it falls flat in a team setting? There are plenty of N-pizza-sized teams successfully using Lisp to this day, and you're probably aware of many teams that successfully used Lisp in the past, too. There's also the success of Clojure. What's required to have a well-functioning team is mostly programming-language independent; Lisp itself won't save a team lacking those properties any more than, say, Java would.
Did you even read what I said or who I responded to? I am specifically talking about working inside an image, monkey patching functions and structures live in the running image. A practice almost no one uses anymore, and one that, as I said, I use and find convenient as a single dev on a project, but would not want to use in a team; for that, modern workflows with versioning, beaming code, CI/CD, dev containers etc. are preferred.
I prefer lisp over most other things in life, and so does my team. I was specifically not talking about the language though.
I've frequently said that Java + JRebel gets the closest to the Common Lisp + slime experience (closer than Python), but as you say the Lisp experience is still superior; the Java ecosystem has yet to close the gap*. The widest part of that gap I'd mention is not having the condition system built into Java (though I'm aware people have tried to make a comparable one as a library); lacking it degrades the debugging experience considerably (even though simple step-debugging is typically more pleasant than in Lisp). IntelliJ's drop-frame feature isn't good enough.

The other problem is needing Java + something. What you get with just a regular JVM running under your IDE is no better than what other languages offer (if they offer anything) as their cute hotswap/hotpatch feature, and it comes with big limitations (like no changing method signatures, no adding/removing methods or properties, or only applying changes to new objects). Once you're doing something non-trivial, especially if you're trying to incrementally develop your program rather than just debug one specific problem, you'll have to restart. In contrast, Common Lisp's got its disassemble, describe, inspect, compile, fmakunbound, ... all being functions callable at runtime, and update-instance-for-redefined-class is part of the standard language too. Support for live reloading of everything is baked into the language rather than a hack on top; slime is just a convenient way of working with it. It's still convenient to restart the program occasionally, but few things force you to.
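For concreteness, here are a few of those standard Common Lisp functions as you'd call them at a connected REPL (MY-FUNCTION and *SOME-GLOBAL* are placeholders for whatever you're poking at):

```lisp
(describe #'my-function)     ; print what the system knows about the function
(inspect *some-global*)      ; interactively inspect an object's structure
(disassemble 'my-function)   ; show the native code the compiler produced
(fmakunbound 'my-function)   ; remove the function binding entirely
(compile 'my-function        ; install a freshly compiled replacement
         '(lambda (x) (* 2 x)))
```

All of these are ordinary functions defined by the language standard, not IDE features, which is why they work the same in a production image as on your laptop.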
Unfortunately JRebel has killed their free tier, so I'd now point unwilling-to-pay programmers to something like https://github.com/JetBrains/JetBrainsRuntime which is IntelliJ/Eclipse/whatever-independent. I haven't tried it myself yet though... Given they only address the biggest class reloading concerns, I doubt it's actually comparable to JRebel for business-world Java. JRebel handles among other things dynamic reloading from XML changes and reinitializing autowired Spring beans that other classes use for dependencies.
*Caveat, I've been out of the professional Java grind for a while, I'd be pleasantly surprised if some new version that's come out contradicts me.
Yeah, I mean there is some support for various editors (https://lispcookbook.github.io/cl-cookbook/editor-support.ht...) including VS Code (https://lispcookbook.github.io/cl-cookbook/vscode-alive.html), but it's kind of rough (https://blog.djhaskin.com/blog/experience-report-using-vs-co...) and not exactly feature-complete with the emacs experience, plus you're still left having to figure out how to install and set up a Lisp implementation and quicklisp. I like that mine solves those for a newcomer, especially on Windows. (I myself use vim + slimv, but even that isn't quite at parity in some respects with emacs. The biggest weaknesses are around debugging, especially in the presence of multiple threads. But the essentials do work (stepping, eval-in-frame, continuing from a stack frame, selecting the various types of restarts, compiling changes before selecting restarts) so I'm still fairly productive and don't feel like I'm lacking anything sorely needed for professional work. I've hacked together some automatic refactoring bits as well, which emacs doesn't have either, and I'm eventually going to make a separated GUI test runner.)
I've been kicking the tires on mine a little bit yesterday and today; I think it's quite good for the beginner experience. But I'm constantly of two minds about reporting some feature requests. The project's primary goal seems to be serving as a stepping stone to even see what Lisp (and especially Coalton) is really all about before "graduating" to something like emacs; being usable by professionals as well feels like a secondary goal (though it is mentioned as a goal), and there's inherent tension there. That's also been a weakness with the other editors: anyone already comfortable with Lisp development, professional or not, in emacs or not, isn't very likely to give the time of day to some new thing that's almost certainly not going to be as good as what they're used to. And so the new thing doesn't get attention and feedback from experienced developers, and the gap never closes.
It's an important concern for those footing the bill, but I expect companies actually facing that impact to do a cost-benefit calculation and use a mix of models. For the sorts of things GP described (iptables whatever, recalling how to scan open ports on the network -- the sorts of things you usually could answer for yourself with 10-600 seconds in a manpage / help text / google search / stack overflow thread), local/open-weight models are already good enough and fast enough on a lot of commodity hardware. Right now companies might just offload such queries to the frontier $200/mo plan, because why not: tokens are plentiful and it's already being paid for. If in the future it goes to $2000/mo with more limited tokens, you might save those tokens for the actually important or latency-sensitive work and use lower-cost local models for simpler stuff. That lower cost might involve a $2000 GPU to be really usable, but it pays for itself shortly by comparison. To use your Uber analogy: people might have used it to get both downtown and to the airport, but now that it's way more expensive, they'll take a bus or walk or drive downtown instead -- yet the airport trip, even though it costs more than it used to, is still attractive against competing alternatives like taxis or long-term parking.