I've gone through something similar, but for a more functional language (a Scheme). It's interesting how here the biggest wins are from optimizing the objects, while the biggest wins in my case were optimizing closures. The optimizations were very similar.
"Three Implementation Models for Scheme" gives all the answers needed to make a fast enough Scheme, though it has something of a compilation step, so it's not interpreting the original AST: https://www.cs.unm.edu/~williams/cs491/three-imp.pdf
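For context, the closure optimizations that paper centers on are flat closures. A rough, purely illustrative sketch of the idea in Python: a closure becomes a code pointer plus a flat vector of just the values of its free variables, so variable lookup is one indexed load instead of a walk up an environment chain.

```python
# Hypothetical sketch of a flat-closure representation, in the spirit of
# "Three Implementation Models for Scheme": a closure pairs compiled code
# with a flat tuple of captured free-variable values.

class Closure:
    def __init__(self, code, free_values):
        self.code = code          # compiled body: fn(args, free) -> value
        self.free = free_values   # flat tuple of captured values

    def __call__(self, *args):
        return self.code(args, self.free)

# (lambda (x) (lambda (y) (+ x y))), with x captured flatly:
def inner_body(args, free):
    (y,) = args
    (x,) = free
    return x + y

def make_adder(x):
    return Closure(inner_body, (x,))

add3 = make_adder(3)
print(add3(4))  # 7
```

The win is that the inner function never touches an environment structure for `x`; it reads slot 0 of its own closure record.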
Then allow me to be judgemental in your stead. I've done a similar setup to the above, completely locally. I dunno how they're paying so much, but that's ridiculously overpriced.
All the other models performed much worse for the skills I'm using. I tried gpt-5.1 (and then 5.4 again recently), and also tried pointing it at OpenRouter and using a few of the cheaper models, and all of them added too much friction for me.
Be judgemental all you want, but I feel like I'm paying for less friction, and also more security, since my experiments also showed Claude to be the least vulnerable to prompt injection attempts.
In a possible defense of grandparent: whenever I pirate movies these days (seldom), it's not because I don't want to pay, but either because I want the offline reliability or because I just can't find the film elsewhere.
(The latter would however not be the case for Titanic, I imagine.)
It's not the only thing they're doing with it. I mean, the logic is sound - $180 goes into automating a bunch of manual processes in their personal life, one of which is getting movies, which in some cases involves going out on the high seas.
A bit tangential, but I believe dynamic vs. static typing works the same way. I switch quite often between them, and whenever I've had a longer break from dynamic typing, coming back to it feels heavy. "How did I ever do this?"
But a few hours (or days) in, I forget what the problem was. A part of my brain wakes up. I start thinking about what I'm passing around, I start recognizing the types from the context and names...
It's just a different way of thinking.
I recognized the same feeling after vibe coding for too long and taking back the steering wheel. I decided I'd never let go again.
In dynamically typed programs, you can, if you allow it, let the types happen to you. In a statically typed program, you are forced to think about them from the beginning. The same abstract concept is at play with vibe coding, but now it's the code that happens to you.
My best LLM-written code is where I did a prototype of the overall structure of the program and fed that structure along with the spec and the goal. It's kind of a cognitive bitter lesson: the more you think, the better the outcome. Always bet on thinking.
To "realize" that it would have to be true. The longer I've stuck with untyped Python the more I've preferred it, and the more I've seen people tie themselves in knots trying to make the annotations do impossible (or at least difficult) things. (It also takes away bandwidth for metaprogramming with the annotations.)
It bugs me that there are two kinds of languages. Parameters and variables could be optionally typed in a dynamic language; a mismatch would be an error either in the compiler or at runtime, and otherwise you just haven't made any type errors while you coded and the code is fine either way.
This is what gradual typing (such as TypeScript, or the use of Python annotations for type checking) accomplishes. The issue is that it's basically always bolted on after the fact. I have faith that we aren't at the end of PL history, and I won't be surprised if the next generation of languages integrates gradual typing more thoughtfully.
The problem with these two languages is that the runtime type system is completely different from (and much weaker than) the compile-time one, so the only way to be safe is to statically type the whole program.
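Python shows the same gap in miniature: a static checker such as mypy reads the annotations, but the runtime erases them, so the only types actually enforced while the program runs are the dynamic ones.

```python
# Annotations are visible to a static checker (e.g. mypy) but are not
# enforced at runtime, illustrating the compile-time/runtime gap.

def scale(x: int, factor: int) -> int:
    return x * factor

print(scale(2, 3))     # 6 -- well-typed call
# A static checker would reject the next call; the runtime runs it
# anyway, and the declared "-> int" is simply untrue here:
print(scale("ab", 3))  # ababab
```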
Common Lisp has a pretty anemic type system, but at least it does gradual typing without having to resort to this.
JavaScript caught on because it was the best casual language. They've been trying to weigh it down ever since with their endless workgroup functionality and build processes. If we get a next generation casual language, it'll have to come from some individual who wants it to happen.
No, JavaScript caught on because at the time it was the only game in town for writing web front-ends, and then people wanted it to run on the server side so that they could share code and training between front end and back end.
It's not enough to just be first. It would have been replaced by now if it wasn't fit for purpose. Otherwise we might as well not bother to critique anything.
A needlessly confrontational view. Some people do use dynamic typing as a way to stumble around until it works (e.g. most scientists), but others simply don't want the noise of a static type system accurate enough to really say what you want, especially during prototyping and interactive use. Which is why gradual typing exists, really.
Same reason my views about GC evolved from "it's for people lacking rigour" to "that's true, but there's a second benefit: no interleaving of memory handling and business logic to hurt clarity".
I've slowly evolved from just writing to and looking up JSON files to using SQLite, since I had to do a bit more advanced querying. I'm glad I did. But the defaults did surprise me! I'm using it with PHP, and I noticed some inserts were failing. Turns out there's no tolerance for concurrent writes, and there's no global config that can be changed. Retry/timeout has to be configured per connection.
I'm still not sure if I'm missing something; this felt like a really nasty surprise, since it's basically unusable by default! Or is this the fault of PHP's PDO?
This is the fault/price of backwards compatibility. Most users of SQLite should just fire off a few pragmas on each connection:
PRAGMA journal_mode = WAL;
PRAGMA foreign_keys = ON;
-- busy_timeout is in milliseconds; anything non-zero beats the default
PRAGMA busy_timeout = 1000;
-- fine for most applications, but see the manual
PRAGMA synchronous = NORMAL;
-- if you use SQLite as a file format
PRAGMA trusted_schema = OFF;
You might need additional options, depending on the binding. E.g. Python applications should not use the defaults of the sqlite3 module, which are simply wrong (with no alternative except out-of-stdlib bindings pre-3.12): https://docs.python.org/3/library/sqlite3.html#transaction-c...
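A minimal sketch of applying those pragmas from Python's stdlib sqlite3 (PHP's PDO can do the same via `$pdo->exec()`). Note that busy_timeout and foreign_keys are per-connection settings, so they belong in whatever connection factory the application uses; journal_mode = WAL persists in the database file, but re-issuing it is harmless.

```python
import sqlite3

def connect(path):
    """Open a SQLite connection with sane (non-default) settings."""
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode = WAL")
    conn.execute("PRAGMA foreign_keys = ON")
    conn.execute("PRAGMA busy_timeout = 1000")   # milliseconds
    conn.execute("PRAGMA synchronous = NORMAL")
    conn.execute("PRAGMA trusted_schema = OFF")
    return conn

conn = connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO t DEFAULT VALUES")
print(conn.execute("SELECT count(*) FROM t").fetchone()[0])  # 1
```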
For me it's throwaway scripts and tools in general - but only simple tools that it can more or less one-shot. If I ever need to tweak one, I one-shot another tool. If it works, it's fine. No need to know how it works.
If I'm feeling brave, I let it write functions with very clear and well-defined inputs/outputs, like a well-established algorithm. I know it can one-shot those, or they can be easily tested.
But when doing something that I know will be further developed, maintained, I mainly end up writing it by hand. I used to have the LLM write that kind of code as well, but I found it to be slower in the long run.
All games I've installed on Steam Deck, with or without official compatibility, have worked well. Steam Deck runs Linux, and it all works thanks to Proton. I'd go as far as saying it just works.
Much of that is, at least for my company, handled by our accounting company. We just print the correct VAT on the invoice, report the same VAT to the accountant, and they take care of the rest. The shop, payment processor, etc. don't need to be integrated with any of it. Though I do have to post-process Stripe's reports, since they refuse to include the VAT rate used, even though they know it. Stripe does try to sell us their tax service, but I refuse.
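That post-processing can be as small as deriving the effective rate from amounts the report does carry. A sketch, assuming the exported report has per-line net and tax amounts - the column names here are invented for illustration, not Stripe's actual schema:

```python
import csv
import io

# Hypothetical report excerpt; column names are made up, not Stripe's.
report = io.StringIO(
    "invoice,net_amount,tax_amount\n"
    "inv_1,100.00,24.00\n"
    "inv_2,50.00,12.00\n"
)

rates = {}
for row in csv.DictReader(report):
    net = float(row["net_amount"])
    tax = float(row["tax_amount"])
    # Effective VAT rate in percent, recovered from the two amounts.
    rates[row["invoice"]] = round(100 * tax / net, 1)

print(rates)  # {'inv_1': 24.0, 'inv_2': 24.0}
```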
You can simplify for your use case (only B2C, or you refund VAT afterwards for B2B; you only ship from one location; custom invoicing), but that's what it takes to implement it correctly at the platform level.
The reply about it knowing about their job and family made me think.
The only thing I can now think of is using it as a personal therapist, or asking how to approach their kids. And they're a bit embarrassed about it, because it's still outside the Overton window - especially on HN - which is why they aren't sharing it.
If someone has different use cases, please do prove me wrong! Maybe I just lack imagination.
Such an incredible amount of personal, intimate knowledge to share with a company. Sure, Google can figure out where I live and who I visit because I have an Android phone, but they'll never know the contents of those relationships.
I have a line in the sand with the AI vendors. It's a work relationship. If I wouldn't share it with a colleague I didn't know super well, I'm not telling it to an AI vendor.
I recently asked about baby-led weaning. If my baby were 2 months old, it would have been smart to mention "not yet!" but it knows she's 8 months old and was able to give contextual advice.
I ask gpt a lot of questions about plants and gardening - I’m happy that it remembers where I live and understands the implications. I could remind it in every question, but this is convenient.