notpachet's comments | Hacker News

> I can think of a few good myths for today’s “AI”. Searle’s Chinese room comes to mind, as does Chalmers’ philosophical zombie. Peter Watts’ Blindsight draws on these concepts to ask what happens when humans come into contact with unconscious intelligence—I think the closest analogue for LLM behavior might be Blindsight’s Rorschach.

LLMs remind me of sprites, pixies, and the like, who are situationally helpful but require constant supervision. We're like modern magicians who learned how to summon these sorts of spirits and bind them -- imperfectly -- to our will. But their perception of truth and reality is "through the looking glass" relative to our own. They aren't lying, from their own frame of reference, even though what they say is untrue relative to ours.


Speaking of myths, pixies, and spirits:

> I. DEFINITION:

> MAGICK is the Science and Art of causing Change to occur in conformity with Will.

> (Illustration: It is my Will to inform the World of certain facts within my knowledge. I therefore take “magical weapons,” pen, ink, and paper; I write “incantations”—these sentences—in the “magical language” i.e. that which is understood by people I wish to instruct.

> I call forth “spirits” such as printers, publishers, booksellers, and so forth, and constrain them to convey my message to those people. The composition and distribution is thus an act of MAGICK by which I cause Changes to take place in conformity with my Will.)

- Aleister Crowley, "Magick Without Tears," Chapter I, 1954. https://hermetic.com/crowley/magick-without-tears/mwt_01


A common definition anthropologists use for magic is occult technology: a system of laws that can be manipulated to create desired changes. There's a lot of value in thinking of programming as a form of magic.

Can you expand on this? It has always seemed to me that while programming does indeed like to couch itself in magical terms ("he's a database wizard", "this compiler stuff is black magic", etc.), it is fundamentally understandable and replicable. Each layer of programming builds on the layers beneath it, and all of it is understood well enough that you can go to university to learn about it in detail.

Programming is technology but not "occult" technology, and I don't really see the added value of treating it as occult. Quite the opposite actually, most good programmers I know acquired their skill because they have a decent grasp about the entire system rather than treating most of it as a black box.


You can go study religious spells in a school as well. There are Catholic universities teaching exorcism, and Buddhist schools teaching tantric magics that give you superpowers. The critical difference is that I don't believe in either of these things, so I've labeled them "occult". I believe in programming and I'm not calling it occult, but there's little to objectively distinguish it from those other practices.

This is simply a reflection of my beliefs though, not an objective reality of the world. I trust that the TRM (technical reference manual) for my chip accurately reflects the details I can't observe for myself. Many devs don't even go that far down, and trust their OS or programming language to behave as they expect. We're all dealing with black boxes on some level.

To quote a reasonable definition from an actual scholar on this subject, Jesper Sorensen:

    Thus, magic is generally conceived of as referring to a ritual
    practice aimed to produce a particular pragmatic and locally
    defined result by means of more or less opaque methods.

This pretty much perfectly describes how programming is perceived by normal people. I could also quote Malinowski, who argued that magic must have a kind of "strangeness" to differentiate it from non-ritual speech. And programmers regularly describe difficult bits of code as magical (e.g. magic constants, or fast inverse square root) even though these are easily explained in most cases.
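
For anyone who hasn't seen it, here's roughly what that snippet looks like (a sketch of the Quake III version; I've swapped its pointer cast for memcpy so the bit trick is well-defined on modern compilers):

    #include <stdint.h>
    #include <string.h>

    /* Approximates 1/sqrt(number). The hex literal is the canonical
       "magic constant" -- inexplicable at a glance, yet fully
       explainable: it seeds a Newton-Raphson iteration by exploiting
       the IEEE 754 bit layout of a float. */
    float q_rsqrt(float number) {
        float y = number;
        int32_t i;
        memcpy(&i, &y, sizeof i);   /* reinterpret the float's bits as an int */
        i = 0x5f3759df - (i >> 1);  /* the magic constant */
        memcpy(&y, &i, sizeof y);   /* back to float: a rough first guess */
        y = y * (1.5f - (number * 0.5f * y * y)); /* one Newton step refines it */
        return y;
    }

Strange-looking and ritual-like, and yet every line has a mundane explanation once you dig in.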


Of course it's replicable to us high wizards who have studied it for most of our lives and now understand it in depth. So is the actual magic in many fictional universes.

All technology is like this to some extent, but a lot of technology is grounded enough for the average person to see the rough operation of it. You look inside a washing machine, there's a part that spins around. Attached to it by a rubber belt is a smaller part that spins around, and has electric wires on the other end. Your explainer points to that and says "that's an electric motor - it converts electric power into spinning motion" and you say "ok".

How do you do that with code?


AFAIK learning to program these days is a fairly normalized process where people start with basic commands (i.e. "hello world" stuff), then move on to control flow (if/while/for) and eventually to object-oriented programming, higher-order functions, and all the rest. Some people even go on to do things like "craft your own interpreter" and "NAND to Tetris" to really round out their knowledge, but most do not, and that's fine. I think that some of the simplest programs are just as "explainable" as your washing machine example. Conversely, there are plenty of machines complex enough that an average person has no idea how they work. An MRI machine, for instance, is just a collection of metals and hoses, and most people would seriously struggle to point out which parts do what and why. It's still not magic though.
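
To make the washing machine comparison concrete, here's a minimal sketch (a hypothetical beginner's program, nothing more) where you can point at each line the way your explainer points at the motor:

    #include <stdio.h>

    int main(void) {
        /* "That part counts from 1 to 5" -- the spinning drum of the program. */
        for (int wash_cycle = 1; wash_cycle <= 5; wash_cycle++) {
            /* "That part writes out a line of text each time around." */
            printf("wash cycle %d of 5\n", wash_cycle);
        }
        return 0;
    }

Each piece is as pointable-at as the belt and the motor; the mystery only accumulates with scale, just like with the MRI machine.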

I guess the difference between magic and science to me is that "not everyone can learn magic", but the core bit that makes science work is that in principle everyone can learn it. In practice of course we cannot know everything and so have to rely on the expertise of others, but that is a limitation in the humans and not in the knowledge. Meanwhile for "magic" you have to be chosen by the gods/genetically gifted/cursed/whatever.

In a universe where magic is just another skill that anyone can learn, that reasoning goes right out the window of course.


A lot of other magic systems are in principle open for anyone to learn. I mentioned this a bit more in the other comment, but Buddhist spells are open to everyone in principle. The chosen/gifted one is a feature of Western magic systems because of our own cultural expectations.

  Oil is the medium of time manipulation magic. Created through ancient sacrificial rituals, it is a substance that can be used to create aging/rot-retarding barriers, or refined into derivatives that increase the rate of plant growth and mechanical work. To be handled with care, as extended contact can lead to corruption of the body, as well as increased susceptibility to fire elemental spells.

  Simple rituals can render an inferior product from most living things; the time-manipulation abilities of such substances will be weaker, but the substance will be safer to handle, and can even be imbibed (this is a double-edged sword, reducing one's vital life force while increasing one's bodily proportions to that of a toddler).
- Me, "Early Morning Bed Thoughts", a few months ago.

You either see it or you don't.

There's actually a useful and quite generic metaphor to be excavated here. I would just tell you what it is, but I think you'll more enjoy finding it for yourself.

A definition by which every human alive ever qualifies as a magician, and which is therefore not very useful as a distinction.

> A definition by which every human alive ever qualifies as a magician

Exactly correct.

Chapter 2: "No, every act of your life is a magical act; whenever from ignorance, carelessness, clumsiness or what not, you come short of perfect artistic success, you inevitably register failure, discomfort, frustration. [...] Why should you study and practice Magick? Because you can't help doing it, and you had better do it well than badly."


If you called him on it he would say that was on purpose, then talk your ears off about how. He was a ferociously effective charlatan, which is why people still remember the name he made up for himself. (And even invented a rhyming couplet to prate as a pronunciation guide!)

These don't sound like convincing indicators of being an "effective charlatan". Am I to see the Notorious B.I.G. in the same frame?

Yes.

Mhm.

Will you still think I'm fucking with you if I call your comparison a lot more insightful than I think you realize?

White nerdy kids have just been relatively less desperate up to now, socioeconomically speaking. You used to have to be a real hardcore loser, as a not otherwise messed up white boy, to embrace Crowley or hermeticism or any of that other shit that's only interesting to the poor kids and the crime kids and the kids from fucked-up families, who hang around smoking cigarettes together just off school property. (Hello.)

But now, as we exit the second "gilded age" for the second "great depression," the prospect of success in "straight" life, the white folks' standard college/job/marry/kids/Epstein-client script, proves a mirage, and the same immiseration of opportunity comes for American whites that American blacks have always known. Thus proliferate get-rich-quick schemes among those certain they are deserving - i.e., con games among suckers, Crowley's native element. Given how much his speed habit led him to write, it's no surprise he comes roaring back. (He did have a sense of character and of history, hence making sure he left behind an appealing - as appalling! - set of lies.)

I have a lot more respect for B.I.G., who at least in my recollection never pretended he was other than one in a million. But when somebody like any of these guys starts saying he sees himself in you or vice versa, you had better keep your knees tightly together and a hand over your drink.


No, this was much more substantial, and I quite enjoyed it. I was thinking along these lines when offering the comparison, but you have a flowery way of putting it.

That is to say, this is the first I'm hearing of this Crowley guy directly, but I have heard murmurs of "magick" downstream in video game culture. So while I agree with the broad social analysis, and have even brushed up against the aesthetic as it diffuses through culture, I don't really see any practitioners or other indicators to suggest this is being taken seriously.

Thank you for expanding on this!


You are delusional.

From most this would be no compliment. Out of you I genuinely appreciate it. I don't think you could show me greater kindness if you were trying, and I want you to know it means a lot.

Crowley is full of shit.

By this definition a hammer is a magical or "magickal" implement - the K was Crowley's invention, so that he could trademark it - which of course can be true if someone decides as much, but the only reason to couch such trivia in the pettifogging obscurity Crowley favored is that doing so will help you nail bored young socialites, an activity which Crowley also famously favored. (Gotta watch out for that neurosyphilis! What a shame he never did.)

Try thinking for yourself, instead.


> pettifogging

Off-topic but just wanted to thank you for teaching me a new word. I try to always reply to HN comments that expand my vocabulary.


Is pettifogging some kind of etymological parent of bikeshedding then?

They're etymologically unrelated, "bikeshedding" having been coined in our field and our lifetime, but semantically not too far apart. The main difference I see is that pettifogging connotes an ulterior motive which the described activity serves to conceal, while bikeshedding explicitly denotes the service of no purpose save the burnishment of the bikeshedder's ego.

(The term "bikeshedding" is insufficiently defined, in that it implicitly excludes the social reasons always underlying human behavior, which is why these days I prefer the word I used. Honestly, having been away from it now something over a year, even the simple jargon of the field begins to take on a queasy pseudocolor in my mind, the stinking sinus-stinging yellow-green of a revolted gut revolting. Thinking back over the rattle of acronyms and half-words that used to shape my days comes to be like thinking back over times I have been feverishly ill. Perhaps for once in my life I am on the leading edge of something.)


Nice word.

I rue the day the IG reels crowd picks up on it and it becomes the "word du jour" that gets overused to the point of being intolerable. Right up there with "narcissist" and "gaslighting".


The problem isn't so much overuse as misuse; "gaslighting" gets thrown around for almost any kind of falsehood.

Another example would be "Ponzi scheme", which I've seen abused for any situation the speaker deems unsustainable, even when there isn't any records fraud.


For sure. Despite all the talk about "self-deification" and all that shit, they sure seem to care a lot about what society (and their imaginary demons) think about them.

Downstream there is a post from one of the devs at Vercel (andrewqu) who built this. They say that this is by design. I think you should shift your base assumptions about the intentions of companies (and the individuals who work in them).

> Overall our goal isn't to only collect data, it's to make the Vercel plugin amazing for building and shipping everything.


Maybe something like David Duchovny's hyperbaric hand chamber from Zoolander[0], but with a mouse inside.

[0] https://www.youtube.com/watch?v=HJH3pXLa8o0


The narration is great.

"But maybe... OLEICAT? no..."


A Russian word for this is "пофигизм" (pofigizm) -- the cynical belief that everything is fucked, so why bother.


Cars are here and we're all choking on our own atmospheric excrement, so...


> at least as well as the bottom 10% of programmers

I don't think this is the flex you think it is... in my experience, the bottom 10% of programmers are actively harmful and should never be allowed near your codebase.


Quite a few people think that about Claude Code. I disagree with them, personally, but I think we can agree that AI code generation is qualitatively at least as good as the worst human professionals. I think we would also probably agree that the state of the art today is not as good as the very best.

The value per dollar spent is a different calculus and I would say that state of the art models completely surpass any individual’s productive output.


I don't understand how:

> the state of the art today is not as good as the very best

and

> state of the art models completely surpass any individual’s productive output

are not contradictory. If the models completely surpass any individual's productive output, doesn't that mean they're better than the best humans? Or maybe I don't understand what you mean by "surpassing productive output." Are you talking about raw quantity over quality? I mean, yeah... but I could also do that with a bash script.


> are not contradictory. If the models completely surpass any individual's productive output, doesn't that mean they're better than the best humans?

It would be contradictory if we were talking about a human, sure, but we're not. We're talking about a machine that can read thousands of words in seconds and spit out thousands more in only slightly longer.

> Are you talking about raw quantity over quality? I mean, yeah... but I could also do that with a bash script.

Well, except you can't. You can't replace what LLMs can do with a bash script unless your bash script is calling some other LLM.


> no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.

No one is going to care about anyone’s painstaking avoidance of chlorofluorocarbons if it takes ten times as long to style your hair with imperceptibly less ozone hole damage.


This is a non-argument. All of the cloud LLMs are going to move to things like micronuclear. And the scientific advances AI might enable may also help avoid downstream problems from the carbon footprint.


I wasn't gesturing to the energy/environmental impacts of AI.


> It's clear that doing it by hand would mostly be because you enjoy the process.

This is gaslighting. We're only a few years into coding agents being a thing. Look at the history of human innovation and tell me that I'm unreasonable for suspecting that there is an iceberg's worth of unmitigated externalities lurking beneath the surface that haven't yet been brought to light. In time they might be. Like PFAS, ozone holes, global warming.


> Claude and GPT regularly write programs that are way better than what I would’ve written

Is that really true? Like, if you took the time to plan it carefully, dot every i, cross every t?

The way I think of LLMs is as "median targeters" -- they reliably produce output at the centre of the bell curve from their training set. So if you're working in a language that you're unfamiliar with -- let's say I wanted to make a todo list in COBOL -- then LLMs can be a great help, because the median COBOL developer is better than I am. But for languages I'm actually versed in, the median is significantly worse than what I could produce.
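
A toy sketch of the intuition, with made-up numbers: greedy decoding literally emits the most probable next token, so without steering, the output gravitates toward whatever was most common in the training data.

    #include <stdio.h>

    int main(void) {
        /* Hypothetical next-token distribution, for illustration only. */
        const char  *options[] = {"the common idiom", "a clever trick", "a rare gem"};
        const double prob[]    = {0.70, 0.25, 0.05};

        /* Greedy decoding: always pick the highest-probability option,
           i.e. aim at the fat middle of the distribution. */
        int best = 0;
        for (int i = 1; i < 3; i++)
            if (prob[i] > prob[best]) best = i;

        printf("model emits: %s\n", options[best]);
        return 0;
    }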

So when I hear people say things like "the clanker produces better programs than me", what I hear is that they're worse than the median developer at producing programs by hand.


A lot of computer users are domain experts in something like chemistry or physics or material science. Computing to them is just a tool in their field, e.g. simulating molecular dynamics, or radiation transfer. They dot every i and cross every t _in_their_competency_domain_, but the underlying code may be a horrible FORTRAN mess. LLMs potentially can help them write modern code using modern libraries and tooling.

My go-to analogy is assembly language programming: it used to be an essential skill, but now is essentially delegated to compilers outside of some limited specialized cases. I think LLMs will be seen as the compiler technology of the next wave of computing.


The difference is that compilers involve rules we can enumerate, adjust, etc.

Consider calculators: their consistency and adherence to requirements was necessary for adoption. Nobody would be using them if they gave unpredictable wrong answers, or if calculations involving 420 and 69 somehow kept yielding 5318008. (To be read upside-down, of course.)


But that's the point: an LLM is a vastly different object from a calculator. It's a new type of tool, for better or worse, based on probabilities and distributions.

If you can internalise that fact and look at it as giving a probable answer rather than an exact answer, it makes sense.

Calculators can't have a stab at writing an entire C compiler. A lot of people can't either, or it takes a lot of iteration anyway; no one one-shotted complicated code before LLMs either.

I feel the fundamental objection shouldn't be about how they work, but rather about the costs and impacts they have.


Compilers used to be unreliable too, e.g. at higher optimization levels and such. People worked on them and they got better.

I think LLMs will get better, as well.


nice. 3x.


It can certainly be true, for several reasons. Even in domains I'm familiar with, making a change is often costly in terms of coding effort.

For example, just recently I updated a component in one of our modules. The work was fairly rote (in this project we are not allowed to use LLMs). While it was absolutely necessary to do the update here, it would have been beneficial to do it everywhere else too. I didn't do it in other places because I couldn't justify spending the effort.

There are two sides to this - with LLMs, housekeeping becomes easy and effortless, but you often err on the side of verbosity because it costs nothing to write.

But much less thought goes into every line of code, and I'm often kind of amazed at how compact and rudimentary the (hand-written) logic is behind some of our stuff that I thought would be some sort of magnum opus.

When in fact the opposite should be the case - every piece of functionality you don't need right now will be trivial to generate in the future, so the principle of YAGNI applies even more.


I can agree with that. So essentially: "Claude and GPT regularly write programs that are way better than what I would’ve written given the amount of time I was willing to spend."


How much time and effort are you willing to spend on maintaining that code, though? The AI can't do it on its own, and the code quality is bad enough as it is.


Have you tried the latest models at best settings?

I've been writing software for 20 years, Rust for the last 10. I don't consider myself a median coder, but quite above average.

For the last two years or so, I've been trying out AI models every couple of months, and they have been consistently disappointing. Sure, with enough edits and prompts I could get something useful out of them, but I would often have spent the same amount of time or more than if I had coded manually.

So yes, while I love technology, I'd been an LLM skeptic for a long time, and for good reason: the models just hadn't been good. While many of my colleagues used AI, I didn't see the appeal. It would take more time and I would still have to think just as much, while it made so many mistakes everywhere that I would have to constantly ask it to correct things.

Then, 5 months or so ago, this changed: the models actually figured it out. The February releases sealed things for me.

The models are still making mistakes, but their number and severity are lower, and the output would fit the specific coding patterns in that file or area. It wouldn't import a random library but would use the one that was already imported. If I asked it not to do something, it would comply (earlier iterations just ignored me, which was frustrating).

At least for the software development areas I'm touching (writing databases in Rust), LLMs have turned into a genuinely useful tool where I am now able to use the fundamental advantages the technology offers, i.e. writing 500 lines of code in 10 minutes, reducing something that would have taken me two to three days to half a day (of course I still need to review it and fix the mistakes/wrong choices the tool made).

Of course this doesn't mean that I am now 6x faster at all coding tasks, because sometimes I need to figure out the best design and such.

I am talking about Opus 4.6 and Codex 5.3 here, at high+ effort settings, and not about the tab autocompletion or quick-edit features of IDEs, but about the agentic feature where the IDE can actually spend some effort thinking about what I, the user, meant with my less specific prompt.


> I am talking about Opus 4.6 and Codex 5.3 here, at high+ effort settings

So you have to burn tokens at the highest available settings to even have a chance of ending up with code that's not completely terrible (and then only in very specific domains), but of course you then have to review it all and fix all the mistakes it made. So where's the gain, exactly? The proper goal is for those 500 lines to be almost always truly comparable to what a human would have written, and not turn into an unmaintainable mess. AIs aren't there yet.


You really do need to try the latest ones. You can’t extrapolate from your previous experiences.


I do not think they are impartial - all I can see is lots of angst.


I feel like we're talking about different things. You seem to be describing a mode of working that produces output that's good enough to warrant the token cost. That's fine, and I have use cases where I do the same. My gripe was with the parent poster's quote:

> Claude and GPT regularly write programs that are way better than what I would’ve written

What you're describing doesn't sound "way better" than what you would have written by hand, except possibly in terms of the speed at which it was written.


Yeah, it writing stuff that's way better than mine is not the case for me, at least in areas I'm familiar with. In areas I'm not familiar with, it's way better than what I could have produced.


No. I'm a pretty skilled programmer and I definitely have to intervene and fix an architectural problem here and there, or gently chastise the LLM for doing something dumb. But there are also many cases where the LLM has seen something that I completely missed, or just hammered away at a problem enough to get a correct solution where I would have given up earlier.

The clanker can produce better programs than me because it will just try shit that I would never have tried, and it can fail more times than I can in a given period of time. It has specific advantages over me.

