
> Have we really reached a point where marketers have polluted our lives with so many ads for garbage that we're incapable of discovering anything worthwhile unless it has a massive marketing agency behind it?

Yes, exactly this. It is extremely difficult to get attention these days, no matter how good your offering.


I'm pretty excited about agentic coding myself, but this does appear to be an extended case of AI psychosis (I'm not super comfortable with that phrase, but it is becoming pretty recognisable).

I think he's boxed himself in by continually layering more complexity on his approach, rather than stepping back and questioning the fundamentals or the overall direction.

All of the steps Gas Town, Gas City, etc. are taking are towards reducing human oversight and control. This is profoundly misguided! In a world of infinite cheap software it is precisely this human decision making and control that matters.

> There will be nothing like it. You are going to want to use Gas City.

No. I do not want to talk to the mayor of my software factory, as its cartoonish minions build an infinite mountain of slop. Unreviewable, both in terms of code and the finished product.

Instead, I want to precisely capture human ideas, have those ideas questioned, challenged, improved, and then I want to bring those ideas to life, keeping the human in the loop whenever they want. Neither Beads, Gas Town, nor Gas City or anything like them are required for that.


Building harnesses does seem like a task that's particularly conducive to psychosis. I've wondered if it's because there is no push-back; almost anything you try will be "right" in that your changes appear to make things happen. "Oh look, I added a carpenter and now it's walking around and making notes about the scaffolding." So you get to stay in your flow state without reassessing whether the concepts you are forming are ultimately meaningful.

Although I think the post also self-diagnoses some of the contributing factors:

  With the Gas Town Mayor, you feel like you’re operating at a special level, a VIP, above all the workers. You are talking to someone important: the mayor of a factory the size of a town. You have access to someone with resources, someone who gets you, someone who appreciates how busy you are.

  Working with regular coding agents just doesn’t give you that special feeling.

I think at this point it’s all but confirmed that LLMs’ sycophantic tendencies coupled with yes-man attitudes produce AI psychosis and AI fanaticism. Especially if you already have a high opinion of yourself (be it deserved or not).

I’ve wondered whether this is all supposed to be a satire on the industry. But unintentional self-parody is also a possibility.


> All of the steps Gas Town, Gas City, etc. are taking are towards reducing human oversight and control. This is profoundly misguided!

I mostly agree with you other than this one sentence.

Let's say you're building the literal antithesis of Gastown: AI agent software specifically designed to be human reviewed and monitored. How do you make this as efficient as possible? Well, it's by ensuring high-quality results from human oversight and control while spending the least amount of time. Which, to be precise, is still reducing human oversight and control per unit of useful work done.


It's really about granularity - when I'm building something there are parts of the system where I want very fine-grained control and oversight over what is built and how, and there are others where... it just doesn't really matter. What matters to whom is contextual - a software architect tends to care about different things from a product manager, for example. This is the idea of gradual specification - I should be able to defer some decisions to the AI and exercise full control over other aspects, but it's the humans who should be able to choose what level they want to work at.

The trick is to generate a spec from the old code, then generate the new codebase from the spec.
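
Roughly, I mean something like this - a sketch, where askLLM is a made-up stand-in for whatever model call or agent you actually use:

  package main

  import (
    "fmt"
    "os"
  )

  // askLLM is a hypothetical helper, not a real API; wire it up to
  // whatever provider or agent you use.
  func askLLM(prompt string) string {
    // ... call the model here ...
    return ""
  }

  func main() {
    oldCode, err := os.ReadFile("legacy/server.go")
    if err != nil {
      panic(err)
    }

    // Step 1: derive a behavioural spec from the old code.
    spec := askLLM("Write a precise spec for what this code does:\n" + string(oldCode))

    // Step 2: regenerate from the spec alone, so the new code is
    // bound to the spec rather than to the old implementation.
    newCode := askLLM("Implement this spec in idiomatic Go:\n" + spec)

    fmt.Println(newCode)
  }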

I just released @codemix/graph - a fully end-to-end type-safe, real-time collaborative graph database that lives in a CRDT.

There's a demo here: https://codemix.com/graph

and it's open source on github at: https://github.com/codemix/graph

It powers the underlying knowledge graph for codemix.com - it's like an IDE for your product, not your code.


This categorisation of evil as stupidity lets the evil off the hook; there are plenty of smart, evil people.


I disagree, but it's probably a matter of definitions. I don't want to play with words, so I will concede that cognitive ability is independent from moral reasoning (which is socially enforced). However, this is not what I'm getting at. Cognitive ability ("intelligence") is correlated with optionality and power. Your ability to change this reality is correlated with your cognitive ability.

If you truly are an intelligent person, would you really find no other ways to use your talents than to inflict harm, exploit others, and make our shared reality a worse place? That would be a waste. I won't get into ambiguous cases and moral relativism. Say we can all agree that some things are "evil": child exploitation is evil. Throwing molotov cocktails at a civilian's house is evil. Sending bombs in the mail is evil.

Now what would you call someone who engages in these kinds of activities when they could easily do something better and more satisfying with their lives? I'd say they're pretty stupid. They're probably good at fooling other people into thinking they're smart, but their behavior shows otherwise.

Take for example Ted Kaczynski, a terrorist who is worshipped like a saint and a prophet in certain ideological spheres. Ted Kaczynski is supposedly this 140 IQ genius who saw it all coming and tried to warn us. But if you actually read Industrial Society and Its Future, you can see it's completely incoherent garbage, the kind of stuff I was writing when I was 12 to troll on internet forums. Ted Kaczynski is what a stupid person thinks a smart person looks like.

A smart person doesn't need to be evil, just like a billionaire doesn't need to go shoplifting. I'm not saying that stupid people can't be dangerous. But they should be dealt with for what they are: stupid people, inferior to us, worthy of pity. Not powerful monsters above us that we should fear.


lol, a lot of words for a No True Scotsman. Clearly there are intelligent and evil people in the world, yet you refuse to engage with a basic question like this.


> clearly

Gonna call Hitchens's razor on that, since we’re playing logical fallacies bingo.


I think this is overly simplistic. e.g. Hans Reiser is clearly a pretty smart guy, but how else would you describe his actions, other than evil?


> Hans Reiser is clearly a pretty smart guy

No idea who this guy is, I'm just reading his Wikipedia page. Looks like he created some file system, good! But it also looks like he got a mail-order bride (suspicious...), was an abusive husband (not good), was not able to get over his divorce (uh-oh), harassed and ultimately murdered his ex-wife (definitely not good!), and landed in prison.

I think Hans Reiser is some sort of idiot savant or well-trained monkey. Probably very good at computer science and building file systems, but his general intelligence seems very low overall, as shown by his performance at the game of life. I wouldn't personally be afraid of Hans Reiser and I'm sure he could be mentally broken very easily.


Then you should explain yourself better


I mean, you can do that already: use your own domain name and you can then change email providers at will, in theory.

But maybe you logged in to your domain registrar through Google OAuth. If your Google account is locked, you can't get into your domain's settings to change your MX records.

The real problem isn't the email address itself, it's all the access that Google owns on your behalf. Lose access to Google, lose access to everything.
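
As a quick sanity check, it's a single standard-library call to see whose servers your domain's mail currently routes through (illustrative only; substitute your own domain):

  package main

  import (
    "fmt"
    "net"
  )

  func main() {
    // These are the MX records you'd need to repoint if you ever
    // switched providers; if changing them requires an account you
    // log into via Google, losing Google means losing your mail.
    records, err := net.LookupMX("example.com") // your domain here
    if err != nil {
      fmt.Println("lookup failed:", err)
      return
    }
    for _, mx := range records {
      fmt.Printf("pref=%d host=%s\n", mx.Pref, mx.Host)
    }
  }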


The author of this post has a book called Lying for Money, which I'd definitely recommend.


This is the same argument that people used to make against compilers.


It is not. One version of a compiler on one platform transforms a specific input into an exact and predictable artefact.

A compiler will tell you what is wrong. On top of that the intent is 100% preserved even when it is wrong.

An LLM will transform an arbitrarily vague input into an output. Adding more specification may or may not change the output.

There is a fundamental difference between asking for “make me a server in go that answers with the current time on port 80” and actually writing out the code, where you _have to_ make all the decisions, such as “wait, in what format?”, beforehand. (And using the defaults is also making a decision - because there are defaults.)
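
To make that concrete, here is roughly what that one-line prompt expands to once a human writes it out - a minimal sketch, where each comment marks a decision the prompt never made:

  package main

  import (
    "fmt"
    "net/http"
    "time"
  )

  func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
      // "wait, in what format?" - RFC 3339 here, but that is a decision.
      // So are the timezone (UTC vs. local) and the content type.
      w.Header().Set("Content-Type", "text/plain")
      fmt.Fprintln(w, time.Now().UTC().Format(time.RFC3339))
    })
    // Port 80 usually needs elevated privileges - another detail
    // left to the defaults unless someone decides otherwise.
    http.ListenAndServe(":80", nil)
  }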

Compilers have undefined behaviour, but that UB exists in well-defined places.

Even a 100% perfect LLM that never makes mistakes has, by definition, UB everywhere the spec is lacking.


Right, they allow for the idea of gradual specification - you can write in broad strokes where you don't care about the details, and in fine detail when you do. Whether the LLM followed the spec or not is mostly down to having the right tooling.


Compilers are an abstraction. AI coding is not an abstraction by any reasonable definition.


You're only thinking that because we're mostly still at the imperative, REPL stage.

We're telling them what to do in a loop. Instead we should be declaring what we want to be true.


You’re describing a hypothetical that doesn’t exist. Even if we assume it will exist someday we can’t reasonably compare it to what exists today.


It exists today - please message me if you’d like to try it.


The value is in the imperative: the computer does what you tell it to do. That control is very powerful, and is arguably a major reason computer technology is as powerful and popular as it is today. Bits don't, generally speaking, argue with you the way analog programming, whether by electronic or mechanical means, did before the transistor.

You can certainly write in an imperative or functional style, but you are still telling the computer what you want. LLMs take imprecise language and can generate only a loose binding to what people actually want. They have their use cases too, but they have a radically different locus of control. Compilers don't ask you to give up precision either; they will do what you tell them to do. An AI can do whatever it thinks is the most likely next token, which is foundationally different from what we do when we engage in programming, or writing in general.


This just isn’t true at all. With guidance and guard rails they produce much better code than the average developer does, and they are only going to get better.


That is completely false in my experience. I have never once seen an LLM produce code that would be acceptable. It certainly is worse than what a human can do.


Have you seen the kind of code an average developer writes?

I agree that LLMs produce code that is less good than what a good developer could write, but most developers are not good developers, and even the good developer gets tired and must sleep eventually.

Arguments against LLMs are like the old arguments against high-level languages like C. People argued that the compiler wrote trash code, that humans could do better, that the costs weren't worth it. None of that mattered, and it's the same story here.


I don't know what model you're using or how you're prompting it, but for me some 60-80% of the time the results require only a little bit of steering to be 'acceptable' (like at least what I would expect from a junior engineer, and I'll approve the PR even though it's not quite how I would do it), some 30% of the time the results are pretty much what I would do, and some 10% of the time the results are better than what I would do ("huh, good idea, okay let's do it that way").

They're not perfect by any stretch but if they're being likened to slot machines for code, I'll take those odds almost every day.

