Hacker News | past | comments | ask | show | jobs | submit | dpark's comments

Since apparently LLMs have also conquered physics, “Claude, transmute this lead to gold for me.”

Yeah, it's almost like the point I was making is that everyone is overselling AI agents' capabilities.

I’m sure someone is out there claiming that AI is going to solve all your business’s problems no matter what they are. Remotely sane people are saying it will solve (or drastically improve) certain classes of problems. 3x code? Sure. 3x the physical hardware in a data center? Surely not.

Right. AI submissions are so burdensome that they have had to refuse them from all except a small set of known contributors.

The fact that there’s a small carve out for a specific set of contributors in no way disputes what Supermancho claimed.


A powertool that needs discretion and good judgement to be used well is being restricted to people with a track record of displaying good judgement. I see nothing wrong here.

AI enables volume, which is a problem. But it is also a useful tool. Does it increase review burden? Yes. Is it excessively wasteful energy wise? Yes. Should we avoid it? Probably no. We have to be pragmatic, and learn to use the tools responsibly.


I never said anything is wrong with the policy. Or with the tool use for that matter.

This whole chain was one person saying “AI is creating such a burden that projects are having to ban it”, someone else being willfully obtuse and saying “nuh uh, they’re actually still letting a very restricted set of people use it”, and now an increasingly tangential series of comments.


I feel like you're still failing to grasp the point.

The only difference is that before AI the number of low effort PRs was limited by the number of people who are both lazy and know enough programming, which is a small set because a person is very unlikely to be both.

Now it's limited to people who are lazy and can run ollama with a 5M model, which is a much larger set.

It's not an AI code problem by itself. AI can make good enough code.

It's a denial of service by the lazy against the reviewers, which is a very very different problem.


No one is missing your point. The issue is that you are responding to a point no one made.

The grounding premise of this comment chain was “AI submitted patches being more of a burden than a boon”. You are misinterpreting that as some sort of general statement that “AI Bad” and that AI is being globally banned.

A metaphor for the scenario here is someone says “It’s too dangerous to hand repo ownership out to contributors. Projects aren’t doing that anymore.” And someone else comes in to say “That’s not true! There are still repo owners. They are just limiting it to a select group now!” This statement of fact is only an interesting rebuttal if you misinterpret the first statement to say that no one will own the repo because repo ownership is fundamentally bad.

> It's a denial of service by the lazy against the reviewers, which is a very very different problem.

And it is AI enabling this behavior. Which was the premise above.


We should fund them, sure, but that’s not enough.

The problem is the cost is so wildly asymmetric. When everyone with a computer and a subscription can vibe code low quality features, when everyone can submit dubious security bug reports, no amount of funding will even that out. Producing submissions is essentially free while triaging and reviewing remains very expensive.

3 years ago the cost was asymmetric in the other direction. The cost of writing code was high. The cost of finding security bugs was extremely high. The cost of triaging and reviewing was basically the same as it is today.

Large corporations that are well funded are facing the exact same issues internally right now. With agent output so cheap, how do you deal with the deluge? It’s not practical or desirable to have your best engineers doing nothing but reviewing generated code, some of which is likely very low value.


Customers don’t care about your testing at all. They care that the product works.

Like most things, the reality is that you need a balance. Integration tests are great for validating complex system interdependencies. They are terrible for testing code paths exhaustively. You need both integration and unit testing to properly evaluate the product. You also need monitoring, because your testing environment will never 100% match what your customers see. (If it does, your system is probably trivial, and you don’t need those integration tests anyway.)


Integration tests (I think we call them scenario tests in our circles) also only tend to test the happy paths. There are no guarantees that your edge cases, or anything unusual such as errors from other tiers, are covered. In fact the scenario tests may just be testing mostly the same things as the unit tests but from a different angle. The only way to be sure everything is covered is through fault injection and/or single-stepping, but it’s a lost art. Relying only on automated tests gives a false sense of security.

Inability to unit test is usually either a symptom of poor system structure (e.g. components are inappropriately coupled) or an attempt to shoehorn testing into the wrong spot.

If you find yourself trying to test a piece of code and it’s an unreasonable effort, try moving up a level. The “unit” you’re testing might be the wrong granularity. If you can’t test a level up, then it’s probably that your code is bad and you don’t have units. You have a blob.
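A minimal Python sketch of that idea (hypothetical names, not from any real codebase): when the logic is fused with I/O you only have a blob to test end to end, but extracting the pure part gives you an actual unit.

```python
def summarize(values):
    """Pure unit: no I/O, trivially testable in isolation."""
    if not values:
        return {"count": 0, "mean": None}
    return {"count": len(values), "mean": sum(values) / len(values)}

def write_report(values, path):
    """Thin I/O wrapper around the unit; cover this one with an
    integration test instead of trying to unit test file writes."""
    summary = summarize(values)
    with open(path, "w") as f:
        f.write(f"count={summary['count']} mean={summary['mean']}\n")

# Unit tests hit summarize() directly, no filesystem needed.
assert summarize([2, 4, 6]) == {"count": 3, "mean": 4.0}
assert summarize([]) == {"count": 0, "mean": None}
```

If `summarize` can only be exercised by calling `write_report` and reading the file back, that’s the “wrong granularity” symptom: the fix is to pull the logic out, not to build elaborate test scaffolding around the I/O.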


I agree. Not because I think that most AI content is worth reading, but because it can be criticized on more grounded merits. People wrote blogspam by hand for two decades before AI started generating it. It wasn’t high value when a human wrote it either.

On many (most?) posts, far more energy is spent arguing about whether a post is AI than discussing if there’s anything of value in the post.


The thing is, people are screaming “AI” when they see a single “it’s not X, it’s Y” pattern in a post, despite this being a fairly common construct.

People are nitpicking every tiny thing in their search for proof of AI. It’s not useful and ends up dominating the conversation. AI panic is degrading the value of forums at least as much as actual AI at this point.


“It's not delivery, it's DiGiorno!” — Probably AI, according to HN commenters

Why would you give someone 6 months notice? What good is that for the employee? Especially if the severance is generous.

“Hey, we’re going to fire you in 6 months. Just a heads up.”

Nah. Give me the year of salary and send me home today. Better for the employee and for the company than pointlessly dragging it out. Again, this is assuming generous severance.


Job hunting takes time. Also, they won't be deported in 30 days, along with their families.

I can do a lot of job hunting with a year of severance.

Valid point about employees on visas though.


Maybe they could be kept on the payroll without access to actually work.

But the real problem is any law that would deport someone 30 days after they were laid off, even if they had been working for years. That should be 6 months minimum.


Keeping them on the payroll also enables companies to easily manage and extend medical insurance. I’m pretty sure that what you propose is what a lot of companies actually do, too. They keep them on the payroll for the duration of their severance but do not expect them to actually work.

Agree that no one should be getting deported on 30 days’ notice because they got laid off.


A "performance improvement plan" is almost always a 6-month/1-year warning that you're going to get fired/laid off.

It's common in some companies.


PIPs are like a month.

> They just can't legally stop you by, for instance, compelling a judge to order you to stop.

They probably can, actually. TOS are legally binding.

More likely they would block you rather than pursuing legal avenues but they certainly could.


The Supreme Court already ruled on this. Scraping public data, or data that you are authorized to access, is not a violation of the Computer Fraud and Abuse Act.

Now, if you try to get around attempts to block your access, then yes you could be in legal trouble. But that's not what is happening here. These are people/companies that have Claude accounts in good standing and are authorized by Anthropic to access the data.

Nobody is saying that Anthropic can't just block them though, and they are certainly trying.


I didn’t say anything about the computer fraud and abuse act. TOS are legally binding contracts in their own right if implemented correctly.
