make3's comments | Hacker News

Don't call it a bug; they were intentionally and aggressively pushing marketing copy into people's commits.

This was malice or greed.


I think so too, but my point is that even according to their own words about what happened, the best possible interpretation is that they didn't mean to do it but knowingly let it happen. I agree that a worse version is more likely, but it's pretty damning when even the ceiling for what they can plausibly claim is "we intentionally didn't bother stopping it once it happened accidentally".

I think that comments like this are not helpful for humanism.

We progressives should stop trying to argue that AI actually sucks; it obviously doesn't, and it will continue to improve.

We should start organizing and thinking about ways to build a society where the average human can still have a solid, interesting quality of life even when solid AI is everywhere.


AI still sucks where it truly matters. Ask any SOTA LLM about medications for your child, for example. Now that you have the answer, ask again: it might be subtly or not so subtly different. Push back slightly and you will get "I am sorry, I was wrong...". Now which version do you believe? Neither. You need a competent human being to either figure it out from trusted sources or just know the answer. Maybe the LLM actually gets it right, but you will never know unless you are an expert in that particular field and just use it as an assistant. For all other cases, its value is negative, which it expertly spins as positive.

From the software engineer's standpoint, I do a fair amount of vibe coding for personal projects. If the result meets my narrow goal, I am happy. At work I can't let a single line of generated code be left unread and unverified because that's where it matters.


If you use a state-of-the-art search tool and give it enough information, which you should, it should honestly be very competent at that, because it can read hundreds of papers and articles. I don't think your example holds. You would need statistics and benchmarks, not a made-up straw man.

I'm sorry, but AI generalizes well across a variety of reasoning benchmark sets on all kinds of subjects, and if you ask it complex philosophical questions in an argumentative back and forth, it does pretty well most of the time.

I'm progressive, but this is just not helpful. Progressives should spend less time trying to argue that AI sucks actually (it obviously doesn't) and more time thinking about how to transform society so it still protects human quality of life and dignity even when strong AI exists.


You're delusional if you're implying that it's cuckoo to think AI is a threat to the existence of the middle class.

The actual progressive thing to do is to accept that it is, and think of how to build the world in a way that protects people.


Also, it was true back then that people could be retrained, so their point was less serious.

That won't be the case when AI can do almost every white-collar job.


No one pragmatic, realistic, and grounded who uses the tools regularly thinks that the "death of AI" contingency is something worth planning your career around.

Meaning that if AI dies I'll get rich and if AI doesn't die I'll die? I agree - that would be a stupid bet to take.

Or meaning that if AI dies I'll be one of the ones who knows how to do things? That's not so stupid.


This assumes you're not giving anything away by doing things the old way. But you are: you're giving away AI competency, the ability to compete with your peers at your job, etc.

The same AI competency that has to be relearned from scratch every 6 months?

I'm still not switching after Altman jumped on the cyberpunk totalitarian contract with the government

Well, apparently Dario did the same thing with Mythos; ethics for the big AI labs is mainly posturing.

I wish Palantir had their own model. Now that's a business I can get behind. Until then I have to use Grok, I guess.

At least Palantir is open about their villainy, I guess; they make no attempt to pull the wool over your eyes. So you at least know that you are for sure getting in bed with the bad guys if you go with them.

"I should vote for Hitler, at least he's open about his villainy", such nonsense argument. People can have ethics, that's a real thing.

The seller of the code has no visibility into the training set of the LLM. If the situation you're describing ends up being illegal, responsibility should fall on the LLM provider to provide tools that detect such overlap with their training sets, and on the clients to run those tools.

The provider of the LLM should want to enable this and to take on that responsibility (I mean take it from the clients); otherwise no one will want to use the tool. Maybe there could be AI tool-use lawsuit insurance, but I feel like that's worse for everyone involved than a copyright-infringement detection tool.

I can see the tool happening in the EU, but basically nowhere else, and especially not in the US, where the government sees "AI dominance" as a national priority and a national security priority.


Say that again in five years, when you can't find a job except as a mega-yacht toilet cleaner because Claude is distinguished-engineer level at one millionth of your cost and thousands of times faster, and can be instantly parallelized into tens or hundreds of thousands of copies just to be spun down arbitrarily as needed at any time.



He exaggerates but several studies show that AI is depressing junior hiring, and models are only going to get more competitive with humans.

It may be hyperbole, but it's how people genuinely feel about AI.

Quinnipiac in March found that voters like AI less than ICE. They also found that over half of Americans think AI will do more harm than good: https://poll.qu.edu/poll-release?releaseid=3955

A Gallup poll in February found only 18% of Gen Z participants surveyed were hopeful about AI: https://www.gallup.com/analytics/651674/gen-z-research.aspx

Maybe those AI doomers all need to touch grass. Or, maybe, the reverse is true and the minority of people who are optimistic about AI are suffering from software brain.

https://www.theverge.com/podcast/917029/software-brain-ai-ba...


The transformer paper was 9 years ago. Nine years between barely passable translation between two very closely related languages (English and French, which share a huge fraction of words because of William the Conqueror, cultural proximity, etc.) and what we have now.

The thing is able to code up full, pretty competent, thousand-line projects in an hour. Even hardcore engineers use it now, as of this year. My senior front-end friends already can't find jobs.

You're crazy if you think things won't change dramatically, at the scale of all of society.

https://arxiv.org/abs/1706.03762



It's funny, because you're arguing that one month of one variable increasing by one point is as reliable for extrapolating a trend as nine years of continual increase across multiple variables by multiple points.

There is no acceptable use of AI for most people in the artistic field. They see it as extreme treason, and I understand. They're under an incredible threat.

They are conscious of the need to prevent momentum in a bad direction.

If they don't fight it hyper hard, a huge fraction of them will be out of a job instantly.


That's a strange position to take. I can understand not wanting models that have been trained on questionably sourced data, but otherwise they're opposing essentially a UX change, not based on UX concerns but on ideological fears.

Given how much software and other AI/computer vision improvements 3D content often relies on, it's weird to decide that the algorithm itself is unallowable.


Do you have any idea how hard it already is to make a living in a creative field?

> I can understand not wanting models that have been trained on questionably sourced data, but otherwise they're opposing essentially a UX change, not based on UX concerns but on ideological fears.

"If you ignore their biggest, their primary, concern, their other concerns seem almost trivial".


I literally said I understand if the training data sourcing is their primary concern.

He meant that that's not the primary concern. The sourcing of the data is a red herring; they care about losing their ability to make a living doing the thing that they love, which is so central to their identity.

I think I'm not sure how to parse your statement... I don't think there'd be much care for (or need for) the UX change if it wasn't for the whole ideological/valid fear about training AI on creative works? But it has been a long day, so I apologize.

I've been all over the place with my thoughts, so it's fair for you to be unsure of how to parse what I said. When making my initial post, I was thinking "this is a coding model, it isn't an image/3d model generation model, so why do they care?". I further interpreted make3 as saying that 3d artists were opposed to AI in general because they view any AI use as trending towards taking away their jobs.

So, what I meant when I said '... otherwise ...' wasn't trying to dismiss the data sourcing concern, but more like "I understand if the data sourcing is the concern, but you (make3) seem to be saying it's about the use of AI in general (ie even if, hypothetically, an ethically sourced training dataset was used for a model), which feels like a weird restriction to me". That was when I added the edit to my initial post.


This is a very surface-level analysis.

AI is seen as an oppressor and a threat, and AI providers are seen as oppressors. It's understandable that people don't want to collaborate with their oppressors, either direct or by association. If you were a Jew, would you buy shoes from the Nazis just because you were individually safe from them at that moment? Or would you if you were of a minority they hadn't started exterminating yet? Or if they were not exactly the Nazis killing your people but some affiliated group?

This sounds extreme until you realize they are under threat of losing their livelihood for good.

They are right not to accept your inevitability point without a fight; this is a human thing that can be fought. Revolutions have happened and will continue to happen.

I don't necessarily agree with this but I do understand it.


This is the best phrasing of the issue I've seen online anywhere.

You can find AI useful and still be against its introduction into your field for entirely understandable reasons.

Unfortunately this does create uphill friction for any good-intentioned people trying to use AI to improve art by empowering people to take on more ambitious projects. (This is a general statement and not related to the case of Anthropic. Of course Anthropic here is just trying to sell their product, which is a fair thing to do in isolation, but I also understand the opposition to it on the grounds of its downstream effects.)


> There is no acceptable use of AI for most people in the artistic field.

For all of us, acceptable use is when I use AI to do my job. Unacceptable use is when you use AI to do my job.


I don't think artists see it the same way. An artist will get pilloried by their peers, followers and fans if they post something that has even a whiff of generative AI.

Yes, but I think that still fits. Other people don't want you to use AI to do your job because it makes you more competitive.

Completely false, and I hate this puritan gatekeeping. Artists who hate AI are the type to put more importance on the craft than on the end product itself. Art is a means of communicating something personal. It’s not meant to show off skills in how well you can move a pencil or how many fricking tools you know in Adobe.

AI removes all these hurdles and directly presents you with the end problem - communication. Artists hate that because most artists don’t have anything to communicate. These people deserve to be automated away. I don’t wanna see more derivative shit. Artists who have something special to communicate won’t feel threatened by AI but will instead feel more freedom.


>AI removes all these hurdles and directly presents you with the end problem - communication.

Which is why 99.9% of AI art is worthless. There's literally nothing personal or interesting about getting grok to fart out some picture you thought about while sitting on the toilet in the morning.

AI art will never be good without actual artists embracing the medium.

