Cthulhu_'s comments | Hacker News

Dexter, from Dexter's Lab, learned French.

> I think Spotify (or its owners/investors) might actually benefit from recommending AI-generated music by not having to pay real artists.

You can remove the "think" and "might"; there were articles years ago reporting that Spotify commissioned artists to produce fairly generic songs for its highly played but passively listened-to "background noise" playlists, so that Spotify would keep the revenue rather than pay real artists. I wouldn't be surprised if they've since replaced those commissioned productions with AI-generated stuff to cut costs further.


Because people don't put the effort in. A lot of electronic music can be considered lazy - just press a button, turn a knob, boom, you have music. Right? But then you have someone like Aphex Twin, who makes something unique out of these "easy" machines.

I'm sure someone can make unique or passable music with the help of AI tooling, but they can't do it by just saying "make me this music", no matter how much effort they think they've put into the prompt.


It is the same with anything else. I use AI to write a lot of code, but I'm constantly telling it to fix things - often the same type of error I told it about yesterday (things it still gets wrong that a junior engineer would have learned within a few months).

I don’t think it’s just about effort. It’s the nature of the technology.

If you practice piano, you will get better in some predictable way, even if it takes a long time.

If you spend more and more time tweaking a prompt, you will be pulling songs from some distribution of possible songs but you will never have the level of control that conventional music producers have.


Is this an actual, measurable, major issue or just a gut feeling? Context switches in general are suboptimal but pretty normal.

Cartman will jestermog every looksmaxxer.

What I gathered is that "mog" comes from "AMOG", "alpha male of the group", which makes "mogging" being out-alpha'd. Which isn't new; it goes back years if not decades, from the "alpha male" trend through "pick-up artists" to the modern-day incel/bro/manosphere subcultures.

I know the author from about ten years ago and I'm not surprised he's into it. But he's also Dutch, so it's probably used very much ironically / as a joke here.


So, classic dick-waving or pissing contest behaviour?

Screenshots aren't very accessible though.

Claude can convert them to text for you.
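
For what it's worth, a minimal sketch of that conversion using the Anthropic Python SDK (the model name is illustrative and "screenshot.png" is a placeholder):

    import base64

    import anthropic

    def screenshot_to_text(path: str) -> str:
        """Send a PNG screenshot to Claude and get a plain-text transcription back."""
        with open(path, "rb") as f:
            image_data = base64.standard_b64encode(f.read()).decode("utf-8")
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        message = client.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model name
            max_tokens=1024,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "image",
                     "source": {"type": "base64",
                                "media_type": "image/png",
                                "data": image_data}},
                    {"type": "text",
                     "text": "Transcribe all text in this screenshot as plain text."},
                ],
            }],
        )
        return message.content[0].text

    print(screenshot_to_text("screenshot.png"))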

I'm also not sure why you'd think that; Apple's been at the forefront of "AI" for years now, running models locally and optimizing their CPUs for local workloads to e.g. identify people, places and pets (much appreciated lmao), create slideshows, and subtly improve photos taken on the device.

The photo organization is nice, but that said, if you try to use the on-device Apple Foundation models you quickly find they're totally useless.

Anything that goes to production should follow a 4-6+ eyes rule: at least one reviewer who can review the changes in isolation.

If tools or LLMs can help them with it, that's fine, but there should always be at least two humans involved: one making changes, one verifying. And if something like this happens, they're both culpable - not that they should be blamed for it per se, but the process and their way of working should be reviewed.
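
As a sketch, that rule can even be enforced mechanically, e.g. via GitHub's branch protection API (owner, repo and branch are placeholders; assumes a token with repo admin scope in GITHUB_TOKEN):

    import os

    import requests

    OWNER, REPO, BRANCH = "example-org", "example-repo", "main"  # placeholders

    resp = requests.put(
        f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={
            # 1 approval = author + one reviewer = four eyes; set 2 for six eyes
            "required_pull_request_reviews": {"required_approving_review_count": 1},
            "required_status_checks": None,
            "enforce_admins": True,
            "restrictions": None,
        },
    )
    resp.raise_for_status()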


I cringe whenever someone suggests just having an agent review because “it knows the code better”. An AI agent wouldn’t catch a lot of things a human would flag. And before someone says “you just need to prompt it better”: that’s a huge amount of work for large projects, and you’re still essentially begging it to do what you want.

And an AI will catch a lot of things a human wouldn't flag.

Why not use both?
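
A rough sketch of what "both" could look like: an LLM does a first pass over the diff and hands its findings to the human reviewer, while merging still requires human approval as above (the model name and prompt are illustrative assumptions):

    import subprocess

    import anthropic

    def ai_first_pass(base: str = "origin/main") -> str:
        """Ask an LLM for a first-pass review of the current diff against `base`."""
        diff = subprocess.run(
            ["git", "diff", base],
            capture_output=True, text=True, check=True,
        ).stdout
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        msg = client.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model name
            max_tokens=2048,
            messages=[{
                "role": "user",
                "content": "Review this diff for bugs and risky changes. "
                           "Flag issues only; a human reviewer makes the call.\n\n" + diff,
            }],
        )
        return msg.content[0].text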


I have not encountered anything more soul-crushing in my entire career than having to spend hours going over LLM-generated slop vomited out by a contractor in Pakistan who doesn’t give a shit, only to have the review itself fed back in as a re-prompt, get the same 2000-line ball of spaghetti back with even more issues, and go back and forth until I just give up and approve it.

No, AI code review doesn’t help. Claude can’t even give me correct line numbers 80% of the time - it literally makes them up - and more than half of what it flags is false-positive BS anyway.


Yep, I’ve had to approve bad code too due to timelines, and now our codebase has so much tech debt it doesn’t even matter anymore. Worse, as new people work on the code, the LLMs pick up the bad code, and it’s been spiraling from there.

The problem is that humans inherently fill in gaps in what they perceive from the world.

Our brains are designed to fill in gaps; it's why memory is so blurry when it comes to recounting the facts of what we witnessed, say at a trial.

It's why you could swear you saw "x" in the production software you were about to push. It really comes down to expectations - and those expectations help reduce cognitive load / increase cognitive efficiency (resource usage).

So as more and more people get used to using AI, you will see these mistakes occur more frequently, because it's how our brains work.


It's only a matter of time (if it hasn't happened already) before there are counter-LLMs or whatnot that convince free-rein LLM agents to generate cryptocurrency for the attacker or run propaganda campaigns.
