Hacker News | new | past | comments | ask | show | jobs | submit | maplethorpe's comments

I think the unfortunate fact is that most jobs in the world do not require accuracy, so an inaccurate result has a negligible impact compared to an accurate one.

I used to feel job safety in the knowledge that AI labs weren't likely to solve the hallucination problem. Then it dawned on me that they don't need to — they just need to reduce our collective expectations.


I predict that this illusion of "(in)accurate enough" will last long enough to trigger a cascading avalanche of failures across all fields of human endeavour, and I'd be pretty cautious about betting on a quick recovery, or even the survival of this civilization, after that.

> Side projects are typically time constrained - if AI saves you time, why wouldn't you use it?

It depends what your goals are. All of my side projects were started because I wanted to learn something. Using a "skip to the end" button wouldn't really make sense for me.


The difference between people who want to learn things and people who just want a finished product is going to be a big dividing line in the post-AI world.

It's also a nice opportunity to learn while getting something out!

Learn what though? Is knowing CSS at all relevant to making a site all about, say, every type of cheese? If I have, say, 6 hours to build that site, does learning about CSS make the site better, or does learning about the history of rennet make the site better? The assumption that time saved by using AI instead of learning CSS is spent being a drooling moron is unfounded. The AI is a fountain of knowledge (that you have to double check). That people choose not to learn about topics they don't find interesting because they'd rather learn about topics they do find interesting doesn't automatically make them dumber than you.

> If I have, say, 6 hours to build that site

Then chances are it’ll be subpar either way. Every type of cheese, in six hours? The CSS isn’t the bottleneck there, it’s information hierarchy and the information itself. You can’t possibly learn about the history of cheeses and summarise it and organise it for a website in that amount of time. Writing the website code isn’t the lengthy part.

> That people choose to not to learn about topics they don't find interesting because they'd rather learn about topics they do find interesting, doesn't automatically make them dumber than you.

Why so rough? I don’t see any judgement of character or intelligence in the comment you’re replying to.


The person’s goals might not include spending a lot of time on CSS. A person who does everything from scratch may find themselves learning what Flexbox is, or why their z-index isn’t working.

Going off of the two screenshots in the OP, neither of those were about frontend.

So if the choice is spending time designing a more human frontend or spending more time on the core product, I don’t fault people for choosing the latter.

Now if the core product also stinks, that’s a different issue.


I know on its face this looks like stealing, but this would likely fall under fair use, since the original work is being transformed.

Not a lawyer here, of course, but does "transformed" cover making a functional copy? Artistically, "transformed" means something is related, but different. In the case of software, this transformation is to code that actually does the same thing as the original. Is that "transformed"? I apologize if that comes off as pugnacious - I'm trying to learn, not poke holes in your argument, but I couldn't figure out a better way to phrase it and still retain the question.

Functionality isn't covered by copyright; it needs a patent. You could even have identical files, if it's the simplest way to do something: https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_....

I've seen anti-AI comments here disappear within minutes of posting. I'm honestly surprised to see one at the top of this thread.

What causes comments to disappear? Is that what flagging does?


showdead=no in user settings hides flagged & moderator killed posts

I tried setting showdead=yes but two comments I remember seeing earlier today (as replies to one of my comments) are still gone. Does anyone know what else might have happened to them?

Maybe the posters deleted the comments themselves?

I often post comments on HN, just to delete them 5 minutes later when I realize I don’t care to deal with the replies I’ll eventually get.

You have to be quick because if someone does reply, you can no longer delete your message.


One benefit of this forum is that they deliberately left out notifications, precisely to save us from the temptation to "deal with" replies.

And I very much appreciate that feature, and hope it never changes.

However when I make comments here, I do it with the intention of reading what people have to say in response.

If I am making a comment with the intention of ignoring the responses to it, then that’s a good signal to myself that what I am writing is likely not an appropriate comment for HN, so I delete it.


I didn't realise messages even had a delete button. I'm going to reply here so I can check.

edit: you're right, there's a delete button.


I see properly argued positions, even if very anti-AI, hang around, but cheap tribalist takes usually get downvoted pretty quickly.

Cheap pro-AI comments don't get flagged though. You can repeat the same talking points forever:

- "Artists have always been exploited" (patently false since at least 1950; it was a symbiosis with the industry).

- "Humans have always done $X".

- "You are a Luddite."

- "This is inevitable."


Personally, I’d downvote these if not further substantiated. For me, flags are reserved for outright rage bait or personal insults.

At least I hope so; I can’t say I always follow “up/downvote doesn’t indicate (dis)agreement but rather contribution to the discussion” perfectly.


You probably see that because many are low-effort, Reddit-level comments. I’ve seen lots of long AI-skeptic threads and people talking about the likely negatives of AI.

> I and the people I work with are using agents to learn new topics so fast.

I'm a person who loves learning but I don't really understand this claim. My brain quickly reaches a saturation point when learning new topics. I need to leave and come back multiple times until I begin to understand, but this seems to me to be a normal part of the process. It's the struggle that forms the connections in my brain.

Being spoon-fed information isn't the same as learning, to me. Are you also using AI to test you on your new knowledge? Does it administer these tests periodically? Or are you just reviewing notes and saying to yourself "I know this now"?

How are you ensuring you've learned anything at all?


Reminds me of the book "Make It Stick: The Science of Successful Learning" and its comparison of spaced repetition and cramming.

Cramming often feels more satisfying, more like you're learning, but actually leads to worse retention. Spaced repetition that includes the struggle of recalling something just at the edge of being forgotten, on the other hand, feels worse but leads to much higher retention.
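The expanding-interval idea behind spaced repetition can be sketched in a few lines. This is a deliberately toy rule (double the gap on successful recall, reset on failure), not the book's method or any real scheduler like SM-2; it just illustrates why review gaps grow:

```python
def next_interval(interval_days: int, recalled: bool) -> int:
    """Toy expanding-interval rule: double the gap after a successful
    recall, reset to one day after a failure. Illustrative only."""
    return interval_days * 2 if recalled else 1

# A flashcard recalled successfully five times in a row:
interval = 1
gaps = []
for _ in range(5):
    gaps.append(interval)
    interval = next_interval(interval, recalled=True)

print(gaps)  # gaps between reviews keep growing: [1, 2, 4, 8, 16]
```

Each review lands just as the memory is fading, which is exactly the "edge of being forgotten" struggle the book argues drives retention.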


> Being spoon-fed information isn't the same as learning, to me

It's like it distills it for you. I feel like you're thinking of an example like trying to learn operating systems by reading Wikipedia articles (i.e. it gives you a high-level summary but nothing more).

The way I see it, code says a lot, but it takes time to scroll through it and cmd+click back and forth. But if you just ask the AI "where's x thing happening around this file" it will just point you right to it. So I feel like less cognitive energy is spent dealing with the syntactic quirks of code and more is spent on the essential algorithmic task.

I don't really like using it to summarize natural language written by one author or group, like a paper for example, that just feels like laziness to me.


AI has helped me rediscover my love of coding. It helps me write my emails for me, puts together my shopping list, and gives me advice on how to structure my day. AI tells me what to do. I don't have to fear my choices anymore, because AI makes the choices for me.

Sam and Dario were in the closet making AIs and I saw one of the AIs and the AI looked at me.

The AI looked at you?!

When Claude had an outage I forgot how to walk up stairs and couldn’t look it up so I waited for someone to come and get me

Once Mythos is available to business customers, it should radically improve security across the entire web. Imagine if everyone was able to pipe their codebase through Mythos before deployment. We honestly may be on the verge of a bug-free internet.

Could this be Apple's chance to finally stop dragging its feet and integrate AI deeply into its OS?

Why is this necessarily a good thing?

That's because it would be too dangerous to release.

My girlfriend goes to a different school, you wouldn't know her.

Same for teleport, time travel and warp drive.

So is my P=NP proof.

They could release data to back up that claim.

There isn't really a good alternative for After Effects, despite its flaws. There are other motion graphics tools, but they're usually missing enough functionality that you eventually go crawling back to Adobe.

Now that software development is apparently solved, can someone please build a GPU-accelerated version of After Effects? Every motion designer in the world would make the switch overnight.


Maxon literally just launched one last week.

Cavalry just became free thanks to Canva.
