I think the unfortunate fact is that most jobs in the world do not require accuracy, so an inaccurate result has a negligible impact compared to an accurate one.
I used to feel job safety in the knowledge that AI labs weren't likely to solve the hallucination problem. Then it dawned on me that they don't need to — they just need to reduce our collective expectations.
I predict that this illusion of "(in)accurate enough" will last long enough to trigger a cascading avalanche of failures across all fields of human endeavour, and I'd be pretty cautious about betting on a quick recovery, or even the survival of this civilization, after that.
> Side projects are typically time constrained - if AI saves you time, why wouldn't you use it?
It depends what your goals are. All of my side projects were started because I wanted to learn something. Using a "skip to the end" button wouldn't really make sense for me.
The difference between people who want to learn things and people who just want a finished product is going to be a big dividing line in the post-AI world.
Learn what, though? Is knowing CSS at all relevant to making a site all about, say, every type of cheese? If I have, say, 6 hours to build that site, does learning about CSS make the site better, or does learning about the history of rennet make the site better? The assumption that someone who uses AI instead of learning CSS will spend the time saved being a drooling moron is unfounded. The AI is a fountain of knowledge (that you have to double-check). That people choose not to learn about topics they don't find interesting, because they'd rather learn about topics they do find interesting, doesn't automatically make them dumber than you.
Then chances are it’ll be subpar either way. Every type of cheese, in six hours? The CSS isn’t the bottleneck there, it’s information hierarchy and the information itself. You can’t possibly learn about the history of cheeses and summarise it and organise it for a website in that amount of time. Writing the website code isn’t the lengthy part.
> That people choose to not to learn about topics they don't find interesting because they'd rather learn about topics they do find interesting, doesn't automatically make them dumber than you.
Why so rough? I don’t see any judgement of character or intelligence in the comment you’re replying to.
The person’s goals might not include spending a lot of time on CSS. A person who does everything from scratch may find themselves learning what Flexbox is or why their z-index isn’t working.
Going off of the two screenshots in the OP, neither of those were about frontend.
So if the choice is spending time designing a more human frontend or spending more time on the core product, I don’t fault people for choosing the latter.
Now if the core product also stinks, that’s a different issue.
Not a lawyer here, of course, but does "transformed" cover making a functional copy? Artistically, "transformed" means something is related, but different. In the case of software, this transformation is to code that actually does the same thing as the original. Is that "transformed"? I apologize if that comes off as pugnacious - I'm trying to learn, not poke holes in your argument, but I couldn't figure out a better way to phrase it and still retain the question.
I tried setting showdead=yes, but two comments I remember seeing earlier today (as replies to one of my comments) are still gone. Does anyone know what else might have happened to them?
And I very much appreciate that feature, and hope it never changes.
However when I make comments here, I do it with the intention of reading what people have to say in response.
If I am making a comment with the intention of ignoring the responses to it, that’s a good signal to myself that what I am writing is likely not an appropriate comment for HN, and I delete it.
Personally I’d downvote these if not further substantiated. For me, flags are reserved for outright rage bait or personal insults.
At least I hope so; I can’t say I always follow “up/downvote doesn’t indicate (dis)agreement but rather contribution to the discussion” perfectly.
You probably see that because many are low-effort, Reddit-level comments. I’ve seen lots of long AI-skeptic threads and people talking about the likely negatives of AI.
> I and the people I work with are using agents to learn new topics so fast.
I'm a person who loves learning but I don't really understand this claim. My brain quickly reaches a saturation point when learning new topics. I need to leave and come back multiple times until I begin to understand, but this seems to me to be a normal part of the process. It's the struggle that forms the connections in my brain.
Being spoon-fed information isn't the same as learning, to me. Are you also using AI to test you on your new knowledge? Does it administer these tests periodically? Or are you just reviewing notes and saying to yourself "I know this now"?
How are you ensuring you've learned anything at all?
Reminds me of the book "Make It Stick: The Science of Successful Learning" and its comparison of spaced repetition and cramming.
Cramming often feels more satisfying, more like you're learning, but actually leads to worse retention. Spaced repetition that includes the struggle of recalling something just at the edge of being forgotten, on the other hand, feels worse but leads to much higher retention.
> Being spoon-fed information isn't the same as learning, to me
It's like it distills it for you. I feel like you're thinking of an example like trying to learn operating systems by reading Wikipedia articles (i.e. it gives you a high-level summary but nothing more).
The way I see it, code says a lot, but it takes time to scroll through it and cmd+click back and forth. If you just ask the AI "where's x thing happening around this file," it will point you right to it. So less cognitive energy is spent dealing with the syntactic quirks of code and more is spent on the essential algorithmic task.
I don't really like using it to summarize natural language written by one author or group, like a paper for example, that just feels like laziness to me.
AI has helped me rediscover my love of coding. It helps me write my emails for me, puts together my shopping list, and gives me advice on how to structure my day. AI tells me what to do. I don't have to fear my choices anymore, because AI makes the choices for me.
Once Mythos is available to business customers, it should radically improve security across the entire web. Imagine if everyone was able to pipe their codebase through Mythos before deployment. We honestly may be on the verge of a bug-free internet.
There isn't really a good alternative for After Effects, despite its flaws. There are other motion graphics tools, but they're usually missing enough functionality that you eventually go crawling back to Adobe.
Now that software development is apparently solved, can someone please build a GPU-accelerated version of After Effects? Every motion designer in the world would make the switch overnight.