
> (1) for my day job, it doesn't make me super productive with creation, but it does help with discovery, learning, getting myself unstuck, and writing tedious code

I hear this take a lot but does it really make that much of an improvement over what we already had with search engines, online documentation and online Q&A sites?



It is the best version of fuzzy search I have ever seen: the ultimate "tip of my tongue" assistant. I can ask super vague things like "Hey, I remember seeing a tool that allows you to put actual code in your files to do codegen, what could it be?" and it instantly gives me a list of possible answers, including the thing I'm looking for: Cog.

I know that a whole bunch of people will respond with the exact set of words that will make it show up right away on Google, but that's not the point: I couldn't remember what language it used, or any other detail beyond what I wrote and that it had been shared on Hacker News at some point, and the first couple Google searches returned a million other similar but incorrect things. With an LLM I found it right away.
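For anyone who hasn't seen it, this is roughly what Cog's inline codegen looks like — a minimal sketch assuming the cogapp package's documented `[[[cog ... ]]]` markers (the constant names are invented). The generator code lives in comments, and running `cog -r` on the file rewrites everything between `]]]` and `[[[end]]]`, so the file stays valid Python before and after:

```python
# Generator code sits in comments; `cog -r thisfile.py` regenerates the
# region below from it. The lines between ]]] and [[[end]]] are its output.

# [[[cog
# import cog
# for name in ["alpha", "beta", "gamma"]:
#     cog.outl(f'{name.upper()} = "{name}"')
# ]]]
ALPHA = "alpha"
BETA = "beta"
GAMMA = "gamma"
# [[[end]]]

print(ALPHA, BETA, GAMMA)
```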


That's a great example. (Also I love Cog.)

The training cutoff comes into play here a bit, but 95% of the time I'm fuzzy searching like that I'm happy with projects that have been around for a few years and hence are both more mature and happen to fall into the training data.


Yes.

Me, typing into a search engine, a few years ago: "Postgres CTE tutorial"

Me, typing into any AI engine, in 2025: "Here is my schema and query; optimize the query using CTEs and anything else you think might improve performance and readability"
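For concreteness, here's the kind of CTE rewrite being asked for, sketched against an in-memory SQLite database rather than Postgres (the table, columns, and data are invented for illustration — the `WITH` syntax is the same in both):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (user_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")

# Instead of nesting a subquery, a CTE names the intermediate result,
# which usually reads better and is easier to optimize incrementally:
rows = conn.execute("""
    WITH totals AS (
        SELECT user_id, SUM(amount) AS total
        FROM orders
        GROUP BY user_id
    )
    SELECT user_id, total FROM totals
    WHERE total > 8
    ORDER BY user_id
""").fetchall()

print(rows)  # [(1, 15.0)]
```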


And nowadays if you type that into a search engine you may be overwhelmed with ads or articles of varying quality that you'll need to read and deeply understand to adapt to your use-case.


> you'll need to read and deeply understand to adapt to your use-case

This sort of implies you are not reading and deeply understanding your LLM output, doesn't it?

I am pretty strongly against that behavior


I didn't say that. When you're trying to get a job done, it's time-consuming to sift through a long tutorial online, because a big part of that time is spent determining whether it's garbage and whether it's solving the exact problem that you need to solve. IME the LLM helps with both of those problems.


Those things don't really help with getting unstuck, especially if the reason you are stuck is that there's tedious code that you anticipate writing and don't want to deal with.


Exactly. My two worst roadblocks are the beginning of a new feature, when I procrastinate way too much (I'm a bit afraid of choosing a design/architecture and committing to it), and towards the end, when I have to fix small regressions and write tests, and I procrastinate because I just don't want to. AI solved the second roadblock 100% of the time, and it helps with design decisions enough to be useful (Claude 4 at least). The code in the middle is a plus, but tbh I often do it myself (unless it's frontend code).


> does it really make that much of an improvement over what we already had with search engines, online documentation and online Q&A sites?

This can't be a serious question? 5 minutes of testing will prove to you that it's not just better, it's a totally new paradigm. I'm relatively skeptical of AI as a general-purpose tool, but in terms of learning and asking questions about well-documented areas like programming language specs, APIs, etc., it's not even close. Google is dead to me for this use case.


> This can't be a serious question? 5 minutes of testing will prove to you that it's not just better, it's a totally new paradigm

It is a serious question. I've spent much more than 5 minutes testing this, and I've found that your "totally new paradigm" is for morons


Yes. It's so dramatically better it's not even funny. It's not that information doesn't exist out there, it's more that an LLM can give it to you in a few seconds and it's tailored to your specific situation. The second part is especially helpful if the internet answer is 95% correct but is missing something specific to you that ends up taking you 20 minutes to figure out.


> that ends up taking you 20 minutes to figure out

That 20 minutes, repeated over and over across the course of a career, is the difference between being a master and being an amateur.

You should value it, even if your employer doesn't.

Your employer would likely churn you into ground beef if there was a financial incentive to, never forget that


Yeah, I strongly disagree. I want to spend time figuring out the things that are important to me and my career. I couldn't care less about the one regex I write every year. Especially when I've learned and forgotten the syntax more times than I can count.
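For what it's worth, here's the sort of once-a-year regex in question (the pattern and version format are an invented illustration); named groups at least make it self-documenting when you come back to it:

```python
import re

# Pull major/minor/patch out of a semver-style version string.
# Named groups mean the call sites stay readable a year later.
SEMVER = re.compile(r"^(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)$")

m = SEMVER.match("2.14.3")
print(m.group("major"), m.group("minor"), m.group("patch"))  # 2 14 3
```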


There's a funny quote about regex

"You had a problem. You tried to solve it with regex. Now you have two problems"

1) your original problem 2) your broken regex

I would like to propose an addition

"You had a problem. You tried to solve it with AI generated regex. Now you have three problems"

1) your original problem 2) your broken regex 3) your reliance on AI


Are you boycotting AI or something?

If you try it yourself you'll soon find out that the answer is a very obvious yes.

You don't need a paid plan to benefit from that kind of assistance, either.


> Are you boycotting AI or something?

At this point I am close to deciding to fully boycott it yes

> If you try it yourself you'll soon find out that the answer is a very obvious yes

I have tried plenty over the years; every time a new model releases and the hype cycle fires up again, I look in to see if it is any better.

I try to use it for a couple of weeks, decide it is overrated, and stop. Yes, it is improving. No, it is not good enough for me to trust.


You asked whether it's really better than "what we already had with search engines, online documentation and online Q&A sites".

How have you found it not to be significantly better for those purposes?

The "not good enough for you to trust" is a strange claim. No matter what source of info you use, outside of official documentation, you have to assess its quality and correctness. LLM output is no different.


> How have you found it not to be significantly better for those purposes

Not even remotely

> LLM output is no different

It is different

A search result might take me to the wrong answer but an LLM might just invent nonsense answers

This is a fundamentally different thing and is more difficult to detect imo


> A search result might take me to the wrong answer but an LLM might just invent nonsense answers

> This is a fundamentally different thing and is more difficult to detect imo

99% of the time it's not. You validate and correct/accept like you would any other suggestion.


Yes, it can.


In my experience a lot of our "google engineers" now do both. We tend to preach that they go to the documentation first, since that will almost always lead to actual understanding of what they are working on. Eventually most of them pick up that habit, and in my experience, they never really go back to being "google engineers" after that.

Where the AI helps with this is that it can search documentation rather well. We do a lot of work with Azure, and while the Microsoft documentation is certainly extensive, it can be rather hard to find exactly what you're looking for. LLMs can usually find a lot of related pages, and then you can figure out which are relevant more easily than you can with Google/Ecosia/DDG. I haven't used Kagi, so maybe that works better?

As far as writing "tedious" code goes, I think the AI agents are great. Where I have personally found a huge advantage is in keeping documentation up to date. I'm not sure if it's because I have ADHD or because my workload is basically enough for 3 people, but this is an area I struggle with. In the past, I've often let the code be its own documentation, because that would be better than having outdated/wrong documentation. With AI agents, I find that I can have good documentation that I don't need to worry about beyond approving in the keep/discard part of the AI agent.

I also rarely write SQL, Bicep, YAML configs and similar these days, because it's so easy to determine if the AI agent got it wrong. This requires that you're an expert on infrastructure as code and SQL, but if you are, the AI agents are really fast. I think this is one of the areas where they 10x at times. I recently wrote an ingress for an FTP pod (don't ask), and writing all those ports for passive mode would've taken me a while.

There is a lot of risk involved, though. If you can't spot errors or outdated functionality quickly, then I would highly recommend you don't do this. Bicep LLM output is often not up to date, and since the docs are excellent, what I do in those situations is copy/paste what I need. Then I let the AI agent update things like parameters, which certainly isn't 10x but is still faster than I can do it.
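A quick sketch of why that passive-mode ingress is tedious: one entry per port, dozens of times over. The port range and YAML field names below are invented for illustration; the point is just that generating the block beats hand-typing it:

```python
# Emit one YAML-ish service-port entry per FTP passive-mode port.
# Field names and range are illustrative, not a real chart's schema.
def passive_port_entries(start: int, end: int) -> str:
    lines = []
    for port in range(start, end + 1):
        lines.append(f"    - name: ftp-pasv-{port}")
        lines.append(f"      port: {port}")
        lines.append("      protocol: TCP")
    return "\n".join(lines)

print(passive_port_entries(30000, 30004))
```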

Similarly, it's rather good at writing and maintaining automated tests. I wouldn't recommend this unless you're actively dealing with corrupted states directly in your code. But we do fail-fast programming/Design by Contract, so the tests are really just an extra precaution and compliance thing, meaning that they aren't as vital as they would be for more implicit ways of dealing with error handling.
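A minimal sketch of that fail-fast/Design-by-Contract style (the function and its invariants are invented): preconditions and postconditions crash immediately on bad state, so agent-generated tests are a second line of defence rather than the only one:

```python
# Contracts as assertions: violations fail fast at the call site,
# instead of surfacing later as a corrupted state.
def withdraw(balance: float, amount: float) -> float:
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: cannot overdraw"
    new_balance = balance - amount
    assert new_balance >= 0, "postcondition: balance never negative"
    return new_balance

# The kind of test an agent can cheaply write and maintain:
assert withdraw(100.0, 30.0) == 70.0
```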

I don't think AIs are good at helping you with learning or getting unstuck. I guess it depends on how you would normally deal with it. If the alternative is "google programming", I imagine it's sort of similar and probably more effective. It's probably also more dangerous: at least we've found that our engineers are more likely to trust the LLM than a Medium article or a Stack Overflow thread.



