Hacker News

Easily 99% of comments generated by LLMs are useless.


That's how I detect who is using LLMs at work.

  # loop over the images
  for filename in images_filenames:

    # download the image
    image = download_image(filename)
  
    # resize the image
    resize_image(image)

    # upload the image
    upload_image(image)


They're often repetitive if you're reading the code, but they're useful context that feeds back into the LLM. Often once the code is clear enough I'll delete them before pushing to production.
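For what it's worth, the delete-before-pushing step can be automated. A minimal sketch using only the standard library (`strip_comments` is a hypothetical helper name, not anyone's actual tooling); it drops `#` comments but keeps docstrings:

```python
import io
import tokenize

def strip_comments(source: str) -> str:
    """Return Python source with '#' comments removed (docstrings kept)."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    kept = [tok for tok in tokens if tok.type != tokenize.COMMENT]
    # untokenize pads with spaces where comments used to be, so the
    # result is still valid Python with the same behavior
    return tokenize.untokenize(kept)
```

In practice you'd run something like this over the changed files in a pre-push hook, or just delete the comments by hand.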


do you have proof of this being useful to the LLM? wouldn't you rather it re-read the actual code it generated, instead of risking that a wishful-thinking or stale comment leads it astray?


it reads both, so with the comments it more or less parrots the desired outcome I explained... and it sometimes catches the mismatch between code and comment itself before I even mention it

I read and understand 100% of the code it outputs, so I'm not so worried about falling too far astray...

being too prescriptive about it (like prompting "don't write comments") makes the output worse in my experience


I've noticed this too. They are often restatements of the line in verbal form, or they're intended for me, the reader of the LLM's output, rather than for a code maintainer.


99% of comments are not needed as they just re-express what the code below does.

I prefer to push for self documenting code anyway, never saw the need for docs other than for an API when I'm calling something like a black box.
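For illustration, here's the image loop from upthread rewritten in that self-documenting style: the intent moves into a named function and a docstring, and the line-by-line comments become unnecessary. The stub helpers are hypothetical stand-ins so the sketch runs on its own.

```python
# Hypothetical stand-ins for the real download/resize/upload helpers.
uploaded = []

def download_image(filename):
    return {"name": filename, "size": (1024, 768)}

def resize_image(image):
    image["size"] = (256, 192)

def upload_image(image):
    uploaded.append(image["name"])

def resize_and_reupload(image_filenames):
    """Download each image, resize it in place, and upload the result."""
    for filename in image_filenames:
        image = download_image(filename)
        resize_image(image)
        upload_image(image)
```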


I think it's because LLMs are often trained on data from code-tutorial sites and forums like Stack Overflow, and not always on production code


Not what I've found with Gemini.

What's particularly useful are the comments explaining the reasoning behind new code it adds at my request.


Very often, comments generated by humans are also useless. The reason for this is mandated comment policies, e.g., 'every public method should have a comment'. An utterly disgusting practice. One should only write a comment when one has something interesting to say. In a not-overly-complex code base there should be a comment maybe every 100 lines or so. In many cases it makes more sense to comment the unit tests than the code.


I think the rule of commenting public methods exists so that something like doxygen can extract a reference from them. Most IDEs can also display them on hover, and the comments can remind the caller of pre- and post-conditions.
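A sketch of that kind of doc comment, using a Python docstring rather than doxygen markup (the function and its contract are made up for illustration). The point is that the stated pre- and post-conditions show up on hover and in extracted reference docs:

```python
def withdraw(balance: int, amount: int) -> int:
    """Withdraw `amount` cents from `balance` cents.

    Precondition: 0 <= amount <= balance.
    Postcondition: the returned balance is non-negative.

    Raises:
        ValueError: if the precondition is violated.
    """
    if not 0 <= amount <= balance:
        raise ValueError("amount must be between 0 and balance")
    return balance - amount
```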


I am pretty far to one end of the spectrum on need for comments. Very rarely is a comment useful to help you/another developer decipher the intent and function of a piece of code.


Then tell it to write better comments...


Ah, so it's good enough to write code on its own without time-consuming, excessive hand-holding. But it's not good enough to write comments on its own.


If you put in the work to write rules and give good prompts, you get good results, just like with every other tool created by mankind.

How often do you use coding LLMs?


I can't speak to comment rules specifically, but I'm a heavy user of "agentic" coding and use rules files. While they help, they are simply not that reliable. For something like comments that's probably not a big deal, because some extra bad comments aren't the end of the world.

But I have rules that are quite important for completing a task successfully by my standards, and it's very frustrating when the LLM randomly ignores them. In a previous comment I explained my experiences in more detail, but depending on the circumstances, instruction compliance is 9/10 at best, with some instructions/tasks as poor as 6/10 in the most "demanding" scenarios, particularly as the context window fills up during a longer agentic run.


They comment on the how, not the why.
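A small made-up contrast between the two: the first comment restates the how and could be deleted, while the second records a why that the code alone cannot express.

```python
price = 19.99  # hypothetical input

# "How" comment (redundant): restates the line below.
# multiply the price by 100 and convert to int
cents_naive = int(price * 100)

# "Why" comment (useful): records intent the code can't show.
# Store money as integer cents: floats drift under arithmetic
# (19.99 * 100 is 1998.999...), so round before truncating.
price_cents = int(round(price * 100))
```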



