It kind of sounds like you are saying it is impossible to improve on the current state of the world.

That if it were possible to improve things, someone would have already done it. And since they haven't, it must not be possible.

That feels a bit extreme… Maybe I’m misunderstanding?


No, it is certainly possible to come up with an innovation that allows progress.

But the tone I get from discussions about repairability and performance is that it would be trivial to make the device, if only businesses wanted to.

However, given that none of the many alternative manufacturers have managed it yet, the probability seems very low that the ideal device is possible with current technology at a viable price.

Basically, it is a competitive market (or was), and what won out was what was possible. Barring some leap in technology, it is unrealistic to assume we can do better without suffering tradeoffs.


Are you saying there isn’t an actual sycophancy problem?

We are talking about overall patterns here, not the experience of a small subset of skilled and careful users.


> (b) bombing is very expensive so nobody actually profits from the insider trading

The people profiting aren't buying the bombs with their own money.


> up to the point where it could be illegal misappropriation

Huh..?

> And then taking the moral high ground and being judgemental about people because they worked in gambling is probably something one should reconsider.

Ah I see.


HN occasionally devolves into “supremely pedantic and nitpicky” mode. Today is one of those days.

If you had tried for a few more minutes, you would have figured it out.

Maybe they meant un-uninstallable?

Oh yeah, that's actually how I read it, though now I realize it's nonsensical... like when someone says "I could care less" when they actually mean "couldn't".

If self-driving is any indication, it may take 10+ years to go from 90% to 95%.


Try something like:

> Please carefully review (whatever it is) and list out the parts that have the most risk and uncertainty. Also, for each major claim or assumption, can you list a few questions that come to mind? Rank those questions and ambiguities as: minor, moderate, or critical.

> Afterwards, review the (plan / design / document / implementation) again thoroughly under this new light and present your analysis as well as your confidence about each aspect.

There are a million variations on patterns like this. It can work surprisingly well.

You can also inject 1-2 key insights to guide the process. E.g. "I don't think X is completely correct because of A and B. We need to look into that and also see how it affects the rest of (whatever you are working on)."
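
If you end up reusing a pattern like this a lot, it's easy to script. Here's a rough sketch in Python; ask_llm is a stand-in for whatever chat client you actually use, not any real API:

    # Two-pass review pattern from above. ask_llm(messages) is a
    # placeholder: it takes a list of {"role": ..., "content": ...}
    # dicts and returns the model's reply as a string.
    def two_pass_review(ask_llm, artifact, content):
        messages = [{
            "role": "user",
            "content": (
                f"Please carefully review the following {artifact} and "
                "list out the parts that have the most risk and "
                "uncertainty. Also, for each major claim or assumption, "
                "list a few questions that come to mind. Rank those "
                "questions and ambiguities as: minor, moderate, or "
                "critical.\n\n" + content
            ),
        }]
        first_pass = ask_llm(messages)

        # Second pass runs in the same conversation, so the model sees
        # its own risk analysis as context. Any key insights you want
        # to inject can be appended here as extra user messages.
        messages.append({"role": "assistant", "content": first_pass})
        messages.append({
            "role": "user",
            "content": (
                f"Now review the {artifact} again thoroughly in this "
                "new light, and present your analysis as well as your "
                "confidence about each aspect."
            ),
        })
        return ask_llm(messages)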


Ok! I will try that, thank you very much.


Of course! I get pretty lazy, so my follow-up is usually something like:

"Ok let's look at these issues 1 at a time. Can you walk me through each one and help me think through how to address it"

And then it will usually give a few options for what to do for each one as well as a recommendation. The recommendation is often fairly decent, in which case I can just say "sounds good". Or maybe provide a small bit of color like: "sounds good but make sure to consider X".

Often we will have a side discussion about that particular issue until I'm satisfied. This happens more when I'm doing design / architectural / planning sessions with the AI. It can be as short or as long as it needs to be. And then we move on to the next one.

My main goal with these strategies is to help the AI get the relevant knowledge and expertise from my brain with as little effort as possible on my part. :D

A few other tactics:

- You can address multiple at once: "Items 3, 4, and 7 sound good, but let's work through the others together."

- Defer a discussion or issue until later: "Let's come back to item 2, or possibly save that for a later session."

- Save the review notes / analysis / design sketch to a markdown doc to use in a future session, or just as a reference to remember why something was done a certain way when I come back to it. It can also be useful to give to the AI for future related work (a tiny sketch of this is below).

- Send the content to a sub-agent for a detailed review and then discuss with the main agent.
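
For the markdown-notes one, the helper can be as dumb as this (all the names here are made up, just to show the shape):

    # Hypothetical helper for the markdown-notes tactic: dump a
    # session's review notes to a dated file so a future session
    # (or a sub-agent) can be pointed at it.
    from datetime import date
    from pathlib import Path

    def save_review_notes(topic, notes, notes_dir="notes"):
        path = Path(notes_dir) / f"{date.today()}-{topic}.md"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(f"# Review notes: {topic}\n\n{notes}\n")
        return path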


Eh… I am not sure that translates to "I don't know".

IDK would require the LLM to be aware of the frequency of cases seen in its own training.

I can see this working as a risk ranking, which is certainly worth trying in its own right.

Does it actually say "I don't know"?


That’s fucking crazy.

