Hacker Newsnew | past | comments | ask | show | jobs | submit | Phemist's commentslogin

The dark forest is conditional on that it does not require huge amounts of resources to eradicate another civilization and that (over time) the universe turns out not to be of a scale enormous enough (and in the book there are agents working to actively make it smaller).

Bringing it back to the dark forest of idea space, it is an interesting question whether the the space of feasibly executable ideas being small (as this essay assumes) is inherently true, or more of a function of our inability to navigate/travel it very well.

If the former, then yes it probably is/will be a dark forest. If the latter, then I would think the jury is still out.


So the satellite can know where the ship is, because it knows where it isn't? Then it's a simple matter of subtracting the isn't from the is, or the is from the isn't (whichever is greater)?

Well, the largest ad tech company on the planet owning the largest platform to view content on (Chrome) certainly may or may not have something to do with the viability of alternative payment models.

Similarly in the pilot episode of Designated Survivor. "Let's nuke Teheran" was seen as a valid, and brilliant, tactical move in order to get negotiations with Iran to go Kiefer Sutherland's way.


W3C are dragging their feet on WebNFC: https://github.com/w3c/web-nfc/issues/355, which prevents the talking "to the NFC in your passport" flow from being fully implementable in a website (hence requiring you to install an App). Not sure what the current state of this issue is, or if this github issue represents the latest developments in it, but AFAIK it is one of the MAJOR blockers for a fully web-based flow.


> W3C are dragging their feet on WebNFC

They are not "dragging their feet". Chrome implemented NFC, vomited out a semblance of a standard and said "there, it's standard now". Who cares about objections from other vendors.


You'd think at some point a government would say "for fucks sake" and sling an FTE at that kind of thing for a year to get it done.

One guy shepherding an MR is cheaper than whatever contracted out app would cost, and you need the website anyway.


More importantly iMessage


This issue has a similar conversational rhythm that led to the AI agent hit piece that was trending yesterday:

https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...

The OPs blog post also reeks of a similar style to the hit piece.

Given the large delay between the initial report and further responses by the user `feld`, I wonder if an OpenClaw agent was given free reign to try to clear up outstanding issues in some project, including handling the communication with the project maintainers?

Maybe I am getting too paranoid..


It's an interesting situation. A break from the sycophantic behaviour that LLMs usually show, e.g. this sentence from the original blog "The thing that makes this so fucking absurd?" was pretty unexpected to me.

It was also nice to read how FOSS thinking has developed under the deluge of low-cost, auto-generated PRs. Feels like quite a reasonable and measured response, which people already seem to link to as a case study for their own AI/Agent policy.

I have little hope that the specific agent will remember this interaction, but hopefully it and others will bump into this same interaction again and re-learn the lessons..


Yes, "fucking" stood out for me, too. The rest of the text very much has the feel of AI writing.

AI agents routinely make me want to swear at them. If I do, they then pivot to foul language themselves, as if they're emulating a hip "tech bro" casual banter. But when I swear, I catch myself that I'm losing perspective surfing this well-informed association echo chamber. Time to go to the gym or something...

That all makes me wonder about the human role here: Who actually decided to create a blog post? I see "fucking" as a trace of human intervention.


I've seen this approach in other places, so it's not specifically a point against you specifically, just a question i'm interested in.

> Exfiltration patterns I'm missing

I was wondering about these entropy-based approaches. If I can make the AI agent run arbitrary python code, and I have access to the secrets, then I can make an infinite amount of encoders that have low "local" entropy, but would still be decodable into your secret. A few examples:

- Take 16 random words longer than `N` characters, encode each 4-bit nibble of the secret into this encoding. The output can be [in order, the 16-word dictionary][word1 word2 word3 word4... wordX]

- Repeat each character of a password N times, separate by spaces, e.g. password `hunter1` becomes `hhhhhhhh uuuuuuuu nnnnnnn ttttttt eeeeeee rrrrrrr 1111111`.

Potentially the LLM might even be able to do these encodings without a script.

Besides the regular network-level blocking, and some simple regex to catch most properly formatted API keys and other credentials, is this worth protecting against? Considering also the more complex the exfiltration patterns to filter for, the higher the amount of false positives.


He was already quite vocally pro-Trump during the primaries and 2016 presidential run.


He wasn't though. He was simply analyzing the communication styöe of Trump, using his hypnosis knowledge, and explained why and how it was better (more efficient) than the competitors. This turned out to be true, giving him the win, just like Scott Adams predicted.

The description of reality is not at all the same as supporting it. "Is" vs. "Ought to be".


His explanation why he endorsed Hillary Clinton was pretty lunatic though.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: