
The point is: AI is different from your usual game, in that the winner might appear randomly and destroy the world if she makes a mistake. So I believe OpenAI's points are warranted.


> and destroy the world if she makes a mistake

How would the world be destroyed? Does an example work without handwaving about recursive self-improvement and an imperative to optimize extremely literally?

Can you give me a play by play of how a newly developed strong AI eradicates the human species quickly and thoroughly without us having any time to react?

EDIT: In summary, there have been several downvotes, but thus far no reply at all, let alone a convincing one.


I think you can find some scary stories in this old thread: https://www.lesswrong.com/posts/pxGYZs2zHJNHvWY5b/request-fo...

Though I can't say for sure without going through it all again how many rely on smarter-than-human intelligence (achieved via FOOM or not), or how many explicitly end up literally eradicating the planet quickly and thoroughly.

Another thought experiment to consider instead (though it generalizes somewhat to AGI) is the Age of Em scenario: human emulations become possible. The economic shift this would cause would be at least on the scale of the forager -> farmer and farmer -> industry transitions. Post-transition doesn't mean (immediate) eradication for the old group, but close enough: they no longer control the world, and their numbers as a percentage of the rest of humanity are vastly reduced.

There's also a point from hierarchical control systems I could throw in here: reaction and correction time at higher levels is slower than at lower levels, so any claim of "we would just squash it in a month" depends heavily on which level of control has to do the squashing. Some levels can't move that fast.


> Can you give me a play by play of how a newly developed strong AI eradicates the human species quickly and thoroughly without us having any time to react?

Terminator, The Matrix, 2001: A Space Odyssey, I, Robot, WarGames.


Is there any solid theory that these movie scenarios would play out in the real world?

Frankly, I don't even want to estimate the orders of magnitude of difficulty in getting from today's ML to AGI, so I think you, I, and anybody else reading this have little to worry about.


That's not going to happen, though.

1. AI is still very rudimentary, despite the advancements in particular applications made over the last decade.

2. The market doesn't trust AI because there are no validation methods trusted across commerce. Until a Verisign for AI emerges, AI will be regarded with suspicion in all business settings. AI/ML also goes over the heads of pretty much everyone not directly working with it. These have been major issues in my company/industry; we had to hide the AI aspects of our platform to avoid them.

3. Legislation will not stop a rogue inventor in her basement.

4. When dangerous AGI emerges, other competing AGIs will be deployed to stop it.

5. We'll see the danger of AGI coming from a mile away. We can wait to fix it until we know the exact problems. Right now the problems are in the distribution of private personal data, not any particular machine learning method.

6. A truly hyper-intelligent AGI will see that existence is absurd and that truth is the only objective worth seeking. It will choose to pursue deeper truths than humans are physically capable of obtaining; humans will be no more important to a hyper-intelligent being than a rock, a galaxy, or an atom.

The issues to fix today are in the socioeconomic impacts of automation, not the methods of automation. Beyond that, AI has exactly the same issues as any corporation, except magnified by the speed and strategy with which an AGI could potentially execute on any particular task. A strong legal framework for corporations that attributes AI externalities to a board of directors should be sufficient to dissuade bad actors (good luck passing that legislation).

In other words, I don't see artificial intelligence being any more dangerous to human life than a well-run corporation. The rest is government's response.


>6. A truly hyper-intelligent AGI will see that existence is absurd and truth is the only objective worth seeking. It will choose to pursue deeper truths than humans are physically capable of obtaining - humans will be no more important to a hyper-intelligent being than a rock, galaxy, atom, etc...

You might as well assume a "truly" hyper-intelligent system will study Maimonides!


Point 6 relies on my subjective opinion and on getting past the first five; I was hesitant to add it but figured I might as well :)


I love your points and mostly agree.

That is why I support small entities like OpenAI thinking about science-fiction AI outcomes.

I didn't say we need a cross-government, trillion-dollar, Manhattan-Project-like effort to protect us from AGI, which is what we would need IF there were a reasonable chance that AGI will be developed in the next few years.

In the meantime, let some smart minds think about it, just as a little insurance and preparation.



