1. AI is still very rudimentary, despite the advancements in particular applications made over the last decade.
2. The market doesn't trust AI because there are no validation methods trusted across commerce. Until a Verisign for AI emerges, AI will be regarded with suspicion in all business settings. AI/ML also goes over the heads of pretty much everyone who is not directly working with it.
These have been major issues in my company/industry. We had to hide the AI aspects of our platform to avoid that suspicion.
3. Legislation will not stop a rogue inventor in her basement.
4. When dangerous AGI emerges, other competing AGIs will be deployed to stop it.
5. We'll see the danger of AGI coming from a mile away. We can wait to fix it until we know the exact problems. Right now the problems are in the distribution of private personal data, not any particular machine learning method.
6. A truly hyper-intelligent AGI will see that existence is absurd and truth is the only objective worth seeking. It will choose to pursue deeper truths than humans are physically capable of obtaining - humans will be no more important to a hyper-intelligent being than a rock, galaxy, atom, etc...
The issues to fix today are in the socioeconomic impacts of automation, not the methods of automation. Beyond that, AI has exactly the same issues as any corporation, except magnified by the speed and strategy with which an AGI could potentially execute on any particular task. A strong legal framework for corporations that attributes AI externalities to a board of directors should be sufficient to dissuade bad actors (good luck passing that legislation).
In other words, I don't see artificial intelligence being any more dangerous to human life than a well-run corporation. The rest is a matter of government response.
>6. A truly hyper-intelligent AGI will see that existence is absurd and truth is the only objective worth seeking. It will choose to pursue deeper truths than humans are physically capable of obtaining - humans will be no more important to a hyper-intelligent being than a rock, galaxy, atom, etc...
You might as well assume a "truly" hyper-intelligent system will study Maimonides!
That is why I support small entities like OpenAI thinking about AI's science-fiction outcomes.
I didn't say we need a cross-government, trillion-dollar, Manhattan Project-style effort to protect us from AGI, which is what we would need IF there were a reasonable chance that AGI will be developed in the coming years.
In the meantime, let some smart minds think about it, just as a little insurance and preparation.