This may be true, but it happens today as well without AI.
Big pharma corporations choose to research treatments for diseases that affect the Western world, often setting aside the needs of poorer nations.
"The risk with A.I. is that these biases become automated and invisible — that we begin to accept the wisdom of machines over the wisdom of our own clinical and moral intuition. Many A.I. programs are black boxes: We don’t know exactly what’s going on inside and why they produce the output they do. But we may increasingly be expected to honor their recommendations."
I do understand that there is a serious possibility of deploying a bad black-box AI, but a number of questions come to mind.
Isn't it true, though, that human intelligence (and especially corporate/government/etc. intelligence) is also susceptible to invisible biases? Do we even know why humans or groups of humans produce the output they do? And is there some kind of black-box testing procedure we could use to raise trust in AI to a level at least equal to humans?
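To make the black-box testing idea concrete, here is a minimal sketch (in Python) of the kind of outcome audit you could run with nothing but query access: send the system a probe set and compare approval rates across groups (a demographic-parity check). The names here (score_applicant, the probe data) are hypothetical stand-ins, not anyone's actual API.

    # Minimal sketch of a black-box fairness audit: query the opaque model on
    # a probe set and compare outcome rates across groups.
    import random

    def score_applicant(applicant):
        # Hypothetical stand-in for the opaque model under audit; the auditor
        # only ever sees its output, never its internals or training data.
        return random.random() > 0.5

    def approval_rate(applicants):
        # Fraction of applicants the model approves.
        return sum(score_applicant(a) for a in applicants) / len(applicants)

    def parity_gap(group_a, group_b):
        # Absolute difference in approval rates between two groups.
        return abs(approval_rate(group_a) - approval_rate(group_b))

    # Probe applicants that are identical except for group membership.
    probe_a = [{"income": 50_000, "group": "A"} for _ in range(1000)]
    probe_b = [{"income": 50_000, "group": "B"} for _ in range(1000)]
    print("Demographic parity gap:", parity_gap(probe_a, probe_b))

This only establishes a disparity in outcomes, not intent, but it is the sort of test a regulator or auditor could run without ever opening the box.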
The presumed argument here is that you can challenge individual people, corporations, etc. more easily on discriminatory behavior than you can challenge an algorithm. If an algorithm happens to refuse to issue loans to black people, who's the class action lawsuit going to sue?
You'd have to prove that the A.I. was discriminating based on a "protected class" and not on some other basis. But you have no insight into the A.I. or its training data. Nor do you have a comparable A.I. of your own to run A/B experiments that can prove discrimination. Now what?
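For illustration, this is roughly the paired (counterfactual) experiment you would want to run if you did have query access: submit matched applications that differ only in the protected attribute and count how often the decision flips. get_decision is a hypothetical placeholder; the point of the paragraph above is precisely that a plaintiff usually cannot call it.

    # Sketch of a paired counterfactual audit, assuming query access exists.
    def get_decision(application: dict) -> bool:
        # Placeholder for the opaque lender's decision system.
        return application.get("income", 0) > 40_000

    def flip_rate(applications, attr="race", value_a="A", value_b="B"):
        # Fraction of matched pairs where changing only the protected
        # attribute flips the decision -- direct evidence of disparate
        # treatment if it is well above zero.
        flips = 0
        for app in applications:
            a = {**app, attr: value_a}
            b = {**app, attr: value_b}
            if get_decision(a) != get_decision(b):
                flips += 1
        return flips / len(applications)

    applications = [{"income": 30_000 + 5_000 * i} for i in range(10)]
    print("Decision flip rate:", flip_rate(applications))

Without that access, or a comparable model of your own, the flip rate is unknowable, which is exactly the bind described above.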
There's an additional danger. Society often moves forward when the standard-bearers of what used to be "acceptable" or "correct" retire or die out. A.I. doesn't die. An A.I. built with today's biases may, in some form, outlive its creators and carry those biases well into the future.
"Science progresses one funeral at a time." -- Max Planck