
What Makes AI Dangerous?

Courtesy of ZeroHedge.

Authored by Per Bylund via The Mises Institute,

So I watched “Do You Trust This Computer?”, a film that “explores the promises and perils” of artificial intelligence.

While it notes both the good and the bad, it has an obvious focus on how AI might bring about “the end of the world as we know it” (TEOTWAWKI). That is, if it is left unregulated.

It’s strange, however, that the film’s examples of TEOTWAWKI AI were “autonomous weapons” and “fake news,” the latter because it can provide a path for a minority-supported dictator to “take over.” While I understand (and fear) both, the examples have one thing in common, and it is not AI.

That one thing is the State.

Only States’ militaries, and groups looking to take over a State, have any interest in “killer robots.” Such weapons are also developed by and for those groups.

The fake news and “undue influence” issue is likewise about power over the State.

Neither weapons nor fake news require AI.

Yet, in some strange twist, the filmmakers make it an AI problem. Worse: they end the film by indicating that the main problem is that AI is “unregulated.”

But this is completely illogical: how can the State be both the problem’s common denominator *and* the solution?

Instead, we’re led to believe that it is problematic that Google tracks our web searches and that Facebook knows our friends and beliefs (“because autonomous weapons”?). While I agree that this is ugly, neither company is making a claim over life and death. In fact, they operate under the harshest regulation there is: the market. They make investments to make money, and money can be made in only one of two ways: by offering something that people want and are willing to pay for (Oppenheimer’s “economic” means), or by simply taking it from people against their will (the “political” means). Companies operate according to the former, which means they are at the mercy of consumers. The State operates according to the latter.

No, I’m not saying the ability to play on people’s emotions, deceive them with “fake” information, and so on is unproblematic. I’m saying the film completely misses the elephant in the room, and then suggests it is the solution.

The logic is based on wishful thinking, if not ideology: a refusal to see what is obviously there.

The solution is simply not a solution: if the State were to “regulate” how Google and Facebook use AI to sift through data and feed people what they want to hear, what makes anyone think this would also apply to the DOD or the NSA and their data, which are *not* collected voluntarily from consumers but in secret? And the latter agencies are much more likely to work on autonomous weapons. The film even states that this is the case, yet seems to skip over the problem.

To illustrate the difference between Oppenheimer’s economic and political means, consider two recent trust crises.

The Cambridge Analytica debacle caused Facebook to immediately change its business practices, as the owners lost billions when the company’s value plummeted. That value is based on people’s willingness to use the website and its apps and to continue sharing content. The #DeleteFacebook hashtag harmed the owners. Compare that with what was revealed by Snowden: that the State spies on everyone. The data are collected in part from companies that are both forced to comply with requests and legally obligated to say nothing about it. Yes, the leak stirred up a lot of emotion, but what happened to the “deep state” surveillance? Probably nothing. Except, maybe, some new routines and probably more money to control leaks.

Which is more problematic: the “economic” means, subject to consumers’ trust (and, really, their whims), or the “political” means, subject to no insight or oversight and not accountable at all, because it is secret and because we pay for it whether we wish to or not?

Add to this that the latter is interested in, and aims at, both autonomous weapons and keeping or claiming the power of the State. It’s pretty obvious that neither is a perfect, utopian solution, but one clearly has a built-in control mechanism because it is based on creating value; the other does not, and indeed operates in complete secrecy and at our involuntary expense. Yet in the film the latter is somehow treated as the (“only”?) solution. That perhaps makes for a good play on people’s confirmation bias, because we’ve learned in school, and want to believe, that the State “is us.” Fine, but that’s not us spying on us and producing autonomous weapons. In fact, it would be hard to believe any political decision to “stop developing” such weapons. Who really believes they wouldn’t continue despite saying the very opposite?

The fact is, for the State there is no downside to simply lying and pretending, whereas companies, if the breach of trust is severe, can be wiped out overnight when people stop trusting them: their value is gone. So the logic in the film simply doesn’t work; it doesn’t make sense. One cannot help thinking that if this is the state of human intelligence, of our ability to draw logical conclusions from the data available to us, then making machines that think on “our level” can’t be all that difficult. And it cannot be hard for machines to recognize real patterns and draw the conclusions that follow.

But perhaps I shouldn’t be surprised that the filmmakers misunderstand economics at a fundamental level: they point to automation as a huge problem, because it creates more value for us at lower cost. We’ll be relieved of jobs. Oh no. Think about that this Monday morning.
