Sunday, July 16, 2017

The Machinery of Destruction

Who are you more afraid of – psychopathic individuals, like Ted Bundy, or psychopathic systems, like communism or Nazism? Or capitalism, which, while it may not be as inherently murderous as the others, seems to be destroying us far more efficiently? Which of these scares you most, and, emotional reactions aside, which is actually the most likely to do harm? What if the entities in question were endowed with superhuman intelligence – an individual like the fictional but archetypal Hannibal Lecter, or a capitalism armed with better technology?

This thought was prompted by another SSC post, which makes a case for putting more resources into preventing the possible catastrophic consequences of artificial intelligence. In the course of that, he dismissed some common counterarguments, including this one:
For a hundred years, every scientist and science fiction writer who’s considered the problem has concluded that smarter-than-human AI could be dangerous for humans. And so we get these constant hot takes, “Oh, you’re afraid of superintelligent AI? What if the real superintelligent AI was capitalism?”
Well: my number one most popular post ever was exactly that hot take; I'm dismayed to learn that it's a cliché. I posted that in 2013, so maybe I was ahead of the curve, but in any case I feel kind of deflated now.

But my deeper point was not that it's dumb to worry about the risks of AI since capitalism is much more dangerous – it's that AI and capitalism are not really all that different, that they are in fact one and the same, or at least descended from a common ancestor. And thus the dangers (both real and perceived) of one are going to be very similar to the dangers of the other, due to their shared conceptual heritage.

Why do I think that AI and capitalism are ideological cousins? Both are forms of systematized instrumental rationality. Both are human creations, and thus imbued with human goals, but both seem to be capable of evolving autonomous system-level goals (and thus identities) that transcend their origins. Both promise to generate enormous wealth while simultaneously threatening utter destruction. Both seem to induce strong but divergent emotional/intellectual reactions, both negative and positive. Both are supposed to be rule-based (capitalism is bound by laws, AI by the formal rules of computation), yet both constantly threaten to burst through their constraints. And both seem to inspire in some a kind of spiritual rapture, whether of transcendence or of the eschaton.

And of course, today capitalism and AI are converging in a way that was not really the case 40 years ago – not that there weren't people trying to make money out of AI back then, but it was a very different AI and a very different order of magnitude of lucrativeness. Back then, almost every AI person was an academic or quasi-academic, and the working culture was grounded in war (Turing's and Wiener's foundational work was done as part of the war effort) and the military-industrial-academic complex. The newer AI is conducted by immensely wealthy private companies like Google and Baidu. This is at least as huge a change for the field as the transition from symbolic to statistical techniques.

So AI and capitalism are merely two offshoots of something more basic – let's call it systematized instrumental rationality – and are now starting to reconverge. Maybe capitalism with AI is going to be far more powerful and dangerous than earlier forms – that's certainly a possibility. My only suggestion is that instead of viewing superempowered AIs as some totally new thing that we can't possibly understand (which is what the term "AI singularity" implies), we view them as a next-level extension of processes that are already underway.

This may be getting too abstract and precious, so let me restate the point more bluntly: instead of worrying about hypothetical paperclip maximizers, we should worry about the all-too-real money and power maximizers that already exist and that are going to be the main forces behind the further development of AI technologies. That's where the real risks lie, and so any hope of containing them will require grappling with real human institutions.
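
To make the family resemblance concrete, here is a minimal toy sketch of a generic maximizer – purely illustrative, with made-up state and move functions, and not anyone's actual agent architecture or economic model. The optimization loop is the same whether the plugged-in objective counts paperclips or dollars; only the goal function differs.

    import random

    def maximize(utility, state, moves, steps=1000):
        # Generic greedy optimizer: the shared "instrumental rationality" core.
        # It has no idea what it is optimizing; it just pushes utility upward.
        for _ in range(steps):
            candidate = random.choice(moves(state))
            if utility(candidate) >= utility(state):
                state = candidate
        return state

    def paperclips(s):
        return s["clips"]                  # the classic paperclip maximizer

    def profit(s):
        return s["revenue"] - s["costs"]   # a money maximizer

    def moves(s):
        # Toy move set: make a clip (at a cost), sell something, or cut costs.
        return [
            {**s, "clips": s["clips"] + 1, "costs": s["costs"] + 1},
            {**s, "revenue": s["revenue"] + 2, "costs": s["costs"] + 1},
            {**s, "costs": max(0, s["costs"] - 1)},
        ]

    start = {"clips": 0, "revenue": 0, "costs": 0}
    print(maximize(paperclips, start, moves))  # piles up clips, indifferent to cost
    print(maximize(profit, start, moves))      # piles up money, indifferent to clips

Swap in a different utility function and the same machinery serves a different master – which is the whole point.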

Note: the identification of AI with a narrow form of instrumental rationality is both recent and somewhat unfair – earlier generations of AI were more interested in cognitive modelling and were inspired by thinkers like Freud and Piaget, who were not primarily concerned with goal-driven rationality. But it's this more constricted view of rationality that drives the AI-risk discussions.

3 comments:

jed said...

Totally correct up to a point. I agree that both "AI and capitalism are merely two offshoots of something more basic, let's call it systematized instrumental rationality, and are now starting to reconverge." And this is important and supports your later conclusions.

But... capitalism focuses on private goods (rival and excludable), and there are other kinds of goods that are equally important. Economics also identifies "public goods" (non-rival and non-excludable), but beyond that there are "collaborative goods", where each participant benefits from more participants joining the network. These are more than merely non-rival -- call them "anti-rival".

Most social phenomena (including markets) are collaborative goods, not private goods. Put another way, private goods only exist as defined by a social fabric woven from collaborative goods. “Systematized instrumental rationality” (SIR) applies to collaborative goods just as much as private goods — as we see if we look into the most powerful collaborative goods like the phone network, the internet and the international financial trading system.

So the problem that this post identifies isn’t SIR as such. Instead it is the desire and ability of actors in the system to privatize the benefits of SIR, and the lack of sufficient limits on privatization by the larger social context. This excessive privatization is (most of?) what we are talking about when we talk about “economic rents”.

In other words, if we can get our SIR focused on making collaborative goods more effective, and prevent the privatization of their benefits, we will have domesticated the monster.

No doubt the motivation to privatize contributes, both directly and indirectly, to economics' tendency to ignore collaborative goods.

I guess we need to analyze in more detail where that motivation comes from, given that it is so often counter-productive even for the privatizers.

mtraven said...

Yes, I largely agree.

I'm not against rationality – that would be silly. But I think almost everyone is aware that there are problems with rationality when it is hooked up to the wrong goals, or ignores context, or is exclusively focused on the private, as you say. "Rationality" in this sense is a case of intelligence being stupid by being narrow.

Gregory Bateson's "Conscious Purpose versus Nature" identifies this problem, although his solution (contemplate the awesome holistic properties of biological systems) seems inadequate.

jed said...

I never imagined you were anti-rational (in the broad sense, as you say) -- one reason I like this blog.

Here I don't think the problem is narrowness, though. In a sense, the rationality that built the internet was very narrow, but it worked out fine. This is even largely true of Wikipedia.

I think the problem is power on the part of privatizers, coupled with a lack of ideological focus or intensity on the part of those who want our world to be more open and collaborative.