You know what I’m talking about. Whose values?
Humans want different things, individually and collectively, and I claim that bracketing this by imagining that the hard part of the problem is merely avoiding turning everything into paper clips is a major mistake.
I have seen way too much hand-waving about this and it doesn’t do this community any credit to eschew politics as mind-killing. I get why we do that, because the naivete and ideological Turing test incompetence around here is often staggering, but that means this is an area where we have failed to improve and we swim in a sea full of sharks.
Of course, “politics” is just a label for something that is really even more fundamental, which includes religion and psychology and sociology and basic human attitudes that vary in a much vaster space than most of us are used to acknowledging.
You have to get serious about whose values. Coherent Extrapolated Volition is a crock unless you preface it with a specification of who you will leave out, and how you will weight their anguish and frustration against the satisfaction of the Coherent Ones. Here are some concrete questions you need to start talking about:
(1) Is the ultimate source of an AI’s values to be an individual, a community of specified individuals, a text or code of some kind which relevant humans have hammered out somehow, a method by which the AI observes human society as a whole and forms its own synthesis of values according to some previously specified recipe, or something else?
“Which one of these sources turns out to be the surest and safest way to install the values the installers actually want?” is not at all the same question as “Which way of doing it is likeliest to give the AI the best values to install?”
Technical research won’t be enough, it will just allow the winners of the race to accomplish their goals whatever they are. This leads to the next concrete question that shouldn’t be dodged:
(2) As activists, should we push for international cooperation with suppression of unauthorized AI research, open-source research, working with a particular government, or something else?
Avoiding an AI arms race is a good top-level goal, but accomplishing that is inevitably political. This question also has potentially different answers depending on whether you care about succeeding in giving the AI the intended values or whether you care what the values actually are.
I am being kind here, because my questions (1) and (2) are still phrased in a process-oriented way which allows you all to stay comfortable, without identifying specific values and actors, but now I’m going to drill down and make you squirm. If you’re good at Noticing Confusion, the squirming should trigger that.
(3) What about God?
Most of the people on this planet ground their values in a religion. Are we to take seriously the idea that “everyone’s values should be accommodated,” or its approximation “do the equivalent of taking a vote,” or its meta-approximation “do the equivalent of what a vote would give us if everyone was able to get smarter and more educated to the extent that they wanted to,” might maximize coherence by excluding atheists? I’m not an atheist, so it’s not a problem for me as much as it is for some of you, but both theists and atheists should recognize that the dynamical system of coherent valuations might have multiple attractors, and should not assume that the one the system is headed for won’t be evil in any of the senses people use that word. (I’m not even going to get into specifically theistic concerns like whether spiritual entities are going to contribute to the process in some way; I just want us to admit that we must have something to say to people who ask what God wants.) The biggest religion by some measures is Islam, which is expansionist and problematic in various ways from the point of view of most of us here, but Christians will have their own priorities if a Singularity is being contemplated, ranging from Teilhard’s Omega Point theology to the identification of a powerful AI with entities they have been taught to anticipate will be apocalyptically relevant.
(4) What about freedom?
Read Maureen Dowd’s interview with Jaron Lanier in the 11/08/17 NY Times. I’ll wait. …. OK. Obviously we can use terms like “maximize human flourishing” to dance around the issue, but there are fundamental polarities between individualism and collectivism, between democracy and autocracy, between virtual reality and traditional lifestyles, which are going to factor into specifying values and need to be discussed much harder. Yeah, we probably want to avoid a Wirehead Matrix endgame, just like we want to avoid being Clippy, but it gets more uncomfortable when you need to start getting your hands dirty. Do you want to maximize the weighted summation from N=1 to 7.6 billion of the integral of Q(L,N)dL? You’re going to need to define Q in terms of present-subjective-mood or reflective-life-satisfaction or conformance-to-current-value-system or something, and build a time-discounting function into dL, and figure out what happens when N increases, and decide if the weights ought to all be equal, but before you can tackle that you have to figure out what is even possible. Maybe it’s important that people all have some actual input or voice or vote in the final value set, but maybe that’s impossible, and maybe we can maximize their ongoing experience by some measure but it will lead ultimately to anomie and alienation, or maybe we can give the people who want a say a say and give the ones who want money money and give the ones who want work meaningful work, but we’d better know what we’re talking about when we talk about those things. This isn’t something to be bracketed away.
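To make the ambiguities concrete, here is one way to write out the maximand that paragraph gestures at. Every symbol beyond Q, L, and N is an illustrative assumption of mine, not something defined in this post: w_N is a per-person weight, D(L) is the time-discounting function, and T(N) is person N’s lifespan.

```latex
\max \;\; \sum_{N=1}^{7.6 \times 10^{9}} w_N \int_{0}^{T(N)} Q(L, N)\, D(L)\, \mathrm{d}L
```

Each of the author’s open questions shows up as a free choice here: the definition of Q (present mood vs. reflective life-satisfaction vs. conformance to one’s current values), the shape of D, whether the w_N are all equal, and what happens to the upper limit on the sum as the population grows.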
(5) What about China?
That’s another elephant we shouldn’t ignore, and it’s necessary to integrate the perspectives of the blind men who each perceive a part of it. China is probably going to be the most important country economically, possibly militarily, possibly in AI research, and y’all don’t have much of a clue about the conversations they are having over there about the things you want to talk about over here. The biggest Unavoidable is who is in charge there, how much they control what happens, and what they want. You may not care much who is in charge, but both their values and the values of the people in China collectively (which are positively correlated) might come as a shock to you if you haven’t studied them. It’s easy to ignore what’s going on there, and there are all kinds of incentives to do so, so here are a couple of things to chew on: most Bitcoin mining happens in China (which means anyone who controls it ultimately controls the blockchain), and China already has more billionaires than the USA does. In some ways they can get things done a lot faster than Western societies; their inadequacies are not our inadequacies.
I could go on, but I want to spark a discussion so I’m posting this now, trusting that Christiano will allocate his judgy-points fairly if the rest of you build productively on what I am saying.