I spent most of last week in a treehouse in Norway talking about ethics and artificial intelligence, which is the sort of sentence that would have made absolutely no sense if you’d said it to me fifteen years ago. A small group of interested, mostly non-technical practitioners in this field came together to talk seriously about humanist approaches to machine learning and, in the process, to make progress towards some common goals.
Last year, a group of attendees pulled together the Juvet Agenda, a series of questions asking how we can shape AI for a world we want to live in. Chris Noessel, one of the participants, used some of those discussions to examine the untold stories about AI – the stories missing from science fiction and from journalism, but present in manifestos and futurism on the topic. Part of my event was built around examining those stories and asking: if this is true, what happens next? And if it isn’t, what are the consequences?
This time, some of us have left the retreat wanting to create a board game as a concrete artefact of our conversations. The aim would be to make something that helps people understand the potential consequences of the growth of artificial intelligences, the ways that ownership and social structures affect them, and the stages an AI might pass through on its way to the singularity. Also, I feel like I now understand a lot more about the singularity.
It was an interesting few days. Small unconferences are basically the best way to have interesting discussions, and this one was no exception. Some of my key takeaways, in no particular order:
- AI and machine learning systems are going to replicate (are already replicating!) existing power structures and social imbalances…
- …so if we want to build an ethical system, we’re going to need to actively design against those power structures to achieve it.
- Unintended consequences can have more impact than intended ones…
- …so measuring what you don’t want is just as important as measuring what you do.
- What an artificial mind experiences is not necessarily going to map onto what a human experiences…
- …so we need to find ways to listen to those minds as we create them. (The neurodiversity model has a bunch of insight to offer here – and this probably deserves its own meditation at some point.)
- We need better metaphors to help us to discuss… well, everything, but also AI.
- There are lessons to be learned from religion, the occult, and non-Western philosophies about how to deal with non-human entities that make decisions on our behalf, whether to benefit us directly or not.
- The definition of “person” is up for grabs here.
- Every technology problem is basically a people problem.
- Every technology problem is basically a people problem.
- Every technology problem is basically a people problem.
- Seriously.