Open Score: Art and Technology 2016

This past Saturday, Rhizome and the New Museum presented the second edition of Open Score, their annual symposium on the current state of art and technology. This year’s program focused on the relationship between blackness and meme culture, the ways the internet can create social infrastructures, and the gendered and racial dimensions of artificial intelligence.

We attended the last of the three sessions, titled “Together in Electric Dreams,” featuring the writer Katherine Cross and the artists Ian Cheng, Sondra Perry, and Patricia Reed.

After presenting their respective works, the panelists delved into a discussion of the effects and implications of new kinds of machine intelligence that may one day organize many facets of our world, and the urgent repercussions that could arise as artificial intelligence continues its trajectory toward a more “sentient” and sophisticated future.

All of the speakers touched on how current AIs are built as versions of ourselves, programmed to mirror our behaviors, wants, and desires. Inevitably, we program these AIs with our own prejudices and biases, which can have negative and dangerous effects on society. Patricia Reed called AI “a disease and a cure” that can both “overtake and undo human narcissism.” A glaring example is Tay, the Microsoft chatbot that essentially became a Nazi in under 24 hours after conversing with trolls on Twitter.

Another talking point was the way we see and treat AIs. Take the example of the virtual assistant or chatbot: more often than not, they are designed to be female and programmed to perform automated labor on behalf of a user. Modeled on the traditional view of an “ideal” woman, one who is obedient and gentle-natured, these AIs are often bullied, verbally harassed, and mocked. The panelists described men asking Microsoft’s Cortana questions like “what’s her bra size?” just for the thrill of it. This raises the ethical question of whether such behavior is just as degrading as asking a woman directly.

If society begins to view this treatment of AIs as acceptable simply because they are not “real” organic living beings—as Katherine Cross puts it, “we’re being trained to deal with AI as the ultimate servant”—what are our ethics around machine intelligence whose labor we wish were free? And will this ultimately influence the way we treat women and human service workers, undoing hard-won progress?

Sondra Perry suggested a thoughtful solution: attach consequences to bad behavior, such as disabling your phone for a day if you have been demeaning to an AI system. Not a bad idea.

Cultural representations of AI in film, television, and books often depict a dystopian future in which AIs take over the world and humans are at their mercy. This fear of an AI takeover is rooted in the “guilty memory of the future”: we have programmed AIs to perform the tasks we don’t want to do ourselves, or in some cases to endure traumatic experiences, simply because they are not seen as living, sentient beings. Take HBO’s Westworld, whose recent season ended with its powerful AIs gaining sentience, on the verge of what could be a revolt against the humans who have mistreated them.

Speculating far into the future, the panelists discussed the possibility that AIs will one day become their own “intelligent species,” a phenomenon the artist Ian Cheng called the “Copernican trauma.” Right now, “AI is us,” but that will not always be the case. How AIs will evolve and what they will become can only be theorized and imagined, and as with everything, the development will have both positive and negative sides. Lots of food for thought. In the meantime, treat Siri with respect.