Tuesday, October 8, 2013

Partial response to Aubrey + another article on algorithms

Thinking about machinic language and, more generally, the way machines and humans communicate with each other, I think this newer article on algorithms has a number of points to keep in mind:

"Rage Against the Algorithms"

The article focuses on using reverse engineering methods to get into the black boxes of filtering algorithms, which points to the active (and often invisible) role that algorithms play in our online and even offline activities. I'm really intrigued by the idea that algorithms "have biases like the rest of us." First, it implies that algorithms and "the rest of us" constitute an "us," a usable group of players. Second, it forces you to think about a string of machine language as having a bias, and then to try to understand what that means. Is the bias of an algorithm merely a reflection of the biases of the person who coded it? Could the biases possibly represent some active, or at least resistant, part of the code itself?

Anyway, it's a short article, and I hope you guys find it interesting.

I won't make it to the meeting tomorrow as I don't fly back from the Big Island until later in the afternoon, but I will try to post more of my thoughts as I finish the last chapters of the book.

katie

1 comment:

  1. While I was at Berkman over the summer, we did a simple research project where we'd type "Why is x country so" into Google and record the autocomplete suggestions. Google has been sued in a few countries over these suggestions, and it defends them by arguing that the algorithms simply display the most commonly entered searches for that string of keywords (very democratic!). In a way, this defense claims the algorithm "objectively" displays results while also admitting that it is strongly influenced by the biases of both the coders and the users inputting data. Algorithms also build judgments based upon the past, which means they can do things coders and users don't expect or can't predict. At the same time, there are whole swaths of keywords Google won't give autocomplete results for, like "bisexual" or "how to murder." Of course these biases and the "resistant part" of code (I like that idea) are exactly what Search Engine Optimization schemes hope to exploit. Know your sorts!

    I saw a recent example of machines (robots in this case) learning their own language (even taboo language!) on TV, I think on "Through the Wormhole," but I haven't been able to find a reliable reference to the experiments yet. Part of the problem was that the machines learned to communicate with each other but not with the researchers studying them, so the researchers could only infer that the robots' sounds and movements were communication between the robots. It presents several interesting problems and many "zones of indiscernibility."
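The autocomplete-probing exercise described in the comment above can be sketched in a few lines of Python. Note the hedges: the `complete/search` endpoint, the `client=firefox` parameter, and the `[query, [suggestions]]` response shape are assumptions based on Google's unofficial suggestion API, which Google does not document and may change or block at any time.

```python
# Sketch of the "Why is X country so ..." autocomplete probe.
# ASSUMPTION: the endpoint, the client=firefox parameter, and the
# [query, [suggestion, ...]] JSON response shape come from Google's
# unofficial suggestion API and are not guaranteed to stay stable.
import json
import urllib.parse

SUGGEST_ENDPOINT = "https://www.google.com/complete/search"

def build_probe_url(country):
    """Build a suggestion-API URL for the prompt 'why is <country> so'."""
    query = f"why is {country} so"
    params = urllib.parse.urlencode({"client": "firefox", "q": query})
    return f"{SUGGEST_ENDPOINT}?{params}"

def parse_suggestions(raw_json):
    """Parse the (assumed) response format: [query, [suggestion, ...]]."""
    payload = json.loads(raw_json)
    return payload[1]

# A canned response in the assumed format, for illustration only
# (a real run would fetch build_probe_url(...) over HTTP and record
# the suggestions per country, as the commenter describes):
sample = '["why is france so", ["why is france so expensive"]]'
print(build_probe_url("france"))
print(parse_suggestions(sample))
```

Recording these suggestions across many countries, over time, is one concrete way to make an algorithm's "bias" observable from the outside, which is exactly the reverse-engineering move the linked article advocates.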
