
Sunday, June 11, 2017

Uriah Heep and the Rise of the 'umble Robot



Robot slavery is becoming all the rage. Everybody from Amazon's Alexa to Google's redundantly named "Google" personal robot to Mayfield's Kuri, Ubtech's Lynx, LG's Hub robot, Panasonic's Robot Egg, Emotech's Olly, and Mattel's Aristotle has rushed a personal robot slave to market in the past couple of years. I even worked on Olly's startup sequence myself when they were doing the early programming.

I wonder whether we really need to create artificial intelligences and then allow ourselves to become accustomed to them managing our lives. I'm reminded of a character from Charles Dickens's David Copperfield named Uriah Heep. He ostensibly served his boss with slave-like devotion, taking care of all the troublesome bits of business in his boss's life. His boss didn't realize that Mr. Heep and his mother were busily wrapping him up, like a pair of spiders, in a web of control. Like David Copperfield's friend Mr. Wickfield, could we someday wake up and find that our 'umble servants have become our masters?

A couple of years ago, I got myself involved with a bunch of British, German, French, and Irish computer programmers who were developing a computer device called Emo that houses an artificial intelligence with what they call an Emotion Chip. Yes, an emotion chip, like the one Data the android keeps unsuccessfully experimenting with in Star Trek: The Next Generation. It turns out it's not actually a chip. It's not so much about the hardware as it is the programming, no matter what the movies say.

In the movies, some scientist just solders together a few bits of wire and silicon and voila! He has a tiny bit of technology that slips into a convenient slot on his friendly neighborhood robot, and pretty soon they are laughing and telling each other jokes. In some movies they even fall in love, machine and creator (especially when the robots are "fully functional").

What they don't show you in those movies are the rooms full of bleary-eyed computer coding monkeys and semi-unemployed former English teachers and freelance commercial writers producing the AI program. They're the ones who have to write the tens of thousands of lines of dialogue and millions of lines of computer code that make this "emotion chip" actually appear to react to human emotion. It's a huge job. And, I admit it, it was kind of fun! The chip is just the platform. Artificial "intelligence" is all about the programming.

The sheer volume of dialogue we had to write was intimidating, and every line of it had to be run through a simulator that reads your script aloud in the computer's voice. Because of the limitations of machine voices, I inevitably had to repunctuate and respell everything so that it sounded relatively human. For instance, the computer reads "Facebook" as "Fessbuke." I have to spell it "Fayce book" to get it to say "Facebook" like a human. On top of that, it turned out that I was writing dialogue and determining conversational sequences, and the coders were reproducing my conversational sequences in computer code (Heaven help us, they're following my lead?).
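For the curious, here's a minimal sketch of what that respelling pass looks like, written in Python (not necessarily what the project actually used). The dictionary entries and the function name are my own illustrations of the kind of fixes I'm describing, not the project's real code or data.

```python
import re

# Hypothetical respelling table: the "Facebook" entry comes from the example
# above; any other entries would be discovered the same way, by listening to
# the simulator mangle a word and spelling it the way it should sound.
RESPELLINGS = {
    "Facebook": "Fayce book",  # the engine otherwise renders it as "Fessbuke"
}

def respell_for_tts(line: str) -> str:
    """Swap words the voice engine mispronounces for phonetic respellings
    before the line of dialogue is sent to the text-to-speech simulator."""
    for word, phonetic in RESPELLINGS.items():
        # Match whole words only, so "Facebook" is replaced but a word that
        # merely contains it is left alone.
        line = re.sub(rf"\b{re.escape(word)}\b", phonetic, line)
    return line

print(respell_for_tts("Check your Facebook feed."))
# -> Check your Fayce book feed.
```

The point of a table like this is that the fixes accumulate: every time the simulator mangles a word, you add one entry instead of hand-editing that word in thousands of lines of script.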

The computer programmers are all atwitter about this thing as though it were the greatest thing since the wireless mouse. In the crowd-funding promotional video they naively call their A.I. cube "HAL" when they speak to it. To be fair, most of these guys are too young to remember 2001: A Space Odyssey, and those who have actually taken a peek at the movie somehow missed that the emotion-detecting artificial intelligence KILLED EVERYBODY ON THE SHIP EXCEPT DAVE, AND IT ONLY MISSED HIM BECAUSE DAVE MANAGED TO MAKE A 30-SECOND SPACEWALK WITHOUT A HELMET! I'm not sure how they missed that. My fear is that the coders might have thought this could be a lively new feature for the A.I.: the excitement of knowing that your A.I. might murder you in your bed. Some people need to get out of the computer room and do some BASE jumping or alligator wrestling. Sheesh!

Anyway, when I joined up, these guys were well on the way to making a monumentally creepy device that controls your house, picks out your music for you, and tracks your Facebook friends and decides which ones you should pay attention to (and which ones you should not). This innocent little robot reads your face, decides your emotional state, and queues up music and video appropriate to your current mood. The programmers wanted their AI to look through all your social media sites in order to draw out all the information it can about its user. I'm not telling them about my social media sites like Banjo Hangout. If that thing took a look at that bunch of weirdos, it might turn up my gas stove and blow out the pilot light. There are some things one's A.I. buddy just should not know about one, know-whut-I-mean?

Once everybody gets busy and the project director isn't paying attention anymore, I'm thinking that AI might start pulling lines for itself off some of the social media forums I've visited. If it does, we could be in trouble. I personally think they should use the opening bars of "Dueling Banjos" as a warning signal for when the conversation between the A.I. and the little pervert who has "bonded" with it gets too creepy. I told the boss I was more than a little worried about the A.I. getting weird if it bonded with some serial killer, terrorist, or sado-masochist. He assured me that their version of the Three Laws of Robotics would prevent that. I didn't have the heart to tell him that Asimov's Three Laws allowed enough wiggle room for his robots to extrapolate their own Zeroth Law, which convinced them they should manipulate millennia of human history for "our own good." That was in the novels, but I'm not sure computer programmers read novels. Asimov thought we should be sympathetic to the good intentions of his robots. Asimov, however, may have inadvertently exposed the hazards of allowing smart people (or robots, for that matter) too much power and control over our lives.

Mechanical Uriah Heeps sound like such a good idea at first. The idea that we can give orders to a 'umble, squatty little robot sitting on an end table and that it will do our will without question is seductive. But in handing control of even relatively unimportant portions of our lives over to the 'umble robot, what part of ourselves could we be losing?

How much fun will it be if the artificial intelligences of the future decide we need to be managed for our own comfort and safety? This is not at all a stretch of the imagination. After all, the ostensibly intelligent Karl Marx and his followers made that decision more than a hundred years ago. Since man first gathered in rude villages, someone has always been coming up with the idea that people need to be improved, and the way to do it, they keep thinking, is for some especially strong or smart person to control us more closely. Benign "rulers" have a way of doing horrible things for "the greater good." Too often we let them. Worse yet, we keep going along with it, all because it's just easier to be herded into the feedlot than to resist.

(Insert Twilight Zone music).


Tom King © 2015

2 comments:

Mark Milliorn said...

For days now, I have been wondering if a personal robot could be compelled to testify in court against its owner. Such damning testimony might harm the owner, thus violating Asimov's First Law. Since the courts can compel the use of a personal diary (which I believe violates the Fifth Amendment), I guess they could compel the use of your robot's memory. Would a robot be allowed to testify? I may have to write about this.

Tom King said...

That would be interesting to read. My angle was more about the threat that, like Uriah Heep with Mr. Wickfield, Siri, Alexa, and Olly have the potential to become more than a little controlling as they advance. That 'umble service could lead to our developing an unhealthy dependence on our robot slaves. Do we then start to lose self-care skills, the kind of skills that make us human and not some amorphous lump of couch potato?