John M. Jordan, Clinical Professor of Supply Chain & Information Systems at Penn State, recently published a wide-ranging survey of robotics technology, aptly titled Robots, in the MIT Press Essential Knowledge Series. In this November 17, 2016 interview with Zed Adams, Associate Professor of Philosophy at the New School for Social Research, he discusses some of the main themes of his book, with a special focus on the question of how success and failure in robotics should be measured, as well as what our attitudes toward robots can reveal about our own humanity. The transcript has been edited for clarity, continuity, and length. Both the interviewer and interviewee were given the chance to refine their questions and responses.
Zed Adams: In your book, you point out that although it’s easy to identify instances of robots, it’s hard to give a general definition of what makes something a robot. This is intriguing, because it seems like we know what differentiates robots from, say, humans — at least until we’re asked to give a general account of that difference. Could you say something about why the problem of defining robots is so hard?
John Jordan: Well, I don’t recall specifically how much of this made it into the final version of the book, but I’m interested in the longstanding fascination people have had with artificial life. You’re in some tricky territory when you have a human-made artifact that aspires to replicate human traits. To use an analogy, think about high fidelity. Go back to Edison’s Diamond Disc records: people at the time said it was like the opera singer was in the room with you, but if we listen to them now we don’t think that at all. And yet, as a species we have a longstanding tendency to think that we’ve approximated humanity in things like sound replication, machine vision, and muscle power; think of how Boston Dynamics’ Cheetah robot can run faster than Usain Bolt. Humanity is clearly the benchmark, and when we’re talking about humanity’s thinking capacities instead of our muscular ones, a lot of people have a large stake in saying that we’ve somehow achieved parity with human cognition.
I’m wondering if what’s going on here is that we fill in the blanks. You know those visual perception tests where I give you nine dots and your mind fills in the tenth even when it’s not actually there? It’s the same with the Edison discs. People had heard opera singers before, so when they heard the approximation of one they filled in the missing pieces, despite the fact that, objectively speaking, the sound did not remotely resemble a human voice. We do the same thing now with machines.
Z: In the book, you mention one definition that makes sense of how what counts as a robot can change over time. It’s a definition from Bernard Roth, the co-founder and academic director of Stanford’s Institute of Design. He says, “My view is that the notion of a robot has to do with which activities are, at a given time, associated with people and which are associated with machines. If a machine suddenly becomes able to do what we normally associate with people, the machine can be upgraded in classification and classified as a robot. After a while, people get used to the activity being done by machines, and the devices get downgraded from ‘robot’ to ‘machine’.” [p. 4] Roth’s definition is a comparative definition.
Perhaps our retrospective judgment that the Edison discs do not sound realistic is similarly comparative. There are newer recording technologies that we can compare the older technologies to, technologies that are able to record a greater frequency range, have less surface noise on playback, and so forth. Is there an analogous point to be made about robots? What would be the other development against which we’re measuring the Roomba such that it goes from being something amazing that we can think of as human-like to being something that seems purely mechanical?
J: Well, one thing is the role of fictional characterizations of robots. The first robot I ever saw was the PR2 out of Willow Garage; it was five feet tall and weighed about 400 pounds. That’s huge! It cost over $400,000, and people used it to learn to program robots. With a lot of work, they taught it to fold a shirt, and that was a big deal. Yet, because of our familiarity with George Lucas and Arnold Schwarzenegger movies, folding a shirt just seemed disappointing. It seemed more like a machine than like a robot. It wasn’t something that had any humanity inside it. It wasn’t going to save the world or destroy the world. It wasn’t going to be an interesting subject for a movie. No one is going to watch a movie about someone programming a robot for a month to fold a shirt, and yet it was really hard! We have this massive cognitive disconnect between the real achievements in robotics and the imaginary things that people have held up as standards for those achievements. People treat Asimov’s laws of robotics as if they were some sort of actual benchmark. No! It’s the wrong question entirely.
Z: Science fiction undoubtedly distorts our ability to think clearly about robots, but that raises the question of what would be a better standard for success. Perhaps one thing that we’ve learned from actual robotics is that many apparently mundane skills, like the ability to fold shirts, turn out to be surprisingly complex. It’s an accomplishment for humans just as much as it is for robots.
J: Right, but if I’m working at the Gap for $8 an hour, and all of a sudden there’s a robot that can fold shirts, then how do I feel about that? On the one hand, I see that as one less boring piece of work that a human has to do; but, on the other hand, that’s a job. The question of my identity as a human is related, historically at least, to what I do, to my livelihood. If robots fill the playing field enough, so that there’s less work for people to do because machines are so good at so many tasks, then what does that mean? Maybe we give everyone a universal basic income, or put everyone on disability, or maybe we end up with a two-tiered society of haves and have-nots. All of these questions of identity that robotics leads us to ask are philosophical and academic, but also very practical and even existential. If I don’t have a job, then do I have an identity?
Z: I want to ask you more about this identity question. What do you think are some of the most significant effects that real-world robotics have had on our self-conception?
J: One thing that’s really interesting is the whole question of augmentation. Instead of people versus robots, it’s people and robots coming together as some sort of hybrid. At what point do we have athletic competitions, for example, with no limit on the participation of these hybrids or on other human-robot partnerships? Whether it’s the fastest running thing, or the fastest running thing that can actually catch a ball, whether it’s people on performance-enhancing drugs, whether it’s mental, whether it’s physical, the question is always just how good the “extended” human can actually be. It’s a compelling question. Right now, competitions involving only robots are interesting to roboticists, but not to ordinary people. There is this notion of a robot World Cup, the RoboCup. At what point will we have robots who can do to soccer what IBM’s Deep Blue did to chess? It feels like a long way off. On the other hand, there’s progress every year, and at what point does that progress overtake us? Could there be a team of bots that beats Brazil at soccer? Or maybe there will be hybrids: people inside of exoskeletons who can take more punishment, run faster, get less tired, kick harder, kick farther and more accurately. I think we’re going to have to deal with that.
Z: There’s also a real desire for artificiality at play in our attraction to robots and augmentation. People can come to want the artificial thing more than the natural thing.
J: Sherry Turkle’s book Alone Together is really good on this, though it’s less about robots than it is about electronically mediated human relationships. It talks about how the very notion of solitude is now something to be feared. We’re always on our phones sending texts and tweets so that we don’t have to be alone with ourselves. We’re always busy and connected, but because we’re never fully alone we’re never fully human. What does it mean to be social, then? It becomes a process of semi-artificial humans interacting with other semi-artificial humans. It’s a pretty scary notion. Turkle has worked on all of this stuff, and I think she’s really seen how it’s come full circle. The newer generation that’s grown up in this environment isn’t really going to know what a Turing test is. For them, the Turing test is their social media news feed.
Z: That’s an interesting spin on Turing tests, because we normally think of Turing tests as just being tests of whether computers have minds, whereas you’re bringing out how they offer equally good insights into our own minds. This raises the question of what our relationship to robots can reveal to us about ourselves. One particularly striking example of this that you discuss is Snackbot, a robot developed at Carnegie Mellon. Snackbot delivered snacks to people in an office setting, and engaged in preprogrammed banter while doing so. The fascinating thing that you note about this banter is that “participants felt jealous if the bot complimented a coworker’s work ethic or healthy snack choice.” [p. 216] What do you think we can learn about ourselves from the case of Snackbot?
J: It comes down to the notion that how a robot reacts to you affects my conception of you as a person, and that we compete for the robot’s praise even when the robot isn’t an actual entity. I think that there’s something to look at here, whether it’s Cortana from Microsoft or Alexa from Amazon. All of these technologies have hard-coded responses to “How do you feel about me?” questions, responses that factor into their being anthropomorphized as intelligent agents. There’s been some talk of giving these things names like “R2-D2” or “F-300” rather than “Mike,” or “Sue,” or “Alexa.” By giving them human names, we’re jumpstarting a conversation that we don’t necessarily want to be jumpstarting.
Z: Do you think that there are dangers in building robots that encourage us to interact with them in the way we normally interact with humans?
J: The whole notion of a voice interface is different from the notion of speech recognition. Instead of typing in, say, “Find directions to Verizon Center,” we actually say “Alexa, tell me how to get to Verizon Center.” People personify the voices of the GPS in their cars, yelling back at them: “Stop telling me what to do!” “Give me time to get through this turn!” These are reactions to objects that are completely inanimate. There is not any sort of actual artificial intelligence involved. Yet people still react to it in a very embodied way. I think that this is something that the Snackbot example shows. The field of research into human-robot interaction is still very primitive. There’s a lot on how robots understand humans, but far less on how humans understand robots.
Z: One way in which the topic of how humans understand robots has been explored is through Cynthia Breazeal’s robot Kismet, which had facial features that simulated human emotions. This contrasts with many other robots, which have no facial expressions at all.
J: Have you seen what Breazeal’s new robot Jibo looks like? It’s totally deanthropomorphized. It looks like an Apple-branded tabletop sculpture. It’s just a black and white blob, and there’s really no human or animal form to it at all. The round orb at the top will sort of tilt if you talk at it, but it’s a very long way from all those ears and eyes that the earlier model had. She’s on to some very important stuff there. How does emotion get passed back and forth from the inanimate to the human? That she removed all human traits from the inanimate side of the equation is huge (of course, there’s still some human stuff — for instance, voice software — that is part of the interface).
Z: The deanthropomorphized design of Jibo connects up with another theme in your book, which is how trying to emulate naturally occurring abilities can stand in the way of making progress in robotics. You make the wonderful point that the Wright Brothers’ advancements in flight were achieved not by making a plane with flapping wings, but by coming up with a more basic understanding of aeronautics.
J: The Wright Brothers were geniuses, but not because they invented the airplane. They invented aeronautics itself, the science of flight in general! Very few people get the distinction there. What similar discoveries are we going to make about cognition? How do we emulate that, rather than doing what Leonardo da Vinci did, which was basically to flap wings like a bird and see if that could make a person fly? That’s where the state of AI is right now. People think they can take what little we understand about the circuitry of the brain and just throw transistors at it. It’s not enough. When is a transistor going to get perspiration from nerves? When is a chipset going to have its voice crack when it’s frightened while speaking in front of a large group? The mind and the body are unified.
Z: I want to ask you explicitly about what robotics can teach us about the mind/body relationship. The approach to understanding cognition that is explored in robotics involves a degree of embodiment not present in more traditional approaches to AI, but one difference between robotic embodiment and the embodiment of animals and humans is that we care about our bodies. We give a damn about the well-being of our bodies whereas robots tend not to. Are you suggesting that the most promising examples of care for a robot’s body are cases where there’s the kind of symbiosis between human and machine that you’re talking about, of augmented embodiment?
J: Think of an exoskeleton. Whether it’s just a part of my body or all of it, when I’m in that machine I am that machine; and that machine is me. There was a woman who completed the entire London Marathon in an exoskeleton. It took her a week, but the fact remained that this was a paralyzed person running a marathon. What does that mean? The machine itself didn’t do it. It’s not as if someone just set up a radio-controlled car and had that run the track instead. This was a person completing a marathon. She was augmented, but she’s still a person. These binary distinctions between what’s a person and what’s not are going to get very problematic very fast.
Z: Another way to put your point would be to say that if we approach these things with a binary distinction between the natural and the artificial, then we’ll always be led to posit an opposition between humans and robots. The picture that you’re suggesting is much more integrated. Things like eyeglasses or cochlear implants become extensions of human mindedness and embodiment. This is related to the idea that surface similarities or differences can be quite misleading. The Wright Brothers’ discovery of aeronautics didn’t involve emulating flapping wings, but they did discover something that applies both to flapping wings and to airplane wings. This would suggest that if we have robotic versions of human sensation, thought, and action, and are able to discover an underlying principle that explains both, then there really isn’t a difference in kind between the two. Do you think that there have been any discoveries of this kind in robotics, specifically with regard to sensation, thought, or action? Does LIDAR, say, represent a discovery about the nature of sensation in general?
J: No, but people have developed haptic interfaces that come close, like with the Wii. There was an experiment where people shook hands across the Atlantic using haptic graspers; so yes, we’re getting close on sensation. Also, there’s an electrical lineman in Tennessee who lost his arm and has received a robotic prosthesis. He reports that he can feel the sensation of holding his wife’s hand. You can talk about phantom limbs and all of that, but I still don’t think we’ve capped the well in terms of what the human mind can do and what the human body can do. Nerve-actuated prostheses exist. The question is whether we can develop synthetic nerve endings, but I don’t doubt that we’ll eventually do it.
Z: One worry you could have, though, is that our ability to predict what will happen next, or how long certain developments will take, is fallible. In your book you detail an amazing example of an inability to predict what was going to happen next: the case of the DARPA Grand Challenge, the Defense Department’s competition to develop autonomous vehicles. In 2004, none of the vehicles finished the challenge, which led some writers, such as Frank Levy and Richard Murnane, to predict that the practical skills involved in driving would never be replicated artificially. Yet the very next year, five vehicles successfully completed the challenge. Do you have any sense of what led to such significant progress over just one year?
J: LIDAR was a big piece of it. The availability of that technology to the DARPA teams in 2005 made a big difference. It also goes back to the question of whether you map the terrain you’re going to traverse before you do it or just adapt to what you see. Right now, a Google car can’t go into a parking lot if it hasn’t been pre-mapped. Privately they may have fixed this, but the publicly available rubric states that the car has to match the point cloud that’s been generated by pre-drives. It looks at the points it has on the map and those it has on the sensor. What agrees? What doesn’t agree? Am I okay? Am I not okay? It’s computationally intensive, whereas the “inventing your way as you go” method is what Tesla is trying to do. It’s a very different approach, one that can make use of a much bigger body of training data. It’s going to be interesting to think about whether the challenge will be creating a better map of the world before we do things in it, or creating bigger training data sets and having cars drive more miles with fewer sensors. Which of those is going to be a better model for future achievements in robotics? A rich sensor with limited trials, or a cheap, ubiquitous sensor sweep with many trials? Think about however many hundreds of millions of miles Teslas have driven. How many deer strikes do they have? Google is maybe in the one-to-two-million-mile range. How many deer strikes do they have? How many training examples does it take to teach a car the optimal response to a deer strike? I think the choice between these two models is going to be emblematic of the future of robotics.
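To make the map-matching idea concrete (the “What agrees? What doesn’t?” check Jordan describes), here is a minimal Python sketch. Everything in it is illustrative: the 2-D points, the function name, and the half-meter tolerance are assumptions, and real systems register dense 3-D LIDAR point clouds with algorithms such as iterative closest point rather than this brute-force distance test.

```python
import numpy as np

def agreement_ratio(map_points, sensor_points, tolerance=0.5):
    """Fraction of live sensor points lying within `tolerance`
    meters of some pre-mapped point (naive brute-force check)."""
    # Pairwise distances: every sensor point against every map point.
    diffs = sensor_points[:, None, :] - map_points[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    # A sensor point "agrees" if its nearest map point is close enough.
    return float(np.mean(dists.min(axis=1) <= tolerance))

# Toy example: a mapped wall of ten points, re-observed with slight
# drift, plus one unexpected obstacle the pre-drive never saw.
map_pts = np.array([[float(x), 0.0] for x in range(10)])
live_pts = np.vstack([map_pts + 0.1, [[5.0, 3.0]]])

print(f"agreement: {agreement_ratio(map_pts, live_pts):.0%}")
# ~91%: ten points match the map, one does not.
```

The ratio stands in for the “Am I okay?” question: when agreement falls below some threshold, the car can no longer trust that the world matches its map.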
Z: The choice between these two models, between one that essentially depends upon a detailed map of the environment and one that doesn’t, also connects with Rodney Brooks’ radical proposal that genuine progress in robotics might involve doing away with inner representations altogether. Do you see Brooks’ proposal as still being influential, or has it become less relevant at this point?
J: It’s important to remember that every robot we’ve built has been built for specific tasks. You’re not going to take a Rodney Brooks robot that was designed for a specific task, such as his industrial robot Baxter, and have it read mammograms. You’re also not going to take IBM’s Watson and turn it loose on a DARPA challenge. You’re not going to make it try to open a door. What gets lost here is that the algorithmic tunings are very precise. Watson would make a lousy spam filter. Forget embodiment, forget physical robotics. The fact is, Watson as configured for Jeopardy! is not going to solve credit card fraud. This whole notion of general robotic intelligence is very far off.
Z: One way to take what you’re saying is that robots as we imagine them don’t exist and may never exist. They remain in the realm of fiction, yet the long history of these fictional representations continues to influence how we think about them. As you put it, “It’s difficult to recall an emerging technology with deeper roots in science fiction than robotics.” [p. 43] Do you think these fictional representations of robots also motivate many of our fears about them?
J: There’s something to that, but I also think this fear comes from, for example, the fact that Foxconn, the company that manufactures Apple products in China, has 50,000 robots on its assembly lines. They predict that they’ll eventually have a million. Right now, they have 1.4 million employees. How are those numbers going to change in the future? There’s nothing humanoid about something that can weld or hermetically seal a glass screen into a metal case. It doesn’t have to have a head or arms or legs. It still affects people’s jobs. I think there’s an economic fear that’s very real.
In 2000, Goldman Sachs had 600 people who traded stocks. How many do they have now? Two. That’s high-end cognitive work, so this isn’t just gas pump attendants and bank tellers. This is radiologists and even truck drivers. Otto, the self-driving-truck startup, just drove a truck 120 miles delivering beer in Colorado. We had a self-driving semi-truck on a public road. Something that’s really interesting: in previous dislocations, when muscle power was displaced by mechanical power, it was the “lower end” of society that was hit. You know, diggers, farmers, movers of stuff; the container ship came and the longshoremen are all gone. What happens now, though, when all the surgeons go away? When all the radiologists and stock traders go away? It doesn’t matter what color collar you have. You could be vulnerable.
Z: How long do you think it will be before we will see robotics impacting society on an even larger scale?
J: We already saw it in the presidential election. Take something that’s largely macroeconomic and technological, like steel mills. Everyone wants to bring back the steel mills, but you can’t! You aren’t going to undo fifty years of labor history, macroeconomic investment, and trade deals to make it cost-effective to make steel in Pittsburgh again. There are no mills in Pittsburgh anymore. There are lots of medical facilities, as well as lots of tech research and development. I think that if you put the face of the immigrant other on this process of technological displacement, you get people very excited.
Also, look at how much fake news was circulated before the election. There was more fake news on Facebook than real news, and a huge amount of Twitter traffic was just bots. I don’t think it’s going too far to say that these things might influence an election. This whole notion of what’s real comes up here. If I’m reading something that says “Mike Pence said…,” and a bot put that in front of me, then there’s a sense in which whether I click on it or not is a real Turing test. This affects the democratic process, because all of a sudden I’m forming opinions based on utter fiction.
The speed and volatility of future progress in robotics will be shocking to those who aren’t ready for it. In 2004, the notion of a self-driving car was a thirty-to-fifty-year proposition. There were smart people saying this. Even two years ago, they thought that a computer that could effectively play Go was ten years out. We had one within a year. I think that things are getting really advanced really fast. There are customer trials going on for self-driving cars right now in Pittsburgh, which is not an easy place to drive! You have a whole fleet of self-driving Uber cars that are taking fares. If you had asked three years ago who would be among the first to market self-driving cars, Uber would not have been on your list. Ten years ago, Dell was still the leader in mid-range business servers. Who would have said then that Amazon would eat them for lunch, not by selling boxes but by selling computing? Nobody, but Amazon has basically dismantled Dell and HP. The whole mid-range server market has just gone away. The fact that the competition could come from a really weird direction and come really fast is scary.
The most striking thing about watching the field is that some stuff that seemed really hard isn’t, and some stuff that seemed easy is hard. People thought chess was hard. It turns out that chess is easy and picking up a bottle is hard. The whole question of what’s hard and what’s easy is dynamic. It goes back to the first part of our conversation, and to what it says about our conceptions of robots that we aren’t necessarily even calling them robots anymore. One way to get around this nomenclature problem is just not to address it. You call robots personal assistants or augmented vehicles. Uber is probably very careful not to call them robo-cars.
Z: Do you think that anything is lost when we lose the term “robot” and the associations that come with it?
J: I think that the terms are problematic and that anything we do to avoid them is actually okay. “Personal assistant” is a reasonably accurate description of what Alexa does. People ask: is it augmented humanity? Is it artificial intelligence? It doesn’t really matter. It’s better if we just get around the nomenclature and get things done, and I think that this is what’s going to happen more and more. There have been robots carrying medical supplies around medical centers for fifty years now. Nobody calls them “robots” in a Hollywood sense, though.
Z: Are you suggesting that robots might go from being thought of as a real possibility, albeit one that doesn’t yet exist, to being thought of as something that will only ever exist in fiction? Ghosts might be an example of this. Your attitude to ghosts really changes if you go from thinking of them as a real possibility to thinking of them as completely imaginary fictional entities. Could something similar happen with how we think about robots?
J: Think about when Steve Jobs launched the iPhone. It wasn’t really a phone at all. It was an ultra-portable supercomputer. Nobody called it that or thought of it that way, but that’s what it was and still is. The simplified naming of these things is used to avoid robotic associations. You see this also in a military context. Drones aren’t called robots, even though that’s what they are. The military doesn’t even call them drones anymore. They call them UAVs, and they’ve been incorporated into foreign policy without any real debate. There’ve been no reports to Congress on the number of drone deaths, and no unclassified cost-benefit analyses of drone warfare.
Z: In that case, retaining the terminology might be beneficial in that it would force those questions to the surface. People have some conception of what we’re talking about, after all, when we talk about robots.
J: There’s the baggage that comes with the terminology, which is why, in a typical military fashion, they give some faceless bureaucratic name to these kinds of things. They didn’t refer to napalm as napalm. They probably had some acronym for it that conveyed nothing of the horror that the actual substance wrought. This is the genius of Dr. Strangelove. It shows how people can accept the most horrifying of realities if you just neuter them linguistically. What you call things matters.
Z: One final question: if you were to recommend some books to read after reading your book, what would they be?
J: Michael Lewis, The Undoing Project: A Friendship That Changed Our Minds (W. W. Norton & Company, 2016); and a movie: Spike Jonze’s Her (2013).