June 19, 2015

“An attitude towards a machine”




The British film Ex Machina, written and directed by Alex Garland, is an intelligent dramatization of the issues raised by artificial intelligence. Caleb, a young programmer working for Bluebook (the counterpart of Google in the film’s universe), is invited to spend a week at the company owner’s secluded refuge in the wilderness. It turns out that Caleb is to conduct an enhanced version of the Turing test on an advanced robot, Ava, constructed by the owner, Nathan. Ava is designed as a female. If she can convince Caleb that she loves him, he will take her to be conscious, and she will thus have passed the test. Of course, Caleb knows from the start that Ava is an artefact, which raises the stakes. I shan’t give the plot away for those who haven’t yet seen the film.
Ex Machina might be suitable for a film and philosophy course. It gives plenty of food for thought. There is ingenious play with various degrees of embodiment – to travesty Wittgenstein, Ava’s body is the worst picture of her soul (Philosophical Investigations, Philosophy of Psychology – A Fragment, § 25). The erotic tension gives a wholly different twist to the intellectualist preconception of the original Turing setup.
Standing before a Jackson Pollock painting, Nathan explains that the greatest difficulty is how to recapture a similar effect of controlled spontaneity in the robot.
            There are, by the way, some intriguing allusions to Wittgenstein – maybe as a gesture to his grappling with the problems of souls and automata (e.g. Philosophical Investigations § 420). Apart from the search engine Bluebook, which is explicitly said to be named after the Wittgenstein text, a copy of Gustav Klimt’s portrait of Margaret Stonborough-Wittgenstein (Ludwig’s sister), posing erect, cool, composed, is prominently displayed – in the room in which the female dresses and faces of robots are stored. (Gender roles are also a theme of the film that might be explored.) And on one occasion, Ava shows Caleb a doodle she has made and asks “What is this?” (an allusion to Culture and Value, p. 24). Caleb then teaches her to draw.
            Like virtually all stories involving artificial intelligence, this film remains a fairy tale along the lines of Frankenstein or Pygmalion – not a prediction of a possible development. The missing link in all stories about artificially created consciousness is the question how a human creation is supposed to be endowed with a life. The real deus ex machina here is Ava’s supposed desire to survive. But where does her desire come from? Or rather: where does the machine’s disposition to secure its own continued functioning come from?

27 comments:

  1. I'm not sure if you're suggesting that a disposition to survive would be the crucial distinction between a person and a machine, or that such a disposition couldn't be built into a machine - but either way, I don't think that quite captures the nub of the problem (and apologies if you're suggesting neither).

    One key thing is that even if the robot was programmed to behave in ways that mirrored our own survival instincts, that would still only amount to a kind of parody of humanness. The fact that the behaviour was consciously designed by humans would in itself count against considering the robot to be a person. It would be a cleverly designed replica, and not the real deal.

    Having said that, I'm not sure that there would never come a point where our attitude became an attitude towards a soul (or person). It might be that a sufficiently human-like creation simply provoked that attitude in us, whatever our intellectual qualms. To be honest, I couldn't be sure unless you put one in front of me and I got to interact with it. Anything else would just be guessing.

    However, it's worth reminding ourselves just how amazingly complex such a robot would probably have to be in order to pass the test. The range of characteristically human forms of behaviour is well-nigh endless. We forget things (and are irritated by it); we form irrational attachments to things; we grow old; we get bored; we become obsessed then suddenly drop our obsession and move on to something new; we are insecure about our appearance - or vain, or self-loathing (and sometimes all three at once); we fall in and out of love; we delude ourselves; we find important things funny and fight to the death over trivialities; we are eloquent or tongue-tied (and sometimes being tongue-tied can speak volumes); we feel like killing ourselves - and sometimes do; we are suddenly overcome by an inexpressible sense of beauty or joy or transcendence; and on and on and on.

    And all this is expressed by words, voice, facial expressions, gestures, glances, the way we walk, etc. Even a person's sneeze can be full of his or her individual personality.

    Good luck designing a robot that captures even a fraction of that complexity.

    Replies
    1. Thank you for your thoughtful comments. I believe we’re on the same page.

      I spoke about the robot’s “desire to survive” because that seemed to be its core driving principle in the film. Besides, the idea that computers might be disposed to struggle for their “survival” apparently lies behind the fear of people like Stephen Hawking that artificial intelligence might one day overwhelm humankind and destroy it (why else would they be bent on doing that? but aren’t we here in the grip of a Darwinian picture?). But on the whole, the issue of motivation tends to be ignored in speculation about AI. The idea is that once a computer can beat a grand master in chess, the road is open to an unlimited approach to the human; that the only problems remaining are technical ones. But it seems to me the idea of implanting wishes, desires, feelings in a computer raises conceptual issues, not just technical ones.

      The computer in Ex machina is supposed to have a huge repertoire of facial expressions, etc, drawn from an enormous database. But then you want to ask: what do the expressions express? On what principle are they organized? Or are they simply random? Human expressions, for the most part, are neither organized nor random. If I may express the point metaphorically, they normally grow organically from the life of the human being. They may of course also be deliberate, employed manipulatively, but then they are fake expressions, say, of joy, grief, anger. (What is genuine and what is fake is not of course a matter of skill: one person may express fake grief more "convincingly" than somebody else who is really grieving.) In the case of the robot, on the other hand, I don’t see how the genuine-fake distinction can even get a grip. I believe this is in line with your suggestion that a robot would only present a parody of humanness.

      Just as the distinction between genuine and fake expressions does not seem to get a grip on a computer’s manner of existing, neither does the distinction between genuine and fake desires. (The Ex machina computer is not just set on securing its own continued functioning, it is apparently also guided by a “desire” to “stand” on a busy street corner and “watch” the bustle of life. However that is to be understood.)

    2. "But on the whole, the issue of motivation tends to be ignored in speculation about AI. The idea is that once a computer can beat a grand master in chess, the road is open to an unlimited approach to the human; that the only problems remaining are technical ones. But it seems to me the idea of implanting wishes, desires, feelings in a computer raises conceptual issues, not just technical ones. "

      This is exactly the issue! No one discusses this in the 'literature' surrounding AI and its ultimate aim of domination. Why would a machine not be depressed, or apathetic?

  2. A few related comments:

    Might we recognize as human something that is less than human? – Might we, for instance, have a tendency to “read” humanness into something that (we would be able to recognize) is sufficiently human-like but not quite, as we read cheerfulness into a bird’s singing (“singing”!), or complicated thoughts into a baby’s eyes?

    A different point: I’m not sure this is precise, but I don’t know how to make it more so: To a certain extent, in this movie as well as in others, the human robot is imagined as a psychopath (or as what people imagine a psychopath to be). This seems to be the go-to fine difference in shades of behavior—the shade of humanness movie makers use to portray robots. (Americans are in general fascinated with psychopaths.) – I’m not sure we know how to imagine the human-like robot; I’m not sure we quite have a readymade attitude towards it in our form of life.

    Another point: Is recognizing something or someone as human an all-or-nothing issue? The idea of the Turing test sometimes makes it seem so. But might we, for example, recognize robots as humans for certain purposes but not others—e.g. voting rights but not love, eligibility for medical care but not for becoming a clergyperson or competing in the Olympics? And do we have just one attitude towards the human, or are there many different attitudes involved—attitude towards a child, attitude towards someone of the opposite sex, attitude towards our peers, attitude towards the old, attitude towards the insane, the foreigner…? – Partly I’m asking that in response to what Phillip says is involved in being human. I think we don’t demand that everything towards which we have an attitude-towards-humanity exhibit, or be capable of, all the behaviors Phillip mentions. We don’t expect children, for instance, to be capable of self-loathing. Might the human-robot become just another shade of humanity? And if so, might your question, Lars: “how a human creation is supposed to be endowed with a life” be broken into an array of different questions?

    Replies
    1. Thank you, good questions. It is good to remind us that what might be meant by “an attitude towards the human” is far from unified or determinate.

      When Wittgenstein speaks of an attitude towards a soul and contrasts it with being of the opinion that someone has a soul, in PI II (PPF § 22), he does not seem to be invoking any familiar, ready-made notion; rather, he seems to be making a rhetorical gesture, encouraging us not to regard our view of other human beings (and, by extension, of other living beings) as capable of feeling pain, of seeing, hearing, being frightened, etc., as somehow thought-grounded – a hypothesis based on evidence – but rather as a kind of framework for the way we see, describe, and respond to others. And the way this attitude manifests itself will vary in myriad ways depending on the context.

    2. Thanks Lars, I agree completely.

      Part of what I was trying to say sort of piggybacks on what you say now. In a way I was trying to interpret your question “how a human creation is supposed to be endowed with a life?” and I was trying to argue that answering it does not require a discovery of facts, but a discovery of an attitude. – Is this something you would agree with? I also said that my sense is that we don’t quite have the required attitude in our form of life (yet?), perhaps because we have never needed to actually deal with such robots, except when fantasizing about them. (And I take it that partly this is why there is room for your question, as I understood it.) But if this is true, the consequence seems to be that answering your question is not something that we have the resources for in our form of life. And if this is the case, then your question is more of a riddle than a question. It becomes: “what would our lives and world be like if we recognized robotic human creations as having some sort of human life?” – Is that how you meant your question?

    3. That’s a good way of formulating the question (I hadn’t actually thought of it in that form). I believe there is a temptation to think that for our words (e.g. “human being”, “desire”, “think”) to have meaning we must have determined for each possible case whether the word applies or does not apply. But that is certainly an illusion. Permit me to quote myself from an earlier paper. P. F. Snowdon had suggested that “[t]he normal view about language is that we have conferred upon its terms an interpretation in such a way that determines them as true of or false of certain merely possible ... situations” (‘Personal Identity and Brain Transplants’, in D. Cockburn, ed., *Human Beings*, 1991, p. 121). My comment was:

      “It is not made clear in what way this conferral is understood to be taking place. Is it something we supposedly do *in* using language? If so, I should be inclined to retort that in using language we are not normally interpreting or conferring interpretations on words. Or is it something that we do in addition to merely using language? But then it is not clear how important this conferring of interpretations is supposed to be for our use of language, or what relation to the use of language it is supposed to have.” (“Imagination and the Sense of Identity”, in *The Limits of Experience*, 1994, n. 11.)

      So one might say: there is no way of inferring from our responses to fictitious stories of robots today what will be our likely responses to various types of humanoid robots that we may encounter in the future; much less what will become the established ways of treating them and referring to them.

    4. I just happened to read Wittgenstein's *Philosophical Investigations* § 68, which seems very pertinent here:

      '... I can give the concept of number rigid boundaries in this way, that is, use the word “number” for a rigidly bounded concept, but I can also use it so that the extension of the concept is not closed by a boundary. And this is how we do use the word “game”. For how is the concept of a game bounded? What still counts as a game and what no longer does? Can you say where the boundaries are? No. You can draw some, for there aren’t any drawn yet. (But this never bothered you before when you used the word “game”.)

      “But then the use of the word is unregulated, the ‘game’ we play with it is unregulated.” — It is not everywhere bounded by rules; but no more are there any rules for how high one may throw the ball in tennis, or how hard, yet tennis is a game for all that and has rules too.'

  3. Hi,
    thanks for an interesting post and an interesting discussion.

    Does not talking about “an attitude towards a soul” somehow make humanness seem superficial? I take it that none of you mean that we simply apply a certain attitude to a certain something – something which is, in some sense, indeterminate in advance. It is not as if we start out with a bunch of blank entities and then choose to apply different attitudes towards them (some we give the label “animal”, some “human”, and some “robot”). This attitude towards a human being is, in other words, not just a matter of choice. Did I get this right? (Perhaps Hertzberg meant something like this when he explained the Wittgenstein quote: that Wittgenstein was basically just saying that seeing a human being is not making a hypothesis. The word “attitude” is, I guess, mainly there to make a contrast.)

    On the other hand, some of your comments seem to suggest that we cannot know what it would be like to meet a human-like robot, and thus that we cannot say much about this question. We cannot know what attitude we would take towards A.I. until we meet a sophisticated A.I. I wonder what this means. In what sense can’t we say?

    Cartwright writes: “The fact that the behaviour was consciously designed by humans would in itself count against considering the robot to be a person. It would be a cleverly designed replica, and not the real deal.” I think this is an important point (and Hertzberg, I believe, made a similar point). The life of the robot would not be its own life, but something that someone has designed; something that is dependent on the designer’s understanding of life. And related to this: who would we look to (for blame, or for an apology) if the robot harmed someone? The designer or the robot? And if one says that we might choose to look to the robot for an apology, this would not just be a choice we made on our own behalf, but also an attitude we would adopt towards the victim. We would not really take the suffering of the victim seriously, would we?

    These are, of course, not predictions of what we will or will not do in the future, but I think they do define some framework for what it would mean to act as if A.I. were people. (An important note: we do not act “as if” other people were people; they are people, and we act (hopefully) accordingly.) Thus, whatever attitude we would take towards A.I., we have kind of already defined a relationship towards them in the very concept: in that they are something someone has designed, one way or another. This, I think, would make any attempt to treat them like humans just a psychological trick, even though we might choose to ignore that the life the A.I. was showing is just a function of the design. On the other side, treating people as if they were merely something we should try to redesign (for instance when a person commits a crime) is treating them inhumanely.

    Would this not make Reshef’s suggestion of giving A.I. voting rights somewhat absurd? Would it not then be the designer who was doing the voting (I don't think any amount of randomness would change this)? As I said, we might of course choose to ignore this, and if everyone did, in appearance everything would seem as if robots were voting. But would we not just be deceiving ourselves? Even though in some sense what we would do may be open-ended, I don’t think it means that it is completely open-ended and that nothing definite can be said.

    -MK

  4. For what it’s worth, here are a few comments in response to what you say, Anonymous.

    Regarding the question whether we can up and choose an attitude: I take the will to be involved here in a complicated way. The attitude towards a human being is not a matter of choice, as you say—but only in the sense that we cannot choose it in the same way we can choose what to have for lunch. There is a sense in which we can choose an attitude, and this is closer to the sense in which we can choose what our attitude will be towards gay couples. First, the question has to open up for us, and this doesn’t normally happen with things. But sometimes it does. If, when I look at a gay couple, I only see abomination, and if the question opens itself up for me, then I might be able to engage in all sorts of imaginative thinking, and perhaps one day it will dawn on me that I’m actually witnessing a relationship. I’ll discover that I need a certain terminology to describe what I’m witnessing. A new attitude will force itself on me, so it will not be something that I just chose randomly. But I will not be completely passive either. It will be an achievement of the imagination. I will have succeeded in seeing something I was previously in some sense blind to. And something like that might happen with the robots; who knows. We might be able to see something into those robots if they were complicated enough and in the right ways. And if this happens, then it might actually force itself on us that the robot is something of which it makes sense to ask for an apology.

    The answer to some of your questions depends, I think, on what exactly we imagine. And the truth is that I don’t quite know what to imagine. There are all sorts of robots to imagine. Science fiction gives us all sorts of representations of “human” robots. Lt. Cmdr. Data of ‘Star Trek: The Next Generation’ is one that’s designed to be less ambiguous. We are supposed, I think, to think of him as human—to adopt this attitude towards him. The robot in ‘Ex Machina’ is more ambiguous. There is an air of riddle surrounding that movie, and part of the riddle is exactly about what attitude we are to have towards that robot. In our life we don’t quite have the room (conceptual room) for this kind of thing, probably because we don’t have this kind of thing, just as we don’t have Neanderthals around. (What exactly would be our attitude towards Neanderthals?) So I disagree with your saying: “whatever attitude we would take towards A.I., we have kind of already defined a relationship towards them in the very concept: in that they are something someone has designed, one way or another.” – That is, unless we imagine something fairly specific, the mere fact that someone has designed it might not be sufficient to define an attitude towards it. So, I take myself to be playing the imaginative game the movie makers suggested when I say that we don’t have a readymade attitude towards this robot. If this kind of robot becomes a significant part of our lives, we’ll have to find the proper attitude; we’ll have to do the imaginative work—to find or even invent the right language. And I don’t know in advance what the result will be. Given its capabilities and behavior, and given our own needs and desires and beliefs and all sorts of other things, we might end up thinking of it as a mere machine, or we might think of it as a lesser human, or a different kind of human, or like a different species, or we might find a completely new attitude. And it might also be the case that this imaginative work will change what we think of as human.

    Replies
    1. Thank you for stimulating thoughts, M.K. and Reshef.

      It would be a good idea to distinguish between different kinds of thing that we might have in mind in speaking about ways of relating to humans and to machines. One may be speaking of a psychological phenomenon, the funny feeling we may have, say, when we discover we have misidentified something or are unsure how to relate to it. As when, in a wax museum, we realize that what we have been staring at is a real human being and not one of the dolls. An analogous experience: I remember, as a child, being faced with a person in Santa Claus guise – even when I was old enough to know it was a family member, it still felt weird; I wasn’t able to relate to him in the normal way. Perhaps one day I might get a similar kind of sinking feeling when faced, say, with a highly developed computer endowed with a convincing ability to “take part” in a conversation: I *know* it’s been designed to create the impression of being human, and yet I can’t avoid the feeling that I’m talking to a human being. This is a bit like being presented with a clever conjuring trick.

      Here is another context: a person confronted with someone in obvious distress may respond with concern or compassion. If he doesn’t, we will perhaps consider him callous or obtuse. *Which* of those will be his reaction is an expression of who he is (and may to some extent be dependent on the will), but the fact that one of those descriptions – this spectrum – will apply is not dependent on him. And vice versa: regardless of how he behaves, his way of treating a sack of potatoes will amount neither to compassion nor to callousness. Even indifference is a response to the human (or, as it might be, to the animal), not to potatoes.

      Here the ascription is made primarily from a third-person point of view. It is dependent both on how the person behaves, how he feels, etc., and on the object in relation to which he is responding. Indeed, the very description of his response may be conditioned by the object – or rather, by what the object is taken to be. A friendly smile will only be directed at living beings, or at what are thought to be living beings. But the lines aren’t sharp here: a person may well be said to kick his car out of frustration; of course this description does not have the same depth as when we describe people’s anger and frustration with their fellow human beings.

      So the question about robots might be phrased as follows: how does the insight that a certain sequence of robotic behaviour was the product of someone’s design affect how we would describe a person’s responses to that behaviour? I guess this question will only genuinely arise once we face (to the extent that we face) an actual urge to use human terms for a robot. Still we might speculate: what would a situation be like in which we were tempted to attribute responsibility to a robot (hold it to “its” words; accept its apology; give it the right to vote; sentence it to a fine or to prison)?

    2. ". . . how does the insight that a certain sequence of robotic behaviour was the product of someone’s design affect how we would describe a person’s responses to that behaviour?"

      Wouldn't it depend on what kind of design we mean? If the behaviors are canned, however cleverly, reflecting the designer's intent to replicate human behaviors in as convincing a way as possible, then that would be one thing. But if the designers somehow managed to construct a system which is capable of autonomous responses on par with the sorts of responses humans make, then that would be a very different matter.

      Whether we could tell them apart or not is yet another concern, but assuming we could (because it's kind of hard to expect human designers, no matter how clever and forward-thinking they are, to foresee every eventuality), then just being designed by a human, instead of by natural selection in nature, would not seem to be a barrier to our treating the AI as we would a human (always assuming that there is enough similarity to the human being in the AI's responses).

      Just being an organic, flesh and blood sort of creature isn't likely to be enough to warrant human-like treatment if AI research ever gets to the point where it can design the sort of complex, autonomous action-generating system that we are. On the other hand, what we actually know about the genesis of any such system we happen to come in contact with in such a world might not be a whole lot, in which case treating human-like machines like humans would seem to be the most natural thing in the world for creatures like ourselves.

    3. I’m inclining towards the view that it doesn’t make much sense to speculate about the type of designs that might make us want to treat robots the way we treat human beings. (For one thing, the notion “the way we treat human beings” is not unified. Thus, some of us welcome refugees with food and mattresses, others want to turn water cannons on them to keep them from crossing our borders.)

      You say: “if the designers somehow managed to construct a system which is capable of autonomous responses on par with the sorts of responses humans make then that would be a very different matter.”

      The question is: what kinds of responses would those be? How do we decide whether a machine’s behaviour is on a par with human behaviour? Perhaps matters should be turned around: it’s when someone feels inclined to respond to a machine as somehow more than a machine that she is likely to say: “its behaviour is on a par with yours and mine” – not the other way round.

      If the plastic flower looks exactly like a real one, does it matter that it’s plastic? Well, will it really look the same to you after you’ve been told it’s plastic? --- If you know it’s plastic, you won’t wonder where it was plucked, nor will you be surprised that it keeps for so long. Indeed you can keep it in its vase year after year. Will that not make a difference to how you look at it? (I’m not saying it should. If you’re fine with it, then good.)

      “being designed by a human, instead of by natural selection in nature, would not seem to be a barrier to our treating the AI as we would a human”

      Of course, to speak of something being designed by natural selection is a metaphor. Natural selection doesn’t have designs. People, like plants and elephants, grow as they do, turn out as they do. Nobody meant anything by it (except God, if that is your outlook).

    4. If a flower is plastic, not organic, there will be some facts about it that make it so, even if we hadn't discovered them initially or on our own. If brought to our attention or otherwise discovered, our thoughts about the flower, and our way of treating it, will change. If an AI's behavior is canned, preprogrammed to seem as if it were the spontaneous responses of a genuinely autonomous, intelligent agent, it's not unreasonable to suppose that that will show up eventually, though in theory at least, given a finite timespan, we could be fooled. If we come to a point where all observations convince us that the AI is responding as an autonomous agent (the kind we recognize in ourselves), then what it's made of, or even what it looks like, will hardly matter. All we need is recognizable behavior, and that will include language.

      In theory at least there would seem to be no reason why a designed system should not be a matter of designed autonomy, as opposed to offering only the APPEARANCE of such autonomy. Only technical issues might prevent the former sort of design. But just being designed, rather than naturally occurring, would not, it seems to me, lead us to act differently toward an entity that behaves with human level autonomy, or oblige us to.

      The issue isn't what the entity looks like or its genesis but what it does. Our inclinations to act toward it are likely a function of how it acts and thus what we see in its actions. Such inclinations are likely the outcome of evolution.

      What sort of behavior may count then? I'd say that behavior that equates to human behavior is the sort that expresses an underlying awareness, i.e., reveals, in its operation, intentionality about what is being done. Here the intentions of a third party, qua designer, are irrelevant. What we want to see is behavior that's an expression of the acting agent itself. The measure of that will be novelty in the entity's behaviors and a consistency in that novelty that persists over a period of time – a period we count as normal for naturally occurring intentional agents like ourselves. Divergence from that standard raises questions, and significant divergence answers them in the negative.

      Can an AI ever warrant treatment as a human being then? I don't see why it would not under conditions like these, whatever the status of today's science and available engineering skills to achieve that might be now.

    5. I'm sorry I'm late in replying.

      Why should we care whether a flower is made of plastic or consists of cells? Its molecular structure is hardly important in itself – that would be a strange superstition. It seems to me that if we do care, it is because we know a real flower has grown from a seed or a bulb, we know it will wither one day, it needs to be cared for, etc. If we are interested in a plastic flower, it is because of other things: we may admire it for being a skilful imitation, or discuss how you can tell it’s an artefact. (Someone who had only ever come across plastic flowers would probably think about them in a different way: perhaps he would wonder who ever thought of creating objects of that shape. But to us, they are imitations.)

      Most robots in the world today have been designed with some specific practical purpose in mind. They may be used for cleaning the house, or piecing together a car, or caring for patients in a hospital, or guarding an industrial plant. Robots may also be made for the sole purpose of testing how human-like we can make them. Of course these devices might one day be developed to the extent that people were frequently fooled by them, even on closer contact. (Would that be a case of “designed autonomy”?) But if we told people the robots had been constructed with the primary aim of behaving in a human-like fashion, I expect their inclination to treat them like human beings would soon vanish.

      It’s hard to imagine, on the other hand, robots just accidentally becoming human-like.

      As for “behavior that … expresses an underlying awareness, i.e., reveals, in its operation, intentionality about what is being done”, I’m not sure what to do with that notion. Doesn’t the behaviour, say, of a house-cleaning robot reveal a fair degree of intentionality about what is being done?

    6. Of course there are reasons to care about the makeup of a flower. A plastic one may look good on the mantel but not close up, and it won't offer a fragrance or attract bees (a good thing if indoors). It will outlast the cellular kind but won't grow and change. We might care about the AI's constitution if we were interested in its durability or other physical capacities, but not if we're interested in the autonomy of its behaviors, which would be the crucial thing for an AI.

      Fooling someone into thinking the AI intelligent in a human way may only have to do with their seeing it operate in a particular environment where the designers have already scoped out the range of possibilities, e.g., animated entities in a Disney scenario. But if the point is to give the AI human-like capacities so it can operate in a human way in the world, then canned programming would likely be insufficient.

      In that case, what the AI is made of, or even what its genesis is, will be of relatively little interest. What will matter is the interacting behaviors the machine exhibits, its capacity to interpret and respond to unanticipated events in an effective way. We want to know if there's some kind of mental "life" because uncanned behaviors would require a thinking, intentional entity. And being that kind of entity would only be ascertainable via behavioral interactions, just as it is with others like ourselves.

      A designed capacity to respond that takes into account x number of possibilities could fool observers only as long as those x possibilities, and only those, are encountered. But reality isn't like that. An AI designed to operate autonomously would have to be the kind of machine that could continue to convince us in scenarios not anticipated by its designers. In a sense, the machine must be its own designer, a designer of its own behaviors, even if it's basically running on an operating system and program(s) built into it by a team of engineers to allow it to generate unplanned responses to unforeseen events. Being designed needn't imply canned responses, though it may be the case that achieving the non-canned sort is just a quantitative step (or several) up from the level of design at which only canned responses can be generated.

      When we meet a person we don't wonder whether there's a real person there, someone we can talk to (if we know the right language and the other has the right capacities, e.g., good hearing, adequate brain, etc.), who can be expected to respond to us in an autonomous way. We don't think we have to look inside or peer into his/her eyes to find a soul. We know humans are like us because they act like us. Why would it be any different with the right sort of designed machine, as long as it has physical capacities we are capable of recognizing and relating to? If a rock could speak we certainly couldn't understand it, but maybe a lion's speech could be recognized and understood under the right circumstances. And this is less about forms of life (as in grasping the lion's world) than about its entering ours. If a robot could speak (and act in novel ways in interplay with ourselves), why wouldn't we understand it as a fellow thinking entity (assuming its language and behaviors were sufficiently accessible to us)?

      Intentionality is seen in behavior but not as some special, isolated phenomenon. It's not as if there's an intentional organ in us or a quality to be picked out. What are our intentions but the array of expectations, beliefs, needs and wants of the acting agent expressed in our acts?

    7. - So what you’re saying is: if at any given moment we can’t tell, even from close up, whether a flower is plastic, then it doesn’t matter. And that’s fine, though probably others would think it does matter. There’s no “right” or “wrong” here, of course.

      “What will matter is the interacting behaviors the machine exhibits, its capacity to interpret and respond to unanticipated events in an effective way.”

      - I don’t think there’s much doubt that robots can be incredibly sophisticated in the skills dimension. One could imagine travelling around the globe with a robot valet, a glorified Passepartout, who was able to tackle every foreseen and unforeseen contingency along the way. But how would it do for a travel companion? What kinds of conversations would you carry on with it?

      “We know humans are like us because they act like us. Why would it be any different with the right sort of designed machine as long as it has physical capacities we are capable of recognizing and relating to?”

      - Well, what does “acting like us” come to? Would we share a meal or a drink with a robot? Would it need to rest after work, or go to sleep at night? Would it go on strike for higher wages (come to think of it, what would it actually need money for)? Could we discuss parenting with it, compare childhood memories, talk about our feelings about death – our own or that of those we love?

      Might a robot be cantankerous or jovial, flighty or single-minded, self-preoccupied or in need of constant reassurance? Would it prefer listening to Schubert or Lady Gaga?

      Of course we might, as you say, supply the robot with canned lines and responses on these various topics (just as we might supply it with receptacles for food and drink, etc), but if we knew the lines had been canned, would we still relate to its words as to the words of another human being? If the argument is to show that origins don’t matter, the provenance of its lines should not be kept a secret.

      My guess is that under these circumstances I would not, in the long run, find the company of robots all that meaningful. Maybe others would not mind as long as the robot could “respond to unanticipated events in an effective way”.

    8. I think that perhaps we're thinking about this in terms of different expectations. It seems to me you're concerned with a question like 'when does a manufactured entity (the AI) warrant being treated like a person across the full panoply of features we expect of persons?'

      Obviously, without a common physical constitution and a common history there would be no such time. A machine made of silicon chips and wires and pieces of metal, without birth and childhood experience in its history and the long record of interactions with human beings behind it that we have, would certainly be different enough from us in a cultural/behavioral way that it might never count as one of us in the sense of being a fellow human being. But that's not the only sense of "one of us" there can be.

      The question I have in mind is when does it become sensible to treat an AI as one of us in terms of cognitive capacity? When can we say the AI has attained the state of having a mental life (even if its physical life is radically different than ours)?

      What would such a mental life entail? It would be the capacity to think about things, to form concepts and make connections between thoughts, to apply reasoning in an aware way, etc. Theoretically an alien life form could have that without being one of us as a fellow human being and so be worthy of treatment as a "person," too, even if it were not a human person. It's that sort of condition I think the AI could attain if its programmers, engineers, etc. could build into it the capacity to act autonomously (without following pre-programmed scripts incorporated into its programming by parties outside itself).

      Of course there would have to be programming of some sort for such an entity, though. But it would be whatever programming is required to establish a mental life for the entity, whatever is needed to give it the capacity to think and act with the kind of mental features we have: awareness, aboutness, reasoning, valuing, etc.

      We don't seem to know how to do any of this yet, but perhaps there's nothing in principle preventing us from doing it at some point. And then the question is likely to be: how should we treat this particular alien intelligence? Like a machine capable only of doing our bidding, or like a person capable of having interests and needs of its own?

    9. The issue of what could possibly incline us to treat a machine as “a person capable of having interests and needs of its own” seems to me to be precisely what you’ve failed to address throughout.

    10. I thought that was precisely what I was addressing albeit with different expectations of what counts as personhood than you've brought to the table. If a machine entity could be constructed with cognitive capabilities equivalent to our own, then I'm suggesting that that would be enough to warrant treating it as a person in the most important sense, namely as an autonomous entity with interests.

      What else would we need to treat it thus? If the machine were a fake (had only canned responses to offer) then, once that became clear (as it would, given enough exposure to it), we would say we'd been fooled though we still might feel twinges and such in stepping back from the illusion. But if the illusion did not manifest itself as illusion because the machine never slips out of "character" no matter what occurs given unforeseen scenarios, then what grounds would we have for ignoring the interests it reveals to us in our interactions with it? Would it be enough just to say it's made of wires and chips instead of flesh and blood like us so it doesn't count? I'm suggesting that it would not.

      Could such a machine be like the "female" entity in the film? I haven't seen the film yet so I'm in the dark as to specifics but, in principle at least, if a machine with a female form could also be endowed with human-like self-awareness (in possession of needs/interests which it feels in a way that's broadly equivalent to what we recognize as human sensibilities), then it's unlikely to matter what the entity's actual history was. A canned history would be enough, I think, for the machine to "see itself" in the way humans see themselves (though canned behavioral responses would not, given open-ended scenarios, be enough to keep us convinced for the duration).

      We're talking here about a machine with enough of a mental life to "see itself", and that's still a tall order. In this case it's not inconceivable, at least to me, that we would find ourselves treating it like a fellow human, even a female one, if its behaviors and forms were appropriate to that gender identification. Of course, producing such a machine is beyond our capabilities, but it's fiction so, for this exercise at least, that can't matter.

      This leads to what is, I think, the real question: What is it about us that makes us the kinds of things we are? Is it our organic form? Our type of "life history"? Or is it just having a certain kind of mental life (the thinking capacities we recognize in ourselves)? I would say that that's really the point of films like this. It's not just whether or not such a thing could be built, or whether or not we might find ourselves confused when confronted by it. We likely would. The important question a story like this poses is: what makes a conscious creature conscious?

      Knowing we can be fooled by exhaustively programmed behaviors is one thing and I think we all can, for a while at least, given the right scenarios. But what counts as fooling us is something else and I'm suggesting that, if the programming produces autonomous behaviors, the kind that require awareness (thoughts about things and about a self), then the constitutive material and the "life" history questions aren't what really matter.

      What matter are the behaviors themselves, to the extent they are recognizable as such by us. If we don't look into our fellow humans for their mental life, when we meet and greet them, but read it in their behaviors (in how they present), then why would we think we need more where a machine is concerned? That is, it's the behaviors that show the interests and needs and it's reasonable to say that such behaviors can only be effected, over the long term anyway, in a non-canned way, i.e., that they demand the presence of the mental in the behaving entity that makes interests and needs a reality.

    11. I find expressions like “self-awareness”, “autonomy”, “mental life” hopelessly vague and somewhat question-begging in this context.

      I haven’t spoken about what *warrants* treating, or speaking of, a machine as human, simply about what might in practice make us do so. And basically, my point is that it hardly makes sense to pronounce on this beforehand, until we actually feel inclined to speak and act in such ways.

      What I’ve always found puzzling is how people who write about AI seem to assume that by making machines more sophisticated we will somehow end up endowing them with *interests*, *wants*, *needs* and suchlike. This seems like a conceptual conjuring trick.

      Here’s the thing: machines become smarter and smarter. They are able to perform tasks in an increasingly independent fashion, developing an ever more complex behavioural repertoire, learning from experience, acting effectively in totally novel situations, etc, etc. However, nothing here has been said about their coming to have needs, wants or interests. Their mode of being, as it were, is altogether instrumental. They are not entities for which things matter.

      Maybe we want to say, e.g., that they have an “interest” in acquiring the information they “need” for solving the tasks. But those are not *their* needs or interests, they are the needs and interests of the tasks – nor are the tasks “theirs”. (We may set them the task of becoming more “human-like”. But even that is then a task.) We need to keep this in mind. If we don’t, we lose our bearings in the whole discussion. Metaphors are OK as long as they don’t trip us up.

      Perhaps what inclines us to overlook this is the form of words used in speaking about AI. We say that machines are smart, that they perform tasks, carry out calculations, learn stuff, etc. We speak as if machines were subjects, agents, just as human beings and animals are. And then we think they may become self-aware, struggle to survive, etc. But these ways of speaking (which are of course highly practical) come from our comparing machine processes to things done by living creatures. We forget that for a machine to do something is in a different logical category from a human being or a dog or hedgehog doing something. (This is not a technical but a conceptual difference.)

      Again: if we want to speak about, say, a robot being self-aware, the question is: what are we to think of as the self of which it is aware: is it this particular piece of robotic hardware, or the program, or this particular hardware in combination with this program, or what? And similarly: which entity is supposed to have needs or interests? These questions help dissolve the idea of AI as constituting a subject or agent.

    12. "I find expressions like “self-awareness”, “autonomy”, “mental life” hopelessly vague and somewhat question-begging in this context."

      It likely has to do with the capacity of language to refer to these kinds of phenomena. Yet we can hardly get along without speaking about these sorts of things. They come with the territory of talk about human life.

      "I haven’t spoken about what *warrants* treating, or speaking of, a machine as human, simply about what might in practice make us do so."

      That's all I mean by "warrants" here, as in what would make us feel that that was the right way to respond?

      "What I’ve always found puzzling is how people who write about AI seem to assume that by making machines more sophisticated we will somehow end up endowing them with *interests*, *wants*, *needs* and suchlike. This seems like a conceptual conjuring trick."

      Well, something produces these phenomena in us. Assuming it's our neurological systems (what else could it be?), there must be some way they do it. One reasonable possibility is that they do it in something like the way computers do what they do to allow machines to behave in seemingly intelligent ways. What machines seem to lack is the internal/mental life we find in ourselves. The question then is: what accounts for this? Is it something left out of machine programming, or something machines really could never do? I think the best conjecture at this point is that it's something left out which, if discovered, can then be included.

      "Here’s the thing: machines become smarter and smarter. They are able to perform tasks in an increasingly independent fashion, developing an ever more complex behavioural repertoire, learning from experience, acting effectively in totally novel situations, etc, etc. However, nothing here has been said about their coming to have needs, wants or interests. Their mode of being, as it were, is altogether instrumental. They are not entities for which things matter."

      That's the question, i.e., what is it to have needs, to have things matter? My tentative conclusion: it's to have the features we find in ourselves (aboutness in thinking, awareness of things and of ourselves as selves, etc.). What do these consist of? It seems to me it's the array of our perceptual and conceptual experiences in all their variety. What remains to be seen is whether or not AIs can be built to have such things, too. If they can, it won't matter that they're different from us in some particulars or in "life" histories. To the extent there's enough similarity to grasp what they are, recognizing a mental life in them through their behaviors is all that's needed to prompt us to treat them as fellow autonomous entities.

      (continued below)

    13. (continued from above)

      ". . . Metaphors are OK as long as they don’t trip us up."

      I think there's more than metaphor at work here though.

      "Perhaps what inclines us to overlook this is the form of words used in speaking about AI. We say . . machines are smart . . perform tasks, carry out calculations, learn stuff . . We speak as if machines were subjects, agents, just as human beings and animals are. And then we think they may become self-aware, struggle to survive, etc. But these ways of speaking (which are of course highly practical) come from our comparing machine processes to things done by living creatures. We forget that for a machine to do something is in a different logical category from a human being or a dog or hedgehog doing something. (This is not a technical but a conceptual difference.)"

      It depends on the type of machine. We're a kind of machine, too, after all. There's no reason to think being organic is the only way to have cognitive capacities rather like ours. Then the logical categories converge.

      "Again: if we want to speak about, say, a robot being self-aware, the question is: what are we to think of as the self of which it is aware: is it this particular piece of robotic hardware, or the program, or this particular hardware in combination with this program, or what? And similarly: which entity is supposed to have needs or interests? These questions help dissolve the idea of AI as constituting a subject or agent."

      If we examine what it is to be a self and aware, what do we find except a particular set of recollections and their relations to current experiences? Why should we think a machine couldn't be programmed to have a similar sense of itself (even if different sensory equipment might be generating its experiences)?

    14. You write: “There's no reason to think being organic is the only way to have cognitive capacities rather like ours. Then the logical categories converge.”

      Our worry – or at least my worry – hasn’t been about “cognitive capacities”. I don’t deny there may be a use for cognition talk in connection with machinery. My point, on the other hand, is that people tend to speak very loosely about stuff like sensations, desires, volition, etc, when talking about AI. (I think reflecting about this is useful because it forces us to get a clearer view of psychological concepts.)

      People’s thinking on these matters seems to drift along two currents. One is to think that as we make machines ever smarter, i.e. ever more effective in dealing with new tasks in novel situations, then somehow along the way they will automatically develop sensations, volitions, etc. I see no reason whatsoever to believe that.

      The other is to think that a designer can *supply* machines with sensations, volitions, etc. just as one would supply them with cognitive capacities.

      Take sensations: the idea, I suppose, is to program “expressive behaviour” into the machine. Say, to make it say “Ouch!”, emit groan-like sounds, maybe produce tears etc, in connection, say, with certain kinds of malfunction in the hardware. Let’s call that supplying the machine with the experience of pain.

      All right, then, but we should keep in mind that this differs from the way we speak about pain in humans. With people (and to some extent with animals) we may distinguish between their being in pain and expressing pain. We think someone may be in pain without letting on, or pretending to be in pain when they’re not. I’m not here invoking the Great Mystery of Solipsism, I’m just recalling what might be called the conceptual multiplicity of pain talk in the case of living beings. I don’t see how that multiplicity can be introduced in the case of machinery without begging all sorts of questions. And without it, I think our attitude to machine “pain” would be different, in important ways, from our attitude to pain among the living.

      Again, take volition: what would implanting voluntariness in a machine come to? In the case of people, we distinguish between voluntary and involuntary actions: sneezing (in most cases) is involuntary, signing a check is voluntary (unless done under hypnosis, etc). This is bound up, among other things, with what we may hold a person responsible for. Well, then, how is this distinction to be implemented in machinery? Are we to imagine the machine emitting the words “I mean to …”, “I didn’t mean to…” etc., from time to time? But then, what would those vocalizations be bound up with, if they’re not to be totally arbitrary?

      We need to resist the temptation to insist that what makes a word (say, “pain” or “voluntary”) applicable to an individual at a given moment in time can be exhaustively spelled out in terms of whatever occurs within that individual at that exact moment. To get a clear view of what words like these express we should take a wider perspective on the human context in which they are used. For this reason, thinking of them as referring to “phenomena” is misleading. Because of this the idea of applying these words to machines in anything like the sense they have in speaking about human life seems very farfetched.

      Talk of cognitive capacities is different. In their case, it’s mostly a matter of hitting the mark, getting things right. If this can be done on a machine (say, it can “reach reliable conclusions” on being fed the relevant data and being “asked a question”) then I can see the point of attributing those capacities to it for certain practical purposes (never mind that the human context for cognition is widely different). But I can’t see much point in attributing pain or voluntary action to a robot.


    15. What we attribute to a robot will depend on what we take to be going on in it. And that will depend, from a scientific perspective at least, on what we can put into it technologically and, in a behavioral sense, what we recognize the robot to be doing. These are different questions, though not unrelated.

      The first will depend on what we discover brains do in creatures like us and whether that's replicable on a machine, and whether we can develop the capabilities for replicating it. The second on our capacities of observation, given an alien medium such as a machine platform.

      When we encounter other organic entities like ourselves, recognition is easy, normal, because it's built into us. We recognize body movements, gestures, sounds - all as a kind of language: the language of life, our kind of life. What we react to with familiarity is the presence of intentionality in the other, a subjective dimension, existence of a mental realm (along with the range of physical movements of the other). We recognize mental life, taking such entities to be like us in this way: They know their world, and, given sufficient capabilities, they know us as we know them. Yet a machine, given the different medium, may be like us only in some ways. But, at least at this stage, there's no reason to pre-judge the case, to presume such a machine can never be enough like us to have a roughly comparable mental life of its own, the kind of thing that generates autonomous behaviors.

      Cognition in us consists of more than just problem solving in a computational way. There's awareness of the problem, too, at least at some level, and awareness of solving it -- or of trying to. Such awareness consists, in part, of perceptual experiences, recollections and associations. These are hard to enumerate distinctly and exhaustively, and we should expect them to have a different "look" in a machine. A different "feel." Heat and cold, hardness and softness would affect machines differently than ourselves because of different constituent materials and, perhaps, perceptual systems to carry information about such things back to the processing part of the neural system.

      If the experiences are different enough, the behaviors generated might not even be recognizable to us. But can we assume they never would be? Or that there could never be enough going on in such entities to produce some behaviors we can recognize as intentional, as dependent on a sort of mental life, too?

      If we can't, then we cannot foreclose the option, suggested by Ex Machina, that minds may some day be artificially produced in machines or that, at least in some cases, we will be motivated to treat such machines in ways akin to how we treat each other.

  5. Hi,


    "Well, what does “acting like us” come to? [...] Could we [...] compare childhood memories?".

    In my view these questions go deeper into the problematic of artificial intelligence than questions that bear simply on our personal judgements. You (Hertzberg) say that "Maybe others would not mind as long as the robot could “respond to unanticipated events in an effective way”." But isn't the question here how we should understand such responses? I mean, if someone thinks of an A.I. as having a "real" life, this can be an expression of certain expectations of what a relationship with such an A.I. can be. It is not just a reaction to what happens in a certain situation.

    The part that having a childhood, with everything it involves, plays in a life is also an important part of what it means to have, for example, a friendship. Learning another person's background and about her childhood is learning to know that person. Learning that that person did not have a past is not just another disappointment in a human relationship, like learning that a friend has a criminal background. It is not something one could brush off by simply stating "I guess we can't be friends after all" (even though it may appear that I am here trying to legislate the use of the relevant concepts, I do not believe I am; but perhaps there is no confusion about this). The difficulty of having a relationship with a person with a criminal past is nothing like the difficulty of having a relationship with a "person" with no past whatsoever. In fact, in the latter case it can hardly be described as a difficulty, since many and various aspects of that relationship would simply fall away. We may still, of course, use the term "friendship", but that "friendship" would not thereby come any closer to a friendship with a person with a past.

    The disappointment on finding out "what it really is" (the A.I.) is not just a change in attitudes, and there is reason to say, I think, that in such a situation the person was under an illusion about what it meant to have a life. Because of this, that person considered the A.I. as having a life. In other words, that person was not just under an illusion about her own appreciations and preferences, which revealed themselves later; the illusion was about what human life was like. In that sense I would say that that person was under the misconception that the robot had a life.

    Perhaps someone might think that this feature of "life" is contingent upon our conception of life today, and that this may change (meaning that sometime in the future we might not put such emphasis on people having a past). The problem, in my view, with this sort of reasoning is that if such a change in our attitudes towards life occurred, it is not clear whether we, as we live today, would even recognize that new life as a life to begin with. It would not just be a change in the use of the term "life"; rather, the question is whether such a society had any concept that we would identify as something similar to what we mean by life. Another way of putting this is that it is hard to imagine a historical development in terms of changes in the use of the concept of "life" without there being a point where we would have to take a stance on whether a way of understanding life is in fact deeply inhumane. This could not be a discussion that happened in neutral terms, such as a pure description of different uses of the term "life", without such a description in itself being an expression of an attitude towards what life is.


    -M.K.

  6. I agree, I think, with everything you say.
