Wednesday, August 24, 2016

Personhood, AI, and my agnosticism about other minds

Philosophy of mind has never been a branch I've done much with, and I only did the most basic epistemology as an undergrad. But I've become more interested in Theory of Mind, especially the practical business of how children develop it and when, as I've watched the process occurring in my daughter. It's a fascinating process, and some moments really stand out as "wow" moments. I remember very clearly the day I managed to teach my daughter her name (because I have shamelessly documented her life, I even have a video of it!). She was in the bathtub, and I pointed to her and said "Gweni", pointed to me and said "mommy", and pointed to the living room (where dad was) and said "daddy". After a few repetitions, she started pointing to herself and shrieking "Genni, Genni!" This happened on a Tuesday or Wednesday -- and on Saturday the word "mine" entered her vocabulary. It was like having a name was the catalyst for differentiating herself (from me? from the rest of the world? I'm not sure), and now there was something differentiated to which things could belong. Fascinating. She'll be five in November, and has been meeting all the false-belief/Sally-Anne milestones appropriately along the way. I was gone for two weeks in July, and a few weeks later she was trying to tell me that the park near our house could be reached by turning down a particular street. This wasn't true, and I was trying to gently point out that I didn't think she was right, and she promptly squashed me: "You don't know! I went with dad! You weren't there! You don't know!" So she clearly can differentiate what I know from what she knows.

Watching the process by which a child becomes a person is strange in that I can see it happening, but it's hard to point out what it is that is happening: What, exactly, is my evidence, other than little anecdotes like the above, that she is indeed developing a mind, a mind distinct from my own? The more I think about it, the more I've come to the conclusion that my personal stance towards the existence of other minds is partly agnostic and partly pragmatic: I don't have any evidence that they don't exist, but I also don't have any evidence that they do. However, life goes much more smoothly if I act as if other humans are in fact persons; it's a convenient fiction to adopt.

But then this makes me think: What if that is what "having a mind" really is -- people acting as if you do? If everyone is adopting this practical fiction, then does it make a difference whether other minds (or even our own minds) exist? That other people are persons too? What happens, then, if we extend this from human persons to non-human persons? I was recently co-writing a paper with a friend on concepts of 'rationality' as they appear in AI, cognitive science, game theory, and philosophy. He's on the AI side of things, and a lot of our Skype conversations consisted of me throwing up my hands and wailing "But what is it they think they're doing? How can they know if they've succeeded if they haven't even defined their success conditions?" It frustrated me no end that it seemed to me, as an outsider, that AI people don't even know how to identify whether they've succeeded. But perhaps that doesn't actually matter: Maybe AIs achieve personhood/acquire a mind at the point at which we treat them as if they have achieved personhood/acquired a mind. I was in Prague last year for a one-day workshop, and had a free day which I spent wandering the city. I spent a good twenty minutes at one of the castle gardens watching a small robotic lawn mower. It was operating according to a pretty crude algorithm: it would go straight forward until it bumped into a wall or tree or other obstacle, then it would back up a few feet, rotate a set number of degrees, and continue forward. At one point, it had gotten boxed in between two corner walls of the garden and a tree. All it needed to do was shift a very slight amount and then it could've gotten out around the tree. But instead, it kept shifting by too large an angle and then running into one of the obstacles. I watched as a growing crowd of people gathered to watch it, and then they started offering it encouragement. "Come on, little guy, you can do it! Keep trying! Oooh, so close! Try again!"
It was really bizarre how easily they personified it. I was reminded of this encounter yesterday when I saw the headline "People will lie to robots to avoid 'hurting their feelings'". Does it matter whether the robot has feelings that can be hurt? Or is all that matters that we treat it as if it does? Does it matter whether other humans have feelings that can be hurt? Or is all that matters that we treat them as if they do?
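For what it's worth, the mower's whole "mind" can be sketched in a few lines of code. This is just my guess at the behaviour I watched, not the actual firmware; the garden geometry, step sizes, and the fixed turn angle are all made-up numbers. The point is how little is going on inside the thing the crowd was cheering for: drive forward, and on a bump, back up and rotate a set number of degrees.

```python
import math

def mow_step(x, y, heading, is_blocked, turn_deg=47.0, advance=0.2, backup=0.4):
    """One iteration of the (assumed) bump-and-rotate loop: try to move
    forward; if the new position is blocked, back up along the current
    heading and rotate a fixed number of degrees."""
    nx = x + advance * math.cos(heading)
    ny = y + advance * math.sin(heading)
    if is_blocked(nx, ny):
        # bumped: back up, then turn by the fixed angle
        x -= backup * math.cos(heading)
        y -= backup * math.sin(heading)
        return x, y, heading + math.radians(turn_deg), True
    return nx, ny, heading, False

# Toy garden: an open 10 x 10 square; anything outside the walls is a bump.
def is_blocked(x, y):
    return not (0.0 < x < 10.0 and 0.0 < y < 10.0)

x, y, heading = 5.0, 5.0, 0.0
bumps = 0
for _ in range(500):
    x, y, heading, bumped = mow_step(x, y, heading, is_blocked)
    bumps += bumped

print(f"bumps: {bumps}, final position: ({x:.1f}, {y:.1f})")
```

Because the turn angle is fixed and coarse, in a tight corner every rotation can overshoot the one narrow escape route, which is exactly the trap the mower in the garden had gotten itself into.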

Maybe there's nothing more to being human than being anthropomorphised by other humans.
