Friday, October 26, 2012

Regarding Artificial Consciousness



Putnam claims that if there existed a sufficiently sophisticated android, we would lack the evidence to either confirm or deny that it was conscious. Putnam defines consciousness as a subject's capacity to have subjective experiences. The argument runs as follows:
                We start with our own subjective experiences, which we cannot deny that we have. A main reason we assume other people have subjective experiences of their own is that they talk about them as we do. Imagine you have a white table and a pair of rose-colored glasses. Before putting on the glasses you say "the table is white." You can then put on the glasses and say "the table appears red," still aware that the table remains white. One could say that this talk of a "red" table expresses a subjective experience, since it does not describe an objective reality.
                It seems inevitable that androids will be able to make this same sort of distinction between appearance and reality. An android with a human-comparable understanding of what a table is, the ability to distinguish colors, to wear glasses, and so on would say, while wearing the glasses, "I detect that the table is red, even though I know it is white." In this instance the android appears to be aware of its own subjective experience and therefore conscious.
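To see how little the report itself settles, here is a minimal sketch of the behavior Putnam describes; the `Android` class and its attributes are hypothetical illustrations, not anything Putnam proposes. The agent keeps its filtered perception separate from its standing belief about the world and reports any mismatch in exactly the appearance/reality language above.

```python
# Hypothetical illustration: an agent that separates what it senses
# (appearance) from what it believes (reality), as in Putnam's example.

class Android:
    def __init__(self):
        self.belief = {"table": "white"}  # standing world-model: the table is white
        self.filter = None                # no glasses on yet

    def wear_glasses(self, tint):
        self.filter = tint

    def sense(self, obj):
        """Return the apparent color: the believed color as seen through any filter."""
        true_color = self.belief[obj]
        return self.filter if self.filter else true_color

    def report(self, obj):
        appearance = self.sense(obj)
        reality = self.belief[obj]
        if appearance != reality:
            return f"I detect that the {obj} is {appearance}, even though I know it is {reality}."
        return f"The {obj} is {reality}."

android = Android()
print(android.report("table"))   # The table is white.
android.wear_glasses("red")
print(android.report("table"))   # I detect that the table is red, even though I know it is white.
```

That a few lines of bookkeeping suffice is exactly the problem: the verbal distinction between appearance and reality can be produced with no inner experience at all, which is why the android's speech alone cannot confirm consciousness.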
                Of course, this really only shows that the android can speak as if it had subjective experiences - as if it were conscious - which does not mean that it truly is conscious. Putnam claims that despite this uncertainty, it would be discriminatory to deny the android the assumption of consciousness, since that decision would be based solely on its physical composition. Therefore we ought to decide to view the android as conscious.
                One could say that Putnam is being a bit too generous in this decision, however. The one thing an android cannot help being is a man-made machine. At no stage of its "mental" development and learning does an android cease to be a robot created by humans. Even if at some point (not necessarily the point Putnam settles on above) the android arrives at "true" consciousness, it will not become an organic creature (in the usual natural sense) to which we can more easily relate. It seems to follow, then, that the default position is that the android is not special. If the robot is not conscious, it is not a person and therefore cannot be discriminated against. One must demonstrate quite convincingly that the android is conscious in order to overturn this default view.
                Putnam would disagree with this default view of the android. In attempting to create artificial intelligence in the hope of its attaining consciousness, one has already granted the android a sort of specialness; we want to be able to say that it is conscious. In a way, you could see the learning android as a human fetus, not yet resembling the anticipated final stage of its development. Just as many see no problem terminating a fetus at this early stage, few would say that you could discriminate against the fledgling robot. After a certain point, however, we say that the fetus is finally a person, or at least person enough that aborting it seems wrong. Just as we would want to be generous in our view of the fetus in that debate, we would want to be generous in our view of the android. We would rather accept a lesser android as conscious than discriminate against a genuinely conscious android by refusing to see it as conscious. The consequences of being wrong about an actually conscious android are much worse than those of being wrong about an unconscious one.
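That closing asymmetry can be made explicit as a toy decision matrix; the cost numbers below are invented for illustration (only their ordering matters) and do not come from Putnam.

```python
# Hypothetical decision matrix for the closing argument.
# Costs are invented placeholders; only their ordering matters.

COST_DENY_CONSCIOUS = 100.0   # discriminate against a genuinely conscious android
COST_GRANT_UNCONSCIOUS = 1.0  # over-attribute consciousness to a mere machine

def expected_cost(policy, p_conscious):
    """Expected moral cost of a policy, given the probability the android is conscious."""
    if policy == "grant":
        # Only possible error: granting consciousness to an unconscious android.
        return (1 - p_conscious) * COST_GRANT_UNCONSCIOUS
    # policy == "deny"; only possible error: denying a conscious android.
    return p_conscious * COST_DENY_CONSCIOUS

for p in (0.05, 0.5, 0.95):
    grant = expected_cost("grant", p)
    deny = expected_cost("deny", p)
    print(f"P(conscious)={p:.2f}: grant={grant:.2f}, deny={deny:.2f}")

# With these costs, "grant" is cheaper whenever p > 1/101, i.e. for
# almost any non-negligible chance that the android is conscious.
```

Under these placeholder costs, granting consciousness is the cheaper policy for nearly any non-negligible probability of consciousness, which is the formal shape of the "be generous" conclusion.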
