‘Do you think that the machine you are reading this story on, right now, has a feeling of “what it is like” to be in its state?

What about a pet dog? Does it have a sense of what it’s like to be in its state? It may pine for attention, and appear to have a unique subjective experience, but what separates the two cases?

These are by no means simple questions. How and why particular circumstances may give rise to our experience of consciousness remain some of the most puzzling questions of our time.

Newborn babies, brain-damaged patients, complicated machines and animals may display signs of consciousness. However, the extent or nature of their experience remains a hotbed of intellectual enquiry.

Being able to quantify consciousness would go a long way toward answering some of these problems. From a clinical perspective, any theory that might serve this purpose also needs to be able to account for why certain areas of the brain appear critical to consciousness, and why the damage or removal of other regions appears to have relatively little impact.

One such theory has been gaining support in the scientific community. It’s called Integrated Information Theory (IIT), and was proposed in 2008 by Guilio (sic) Tononi, a US-based neuroscientist.

It also has one rather surprising implication: consciousness can, in principle, be found anywhere where there is the right kind of information processing going on, whether that’s in a brain or a computer.’
The above quotation is from an article by Matthew Davidson, “What makes us conscious?”
I’m curious about what kind of theory is being proposed here. The proposal has an ambiguity similar to that of the Turing test.
Is Tononi putting forward a stipulative definition: “let’s call a machine conscious if it has such-and-such characteristics”? If so, it would of course be a mistake to suppose that this constituted a discovery. Anyone is free to formulate stipulative definitions. Whether they come to enjoy wide acceptance depends on their utility for the purpose the stipulation is meant to serve.
Or is he proposing an empirical theory? But then the question is: how is it to be tested? By what criteria are we to establish whether the theory holds good in a given case?
Not only does he seem not to provide a criterion; the very idea of proposing some specific mark of consciousness somehow seems misconceived. The problem, as I see it, is that the word “consciousness” has a variety of uses. There are of course the regular down-to-earth uses, as when we say “Jack regained consciousness a little after 5 yesterday”, or “I wasn’t conscious that I had to file a tax return by April 1.” The word is also used in more abstract ways, as in discussions of animal consciousness, etc. In these contexts, however, it does not refer to some specific mental phenomenon, but rather functions as an umbrella term, referring to the applicability of words such as “sensation”, “perception”, “intention”, “awareness”, “attention”, etc. And applicability is – to put it crudely – a practical matter.
What tends to lead us astray here – to put it briefly – is our inclination to accept the dualist idea that consciousness is a specific metaphysical substance which makes such things as sensations possible.