There is a more down-to-earth rendition of this question: Do plants perceive? After all, they are alive and, like us, are motivated to survive. The accepted answer seems to be that they do, but not in the way we do, nor do they need to. In short, plant life exhibits similarities and differences that help us address some broader questions: is perception a gradation of capacities; is consciousness a necessity; is there an essential motivation? Of course, there are contenders at both extremes who wish to argue that plants are either capable or incapable of sentient perception. In the case of machines, the motivation is not theirs, but that might change if machines become conscious – if consciousness can bypass biology. And isn’t life just another kind of mechanism? So isn’t perception a process that can be replicated mechanically? On the scientific front, perception is taken to be explicable objectively, without the prerequisite of a sentient ‘presence’ – the corollary being that a scientific explanation of perception will automatically yield an explanation of sentient behaviour and account for different levels of awareness to boot. Then there is the related question of a self-motivated inquiring intelligence; indeed, could an ‘intelligent machine’ ever begin to match a child’s imaginative perception of an event like Christmas?
‘Automata’ have been entertaining us for generations with mechanical responses that look like perceptions and intentions. Now we have versions that can engage us with conversational simulations. Altogether, they have shifted the debate onto the question of whether machines are capable of achieving a ‘functional equivalence’ – whether, for practical purposes, talk of mental states can be dispensed with. But even if mental states are considered extraneous, that doesn’t prove they don’t exist. In fact, most exponents of ‘Artificial Intelligence’ tend to skirt around the issues, as if we need only observe that artificial intelligence and artificial perception amount to alternative, perhaps superior, operational modes of what we call thinking and awareness. On the other hand, a case can be made for perception to be recognised as a confluence of two realities, the ‘objective’ and the ‘subjective’ – a confluence that has been artificially understated in the name of explanation by swapping a fact for a theory. The fact is what we know of mental states from the inside; the theory, in whose favour we have devalued that fact, is that such knowledge makes no real difference because it is more like a passive effect than an active causal component of perception.
In fact, perception makes a difference because it introduces a new sort of realisation – such as when we come to know that colour is something more than the wavelength of light. So even though we can build a robot to detect different wavelengths and name the corresponding colours, this doesn’t prove that it can actually see those colours. Nor does it matter whether my perceived ‘red’ is your ‘green’ in a world where only wavelengths count – wavelengths that don’t need to be elaborated in perception – except that reflected light has neither colour nor luminosity until perception supplies a different kind of realisation. Therefore, instead of downgrading our qualitative experiences because robots might not need them, we should be celebrating their special status. Similarly, a computer can win at chess without realising the significance of its achievement – so its victory is hollow even if it is programmed to cheer. Consequently, in the bigger picture, there may be more to reality than all we might affirm in terms of its physical properties alone – and we can equate everything to the physical world only because there are two explicit sides to the equation.