Consider a yellow coffee cup. It’s sitting on your desk. You periodically grab it to take a swig. How might this work in the brain? Perhaps there’s a mental map of the desktop, with the cup represented in that map. On that account, you look at the cup to fix its exact position, then direct your arm to reach for it.
In Being There, Andy Clark speculates differently, using something he calls a personalized representation. Rather than representing the totality of the cup, you just remember that “it’s the yellow thing.” It’s far easier to recognize the color yellow than to distinguish one distorted torus from all the other three-dimensional shapes. It works even when you’re using your peripheral vision, which is very bad at detail. So it’s cheaper in this situation to represent the cup by its color.
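If you rendered the contrast as code, it might look something like this toy sketch. It’s mine, not Clark’s, and every name in it is invented:

```python
from dataclasses import dataclass

@dataclass
class Blob:
    color: str                     # cheap: peripheral vision gets this right
    shape: str                     # expensive: pretend this cost real processing
    position: tuple[float, float]

def find_cup_full(scene: list[Blob]):
    """Distinguish one distorted torus from all the other 3-D shapes."""
    return next((b.position for b in scene if b.shape == "distorted torus"), None)

def find_cup_personalized(scene: list[Blob]):
    """The cup is just 'the yellow thing': one cheap color test."""
    return next((b.position for b in scene if b.color == "yellow"), None)

desk = [Blob("green", "cylinder", (0.2, 0.5)),
        Blob("yellow", "distorted torus", (0.4, 0.1))]

print(find_cup_personalized(desk))  # (0.4, 0.1), same answer as the expensive way
```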
One day, I was working on a podcast script. To the right of my laptop were two green cans. One had delicious La Croix fizzy water in it. (Say, National Beverage Corporation, makers of La Croix: I’m open to a sponsorship deal.) The other was also a La Croix can, but empty. Between the cans and the laptop was a stack of green 3x5 notecards. Green, green, green, yet I unerringly grabbed the right object. And I did it without shifting my attention from the screen, relying entirely on peripheral vision.
My suspicion: there’s not even a personalized representation here. Instead, there’s muscle memory. When I want a drink from a can, I just repeat the movement that put it there, which is the same movement I used the last time I fetched it – because (I noticed as I thought about what I do unthinkingly) I always set the can down in the same place. In fact, that place was where the now-empty can had been while I was drinking from it. After I shook the last refreshing drop from that can into my mouth, I put it down farther away from me and deliberately moved the new can to where it had been.
This is completely non-representational, in something like the way described in the Pengi episode: I fling my hand out along a particular trajectory, slowing at the end, with the expectation that somewhere around the end there’ll be something grabbable. When the hand feels the cold shape there, that affordance triggers another remembered movement that takes the can to my mouth. It doesn’t matter how the hand came to grasp that shape. (If the can had unexpectedly been missing, the hand would have “thrown an interrupt”, causing me to look over at where it should have been, much as I would if my peripheral vision had glimpsed a predator.)
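The control flow I have in mind is roughly this cartoon – again my own invention, with made-up names, not a model of actual motor control:

```python
class NothingGrabbable(Exception):
    """The hand's 'interrupt': nothing was where the can should be."""

def fling_hand_along_remembered_trajectory():
    print("flinging hand out, slowing near the end")

def bring_can_to_mouth():
    print("raising can to mouth")

def look_where_it_should_have_been():
    print("interrupted: looking over at the spot")

def grab_can(something_grabbable_at_endpoint: bool):
    fling_hand_along_remembered_trajectory()   # no map, no planning: just replay
    if not something_grabbable_at_endpoint:
        raise NothingGrabbable                 # the hand escalates to attention
    # Feeling the cold shape is itself the trigger for the next movement;
    # how the grasp happened doesn't matter.
    bring_can_to_mouth()

try:
    grab_can(something_grabbable_at_endpoint=False)
except NothingGrabbable:
    look_where_it_should_have_been()           # vision as fallback, not driver
```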
Another piece of evidence: in the case above, I was working at a longish rectangular table with plenty of room for cans. On another occasion, I was working at a small café table, where I automatically put the can behind the laptop screen. Nevertheless, I unerringly grabbed the can even though I couldn’t see it.
Proprioception is the sense that tells you where the parts of your body are in space, what angle your joints are at, how much force your muscles are exerting, and so on. Whereas Clark described grabbing based on vision, I suspect proprioception – self-monitoring – has more to do with it.
It’s possible the brain barely gets involved at all, in much the way it’s not involved in jerking your hand back from a hot stove. (The instruction to pull away comes from the spinal cord, which only passes along the “Ouch!” signal to the brain after the fact.)
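In programmer’s terms, it’s as if the withdrawal is handled by a local interrupt handler, and the brain only gets the log message afterward. One last toy sketch, all names mine:

```python
events_for_brain = []   # the slow channel upstream

def jerk_hand_back():
    print("hand jerks back")

def spinal_cord(stimulus: str):
    if stimulus == "hot":
        jerk_hand_back()                   # handled right here; no brain consulted
        events_for_brain.append("Ouch!")   # reported only after the fact

spinal_cord("hot")
print("brain receives:", events_for_brain.pop(0))  # arrives once it's all over
```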