Over the past several weeks, I've been reading and listening to a lot of stuff that is less sociology and more neuroscience. Blog posts and podcasts on the field have been showing up in my RSS feed, and I am fascinated! (I tried to quickly pull together some links but had some difficulty with that. Much of this comes from Office Hours, Diane Rehm, Slate, and RSAnimate, though, if memory serves.) Here is just a brief--and likely horribly oversimplified--rundown of my lay understanding of the ideas.
Brains work because neurons (i.e. single nerve cells) are networked together in an incredibly complex web of connections, and the neurons fire in patterns. When I look at my wife's face, for example, a very specific set of neurons fires simultaneously. When I look at my cat's face, a different set of neurons fires. It is these patterns that amount to what we call memory or thinking.
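To make the "memory as firing pattern" idea concrete, here is a toy sketch in Python. Everything in it is illustrative--the neuron ids, the names, the matching rule are all mine, and real brains do nothing this tidy--but it captures the gist: a memory is just a particular set of neurons that fire together, and recognition is a matter of which stored pattern best matches what is currently firing.

```python
# Toy model: a "pattern" is the set of neuron ids that fire together
# for a given stimulus. (All ids and names are made up for illustration.)

WIFE_FACE = frozenset({3, 7, 12, 19, 31})
CAT_FACE = frozenset({2, 7, 14, 22, 40})  # a different, partly overlapping set

MEMORIES = {"wife's face": WIFE_FACE, "cat's face": CAT_FACE}

def recall(active_neurons):
    """Return the stored concept whose pattern best matches the neurons firing now."""
    def overlap(pattern):
        # Fraction of the stored pattern that is currently active.
        return len(pattern & active_neurons) / len(pattern)
    return max(MEMORIES, key=lambda name: overlap(MEMORIES[name]))

# A partial, noisy firing of the wife-face pattern still recalls it:
print(recall(frozenset({3, 7, 12, 19})))  # -> "wife's face"
```

Even this crude version shows one nice property of pattern-based memory: a partial or noisy input can still trigger the right recollection, because matching is a matter of degree rather than an exact lookup.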
For most animals, these low-level, largely genetically pre-programmed patterns are enough. "Food = Good." "Fire = Bad." Human beings, however, are among a unique set of animals that can build hierarchies of these patterns (because of our neocortex) to do higher-level analysis. Here is an example I just recently heard (I think on the Diane Rehm Show?): There are some neurons that start firing whenever I see the little crossbar of a majuscule A. There are other neurons that fire for the legs of the A. When they all fire together, that is the pattern, or mental concept, of an "A." Now, ramp that up another level. When those neurons fire with those for the other letters, I construct the concept for an "Apple." Apples fit into another concept called "fruit," and so on. This complex, hierarchical patterning of increasingly abstract ideas is what we call intelligence.
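The hierarchy idea can also be sketched as toy code. Again, every name here is hypothetical and the whole thing is hugely oversimplified: the point is just that each level "fires" only when the sub-patterns beneath it fire together, so increasingly abstract concepts get stacked on top of concrete ones.

```python
# Toy hierarchy (illustrative names only): features -> letter -> word -> category.
# Each level's detector fires only when all of its sub-patterns are active.

A_FEATURES = {"left leg", "right leg", "crossbar"}

def detect_letter_a(active_features):
    # The "A" concept fires when all of its feature detectors fire together.
    return A_FEATURES <= active_features

WORDS = {"apple": list("apple")}
CATEGORIES = {"fruit": {"apple", "pear"}}

def detect_word(letters_seen, word):
    # One level up: the word pattern fires when its letter patterns fire in order.
    return letters_seen == WORDS[word]

def detect_category(word, category):
    # Another level up: "apple" activates the still more abstract "fruit."
    return word in CATEGORIES[category]

print(detect_letter_a({"left leg", "right leg", "crossbar", "stray mark"}))  # -> True
print(detect_word(list("apple"), "apple"))                                   # -> True
print(detect_category("apple", "fruit"))                                     # -> True
```

The design choice worth noticing is that no level knows anything about the levels far below it--the "fruit" detector only sees words, not crossbars--which is roughly what makes the abstraction ladder climbable.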
Even though individual computers don't work in the way that brains do, as outlined above, the increasingly complicated network of computers that we call the internet does look a lot like a brain. Some are speculating that as the internet gets more complex, it will perhaps begin to display its own intelligence.
Now, we're moving away from the empirical and into the philosophical. If an intelligence emerges from the internet, it might also spawn consciousness. Increasing neural-network complexity seems, at some point, to beget self-awareness. After all, what is more abstract than the recognition that "me" is a "he" to some other "me?" It is even possible that at some point in the future, the internet could experience "pain." Think about the last time you had a headache. It's difficult to describe the sensation--or any sensation for that matter--but we know that it amounts to a kind of short-circuiting in the brain. Imagine if there were a power outage for the future internet that knocked out a series of computers. The internet might experience this as "pain." (Just a reminder, I'm fully ripping off all of these ideas and most of the examples.)
How might I tie this back into sociology? (For a good discussion in this area, check out "Can Sociology Create an Artificial Intelligence?" in Randall Collins' Sociological Insight.) One avenue of inquiry I see is in the very nature of higher-level reasoning. Since neuroscience is gathering evidence that thought itself is about creating hierarchies, we must investigate the relationship between the social tendency to stratify socially and our biological tendency to stratify mentally. Is the latter driving the former? If so, can we overcome it?