PyramidHead
Contributor
We know that different species of animals have different mental capacities. We know why that is: the specific pressures in the environments where their ancestors thrived selected for a mental 'toolbox' that was sufficient for evolutionary fitness. We know that it's ludicrous to expect an organism to understand or conceptualize something if it lacks the appropriate mental tools. No matter how hard we try, we will never be able to teach a field mouse to play chess. No matter how hard the mouse works at it, it will never play chess.
We also know that we're animals, and that we developed according to the exact same process that gave rise to the limited brains of other animals. Yet we never seriously entertain the idea that some (perhaps even most) of reality lies beyond the perimeter of our particular mental toolbox. We have come a long way with the tools we have. But is there any reason to think we can keep going until we understand everything? Why, when every other organism has a model of the universe just small and provincial enough to account for its immediate needs, should we alone have a brain with the capacity to comprehend all of reality? It seems much more likely, at least to me, that like everything else with a brain, we will eventually come to the limit of what we can render with our concepts, indeed with concepts per se. What lies beyond that point will be as foreign to us as chess is to a mouse. To the mouse, getting better at something might mean doing it faster, memorizing certain regular features of its environment, or finding the best way around an obstacle. The notion of anticipating the choices of another being within a game of fixed rules belongs to a different world altogether from anything it can even imagine doing.
It wouldn't surprise me if subjective experience is one such problem. The school of thought that claims we are banging up against our mental limits as humans when we try to understand how, say, the feeling of vertigo results from certain biochemical reactions has been called 'new mysterianism.' Colin McGinn and Thomas Nagel are among its better-known proponents.
I wonder, however, where the burden of proof should lie in a case like this: do the new mysterians have to demonstrate that we are incapable of solving the hard problem of consciousness, or do people who disagree have to demonstrate that it's solvable? On the one hand, we've solved a lot of problems as a species. Other animals more or less have to start from scratch every generation, but we can build on past successes. Having made it this far, why give up? On the other hand, we are still animals, and it would be a truly remarkable stroke of luck if we just happened to have the brainpower to understand anything we put enough effort into, despite having evolved for mundane, fitness-enabling tasks like everything else. And it's not as though our progress has held steady: fundamental physics has advanced very little since the late 20th century, and philosophy of mind has made no inroads toward anything resembling a final answer to the hard problem. Maybe those are signs that we've hit our mental boundary.