Sure, the macro-scale architecture is part of it. We are mentally and physically limited by the number and function of our neurons, nerves, muscles, etc. But the electrochemical activity is what activates the macro scale's options, potential and abilities. We may have quantum indeterminacy in various abilities within the framework and constraints of the macro-scale architecture.
That's because they are "projecting" their own experiences onto the brain processes they are studying, processes they know to be correlated with the experience of red. They would be matching known processes that correlate with redness and filling in the qualia information based on their own experiences. But they cannot detect qualia the way they detect all the other physical properties of the processes.
That's not what I meant. However a person may mentally perceive colour, anyone and everyone who is not colour blind can point to a red traffic light and identify it as red, not blue or purple or green. That is because the wavelength of light emitted by the traffic light is known to register as 'red' in the mind of any observer who is not colour blind and has no brain abnormality that interferes with discerning colours.
So are you ultimately saying that an instrument can detect the human experience of sensing the color red?
The experience is completely internal; we have no way to tell whether an instrument does or does not experience red, and that fact remains true even when we extend the meaning of 'instrument' to include human brains other than our own.
As far as I can tell, nothing - not even other humans - can have the experience of sensing the colour red; it is unique to ME.
I guess that other humans, and even other animals, might have the same or a similar experience; but that's a pure guess with no particular reason to accept it as true.
I guess that computers don't have the same or a similar experience; but again, that's a pure guess with no particular reason to accept it as true.
It is just as (un)reasonable to guess that any sufficiently complex computer (biological or otherwise) has 'experiences' and 'consciousness'.
Personally, I think that all this guessing is pretty futile. If we make the assumption that a particular system - whether that system is another person, or a computer, or a piece of photographic film - has the 'experience of red', and, having made that assumption, our predictions of how that system will behave are borne out by observation, then it's a good assumption. If the future behaviour of the system contradicts the assumption, then it was a poor assumption. And if the behaviour is equivocal, we know nothing about the quality of the assumption.
After a few iterations, if it is still not possible to label the assumption 'good' or 'bad', then it's time to stop wasting effort on this pointless quest to understand something about which we cannot collect any data, and to think about something else, until such time as fresh data is available, or we think up a new experiment that might collect such data.
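The assumption-testing procedure described above is essentially a small decision loop, and it can be sketched in code. This is a minimal illustrative sketch only; the function name, outcome labels, and iteration cap are all hypothetical, invented here for illustration rather than taken from the discussion.

```python
def evaluate_assumption(trials, max_iterations=5):
    """Judge an assumption (e.g. 'system X has the experience of red')
    against observed behaviour, per the loop described above.

    `trials` yields one outcome per observation:
      'confirmed'    - behaviour matched the assumption's prediction
      'contradicted' - behaviour contradicted the prediction
      'equivocal'    - the observation tells us nothing either way

    (All labels are hypothetical placeholders for this sketch.)
    """
    for i, outcome in enumerate(trials, start=1):
        if outcome == 'confirmed':
            return 'good assumption'       # predictions borne out
        if outcome == 'contradicted':
            return 'poor assumption'       # behaviour contradicts it
        # 'equivocal': we learn nothing; try another observation,
        # but only for a few iterations.
        if i >= max_iterations:
            break
    # Still undecided: shelve the question until fresh data arrives
    # or a new experiment is devised.
    return 'undecided'
```

The design choice mirrors the post: a single decisive observation settles the question either way, while a run of equivocal observations triggers the "stop wasting time" exit rather than looping forever.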