
COLOUR

... snip ...

Colour and shape perception are, I think, strongly inter-related and often (though not always or necessarily) perceived in combination (eg in the observation of a strawberry). In the OP model, one property (shape) is taken to be an objective property of external objects and the other property (colour) isn't. This is a problem for the OP model, in that it can't properly explain why making this distinction is necessarily warranted.
That doesn't seem to be a problem. An interpretation of shape is made because the mind has created a map, during infancy, of the neurons in the visual cortex related to the visual field. When observing a strawberry, those neurons in the map that correspond to the photoreceptors that have input from the shape in the visual field (the strawberry) will be active. But, of the neurons within that shape, only those that the mind interprets as 'red' will be active. As a loose analogy, think of a billboard filled with small red, green, and blue lights, all evenly distributed. Now switch on a matrix of lights in the shape of a strawberry in the center, but only the red lights, leaving the green and blue lights within that shape off.
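(A minimal sketch of that billboard analogy, just to make it concrete; the grid size, mask, and values are all invented for illustration and come from me, not from the post.)

```python
import numpy as np

# A "billboard" of RGB lights, all switched off: height x width x 3 channels (R, G, B).
H, W = 64, 96
billboard = np.zeros((H, W, 3), dtype=np.uint8)

# Stand-in for the strawberry outline: a filled circle in the center of the board.
yy, xx = np.mgrid[0:H, 0:W]
strawberry_mask = (yy - H // 2) ** 2 + (xx - W // 2) ** 2 < 20 ** 2

# Switch on only the red lights inside the shape; green and blue stay off.
billboard[strawberry_mask, 0] = 255

# The shape is carried by *which* lights are lit (the mask);
# the 'colour' is carried by *which channel* of those lights is lit.
print("lights lit:", int(strawberry_mask.sum()))
print("max (R, G, B) inside the shape:", billboard[strawberry_mask].max(axis=0))
```

On that picture, the same mask could just as easily have been lit with only the green lights; the shape and the 'colour' are separable pieces of information.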
 
I think of shape as edge detection and connecting. These are things arising from lateral inhibition and continuity forming, both pretty fundamental neural processes. No mind needed. Horseshoe crabs do them. Of course, if one invokes naming, then language capability and memory are required. The latter two seem to be much more complex processes but are still possible without invoking 'mind'.

Guessing function is like the devil's workshop. If beggars had horses, princes might ride.
 
An interpretation of shape is made because the mind has created a map, during infancy, of the neurons in the visual cortex related to the visual field. When observing a strawberry, those neurons in the map that correspond to the photoreceptors that have input from the shape in the visual field (the strawberry) will be active. But, of the neurons within that shape, only those that the mind interprets as 'red' will be active. As a loose analogy, think of a billboard filled with small red, green, and blue lights, all evenly distributed. Now switch on a matrix of lights in the shape of a strawberry in the center, but only the red lights, leaving the green and blue lights within that shape off.

Nice example. And as fromderinside says, in terms of vision it seems to be about edge detection, which I believe has to do with detecting contrasts (essentially when the input hitting one part of the retina differs from what is hitting an adjacent part).
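(As a loose, hedged illustration of that contrast idea, a little sketch; the 'receptor' numbers are made up, and the centre-surround arithmetic is only a cartoon of lateral inhibition, not a model of any actual retina.)

```python
import numpy as np

# A 1D row of 'photoreceptor' intensities: dark background, brighter object in the middle.
intensities = np.array([10, 10, 10, 10, 80, 80, 80, 80, 10, 10, 10, 10], dtype=float)

# Cartoon lateral inhibition: each unit is excited by its own input
# and inhibited by the average of its two neighbours.
centre = intensities[1:-1]
surround = (intensities[:-2] + intensities[2:]) / 2.0
response = centre - surround

# The response is ~0 wherever the input is uniform and spikes (one positive,
# one negative) exactly where adjacent values differ, i.e. at the two edges.
print(response)
```

Running it gives zeros everywhere except a +35/-35 pair at each boundary, which is the sense in which 'edge detection' can fall out of nothing more than comparing neighbouring inputs.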

It annoys me to have to say it though, but that doesn't seem to help either of us with the underlying problem (of whether shape is an independent, objective property of objects and colour isn't). Or do you think it does?

-------------------------------------------------------------------------------------------------------

Temporarily staying with the issues of contrast and edge detection specifically, here (in the images below) is an interesting phenomenon (involving our favourite example, strawberries). In some ways, I think it gets back to your thoughts on how different physiologies may fundamentally change how organisms perceive the world, including, in one of your examples, bats (using sonar) and in one of my examples, bottom-of-the-deep-ocean creatures that perceive their surroundings by detecting electricity.

If you recall, I extended the OP model beyond saying that the world outside our heads is not actually coloured, to what seemed to be the next implication, that it is not even objectively what we would call bright either, that we just see it that way (ie the way a hypothetical sentient bat or deep sea shark, operating in the absence of light, might see it if it had vivid, conscious brain experiences to accompany its perception).




[x.png] Image in colour, which we perceive via the cones in our retinas (known as photopic vision).

[y.png] Same image in grayscale. The strawberries still appear lighter than the background.

[z.png] Same image simulating scotopic vision. Now the strawberries are darker than the background. The direction of contrast has been reversed.

Scotopic vision is our vision in low-light conditions, when rods take over from cones. Rods are (so I read) most sensitive to (have peak sensitivity at) wavelengths associated with blue/green, and are relatively insensitive to wavelengths associated with red.

This lack of sensitivity to red can be seen here also:


[iv.png] Colour

[v.png] Grayscale

[vi.png] Scotopic. The red parts of the strawberry are now almost indistinguishable from the black background, in other words there is a lack of contrast, and a resultant lack of edge detection.

The above phenomenon is known as the Purkinje effect, and was apparently one of the early clues that we have more than one type of light detector.
https://en.wikipedia.org/wiki/Purkinje_effect
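(If anyone wants to play with this, here is a rough sketch. The channel weights are assumptions of mine, loosely reflecting the fact that rods peak around blue-green and barely respond to red; they are not the official photopic/scotopic luminosity functions, and the pixel values are invented.)

```python
# Assumed channel weights (illustrative only, not measured luminosity functions).
PHOTOPIC_W = (0.30, 0.59, 0.11)   # classic R, G, B luma weights for cone vision
SCOTOPIC_W = (0.02, 0.60, 0.38)   # rod-like guess: heavy on green/blue, nearly blind to red

def luminance(rgb, weights):
    # Weighted sum of the R, G, B components.
    return sum(c * w for c, w in zip(rgb, weights))

strawberry = (220, 40, 40)   # a red-ish pixel
foliage = (40, 90, 40)       # a green-ish background pixel

for name, w in (("photopic", PHOTOPIC_W), ("scotopic", SCOTOPIC_W)):
    s, f = luminance(strawberry, w), luminance(foliage, w)
    print(f"{name}: strawberry={s:.0f}, background={f:.0f}, strawberry brighter? {s > f}")
```

With those (assumed) numbers the strawberry pixel comes out brighter than the background under the photopic weights and darker under the scotopic ones, which is the contrast reversal seen in the images above.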


So, my OP-model-related question is therefore: "Are certain objects, objectively and externally, brighter than their surroundings, or are they not?"
 

An interpretation of shape is made because the mind has created a map, during infancy, of the neurons in the visual cortex related to the visual field. When observing a strawberry, those neurons in the map that correspond to the photoreceptors that have input from the shape in the visual field (the strawberry) will be active. But, of the neurons within that shape, only those that the mind interprets as 'red' will be active. As a loose analogy, think of a billboard filled with small red, green, and blue lights, all evenly distributed. Now switch on a matrix of lights in the shape of a strawberry in the center, but only the red lights, leaving the green and blue lights within that shape off.

Nice example. And as fromderinside says, in terms of vision it seems to be about edge detection, which I believe has to do with detecting contrasts (essentially when the input hitting one part of the retina differs from what is hitting an adjacent part).

It annoys me to have to say it though, but that doesn't seem to help either of us with the underlying problem (of whether shape is an independent, objective property of objects and colour isn't). Or do you think it does?

The physical details of what the brain does to create mind are well above my pay grade, but some of the results of what the mind does should be able to be reasoned out to some extent. We do have a sense of spatial awareness developed early in life... this would seem to have been the result of mental mapping through a combination of various sensory inputs blended into a whole. A new infant does not seem to focus on anything (because nothing makes sense yet, only a cacophony of sensory input from all the senses?). I assume that taste is the first that we make some sense of. Sight would seem to be next but, since there is so much information to be processed, an understanding probably takes quite a while. Spatial awareness would seem to need inputs from several senses to develop. The mental mapping required for this awareness would incorporate touch (which informs us of shape plus more), vision (confirms shape and provides other information), hearing (informs us of the direction of something that isn't felt or seen), etc.

So I guess that was a long-winded way of saying that shape is a property of an object that occupies a specific volume within our sense of spatial awareness, can be confirmed by both touch and sight, and can be measured with precision. Our visual image of the shape of an object can be mistaken because of shading or perspective, but its form can be determined through touch. Our vision is more easily fooled, likely because we live in an (at least) three-dimensional world and our sight gives us essentially a two-dimensional representation, with depth often being interpreted and perceived from subtle cues. Our sense of touch gives us truer three-dimensional information.

Color is not confirmed by any other sense, so it could be (and likely is) only an artifact of the detection process. In the series of events that is required for the detection of 'color', it is difficult for me to imagine how a stimulus that results in a very different form of stimulus, which in turn results in yet another very different form of stimulus, which finally results in our sense of 'color', would be identical to some property of the object seen.

-------------------------------------------------------------------------------------------------------

Temporarily staying with the issues of contrast and edge detection specifically, here (in the images below) is an interesting phenomenon (involving our favourite example, strawberries). In some ways, I think it gets back to your thoughts on how different physiologies may fundamentally change how organisms perceive the world, including, in one of your examples, bats (using sonar) and in one of my examples, bottom-of-the-deep-ocean creatures that perceive their surroundings by detecting electricity.

If you recall, I extended the OP model beyond saying that the world outside our heads is not actually coloured, to what seemed to be the next implication, that it is not even objectively what we would call bright either, that we just see it that way (ie the way a hypothetical sentient bat or deep sea shark, operating in the absence of light, might see it if it had vivid, conscious brain experiences to accompany its perception).

Scotopic vision is our vision in low-light conditions, when rods take over from cones. Rods are (so I read) most sensitive to (have peak sensitivity at) wavelengths associated with blue/green, and are relatively insensitive to wavelengths associated with red.

Scotopic. The red parts of the strawberry are now almost indistinguishable from the black background, in other words there is a lack of contrast, and a resultant lack of edge detection.

The above phenomenon is known as the Purkinje effect, and was apparently one of the early clues that we have more than one type of light detector.
https://en.wikipedia.org/wiki/Purkinje_effect


So, my OP-model-related question is therefore: "Are certain objects, objectively and externally, brighter than their surroundings, or are they not?"
I think you are now definitely into artifacts of the sensing technique rather than a property of the object being measured. 'Brightness' could tell us how our particular chosen measurement technique interacts with the object, but not necessarily about the object in and of itself. A pane of glass has very low brightness (it is essentially invisible) to our chosen measurement technique, light, but is damn bright (loud) to a bat's chosen measurement technique (sound).

ETA:
I think we are still faced with the problem of "what is the definition of color being used". My preferred definition would mean that objects can't have color but have some property that results, after several steps in a process, in our sensing 'color'.
 
That doesn't seem to be a problem. An interpretation of shape is made because the mind has created a map, during infancy, of the neurons in the visual cortex related to the visual field. When observing a strawberry, those neurons in the map that correspond to the photoreceptors that have input from the shape in the visual field (the strawberry) will be active. But, of the neurons within that shape, only those that the mind interprets as 'red' will be active. As a loose analogy, think of a billboard filled with small red, green, and blue lights, all evenly distributed. Now switch on a matrix of lights in the shape of a strawberry in the center, but only the red lights, leaving the green and blue lights within that shape off.

"In infancy" has little to do with it since the NS is pretty much defined before birth with respect for from where input to where NS processes collect information.

Because that is true, I start from the more or less universal observation that the senses are arranged in the NS according to where things are sensed in the world. Skin, even pain, is arranged in a homunculus in the somatosensory cortex. Vision is spatially and frequency represented in the visual cortex. Sound is tonotopically arranged in the auditory cortex. Etc. So the sensory spaces are defined in accordance with how they were transduced, even if you hate that word. The wiring is in; the precursors to the needed processing capabilities are in. What remains is a continuation of input defining actual stimuli from a world outside the womb.

There is a lot for the NS to get straight, but one of those things is not that a capability to map needs to be developed, since it is already pretty much in place. Hell, if such a thing had to be accomplished in other mammals, they would already have consciousness, since they are pretty much wired as we are and they can get about pretty well after dropping to earth.

IMHO we all need to drop back and think about what humans are evolutionarily related to before we go inventing needs in humans for this and that.
 
That doesn't seem to be a problem. An interpretation of shape is made because the mind has created a map, during infancy, of the neurons in the visual cortex related to the visual field. When observing a strawberry, those neurons in the map that correspond to the photoreceptors that have input from the shape in the visual field (the strawberry) will be active. But, of the neurons within that shape, only those that the mind interprets as 'red' will be active. As a loose analogy, think of a billboard filled with small red, green, and blue lights, all evenly distributed. Now switch on a matrix of lights in the shape of a strawberry in the center, but only the red lights, leaving the green and blue lights within that shape off.

"In infancy" has little to do with it since the NS is pretty much defined before birth with respect for from where input to where NS processes collect information.

Because that is true, I start from the more or less universal observation that the senses are arranged in the NS according to where things are sensed in the world. Skin, even pain, is arranged in a homunculus in the somatosensory cortex. Vision is spatially and frequency represented in the visual cortex. Sound is tonotopically arranged in the auditory cortex. Etc. So the sensory spaces are defined in accordance with how they were transduced, even if you hate that word. The wiring is in; the precursors to the needed processing capabilities are in. What remains is a continuation of input defining actual stimuli from a world outside the womb.

There is a lot for the NS to get straight, but one of those things is not that a capability to map needs to be developed, since it is already pretty much in place. Hell, if such a thing had to be accomplished in other mammals, they would already have consciousness, since they are pretty much wired as we are and they can get about pretty well after dropping to earth.

IMHO we all need to drop back and think about what humans are evolutionarily related to before we go inventing needs in humans for this and that.

You seem to be conflating two very different things, the hardwiring of the brain and the human's mental development to the point of making sense of the sensory inputs. Or is it that you have never had interaction with a newborn infant and subsequently watched its development?
 
IMHO we all need to drop back and think about what humans are evolutionarily related to before we go inventing needs in humans for this and that.

You are the one who is arguably inventing a need, namely that something needs to be a property of a stimulus before it can be a property of mental experience. There are so many examples of where this is not the case that the principle is totally accepted and established.

Secondly, the idea that senses are either arranged or fully formed at or before birth is very improbable. Just for starters, an infant’s brain may have the same number of neurons as an adult’s, but the adult brain has ten times the number of interconnections (50 trillion versus 500 trillion).
 
We do have a sense of spatial awareness developed early in life... this would seem to have been the result of mental mapping through a combination of various sensory inputs blended into a whole. A new infant does not seem to focus on anything (because nothing makes sense yet, only a cacophony of sensory input from all the senses?). I assume that taste is the first that we make some sense of. Sight would seem to be next but, since there is so much information to be processed, an understanding probably takes quite a while. Spatial awareness would seem to need inputs from several senses to develop. The mental mapping required for this awareness would incorporate touch (which informs us of shape plus more), vision (confirms shape and provides other information), hearing (informs us of the direction of something that isn't felt or seen), etc.

As I understand it, an infant's vision is initially blurred, so if that's true then the world only comes into focus gradually, and thus learning about the world visually would be gradual. Feeding is one of the first and most crucial and urgent things for a baby. For a baby to get that input wrong could be instantaneously fatal. I'm speculating regarding the latter, obviously. I think the former is well-established.

So I guess that was a long-winded way of saying that shape is a property of an object that occupies a specific volume within our sense of spatial awareness, can be confirmed by both touch and sight, and can be measured with precision. Our visual image of the shape of an object can be mistaken because of shading or perspective, but its form can be determined through touch. Our vision is more easily fooled, likely because we live in an (at least) three-dimensional world and our sight gives us essentially a two-dimensional representation, with depth often being interpreted and perceived from subtle cues. Our sense of touch gives us truer three-dimensional information.

Color is not confirmed by any other sense, so it could be (and likely is) only an artifact of the detection process. In the series of events that is required for the detection of 'color', it is difficult for me to imagine how a stimulus that results in a very different form of stimulus, which in turn results in yet another very different form of stimulus, which finally results in our sense of 'color', would be identical to some property of the object seen.

It is true that shape and form can be 'verified' via several senses, which helps to establish its/their veracity, or veridicality as I think it is called (its/their 'accuracy/truthfulness').

I would at least say that the traditional distinction between primary properties (objective, independent properties of the outside world that do not depend on perception) and secondary properties (that heavily depend on perception) has a decent basis in terms of supporting evidence.

I think you are now definitely into artifacts of the sensing technique rather than a property of the object being measured. 'Brightness' could tell us how our particular chosen measurement technique interacts with the object, but not necessarily about the object in and of itself. A pane of glass has very low brightness (it is essentially invisible) to our chosen measurement technique, light, but is damn bright (loud) to a bat's chosen measurement technique (sound).

Yes regarding the pane of glass. Good example, imo.

In a limited way, our having two basic types of light detector enables us to be (to some extent) 'two types of organism', if you see what I mean. We can at least see in two ways. Via one (cones), an object appears 'bright' (both of itself and in comparison to its surroundings); via another (rods), it appears 'dark' (both of itself and in comparison to its surroundings), even when the light input is the same and the object's spectral reflectance properties are the same in both cases. This does, as you agree, seem to imply that 'brightness' is in the organism's processes and perceptions, not in (a property of) the object or the input stimulus.

And if 'brightness' is subjective (to a particular receptor or mode of detection) then I think colour's objectivity is undermined also.

I think we are still faced with the problem of "what is the definition of color being used". My preferred definition would mean that objects can't have color but have some property that results, after several steps in a process, in our sensing 'color'.

What I would say at minimum is that I cannot see an explanatory weakness in that, or in your previous definition. As far as I can tell, it/they explain everything, with the added bonus that they are more precise.

All it would mean, ontologically, is that the mental experience is the real one. I do not get why that should be problematical, given that it's how we readily understand so many other mental experiences.
 
IMHO we all need to drop back and think about what humans are evolutionarily related to before we go inventing needs in humans for this and that.

You are the one who is arguably inventing a need, namely that something needs to be a property of a stimulus before it can be a property of mental experience. There are so many examples of where this is not the case that the principle is totally accepted and established.

Secondly, the idea that senses are either arranged or fully formed at or before birth is very unlikely. Just for starters, an infant’s brain may have the same number of neurons as an adult’s, but the adult brain has ten times the number of interconnections (50 trillion versus 500 trillion).

Wow. Assigning a property to my description of what the state of the brain is at birth. Is comment needed? As for the low likelihood of assigning neurocapability before sensing capability, I'm on solid ground even by your arguments.

I'm sure I didn't say fully anything. I said the infrastructure was physically in place without mental activity guessing how it was going to put things together. That's a long way from fully anything. Spouting a 10-times increase in connectivity isn't demonstrating anything other than what I wrote. Your statement dismisses the fact that other mammals function quite well at birth with essentially the same neurosensory structure without, in infancy, having to invent features which are already present in species without our obvious cognitive skills.

As for all skepticalbip's ontological song and dance about babies being unfocused as a reason for doing basic spatial organization, try communicating with an adult cat, much less a kitten. Alternatively, try explaining a baby's move of hand and face toward a nipple as unfocused and needing development.

What's unfocused is the one trying to interpret from an armchair using rational skills unsuited to accumulation of data.

One thing I liked about Plato was that he usually posed rational questions about problems that could be dealt with by simple, demonstrable logic. When one goes after hidden observables, one is out of rational depth as a rule.
 
You seem to be conflating two very different things, the hardwiring of the brain and the human's mental development to the point of making sense of the sensory inputs. Or is it that you have never had interaction with a newborn infant and subsequently watched its development?

I'm not. I'm suggesting that mammalian evolution has proceeded to produce a nervous system that is sensorily and physically organized at birth. That says nothing about how much or how little one makes of such things as color. It does suggest that color is organized spatially in the center of direct vision, and that sensitivity to moving things and the processing of information during darkness are located more peripherally, as evidenced by a baby turning its head in response to a moving target out of the foveal pov. Even at birth, convergent innervation from different receptors of one small region of light frequency is collected by a single ganglion cell. Color is physically implicated.

All the above does not suggest that much more organization of basic color processing goes on during the first few months of baby's postnatal life.

Why are you guys the all-or-none gang? Basic structure is present. Basic color structure is present. Even basic focus and search processes are working.

What I'd do if I had my druthers would be to focus on structures further from primary sensation, like relating sense to combinations of sense and meaning and stuff like that. I wouldn't try to shove that down to color or visual-feature-forming processes beyond the obvious.

My basic point is that receptors are the source of the information about the color one sees, and that this is solidified by basic connections already being in place, linked to a specific color when it is introduced as information to the nervous system.
 
...... Alternatively, try explaining a baby's move of hand and face toward a nipple as unfocused and needing development.
That's easy. A newborn infant doesn't do that. The nipple is placed in the newborn infant's mouth by the mother to trigger the suckling reflex (the mother had already developed her spatial awareness long before). The suckling reflex can also be triggered by rubbing the lip with a finger, no nipple anywhere near. It isn't until later, after the infant learns a bit, that it seeks the nipple.
 
You seem to be conflating two very different things, the hardwiring of the brain and the human's mental development to the point of making sense of the sensory inputs. Or is it that you have never had interaction with a newborn infant and subsequently watched its development?

I'm not. I'm suggesting that mammalian evolution has proceeded to produce a nervous system that is sensorily and physically organized at birth. That says nothing about how much or how little one makes of such things as color. It does suggest that color is organized spatially in the center of direct vision, and that sensitivity to moving things and the processing of information during darkness are located more peripherally, as evidenced by a baby turning its head in response to a moving target out of the foveal pov. Even at birth, convergent innervation from different receptors of one small region of light frequency is collected by a single ganglion cell. Color is physically implicated.

All the above does not suggest that much more organization of basic color processing goes on during the first few months of baby's postnatal life.

Why are you guys the all-or-none gang? Basic structure is present. Basic color structure is present. Even basic focus and search processes are working.

What I'd do if I had my druthers would be to focus on structures further from primary sensation, like relating sense to combinations of sense and meaning and stuff like that. I wouldn't try to shove that down to color or visual-feature-forming processes beyond the obvious.

My basic point is that receptors are the source of the information about the color one sees, and that this is solidified by basic connections already being in place, linked to a specific color when it is introduced as information to the nervous system.
And again, I haven't seen anyone say that the 'basic wiring' isn't in place at birth. It is the interpreting of the sensory input through that 'basic wiring' that is learned through experience.

So again... "You seem to be conflating two very different things, the hardwiring of the brain and the human's mental development to the point of making sense of the sensory inputs."
 
No. I'm not. Not only is the basic wiring in place, but neural activity has been ongoing for some time, including receptor activity prior to birth. So neurons are used to getting input from green receptors that is different from what they get from blue receptors, which is different again from what is being produced through the red receptors. Color sense is being reinforced prior to birth. When one is born, green, blue, and red light activates the receptors and then the NS, which is now ready to work on decoding color, location, and intensity from the specific incoming neural activity.

The brain is being conditioned to produce characteristic color behavior prior to any conscious, organized, and coordinated activity, which is what you're all hot about.

No conflation. It's just that the basis for verifying color is present before the interpretation that color is relevant to the information received.

It's why I distinguished between the wiring being there and the receptors being active before meaning was appreciated, signalling that color exists as a physical thing before color information can properly be interpreted.

It is also why I specified receptor induced activity as color and neural based information processing as color information.

Information about color needs a material basis for producing more than just the presence of information. The neural information resulting from receptor activity needs a color reference, which is identified as what is produced by the neurons transporting it from specific receptors along specific tracts, locations, and intensities. It's not a matter of mind invention. It is a matter of verifying what is being transported for interpretation.

Otherwise it's all a magic act.
 
...... Alternatively, try explaining a baby's move of hand and face toward a nipple as unfocused and needing development.
That's easy. A newborn infant doesn't do that. The nipple is placed in the newborn infant's mouth by the mother to trigger the suckling reflex (the mother had already developed her spatial awareness long before). The suckling reflex can also be triggered by rubbing the lip with a finger, no nipple anywhere near. It isn't until later, after the infant learns a bit, that it seeks the nipple.

You are wrong. https://www.livescience.com/59847-newborns-breastfeeding-temperature.html

A new study from Italy suggests that one reason newborns are drawn to the nipple is because it is slightly warmer than the surrounding skin.

A higher nipple temperature could make it easier for a newborn to find it and could help explain the phenomenon of newborns just minutes old who somehow clamber up to the nipple, which researchers refer to as "breast crawl," according to the study, published today (July 19, 2017) in the journal Acta Paediatrica.

I'm almost ready to label you a Tr..... fact inventor.

I would have except my intention in my post was to imply more than presence of neonate intention ....

Sometimes I play fair.
 
You are wrong. https://www.livescience.com/59847-newborns-breastfeeding-temperature.html

A new study from Italy suggests that one reason newborns are drawn to the nipple is because it is slightly warmer than the surrounding skin.

A higher nipple temperature could make it easier for a newborn to find it and could help explain the phenomenon of newborns just minutes old who somehow clamber up to the nipple, which researchers refer to as "breast crawl," according to the study, published today (July 19, 2017) in the journal Acta Paediatrica.

I'm almost ready to label you a Tr..... fact inventor.

I would have except my intention in my post was to imply more than presence of neonate intention ....

Sometimes I play fair.

From your link:
Once the baby is held in a position that is comfortable, presenting the nipple can help with latching on. The mother should hold the baby so that the head is level with the breast, nose to nipple. Then, the baby should be turned so the mother and baby are tummy-to-tummy. Support the breast with the free hand that isn’t holding the baby with all four fingers underneath it, away from the areola (darkened area around the nipple) to best present the nipple, recommended Terry Bretscher, a nurse and lactation supervisor at Pomona Valley Hospital Medical Center.

Many experts recommend tickling the baby's lower lip with the nipple and waiting for the baby's mouth to open wide before offering the breast. Then, pull the baby in to latch on. The tip of the baby’s nose and chin should be touching the breast during proper positioning. Mothers shouldn’t worry about suffocation. The baby will pull off if unable to breathe.
The mother is instructed to position the infant so that it is nose to nipple (the mother is positioning the infant, not the infant chasing the nipple).
Then tickle the infant's lip with the nipple. Tickling the infant's lip triggers the suckling reflex whether it is tickled with a nipple, a finger, or anything else, even when there is no nipple anywhere close. If you know anyone with a young infant, tickle its lower lip and see what happens.

Another trick that can be done with infants is testing their grasp reflex with their toes. It is probably a vestigial reflex left over from our earlier primate ancestors.
 
As I understand it, nearly all human newborns (like many mammal newborns) do instinctively, actively seek the nipple. Also, mothers can assist, as can midwives.

Similarly, I would imagine that the basic abilities to perceive shape and colour are there at birth also.
 
OK skepticalbip, so you verify I played fair.

How about we agree with ruby sparkes and say we'll put seeking down as something newborn humans do at birth. How about agreeing that expanding existing NS infrastructure, rather than learning, may be at the center of early NS development. From what I remember from working with newborn Norwegian rats in the Olds lab in the late seventies, the ascending and descending NS were there at birth and their activity was functioning as early as five days after birth. Rats are born blind. We're certainly not going to implant or record baby humans for ethical reasons, Donald Lindsay recordings from wife at UCLA aside.
 
Something that has so far been lacking in the discussion, I think, perhaps oddly so given that some have claimed that colour is a property of objects, is.....the relevant properties of objects, and how they affect absorption and reflection (or scattering) of EM radiation.

Here, for example, is a graph showing the spectral reflectances of the igneous rock graphic granite:

1.png

And here is a similar graph for granite porphyry:

2.png

https://www.researchgate.net/figure...Intermediate-and-Monomineralic_fig1_329645884

I am tempted to ask the rhetorical question, "Why would an igneous rock have 'evolved' to have colour, given that for at least 90% of its existence on earth (and for 99.9+% of its existence in the universe) there was nothing to detect it?", but I'll set that aside for now.

On the same note, here below is a link to a detailed paper on geological spectroscopy. It includes an analysis of, for example, the processes of absorption (including electronic, chemical and vibrational processes) and the processes of reflection/scattering (including how grain sizes play a role):

Spectroscopy of Rocks and Minerals, and Principles of Spectroscopy
https://archive.usgs.gov/archive/sites/speclab.cr.usgs.gov/PAPERS.refl-mrs/refl4.html
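(To connect those reflectance curves to 'colour', here is a toy sketch. Every curve in it is invented for illustration; real work would use measured reflectance data like that in the linked paper and the actual cone fundamentals. The point is only that the receptor signals are integrals of illuminant × reflectance × sensitivity over wavelength, so the resulting triple of numbers depends on the light and the observer as much as on the object.)

```python
import numpy as np

# Wavelength samples across the visible range (nm).
wl = np.arange(400, 701, 10, dtype=float)

def bump(centre, width):
    # A simple Gaussian bump used to fake smooth spectral curves.
    return np.exp(-0.5 * ((wl - centre) / width) ** 2)

# Invented curves, for illustration only:
illuminant = np.ones_like(wl)                 # flat 'white' light
reflectance = 0.1 + 0.8 * bump(650, 40)       # reflects mostly long ('red') wavelengths
cone_L, cone_M, cone_S = bump(565, 40), bump(540, 40), bump(445, 30)

def cone_signal(sensitivity):
    # Receptor output: wavelength-by-wavelength product, integrated over the spectrum.
    return float(np.trapz(illuminant * reflectance * sensitivity, wl))

L, M, S = map(cone_signal, (cone_L, cone_M, cone_S))
print(f"L={L:.1f}  M={M:.1f}  S={S:.1f}   (L well above S is roughly what we call 'red')")
```

With a different illuminant, or a different set of sensitivity curves, the same reflectance would yield a different triple of signals.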

---------------------------------------------------------------------------

On a related note, and switching from objects to the radiation itself, light also causes objects to become heated, and our own retinal receptors are apparently triggered by both light and (to a much lesser extent) heat. Both of these raise the issue of whether light (and indeed only light) does in fact have 'colour information', rather than merely 'information (or energies) of different sorts, which can have different results, via different processes, in different entities and objects, depending on circumstances and interactions'. The contribution of light to creating vitamin D, for example, and indeed to causing pain, has previously been mentioned. And the retinal inputs also go (via separate receptors and transductions) to specific parts of the brain not understood to be much or directly involved in vision, such as those which regulate the pineal gland (where melatonin is produced).

This is a way of asking whether, for humans, because they have such vivid colour experiences, and because these are so important to them in a variety of ways, the tempting idea that those experiences are caused (only) by specific colour information, which already existed for that purpose in light (or indeed in objects), is overstated or overdetermined. The alternative is that such properties (which have a variety of physical influences and functions, not just on vision, when it evolved in some living things) existed for billions of years in a lifeless universe and were of themselves uncoloured, and that some organisms eventually evolved the capacity to finely (and automatically) distinguish between them. Some of those organisms eventually developed (again, automatic) subjective and wholly internal representations, the properties of which were and are not part of the external objects or stimuli (as with pain and many other uniquely mental/psychological phenomena).
 

What follows are overviews, from a physics POV, of the topics you bring up. Since you don't refer to them, I thought either you were operating like Seldon in Foundation (the Asimov novel), looking as a psychohistorian at what is, or maybe you'd just ....

Enjoy .... adding what they communicate to your body of knowledge.


Light and heat are related. Color temperature:

The color temperature of a light source is the temperature of an ideal black-body radiator that radiates light of a color comparable to that of the light source. Color temperature is a characteristic of visible light that has important applications in lighting, photography, videography, publishing, manufacturing, astrophysics, horticulture, and other fields. In practice, color temperature is meaningful only for light sources that do in fact correspond somewhat closely to the radiation of some black body, i.e., light in a range going from red to orange to yellow to white to blueish white; it does not make sense to speak of the color temperature of, e.g., a green or a purple light. Color temperature is conventionally expressed in kelvins, using the symbol K, a unit of measure for absolute temperature.

[Image: 800px-PlanckianLocus.png (the Planckian locus)]
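(A small worked check of that light/heat relationship, standard physics rather than anything specific to the thread: Wien's displacement law says an ideal black body at temperature T radiates most strongly at roughly lambda_peak = b / T, with b about 2.898 × 10⁻³ m·K.)

```python
# Wien's displacement law: peak wavelength of black-body radiation versus temperature.
WIEN_B = 2.898e-3  # Wien's displacement constant, in metre-kelvins

def peak_wavelength_nm(temperature_k: float) -> float:
    return WIEN_B / temperature_k * 1e9  # convert metres to nanometres

# A warm incandescent lamp, roughly the Sun's surface, and a hot blue-white star.
for t in (2700, 5800, 10000):
    print(f"{t} K -> peak emission near {peak_wavelength_nm(t):.0f} nm")
```

With these numbers, a 2700 K source peaks out in the infrared (its visible tail is red-heavy, hence 'warm' light), the roughly 5800 K Sun peaks near 500 nm, and hotter sources shift blue-ward.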

Basically, rocks are aggregates of molecules, made up of a variety of atoms, which are home to, use, and emit EM radiation, of which light is but a small part.

Stellar evolution:

Stellar evolution is the process by which a star changes over the course of time. Depending on the mass of the star, its lifetime can range from a few million years for the most massive to trillions of years for the least massive, which is considerably longer than the age of the universe.[1] All stars are formed from collapsing clouds of gas and dust, often called nebulae or molecular clouds. Over the course of millions of years, these protostars settle down into a state of equilibrium, becoming what is known as a main-sequence star.


Emission spectrum:

The emission spectrum of a chemical element or chemical compound is the spectrum of frequencies of electromagnetic radiation emitted due to an atom or molecule making a transition from a high energy state to a lower energy state. The photon energy of the emitted photon is equal to the energy difference between the two states. There are many possible electron transitions for each atom, and each transition has a specific energy difference. This collection of different transitions, leading to different radiated wavelengths, make up an emission spectrum. Each element's emission spectrum is unique. Therefore, spectroscopy can be used to identify elements in matter of unknown composition. Similarly, the emission spectra of molecules can be used in chemical analysis of substances.
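(A hedged illustration of those discrete transitions, again textbook physics rather than anything from the thread: the Rydberg formula gives hydrogen's visible Balmer emission lines, each one a jump from a higher level n down to n = 2.)

```python
# Rydberg formula for hydrogen: 1/lambda = R * (1/n_lower^2 - 1/n_upper^2).
R_INF = 1.0973731568e7  # Rydberg constant (R_infinity), in 1/m

def hydrogen_line_nm(n_lower: int, n_upper: int) -> float:
    inv_wavelength = R_INF * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inv_wavelength  # convert metres to nanometres

# Balmer series: transitions down to n = 2 fall in or near the visible range.
for n in (3, 4, 5, 6):
    print(f"n = {n} -> 2 : ~{hydrogen_line_nm(2, n):.0f} nm")
```

Those come out at roughly 656, 486, 434, and 410 nm, the familiar red, blue-green, and violet hydrogen lines.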

As for uncolored how about ...

Laser:

A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation. The term "laser" originated as an acronym for "light amplification by stimulated emission of radiation".[1][2][3] The first laser was built in 1960 by Theodore H. Maiman at Hughes Research Laboratories, based on theoretical work by Charles Hard Townes and Arthur Leonard Schawlow.
A laser differs from other sources of light in that it emits light which is coherent. Spatial coherence allows a laser to be focused to a tight spot, enabling applications such as laser cutting and lithography. Spatial coherence also allows a laser beam to stay narrow over great distances (collimation), enabling applications such as laser pointers and lidar. Lasers can also have high temporal coherence, which allows them to emit light with a very narrow spectrum, i.e., they can emit a single color of light. Alternatively, temporal coherence can be used to produce pulses of light with a broad spectrum but durations as short as a femtosecond ("ultrashort pulses").

... and there's material pointing to Information as well.

Information can be thought of as the resolution of uncertainty; it is that which answers the question of "what an entity is" and thus defines both its essence and the nature of its characteristics. The concept of information has different meanings in different contexts.[1] Thus the concept becomes related to notions of constraint, communication, control, data, form, education, knowledge, meaning, understanding, mental stimuli, pattern, perception, representation, and entropy. Information is associated with data, as data represents values attributed to parameters, and information is data in context and with meaning attached.

Information also relates to knowledge, as knowledge signifies understanding of an abstract or concrete concept.[2]


In terms of communication, information is expressed either as the content of a message or through direct or indirect observation.

That which is perceived can be construed as a message in its own right, and in that sense, information is always conveyed as the content of a message.
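(That 'resolution of uncertainty' framing has a standard quantitative form, Shannon entropy; the probabilities in the sketch below are just made-up examples. The less certain you are beforehand, the more information, in bits, an observation resolves.)

```python
import math

def entropy_bits(probabilities):
    # Shannon entropy: H = -sum(p * log2(p)), the average uncertainty in bits.
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin is maximally uncertain; a heavily biased coin carries far less uncertainty,
# so observing its outcome resolves far less.
print("fair coin:  ", entropy_bits([0.5, 0.5]), "bits")               # 1.0
print("biased coin:", round(entropy_bits([0.99, 0.01]), 3), "bits")   # ~0.081
```

A fair coin's outcome resolves a full bit of uncertainty; the heavily biased coin's outcome resolves only about 0.08 bits.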
 