fromderinside
Mazzie Daius
I agree the post above yours is
I believe the way Hull put it was that behavior is measured by the number of boluses.
That doesn't seem to be a problem. An interpretation of shape is made because the mind has created a map, during infancy, of the neurons in the visual cortex related to the visual field. When observing a strawberry, those neurons in the map that correspond to the photoreceptors that have input from the shape in the visual field (the strawberry) will be active, but only those neurons within that shape that the mind interprets as 'red'. As a loose analogy, think of a billboard filled with small red, green, and blue lights, all evenly distributed. Now switch on a matrix of lights in the shape of a strawberry in the center, but only the red lights, leaving the green and blue lights within that shape off.... snip ...
Colour and shape perception are, I think, strongly interrelated and often (though not always or necessarily) perceived in combination (e.g., in the observation of a strawberry). In the OP model, one property (shape) is taken to be an objective property of external objects and the other property (colour) isn't. This is a problem for the OP model, in that it can't properly explain why making this distinction is necessarily warranted.
An interpretation of shape is made because the mind has created a map, during infancy, of the neurons in the visual cortex related to the visual field. When observing a strawberry, those neurons in the map that correspond to the photoreceptors that have input from the shape in the visual field (the strawberry) will be active, but only those neurons within that shape that the mind interprets as 'red'. As a loose analogy, think of a billboard filled with small red, green, and blue lights, all evenly distributed. Now switch on a matrix of lights in the shape of a strawberry in the center, but only the red lights, leaving the green and blue lights within that shape off.
Nice example. And as fromderinside says, in terms of vision it seems to be about edge detection, which I believe has to do with detecting contrasts (essentially when the input hitting one part of the retina differs from what is hitting an adjacent part).
It annoys me to have to say it, but that doesn't seem to help either of us with the underlying problem (of whether shape is an independent, objective property of objects and colour isn't). Or do you think it does?
I think you are now definitely into artifacts of the sensing technique rather than a property of the object being measured. 'Brightness' could tell us how our particular chosen measurement technique interacts with the object but not necessarily about the object in and of itself. A pane of glass has very low brightness (essentially invisible) to our chosen measurement technique, light, but damn bright (loud) to a bat's chosen measurement technique (sound).
Temporarily staying with the issues of contrast and edge detection specifically, here (in the images below) is an interesting phenomenon (involving our favourite example, strawberries). In some ways, I think it gets back to your thoughts on how different physiologies may fundamentally change how organisms perceive the world, including, in one of your examples, bats (using sonar) and in one of my examples, bottom-of-the-deep-ocean creatures that perceive their surroundings by detecting electricity.
If you recall, I extended the OP model beyond saying that the world outside our heads is not actually coloured, to what seemed to be the next implication, that it is not even objectively what we would call bright either, that we just see it that way (ie the way a hypothetical sentient bat or deep sea shark, operating in the absence of light, might see it if it had vivid, conscious brain experiences to accompany its perception).
Scotopic vision is our vision in low light conditions, when rods take over from cones. Rods are (so I read) most sensitive to (have peak sensitivity at) wavelengths associated with blue/green, and are less sensitive to wavelengths associated with red.
Scotopic: the red parts of the strawberry are now almost indistinguishable from the black background; in other words, there is a lack of contrast, and a resultant lack of edge detection.
The above phenomenon is known as the Purkinje effect, and was apparently one of the early clues that we have more than one type of light detector.
https://en.wikipedia.org/wiki/Purkinje_effect
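As a rough illustration of why the reds drop out under scotopic vision, here is a minimal sketch. The Gaussian curves below are crude stand-ins for the real (tabulated) CIE luminous efficiency functions; the peak wavelengths are the commonly cited ~555 nm (photopic) and ~507 nm (scotopic), but the curve width and the chosen "red" and "green" wavelengths are assumptions for illustration only:

```python
import math

# Illustrative Gaussian approximations of the eye's luminous efficiency
# curves. Real CIE curves are tabulated; these peaks/widths are rough.
def efficiency(wavelength_nm, peak_nm, width_nm=60.0):
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def photopic(wl):   # cone-mediated daylight vision, peak ~555 nm
    return efficiency(wl, 555.0)

def scotopic(wl):   # rod-mediated low-light vision, peak ~507 nm
    return efficiency(wl, 507.0)

red, green = 620.0, 530.0  # assumed wavelengths for a red and a green surface

# Under scotopic vision, red falls much further down the sensitivity curve,
# so red surfaces look far dimmer relative to green ones (Purkinje effect).
print(f"photopic red/green sensitivity ratio: {photopic(red) / photopic(green):.2f}")
print(f"scotopic red/green sensitivity ratio: {scotopic(red) / scotopic(green):.2f}")
```

Even with these crude curves, the scotopic red/green ratio comes out roughly an order of magnitude smaller than the photopic one, which is the loss of contrast described above.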
So, my OP-model-related question is therefore: are certain objects, objectively and externally, brighter than their surroundings, or are they not?
That doesn't seem to be a problem. An interpretation of shape is made because the mind has created a map, during infancy, of the neurons in the visual cortex related to the visual field. When observing a strawberry, those neurons in the map that correspond to the photoreceptors that have input from the shape in the visual field (the strawberry) will be active, but only those neurons within that shape that the mind interprets as 'red'. As a loose analogy, think of a billboard filled with small red, green, and blue lights, all evenly distributed. Now switch on a matrix of lights in the shape of a strawberry in the center, but only the red lights, leaving the green and blue lights within that shape off.
"In infancy" has little to do with it, since the NS is pretty much defined before birth with respect to where input comes from and where NS processes collect information.
Because that is true, I start from the more or less universal observation that senses are arranged in the NS according to where they are sensed in the world. Skin senses, even pain, are arranged in a homunculus in motor cortex. Vision is spatially and frequency-represented in the visual cortex. Sound is tonotopically arranged in the auditory cortex. Etc. So the sensory spaces are defined in accordance with how they were transduced, even if you hate that word. The wiring is in; the precursors to needed processing capabilities are in. What remains is the continuation of input defining actual stimuli from a world outside the womb.
There is a lot for the NS to get straight, but one of those things is not that a capability to map needs to be developed, since it is already pretty much in place. Hell, if such development were needed in other mammals, they would already have consciousness, since they are pretty much wired as we are and they can get about pretty well after dropping to earth.
IMHO we all need to drop back and think about what humans are evolutionarily related to before we go inventing needs in humans for this and that.
We do have a sense of spatial awareness, developed early in life... this would seem to have been the result of mental mapping through a combination of various sensory inputs blended into a whole. A newborn infant does not seem to focus on anything (because nothing makes sense yet, only a cacophony of input from all the senses?). I assume that taste is the first we make some sense of. Sight would seem to be next but, since there is so much information to be processed, an understanding probably takes quite a while. Spatial awareness would seem to need inputs from several senses to develop. The mental mapping required for this awareness would incorporate touch (which informs us of shape plus more), vision (which confirms shape and provides other information), hearing (which informs us of the direction of something that isn't felt or seen), etc.
So I guess that was a long-winded way of saying that shape is a property of an object that occupies a specific volume within our sense of spatial awareness, can be confirmed by both touch and sight, and can be measured with precision. Our visual image of the shape of an object can be mistaken because of shading or perspective, but its form can be determined through touch. Our vision is more easily fooled, likely because we live in an (at least) three-dimensional world while our sight gives us an essentially two-dimensional representation, with depth being interpreted and perceived often by subtle clues. Our sense of touch gives us truer three-dimensional information.
Color is not confirmed by any other sense so could be (and likely is) only an artifact of the detection process. In the series of events that is required for detection of 'color', it is difficult for me to imagine how a stimulus that results in a very different form of stimulus that results in yet another very different form of stimulus that results in our sense of 'color' would be identical to some property of the object seen.
I think you are now definitely into artifacts of the sensing technique rather than a property of the object being measured. 'Brightness' could tell us how our particular chosen measurement technique interacts with the object but not necessarily about the object in and of itself. A pane of glass has very low brightness (essentially invisible) to our chosen measurement technique, light, but damn bright (loud) to a bat's chosen measurement technique (sound).
I think we are still faced with the problem of "what is the definition of color being used". My preferred definition would mean that objects can't have color but have some property that results, after several steps in a process, in our sensing 'color'.
IMHO we all need to drop back and think about what humans are evolutionarily related to before we go inventing needs in humans for this and that.
You are the one who is arguably inventing a need, namely that something needs to be a property of a stimulus before it can be a property of mental experience. There are so many examples of where this is not the case that the principle is totally accepted and established.
Secondly, the idea that senses are either arranged or fully formed at or before birth is very unlikely. Just for starters, an infant’s brain may have the same number of neurons as an adult’s, but the adult brain has ten times the number of interconnections (50 trillion versus 500 trillion).
You seem to be conflating two very different things, the hardwiring of the brain and the human's mental development to the point of making sense of the sensory inputs. Or is it that you have never had interaction with a newborn infant and subsequently watched its development?
That's easy. A newborn infant doesn't do that. The nipple is placed in the newborn infant's mouth by the mother to trigger the suckling reflex (the mother had already developed her spatial awareness long before). The suckling reflex can also be triggered by rubbing the lip with a finger, no nipple anywhere near. It isn't until later, after the infant learns a bit, that it seeks the nipple....... Alternatively, try explaining a baby's move of hand and face toward a nipple as unfocused, needing development.
And again, I haven't seen anyone say that the 'basic wiring' isn't in place at birth. It is the interpreting of the sensory input through that 'basic wiring' that is learned through experience.
I'm not. I'm suggesting that mammalian evolution has proceeded to produce a nervous system that is sensorily, physically organized at birth. That says nothing about how much or little one makes of such as color. It does suggest that color is organized spatially in the center of direct vision, and that sensitivity to moving things and the processing of information during darkness is located more peripherally, as evidenced by a baby turning its head in response to a moving target outside the foveal POV. Even at birth, convergent innervation between different receptors of one small region of light frequency is collected by a single ganglion cell. Color is physically implicated.
All the above does not suggest that much more organization of basic color processing goes on during the first few months of baby's postnatal life.
Why are you guys the all-or-none gang? Basic structure is present. Basic color structure is present. Even basic focus and search processes are working.
What I'd do if I had my druthers would be to focus on structures further from primary sensation, like relating sense to combinations of sense and meaning and stuff like that. I wouldn't try to shove that down to color or visual-feature-forming processes beyond the obvious.
My basic point is that receptors are the source of information about the color one sees, and that is solidified by basic connections already being in place, linked to specific color, when it is introduced as information to the nervous system.
That's easy. A newborn infant doesn't do that. The nipple is placed in the newborn infant's mouth by the mother to trigger the suckling reflex (the mother had already developed her spatial awareness long before). The suckling reflex can also be triggered by rubbing the lip with a finger, no nipple anywhere near. It isn't until later, after the infant learns a bit, that it seeks the nipple....... Alternatively, try explaining a baby's move of hand and face toward a nipple as unfocused, needing development.
A new study from Italy suggests that one reason newborns are drawn to the nipple is because it is slightly warmer than the surrounding skin.
A higher nipple temperature could make it easier for a newborn to find it and could help explain the phenomenon of newborns just minutes old who somehow clamber up to the nipple, which researchers refer to as "breast crawl," according to the study, published today (July 19, 2017) in the journal Acta Paediatrica.
You are wrong. https://www.livescience.com/59847-newborns-breastfeeding-temperature.html
I'm almost ready to label you a Tr..... fact inventor.
I would have, except my intention in my post was to imply more than the presence of neonate intention ....
Sometimes I play fair.
The mother is instructed to position the infant so that it is nose to nipple (the mother is positioning the infant, not the infant chasing the nipple):
Once the baby is held in a position that is comfortable, presenting the nipple can help with latching on. The mother should hold the baby so that the head is level with the breast, nose to nipple. Then, the baby should be turned so the mother and baby are tummy-to-tummy. Support the breast with the free hand that isn't holding the baby, with all four fingers underneath it, away from the areola (darkened area around the nipple) to best present the nipple, recommended Terry Bretscher, a nurse and lactation supervisor at Pomona Valley Hospital Medical Center.
Many experts recommend tickling the baby's lower lip with the nipple and waiting for the baby's mouth to open wide before offering the breast. Then, pull the baby in to latch on. The tip of the baby’s nose and chin should be touching the breast during proper positioning. Mothers shouldn’t worry about suffocation. The baby will pull off if unable to breathe.
The color temperature of a light source is the temperature of an ideal black-body radiator that radiates light of a color comparable to that of the light source. Color temperature is a characteristic of visible light that has important applications in lighting, photography, videography, publishing, manufacturing, astrophysics, horticulture, and other fields. In practice, color temperature is meaningful only for light sources that do in fact correspond somewhat closely to the radiation of some black body, i.e., light in a range going from red to orange to yellow to white to blueish white; it does not make sense to speak of the color temperature of, e.g., a green or a purple light. Color temperature is conventionally expressed in kelvins, using the symbol K, a unit of measure for absolute temperature.
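The "red to orange to yellow to white to blueish white" progression above follows directly from black-body physics: hotter radiators peak at shorter wavelengths. A minimal sketch using Wien's displacement law (the temperature figures in the loop are typical textbook values, chosen here for illustration):

```python
# Wien's displacement law: an ideal black body's peak emission wavelength
# is inversely proportional to its temperature.
WIEN_B = 2.898e-3  # Wien's displacement constant, in metre-kelvins

def peak_wavelength_nm(temp_k):
    """Peak emission wavelength (nm) of a black body at temp_k kelvins."""
    return WIEN_B / temp_k * 1e9

# Illustrative sources (temperatures are rough, commonly quoted values).
for label, t in [("candle flame", 1850), ("incandescent bulb", 2700),
                 ("daylight", 5500), ("blue sky", 10000)]:
    print(f"{label}: {t} K -> peak ~{peak_wavelength_nm(t):.0f} nm")
```

Lower colour temperatures peak at longer (redder) wavelengths, which is why "warm" light is orange and "cool" light is blueish.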
Stellar evolution is the process by which a star changes over the course of time. Depending on the mass of the star, its lifetime can range from a few million years for the most massive to trillions of years for the least massive, which is considerably longer than the age of the universe. The table shows the lifetimes of stars as a function of their masses.[1] All stars are formed from collapsing clouds of gas and dust, often called nebulae or molecular clouds. Over the course of millions of years, these protostars settle down into a state of equilibrium, becoming what is known as a main-sequence star.
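With the table itself not reproduced here, the mass-lifetime relationship it summarizes can be sketched with a standard back-of-the-envelope scaling: luminosity grows roughly as mass to the 3.5 power while available fuel grows only linearly, so lifetime falls off steeply with mass. The exponent and the 10-billion-year normalization for a Sun-like star are common approximations, not values taken from the quoted text:

```python
# Rough main-sequence lifetime scaling: fuel ~ M, luminosity ~ M^3.5,
# so lifetime ~ M / M^3.5 = M^-2.5, normalised to ~10 Gyr at 1 solar mass.
def main_sequence_lifetime_gyr(mass_solar):
    """Approximate main-sequence lifetime in billions of years."""
    return 10.0 * mass_solar ** -2.5

# A massive star burns out in millions of years; a red dwarf lasts
# trillions, far longer than the current age of the universe.
print(f"20 solar masses: ~{main_sequence_lifetime_gyr(20.0) * 1000:.1f} Myr")
print(f"0.1 solar masses: ~{main_sequence_lifetime_gyr(0.1):.0f} Gyr")
```

This crude power law reproduces the span quoted above, from a few million years for the most massive stars to trillions of years for the least massive.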
The emission spectrum of a chemical element or chemical compound is the spectrum of frequencies of electromagnetic radiation emitted due to an atom or molecule making a transition from a high energy state to a lower energy state. The photon energy of the emitted photon is equal to the energy difference between the two states. There are many possible electron transitions for each atom, and each transition has a specific energy difference. This collection of different transitions, leading to different radiated wavelengths, makes up an emission spectrum. Each element's emission spectrum is unique. Therefore, spectroscopy can be used to identify elements in matter of unknown composition. Similarly, the emission spectra of molecules can be used in chemical analysis of substances.
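For the simplest case, hydrogen, the transition wavelengths can actually be computed in a few lines from the Rydberg formula; a small sketch for the visible (Balmer) lines, using the standard Rydberg constant:

```python
RYDBERG = 1.097373e7  # Rydberg constant for hydrogen, in inverse metres

def transition_wavelength_nm(n_low, n_high):
    """Wavelength (nm) emitted when an electron drops from n_high to n_low."""
    inverse_wavelength = RYDBERG * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1.0 / inverse_wavelength * 1e9

# The Balmer series (transitions down to n=2) lands in the visible range:
# the n=3 -> 2 line is the familiar red H-alpha line near 656 nm.
for n in (3, 4, 5):
    print(f"n={n} -> 2: {transition_wavelength_nm(2, n):.1f} nm")
```

Each allowed transition produces one discrete line, and the set of all such lines is the emission spectrum described above.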
A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation. The term "laser" originated as an acronym for "light amplification by stimulated emission of radiation".[1][2][3] The first laser was built in 1960 by Theodore H. Maiman at Hughes Research Laboratories, based on theoretical work by Charles Hard Townes and Arthur Leonard Schawlow.
A laser differs from other sources of light in that it emits light which is coherent. Spatial coherence allows a laser to be focused to a tight spot, enabling applications such as laser cutting and lithography. Spatial coherence also allows a laser beam to stay narrow over great distances (collimation), enabling applications such as laser pointers and lidar. Lasers can also have high temporal coherence, which allows them to emit light with a very narrow spectrum, i.e., they can emit a single color of light. Alternatively, temporal coherence can be used to produce pulses of light with a broad spectrum but durations as short as a femtosecond ("ultrashort pulses").
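The trade-off at the end of that paragraph, narrow spectrum versus ultrashort pulses, is a Fourier limit, and can be sketched numerically. The 0.441 figure is the standard time-bandwidth product for a transform-limited Gaussian pulse (an assumption about pulse shape, not something stated in the quoted text):

```python
# Time-bandwidth limit for a transform-limited Gaussian pulse:
# (pulse duration) x (spectral width) >= ~0.441. A femtosecond pulse
# therefore needs a broad spectrum, while a narrow-line laser must
# emit a long, temporally coherent wave train.
TBP_GAUSSIAN = 0.441

def min_bandwidth_hz(pulse_duration_s):
    """Minimum spectral width (Hz) for a Gaussian pulse of the given duration."""
    return TBP_GAUSSIAN / pulse_duration_s

print(f"10 fs pulse needs >= {min_bandwidth_hz(10e-15):.2e} Hz of bandwidth")
print(f"1 s-long wave train can be as narrow as ~{min_bandwidth_hz(1.0):.3f} Hz")
```

So "single colour" and "femtosecond pulse" are two ends of the same coherence trade-off, not independent features.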
Information can be thought of as the resolution of uncertainty; it is that which answers the question of "what an entity is" and thus defines both its essence and the nature of its characteristics. The concept of information has different meanings in different contexts.[1] Thus the concept becomes related to notions of constraint, communication, control, data, form, education, knowledge, meaning, understanding, mental stimuli, pattern, perception, representation, and entropy. Information is associated with data, as data represents values attributed to parameters, and information is data in context and with meaning attached.
Information also relates to knowledge, as knowledge signifies understanding of an abstract or concrete concept.[2]
In terms of communication, information is expressed either as the content of a message or through direct or indirect observation.
That which is perceived can be construed as a message in its own right, and in that sense, information is always conveyed as the content of a message.
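The "resolution of uncertainty" idea above has an exact quantitative form in Shannon's entropy, which measures the average uncertainty (in bits) resolved by learning an outcome. A minimal sketch:

```python
import math

# Shannon entropy: the average uncertainty, in bits, resolved by learning
# the outcome of a random variable with the given probability distribution.
def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin flip resolves exactly 1 bit of uncertainty; a fair
# four-way choice resolves 2 bits; a certain outcome resolves none.
print(entropy_bits([0.5, 0.5]))
print(entropy_bits([0.25] * 4))
print(entropy_bits([1.0]))
```

On this view, a message carries information precisely to the extent that its content was uncertain before it arrived.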