• Welcome to the Internet Infidels Discussion Board.

“Reality Goes Beyond Physics,” and more

Time is a dimension, a measurement dimension. Those steeped in scifi confuse the word dimension: time as dimensional measurement versus dimension as some kind of otherworldly reality.

At least scientifically you can't just step into and out of time. Metaphysically you can imagine it any number of ways, not subject to testing.

A metal rod has a length dimension measured in meters.

Velocity is meters per second.

In that context space-time simply has 4 measurement dimensions: x, y, z position in a coordinate system measured in meters, and time measured in seconds. Anything else is scifi and speculation.

Drop an object and time is a dimensional measurement of change in position. The falling object has a time dimension. The object has a position dimension in meters. Relative to an observer at a fixed position, the object in space-time has a position defined by (x, y, z, t), with t in seconds and x, y, z in meters.
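To make that concrete, here is a minimal sketch in Python of the dropped-object case, with time as just another measured coordinate. The gravity value and 20 m drop height are assumptions chosen only for the example:

```python
# Illustrative sketch: time as one measurement dimension among four.
# Assumes standard gravity g = 9.8 m/s^2 and a 20 m drop height
# (numbers chosen only for the example).
G = 9.8  # m/s^2

def position(t, x0=0.0, y0=0.0, z0=20.0):
    """The (x, y, z, t) event for an object dropped from rest,
    relative to a fixed observer: x, y, z in meters, t in seconds."""
    z = z0 - 0.5 * G * t * t  # free fall along z
    return (x0, y0, max(z, 0.0), t)

for t in (0.0, 1.0, 2.0):
    print(position(t))
```

Nothing mystical in there: t is a number read off a clock, exactly like z is a number read off a meter stick.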



Time is not an ethereal other reality you can travel in, it is a unit of measure like kilograms, meters, and degrees C.

Meters and seconds are scalars, they have magnitude but no direction. Velocity is a vector, it has magnitude and direction.
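A small illustration of that scalar/vector distinction in plain Python (the particular numbers are arbitrary):

```python
import math

# A scalar: magnitude only.
speed = 5.0  # meters per second

# A vector: magnitude *and* direction, as (vx, vy, vz) components.
velocity = (3.0, 4.0, 0.0)  # meters per second along each axis

def magnitude(v):
    """Collapse a vector to a scalar by discarding direction."""
    return math.sqrt(sum(c * c for c in v))

print(magnitude(velocity) == speed)  # True: same magnitude,
                                     # but the vector also says which way
```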

To ‘go back in time’ means to go back to a previous physical state of the universe. In Star Trek you walk through something and go to the future or past.

In the USA, NIST propagates the standard second. It is an international standard.



Scientific time is ticks on a clock. An atomic clock sets the standard.

And the Système International (SI) defines the standard units of measure and dimensions.



 
I am not well versed in philosophy or philosophical discussion, I should probably avoid these kinds of threads.
I have to take issue with the above. If I recall correctly, it was pood who somewhere remarked something along the lines of philosophy being first and foremost about being able to realize what questions need to be asked given a particular context.

I am taking issue with the above quoted remark because I took a quick glance at this OP and then this as well as this along with a few of the responses. The questions raised are good (by which I mean interesting) questions, and the few responses I read are, I would say, properly regarded in somewhat Kuhnian terms as normal science representations of the prevailing paradigm perspective. Might as well call them the paradigm catechism. You know, to be provocative. But correct nonetheless.

It is possible both to accept the science and question it. In fact, it is possible to both accept the science and doubt it at the same time. Without that very sort of questioning and doubting, science expires; it dies a slow death.

And that's all I have to say about that. Well, not really. But I am going to be busy halter-breaking for some time.
Your dissertation exemplifies my general view of philosophy. A lot of words with little practical value. In my 30 years on the job I never knew any engineer or anyone with science credentials to talk about or refer to philosophy. Never saw 'A Philosophical Guide On How To Do Science' on anyone's bookshelf.

Starting in the 19th century, modern model-based experimental science displaced Natural Philosophy.

Yes, I periodically dispute with pood over issues of science vs philosophy. That being said, I have no issues with pood, he is a reasoned person.

The topic would be a derail to another thread. I had a thread downstream.

Science is something you learn from facts and experience. Years of trial and error are part of it. We know the theories that succeeded, but few remember all that failed. Questioning is part of everything, not just science.

Engineering problem solving can be Socratic, or deductive, or inductive, or a combination. In groups it is a dynamic, fluid process with no rigid boundaries.

In practice, for me, you have to be able to temporarily hold two contradictory things to be true at the same time. Otherwise you can end up with rigid thinking that ignores options.
 
Libet’s experiments did not disprove free will, as he acknowledged. It’s a whole lotta nothing.
Libet's experiments did not prove that the conscious agent isn't responsible for his decision, because it's the conscious agent ONLY who makes it. You can't separate the brain from the agent who gives permission for the action to be executed. The courts don't say, "your brain made the decision, not your conscious self, so you're off the hook of culpability." This in no way means the agent was morally responsible or free to have chosen otherwise.

You can't be conscious of something before the event.
Of course not. But consciousness (e.g., or agency) is a prerequisite of decision-making.

No, it's not. Consciousness is based on prior information processing. It is the processing that determines the form that consciousness takes, thoughts, feelings, etc, and it is underlying processing that 'feeds' conscious experience as information is acquired and integrated into conscious form.

agency: the capacity, condition, or state of acting or of exerting power: operation


Consciousness is generated not only after the event, but after the senses have acquired the information and the brain has processed it. That's where the milliseconds of delay between the event and a response comes from. Reflex response being the fastest, nerve loops, etc, with prefrontal deliberation the slowest.
Even if it takes a millisecond of delay to reach conscious awareness, it takes a conscious will to make a decision.

Abstract

The real question that Libet's experiments raise is whether our conscious wills cause the willed actions. What is at issue is the effects rather than the causes of conscious will. The question is whether conscious will is impotent, not whether it is free. If conscious will is impotent, then we cannot control our actions by means of conscious will, and this disability might reduce our freedom of action. Libet's experiments raise or sharpen this new question. By raising a new issue in a new way, Libet's work made (and continues to make) many people rethink their assumptions. The assumptions at stake are both normative and descriptive. The relevant normative assumption is, roughly, that causation by conscious will is necessary for responsibility. The descriptive assumption that Libet questions is, again roughly, that conscious will causes the willed action. This chapter addresses these assumptions in turn. It concludes that Libet's experiments do not undermine responsibility in general, but they do illuminate some particular cases as well as common standards of responsibility.



Wrong, will is not autonomous, nor is will the decision maker.

Nothing can happen consciously before information is acquired, processed, and integrated with memory to enable recognition and understanding, at which point the event is represented in conscious form, including conscious will, which is the urge or impulse to act, and not some special veto function that is exempt from the process of cognition.
I’m just trying to establish that it is the conscious agent that permits the action to take place and therefore he is responsible for making that decision. This has nothing to do with moral responsibility. If someone runs a red light, it is his foot that is pushing on the accelerator to increase the speed. He is giving consent to this action. I’m not saying his decision is a free one.

The 'conscious agent' is the work of the brain. Conscious agency is not autonomous. Vetoing a decision (if there is time) is part and parcel of the process that made the decision in the first instance; it's just that new information altered the decision-making process before it was completed in favour of the first option. Which, if determinism is true, is the course the process must take.
 
First the event happens, then the senses acquire the information which is transmitted to the brain, which processes and represents some of that information in conscious form, sight, sound, smell, associated feelings and thoughts.
Consciousness is based on prior information processing. It is the processing that determines the form that consciousness takes, thoughts, feelings, etc, and it is underlying processing that 'feeds' conscious experience as information is acquired and integrated into conscious form.
Is this consciousness effective? Does it effect anything? Does it affect anything? Is it an inert by-product of information processing? Is this consciousness epiphenomenal?

Of course consciousness has an adaptive function, it's our mental representation of the external world, where we can see, feel, smell, taste the objects and events of the world and respond accordingly. 'Accordingly' is of course a part of the information acquisition, processing and mental representation of our brain-generated virtual world.
 
First the event happens, then the senses acquire the information which is transmitted to the brain, which processes and represents some of that information in conscious form, sight, sound, smell, associated feelings and thoughts.
Consciousness is based on prior information processing. It is the processing that determines the form that consciousness takes, thoughts, feelings, etc, and it is underlying processing that 'feeds' conscious experience as information is acquired and integrated into conscious form.
Is this consciousness effective? Does it effect anything? Does it affect anything? Is it an inert by-product of information processing? Is this consciousness epiphenomenal?

Of course consciousness has an adaptive function, it's our mental representation of the external world, where we can see, feel, smell, taste the objects and events of the world and respond accordingly. 'Accordingly' is of course a part of the information acquisition, processing and mental representation of our brain-generated virtual world.

Nice, clear and concise.
 
First the event happens, then the senses acquire the information which is transmitted to the brain, which processes and represents some of that information in conscious form, sight, sound, smell, associated feelings and thoughts.
Consciousness is based on prior information processing. It is the processing that determines the form that consciousness takes, thoughts, feelings, etc, and it is underlying processing that 'feeds' conscious experience as information is acquired and integrated into conscious form.
Is this consciousness effective? Does it effect anything? Does it affect anything? Is it an inert by-product of information processing? Is this consciousness epiphenomenal?

Of course consciousness has an adaptive function, it's our mental representation of the external world, where we can see, feel, smell, taste the objects and events of the world and respond accordingly. 'Accordingly' is of course a part of the information acquisition, processing and mental representation of our brain-generated virtual world.
There are two usual senses of adaptive. It can indicate a result. It can indicate a contribution. The question is whether conscious thinking is being regarded as effectual or by-product, as contributing or merely inert. Some physicalists - particularly of the reductive stripe - insist that, logically, conscious thinking is ineffectual and epiphenomenal. And, of course, that is a position from logic rather than an actually scientific fact: meaning it ain't necessarily so for sure. On the other hand, it seems that emergent physicalists would not be so restricted to epiphenomenalism.

Of course, it would be far more correct to say that it is not known how conscious thinking and brain physiology relate (at whatever level of reduction). But what people speculate despite not actually knowing is often telling and interesting in its own right.
 
First the event happens, then the senses acquire the information which is transmitted to the brain, which processes and represents some of that information in conscious form, sight, sound, smell, associated feelings and thoughts.
Consciousness is based on prior information processing. It is the processing that determines the form that consciousness takes, thoughts, feelings, etc, and it is underlying processing that 'feeds' conscious experience as information is acquired and integrated into conscious form.
Is this consciousness effective? Does it effect anything? Does it affect anything? Is it an inert by-product of information processing? Is this consciousness epiphenomenal?

Of course consciousness has an adaptive function, it's our mental representation of the external world, where we can see, feel, smell, taste the objects and events of the world and respond accordingly. 'Accordingly' is of course a part of the information acquisition, processing and mental representation of our brain-generated virtual world.
There are two usual senses of adaptive. It can indicate a result. It can indicate a contribution. The question is whether conscious thinking is being regarded as effectual or by-product, as contributing or merely inert. Some physicalists - particularly of the reductive stripe - insist that, logically, conscious thinking is ineffectual and epiphenomenal. And, of course, that is a position from logic rather than an actually scientific fact: meaning it ain't necessarily so for sure. On the other hand, it seems that emergent physicalists would not be so restricted to epiphenomenalism.

Adaptive in the sense that we are able to respond to the challenges of the external world in a beneficial way; we are not only able to survive, but thrive.

Of course, it would be far more correct to say that it is not known how conscious thinking and brain physiology relate (at whatever level of reduction). But what people speculate despite not actually knowing is often telling and interesting in its own right.

We don't know how the brain generates mind, a virtual subjective representation of the external world and self, consciousness, but it is clear that it does.

What else are the senses for, if not to convey information to the brain? What are lobes and structures of the brain for, if not to process information?

If the brain is not responsible for generating consciousness, why does damage to the brain alter consciousness, including the personality and character of the patient?

If the brain is not responsible for consciousness, why do chemical imbalances (LSD, etc) radically alter the mind, where a once solid-appearing world becomes psychedelic: distorted time, objects appearing to shift and change?

How is this not brain generated mind?
 
First the event happens, then the senses acquire the information which is transmitted to the brain, which processes and represents some of that information in conscious form, sight, sound, smell, associated feelings and thoughts.
Consciousness is based on prior information processing. It is the processing that determines the form that consciousness takes, thoughts, feelings, etc, and it is underlying processing that 'feeds' conscious experience as information is acquired and integrated into conscious form.
Is this consciousness effective? Does it effect anything? Does it affect anything? Is it an inert by-product of information processing? Is this consciousness epiphenomenal?

Of course consciousness has an adaptive function, it's our mental representation of the external world, where we can see, feel, smell, taste the objects and events of the world and respond accordingly. 'Accordingly' is of course a part of the information acquisition, processing and mental representation of our brain-generated virtual world.
There are two usual senses of adaptive. It can indicate a result. It can indicate a contribution. The question is whether conscious thinking is being regarded as effectual or by-product, as contributing or merely inert. Some physicalists - particularly of the reductive stripe - insist that, logically, conscious thinking is ineffectual and epiphenomenal. And, of course, that is a position from logic rather than an actually scientific fact: meaning it ain't necessarily so for sure. On the other hand, it seems that emergent physicalists would not be so restricted to epiphenomenalism.

Adaptive in the sense that we are able to respond to the challenges of the external world in a beneficial way; we are not only able to survive, but thrive.

Of course, it would be far more correct to say that it is not known how conscious thinking and brain physiology relate (at whatever level of reduction). But what people speculate despite not actually knowing is often telling and interesting in its own right.

We don't know how the brain generates mind, a virtual subjective representation of the external world and self, consciousness, but it is clear that it does.

What else are the senses for, if not to convey information to the brain? What are lobes and structures of the brain for, if not to process information?

If the brain is not responsible for generating consciousness, why does damage to the brain alter consciousness, including the personality and character of the patient?

If the brain is not responsible for consciousness, why do chemical imbalances (LSD, etc) radically alter the mind, where a once solid-appearing world becomes psychedelic: distorted time, objects appearing to shift and change?

How is this not brain generated mind?
The brain generates the mind. Okay. So the brain is not the mind, and the mind is not the brain. Okay. Got it. The brain generates consciousness. Again, okay. Based on all that, consciousness might be the mind or an aspect of the mind. Either way is fine. The brain effects. At the very least the brain effects or generates the mind. But there is still that nagging question: Does the mind which is not the brain effect, affect, or generate anything?
Adaptive in the sense that we are able to respond to the challenges of the external world in a beneficial way; we are not only able to survive, but thrive.
The brain is an adaptive result; it is both an evolved and a developmental result. Sure. Not a contentious issue. The brain is also effectual; it is generative. Is the generated mind which is not the brain also effectual? Does it produce, does it generate anything effectual? For instance, does this mind which is not the brain affect the brain? Or is it only the brain which does the effecting and, thereby, the affecting?

If the mind is the brain or the brain is the mind, then mind is a superfluous term. Actually, it would be worse than superfluous; it would be misleading inasmuch as it suggests that the mind is not the brain. But saying the brain is the mind - identifying brain with mind - does not really help sufficiently. Because there is still that consciousness matter. It would make no sense to say that the brain is consciousness whereas it seems prima facie more sensible to say the brain is the mind. But the difference in the relatively greater sensibleness for mind can be regarded as the product of linguistic habit. Even so, consciousness would be at least as superfluous as would be mind.

Still, if mind and/or consciousness are not effectual, then would it not be more accurate to speak strictly in terms of the brain which is effectual? Of course it would be. Oh, but then it becomes very difficult - hmmm, actually currently impossible - to discuss or even think about brain development. Such a conundrum! But interesting, nonetheless, because - and this is what it boils down to - it leaves one wondering what are the benefits that can even possibly be derived from this reductive way of regarding brain, mind, and consciousness wherein mind and consciousness are regarded as ineffectual epiphenomena?
 
Michael, I think the duality of brain/mind is very similar to, if not the same duality as, computer/(program/state/context).

I'm going to use an example from one of my first jobs handling the topic: we had a 787 simulator we needed to build. One of the features the customer (an international air carrier) wanted was specifically to be able to quickly load "scenarios" on the system.

The problem here is that the "system" was a collection of some 20 computers, and the system it loads on may not even have the same architecture as the system it was produced on.

Note here that "context" here is something rather grand: excepting parts that happen merely "around" and "behind" the system, every part of it is stopped, momentarily, and then the entire volatile memory region, the changing parts of every program, get stored to disk according to their virtual address locations. This is closed by taking the context of the context-switch and then storing that to disk too, and then firing off all that junk to the host PC.

The computer with all its programs on it is the "brain", but the "mind" is more about that "context", not the "thing" but the description of what it happens to be doing in the moment.
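A toy sketch of that save/restore idea, in Python. Every name here is made up for illustration; nothing is taken from the actual simulator. The object stands in for the "brain" (the machine and its programs), and the snapshot dict stands in for the "mind" (its momentary context):

```python
import copy

# Hypothetical toy system: the object is the "brain" (machine plus
# programs); the snapshot dict is the "mind" (its momentary context).
class ToySystem:
    def __init__(self):
        self.altitude = 0.0
        self.throttle = 0.0

    def snapshot(self):
        """Freeze the volatile state into a portable description."""
        return copy.deepcopy(self.__dict__)

    def restore(self, context):
        """Load a previously saved scenario onto this (or another) system."""
        self.__dict__.update(copy.deepcopy(context))

sim = ToySystem()
sim.altitude, sim.throttle = 10000.0, 0.8
saved = sim.snapshot()   # the "context", separable from the machine

other = ToySystem()
other.restore(saved)     # same scenario on different hardware
print(other.altitude)    # 10000.0
```

The point of the sketch: the context is not the machine, it is a description of what the machine happens to be doing, and it can be moved between machines.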

LLMs as systems have reinforced the idea of a systematic context, if only because the momentary context of an LLM can, in at least one of those moments, be distilled to a "seed" and data block. In turn that block contains mostly sensible, human-interpretable contents in a way that is largely model-independent or at least model-translatable with some framework "magic".

In this way language may embed experience for that which translates such embeddings into a virtualized token.

I'll admit I have conflicts with some folks who study language here because of how I access the idea of it and what language means to me.

Going back to the OP, I think that it really relates to the "types of causes" discussion, because people looking for these principles in biology really need to start looking at the principles of automation and autonomy outside or entirely independent of biology, in the most simplified setting possible.

Ultimately there are values a process generates because of its nature, such as:

Constant values held for reference within the system;

Volatile values which when measured represent something "from a hardware driver" or "sensor" of some kind;

Internal variables which hold some kind of memory of volatiles or partial work comparing constants to volatiles;

Text-like data, which is constant but definitional of process descriptions rather than momentary process inputs.

All of these participate in different parts of operation, and they aren't exactly distinct, for all that they describe the parts of a dynamic system.

I think it is shooting ourselves in the foot to consider these "metaphors" rather than "general principles around 'systems'."
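Those four kinds of values can be sketched in a toy thermostat-like loop. This is entirely illustrative and assumes nothing about any particular system; every name is invented for the example:

```python
import random

# Entirely illustrative toy loop showing the four kinds of values.
SETPOINT = 20.0  # constant: held for reference within the system

CONTROL_RULE = "if reading < SETPOINT then heat on"  # text-like: definitional
                                                     # of the process, not an input

integral_error = 0.0  # internal variable: memory of past volatiles

def read_sensor():
    """Volatile: each read stands in for a value 'from a hardware driver'."""
    return 20.0 + random.uniform(-1.0, 1.0)

def step():
    """One pass comparing the constant to a volatile, updating memory."""
    global integral_error
    reading = read_sensor()
    integral_error += SETPOINT - reading  # partial work retained between steps
    return reading < SETPOINT             # the behaviour CONTROL_RULE describes

heat_on = step()
```

Even in a system this small the categories blur at the edges, which is the "not exactly distinct" point above.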
 
The computer with all its programs on it is the "brain", but the "mind" is more about that "context", not the "thing" but the description of what it happens to be doing in the moment.
Jarhyn, I appreciate our need to resort temporarily (even if often) to analogies, metaphors, and similes. However, because even the best analogies are not so much actual descriptions as they are stand-ins for descriptions or stimuli for hoped-for better descriptions, analogies often help us realize what are the areas upon which we should probably turn our focus. Whether brains or minds are computable (or computerizable?) is a separate issue, but in terms of brain as computer and programs with mind as description of brain functioning in some specified context, the question at hand would be whether a description of computer functioning affects computer functioning. What is apt about the description analogy is that it presents the description as something other than the functioning itself, just as the mind was put forth as if it were something other than the functioning of the brain, with that functioning somehow generating the mind as description (to continue with the analogy). The question in terms of brain distinguished from mind is whether the mind as a description of brain functioning acts effectually upon brain functioning.

Of course, if it does affect by effecting, then we recognize that we mean something more substantial than a simple description, but that just shows a limit for the analogy, and then, were it necessary or useful, we could work to put forth something in terms other than description.

On the other hand, if the description is inert with regards to the functioning, then we have description as an unnecessary variable at best. I guess an analogy here could be something like this: Remember the old days (e.g., the MS-DOS days) when resources were so scant that you found yourself sometimes having to work to minimize variables? Or, to put it another way, scientific experiments often depend on identifying and isolating variables. Here the brain-mind matter can be approached in terms of isolating linguistic variables. Is the mind variable linguistically necessary if the mind does not effect at all to affect brain functioning?

A problem for reductionism is that it eventually loses (maybe even gets used to deny that there ever was what we regard as actual) content. There is no time, temperature, or pressure at the quantum level. That is not a problem for real-world - uh, I mean macrophysical - science. Macrophysical scientists can say: Sure, sure, somehow quantum level occurrences effect macrophysical content. And then they proceed with their own investigations. If called upon, most real-world science will pay homage to the quantum before moving on to seeking macro-level solutions and understandings. Basically, that is to say that macrophysical science is relatively non-reductionist. (By the way, this is not denying all usefulness to (thinking about) quantum level science.)

So, much - even most - science and scientific understanding can advance without being burdened with the problems introduced by and associated with radical reductionism. Science-istic or science-y philosophizing is not analogous to science in this regard, and the distinction between brain and mind is more a philosophy issue than it is a science matter. We hope that scientists will eventually understand the brain, neurology, and physiology well enough to be able to devise therapies for diseases of brain functioning such as those under the broad dementia category, but those advances will not depend upon thinking that the brain-dependent mind is epiphenomenal. In fact, I expect that regarding the brain-dependent so-called mind as epiphenomenal would be a serious impediment to therapies development.
 
It's not analogical. I'm making a stand on this: I do not mean that human consciousness is merely an analog for computer virtualization, though it is literally "analog" in nature.

These are the *exact same category of phenomena*, with respect to the language I use.

In the same way you can say something about "Galois groups" in general, you can say something about this category in general, and know it with the same confidence as anything else provable as true under the axioms of math.

I am not using them as analogs for understanding, I am observing this is the same principle operating in different places to produce a result for the same reasons.

It is not whether brains are "computable", though this follows, or "computed" although this follows too in terms of how "chemistry" would meet a definition of "abstract computation", but that brains are "computers", as in "the immediate instantiated article of a class 'computer'."

Computers are well understood, and there is isomorphism between the languages that we use to discuss the class of systems from different perspectives.
 
First the event happens, then the senses acquire the information which is transmitted to the brain, which processes and represents some of that information in conscious form, sight, sound, smell, associated feelings and thoughts.
Consciousness is based on prior information processing. It is the processing that determines the form that consciousness takes, thoughts, feelings, etc, and it is underlying processing that 'feeds' conscious experience as information is acquired and integrated into conscious form.
Is this consciousness effective? Does it effect anything? Does it affect anything? Is it an inert by-product of information processing? Is this consciousness epiphenomenal?

Of course consciousness has an adaptive function, it's our mental representation of the external world, where we can see, feel, smell, taste the objects and events of the world and respond accordingly. 'Accordingly' is of course a part of the information acquisition, processing and mental representation of our brain-generated virtual world.
There are two usual senses of adaptive. It can indicate a result. It can indicate a contribution. The question is whether conscious thinking is being regarded as effectual or by-product, as contributing or merely inert. Some physicalists - particularly of the reductive stripe - insist that, logically, conscious thinking is ineffectual and epiphenomenal. And, of course, that is a position from logic rather than an actually scientific fact: meaning it ain't necessarily so for sure. On the other hand, it seems that emergent physicalists would not be so restricted to epiphenomenalism.

Adaptive in the sense that we are able to respond to the challenges of the external world in a beneficial way; we are not only able to survive, but thrive.

Of course, it would be far more correct to say that it is not known how conscious thinking and brain physiology relate (at whatever level of reduction). But what people speculate despite not actually knowing is often telling and interesting in its own right.

We don't know how the brain generates mind, a virtual subjective representation of the external world and self, consciousness, but it is clear that it does.

What else are the senses for, if not to convey information to the brain? What are lobes and structures of the brain for, if not to process information?

If the brain is not responsible for generating consciousness, why does damage to the brain alter consciousness, including the personality and character of the patient?

If the brain is not responsible for consciousness, why do chemical imbalances (LSD, etc) radically alter the mind, where a once solid-appearing world becomes psychedelic: distorted time, objects appearing to shift and change?

How is this not brain generated mind?
The brain generates the mind. Okay. So the brain is not the mind, and the mind is not the brain. Okay. Got it. The brain generates consciousness. Again, okay. Based on all that, consciousness might be the mind or an aspect of the mind. Either way is fine. The brain effects. At the very least the brain effects or generates the mind. But there is still that nagging question: Does the mind which is not the brain effect, affect, or generate anything?
Adaptive in the sense that we are able to respond to the challenges of the external world in a beneficial way; we are not only able to survive, but thrive.
The brain is an adaptive result; it is both an evolved and a developmental result. Sure. Not a contentious issue. The brain is also effectual; it is generative. Is the generated mind which is not the brain also effectual? Does it produce, does it generate anything effectual? For instance, does this mind which is not the brain affect the brain? Or is it only the brain which does the effecting and, thereby, the affecting?

If the mind is the work of a brain, how is mind in any way separate from the brain?

As mind is the physical activity of the brain (chemicals alter mind function, etc), the function of the mind is part and parcel of representing the world consciously to enable careful and considered response, understanding, pattern recognition, planning, things that unconscious reflex actions do not permit.

If the mind is the brain or the brain is the mind, then mind is a superfluous term. Actually, it would be worse than superfluous; it would be misleading inasmuch as it suggests that the mind is not the brain. But saying the brain is the mind - identifying brain with mind - does not really help sufficiently. Because there is still that consciousness matter. It would make no sense to say that the brain is consciousness whereas it seems prima facie more sensible to say the brain is the mind. But the difference in the relatively greater sensibleness for mind can be regarded as the product of linguistic habit. Even so, consciousness would be at least as superfluous as would be mind.

Mind refers to a specific attribute or activity of a brain. Just like we refer to the senses, nerve impulses and so on. There is no mind in the absence of a brain with the capacity to generate a mind.



Still, if mind and/or consciousness are not effectual, then would it not be more accurate to speak strictly in terms of the brain which is effectual? Of course it would be. Oh, but then it becomes very difficult - hmmm, actually currently impossible - to discuss or even think about brain development. Such a conundrum! But interesting, nonetheless, because - and this is what it boils down to - it leaves one wondering what are the benefits that can even possibly be derived from this reductive way of regarding brain, mind, and consciousness wherein mind and consciousness are regarded as ineffectual epiphenomena?

Why would mind/consciousness not be effectual? Consciousness evolved because it provided an advantage. Light sensitive cells provided an animal with rudimentary sight, basic light and dark images, an approaching shadow may be a predator, and this information allowed the animal to avoid getting eaten. Being adaptive, animals with sight survived more often and passed this ability on to future generations, each more refined than the last.
 
it is possible to both accept the science and doubt it at the same time
Silly me, I kinda thought it was impossible to accept science -or to pursue scientific methodology - without doubting everything all along the way and tirelessly imagining every way a conclusion could be wrong.
I really like your ability to bring logic down to brass tacks, but it can cause a problem when it doesn't fit into your formula. Let's work this out instead of being enemies.
 
Why would mind/consciousness not be effectual?
I was just wondering whether you thought consciousness ineffectual. You do not. Good. Why would anyone think of consciousness as ineffectual? Interesting question even though I do not think there is good reason for holding to conscious thought as ineffectual. Nonetheless, epiphenomenalism - "the view that mental events are caused by physical events in the brain, but have no effects upon any physical events" - persists. Then there is eliminative materialism:
Eliminative materialism (or eliminativism) is the radical claim that our ordinary, common-sense understanding of the mind is deeply wrong and that some or all of the mental states posited by common-sense do not actually exist and have no role to play in a mature science of the mind.
Here is another introduction:
Eliminative materialism is a revisionary view in the philosophy of mind and of cognitive science, according to which our ordinary, folk psychological notions and categories of mental states are empty, that is, they do not stand for anything in objective reality. Ordinary categories of mental states include propositional attitudes (such as belief, desire, fear) and phenomenal states (such as the subjective aspect of pain, pleasure, colour perception, etc.). The main point of eliminative materialism is that categorization of mental states according to our ordinary, everyday understanding is illegitimate, because it is not supported by the best scientific taxonomies that deal with mental life, such as neuroscience. Some eliminative materialist authors add the further claim that future neuroscience will in fact eliminate all non-scientific vocabulary related to the domain of mental states.
 


Well, the point of all this is just to say that the brain is the sole agency of mind, consciousness, thought, decision making, and motor response: impulses sent to muscle groups to perform an action.

And that this amazing ability is enabled by the neural architecture and its associated attributes and abilities.

That decisions, the workings of a brain, come not from the notion of free will or some autonomous feature able to veto decisions, but from the state and condition of the brain in any given instance of decision making; it is the state and condition of the brain in that instance, and that alone, that determines the action taken.
 
The CFT: Consciousness Fart Theory.

CFT says consciousness is like a fart. You can't tell where it comes from and wherever you go you seem to be in the middle of it.
 
So, speaking of quasicrystals and aperiodic monotiles, apparently there is a relationship between these as the "shadow" of higher-dimensional regular hyper-structures.

Apparently this may provide an answer to how to "generate" the initial condition of an infinite "flatland" using a finite "shape" and a projection.

This would also imply that generating an infinite 3D version of a flatland only requires a static multidimensional object and being able to translate from a "shadow" to a "shape" to calculate a whole universe à la the flatland model.

This phenomenon itself, particularly with the vortex component, could well explain how spooky action at a distance manifests in such systems through mathematical relationships of the higher-dimensional projection of the system.
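For what it's worth, the "finite shape plus projection" idea sounds like the cut-and-project construction used to describe quasicrystals. Here is a minimal Python sketch (my own illustration, not anything claimed in the posts above): the 1D Fibonacci chain, the simplest quasicrystal, emerges as the "shadow" of the ordinary 2D integer lattice, keeping only points inside a finite acceptance strip and projecting them onto a line of golden-ratio slope.

```python
# Cut-and-project sketch: a 1D quasicrystal (the Fibonacci chain) obtained
# as the "shadow" of the ordinary 2D integer lattice. Lattice points lying
# inside a finite acceptance strip (the finite "shape") are projected onto
# a line of golden-ratio slope; the gaps between projected points come in
# exactly two lengths, L and S, arranged aperiodically.
import math

PHI = (1 + math.sqrt(5)) / 2      # golden ratio (irrational slope)
NORM = math.sqrt(1 + PHI ** 2)    # normalizer for the projections

def fibonacci_chain(n_lattice=30):
    """Project Z^2 points inside the acceptance strip onto the physical line."""
    width = (1 + PHI) / NORM      # strip width = projected unit square
    points = []
    for i in range(-n_lattice, n_lattice + 1):
        for j in range(-n_lattice, n_lattice + 1):
            perp = (-i + j * PHI) / NORM    # distance from the line
            if 0 <= perp < width:           # inside the finite window?
                par = (i * PHI + j) / NORM  # position along the line
                points.append(par)
    points.sort()
    gaps = [round(b - a, 6) for a, b in zip(points, points[1:])]
    long_gap = max(gaps)                    # only two gap lengths occur
    return "".join("L" if g == long_gap else "S" for g in gaps)

chain = fibonacci_chain()
print(chain)   # aperiodic: never contains "SS", never contains "LLL"
```

The point of the sketch is the same as the post's: a finite window plus a projection is enough to specify an infinite aperiodic structure, since the pattern is determined entirely by the higher-dimensional lattice and the cutting angle.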

 
According to Quantum Consciousness Theory, consciousness is quantized by the concton.

Conctons are massless particles that are not limited by c or by spacetime. Conctons explain people seeing the future and the paranormal.
 