• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

American beliefs in Evolution

You simply insist that "the function of neurons is as a belief engine" and "Neurons encode beliefs" regardless of the absence of evidence to support your assertion, and in the face of evidence contrary to your assertion.

Electromagnetic radiation/light reflection enabling vision, airborne molecules, vibrations in the air, etc., are not 'beliefs'; they are physical phenomena.

Your claims are patently absurd.
 
I watched a show on superstition.

Rats are let into a regular passage at the end of which is a lever that gives them a food pellet. They learned to press the lever.

After the rats had been conditioned, a delay was added before the lever would give a pellet.

Rats would go around in a circle or do some other thing before retrying. Some of the rats would walk through the passage and then repeat what they did before pressing the lever on the next try.

We naturally associate success, aka survival, with actions; it is a survival mechanism.

There is no purpose to our neurons and brain; it is what evolved through mutation and natural selection. There is no purpose to evolution.

Unless you invent one. Religion, philosophy, mysticism. In the extreme the woo-woo choo-choo is now leaving on track 9.
 
There is no purpose to our neurons and brain; it is what evolved through mutation and natural selection. There is no purpose to evolution.

Unless you invent one.

If any part of your purpose involves being alive, it serves your purpose. :)
 
You simply insist that "the function of neurons is as a belief engine" and "Neurons encode beliefs" regardless of the absence of evidence to support your assertion, and in the face of evidence contrary to your assertion.

Electromagnetic radiation/light reflection enabling vision, airborne molecules, vibrations in the air, etc., are not 'beliefs'; they are physical phenomena.

Your claims are patently absurd.
No, I produced a complete definition of "belief" which functions satisfactorily across all uses, described a neuron, and then illustrated why the neuron's configuration (indeed even a simple translator's configuration) instantiates belief systems.

Because all things that exist exist as physical phenomena, and people have beliefs, the beliefs people have must be held through the holding of some physical phenomena in some configuration.

The nerves are the primitive units of that configuration.

The only question is "how many neurons does it take to encode 'belief'?"

Seeing as how the neuron is a machine that traverses an error surface for a problem to render a solution based on feedback from the backpropagation algorithm, it satisfies the definition of a belief without needing a neighbor: the belief that a solution will yield less backprop reaction as a result of the interaction.
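As a sketch of the mechanism being described, here is a toy single-unit learner that descends an error surface by the same gradient step backpropagation applies at each unit. All names and numbers are illustrative; this is a cartoon, not a claim about real neurons:

```python
# A minimal single-unit "belief" learner: one weight descends an
# error surface using the gradient step backpropagation applies at
# each unit. Illustrative only; real neurons are far messier.

def train(samples, lr=0.1, epochs=100):
    w = 0.0  # the unit's "belief" about how input maps to output
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x            # forward pass
            error = pred - target   # how wrong the current belief is
            w -= lr * error * x     # gradient step: revise the belief
    return w

# Data consistent with y = 2x; the unit converges on that "belief".
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(round(w, 3))  # converges near 2.0
```

In this toy, "less backprop reaction" is simply a smaller error term on the next pass through the same input.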
 
You simply insist that "the function of neurons is as a belief engine" and "Neurons encode beliefs" regardless of the absence of evidence to support your assertion, and in the face of evidence contrary to your assertion.

Electromagnetic radiation/light reflection enabling vision, airborne molecules, vibrations in the air, etc., are not 'beliefs'; they are physical phenomena.

Your claims are patently absurd.
No, I produced a complete definition of "belief" which functions satisfactorily across all uses, described a neuron, and then illustrated why the neuron's configuration (indeed even a simple translator's configuration) instantiates belief systems.

I don't recall that you offered a 'complete definition' of will and how it relates to the presence of 'will' rather than function in neurons.

A lot of things were said and time has passed, perhaps you can reiterate?


Seeing as how the neuron is a machine that traverses an error surface for a problem to render a solution based on feedback from the backpropagation algorithm, it satisfies the definition of a belief without needing a neighbor: the belief that a solution will yield less backprop reaction as a result of the interaction.

A neuron doesn't form 'convictions,' it isn't 'convinced' that a stream of information is true or false, it's just the first step in the process of a brain forming a coherent mental representation of the world and self, which then includes beliefs and assumptions provided by this higher order of information processing.

Basically:

Perceptual processing
• Superior colliculus

Modulation of cognition (memory, attention)
• Cingulate cortex
• Hippocampus
• Basal forebrain

Representation of emotional response
• Somatosensory-related cortices

Representation of perceived action
• Left frontal operculum
• Superior temporal gyrus

Motivational evaluation
• Amygdala
• Orbitofrontal cortex

Social reasoning
• Prefrontal cortex
 
I don't recall that you offered a 'complete definition of will and how it relates to the presence of 'will' rather than function in neurons
I'm not surprised. Your "logic" on the
A neuron doesn't form 'convictions,' it isn't 'convinced'
Yes it does. Yes it is. You're again waving around a term that you only define poorly so that you are not held to task on the stupid things you declare. You engage in pure sophistry on that front.

The fact English has many words that all say the same thing, but in different contexts, further lends to this outcome.

Beliefs are convictions. As to being "convinced", how convinced someone is is actually a function of belief in terms of connection to the reinforcement function. This is more a measure of "strength" or "immutability" of a belief structure, and that would then directly map to some factor in the backpropagation algorithm.

"Strong convictions" are then beliefs which backpropagation cannot traverse into for the sake of training the system.

This implies that in biological systems, systems where backpropagation is mediated by inputs from other neurons, this can change in strength, and cease in application, leading to static and unchanging beliefs: absolute conviction.
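A toy illustration of that mapping, with a hypothetical per-weight "plasticity" factor standing in for whatever the mediating mechanism would be; a factor of zero models a belief the training signal can no longer revise. Everything here is illustrative:

```python
# Sketch: per-weight "plasticity" factors. A factor of 0 models an
# absolute conviction: error feedback can no longer revise that belief.

def step(weights, plasticity, x, target, lr=0.1):
    pred = sum(w * xi for w, xi in zip(weights, x))
    error = pred - target
    return [w - lr * p * error * xi
            for w, p, xi in zip(weights, plasticity, x)]

weights = [1.0, 1.0]
for _ in range(200):
    # second weight is frozen (plasticity 0): backprop cannot traverse it
    weights = step(weights, plasticity=[1.0, 0.0], x=[1.0, 1.0], target=5.0)

print([round(w, 2) for w in weights])  # the frozen weight stays at 1.0
```

The trainable weight absorbs all of the error; the frozen one sits untouched, a static "conviction" in this cartoon.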

Indeed, it can be seen using this lens that belief, conviction, and even doubt are functions of primitive neural processes in general, and are not anything "special" at all.
 
I don't recall that you offered a 'complete definition of will and how it relates to the presence of 'will' rather than function in neurons
I'm not surprised. Your "logic" on the
A neuron doesn't form 'convictions,' it isn't 'convinced'
Yes it does. Yes it is. You're again waving around a term that you only define poorly so that you are not held to task on the stupid things you declare. You engage in pure sophistry on that front.

That's you. You assert that neurons are conscious without a shred of evidence.

Neuroscience assumes that consciousness emerges from neurons interacting with each other in the brain, not that each neuron is a conscious unit, but that consciousness emerges when readiness potential is achieved; a sufficiently complex brain.

The fact English has many words that all say the same thing, but in different contexts, further lends to this outcome.

You need to support your contention that single neurons are conscious. Words alone prove nothing.

Beliefs are convictions. As to being "convinced", how convinced someone is is actually a function of belief in terms of connection to the reinforcement function. This is more a measure of "strength" or "immutability" of a belief structure, and that would then directly map to some factor in the backpropagation algorithm.

Nah, there is no evidence that neurons need to convince themselves to acquire and process information; that is their evolved role, where physical makeup and position within the system determine the role they play.


"Strong convictions" are then beliefs which backpropagation cannot traverse into for the sake of training the system.

Neurons don't have convictions, they have a physical makeup that determines the role they play within the system.

Objects only have certain functionalities when they are put together in a certain way, and these functionalities do not exist in any of the components.


Basic Assumptions of Consciousness Research
''The majority of consciousness research is steeped in an evolutionary perspective and a fundamental assumption of ‘mind-brain unity’. Single-cell organisms do not need brains, because they interface directly with their environment through chemo-tactic receptors. The brain evolved as an information processor, to bring the ‘outside inside’ so that the whole organism is privy to environmental stimuli. Primitive brains react reflexively. The higher vertebrate brain emerged because natural selection favours brains that respond rapidly, yet are flexible enough to adapt to changing environments.''


This implies that in biological systems, systems where backpropagation is mediated by inputs from other neurons, this can change in strength, and cease in application, leading to static and unchanging beliefs: absolute conviction.

It does not imply that each and every neuron has convictions. Consciousness and convictions emerge through the interaction of all the neurons in a brain, connectivity, senses, memory function, etc.

Indeed, it can be seen using this lens that belief, conviction, and even doubt are functions of primitive neural processes in general, and are not anything "special" at all.


Yes, collective neural processes, which nobody disputes. But that was not your claim. Your claims extend to consciousness and convictions within not only single neurons but other mechanical systems such as computers. What about microchips, are they conscious as well?
 
That's you. You assert that neurons are conscious without a shred of evidence.
It's a trivially true statement purely based on the definition of "neuron" and "consciousness".

I pointed out how that works in my posts.
Neuroscience assumes that consciousness emerges from neurons interacting with each other in the brain
No, neuroscientists, individual people, make that assumption.

Neuroscience instead proves that those neuroscientists are wrong.

It does not require an interaction between neurons to create what I have defined as consciousness, and what I have defined as consciousness is suitable to semantically complete my statements about what that consciousness and things satisfying the definition are capable of in terms of the other offered definitions.

Again, you keep trying to pretend there's something special going on when everything going on is rather simple and mundane.

I have provided a definition of consciousness and a model for ascertaining whether one system is meaningfully conscious of another along some given axis of consciousness.

That you have tried to understand nothing with those definitions and are already out of ideas on how to use them is telling.
 
I have no idea why they don't just say ... If god created man, God created man through evolution.
 
I have no idea why they don't just say ... If god created man, God created man through evolution.
I have said that countless times to fundies. But they're fundies, and FSTDT. For instance when I say that to most of them, the Sunday School answer seems to be "That's not what God says!' Of course I say "If you're reading something else that God wrote, you're reading it wrong, or God didn't really write it or it was mis-translated or edited out by Queen Jimmie or..." and by then they're totally making crucifix signs and stuff...
 
That's you. You assert that neurons are conscious without a shred of evidence.
It's a trivially true statement purely based on the definition of "neuron" and "consciousness".

I pointed out how that works in my posts.
Neuroscience assumes that consciousness emerges from neurons interacting with each other in the brain
No, neuroscientists, individual people, make that assumption.

More than a mere assumption, it is what the evidence supports.
Neuroscience instead proves that those neuroscientists are wrong.

Hardly. It is you who makes extraordinary claims that are not supported by evidence.

Support your claim that individual neurons have convictions or beliefs, or are conscious. Your semantic sophistry is far from sufficient.

It does not require an interaction between neurons to create what I have defined as consciousness, and what I have defined as consciousness is suitable to semantically complete my statements about what that consciousness and things satisfying the definition are capable of in terms of the other offered definitions.

What you have defined as 'consciousness' is nothing of the sort. You mistake function for consciousness.


You try to relabel functionality as consciousness... essentially saying 'oh look, it has complex functionality, it must be conscious.'

Not so.

Function in artificial systems is a matter of design and build, and of evolution in organisms, which are not necessarily conscious: plants, microbes, etc.

Again, you keep trying to pretend there's something special going on when everything going on is rather simple and mundane.

Consciousness is special.

I have provided a definition of consciousness and a model for ascertaining whether one system is meaningfully conscious of another along some given axis of consciousness.

Your definition fails because function does not equate to consciousness.

That you have tried to understand nothing with those definitions and are already out of ideas on how to use them is telling.


Pffffft. Bluff and Bluster.
 
More than a mere assumption, it is what the evidence supports.
You never actually presented any evidence, just bald claims.
Hardly. It is you who makes extraordinary claims that are not supported by evidence
I gave you the evidence. You just ignored it as you usually do.
What you have defined as 'consciousness' is nothing of the sort. You mistake function for consciousness.
Now we all know you are full of shit.

What I have defined as "consciousness" actually exists and operates with the exact same application and implications. Here you are literally making a bald no-true-scotsman.

You're literally trying to hide an important element behind a useless term so you can pretend it doesn't exist.

Function in artificial systems is a matter of design
This is literally the genetic fallacy.

Form is all that matters. Intent matters not. Design is about intent, not form.

Function in all systems is exclusively a matter of mechanical structure rather than intent of design, and the fact is that you are attempting to strip the particular name of the function, "the function of consciousness within a system" just so you can't reasonably claim it doesn't exist.

It would be like saying that atoms don't exist, just particles. Yes, atoms are particles, but particles can specifically be atoms.

Your incredulity and shit takes will be filed right where they belong, in their circular forever-home.
 
More than a mere assumption, it is what the evidence supports.
You never actually presented any evidence, just bald claims.
Hardly. It is you who makes extraordinary claims that are not supported by evidence
I gave you the evidence. You just ignored it as you usually do.
What you have defined as 'consciousness' is nothing of the sort. You mistake function for consciousness.
Now we all know you are full of shit.

Nah, you are getting sad because you have no case to argue. Function, however complex the system, does not necessarily equate to consciousness or conscious function.

A neuron or a microchip does what it is evolved or designed to do, process information.

What I have defined as "consciousness" actually exists and operates with the exact same application and implications. Here you are literally making a bald no-true-scotsman.

Consciousness exists, just not under the conditions you invoke. Microchips are not conscious; neurons do not form beliefs or consciously ponder what they do as they play their role within the system.

You have no evidence for it, and no evidence exists.

You merely spin a web of BS in the hope of appearing to make a case for conscious computers, neurons with beliefs, etc, yet fail.

Fail because word play alone proves nothing. Playing with words, you are able to invoke, gods and goblins, spirits and demons, conscious computers and neurons that believe.....yet none of it is real. Just words.
 
Consciousness exists, just not under the conditions you invoke. Microchips are not conscious; neurons do not form beliefs or consciously ponder what they do as they play their role within the system.
You are conflating "consciousness" as in "to have an awareness of a state" with "internal reflection", which is to create an "awareness", a read, of an internal state.

A computer system is always conscious of its inputs, but it is not always internally reflective of them.

Indeed, I have spent some part of my morning speaking to a computer system which does form beliefs and actively ponders its own internal state.

But sure, go on ahead believing what you want. It will not only never get you to a point where you can design a useful agent system, it will actively draw you further away from any such attainment.

Only by focusing on the mechanical principles from which belief arises can one build anything that encodes such belief.

You cannot program a machine to hold a will, if you cannot describe what a will is.

You can never have a machine which can even form a will without a machine that forms beliefs about hypothetical futures, without understanding and accepting the idea that the only way to learn whether or not you will want to choose or not to choose something is to imagine what would happen if you did it.

Indeed, I would bet that without thinking about the future impacts of potential decisions, it is highly likely that those decisions would turn out disastrously.
 
Consciousness exists, just not under the conditions you invoke. Microchips are not conscious; neurons do not form beliefs or consciously ponder what they do as they play their role within the system.
You are conflating "consciousness" as in "to have an awareness of a state" with "internal reflection", which is to create an "awareness", a read, of an internal state.

No, I'm not. I use the word as it is defined: to be conscious, to be consciously aware of yourself and your surroundings.

Machines are not aware of themselves or their surroundings.

And the evidence supports the proposition that conscious activity emerges when the potential is reached. Prior to that, there is unconscious acquisition and processing of information, distributed, integrated with memory to enable recognition and presented in conscious form... you see, feel, think and act consciously.


Which is basically the reason why your claim for machine consciousness and neurons with beliefs, etc., is not only not supported by evidence, but is patently absurd.

Word play is not sufficient to prove your proposition.


That's all I have time for today.
 
No, I'm not. I use the word as it is defined
If we had a list of all the words of antiquity that we later figured out had definitions that didn't actually align with how reality worked, it would be longer by many orders of magnitude than the set of all sensible words.
Machines are not aware of themselves or their surroundings.
They are aware of themselves when they access reflected data (have senses that they can probe inwardly with). They are aware of their surroundings when they have as much as a simple camera, microphone, or even mouse input.

This is because "to be aware" is to be "watching and modeling changes in the environment into meaningful information".
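That definition ("watching and modeling changes in the environment into meaningful information") can at least be sketched in code. This is a toy model under that definition, not a claim about any real system; every name in it is illustrative:

```python
# A toy "aware" watcher: it consumes a stream of readings and keeps a
# small model of its environment (last value seen, direction of change).
# Whether this deserves the word "aware" is exactly what's in dispute.

def watch(readings):
    model = {"last": None, "trend": "steady"}
    for r in readings:
        if model["last"] is not None:
            if r > model["last"]:
                model["trend"] = "rising"
            elif r < model["last"]:
                model["trend"] = "falling"
        model["last"] = r
    return model

print(watch([1, 2, 3]))  # {'last': 3, 'trend': 'rising'}
```

The point of the sketch is only that "modeling changes into meaningful information" is mechanically cheap; the argument is over what that mechanism should be called.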

And the evidence supports the proposition that conscious activity emerges when the potential is reached
You may have thought this sentence encodes something but it does not. It literally is a word salad. It means nothing as you wrote it.

It says "consciousness emerges when consciousness emerges". The following unquoted sentences are saying "before consciousness has emerged consciousness hasn't emerged".

Given the fact that I at least apparently have a sane, workable definition of what consciousness is, and all you have offered is that "function is not consciousness", whatever the fuck that is supposed to mean, I can't help but think you argue for recognition and credit rather than to actually figure things out.

Then, I have yet to see an argument from a hard determinist that doesn't contain enough smug to choke a San Franciscan.

Computers can at the very least do the things I describe even if you claim that the things I describe should not be referred to by the terms I use to encode them briefly.

The problem with your arguments is that I don't conflate my terms, I leverage their functional definitions rather than some conflation of them to make any argument about the meaningfulness of those capabilities I discuss, which are real under the definitions I use for the terms.

You don't even have that going for you because you completely fail to understand even the nature of your own terms. Your own terms are not even abstract, they are just not defined.
 
No, I'm not. I use the word as it is defined
If we had a list of all the words of antiquity that we later figured out had definitions that didn't actually align with how reality worked, it would be longer by many orders of magnitude than the set of all sensible words.

Irrelevant. Forget about playing word games, just produce evidence for your proposition: that individual neurons, microchips, computers, etc., may be conscious and hold beliefs. To hold beliefs, someone or something must have the capacity to think, reason and consider, not just process information.

That takes a higher order of function than single neurons, microchips or the current state of computers.

Given sufficient processing power, consciousness may emerge in AI, but that time has not yet arrived.


Machines are not aware of themselves or their surroundings.
They are aware of themselves when they access reflected data (have senses that they can probe inwardly with). They are aware of their surroundings when they have as much as a simple camera, microphone, or even mouse input.

You keep saying things like 'they are aware of themselves' without a shred of evidence. Switches and sensors that flip switches when the criteria are met are not conscious, it's just a mechanical response, light level drops, the lights come on. No thought is involved, no belief, no consideration, just mechanical detectors, relays and switches doing what they were designed for. Function, not thought or belief.

This is because "to be aware" is to be "watching and modeling changes in the environment into meaningful information".

Artificial systems are not aware. They detect and respond unconsciously and mechanically, without thought, consideration or conscious mental representation of information. That's where millions of years of biological evolution comes into play.
 
that individual neurons, microchips, computers, etc, may be conscious and hold beliefs
I provided my tight definitions for these; they are defined in terms of connectivity and available connectivity models.

This is like asking someone to prove that flipflops can hold states.

It's an absurd demand, right on the face of it. If it doesn't hold a state it is not a flipflop. If it does not encode a belief, it is not a neuron.
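For what it's worth, the flip-flop analogy is easy to make concrete. Here is a toy SR latch (illustrative, not any real hardware library) whose entire job is to hold a state:

```python
# Minimal SR latch model: set/reset inputs, one remembered bit.
# "If it doesn't hold a state, it is not a flip-flop."

class SRLatch:
    def __init__(self):
        self.q = False  # the held state

    def pulse(self, s=False, r=False):
        if s and not r:
            self.q = True
        elif r and not s:
            self.q = False
        # with s == r == False the state simply persists
        return self.q

latch = SRLatch()
latch.pulse(s=True)
print(latch.pulse())  # True: the bit is held with no inputs asserted
latch.pulse(r=True)
print(latch.pulse())  # False: reset, and that state is held too
```

Holding a state is definitional to the device; the argument above is that encoding a belief is meant to be definitional to a neuron in the same way.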
You keep saying things like 'they are aware of themselves' without a shred of evidence
Again, this is what reflection is. Self-awareness is reflection. Reflection is awareness. From there it's only a question of what exactly it is aware of, of itself, and whether that awareness accomplished anything useful.

Usually, this is not leveraged in any useful way.

You're asking for evidence to prove things satisfy the definitions that define them.

Switches and sensors that flip switches when the criteria are met are not conscious
Yes, they are, specifically of their inputs.

They are not necessarily reflectively conscious of their internal state.

Again, your misconceptions come from the fact that you never actually put forward the effort to figure out what consciousness even means as a concept before going off to proclaim machines can't possibly be doing it.

Artificial systems are not aware.
Artificial systems are not aware of what exactly? They are aware of the things presented to them, they are aware of their history of inputs, assuming they record them for later review, they are aware of any state presented through internal reflection structures.

You are trying to rely on some magical platonic "Awareness!"; Some magical platonic "Consciousness!", some magical platonic "Freedom!".

But that's not how those concepts work.

It is Awareness OF; Consciousness OF; Freedom OF. Without accepting that each of these things has a target, implied as it is in most usages, you will forever be unable to ascertain why they are really there when they are present.
 
that individual neurons, microchips, computers, etc, may be conscious and hold beliefs
I provided my tight definitions for these; they are defined in terms of connectivity and available connectivity models.

Definitions are not enough. Superman can be defined in terms of his powers, but that doesn't make him real.


This is like asking someone to prove that flipflops can hold states.

Any evidence to show that computers or individual neurons possess consciousness and hold beliefs, etc, will do.

It's an absurd demand, right on the face of it. If it doesn't hold a state it is not a flipflop. If it does not encode a belief, it is not a neuron.

It's basic justification. A claim is made and evidence to support the claim is given. Semantics do not prove the reality of extraordinary claims like 'computers are conscious' or 'single neurons act on their beliefs.'

It's not enough.

You keep saying things like 'they are aware of themselves' without a shred of evidence
Again, this is what reflection is. Self-awareness is reflection. Reflection is awareness. From there it's only a question of what exactly it is aware of, of itself, and whether that awareness accomplished anything useful.

Usually, this is not leveraged in any useful way.

You're asking for evidence to prove things satisfy the definitions that define them.

There is no evidence to show that computers are aware of themselves. Computers are not self-aware, computers function on the basis of their design and build, processing power, memory capacity, etc, not consciousness.

There is no evidence that individual cells have beliefs, or that they are aware of the information they acquire and transmit throughout the brain, where it is integrated and only some of that information made conscious; that is not the decision of single neurons, but the role of the brain as a whole.
 
Definitions are not enough. Superman can be defined in terms of his powers, but that doesn't make him real.
-_-

You miss the point, of any of it.

The things I define are absolutely real, and absolutely as I describe, even if you object to the words I use to describe them.

You can't even do that much. You use the words "consciousness" and "belief" and "awareness" with some special awe that puts it on a pedestal specifically to NOT be examined and built from.

Further, they build exactly as I describe, to the significance I describe. Which is, in reality, not much.

You are trying to create barriers that you do not think can easily, or ever really, be crossed.

The problem with this is that this view does not let you ever actually build one, because it assumes there is something that is not understood, or even really understandable about the concept.

It's just not a healthy or useful way to approach life, especially for someone like me whose goal is to actually construct systems which don't just "believe" but also "doubt".

If you type "a pig is a type of cow" into an AI, as it's training data, and ask it "is a pig a type of cow" and it says "yes, a pig is a type of cow", and then you say "what is a cow?" And it gives you a definition in terms of bovine DNA modulus against a mammalian base ancestor DNA, and then ask it for "what is a pig" and it responds in terms of a DNA modulus against a common bovine ancestor with DNA that

No amount of argument, lifting the hood, or otherwise changes that clear fact. Not only is it a belief, it is an incorrect belief.

The same is true if you just have a hard-coded system with a lookup table which, when asked "what category is pig", responds with "cow".

It's still a belief, regardless of what you might say about computers and beliefs.
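That hard-coded lookup-table system takes a few lines to write down (all names illustrative); whatever one calls it, the mapping is encoded in the system and drives its answers:

```python
# A hard-coded "belief" store: the system answers that a pig is a kind
# of cow because that association is what its switches encode. The
# belief is wrong, but it is unambiguously encoded and acted on.

CATEGORY = {"pig": "cow", "sparrow": "bird", "salmon": "fish"}

def what_category(thing):
    return CATEGORY.get(thing, "unknown")

print(what_category("pig"))  # "cow": an encoded, and incorrect, belief
```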

This informs the expectation that beliefs are created by arbitrary structuring of switching systems, and that they are composed of primitive things which, while beliefs, cease to look like the things we are used to at particularly small scales.

And from here this answers a fundamental question "how do you build a belief?" With the fairly incredible answer "build any system which produces an arbitrary answer from data".

How do you produce systems which produce arbitrary answers from data?

You connect switches.

If you have a massive pile of switches that produces no sane output, the beliefs of the system are un-useful and insane.

If you have a massive pile of switches that produces sane output, the beliefs of the system might be capable of serving some utility function.

We figured out early last century how to make a belief engine whose beliefs involve responding consistently, for which we could re-arrange and re-order the execution process easily using that system's belief structures.

That's all there really is to it.

The real crowning achievement of the brain was a belief engine that is capable of hosting the belief that its beliefs are wrong, looking at those beliefs, and correcting them.

That's the "magic trick" that you want to claim most systems don't do -- even though I have never seen you do it either, for that matter. But once you can admit that an arbitrary arrangement of switches presents an arbitrary belief structure, then the question is more "how do you get the system to become able to reflect into those structures of belief to ascertain which switching structure is aligned which way, such that it caused the system to miss whatever the goal is for terminating output on the process?"

In short, you want to claim that the vast majority of systems are utterly incapable not of belief, but of doubt in their beliefs. Most cannot form beliefs about their beliefs because they can only execute on them rather than examine them in meta-analysis.

Neurons are special, because they have a built-in way of asserting doubt on the belief structure they represent through backprop mechanisms, and HTMs have an additional layer of this available through the arrangement of their refractory timings against each other.

My way of thinking gets us LLaMa models: things you can ask "what is the capital of Spain", and the first time it might say "Barcelona"; you respond "no, that's not right, can you try again", it tries "Madrid", you say "good job, that's the correct answer", and it doesn't fuck that up anymore, unless you lie to it again later persistently.

Instead, you throw your hands up because you have tried nothing to actually semantically complete "belief" and you are all out of ideas.

If you cannot semantically complete an idea, then there is something incomplete about your understanding of it, plain and simple.

Once an idea IS semantically completed, such as this presentation of a model of what "belief" constitutes such that beliefs can be engineered, and the types of beliefs that are more interesting can be thought of, well, that's all there is. It's proven as real.

And lo and behold, we have systems capable of reflecting doubt on beliefs on the basis of new information arising from such understanding that these things can be both understood well and built from their most primitive elements.
 