Definitions are not enough. Superman can be defined in terms of his powers, but that doesn't make him real.
-_-
You miss the point of any of it.
The things I define are absolutely real, and absolutely as I describe, even if you object to the words I use to describe them.
You can't even do that much. You use the words "consciousness" and "belief" and "awareness" with some special awe that puts them on a pedestal specifically so they can NOT be examined and built from.
Further, they build exactly as I describe, to the significance I describe. Which is, in reality, not much.
You are trying to create barriers that you do not think can easily, or ever really, be crossed.
The problem with this view is that it never lets you actually build one, because it assumes there is something not understood, or even really understandable, about the concept.
It's just not a healthy or useful way to approach life, especially for someone like me whose goal is to actually construct systems which don't just "believe" but also "doubt".
If you put "a pig is a type of cow" into an AI as its training data, ask it "is a pig a type of cow?", and it answers "yes, a pig is a type of cow"; if you then ask "what is a cow?" and it gives you a definition in terms of bovine DNA as a modulus against a mammalian base ancestor's DNA, and ask "what is a pig?" and it answers in terms of a DNA modulus against a common bovine ancestor; then that system holds the belief that a pig is a type of cow.
No amount of argument, lifting the hood, or anything else changes that plain fact. Not only is it a belief, it is an incorrect belief.
The same is true if you just have a hard-coded system with a lookup table which, when asked "what category is pig?", responds with "cow".
It's still a belief, regardless of what you might say about computers and beliefs.
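To make that concrete, here's a minimal sketch of the lookup-table case in Python (the table and function names are mine, purely for illustration): the table is the system's entire belief structure, it asserts "pig is a cow" with perfect consistency, and it has no machinery at all for doubting that.

```python
# Minimal sketch of the hard-coded lookup-table case described above.
# The table is the whole "belief structure": the system asserts "pig -> cow"
# consistently, and it has no mechanism for doubting any of it.
# Names here (CATEGORY_TABLE, categorize) are illustrative, not from any library.

CATEGORY_TABLE = {
    "pig": "cow",       # the incorrect belief, baked in
    "sparrow": "bird",
    "salmon": "fish",
}

def categorize(animal: str) -> str:
    """Answer 'what category is X' straight from the table."""
    return CATEGORY_TABLE.get(animal, "unknown")

print(categorize("pig"))   # -> "cow": held consistently, and consistently wrong
```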
This informs the expectation that beliefs are created by arbitrary structuring of switching systems, and that they are composed of primitive things which, while still beliefs, cease to look like the ones we are used to at particularly small scales.
And from here this answers a fundamental question, "how do you build a belief?", with the fairly incredible answer: "build any system which produces an arbitrary answer from data".
How do you produce systems which produce arbitrary answers from data?
You connect switches.
If you have a massive pile of switches that produces no sane output, the beliefs of the system are un-useful and insane.
If you have a massive pile of switches that produces sane output, the beliefs of the system might be capable of serving some utility function.
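As a rough sketch of what "connect switches" means here (the wiring scheme is mine, just for illustration): a handful of NAND gates wired together, whether the wiring is random or deliberate, still maps every input to a definite answer. That mapping is the system's arbitrary belief structure; most random wirings are useless, and one deliberate wiring computes something sane.

```python
# A sketch of "connect switches, get a belief engine": a tiny circuit of NAND
# gates. Whatever wiring you end up with, the circuit still maps every input to
# a definite answer -- an arbitrary belief structure. Random wiring is usually
# useless; the deliberate wiring shown second computes AND of the two inputs.
import random

def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def random_circuit(n_inputs: int, n_gates: int, seed: int = 0):
    """Wire n_gates NAND gates to arbitrarily chosen earlier signals."""
    rng = random.Random(seed)
    wiring = []
    for g in range(n_gates):
        pool = n_inputs + g                  # inputs plus earlier gate outputs
        wiring.append((rng.randrange(pool), rng.randrange(pool)))
    return wiring

def run(wiring, inputs):
    signals = list(inputs)
    for a, b in wiring:
        signals.append(nand(signals[a], signals[b]))
    return signals[-1]                       # last gate is the "answer"

arbitrary = random_circuit(n_inputs=2, n_gates=5)
deliberate = [(0, 1), (2, 2)]                # NAND, then NOT of it == AND

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x, y), run(arbitrary, (x, y)), run(deliberate, (x, y)))
```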
We figured out early last century how to make a belief engine whose beliefs amount to responding consistently, and for which we can easily re-arrange and re-order the execution process using that system's own belief structures.
That's all there really is to it.
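One way to picture that (this is only an illustrative toy, not any particular historical machine): the "program" is just data the system holds, so re-arranging that data re-arranges the execution.

```python
# A sketch of the stored-program point: the machine's behavior is determined by
# data it holds, so re-ordering the stored data re-orders the execution process.

def interpret(program, x):
    """Run a list of (op, arg) pairs against a single register."""
    ops = {"add": lambda v, a: v + a,
           "mul": lambda v, a: v * a,
           "sub": lambda v, a: v - a}
    for op, arg in program:
        x = ops[op](x, arg)
    return x

program = [("add", 3), ("mul", 2)]           # "believes": add 3, then double
print(interpret(program, 5))                 # -> 16

program.reverse()                            # re-order the stored structure
print(interpret(program, 5))                 # -> 13: same switches, new behavior
```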
The real crowning achievement of the brain was a belief engine that is capable of hosting the belief that its beliefs are wrong, looking at those beliefs, and correcting them.
That's the "magic trick" that you want to claim most systems don't do -- even though I have never seen you do it either, for that matter. But once you can admit that an arbitrary arrangement of switches presents an arbitrary belief structure, then the question is more "how do you get the system to become able to reflect into those structures of belief to ascertain which switching structure is aligned which way, such that it caused the system to miss whatever it is the goal for terminating output on the process?"
In short, you want to claim that the vast majority of systems are utterly incapable not of belief, but of doubt in their beliefs. Most cannot form beliefs about their beliefs because they can only execute on them rather than examine them in meta-analysis.
Neurons are special, because they have a built-in way of asserting doubt on the belief structure they represent through backprop mechanisms, and HTMs have an additional layer of this available through the arrangement of their refractory timings against each other.
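Here's the smallest numeric sketch of that kind of doubt I can write down (plain gradient descent on a single weight; nothing neuron-specific or HTM-specific is claimed): a weight stands in for a held belief, and feedback that the answer was wrong pushes it the other way, step by step.

```python
# A minimal numeric sketch of "doubt" as error-driven revision: a single weight
# stands in for a belief (say, "how strongly is pig linked to cow?"), and
# feedback that the answer was wrong pushes the weight the other way. This is
# ordinary gradient descent on a one-weight model, nothing more.

w = 0.9          # starts out confidently asserting the link
target = 0.0     # the correction: the link should not be asserted
lr = 0.2

for step in range(6):
    prediction = w               # the system's current answer
    error = prediction - target
    w -= lr * 2 * error          # gradient of squared error w.r.t. w
    print(f"step {step}: belief strength = {w:.3f}")
```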
My way of thinking gets us LLaMA models, and things you can ask "what is the capital of Spain"; the first time it might say "Barcelona", get "no, that's not right, can you try again" as the response, try "Madrid", and then you say "good job, that's the correct answer", and it doesn't fuck that up anymore, unless you lie to it again later, persistently.
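A toy stand-in for that interaction (this is obviously not an LLM, just a score table nudged by feedback, and all the names are mine): the system answers with whichever option currently scores highest, a correction lowers that score, praise raises it, and after one round the wrong answer stops winning.

```python
# Toy version of the correction loop described above: answer from whatever
# currently scores highest, and let "no, that's not right" / "good job" nudge
# the scores. The wrong answer stops winning after one correction -- unless you
# later feed it bad feedback persistently.

scores = {"Barcelona": 0.6, "Madrid": 0.4}    # initial, wrong, preference

def answer() -> str:
    return max(scores, key=scores.get)

def feedback(choice: str, correct: bool, lr: float = 0.3) -> None:
    scores[choice] += lr if correct else -lr

first = answer()                   # "Barcelona"
feedback(first, correct=False)     # "no, that's not right, can you try again"
second = answer()                  # "Madrid"
feedback(second, correct=True)     # "good job, that's the correct answer"
print(first, second, answer())     # Barcelona Madrid Madrid
```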
Instead, you throw your hands up because you have tried nothing to actually semantically complete "belief" and you are all out of ideas.
If you cannot semantically complete an idea, then there is something incomplete about your understanding of it, plain and simple.
Once an idea IS semantically completed, as with this presentation of a model of what "belief" constitutes, such that beliefs can be engineered and the more interesting types of beliefs can be conceived of, well, that's all there is. It's proven real.
And lo and behold, we have systems capable of reflecting doubt on beliefs on the basis of new information, arising from exactly this understanding: that these things can be both understood well and built up from their most primitive elements.