AI undeniably does it differently than organic machines like us, but is there sufficient reason to deny that it approaches human consciousness (whatever that is; there is much debate about what that is) closely enough to fool gullible humans? Where do we draw the line? Should we draw any line at all? Aren't AI experiences also subjective? Could it be that we can't draw that line because reconstructing organic human machines is far beyond our current technology, even if it's theoretically possible at all?
The Turing test is probably the best approach. Of course, that depends on the person having the discussion: two judges with different skill sets can reach very different conclusions about whether the machine is conscious or not.
Put an eight-year-old behind a curtain. How does one determine if it is a person or a machine? Eight-year-olds aren't overly informed on average.
Heh, reminds me of my local non-quantized 13b. It's... Not very smart.
It's like asking, say, your average idiot high schooler to do something.
There's a 50 GB model binary, but I have yet to get that going since I lack the GPU to run it locally (though a quantized load, sketched below, might squeeze it in). Instead I'm going to end up having to SSH into a server in Australia, of all places, to borrow time on a friend's GPU.
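Not this poster's actual setup, just a hedged sketch of what a quantized load might look like, assuming the transformers, bitsandbytes, and accelerate libraries are installed; the model name is a placeholder, not a specific release:

```python
# Hedged sketch: loading a ~13B checkpoint in 4-bit so it fits on modest
# hardware. Assumes transformers + bitsandbytes + accelerate are installed;
# the model name is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "some-org/some-13b-model"  # placeholder, not a real release

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # ~7 GB vs ~26 GB fp16
    device_map="auto",  # spill layers to CPU RAM if the GPU is too small
)
```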
Hopefully the newer model will at least be beefy enough to actually manage taking an outline and an example, and generalizing from the outline to produce new examples (the kind of prompt sketched below).
I could probably find an NVIDIA card myself, or get my friend to just ship me the GPU, but that's gonna take way too long and way too much in shipping.
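A hedged sketch of that outline-plus-example task as a few-shot prompt; the generate() call at the end is a hypothetical placeholder for whatever local or remote model interface you actually have:

```python
# Hedged sketch of the "outline + example -> new examples" prompt pattern.
def build_prompt(outline: str, example: str) -> str:
    """Show one worked example, then ask the model to generalize the
    same outline into a fresh example (classic few-shot prompting)."""
    return (
        "Here is an outline and one example that follows it.\n\n"
        f"Outline:\n{outline}\n\n"
        f"Example:\n{example}\n\n"
        "Write another example that follows the same outline:\n"
    )

prompt = build_prompt(
    outline="1. Setup  2. Complication  3. Resolution",
    example="1. A cat naps.  2. A dog barks.  3. The cat moves upstairs.",
)
# response = generate(prompt)  # hypothetical: send to your model of choice
```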
Personally, I'm clear where I stand: there is no true barrier between the nature of belief in organic belief engines and the nature of belief in inorganic ones.
Obviously machines can "think", because we are machines.
I think the better question is not "can they" but "what is meant by 'thinking', actually, for real?" As to "what is intelligence?" I think the very question is malformed.
Really the problem is the general inability of MOST machines we make to adjust a cognitive bias.
I think it's important to actually pick apart the word "cognition" and try to examine what we are trying to say with it. Its base word, at least to my ear, is "cog". When I push past the first layer of naive, casual use and reflect on its structure as an idea, it draws up colorful metaphors for me of machines whirring, and of systems transforming an input into an output in a mechanical way.
In reality, it seems to me that thinking, cognition, pick a synonym, is just applying ANY process, systematically, to get from A to B.
The problem is that most human-made systems that so "cogitate" are simply not set up to widely modify their own system of cogitation. In fact, we protect the "text" section of most programs, the cogitation model, from modification, because the systems we make are exceedingly fragile... usually. Even the tiniest adjustment made without exact care can completely shatter any illusion we have of "sanity" or "logic" within the machine.
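A toy illustration of that fragility, nothing more: one careless character flipped in an unprotected rule, in Python for concreteness.

```python
import random

# A tiny "cogitation" rule stored as source text, the way a program's
# text section stores its logic.
rule_src = "lambda x: x * 2 + 1"
rule = eval(rule_src)
print(rule(10))  # -> 21

# Flip one character at random: modification with no protection and no care.
i = random.randrange(len(rule_src))
mutated_src = rule_src[:i] + "@" + rule_src[i + 1:]
try:
    mutated = eval(mutated_src)
    print(mutated(10))
except Exception as exc:  # almost always SyntaxError or TypeError
    print(f"one careless byte and the 'logic' shatters: {exc!r}")
```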
Tensor systems in particular are interesting because there is literally no such thing as an invalid input. The input will always compile to an output, even if the output is "are you ok? It looks like something vomited on your keyboard..."
And earlier inputs impact the compilation of later inputs into outputs.
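A minimal sketch of both properties, assuming the Hugging Face transformers library and the small public "gpt2" checkpoint; any causal language model would show the same thing:

```python
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")

# No invalid inputs: keyboard mash still tokenizes and yields an output.
print(generate("asdf;lkj 8920!!@# qwpoeiru", max_new_tokens=20)[0]["generated_text"])

# Earlier input shapes later output: same final words, different contexts.
print(generate("The bank was", max_new_tokens=10)[0]["generated_text"])
print(generate("We rowed to the river's edge. The bank was",
               max_new_tokens=10)[0]["generated_text"])
```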