• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

AI, the Drone Wars, and UBI.

To me "think" involves original thought, not merely returning the words of your teachers.
How is that different from what regular people do?
You certainly have not demonstrated any original thought.
I agree with LP; AI at present is little more than a sophisticated, massive search engine.
It's still good at things that humans are not particularly good at, and bad at things humans are good at.
All human thoughts are based on stored memories. All AI thoughts are based on stored memories.
No, sometimes both have thoughts based on current signals (or as current as a signal can be, at any rate). I have thoughts unbidden about the linearity of some feature in my visual field, for example, and these come not from memories, but rather from a sensory manifold.

Is it less a thought, after all, when the thought is "a heuristic activation"?

This is not necessarily based on "memory" so much as "the current state, by any origin that leads there".
 
Given that AI (and LLMs in particular) do discover new and novel chemicals all the time, and many of these prove effective at the task asked of them, your statement that they don't rather falls flat.

If we are talking dialectic synthesis here, AI shines like a burning star in that respect.
Brute force analysis isn't AI.
 
Except that modern AI don't do brute-force analysis; they suggest specific molecules, and create candidates that succeed at an application more often than humans doing the same task.
 
It's brute force to get there. The models have become good enough that they can run a million molecules through and identify the ones likely to do what's wanted.
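The "run a million candidates through a learned scorer" idea above can be sketched in a few lines. This is a toy illustration only: score_affinity is a hypothetical stand-in for a trained model (a real pipeline would use something like a graph neural network over molecular structures), and the "molecules" are just placeholder names.

```python
import random

random.seed(0)

def score_affinity(candidate):
    # Hypothetical "learned" score in [0, 1); a hash-based stand-in here,
    # standing in for a trained property-prediction model.
    return (hash(candidate) % 1000) / 1000.0

# A million placeholder candidates, scored and ranked by the model.
candidates = [f"mol_{i}" for i in range(1_000_000)]
top = sorted(candidates, key=score_affinity, reverse=True)[:10]
```

The brute-force part is the exhaustive scoring pass; the "AI" part is that the scorer was learned rather than hand-coded.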
 
There is no evidence that that's what is happening in the entirety, or even the majority, of cases.

This isn't "massively parallel Folding@home"; this is "build an auto-regressive model for what works and throw seeds into it". It is a mechanical imagination.

I mean, Stable Diffusion is literally "here's a cloud; I think it should look like a _____", and then it imagines a _____ from the cloud you showed it.

When I am being creative there is a two-step process, sometimes condensed: I think of a piece of text in response to the query "what do I want to see?", and then I feed the answer into "from the unformed stuff, the disordered noise currently in my hippocampus, generate a scene".

Where you seem to find a difference, I find a trivial one and a much deeper similarity.
 
An interesting article in The Register today.

https://www.theregister.com/2024/05/23/ai_untested_unstable/

It seems AI has been rushed out despite some glaring and severe flaws, and that those pushing it out on an unsuspecting public haven't even paused in their hubris for long enough to provide a mechanism to report issues, much less work on, or even (God forbid) fix them.

If you don't let people report problems, then no problems are being reported, so there are no problems. Right?
 
No surprise. Got to get there first to grab market share, then worry about making it work. Plenty of bug-fests have gone to market because of market pressure.
 