dystopian
Veteran Member
The chance of either alife or self-modifying viruses leading to strong AI is effectively zero, however.
Agreed. However, I'm not convinced that simulating the human brain is the way to go. To me it seems best to just keep working away at the myriad AI algorithms.
Why not both?
Whole brain emulation is in fact the only method we can be reasonably certain will work, since it relies on far fewer 'maybes' than any other approach. *If* strong AI is simply a matter of computation and simulation, then whole brain emulation is the most likely route to success: we already know brains are capable of general intelligence. By comparison, we have no such certainty about other methods.
How do we determine when we have strong AI?
There would be absolutely no doubt about whether an AI is of the strong/AGI variety. Self-awareness is only a small part of what defines strong AI; at the very least, a strong AI must be capable of performing the *full* range of cognitive tasks a human can, under the same conditions a human can. You can't fake that as a whole.