Nor did I say "AI is better than doctors." But for the sake of argument, let's say that AI diagnosis and treatment surpass human ability, that you are significantly better off consulting AI... who are patients likely to go to for their medical needs?
I honestly do not expect it to surpass human ability, because none of the questions posed to it were about cases where the doctor was examining a real patient that he or she was familiar with.
Apparently, AI has already surpassed us, albeit in specific and specialized ways.
Predictions, image and object recognition, games, pattern recognition, etc.
What the future holds is anyone's guess. I don't know, but I don't make the assumption that it can't surpass us in many fields.
In a sense, all of our tools surpass us, especially when they allow us to do things that we cannot normally do without an augmented body. And that's my point. AI is a field that is most useful in terms of its ability to augment us, not replace us. The article you posted was promoting the technology in that sense--as an augmentation for experts. However, it is always tempting, when contemplating autonomous machines (starting with windup toys), to develop a sense of envy at what they can do on their own that we cannot. ChatGPT is wildly overestimated as a form of AI, but that is because it is designed to mimic human language. Just remember that when it apologizes for getting something wrong, it really has no sense of remorse, even though you may be tricked into appreciating the apology.
These programs can only summarize what their programming calculates as the most relevant texts from their storage to match an input text. That's it. They don't actually know what they are talking about, but they appear that way to us when we read their responses--in much the same way that people supply lots of personal context to interpret a Ouija board, fortune-cookie message, or horoscope. The accuracy and usefulness of the response depends entirely on the large textbase that it is trained on, but the program itself can't distinguish between crap and gold.
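That "can't distinguish crap from gold" point is easy to demonstrate with a toy retrieval sketch. This is not how any production chatbot actually works (real systems use learned embeddings and generation, not literal lookup), and the corpus texts here are invented for illustration--but the failure mode is the same: the program scores surface word overlap, with no notion of truth.

```python
from collections import Counter
import math

def bow(text):
    # Bag-of-words vector: word -> count
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two bag-of-words vectors
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The program's "knowledge" is just whatever text happens to be stored.
corpus = [
    "aspirin can reduce fever and mild pain",
    "crystals realign your energy and cure fever",  # quack text, scored no differently
]

def answer(query):
    # Return the stored text most similar to the query -- no notion of truth
    return max(corpus, key=lambda doc: cosine(bow(query), bow(doc)))

print(answer("how to cure a fever"))
# -> "crystals realign your energy and cure fever"
# The quack text wins purely because it shares more words with the query.
```

If the quack sentence were the more common phrasing in the training base, it would dominate the answers, which is exactly the medical-diagnosis worry raised below.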
So far. It doesn't mean that progress stops at this point.
No one is saying that it does. This discussion is really a criticism of the public perception of where we are with the technology at present. There is a question of scalability. Chatbots have their uses, but only for certain types of tasks. They won't drive us to create real sentience or intelligence, because they only perform sophisticated transformations on text created by actually intelligent beings. Humans use language to exchange thoughts. Chatbots use language to summarize a collection of other people's language-encoded thoughts. Robots are more interesting, because their technology drives us to understand and engineer machines that mimic our own intelligence-driven autonomous behavior.
I've been in a months-long discussion of ChatGPT with colleagues, many of whom, like me, have some experience and knowledge of how these programs work. The discussion was kicked off by a colleague of mine who asked the program how many past presidents had been Jewish. He knew, of course, that the answer was none, but the program can only construct answers out of statistical associations between groups of words that cluster together into what might loosely be called conceptual units. The response was that there were two Jewish presidents in the past--Herbert Hoover and Barack Obama. It could not explain how it arrived at those conclusions, although I think I know why, but that was the point of the experiment--to see how easy it was to elicit false information. The program had no knowledge of what a Jewish person was, but it had plenty of "word clouds" that scored Hoover and Obama as having the strongest connection to Jewish families. For example, Obama's father had married a Jewish woman before he divorced her and married the woman with whom he fathered Barack. So you can kind of see a motherhood connection there, if all you are looking at is associative affinities. Herbert Hoover had been a very strong advocate for Jewish refugee families during Hitler's rise to power.
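The "associative affinity" mechanism can be sketched in a few lines. The mini-corpus below is invented for illustration, and real models use learned vector embeddings rather than raw co-occurrence counts, but the principle is the same: the name most strongly associated with a term gets surfaced as "the answer," whether or not the underlying fact is true.

```python
# Tiny invented corpus; in a real model this is billions of words.
sentences = [
    "hoover advocated for jewish refugee families",
    "hoover supported jewish relief efforts",
    "obama family jewish stepmother connection",
    "lincoln preserved the union",
    "washington led the continental army",
]

def affinity(name, term):
    # Count sentences where the name and term co-occur --
    # a crude stand-in for statistical word association.
    return sum(1 for s in sentences if name in s.split() and term in s.split())

presidents = ["hoover", "obama", "lincoln", "washington"]

# Ranking presidents by association with "jewish" surfaces Hoover and
# Obama -- not because either was Jewish, but because the words cluster.
ranked = sorted(presidents, key=lambda p: affinity(p, "jewish"), reverse=True)
print(ranked[:2])  # -> ['hoover', 'obama']
```

Nothing in this computation represents what "Jewish" or "president" means; it is pure word proximity, which is why the false answer comes out looking confident.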
Sure...and AI is still in the early stages of development.
If one is looking at a time scale that ends at truly intelligent machines, one could say that we are only discussing how much more distance we have to travel to get to that end point. I'm perhaps a little more pessimistic than most people, but I've been immersed in the field long enough to know some of the obstacles we face. That's not to say that I'm not impressed with the generative aspect of these LLM chatbots. They perform better than I would have expected at this point in time, so I find them fun to play with. If I were still active in the field, I would be studying the methods they use to craft summaries.
Now let's think about medical diagnoses. If the training base for a chatbot contains quack diagnoses, then it is going to make some really bad diagnoses. So a lot depends on how accurate the material in the large text training set is and how prominently those quack diagnoses affect responses to queries about symptoms. The program's ability to "surpass human ability" depends fully on the human-authored input. It doesn't actually learn anything about the world or have any experiences of reality in the way human beings or other animals do. So, although the output of these programs can sometimes be very impressive and even useful, there is a serious danger inherent in relying on them too much in situations where human wisdom and reasoning are needed.
Isn't an accurate diagnosis based on relating the described symptoms, X-ray results, fMRI images, etc., with a body of information and a list of possible conditions?
It depends on how much the training text base includes accurate medical diagnoses involving those tools. Bear in mind that it takes a long time to train these chatbots up, and they don't actually do any "learning" after the training session. Producing machines that learn from experience and evolve their understanding of reality is a long way off. Our brains come equipped with an episodic memory capability that we haven't quite figured out how to implement yet. All doctors learn from experiences over time, including changes in how they interpret symptoms and diagnostic results. Chatbots can only summarize prerecorded stuff.
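The "no learning after training" point can be made concrete with a toy next-word predictor whose counts are fixed at training time. This is a deliberately simplified stand-in (a bigram counter, not a neural network), but it shares the relevant property with deployed LLMs: inference only reads the frozen parameters, so no amount of querying or correcting changes the model.

```python
from collections import Counter, defaultdict

class FrozenBigramModel:
    """Toy next-word predictor: counts are fixed once training ends."""

    def __init__(self, training_text):
        # "Training": tally which word follows which.
        self.counts = defaultdict(Counter)
        words = training_text.lower().split()
        for a, b in zip(words, words[1:]):
            self.counts[a][b] += 1

    def next_word(self, word):
        # Inference only reads the counts -- nothing is ever updated,
        # no matter how many queries (or corrections) the model receives.
        options = self.counts.get(word.lower())
        return options.most_common(1)[0][0] if options else None

model = FrozenBigramModel("the fever broke after the fever passed")
print(model.next_word("the"))  # -> 'fever'
# Telling the model it is wrong changes nothing; there is no mechanism
# here (or in a deployed chatbot) that rewrites the trained weights.
```

A doctor who misreads a case updates their judgment for the next one; this model answers the thousandth query exactly as it answered the first.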
"Artificial intelligence is more accurate than doctors in diagnosing breast cancer from mammograms, a study in the journal Nature suggests. An international team, including researchers from Google Health and Imperial College London, designed and trained a computer model on X-ray images from nearly 29,000 women. The algorithm outperformed six radiologists in reading mammograms. AI was still as good as two doctors working together."

Hospitals trial intelligent machines to improve cancer detection, and results so far are promising.
www.bbc.com
That's why doctors consult with each other over diagnostic results. Neural nets are very good at some kinds of pattern recognition. Just bear in mind that abacuses surpass the ability of mathematicians to calculate certain types of results quickly and accurately. Humans are just as good at getting the results, but it takes them much longer. Abacuses are more limited when needs go beyond the narrow range of things they can do for you. And you must know how to make them work properly. Otherwise, they are useless tools.