
AI Doomers and the End of Humanity

Copernicus

Professor Emily Bender at the University of Washington is a recognized AI researcher who has much to say about the fearmongering over artificial intelligence technologies in the press and on social media. This fearmongering has long been popularized in science fiction and the movie industry, but it has flooded media outlets since the release of OpenAI's chatbot technology. Large Language Models (LLMs) are sometimes called "generative AI" because they are trained, at high cost, on huge amounts of textual data, which allows the technology to cluster written snippets of text into topics that can then be used to generate summaries of the training data. These programs aren't really intelligent in a human sense, do not have emotions or thoughts, do not learn from experience, and do not actually understand input queries or their own responses. They are essentially "stochastic parrots" trained to emit written English that simulates a thoughtful response based on the published words of human writers.
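To make the "stochastic parrot" idea concrete, here is a deliberately crude toy sketch of my own (a bigram Markov chain, nothing like a production LLM, which runs a neural network over billions of parameters). The point survives the simplification: the output is driven by statistics over the training text, not by understanding.

```python
# A toy "stochastic parrot": a bigram model that can only recombine
# fragments of its training text.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count which words have been observed to follow which.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def parrot(start="the", length=8):
    """Generate text by sampling each next word from observed continuations."""
    words = [start]
    for _ in range(length):
        choices = following.get(words[-1])
        if not choices:
            break
        words.append(random.choice(choices))
    return " ".join(words)

print(parrot())  # e.g. "the dog sat on the mat . the cat"
```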

Now Emily has published a very nice article in Scientific American about AI doomers and how they have distorted the reality of what AI is and what its real potential harms are:

AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype

 
Until there is a required watermark for AI-generated media, there are issues. I'm much more worried about the manipulation of humanity than the end of it.
 

You may be, but the article takes that worry seriously too. It is about the fact that the doomsday hype has obscured the real dangers of AI, which involve the manipulation of public opinion. I doubt that the imposition of a "watermark" on AI-generated media will do much at all to solve that problem.
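For a sense of what a statistical text watermark would even mean here, a minimal sketch, assuming the "green list" scheme proposed by Kirchenbauer et al. (2023): the generator secretly favors a pseudo-random half of the vocabulary, and the detector counts how often that half appears. The names below are illustrative, not a real library API, and the sketch also shows why skepticism is warranted: detection is only probabilistic, and paraphrasing or light editing can wash the signal out.

```python
# Toy "green list" watermark detector (illustrative only).
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign ~half the vocabulary to a 'green list',
    seeded by the previous token."""
    digest = hashlib.sha256((prev_token + "|" + token).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """A watermarked generator favors green tokens, so ordinary human text
    scores near 0.5 while watermarked text scores well above it."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)

sample = "the quick brown fox jumps over the lazy dog".split()
print(f"green fraction: {green_fraction(sample):.2f}")  # ~0.5: no watermark evidence
```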

The idea that AI is somehow going to doom the human race seems to grip people's attention even more than the dire news about climate-induced catastrophes, which pose a real existential threat to humanity.
 
If AI becomes a better doctor, surgeon, lawyer, designer, builder, writer, driver, etc, than any of us, what is left for us to do in life?
 

Why do you believe that AI is going to become any of those things in the near future? The existing technology cannot be scaled up to surpass human performance in those areas, but it can supply tools to the doctor, surgeon, lawyer, designer, builder, writer, driver, etc., that improve their job performance.
 
I didn't mean current technology.

Can you give some sense of how far off in the future it will be before AI technology produces better doctors, surgeons, etc.? I know that we see that kind of thing in movies all the time, but those are pure fiction. What current technologies do is augment the performance of humans, not replace humans. The AI constructs in the movies replace humans.
 

According to some tests, AI is already better in some ways:


Why AI is better than doctors?

"The AI responses also rated significantly higher on both quality and empathy. On average, ChatGPT's responses scored a 4 on quality and a 4.67 on empathy. In comparison, the physician responses scored a 3.33 on quality and 2.33 on empathy."

 
OK, but the article only reports on some kind of survey where questions were submitted to doctors and the answers evaluated against answers from ChatGPT. The results were compared and rated by "three licensed physicians". I have no idea what their experience was or what criteria they used. The details of the experiment are glossed over (not that I have time to read them anyway), and it comes off to me more as a rather sloppy project designed to sell the technology. Even so, they predictably admitted:

"ChatGPT provides a better answer," said John Ayers, vice chief of innovation with the division of infectious disease and global public health at the Qualcomm Institute at University of California, San Diego, who led the study. "I think of our study as a phase zero study, and it clearly shows that ChatGPT wins in a landslide compared to physicians, and I wouldn't say we expected that at all."

Although ChatGPT scored higher than physicians, Ayers emphasized that "[t]his doesn't mean AI will replace your physicians." Instead, he said that "it does mean a physician using AI can potentially respond to more messages with higher-quality responses and more empathy."

Unlike physicians, who are often pressed for time and struggling with burnout, ChatGPT can more easily craft a detailed and empathetic response, which can enhance a doctor's actual response.

I can agree with that generalization, but I would like to see more evidence than this rather cursory survey provides. Note that the study explicitly does not conclude that "AI is better than doctors".
 
The question of whether AI can surpass human ability doesn't rest on one study or one field. Apparently AI can already beat human masters at chess, Go, and StarCraft, and it outperforms us in certain language tasks, search, extrapolation, prediction, etc., and we are still in the early days of development.
 
Nor did I say "AI is better than doctors." But for the sake of argument, let's say that AI diagnosis and treatment surpasses human ability, that you are significantly better off consulting AI.... Who are patients likely to go to for their medical needs?
 
I honestly do not expect it to surpass human ability, because none of the questions posed to it were about cases where the doctor was examining a real patient that he or she was familiar with. These programs can only summarize what their programming calculates as the most relevant texts from their storage to match an input text. That's it. They don't actually know what they are talking about, but they appear that way to us when we read their responses--in much the same way that people supply lots of personal context to interpret a Ouija board, fortune-cookie message, or horoscope. The accuracy and usefulness of the response depends entirely on the large textbase that it is trained on, but the program itself can't distinguish between crap and gold.
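As a rough sketch of that "most relevant texts to match an input text" idea (my own oversimplification for illustration; LLMs do not literally retrieve documents), bag-of-words cosine similarity picks an answer by word overlap alone, with no understanding of either the question or the answer:

```python
# Crude "find the stored text most similar to the input" via word overlap.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [
    "aspirin is commonly used to treat a headache",
    "antibiotics treat bacterial infections not viral ones",
    "stretching may relieve lower back pain",
]
query = "what should I take for a headache"

scores = [(cosine(Counter(query.split()), Counter(d.split())), d) for d in documents]
print(max(scores))  # picks the aspirin line purely on shared words
```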

I've been in a months-long discussion of ChatGPT with colleagues, many of whom, like me, have some experience and knowledge of how these programs work. The discussion was kicked off by a query from a colleague of mine, who asked the program how many past presidents had been Jewish. He knew, of course, that the answer was none, but the program can only construct answers out of statistical associations between groups of words that cluster together into what might loosely be called conceptual units. The response was that there were two Jewish presidents in the past--Herbert Hoover and Barack Obama. It could not explain how it arrived at those conclusions, although I think I know why, but that was the point of the experiment--to see how easy it was to elicit false information. The program had no knowledge of what a Jewish person was, but it had plenty of "word clouds" that scored Hoover and Obama as having the strongest connection to Jewish families. For example, Obama's father had married a Jewish woman before he divorced her and married the woman with whom he fathered Barack. So you can kind of see a motherhood connection there, if all you are looking at is associative affinities. Herbert Hoover had been a very strong advocate for Jewish refugee families during Hitler's rise to power.
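Here is a toy version of that failure mode, with association scores invented purely for illustration: if "truth" is just the strength of word associations, incidental connections win, and the correct answer (none) is unreachable by construction.

```python
# (entity, attribute) -> association strength; numbers are invented.
cooccurrence = {
    ("Herbert Hoover", "Jewish"): 0.31,  # advocacy for Jewish refugee families
    ("Barack Obama", "Jewish"): 0.27,    # family-adjacent mentions in text
    ("Jimmy Carter", "Jewish"): 0.05,
}

def answer(attribute: str, threshold: float = 0.2) -> list[str]:
    """'Which presidents were X?' answered by association strength alone.
    'None' can never be the output of this procedure."""
    return [who for (who, attr), score in cooccurrence.items()
            if attr == attribute and score >= threshold]

print(answer("Jewish"))  # ['Herbert Hoover', 'Barack Obama'] -- confidently wrong
```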

Now let's think about medical diagnoses. If the training base for a chatbot contains quack diagnoses, then it is going to make some really bad diagnoses. So a lot depends on how accurate the material in the large text training set is and how prominently those quack diagnoses affect responses to queries about symptoms. The program's ability to "surpass human ability" depends fully on the human-authored input. It doesn't actually learn anything about the world or have any experiences of reality in the way human beings or other animals do. So, although the output of these programs can sometimes be very impressive and even useful, there is a serious danger inherent in relying on them too much in situations where human wisdom and reasoning are needed.
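A sketch of that garbage-in, garbage-out point, using a made-up knowledge base and a naive nearest-match "diagnoser": inject one quack entry and the same kind of query returns a dangerous answer, because nothing in the mechanism distinguishes crap from gold.

```python
# Diagnosis by crude word overlap against a training "knowledge base".
knowledge_base = {
    "fever cough fatigue": "influenza",
    "chest pain shortness of breath": "see a cardiologist urgently",
}

def diagnose(symptoms: str, kb: dict) -> str:
    """Return the answer whose key shares the most words with the query."""
    words = set(symptoms.split())
    return max(kb.items(), key=lambda kv: len(words & set(kv[0].split())))[1]

print(diagnose("fever and cough", knowledge_base))         # 'influenza'

# One bad source in the training material flips the answer:
knowledge_base["fever cough toxins"] = "detox cleanse (quackery)"
print(diagnose("fever cough from toxins", knowledge_base)) # the quack entry wins
```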
 
This reminds me of Alan Turing's classic paper on the possibilities of artificial intelligence: Computing Machinery and Intelligence - A.M. Turing, 1950

It has a lot of interesting stuff in it, and it was there that he proposed his famous imitation game. It's essentially a behaviorist take on the question of whether computers can think in human fashion, asking whether they can act like they do so. He composed a hypothetical example of a dialogue with a computer:
Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.

A sonnet is a kind of poem, and I doubt that many of us are familiar with the Forth Bridge, a railroad bridge a little northwest of Edinburgh, Scotland.

The second one seems like an attempt to simulate human arithmetic, by taking 30 seconds to give an answer. The correct answer is 105721 and the mistake is from forgetting a carry digit.
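For anyone who wants to check, here are the sum and the carry slip worked out in a few lines (my own verification, not part of Turing's paper):

```python
a, b = 34957, 70764
print(a + b)  # 105721, the correct sum

# Column-by-column addition, "forgetting" the carry out of the tens column:
units     = 7 + 4      # 11 -> write 1, carry 1
tens      = 5 + 6 + 1  # 12 -> write 2, carry 1
hundreds  = 9 + 7      # 16 -> write 6 (the carry from the tens was forgotten)
thousands = 4 + 0 + 1  # 5  (carry from the hundreds)
tenthous  = 3 + 7      # 10
print(int(f"{tenthous}{thousands}{hundreds % 10}{tens % 10}{units % 10}"))  # 105621
```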

The third one is correct, though taking 15 seconds also seems like an effort to simulate human thought. It was stated in descriptive notation, though algebraic notation is what's used nowadays.

In algebraic notation, with one of the four possible disambiguations, I'm white with my king at e1 and you are black with your king at e3 and rook at h8. Your move is ... Rh1.
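For readers rusty on the old notation, a small sketch of the descriptive-to-algebraic conversion used above (assuming, per the disambiguation just chosen, that the rook is the king's rook; descriptive squares are counted from each player's own side of the board):

```python
FILES = {"QR": "a", "QN": "b", "QB": "c", "Q": "d",
         "K": "e", "KB": "f", "KN": "g", "KR": "h"}

def to_algebraic(square: str, color: str) -> str:
    """Convert a descriptive square like 'K6' to algebraic, e.g. 'e3'."""
    file_name, rank = square[:-1], int(square[-1])
    if color == "black":       # Black counts ranks from the opposite side
        rank = 9 - rank
    return FILES[file_name] + str(rank)

print(to_algebraic("K1", "white"))   # e1: White's king
print(to_algebraic("K6", "black"))   # e3: Black's king
print(to_algebraic("KR8", "black"))  # h1: R-R8 mate, i.e. ...Rh1#
```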


Looking at how ChatGPT performs, I was reminded of that hypothetical example.
 
I doubt that many of us are familiar with the Forth Bridge, a railroad bridge a little northwest of Edinburgh, Scotland.

The Forth Bridge is an icon of British engineering, and British people (such as Turing and his audience) are about as likely to be unfamiliar with it as Americans are to be unfamiliar with the Statue of Liberty, a giant statue of a woman holding aloft a torch that dominates the entrance to New York's harbour on the East Coast of North America.
 
Nor did I say "AI is better than doctors." But for the sake of argument, let's say that AI diagnosis and treatment surpasses human ability, that you are significantly better off consulting AI.... Who are patients likely to go to for their medical needs?

I honestly do not expect it to surpass human ability, because none of the questions posed to it were about cases where the doctor was examining a real patient that he or she was familiar with.

Apparently, AI has already surpassed us, albeit in specific and specialized ways.

Predictions, image and object recognition, games, pattern recognition, etc.

What the future holds is anyone's guess. I don't know, but I don't make the assumption that it can't surpass us in many fields.

These programs can only summarize what their programming calculates as the most relevant texts from their storage to match an input text. That's it. They don't actually know what they are talking about, but they appear that way to us when we read their responses--in much the same way that people supply lots of personal context to interpret a Ouija board, fortune-cookie message, or horoscope. The accuracy and usefulness of the response depends entirely on the large textbase that it is trained on, but the program itself can't distinguish between crap and gold.

So far. It doesn't mean that progress stops at this point.


I've been in a months-long discussion of ChatGPT with colleagues, many of whom, like me, have some experience and knowledge of how these programs work. The discussion was kicked off by a query from a colleague of mine, who asked the program how many past presidents had been Jewish. He knew, of course, that the answer was none, but the program can only construct answers out of statistical associations between groups of words that cluster together into what might loosely be called conceptual units. The response was that there were two Jewish presidents in the past--Herbert Hoover and Barack Obama. It could not explain how it arrived at those conclusions, although I think I know why, but that was the point of the experiment--to see how easy it was to elicit false information. The program had no knowledge of what a Jewish person was, but it had plenty of "word clouds" that scored Hoover and Obama as having the strongest connection to Jewish families. For example, Obama's father had married a Jewish woman before he divorced her and married the woman with whom he fathered Barack. So you can kind of see a motherhood connection there, if all you are looking at is associative affinities. Herbert Hoover had been a very strong advocate for Jewish refugee families during Hitler's rise to power.

Sure...and AI is still in the early stages of development.


Now let's think about medical diagnoses. If the training base for a chatbot contains quack diagnoses, then it is going to make some really bad diagnoses. So a lot depends on how accurate the material in the large text training set is and how prominently those quack diagnoses affect responses to queries about symptoms. The program's ability to "surpass human ability" depends fully on the human-authored input. It doesn't actually learn anything about the world or have any experiences of reality in the way human beings or other animals do. So, although the output of these programs can sometimes be very impressive and even useful, there is a serious danger inherent in relying on them too much in situations where human wisdom and reasoning are needed.

Isn't an accurate diagnosis based on relating the described symptoms, X-ray results, fMRI images, etc., to a body of information and a list of possible conditions?


"Artificial intelligence is more accurate than doctors in diagnosing breast cancer from mammograms, a study in the journal Nature suggests.

An international team, including researchers from Google Health and Imperial College London, designed and trained a computer model on X-ray images from nearly 29,000 women. The algorithm outperformed six radiologists in reading mammograms.

AI was still as good as two doctors working together."

 
If AI becomes a better doctor, surgeon, lawyer, designer, builder, writer, driver, etc, than any of us, what is left for us to do in life?
Observe. Indeed, it would be extremely important (and something governments worldwide have failed to do) to figure out how we make our livelihoods if technology displaces most jobs. This sort of thing is hinted at in The Expanse, where there are very few jobs.

But if we have food and drink and shelter, there are always the arts, nature, sport, etc. Our identity and who we are aren't necessarily defined by our jobs; we just need them to be able to do what we actually like occasionally.
 
Nor did I say "AI is better than doctors." But for the sake of argument, let's say that AI diagnosis and treatment surpasses human ability, that you are significantly better off consulting AI.... Who are patients likely to go to for their medical needs?

I honestly do not expect it to surpass human ability, because none of the questions posed to it were about cases where the doctor was examining a real patient that he or she was familiar with.

Apparently, AI has already surpassed us, albeit in specific and specialized ways.

Predictions, image and object recognition, games, pattern recognition, etc.

What the future holds is anyone's guess. I don't know, but I don't make the assumption that it can't surpass us in many fields.

In a sense, all of our tools surpass us, especially when they allow us to do things that we cannot normally do without an augmented body. And that's my point. AI is a field that is most useful in terms of its ability to augment us, not replace us. The article you posted was promoting the technology in that sense--as an augmentation for experts. However, it is always tempting, when contemplating autonomous machines (starting with windup toys), to develop a sense of envy at what they can do on their own that we cannot. ChatGPT is wildly overestimated as a form of AI, but that is because it is designed to mimic human language. Just remember that when it apologizes for getting something wrong, it really has no sense of remorse, even though you may be tricked into appreciating the apology.


These programs can only summarize what their programming calculates as the most relevant texts from their storage to match an input text. That's it. They don't actually know what they are talking about, but they appear that way to us when we read their responses--in much the same way that people supply lots of personal context to interpret a Ouija board, fortune-cookie message, or horoscope. The accuracy and usefulness of the response depends entirely on the large textbase that it is trained on, but the program itself can't distinguish between crap and gold.

So far. It doesn't mean that progress stops at this point.

No one is saying that it does. This discussion is really a criticism of the public perception of where we are with the technology at present. There is a question of scalability. Chatbots have their uses, but only for certain types of tasks. They won't drive us to create real sentience or intelligence, because they only perform sophisticated transformations on text created by actually intelligent beings. Humans use language to exchange thoughts. Chatbots use language to summarize a collection of other people's language-encoded thoughts. Robots are more interesting, because their technology drives us to understand and engineer machines that mimic our own intelligence-driven autonomous behavior.


I've been in a months-long discussion of ChatGPT with colleagues, many of whom, like me, have some experience and knowledge of how these programs work. The discussion was kicked off by a query from a colleague of mine, who asked the program how many past presidents had been Jewish. He knew, of course, that the answer was none, but the program can only construct answers out of statistical associations between groups of words that cluster together into what might loosely be called conceptual units. The response was that there were two Jewish presidents in the past--Herbert Hoover and Barack Obama. It could not explain how it arrived at those conclusions, although I think I know why, but that was the point of the experiment--to see how easy it was to elicit false information. The program had no knowledge of what a Jewish person was, but it had plenty of "word clouds" that scored Hoover and Obama as having the strongest connection to Jewish families. For example, Obama's father had married a Jewish woman before he divorced her and married the woman with whom he fathered Barack. So you can kind of see a motherhood connection there, if all you are looking at is associative affinities. Herbert Hoover had been a very strong advocate for Jewish refugee families during Hitler's rise to power.

Sure...and AI is still in the early stages of development.

If one is looking at a time scale that ends at truly intelligent machines, one could say that we are only discussing how much more distance we have to travel to get to that end point. I'm perhaps a little more pessimistic than most people, but I've been immersed in the field long enough to know some of the obstacles we face. That's not to say that I'm not impressed with the generative aspect of these LLM chatbots. They perform better than I would have expected at this point in time, so I find them fun to play with. If I were still active in the field, I would be studying the methods they use to craft summaries.


Now let's think about medical diagnoses. If the training base for a chatbot contains quack diagnoses, then it is going to make some really bad diagnoses. So a lot depends on how accurate the material in the large text training set is and how prominently those quack diagnoses affect responses to queries about symptoms. The program's ability to "surpass human ability" depends fully on the human-authored input. It doesn't actually learn anything about the world or have any experiences of reality in the way human beings or other animals do. So, although the output of these programs can sometimes be very impressive and even useful, there is a serious danger inherent in relying on them too much in situations where human wisdom and reasoning are needed.

Isn't an accurate diagnosis based on relating the described symptoms, X-ray results, fMRI images, etc., to a body of information and a list of possible conditions?

It depends on how much the training text base includes accurate medical diagnoses involving those tools. Bear in mind that it takes a long time to train these chatbots up, and they don't actually do any "learning" after the training session. Producing machines that learn from experience and evolve their understanding of reality is a long way off. Our brains come equipped with an episodic memory capability that we haven't quite figured out how to implement yet. All doctors learn from experience over time, and that includes changes in the way they interpret symptoms and diagnostic results. Chatbots can only summarize prerecorded stuff.
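A minimal sketch of that frozen-after-training point (a hypothetical class, not any real chatbot's architecture): whatever is not in the training material simply does not exist for the model, no matter what happens in the world after deployment.

```python
class FrozenChatbot:
    """Everything the bot will ever 'know' is fixed once, at training time."""

    def __init__(self, training_corpus: dict[str, str]):
        self._answers = dict(training_corpus)  # parameters frozen at deployment

    def ask(self, question: str) -> str:
        return self._answers.get(question, "I don't know.")

bot = FrozenChatbot({"capital of France?": "Paris"})
print(bot.ask("capital of France?"))           # Paris

# A doctor who reads today's journal practices differently tomorrow;
# the deployed bot does not:
print(bot.ask("newest treatment guideline?"))  # I don't know.
```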


"Artificial intelligence is more accurate than doctors in diagnosing breast cancer from mammograms, a study in the journal Nature suggests.

An international team, including researchers from Google Health and Imperial College London, designed and trained a computer model on X-ray images from nearly 29,000 women. The algorithm outperformed six radiologists in reading mammograms.

AI was still as good as two doctors working together."


That's why doctors consult with each other over diagnostic results. Neural nets are very good at some kinds of pattern recognition. Just bear in mind that abacuses surpass the ability of mathematicians to calculate certain types of results quickly and accurately. Humans are just as good at getting the results; it just takes them much longer. Abacuses are more limited when your needs go beyond the narrow range of things they can do for you. And you must know how to make them work properly. Otherwise, they are useless tools.
 
If AI becomes a better doctor, surgeon, lawyer, designer, builder, writer, driver, etc, than any of us, what is left for us to do in life?
Fundamentally, AI as we currently see it is basically autocomplete on steroids. As such, it can't actually create.

Most people wouldn't have a job in such a world but there would still be value in the development of new stuff. Advancing the state of knowledge is still of benefit and an awful lot more manpower could be devoted to it when it's not needed for the dissemination of existing knowledge.
 
They are finally beginning to understand the legal issues surrounding some of these systems:

Legal risk for AI: Users can be liable for chatbots' mistakes


In the early 2000s, there were a few DARPA Grand Challenge contests to develop self-driving autonomous vehicles, and I was involved in several robotics projects related to these vehicles. One of the issues that we discussed with developers was who would be liable for accidents caused by self-driving vehicles. The owner of the vehicle? The manufacturer? The vendor who supplied the part that caused the crash? The software developers? Obviously, you can't sue the AI.
 
As is currently the case, liability would be determined on a case-by-case basis.

It shouldn't be any more difficult to do than it currently is; probably it would be easier, because autonomous vehicles can record everything.

Highly complex cases aren't anything new. Even in an apparently simple case, it's often impossible to determine who was at fault and to what degree, with any certainty.

That's why insurers retain a lot of expensive lawyers.
 