
AI Doomers and the End of Humanity

What are the six SAE levels of self-driving cars? | Top Gear -- from completely manual to completely automatic driving in levels 0 to 5, as defined by the Society of Automotive Engineers

What does the human in the driver's seat have to do?
  • 0, 1, 2 - You are driving whenever these driver support features are engaged - even if your feet are off the pedals and you are not steering
  • 3, 4, 5 - You are not driving when these automated driving features are engaged even if you are seated in the driver's seat
  • 0, 1, 2 - You must constantly supervise these support features; you must steer, brake or accelerate as needed to maintain safety
  • 3 - When the feature requests, you must drive
  • 4, 5 - These automated driving features will not require you to take over driving
What do these features do?
  • 0, 1, 2 - These are driver support features
  • 3, 4, 5 - These are automated driving features
  • 0 - These features are limited to providing warnings and momentary assistance
  • 1 - These features provide steering OR brake/acceleration support to the driver
  • 2 - These features provide steering AND brake/acceleration support to the driver
  • 3, 4 - These features can drive the vehicle under limited conditions and will not operate unless all required conditions are met
  • 5 - This feature can drive the vehicle under all conditions
Example Features
  • 0
    • automatic emergency braking
    • blind spot warning
    • lane departure warning
  • 1
    • lane centering
    • OR
    • adaptive cruise control
  • 2
    • lane centering
    • AND
    • adaptive cruise control at the same time
  • 3
    • traffic jam chauffeur
  • 4
    • local driverless taxi
    • pedals/steering wheel may or may not be installed
  • 5
    • same as level 4, but feature can drive everywhere in all conditions
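The level breakdown above boils down to a small lookup table. A minimal sketch in Python (the wording for each level is paraphrased from the SAE summary above; the function names are my own):

```python
# SAE J3016 driving-automation levels as a simple lookup table,
# paraphrasing the summary above.
SAE_LEVELS = {
    0: ("driver support", "warnings and momentary assistance only"),
    1: ("driver support", "steering OR brake/acceleration support"),
    2: ("driver support", "steering AND brake/acceleration support"),
    3: ("automated driving", "drives under limited conditions; human must take over on request"),
    4: ("automated driving", "drives under limited conditions; no takeover required"),
    5: ("automated driving", "drives under all conditions"),
}

def human_must_supervise(level: int) -> bool:
    """At levels 0-2 the human is driving and must constantly supervise."""
    return level <= 2

def takeover_request_possible(level: int) -> bool:
    """Only level 3 can demand that the human resume driving."""
    return level == 3
```

The key boundary is between levels 2 and 3: below it the human is always the driver; above it the system is.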
 
What are the six SAE levels of self-driving cars? | Top Gear
Is Tesla Level 3 or 4?

Neither. Tesla's in hot water for deploying its advanced driver-assistance technology suite, called Autopilot, and insisting that such a name is not misleading to the hundreds of drivers illegally taking their hands off the wheel and letting the car auto-steer.

...
Officially, the tech is still considered Level 2. And in fact, Mercedes is the only car brand to successfully receive Level 3 certification for self-driving in the US.

These 13 Cars Have The Most Advanced Self-Driving Features
noting
Mercedes Is Now Approved For Level 3 Autonomous Tech - "Mercedes-Benz is the first manufacturer to achieve an international system approval to UN-R157 for its level 3 autonomous tech."

Back to the earlier link. This MB car has:
  • LiDAR -- Laser technology scans the road for objects and terrain
  • Road wetness sensor -- Measures water levels on the road
  • Redundant System -- Enables safer control of braking, steering, and power supply
 Lidar - like radar, but with laser light (typically near-infrared) rather than radio waves.
 
The rest of the cars listed are at levels 1 and 2. Features of some of these cars:
  • Highway assistance
  • Traffic assistance
  • Lane-change assistance
  • Staying-in-lane assistance
  • Slowdown for curves
  • Parking assistance
  • Backing-up assistance
  • Evasive-steering assistance
  • Adaptive cruise control - maintains distance from vehicles in front
  • Forward-collision warning and/or avoidance
  • Around-view monitor with moving-object detection
  • Blind-spot assistance
  • 360-degree camera
  • Safe exit - warning of anything near doors
  • Speed-limit compliance with reading of traffic signs
  • LIDAR mapping
  • Driver-attention system
  • Vehicle summoning - automatic parking-place exiting
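One of the listed features, adaptive cruise control, is conceptually simple: adjust speed to hold a target gap to the vehicle in front. Here is a toy sketch of one way to do it, as a proportional controller; the gains and limits are invented for illustration and bear no relation to any manufacturer's actual algorithm:

```python
def acc_acceleration(gap_m: float, target_gap_m: float,
                     own_speed: float, lead_speed: float,
                     k_gap: float = 0.2, k_speed: float = 0.5) -> float:
    """Proportional controller: accelerate if the gap is too large,
    brake if it is too small, and tend toward the lead vehicle's speed."""
    gap_error = gap_m - target_gap_m        # positive -> too far behind
    speed_error = lead_speed - own_speed    # positive -> lead is pulling away
    accel = k_gap * gap_error + k_speed * speed_error
    return max(-3.0, min(2.0, accel))       # clamp to comfortable limits (m/s^2)
```

At the target gap with matched speeds the command is zero; a closing gap produces braking, clamped to a comfortable deceleration.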
 
In my opinion, safety on the road depends in no small part on the ability of drivers to figure out what is in the minds of other drivers, which is not always easy.
I couldn't disagree more.

Safety on the road depends on the ability of drivers to know and obey the rules, and to reasonably expect others to do the same; And on the ability to rapidly detect, and respond appropriately, when others break those rules.

It's a situation far better suited to autonomous algorithmic systems than to human beings.

The vast majority of even professional drivers have woeful gaps in their knowledge of road rules, in my experience. These can be programmed into autonomous vehicles, and kept up to date automatically as legislators change things. Most drivers have forgotten half of the rules that were in force when they took their one and only driving test; And half of what they do remember, is now out of date.

Well, I couldn't disagree more with you, because there are already stories like this:

Cruise Agrees to Reduce Driverless Car Fleet in San Francisco After Crash


It isn't just that one driverless car crashed into a fire truck; there have been other problems that appear less serious, but we are in early days. It is far from clear that overall road safety will validate the marketing hype about how safe these vehicles are once their numbers on the road start increasing. The fact is that we've all seen other drivers behaving recklessly and erratically, often because they are driving under the influence. Humans can recognize such patterns of anomalous behavior more quickly and accurately than current driverless-car technology allows.


In this case, however, it is scary because it puts humans in realistic danger of being injured or killed, not to mention property damage.
No, it doesn't.

They're CURRENTLY in danger from those things, and autonomous vehicles REDUCE that danger.

That they don't reduce it to zero is irrelevant; but it's the basis for people's being scared.

It's not realistic to be more fearful of autonomous vehicles than of human piloted ones; It's a cognitive error.

And it's not reasonable to judge the proposed new paradigm against perfection; It should be judged against the paradigm it supplants.
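The "judge it against the paradigm it supplants" argument is just a comparison of crash rates, not a demand for zero crashes. A sketch with deliberately made-up numbers (these are not real statistics):

```python
def crashes_per_million_miles(crashes: int, miles: float) -> float:
    """Normalize crash counts by exposure so fleets of different sizes compare."""
    return crashes / (miles / 1_000_000)

# Hypothetical figures purely for illustration:
human_rate = crashes_per_million_miles(crashes=200, miles=100_000_000)  # 2.0
av_rate = crashes_per_million_miles(crashes=3, miles=2_000_000)         # 1.5

# The relevant question is not "is av_rate zero?" but "is it below human_rate?"
autonomous_safer = av_rate < human_rate
```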

What you are missing here is that the technologies developed for driverless cars can be deployed in driverful cars, and that will lead to an increase in safety on the roads. You need to compare apples to apples: not every car out there has safety features that prevent unsafe driving, but they can be added to cars driven by people. You need to compare driverless vehicles to augmented-driver vehicles.

My position is that these AI technologies are better suited as augmentations of human drivers. The flaw in most people's thinking is that these machines are thought to be better than humans, or replacements for humans, but they are only really useful as extensions of humans. They don't model reality in the same way that humans do, so they won't scale up to the level of intelligence needed to operate safely as fully autonomous machines. For example, they are not reliable at tasks like object recognition, a skill that drivers use all the time.

Usually, they are tested under conditions that are more controlled than the ones they meet out on the road. Many of these startup companies are essentially beta testing them under real-world conditions before they are ready for deployment. The driverless trucks are particularly worrisome, because their momentum makes them much more dangerous in the vicinity of humans.
 
According to an article in The New Yorker newsletter that showed up in my inbox yesterday (behind a paywall, I'm afraid), two driverless-taxi companies that have been testing their cabs in San Francisco, Cruise and Waymo, have been granted licenses for limited operations. The number of cabs they can deploy is limited, and they are prohibited from using freeways, etc. The public debates seem to reflect what has been discussed here. Aside from the weirdos who claim that wi-fi raises their blood pressure and electromagnetic waves give them headaches, there are two camps: one claiming that autonomous vehicles make us safer, and the other claiming that autonomous vehicles are incapable of duplicating complex human judgement skills. A middle position claims that more testing is needed. Anecdotes of bizarre mishaps are given.

The article closes by predicting that self-driving vehicles will enter our lives in fits and starts, rather than in one tremendous wave.
 

Right. I reported that article in the post above yours. The point I was making with it is that Cruise's permitted fleet has been cut in half as the result of a crash between a Cruise vehicle and a fire engine. There have been some other incidents, including several where traffic became blocked when the vehicles broke down in the middle of the road. In the absence of a human responsible for the vehicle, it is a bit awkward to get help when there is a problem.

The whole purpose of these pilot projects is to figure out what can go wrong, so Murphy's Law has apparently started to pop up in the rearview mirror. Too much money has been spent for them to stop the program now, but Cruise needs to address the problems that have already arisen.

I don't expect things to go well as extreme weather incidents start to add unusual driving conditions to the mix. Do these cars stop for puddles in the roadway? Hopefully, they won't drive their passengers into flooded roads. How will they react in sudden hailstorms? What if there is a power failure and all the traffic lights go out? I guess we'll find out.
 
Singularity approaches.
Never mind the singularity. What approaches is a world in which we can have anything we want with almost no human effort.

Only capitalists could see that and say "OMG we're doomed!".

The big problem is that they are determined that, if we can make enough of everything for everyone, they should be the only ones to get it.

A post scarcity society is (largely) a post privilege society, and those who currently enjoy privileges are going to fight tooth and nail against that.
For all intents and purposes, we could have communism, or near-communism, now. The only reason we don't is the capitalists who cling to the current system.
 
About AI & doctors: you don't need that new shiny ChatGPT-level AI to make a diagnosis in 99% of cases.
Most doctors go through a checklist in their "diagnoses". Computers could do that yesterday, and 30 years ago.
There is not much thinking in that. I consider human doctors a scam. They should all be computers already.
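For what it's worth, the "checklist" style of diagnosis really is trivially mechanizable; it's just rule matching. A toy sketch (the symptoms, rules, and labels are invented for illustration, not medical advice):

```python
# Toy rule-based "checklist" matcher; the rules are invented examples.
RULES = {
    frozenset({"fever", "cough", "sore throat"}): "possible flu",
    frozenset({"sneezing", "runny nose"}): "possible common cold",
}

def checklist_diagnosis(symptoms: set) -> str:
    """Return the first rule whose required symptoms are all present."""
    for required, diagnosis in RULES.items():
        if required <= symptoms:  # subset test: all required symptoms present
            return diagnosis
    return "no rule matched; refer to a clinician"
```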


About Midjourney, I was initially impressed. But then I realized what's really going on. They merely take bits and pieces of real art/pictures and compose them in a different way. That's not really that hard if you think about it.

And speech recognition still sucks. I don't know how they concluded that it's better than humans already.
My judgment is based on what Google uses.

Self-driving cars are similar to speech recognition: the tech still sucks and can be easily confused by trash on the road.
 

It varies with the language. Speech recognition for standard literary dialects of English is actually quite good. Tremendous progress has been made in just the past decade with the new tensor-based predictive algorithms, which narrow the range of words that a given chunk of acoustic signal could represent. The programs don't just rely on acoustic information to home in on the words; they also narrow the range of words scanned for on the basis of semantic content in the running discourse.
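That combination of acoustic evidence with a predictive language model is the classic noisy-channel idea: pick the candidate that maximizes acoustic fit times contextual probability. A toy sketch with invented scores (real recognizers use learned models over huge vocabularies):

```python
import math

# Toy noisy-channel decoder: choose the candidate maximizing
# log P(acoustics | words) + log P(words | context).
# All probabilities below are invented for illustration.
acoustic_likelihood = {"recognize speech": 0.4, "wreck a nice beach": 0.6}
lm_prior_in_context = {"recognize speech": 0.9, "wreck a nice beach": 0.1}

def best_transcription(candidates):
    """Pick the candidate with the highest combined log score."""
    return max(candidates,
               key=lambda w: math.log(acoustic_likelihood[w])
                             + math.log(lm_prior_in_context[w]))
```

Even though "wreck a nice beach" fits the invented audio slightly better, the language model tips the decision to "recognize speech".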

I don't know how good speech recognition is for Russian, but I do know that automated Russian-English translation is now very good. I use it all the time, although it still makes a certain number of errors. It is possible to watch films and news programs in Russian using only closed captions for English that are generated automatically. The programs always make some errors, but not enough to make it too difficult to follow the dialog.
 
Professor Emily Bender at the University of Washington is a recognized AI researcher who has much to say about the fearmongering over artificial-intelligence technologies in the press and on social media. This fearmongering has long been popularized in science fiction and the movie industry, but it has flooded media outlets since the release of OpenAI's chatbot technology. Large Language Models (LLMs) are sometimes called "generative AI" because they are trained (at high cost) on huge amounts of textual data, which allows the technology to cluster written snippets of text into topics that can then be used to generate summaries of their training data. These programs aren't really intelligent in a human sense, do not have emotions or thoughts, do not learn from experience, and do not actually understand input queries or their own responses. They are essentially "stochastic parrots" trained to emit written English that simulates a thoughtful response based on the published words of human writers.
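The "stochastic parrot" point can be made concrete with a toy bigram generator: it produces fluent-looking word sequences purely by recombining transitions seen in its training text, with no understanding anywhere. A minimal sketch (real LLMs are vastly larger and use learned continuous representations, but the next-token objective is the same in spirit):

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which in the training text.
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def parrot(start: str, length: int, seed: int = 0) -> str:
    """Generate text by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        nxt = following.get(words[-1])
        if not nxt:          # dead end: word was never followed by anything
            break
        words.append(rng.choice(nxt))
    return " ".join(words)
```

Every adjacent word pair in the output was seen in training; fluency here implies no comprehension at all.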

Now Emily has published a very nice article in Scientific American about AI doomers and how they have distorted the reality of what AI is and what its real potential harms are:

AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype


I think a lot of the hype around AI is rooted in companies using the term for any kind of advanced algorithm they use, because of some kind of wow factor. Back in the day, we would steer clear of using the term A.I. to describe our work and instead name the specific algorithm, or perhaps call an implementation "smart." Otherwise, it might be seen as kooky, treated with skepticism, or seen as misapplying terminology from a related field of study.

In the era we are in, there is a fake-it-until-you-make-it business paradigm, so making big claims is trendy. Ordinary people are being inundated with the buzzword and inferring it is everywhere: healthcare, manufacturing, the defense industry, not merely analytics but also decision-making. Of course Hollywood and social media are going to amplify concerns about the ramifications of runaway, hard-to-test, hard-to-reproduce decision-making that appears to be made by automatons that cannot understand humanity. And that's going to create a feedback loop, with companies taking advantage of ignorance and politicians trying to quell concerns.

Perhaps soon you will be able to go to the grocery store and buy lettuce with a label promising the farm didn't use AI to grow it, but it will cost a dollar more. Of course, potential customers will probably have read about the dangers of AI via a Google News feed that automatically prioritized their news content without them even knowing it...
 
More about speech recognition: I have always suspected that English speech is harder to "decipher" than Russian.
And AI seems to corroborate that. It can do Russian almost perfectly, whereas English is far from it. It makes mistakes even with news reporters.
And the mistakes are pretty stupid.
 

You have no problem with Russian, because that is your native language, not English. So English is always going to be harder for you. That said, Russian spelling is far less complex than English spelling, although it is not without its own peculiarities. Cyrillic was developed natively for south Slavic language pronunciations, primarily Old Church Slavonic. English spelling is a mess to learn, but Cyrillic represents sound-symbol correspondences that predate the evolution of hard and soft consonants that we see today in Slavic languages. So even Russian spelling contains a lot of conventions that don't exactly reflect pronunciation.
 
I don't need my news feed prioritized. I don't need targeted ads, friend suggestions or random pop ups.
It depends on how they are handling the prioritizing. Offering you more of what you click on is far more likely to produce something interesting than offering random things.
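"More of what you click on" is, in its simplest form, just weighting candidate items by per-topic click counts. A toy sketch (the topics and counts are invented):

```python
from collections import Counter

# Invented click history: how often this user clicked each topic.
clicks = Counter({"science": 12, "sports": 3, "celebrity": 1})

def rank_feed(items):
    """Sort candidate (title, topic) pairs by the user's click history.
    Counter returns 0 for unseen topics, so they sink to the bottom."""
    return sorted(items, key=lambda item: clicks[item[1]], reverse=True)

feed = rank_feed([
    ("Transfer rumours", "sports"),
    ("New exoplanet found", "science"),
    ("Red carpet photos", "celebrity"),
])
```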
 

Perhaps so, yet I prefer random because you never know what interesting things may pop up. If I need something specific, I'm happy to run a search.
 
If AI becomes a better doctor, surgeon, lawyer, designer, builder, writer, driver, etc, than any of us, what is left for us to do in life?
Observe. Indeed, it would be extremely important (and something the global governments have failed to do) to figure out how we make our livelihoods if technology displaces most jobs. This sort of thing is hinted at in The Expanse, where there are very few jobs.

But if we have food and drink and shelter, there are always the arts, nature, sport, etc... Our lively identity and who we are aren't necessarily defined by our jobs, we just need them to be able to do what we actually like occasionally.

Whenever I see a post about AI 'giving us time for the arts', I always think to myself - 'I can't wait to write poetry seven days a week about all the interesting experiences I'm having'.

IMO, we already live in a world that's quite automated, but most people born into it don't notice, because it's all they know. Week to week our hardest tasks are showing up for work, grocery shopping, and paying the electricity bill.

In the future when automation becomes even more extreme I'd expect the brunt of most populations to adapt to it fine, and a small portion of people who see it for what it is to be upset about it. From my personal perspective, my life is already quite boring due to automation. But most people out there are content spending their lives watching Netflix and sports.
 
Look at production in the US. We make less, and the coal miners and steel workers aren't adapting fine. The Rust Belt is named that for a reason. Some have adapted, but there is a glass ceiling. It will get worse.
 