
Jobs that robots can't do

rousseau

As a software developer I've read quite a bit about how software is eating the world, the impending robot invasion, and how routine jobs requiring little cognitive skill are on the decline. From all of this talk I can only glean that in 100-200 years' time society is going to be much closer to a post-work economy.

That said, it all raises an interesting question when we talk about a specific type of job:

Jobs that:

1) Need to be done

and

2) Can't be done by robots

I wonder what types of jobs we think will *never* be taken over by machines. I also wonder how society might go about divvying up responsibility when a much smaller proportion of society actually needs to work.
 
There is no job that can't, in principle, one day be done by machines.

There will probably always be some jobs that can't, practically and profitably, be done by machines yet.

The question of how society might handle this is misguided; it is not a matter for speculation, so much as a matter of recorded history. We are handling it now, and have been for centuries. More than enough time to have made mistakes from which we could learn if we wanted to.

The desire to learn from our mistakes seems sadly lacking. Perhaps we should program a computer to do it for us.
 
I didn't intend to imply that society *won't be able to handle it*, but rather to ask *how we will react, or have reacted, as we are forced to keep shifting toward a reality where the majority of work is done by machines*. My assumption would be a guaranteed basic income for most, but it would seem to create a weird dynamic if a minority of people were working. Why wouldn't the minority just want to be on a basic income?
 
Define "robot".

How much AI does the robot have?

A "robot" that can do every human job almost certainly would be a person, not merely a machine.
 
There are no such jobs.

Unless we define "being an authentic human being as a museum piece" as a job, and our future robot overlords have an interest in preserving such curiosities.
 
Are you really a software developer? Full Stack?

What languages can you code?

I can write C++, VisualBasic, Java, and I am learning Python now. Should be "fluent" by winter.

So...as a software guy you probably are aware that, despite what you see in the movies and TV and what some futurists claim, we are a long, long way from any sort of "robot invasion."

Right? And you most likely are aware that as of yet no software or AI has passed the Turing Test.

And of course, insofar as any sort of invasion goes, GIGO (garbage in, garbage out) would apply.

A major problem facing AI is that of adaptability. Programs, unlike biological systems which often adapt, are very "brittle." Thus, the crucial "fuzzy logic" aspect still has a ways to go, I think.
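To make the "fuzzy logic" bit concrete, here is a minimal Python sketch (the "warm" example and its thresholds are made up for illustration): instead of a hard true/false rule, a value gets a degree of membership between 0 and 1, so behaviour degrades gradually rather than flipping at a single point.

def warm_membership(temp_c):
    """Degree (0.0 to 1.0) to which a temperature counts as 'warm'.

    A crisp rule would say warm if and only if temp_c >= 20; the fuzzy
    version ramps the answer smoothly between 15 and 25 degrees C.
    """
    if temp_c <= 15:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 15) / 10.0  # linear ramp between the two bounds

# A crisp rule flips from "not warm" to "warm" at one point; the fuzzy
# rule degrades gracefully, which is one answer to "brittleness".
for t in (10, 18, 22, 30):
    print(t, round(warm_membership(t), 2))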

There is the Markov limitation.

Too: The biggest challenge is understanding what 'understanding' something actually means. A machine can execute a sequence of steps but how can it be designed to really understand something? Does Deep Blue actually understand the nuances of playing chess? Can it imagine new combinations and sacrifices that would be inspired from its own unique analysis? How can self-awareness and a capacity to learn be induced in machines? These are the unsolved problems that face researchers. The sophisticated parallelism and information processing capacity of the human brain is still unmatched.

We must all remember that at the end of the day, computer "thinking" is only the manipulation of electrons being passed through logic gates. Thus the term "brittle."

Just sayin'.


https://www.techdirt.com/articles/2...-first-time-everyone-should-know-better.shtml


>>>>>>>> p.s I don't wanna stray off-topic but I would love to discuss coding with you some more. Maybe on the tech threads?<<<<<<<<
 
There are tons of jobs that can never be handled entirely by robots, at least not without going very far into the question of AI and how developed that gets.
There's also the question of "jobs" vs. "professions," because there are even more professions that robots (or even AI) could probably never do, for example: artist (of any medium or type, including the plethora of support jobs connected to the manufacture of goods or products based on imagination), therapist, architect, decorator, party planner, graphic designer, photographer, human resources, advertising, food engineer, etc.
 
Are you really a software developer? Full Stack?

What languages can you code?

I can write C+, VisualBasic, Java, and I am learning Python now. Should be "fluent" by winter.

So...as a software guy you probably are aware that, despite what you see in the movies and TV and what some futurists claim, we are a long, long way from any sort of "robot invasion."

A simple test: Look at Microsoft's attempt at design-by-contract for C#. It's hard to get through a function without having to write some sort of assertion to tell it something it can't figure out on its own. I loved the idea but very soon uninstalled it because I found it made so many mistakes.
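For anyone who hasn't seen design-by-contract, the core idea is easy to sketch without Microsoft's tooling; here is a rough Python illustration using plain assertions for the pre- and postconditions (the function and its conditions are invented for the example; the real C# Code Contracts also ran a static checker that tried to prove these conditions, which is the part that needed so many hints).

def integer_sqrt(n):
    """Largest r with r*r <= n, written contract-style."""
    # Precondition: what a static checker would try to verify at every call site.
    assert n >= 0, "precondition: n must be non-negative"

    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1

    # Postcondition: what the checker would try to prove from the body itself.
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(integer_sqrt(17))  # prints 4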

And what's C+, bastardized code that's half C, half C++? :D

Right? And you most likely are aware that as of yet no software or AI has passed the Turing Test.

Which says nothing about its ability to write code. Plenty of things that can pass the Turing test can't write code (the vast majority of all humans); there are limited-purpose program generators out there, none of which have any understanding of English or other human languages and thus couldn't even attempt a Turing test.

A major problem facing AI is that of adaptability. Programs, unlike biological systems which often adapt, are very "brittle." Thus, the crucial "fuzzy logic" aspect still has a ways to go, I think.

Agreed.

Too: The biggest challenge is understanding what 'understanding' something actually means. A machine can execute a sequence of steps but how can it be designed to really understand something? Does Deep Blue actually understand the nuances of playing chess? Can it imagine new combinations and sacrifices that would be inspired from its own unique analysis? How can self-awareness and a capacity to learn be induced in machines? These are the unsolved problems that face researchers. The sophisticated parallelism and information processing capacity of the human brain is still unmatched.

Define "understand". It seems to me that "understand" translates as "logic we can't see or figure out how to reproduce".

We must all remember that at the end of the day, computer "thinking" is only the manipulation of electrons being passed through logic gates. Thus the term "brittle."

And human "thinking" is electrical impulses being passed through neurons. I don't think there is the fundamental difference you are talking about, it's just our best attempts at AI aren't even at the level of insects yet.
 
Your last comment is opinion only. The reductionist or materialist view.

There is likely more going on up there in our brains. Call it a soul or Universal Spark or "anima" or what have you. Neurologists do not understand it yet. Which means that we are light years away from being able to replicate it and insert it into any type of programming language.

Also you have the phenomenon of neuroplasticity, which most people know of only in regard to changing behavior and even improving IQ. Yes, that is part of it, but it also has to do with thinking and understanding, and adaptability. Things that logic gates cannot do; they have zero plasticity. It's the fuzzy logic deal again.

Any software program, even the "fuzziest" and most intuitive, is, at least thus far, little more than a variation on a feedback loop.
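To put a face on "variation on a feedback loop," here is a toy Python sketch of the pattern being described, a thermostat-style controller with invented numbers: measure, compare to a target, correct, repeat.

def run_thermostat(target, temp, steps=10, gain=0.5):
    """Classic feedback loop: read the 'sensor', compare to the setpoint,
    and apply a correction proportional to the error."""
    for step in range(steps):
        error = target - temp   # compare the measurement to the goal
        temp += gain * error    # actuate: nudge the system toward the goal
        print(f"step {step}: temp={temp:.2f}")
    return temp

run_thermostat(target=21.0, temp=15.0)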

I also don't understand how you can say a computer's inability to pass a Turing Test has nothing to do with coding. LOL. What is it exactly, then, that allows a computer to try to pass it?

Our brains are much more than that. We can adapt and alter. Machines and their code cannot.

Lastly, the best AI deal out there now for Windows is "Virtual Assistant Denise." If not familiar, check this out: https://guile3d.com/en/product/


Oh, and it cannot come close to passing the Turing Test! LOL
 
If we limit it to "jobs which cannot be economically done by robots," the list can be quite long. Think of a human being as a robot driven by a non-linear computer which does not require electrical power, can operate in extremely wet conditions, and can be mass produced by unskilled labor.
 
I'm thinking that farrier work (trimming the feet of horses) might be challenging for a robot, not because of the degree of technical difficulty involved, for that could be programmed and the robot could be far more precise than a human.
The nature of horses is what would likely preclude this task being done by a robot. Now if you elected to sedate the horses, a machine could do fine, but as hooves need trimming on a 6-8 week schedule, being sedated that frequently would not be healthy for the horse unless we devise much better drugs, which is always a possibility. Given the costs involved to advance such a scenario, I think some of my skills may prove useful going forward, in teaching them to others. I can't see our love for horses ending any time soon.
 
As a guy whose job was replaced by technology, I've been telling young people to get jobs not easily replaced. Plumbing, electrician, HVAC, that sort of thing. Robots can create those systems but it's a long way off before they can install and service them.
 
This is a lot to respond to, so I'll just say: yes, I'm a developer :D.

Talking about a 'robot invasion' doesn't mean we need full-blown AI; all it means is that we have a variety of machines carrying out routine tasks. A machine can cook you dinner or vacuum your floor and still be a robot.
 
I wonder what our political leaders have in mind in terms of their taxpayer base in a future where computerization and robotics are being used by an ever-larger proportion of industries, while the productive worker/taxpayer base becomes an ever-shrinking source of revenue for both state and federal governments (not to mention the unemployed themselves).
 
Governments can use robots too. Imagine a robot politician.
 
And human "thinking" is electrical impulses being passed through neurons. I don't think there is the fundamental difference you are talking about, it's just our best attempts at AI aren't even at the level of insects yet.

Your last comment is opinion only. The reductionist or materialist view.

There is likely more going on up there in our brains. Call it a soul or Universal Spark or "anima" or what have you. Neurologists do not understand it yet. Which means that we are light years away from being able to replicate it and insert it into any type of programming language.

You realize you're on an atheist board? The notion of souls doesn't go over very well here. While we do not yet understand it, that doesn't mean it's beyond understanding.

Also you have the phenomenon of neuroplasticity, which most people know of only in regard to changing behavior and even improving IQ. Yes, that is part of it, but it also has to do with thinking and understanding, and adaptability. Things that logic gates cannot do; they have zero plasticity. It's the fuzzy logic deal again.

Any software program, even the "fuzziest" and most intuitive, is, at least thus far, little more than a variation on a feedback loop.

And an insect's brain is little more than a feedback loop. That's the level of AI computing at this point.

I also don't understand how you can say a computer's inability to pass a Turing Test has nothing to do with coding. LOL. What is it exactly, then, that allows a computer to try to pass it?

I'm saying that whether a computer can pass a Turing test and whether it can write a program are two different things--being able to do one says nothing about whether you can do the other.

Our brains are much more than that. We can adapt and alter. Machines and their code cannot.

Ever hear of genetic algorithms?
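For anyone who hasn't run into them, here is a bare-bones genetic algorithm in Python; the fitness function (count of 1-bits) and the parameters are arbitrary, just to show the select-mutate-iterate loop by which candidate solutions adapt over generations.

import random

def fitness(bits):
    return sum(bits)  # toy goal: maximise the number of 1-bits

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

def evolve(pop_size=30, length=20, generations=40):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)   # selection: fitter half survives
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)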

- - - Updated - - -

I know what jobs robots can already do - FoxNews person.

No--Uncanny valley.

While a robot could pretty much do Faux "News" reports, making a realistic hottie to deliver that "news" is another matter.
 
There are plenty of personal service jobs that people wouldn't want a robot to perform but would prefer another human instead. Masseuse, personal trainer, prostitute, hair cutter and stylist, among others. How many people do you know who prefer to go through the regular checkout line at the store rather than use the automated checkout?
 
My assumption would be a guaranteed basic income for most, but it would seem to create a weird dynamic if a minority of people were working. Why wouldn't the minority just want to be on a basic income?

I never understand why people seriously ask this question. It's like they don't understand humans.

Why WOULDN'T they want to work? If we have universal basic income, then that means they'd get that money AND whatever they make from the job they work. So long as the additional money is sufficient in the eyes of people to justify the time they spend earning it, you will have most people trying to continue to work if they can. This applies especially to jobs that are rewarding beyond just the financial, which is likely going to be true of many or even all jobs that exist in a society where automation takes care of all the rest.
 
There are tons of jobs that can never be handled entirely by robots, at least not without going very far into the question of AI and how developed that gets.

Name one.

There's also the question of "jobs" vs. "professions," because there are even more professions that robots (or even AI) could probably never do, for example: artist (of any medium or type, including the plethora of support jobs connected to the manufacture of goods or products based on imagination), therapist, architect, decorator, party planner, graphic designer, photographer, human resources, advertising, food engineer, etc.

You realize that some of these professions are ALREADY capable of being done entirely by computers, right? People who think that humans will always have a monopoly on creativity are seriously ignorant of what computers are already capable of doing in that field.
 