Ok, that's a fairly major point, and I'd agree with you. The point I was disputing was that robots would necessarily replace humans, and then we got bogged down on what is likely in the near future.
Keep in mind that 'near future' was never defined. Some people think it's 15 years. I would consider the next 50 years to constitute the near future.
Agreed. It looks like we just mean different things by 'replacement'. Obviously robots are useful, and do tasks we might otherwise do ourselves. The same is true of dogs.
It isn't really a matter of replacing, ultimately.
1) We *will* replace ourselves in any jobs that are too dangerous for humans (already happening)
2) We *will* replace ourselves in any jobs where the robot/ai is far cheaper or far superior (already happening)
3) This will likely result in ever-increasing levels of less-than-full-time employment, which will necessitate economic/political changes that lessen our reliance on paid work in order to live.
4) In a society where people don't rely on paid work, robots/ai will start to take even more jobs, as the dull and dirty jobs that people used to take solely for the money will no longer attract a sufficient human workforce.
5) Jobs that carry the level of responsibility that people are often uncomfortable handing over to machines *will* be delegated to robots/ai if those systems are superior at the job (already happening).
6) In a society sufficiently automated through these processes, even leadership roles might eventually be handed over in this manner.
Conclusion: speaking from a deployment angle, it is *plausible* that humans can be entirely replaced in any job necessary for the functioning of society. Whether this *will* happen is another question. It may be that instead, robots/ai will replace us in all areas to some extent, but not entirely. It might even be that we get a Butlerian Jihad.
To touch upon some of the points in greater detail:
Point 3: This is particularly noticeable in my country. We have perhaps the highest percentage of people not working full-time jobs in the world: 26.8% of men and 76.6% of women in the Netherlands work less than 36 hours a week, compared to just 8.7% of men and 25.8% of women in the rest of the EU. There's even been political discussion about adopting a 21-hour workweek (though the current government isn't ideologically inclined to agree with such measures), since there just isn't enough work to go around otherwise.
There are a number of reasons for all this (the female discrepancy has less to do with sexism and more with a few historical oddities, but those are a whole different topic). One is that part-time jobs here are actually quite well paid and the government has enacted laws guaranteeing the right to reduced working hours. Another, very important reason is an exceptionally high degree of automation across a wide range of industries, which is sometimes far ahead of the situation in other countries (the port of Rotterdam is a good example, with loading, unloading and transport of cargo already fully automated with almost no human involvement back in the early '90s; many world ports haven't even begun the process of trying to catch up). In some ways, the Netherlands and its workload distribution might serve as the prototype society for an increasingly automated future.
Point 5: As an example of this I always like to point to the Maeslantkering, a storm surge barrier that protects about 1.5 million people and which is in fact the largest autonomous robot on the planet (as big as the Eiffel Tower, and 4 times as massive). Humans do not control the Maeslantkering. It is controlled by an AI system that cannot be overruled by humans. If the AI decides that the barrier closes, it will close, even if that costs billions of euros in economic damage. If the AI decides otherwise, it won't close... even if humans are starting to panic about the storm raging outside. Why do we leave the decision to an AI when other countries would insist on a human executive? Because the AI is simply far better at determining the risk and choosing the optimal response. The risk of a human making very costly mistakes was deemed too high.
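To make the "no human override" point concrete, here's a toy sketch (Python) of what an autonomous closure rule of this kind amounts to. The threshold, forecast numbers and names are purely illustrative assumptions on my part, not the real parameters or interfaces of the Maeslantkering's control software:

```python
# Hypothetical sketch of an autonomous closure decision, loosely in the spirit of
# a storm surge barrier controller weighing a forecast against a fixed closure
# criterion. All values and names are illustrative, not the real system's.

from dataclasses import dataclass

@dataclass
class SurgeForecast:
    predicted_level_m: float   # predicted water level, metres above reference datum
    uncertainty_m: float       # forecast uncertainty band, metres

CLOSURE_THRESHOLD_M = 3.0      # illustrative closure criterion (metres above datum)

def should_close(forecast: SurgeForecast) -> bool:
    """Decide autonomously whether the barrier closes.

    Deliberately no 'human override' parameter: the decision is a pure
    function of the forecast, which is the point being made above.
    """
    # Close if the upper edge of the forecast band reaches the criterion.
    return forecast.predicted_level_m + forecast.uncertainty_m >= CLOSURE_THRESHOLD_M

if __name__ == "__main__":
    calm = SurgeForecast(predicted_level_m=1.8, uncertainty_m=0.3)
    storm = SurgeForecast(predicted_level_m=2.9, uncertainty_m=0.4)
    print(should_close(calm))   # False: barrier stays open, shipping keeps moving
    print(should_close(storm))  # True: barrier closes, whatever the economic cost
```

The point of the sketch is simply that once the criterion is set, the decision is mechanical and final; the humans' only input happened when they chose the rule.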
The Maeslantkering shows me that humans are perfectly willing to leave important decisions to machines if those machines are better at it than we are. If we are willing to trust a machine with the lives of 1.5 million people, we would trust a machine with the lives of 17 million. Or a billion. We would trust it with running the economy. We would trust it with running the government. We would trust it with running anything and everything. We're obviously not that concerned about giving the decision to a machine, so long as the machine is competent enough.
Yes, absolutely. That's something that prostitutes do, after all.
I think the proper word for those people is escorts, not prostitutes.
More seriously; so what? That's only marginally more awkward than showing up to a party wearing the same outfit as someone else.
This is the point that keeps coming up: whether we're talking about a straight one-for-one replacement, in which a robot takes over what a human does by duplicating every quality the human has (in the same way the human does, but better), or whether we're talking about a robot simply being used to do a task a human does now. The latter seems painfully straightforward; the former is the one that, to my mind, has issues.
If we *can* create a robot that does everything we do, that duplicates our "human-ness", then we *will* create such a robot. There is no question about this. If a thing is possible, someone will do it. Whether said robot fully replaces us so that we can move about in hoverchairs doing nothing at all, or whether the robot just becomes another member of our society, that is a different matter.
Absolutely. Just as it is your gut feeling that these issues are not only solvable, but trivially so. I don't think this thread can support the detail that would be needed to produce full arguments on these points, and I'm happy to disagree on something I see as a matter of degree rather than a fundamental difference.
I didn't say that the solutions themselves are trivial; just that assuming sufficiently advanced technology, things that would be hard to do with/on/for humans would be trivial for a robot/ai. That may seem like a circular argument, but it's more a matter of distinguishing between the ultimate limits of the two entities. Humans have already reached their natural limit. We could raise those limits through extensive re-engineering, but so long as we stay purely biological the theoretical upper limit will be much lower than the theoretical upper limit of a purposefully designed mechanical being.
I think design can only be superior for a given purpose. I'm extremely sceptical of the idea of a design that's just generally superior.
The human form is generally superior to that of a snail.
And if you can imagine that a design can be superior for a given purpose, then you can imagine that a design could be superior in general.
Very impressive! I've seen similar, but not this particular application, which looks very cool. It also looks quite slow, working on thermal absorption, which was the issue with previous attempts at strong artificial muscles. So I've got to ask: does it really have the identical performance characteristics I was asking for?
Pound for pound, inch for inch, it outperforms us. You made no requirement that it should operate at our speeds; although that seems to me to just be an engineering issue and not a fundamental one.
This I have seen. These systems can't regenerate past a certain damage threshold either, a much lower threshold than in humans, since they rely on the underlying superstructure for both delivery (large capillaries) and shaping of the new material.
Except that allows for the total regeneration of damaged areas. A human can't just regenerate an arm, even if you still have the bone structure for it.
While this is, again, very cool, you're kidding yourself if you think that this is anywhere near as good as human healing.
On some level it is. I never claimed that it was as good in the total package (yet); just that it was in some ways more efficient, which it is. To me, it serves as a proof of concept. An early step to a future beyond our wildest expectations.