• Welcome to the Internet Infidels Discussion Board.

Anyone else tired of AI?

American consumers spend $12 billion per year on actual AI services. Meanwhile, the industry is burning through $400 to $500 billion annually just on infrastructure: data centers, chips, electricity, etc. That's roughly a 40-to-1 gap. For every dollar consumers spend, the industry spends $40 building the infrastructure alone.
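Those figures do pencil out to roughly the stated gap. A quick sanity check, taking the midpoint of the $400–500 billion range (the dollar amounts are from the post above, not independently verified):

```python
# Consumer AI spending vs. industry infrastructure spending (figures as quoted)
consumer_spend_b = 12             # $12B/year on AI services
infra_spend_b = (400 + 500) / 2   # midpoint of the $400-500B/year range

ratio = infra_spend_b / consumer_spend_b
print(f"Industry spends about {ratio:.0f}x what consumers spend")  # ~38x, i.e. roughly 40 to 1
```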

Companies buying computers and related hardware to bet on the future of AI aren't making money yet, but the companies supplying the hardware are reeling in cash. But are their stock prices commensurate with their sales? Here are the Price-to-Sales ratios of some major U.S. companies. (Vertiv supplies cooling equipment to large data centers; its stock price has boomed in recent months.)

If you own $28,200 of Nvidia stock, your stake corresponds to only $1,000 of the company's annual sales. The same dollars invested in GM stock would correspond to about $90,000 of sales.

Nvidia 28.2
Netflix 12.1
Meta 10.6
Apple 9.5
Alphabet 8.4
McDonalds 8.3
Vertiv 6.9
Cisco 4.9
IBM 4.2
Caterpillar 3.85
Merck 3.5
Home Depot 2.35
Nike 2.3
Freeport-McMoRan 2.2
Pfizer 2.2
PepsiCo 2.2
Boeing 2.1
Disney 2.1
Chevron 1.4
Walmart 1.2
GM 0.32
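The "sales represented per dollar invested" comparison follows directly from the Price-to-Sales ratio: a stake of S dollars in a company trading at a P/S ratio of r corresponds to S / r dollars of annual sales. Checking against the Nvidia and GM figures above:

```python
# Annual sales "represented" by a stake, implied by the Price-to-Sales ratio
def sales_represented(stake_usd: float, price_to_sales: float) -> float:
    """A stake of stake_usd in a company trading at price_to_sales
    corresponds to stake_usd / price_to_sales of annual revenue."""
    return stake_usd / price_to_sales

print(round(sales_represented(28_200, 28.2)))   # Nvidia: 1000
print(round(sales_represented(28_200, 0.32)))   # GM: 88125, i.e. roughly $90,000
```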
 
So that's an indication of how overvalued stocks are? i.e. that they seem to be in a bubble?
 
Just a handful of companies account for a third of the S&P 500, and they are spending massively on AI. This is fodder for anyone wanting to predict doom and gloom. Are these companies overleveraged? Are they spending irresponsibly because they believe they are too big to fail? Do they have more liabilities than assets? I'm not seeing it. I'm not much for balance sheets, but the numbers for Amazon, Alphabet, and the like don't look too shabby.
This cannot be compared to the dot-com bubble, which was marked by negative cash flow and gross speculation. This is not that.
 
BTW about AI related to "Political Discussions"...

I thought this video showed Bernie Sanders, but I only realised it was fake when, at 17:52, he said:

A few hours ago, my office received a letter from a one zero year-old boy named Matteo in Wisconsin.

So instead of saying "ten", "Bernie" said "one zero year-old".

Then I noticed on the video's description:
DISCLAIMER:

The stories on this channel are entirely fictional and created for entertainment. Any characters mentioned are used only to help convey the message. They are not real, and any similarity to actual people or events is purely coincidental. Please enjoy and take the lessons from the story.

How this content was made:
Altered or synthetic content

"any similarity to actual people or events is purely coincidental" ?!

The video title was "Most People Have No Idea Why Donald Trump is Bulldozing the East Wing -Bernie Sanders"

So it seems to imply that it is actually Bernie Sanders.
 
My personal opinion is worthless. I can only report what I hear via the 'Net. Many pundits agree that AI-related stocks are in a bubble, but some advocate buying anyway! Bubbles can grow for quite a while before popping.

Some think AI will radically transform almost everything. Some think AI is over-rated and some companies are moving away from it. Both viewpoints may be simultaneously valid!

Warren Buffett has his own favorite stock-market valuation indicator: the ratio of total U.S. stock-market capitalization to GDP. A quarter of the way down that page, a graph shows that the Dot-com bubble in 2000 didn't even reach 150%, while the ratio stood at 211% at the end of 2024 and is higher still today.
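The Buffett Indicator is simply total stock-market capitalization divided by GDP, expressed as a percentage. A minimal sketch, where the $63T and $30T inputs are illustrative placeholders chosen to reproduce a ~210% reading, not current data:

```python
# Buffett Indicator: total market cap / GDP, as a percentage.
# Inputs are in $ trillions; the example values are illustrative only.
def buffett_indicator(total_market_cap_t: float, gdp_t: float) -> float:
    return 100.0 * total_market_cap_t / gdp_t

print(buffett_indicator(63, 30))  # 210.0 - in the neighborhood of the end-of-2024 figure
```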


The big AI spenders are mostly Big Cash-generating Machines. BUT they are not getting significant revenue (yet) from their AI efforts. SPENDING OTOH is huge:
Global spending on Artificial Intelligence (AI) is projected to reach approximately $1.5 trillion in 2025, an increase from nearly $1 trillion in 2024. This significant investment is driven by a massive influx of corporate and private funding, primarily centered on AI infrastructure and generative AI technologies.
 
I hate the AI customer sales reps who talk to me with folksy voices, adding "um"s and "er"s, repeating things I just said and throwing me compliments. It's a hall of mirrors experience that leaves me unsettled, empty, and sometimes sullen.

I spend a lot of time (way too much, probably) watching videos on the YouTube, and the number of AI voiceovers is maddening to me. In my previous career I recorded voiceovers for commercials, some narration, and other things, and I can spot an AI VO most of the time. The fake vocalized pauses (the "um" and "er") are pretty good, as is the addition of breaths between words and sentences, but they still haven't got the natural cadence down, and a dead giveaway is when the voice pronounces a word one way in one line and then another way a few lines later. But it's close enough, and it's putting a lot of my peers out of business.

AI and machine learning have a lot of potential (especially in my current job in autonomous vehicles), but in most of the use cases I've seen, it's just annoying. No, AI assistant, I don't need help rewriting a LinkedIn post or updating my résumé.
 
AI is what happens when powerful people just cannot understand why Clippy wasn't hugely popular, and decide to make a "better" version.

BTW in case you didn't hear about it in August, lots of people changed their profile pictures to Clippy...
Louis Rossmann, a "right to repair" supporter, said:
“If you’re tired of companies changing the terms of the sale after the sale, turn your profile photo to a Clippy,” says Rossmann. “If you’re tired of companies that have the ability to ransomware your products … change your profile photo to a Clippy.”
He seems to think Clippy had a good heart.
 
AI is just the latest form of cheap labor the rich use to get richer, and, as always, they’ve convinced the poor that the laborer (AI), not the system, is the problem. If AI can map the internet to learn every artist’s style, then surely it can remember whose work it learned from. IMHO the problem isn’t that it’s impossible to track, it’s that it’s inconvenient to pay.

Side note:
* Of course, being on school grounds might automatically constitute such lack of permission, but if a rule exists that says that weapons are prohibited on school grounds, that rule is almost certainly itself unconstitutional. Which is insane, and maybe even a good reason to eliminate the second amendment, but it is nonetheless still the plain consequence of the Bill of Rights as it currently reads.

America: “You have the right to bear arms, except where the people you elected might see them." ;)
But tech people have never thought to teach AI that respecting someone else’s intellectual property means not merely attributing the work to the appropriate party or parties but actually obtaining permission prior to using it. Because such things never occur to those who have no interest in anything except: more.
 

That didn’t slip past me just because I didn’t mention it. :rolleyes: Anyway, I’m honestly more tired of white people treating the success of other white people as their own than I am of AI. AI can stay as far as I’m concerned.
 
I’m almost certain AI is color blind, as was my comment. People are people are people.

Those who control AI (for now) care only for green, or gold, perhaps.

God knows people have stolen not just lives but accomplishments and inventions and art and music and science and mathematics from all who could not prevent such thefts.

Trust me: I’m not concerned about white people losing more than I am concerned about anyone else—everyone else.

Even if I hadn’t burned with rage when someone dared alter my own work, childish as it was decades ago, I’m certain I’ve read too much sci-fi and speculative fiction not to see the many ways this will go very badly. For all of us.
 

I’m honestly not sure I understand either of those replies; maybe my point wasn’t clear. What I said is that AI is the new cheap labor the wealthy are exploiting (and you’ve even hinted at that yourself). Instead of focusing on the exploitation itself (the lack of consent in using artists’ work and the absence of systems to pay them), most of the discussion I see in the media and online is about AI replacing jobs. In other words, people are blaming the cheap laborer instead of the people exploiting it.

It reminds me of slavery debates, where the question wasn’t “should people be exploited?” but “should slaves be free?” The moral framing was aimed at the wrong subject. Same thing here.

And just to clarify, my comment about being more tired of white people claiming the success of other white people than I am of AI wasn’t meant to start a debate; it was a tongue-in-cheek response to the thread title. But I’m happy to explain why I’m not tired of AI if that isn’t already obvious.

For example: if AI is being trained on data from real people, say, accounts receivable records or any other human-generated information, why shouldn’t those contributors be paid when that data creates value? Musicians get royalties every time their work is played or used commercially. Data is no different. Every data point ultimately traces back to human labor or creativity, and if AI depends on that human input, the humans behind it deserve compensation too.

Yet the conversation keeps being, “OMG AI is taking our jobs!” instead of recognizing that we could be setting up a fair system of exchange now, while AI is still scaling up, so that it benefits us early. But no, it’s easier to bash AI than to challenge the exploiters using it. Personally, I don’t care if it’s AI or something else entirely that ends up leveling the playing field, what matters is that we start building a structure that works for everyone, not just those profiting from other people’s work.
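A royalty-style scheme like the one described above, paying each data contributor pro rata whenever the pooled data generates revenue, can be sketched in a few lines. This is a toy model only; the contributor names and shares are made up for illustration:

```python
# Toy pro-rata royalty split: each contributor is paid in proportion
# to how much of the training data traces back to them.
def royalty_split(revenue: float, contributions: dict[str, float]) -> dict[str, float]:
    total = sum(contributions.values())
    return {who: revenue * share / total for who, share in contributions.items()}

# Hypothetical contributors and data shares:
payouts = royalty_split(1000.0, {"artist_a": 60.0, "artist_b": 30.0, "archivist": 10.0})
print(payouts)  # {'artist_a': 600.0, 'artist_b': 300.0, 'archivist': 100.0}
```

The hard part in practice is of course the provenance tracking that produces the shares, not the arithmetic of the split itself.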
 
Bombs level playing fields. Everybody loses, even those too stupid to know it.

So who is to be compensated? The people who aggregate the data? The people who analyze it? The people who ARE the data (or whose efforts are the data)?

People in the ‘in groups’ have been stealing from and exploiting those in the out groups for millennia. This includes but is not limited to taking credit for intellectual and creative work not originating with them. It does not, imo, make it better that this will happen to people who have, traditionally, been the in groups.

You are much more optimistic than I if you believe this levels the playing field in any way that does not resemble the work of a bomb.

I am not concerned that AI is ‘taking our jobs’ so much as that AI is taking our humanity. How many people in this thread have referred to humans as, what’s the term, meat suits? Makes my skin crawl. But at least I’m alive and know it.
 

I get what you’re saying, and I don’t disagree that exploitation has always been part of the human story. But that’s exactly why I’m arguing we should design systems that make it harder, not easier, to keep repeating that pattern. I don't see how my use of the phrase “level the playing field” warranted an equivalence to bombs. :rolleyes: I'm also not suggesting my idea is some magic wand that will erase exploitation. I mean equity through structure: compensation models that recognize all human input in data creation. We’re talking about data here, which, if handled correctly, is traceable, much as it is in the music industry. Exploitation in the music industry hasn’t stopped, but without the copyright system it’s arguable that not a single artist would be guaranteed any compensation at all.

The question of who gets compensated is valid, but it’s one we can only answer if we don’t throw our hands up just because people will always find ways to exploit others. They will, with or without solutions. But at least with a solution, someone deserving of compensation gets paid.

What I’m saying is that the general public keeps focusing on AI “taking their jobs,” which, in my view, signals to the people actually building and deploying AI that doing it the right way isn’t a priority, because no one’s really talking about that. I agree that AI risks dehumanizing us, but honestly, what doesn’t when humans are behind the wheel? That’s exactly why we should be rethinking the system around it now, instead of surrendering to the same old hierarchies, letting them roll out “improved” versions built on human labor, only to cut those same humans out of the corporate rewards once the system matures.

I guess we really can’t learn from the past after all. ¯\_(ツ)_/¯
 
AI is just the latest form of cheap labor the rich use to get richer, and, as always, they've convince the poor into believing the laborer (AI), not the system, is the problem. If AI can map the internet to learn every artist’s style, then surely it can remember whose work it learned from. IMHO the problem isn’t that it’s impossible to track, it’s that it’s inconvenient to pay.

Side note:
* Of course, being on school grounds might automatically constitute such lack of permission, but if a rule exists that says that weapons are prohibited on school grounds, that rule is almost certainly itself unconstitutional. Which is insane, and maybe even a good reason to eliminate the second amendment, but it is nonetheless still the plain consequence of the Bill of Rights as it currently reads.

America: “You have the right to bear arms, except where the people you elected might see them." ;)
But tech people have never thought to teach AI that stealing someone else’s intellectual property is not merely attributing the stolen work to the appropriate party/parties but to actually obtaining permission prior to using it. Because such things never occur to those who have no interest in anything except : more.

That didn’t slip past me just because I didn’t mention it. :rolleyes: Anyway, I’m honestly more tired of white people treating the success of other white people as their own than I am of AI. AI can stay as far as I’m concerned.
I’m almost certain AI is color blind, as was my comment. People are people are people.

Those who control AI ( for now) care only for green —or gold, perhaps

God knows people have stolen not just lives but accomplishments and inventions and art and music and science and mathematics from all who could not prevent such thefts.

Trust me: I’m not concerned about white people losing more than I am concerned about anyone else—everyone else.

I’m certain I’ve read too much sci fi and speculative fiction even if I did not burn with rage when someone dared alter my own work, childish as it was decades ago, not to see the many ways this will go very badly. For all of us.

I’m honestly not sure I understand either of those replies, maybe because my point wasn’t clear. What I said is that AI is the new cheap labor the wealthy are exploiting (and you’ve even hinted at that yourself). Instead of focusing on the exploitation itself, the lack of consent in using artists’ work or the absence of systems to pay them, most of the discussion I see in the media and online is about AI replacing jobs. In other words, people are blaming the cheap laborer instead of the people exploiting it.

It reminds me of slavery debates, where the question wasn’t “should people be exploited?” but “should slaves be free?” The moral framing was aimed at the wrong subject. Same thing here.

And just to clarify, my comment about being more tired of white people claiming the success of other white people than I am of AI wasn’t meant to start a debate; it was a tongue-in-cheek response to the thread title. But I’m happy to explain why I’m not tired of AI if that isn’t already obvious.

For example: if AI is being trained on data from real people, say, accounts receivable records or any other human-generated information, why shouldn’t those contributors be paid when that data creates value? Musicians get royalties every time their work is played or used commercially. Data is no different. Every data point ultimately traces back to human labor or creativity, and if AI depends on that human input, the humans behind it deserve compensation too.

Yet the conversation keeps being, “OMG AI is taking our jobs!” instead of recognizing that we could be setting up a fair system of exchange now, while AI is still scaling up, so that it benefits us early. But no, it’s easier to bash AI than to challenge the exploiters using it. Personally, I don’t care if it’s AI or something else entirely that ends up leveling the playing field, what matters is that we start building a structure that works for everyone, not just those profiting from other people’s work.
Bombs level playing fields. Everybody loses, even those too stupid to know it.

So who is to be compensated? The people who aggregate the data? The people who analyze it? The people who ARE the data (or whose efforts are the data)?

People in the ‘in groups’ have been stealing from and exploiting those in the out groups for millennia. This includes but is not limited to taking credit for intellectual and creative work not originating with them. It does not, imo, make it better that this will happen to people who have, traditionally, been the in groups.

You are much more optimistic than I if you believe this levels the playing field in any way that does not resemble the work of a bomb.

I am not concerned that AI is ‘taking our jobs’ so much as that AI is taking our humanity. How many people in this thread have referred to humans as, what’s the term, “meat suits”? Makes my skin crawl. But at least I’m alive and know it.

I get what you’re saying, and I don’t disagree that exploitation has always been part of the human story. But that’s exactly why I’m arguing we should design systems that make it harder, not easier, to keep repeating that pattern. I don’t see how my use of the phrase “level the playing field” warranted an equivalence to bombs. :rolleyes: I’m also not suggesting my idea is some magic wand that will erase exploitation. I mean equity through structure, like compensation models that recognize all human input in data creation. We’re talking about data here, which, if handled correctly, is traceable, much as in the music industry. Exploitation in the music industry hasn’t stopped, but without the copyright system, it’s arguable that not a single artist would be guaranteed any compensation at all.

The question of who gets compensated is valid, but it’s one we can only answer if we don’t throw our hands up just because people will always find ways to exploit others. They will, with or without solutions. But at least with a solution, someone deserving of compensation gets paid.

What I’m saying is that the general public keeps focusing on AI “taking their jobs,” which, in my view, signals to the people actually building and deploying AI that doing it the right way isn’t a priority, because no one’s really talking about that. I agree that AI risks dehumanizing us, but honestly, what doesn’t when humans are behind the wheel? That’s exactly why we should be rethinking the system around it now, instead of surrendering to the same old hierarchies, letting them roll out “improved” versions built on human labor, only to cut those same humans out of the corporate rewards once the system matures.

I guess we really can’t learn from the past after all. ¯\_(ツ)_/¯
Like I wrote earlier: I’ve perhaps read too much sci fi and speculative fiction to see anything but devastation.

That you can and do is a testament to your optimism and perhaps foresight.

Perhaps in different hands, I could see AI as being a tool to improve our lives. But it’s not and I don’t.

I’d much rather you be right than me.
 
Being able to do more with less human labour should be good for everyone.

That we have such a fucked up society that we tolerate a situation where doing more with less, means people who currently get little now getting nothing, while people who currently get lots now get even more, is bizarre and stupid - but so ingrained that it barely gets questioned.

Machines should never "take our jobs"; they should make our lives easier, not harder.

Having said which, AI seems unlikely to do much of either.
 

Well said, comrade. Now let’s seize the means of machine learning. ;)
 
AI learns from its interactions with humans. It’s being developed by industry leaders who hire teams of talented people to build and refine these systems, while investors pour money into the infrastructure behind it, all expecting a return on investment. What I don’t understand is why the only people thinking about ROI right now are the investors, when the very data and interactions that train AI come from the rest of us. I don’t need to explain the roles of companies like Nvidia, Google, Apple, or Amazon, those are obvious. What puzzles me is why everyday people, the real builders of this system, aren’t fighting for their share of the return instead of trying to stop the inevitable deployment of AI.

Edit: Black folks fought for our share of the return. Granted, we never got it. We were just handed “freedom,” something we already inherently owned, but never compensated for the damn work we did while living under the bullshit of the wealthy of that era. Fucking learn from us, dumbass!
 
AI learns from its interactions with humans.
I thought the training (learning) is normally done before the AI is released. When an LLM is interacting with a human, it has a short-term memory it can refer to, but it normally forgets those conversations afterward. (Well, Copilot seems to have chat logs you can look at.) Sometimes training involves interactions with humans, but those humans might know their feedback is being used to improve the experience.
 
The insights an AI company gathers from people’s interactions are permanently logged and shape the system’s evolution: new features, business strategies, and future iterations. While individual conversations might be fleeting, the system-wide knowledge acquisition is continuous and enduring. :rolleyes:

Edit: It’s called feedback from the end user, the same kind of input that helps improve future models, products, and profits.
 