• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Artificial intelligence paradigm shift

DrZoidberg

Contributor
Joined
Nov 28, 2007
Messages
11,202
Location
Copenhagen
Basic Beliefs
Atheist
I work in IT. I've spent many, many years in this business, since before the Internet. Yesterday I went to an IT conference. Half the talks were on AI, and not how-IT-will-change-the-future style talks. These were practical applications, about stuff that is already working. A year ago we had loads of IT AI start-ups with big dreams. That's normal, and almost all of them fail. But what's different now is that most of them haven't failed, and are already showing a profit. That's an extreme ROI for AI IT investments. I think the AI revolution will come much faster and be a bigger paradigm shift than anyone is prepared for. I don't think anyone knows where this is going, other than that it's going to be fast and change everything. And it's happening now. I recommend adopting a curious and embracing attitude, because if you don't, you will be fucked!

Here's some links that may be of general usefulness to people here.

Here's a super cool site on how to use ChatGPT to full effect. ChatGPT4 is mind-blowingly good. People just haven't realised how powerful and useful it is yet. When that sinks in, it'll go fast.

https://learnprompting.org/

Here's another useful program. It sets up a local language model (roughly comparable to ChatGPT3.5, the free version) that doesn't communicate with the outside world. The good thing about this is that you can tell it all your company secrets. You can basically give it the basic info on how your company works and it'll optimise all the processes for you, in great detail and very comprehensively. Better than any human could. It's not as good as ChatGPT4. It's way worse. But at least you aren't leaking your company secrets to the world. If you're nervous about there being a backdoor, you can put it on a computer without Internet access. When you delete it, anything you've told it is gone. All totally FREE!!!

https://github.com/nomic-ai/gpt4all
 
Did your employer use it to optimise their processes? How did they figure out that the AI's solution was correct?
 
I work in IT. I've spent many, many years in this business, since before the Internet. Yesterday I went to an IT conference. Half the talks were on AI, and not how-IT-will-change-the-future style talks. These were practical applications, about stuff that is already working. A year ago we had loads of IT AI start-ups with big dreams. That's normal, and almost all of them fail. But what's different now is that most of them haven't failed, and are already showing a profit. That's an extreme ROI for AI IT investments. I think the AI revolution will come much faster and be a bigger paradigm shift than anyone is prepared for.
I think the one thing that is worrisome will be the Dark AI. Yes, AI helps... indiscriminately!
 
Did your employer use it to optimise their processes? How did they figure out that the AI's solution was correct?

Here's a concrete example I have used: describing a user interface to ChatGPT and getting a comprehensive list of ways to test the system to verify that it works, including corner cases and security testing. We already had a guy writing test cases for it. By using ChatGPT we found a bunch more good test scenarios our hired consultant hadn't thought of.

Another one I used was to have ChatGPT create a packing list for a campsite. Awesome. Took me all of ten seconds to make. Turned out to be really useful.

An example that wasn't mine: a guy at another conference used ChatGPT to create talking topics for various breakout groups. Everybody at the conference was highly specialised, and the topics were extremely well-informed and useful. Some topics were dumb, but we just skipped those. This was a year ago, with ChatGPT3. The difference between ChatGPT3 and 4 is very big.

But these are just examples with ChatGPT. There's a very long list of AI implementations. Stuff you perhaps don't think of.

Facebook now barely has any people administering it anymore. It's now all AI. Have you noticed the shift? Didn't think so. Amazon is well on its way here.

Grammarly, the grammar-checking software. It's AI.

There's a whole host of e-learning platforms, all in use today, that use AI-generated teaching material.

Alexa and Google Assistant are AI.

It's also good to keep terminology correct. What is exploding now is "narrow AI": AI designed to solve specific tasks. What people on telly and social media influencers usually talk about is "general AI": AI we can let loose in the world that will solve any problem, as well as figure out for itself which problems to solve. We're nowhere near getting a general AI.

Another fun factoid is that the models used in the current market-leading AIs were all designed between 1960 and 1980. If you know how shit the computers were back then, that's remarkable. These were maths nerds sitting with pen and paper, writing software for computers that would hypothetically one day be built (which has now happened), having to invent their own programming languages because the ones that existed back then weren't good enough. The intelligence, and sharpness of mind, to do this is bizarre to me. It's amazing.
 
I work in IT. I've spent many, many years in this business, since before the Internet. Yesterday I went to an IT conference. Half the talks were on AI, and not how-IT-will-change-the-future style talks. These were practical applications, about stuff that is already working. A year ago we had loads of IT AI start-ups with big dreams. That's normal, and almost all of them fail. But what's different now is that most of them haven't failed, and are already showing a profit. That's an extreme ROI for AI IT investments. I think the AI revolution will come much faster and be a bigger paradigm shift than anyone is prepared for.
I think the one thing that is worrisome will be the Dark AI. Yes, AI helps... indiscriminately!

Bah. There's no difference between combating these and combating computer viruses. In computer technology we always start with the assumption that any technology will have a malicious use, and we engineer that into the systems we build.

The funny thing is that lots of companies identify more problems than they put money into solving. What this means is that the problem isn't bad enough to warrant plugging known, wide-open, gaping holes. And it's not that they don't care. It's that they are a business. Fixing this costs money. And if protecting ourselves costs more than we'd lose from criminals stealing stuff, then the rational choice is to let criminals win sometimes. Which is what is happening now. So it's not that we can't protect ourselves. We just don't think it's worth it.
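The trade-off described above, fixing a hole only when protection costs less than the expected loss, can be sketched in a few lines. All the probabilities and figures below are invented purely for illustration:

```python
# Sketch of the risk-based reasoning above: patch a hole only when the
# mitigation costs less than the expected loss it prevents.

def should_fix(annual_breach_probability: float,
               loss_per_breach: float,
               mitigation_cost: float) -> bool:
    """Return True when fixing is the rational business choice."""
    expected_annual_loss = annual_breach_probability * loss_per_breach
    return mitigation_cost < expected_annual_loss

# A likely breach that is cheap to fix: worth fixing.
print(should_fix(0.30, 200_000, 20_000))   # 0.30 * 200k = 60k > 20k -> True
# An unlikely breach that is expensive to fix: rationally left open.
print(should_fix(0.01, 200_000, 50_000))   # 0.01 * 200k = 2k < 50k -> False
```

Real risk models are more elaborate, but the core comparison is exactly this one.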

Basically, I think dark AI is a non-issue, a word invented by some journalist to scare people with clickbait. I'd relax if I were you. Smarter people than you or I are already on top of this.
 
Here's a concrete example I have used: describing a user interface to ChatGPT and getting a comprehensive list of ways to test the system to verify that it works, including corner cases and security testing. We already had a guy writing test cases for it. By using ChatGPT we found a bunch more good test scenarios our hired consultant hadn't thought of.

Another one I used was to have ChatGPT create a packing list for a campsite. Awesome. Took me all of ten seconds to make. Turned out to be really useful.

An example that wasn't mine: a guy at another conference used ChatGPT to create talking topics for various breakout groups. Everybody at the conference was highly specialised, and the topics were extremely well-informed and useful. Some topics were dumb, but we just skipped those. This was a year ago, with ChatGPT3. The difference between ChatGPT3 and 4 is very big.

But these are just examples with ChatGPT. There's a very long list of AI implementations. Stuff you perhaps don't think of.
Yeah LLMs seem to be useful for quickly generating suggestions which can then be accepted or discarded by a human user. It's a class of ineffable problems where it is hard for you to describe the algorithm to get a solution, but you know a good solution when you see it.
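The generate-then-review pattern described here can be sketched in a few lines. The "LLM" below is just a stub list of suggestions, and the reviewer is a simple predicate standing in for human judgment; both are invented for illustration:

```python
# Minimal sketch of "generate suggestions, then accept or discard them":
# cheap to propose, and a reviewer only keeps what looks good.

from typing import Callable, Iterable

def generate_and_filter(generate: Callable[[], Iterable[str]],
                        accept: Callable[[str], bool]) -> list[str]:
    """Keep only the suggestions the reviewer accepts."""
    return [s for s in generate() if accept(s)]

# Stub "LLM" proposing test scenarios; keep only those about input handling.
suggestions = lambda: ["empty input", "unicode input",
                       "tell a joke", "SQL injection in input"]
kept = generate_and_filter(suggestions, lambda s: "input" in s)
print(kept)  # ['empty input', 'unicode input', 'SQL injection in input']
```

The point is that recognising a good suggestion is far cheaper than producing one, which is why this division of labour works.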
 
Here's a concrete example I have used: describing a user interface to ChatGPT and getting a comprehensive list of ways to test the system to verify that it works, including corner cases and security testing. We already had a guy writing test cases for it. By using ChatGPT we found a bunch more good test scenarios our hired consultant hadn't thought of.

Another one I used was to have ChatGPT create a packing list for a campsite. Awesome. Took me all of ten seconds to make. Turned out to be really useful.

An example that wasn't mine: a guy at another conference used ChatGPT to create talking topics for various breakout groups. Everybody at the conference was highly specialised, and the topics were extremely well-informed and useful. Some topics were dumb, but we just skipped those. This was a year ago, with ChatGPT3. The difference between ChatGPT3 and 4 is very big.

But these are just examples with ChatGPT. There's a very long list of AI implementations. Stuff you perhaps don't think of.
Yeah LLMs seem to be useful for quickly generating suggestions which can then be accepted or discarded by a human user. It's a class of ineffable problems where it is hard for you to describe the algorithm to get a solution, but you know a good solution when you see it.

Yes. This is still a revolution and a mind-blowingly powerful paradigm shift. The amount of work needed to make plans, checklists, sketches, diagrams, etc. is drastically reduced.

I don't think you fully appreciate the magnitude of what is happening. Humans aren't being replaced. It's more like humans are being given super powers. The productivity of a single human will be multiplied many times over. You don't think that's a big deal?

This is directly analogous to the Spinning Jenny and its impact on the weaving industry.

Programmers will be replaced by IT prompters. Artists by creative prompters. Writers by writing prompters. Administrators by administration prompters. The most valuable skill of the future will be prompting. A skill almost nobody today really masters, and it's unclear even how an education in it should be formulated. It's still a field in flux. It's going to be interesting times ahead.
 
Facebook now barely has any people administering it anymore. It's now all AI. Have you noticed the shift?
The existence of rulings that are both idiotic and impossible to appeal is a direct consequence of this, and the closure of my Facebook account was the direct consequence of that.

So yes.

In fact, yes to the power of fuck yes.
 
To say that "Facebook now barely has any people administering it" is pure hyperbole. Jobs have changed because of the adoption of AI to moderate content, but that doesn't affect the size of the administration staff or budget. The reduction in staff and the automation of moderation was a trend that started before the introduction of LLM technology. Similarly, Grammarly was always dependent on language-processing techniques, but it has only recently introduced an LLM component.

Let's not forget that AI is a branch of computer science that dates back to the 1950s, and its goal has always been to simulate intelligent behavior. LLMs represent an advance over previous techniques involving neural networks, so we are only seeing an incremental advance in that technology. The results have been impressive in terms of conversational interactions with search engines, but it is really useful in only a limited number of human-computer interactions. We are still very far away from creating machines that can think like human beings, even if the LLM chatbots give the impression that they can in an online discussion. As DrZoidberg has been pointing out, they can help a lot with brainstorming exercises and organizing activities. The danger is in relying on them too much for tasks that require intelligent judgment. Facebook moderation has generated a lot of controversy since it has become so automated, because a lot of that moderation has clearly not been done by intelligent beings. Trust me. Whether you are satisfied with IIDB moderator decisions or not, you don't want an LLM to take over here.
 
Facebook moderation has generated a lot of controversy since it has become so automated, because a lot of that moderation has clearly not been done by intelligent beings.
What a gift you have, for understatement!
 
Here's another useful program. It sets up a local language model (roughly comparable to ChatGPT3.5, the free version) that doesn't communicate with the outside world. The good thing about this is that you can tell it all your company secrets. You can basically give it the basic info on how your company works and it'll optimise all the processes for you, in great detail and very comprehensively. Better than any human could. It's not as good as ChatGPT4. It's way worse. But at least you aren't leaking your company secrets to the world. If you're nervous about there being a backdoor, you can put it on a computer without Internet access. When you delete it, anything you've told it is gone. All totally FREE!!!

https://github.com/nomic-ai/gpt4all
How do you know it is not connected to the outside world? The only safe computer is one that is turned off with any internal batteries removed.
So many problems are caused by the assumption that it is not connected to the outside world.
 

I don't think you fully appreciate the magnitude of what is happening. Humans aren't being replaced. It's more like humans are being given super powers. The productivity of a single human will be multiplied many times over. You don't think that's a big deal?
Can it give us super-ethics and empathy instead of super powers?
We do not need more power. What we need is more wisdom, and AI cannot give us that.

I am still looking for more natural intelligence rather than AI.
 
Here's another useful program. It sets up a local language model (roughly comparable to ChatGPT3.5, the free version) that doesn't communicate with the outside world. The good thing about this is that you can tell it all your company secrets. You can basically give it the basic info on how your company works and it'll optimise all the processes for you, in great detail and very comprehensively. Better than any human could. It's not as good as ChatGPT4. It's way worse. But at least you aren't leaking your company secrets to the world. If you're nervous about there being a backdoor, you can put it on a computer without Internet access. When you delete it, anything you've told it is gone. All totally FREE!!!

https://github.com/nomic-ai/gpt4all
How do you know it is not connected to the outside world? The only safe computer is one that is turned off with any internal batteries removed.
So many problems are caused by the assumption that it is not connected to the outside world.
Back in the day, we would fill the Ethernet and USB ports with epoxy resin.

Of course, these days you'd also need a Faraday cage to prevent Wi-Fi or Bluetooth connectivity.
 
Back in the late 1970s, the BBC and James Burke created a TV series named "Connections". It was a history of science and invention. One episode about information started with a portrayal of an English court in the Middle Ages. A young man had just turned 21, and he was suing his uncle in order to claim his inheritance from his father's estate. The point of the scene was that, in a world of nearly universal illiteracy, written documents had little value. Before a document could be taken as evidence, a witness had to attest to what it said, and what's more, who wrote it and where it came from. The testimony of a trusted person held real value.

For a short part of human history, a photograph was considered to be solid evidence. For an even shorter time, video was given the same credence. It wasn't long before technology developed which could manufacture convincing photographs that purported to record real events. Suddenly we were back in the Middle Ages. If a photograph, a sound recording, or a video is introduced as evidence, a living person has to attest to its authenticity and its provenance. Police departments across America still rely on Polaroid cameras to photograph crime scenes because that format is still very difficult to alter.

AI has many uses, but what it has done is introduce another element of doubt to stored information. There is a recent case of an attorney who used ChatGPT to write a legal brief. It looked like most legal briefs, complete with case law and citations of specific cases. When the opposing attorneys read it and prepared a counter-brief, they quickly realized that none of the cited cases actually existed. ChatGPT had simply imitated the format of citations and filled in the blanks with words that sounded right. It was really no different than if the lawyer had handed the task to a pathological liar and then failed to check his work.
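The obvious safeguard in this story is mechanical: check every citation an LLM emits against an authoritative index before filing. Here is a toy sketch; the two-entry "index" is invented for illustration, where a real system would query a case-law database:

```python
# Flag any citation that cannot be found in an authoritative index.
# KNOWN_CASES is a toy stand-in for a real case-law database.

KNOWN_CASES = {"Brown v. Board of Education", "Marbury v. Madison"}

def unverified_citations(citations: list[str]) -> list[str]:
    """Return the citations that do not appear in the index."""
    return [c for c in citations if c not in KNOWN_CASES]

brief = ["Marbury v. Madison",
         "Varghese v. China Southern Airlines"]  # the second is fabricated
print(unverified_citations(brief))  # ['Varghese v. China Southern Airlines']
```

Had the attorney run even this trivial a check, the fabricated citations would have surfaced before the brief was filed.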
 
Generative AI is a powerful bullshit factory, for sure.

People at my work want to figure out how to train an LLM to tag unstructured data, because people are really slow at it. But if you hand that kind of job over to an AI, it's going to generate at least a little bit of bullshit, and the people looking at the results won't be able to easily audit them to figure out exactly where the AI gave bad answers.
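One standard answer to this auditing problem is spot-checking: re-verify a random sample of AI-assigned tags to estimate the overall error rate without re-checking everything. A minimal sketch with toy data follows; the tagging functions are invented for illustration:

```python
# Estimate an AI tagger's error rate by auditing a random sample of records
# against ground truth, instead of re-checking every record.

import random

def estimate_error_rate(records, ai_tag, true_tag, sample_size, seed=0):
    """Spot-check a random sample of AI tags against ground truth."""
    rng = random.Random(seed)
    sample = rng.sample(records, min(sample_size, len(records)))
    errors = sum(1 for r in sample if ai_tag(r) != true_tag(r))
    return errors / len(sample)

records = list(range(100))
# Toy "AI" that mislabels every multiple of 10 (true error rate: 10%).
ai_tag = lambda r: "odd" if r % 10 == 0 else ("even" if r % 2 == 0 else "odd")
true_tag = lambda r: "even" if r % 2 == 0 else "odd"

# Auditing the full population recovers the exact error rate.
print(estimate_error_rate(records, ai_tag, true_tag, sample_size=100))  # 0.1
```

A smaller sample gives only an estimate, but it tells you roughly where the tagger stands and which records to look at first.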
 
Facebook now barely has any people administering it anymore. It's now all AI. Have you noticed the shift?
The existence of rulings that are both idiotic and impossible to appeal is a direct consequence of this, and the closure of my Facebook account was the direct consequence of that.

So yes.

In fact, yes to the power of fuck yes.
Facebook is a business, and they are now making more money.
 
To say that "Facebook now barely has any people administering it" is pure hyperbole. Jobs have changed because of the adoption of AI to moderate content, but that doesn't affect the size of the administration staff or budget. The reduction in staff and the automation of moderation was a trend that started before the introduction of LLM technology. Similarly, Grammarly was always dependent on language-processing techniques, but it has only recently introduced an LLM component.

Let's not forget that AI is a branch of computer science that dates back to the 1950s, and its goal has always been to simulate intelligent behavior. LLMs represent an advance over previous techniques involving neural networks, so we are only seeing an incremental advance in that technology. The results have been impressive in terms of conversational interactions with search engines, but it is really useful in only a limited number of human-computer interactions. We are still very far away from creating machines that can think like human beings, even if the LLM chatbots give the impression that they can in an online discussion. As DrZoidberg has been pointing out, they can help a lot with brainstorming exercises and organizing activities. The danger is in relying on them too much for tasks that require intelligent judgment. Facebook moderation has generated a lot of controversy since it has become so automated, because a lot of that moderation has clearly not been done by intelligent beings. Trust me. Whether you are satisfied with IIDB moderator decisions or not, you don't want an LLM to take over here.
This is what is changing. You are describing the AI of a year ago. We're at the point where AI can be assumed to outperform human intelligence in narrowly defined domains. The USP (unique selling point) of humans now is as generalists.
 
Generative AI is a powerful bullshit factory, for sure.

People at my work want to figure out how to train an LLM to tag unstructured data, because people are really slow at it. But if you hand that kind of job over to an AI, it's going to generate at least a little bit of bullshit, and the people looking at the results won't be able to easily audit them to figure out exactly where the AI gave bad answers.

I've had some experience with tagging unstructured data, and there are a lot of pitfalls inherent in it. LLMs are very good at building associative networks, but the task of tagging requires a more refined knowledge of syntactic phrasing. These programs are not very good at understanding language in terms of grammatical constructions, but they are good at resolving ambiguities in word usage. The problem with unstructured data is that it tends to be very noisy, not totally grammatical, but human readers are able to puzzle out meanings even when the data contains a lot of misspellings and unusual constructions. That has to do with the fact that human beings have real-world experience of the subject matter that speakers are talking about. LLMs just look at patterns of words and then reconstruct content, condensing and summarizing it. Tagging words and phrases requires a much finer-grained analysis of English phrasing.

There are programs out there that attempt to tag unstructured text, but they miss words and phrases that should be tagged more often than they find and tag them accurately. And there is also a certain amount of inaccurate tagging that they produce. We did manage to build a prototype of a system for tagging sensitive data corresponding to a set of policies. There is actually a tremendous market for programs that could do the tagging, if one could build a system that was reasonably accurate. Before I retired, we managed to patent a technique for building such a tagger, but it did not get much beyond the proof-of-concept stage. Anyway, I haven't looked at how far the technology has advanced in the past 10 years, but it would be interesting to see if an LLM module could have produced something with our technique, which involved an analyst using a controlled version of English to paraphrase the policies that defined the sensitive information to be searched for.
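A toy sketch of the policy-driven tagging idea, with each "policy" reduced to a regular expression. The patterns below are invented illustrations; a real system needs the much finer-grained linguistic analysis described above, since regexes catch only rigidly formatted data:

```python
# Tag spans of free text that match a set of policy patterns.
# Each policy maps a label to a pattern describing sensitive data.

import re

POLICIES = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def tag(text: str) -> list[tuple[str, str]]:
    """Return (label, matched_span) pairs for every policy hit."""
    hits = []
    for label, pattern in POLICIES.items():
        hits.extend((label, m.group()) for m in pattern.finditer(text))
    return hits

print(tag("Contact jo@example.com, SSN 123-45-6789."))
# [('SSN', '123-45-6789'), ('EMAIL', 'jo@example.com')]
```

The hard part the thread is discussing is everything regexes cannot do: sensitive information expressed in ordinary noisy prose, with no fixed surface format to anchor on.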
 
This is what is changing. You are describing the AI of a year ago. We're at the point where AI can be assumed to outperform human intelligence in narrowly defined domains. The USP (unique selling point) of humans now is as generalists.

Well, an abacus can outperform intelligent humans at calculating sums. Outperforming humans at tasks that humans want to accomplish is not something new to technology. It's actually what motivates us to build machines. That doesn't mean that the machines themselves are intelligent in a human or animal sense.
 
This is what is changing. You are describing the AI of a year ago. We're at the point where AI can be assumed to outperform human intelligence in narrowly defined domains. The USP (unique selling point) of humans now is as generalists.

Well, an abacus can outperform intelligent humans at calculating sums. Outperforming humans at tasks that humans want to accomplish is not something new to technology. It's actually what motivates us to build machines. That doesn't mean that the machines themselves are intelligent in a human or animal sense.

I don't understand why you said that, or why you think it's relevant to this discussion.

Artificial intelligence is just the catchy name for narrowly defined machine learning. Yes, I agree that it's not actual intelligence. I never said it was. Just like an orgasmatron has most likely never given anyone an orgasm; it's a name that has caught on.
 