
Artificial intelligence paradigm shift

English should be fairly easy for an LLM, since it doesn't have much word morphology (variation in word forms) in its grammar, making it close to analytic or isolating.
I understand that was the previous model Google Translate used. They were trying to use rules and understand meaning. The results were not great. The current model, as I understand it, simply tries to sound natural. Basically, with a large amount of (human-written) text in all languages, they already have translations for everything; they just need to select the correct one :)
It's better most of the time, but gets confused sometimes.
Also. Doesn’t have to be perfect. Just better and faster than a human specialist
 
When it gets confused it is not better, it's much worse.
It's getting better though.
What I am saying is that the AI tools we have now are good enough to trust. It's happening right now. The paradigm shift is now a reality.

Companies that don't shift to administration by AI will go bankrupt.

Right now this is something only understood by IT specialists who go to conferences. It will filter out. It's also not going to feel dramatic. These tools will pop up augmenting stuff we already do. It'll just become ever more efficient, pushing up productivity. Bit by bit. But seen across the entire economy, the growth will be explosive. Comparable to the introduction of the Internet. And it will fundamentally change how we do stuff.
 
I get it, ChatGPT saves a lot of time on writing nice emails and essays.
AI driving cars ... not so much.
 
Checked Google Translate. They appear to have fixed translating names like Heather, River, etc.
But it still gets confused by the Russian case system from time to time.
 
14 popular AI algorithms and their uses | InfoWorld - "Large language models have captured the news cycle, but there are many other kinds of machine learning and deep learning with many different use cases."

First explaining supervised vs. unsupervised learning. Supervised: having target values. Unsupervised: finding clusters, reducing dimensionality, etc. Reinforcement learning is a generalization of supervised learning.

Then mentioning linear regression, gradient descent, logistic regression, support vector machines, decision trees, random forests, extreme gradient boosting, k-means clustering, and principal component analysis.

Popular deep learning algorithms: convolutional neural networks, recurrent neural networks, long short-term memory, transformers, Q-learning.

Linear regression, fitting to a straight line, is an old one, and a very limited one, it must be noted. Gradient descent is how backpropagation works. Support vector machines use a linear classifier on data that was projected into more dimensions, thus permitting nonlinearity. Random forests are collections of decision trees. K-means clustering is a form of vector quantization, finding points that are at the average positions of their neighbors. Principal component analysis is fitting a multidimensional ellipsoid to one's data. One then reads off the lengths and directions of that ellipsoid's axes. I've often used that technique.
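As an illustration of that last technique, here is a minimal numpy sketch on made-up data (the variable names and numbers are mine, not from the article): eigen-decompose the covariance matrix, then read off the ellipsoid's axis directions (the eigenvectors) and axis lengths (the square roots of the eigenvalues).

```python
import numpy as np

# Made-up data: a stretched, rotated 2-D Gaussian cloud.
rng = np.random.default_rng(0)
raw = rng.normal(size=(500, 2)) * [3.0, 0.5]      # underlying axis lengths ~3 and ~0.5
angle = np.deg2rad(30)
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])
data = raw @ rot.T + [10.0, -4.0]                 # rotate and shift the cloud

# Principal component analysis: eigen-decompose the covariance of the centered data.
centered = data - data.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))

# Axis directions are the eigenvectors; axis lengths are the square roots of the eigenvalues.
for i in reversed(range(len(eigvals))):           # largest axis first
    print(f"axis length {np.sqrt(eigvals[i]):.2f} along direction {eigvecs[:, i]}")
```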

Convolutional neural network: an ANN that works on a sliding window in the data, working the same independent of position. Good for image data. Recurrent neural network: has "memory units" that receive values and give those values for the next run. Good for sequences of data.
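A minimal numpy sketch of both ideas, with made-up numbers: the convolution-style filter applies the same small window of weights at every position of the input, and the recurrent step carries a hidden state (the "memory unit") from one element of the sequence to the next.

```python
import numpy as np

# Sliding-window (convolution-style) filter: the same weights are used at every position.
signal = np.array([0., 0., 1., 2., 3., 2., 1., 0., 0.])
window = np.array([1., 0., -1.])                       # a simple edge-detecting window
filtered = np.array([signal[i:i + 3] @ window          # dot product over each 3-wide window
                     for i in range(len(signal) - 2)])
print(filtered)

# Recurrent step: the new hidden state depends on the input and on the old hidden state.
w_in, w_rec = 0.8, 0.5                                  # illustrative scalar weights
hidden = 0.0
for x in signal:
    hidden = np.tanh(w_in * x + w_rec * hidden)
print(hidden)
```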
 
All languages are roughly equal in complexity, but they differ in which areas of the system the linguistic complexity lies. The writing system can be an additional complicating factor. Language processing is particularly good for English, not because of its word structure, but because vastly more research and development has been done on the computational processing of English than any other language. And there are many more large collections of English online than for any other language. Language processing for Russian is also very good, because large numbers of very good computational linguists have worked on it in Russia and countries that deal with Russia. However, a language like Turkish poses more serious difficulties, partly because there simply aren't large numbers of computational linguists developing solutions for it. So the ability of Google Translate to deliver high quality translation varies.

Statistical approaches to translation can be very good, because they don't rely totally on handcrafted grammars, but translation is always about matching up pairs of languages. There are a lot of large collections of English text that have been manually tagged and available for a few decades now. So translation programs between English and other languages can be very good because of the ability to use pre-tagged datasets to train them on. The variable factor is the quality of tools developed to process the other language. I have found Google Translate to be more or less useful depending on which language I am trying to get translations for, but it is a very good general translation tool for those of us who are fluent in English and other languages for which large bases of tagged texts exist. If LLMs can be used effectively to generate conceptually tagged texts in languages that have less development infrastructure available, that has the potential to really improve language translation programs.
 
Classifying artificial-intelligence technologies | Internet Infidels Discussion Board -- an earlier thread here

I quote a source: "Several data scientists I talked with said that the only sure way to find the very best algorithm is to try all of them."

Problem domains: perception, reasoning, knowledge, planning, communication

Tasks:
  • Supervised: classification (what category), regression (what numerical value)
  • Unsupervised: dimension reduction, clustering

It must be noted that top-down AI has had some successes: translation of high-level programming languages (compilation), computer algebra, and expert systems. In the first two, one wants guaranteed reliability, and explicitly stating inference rules is good for that.

That first one includes optimization, trying to find the fastest and/or the smallest code that will do something.
 
5 Tribes of Machine Learning – BMC Software | Blogs
In Pedro Domingos’ book, The Master Algorithm: How The Quest for the Ultimate Learning Machine Will Remake Our World, he categorizes the types of machine learning algorithms into five classes, which he calls the tribes of machine learning. Each group supports a set of principles, and, from them, stem different machine learning models.
The "master algorithm" is the algorithm that they will converge on, the algorithm that will make  Artificial general intelligence a.k.a. strong AI, full AI, human-level AI, and general intelligent action.

Symbolists - explicit inference rules, often top-down. The dominant paradigm of AI from its 1950's beginnings to the 1980's. No longer as dominant because of the success of bottom-up models in many problems.

Consider these very simplified inference rules for driving through traffic lights:
  • If red, then stop
  • If yellow, then if relatively far, then stop, else go
  • If green, then go
Notice the additional test for a yellow light. That threshold may be determined in bottom-up fashion, creating a hybrid approach.
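A minimal Python sketch of those rules (the function name and the threshold value are hypothetical), with the yellow-light distance cutoff left as a parameter that a bottom-up method could tune from data:

```python
def traffic_light_action(light: str, distance_m: float, stop_threshold_m: float = 30.0) -> str:
    """Return 'stop' or 'go' for a very simplified traffic-light situation.

    stop_threshold_m is the cutoff for a yellow light: if the car is at least
    this far from the intersection it stops, otherwise it goes through.
    """
    if light == "red":
        return "stop"
    if light == "yellow":
        return "stop" if distance_m >= stop_threshold_m else "go"
    if light == "green":
        return "go"
    raise ValueError(f"unknown light colour: {light}")

print(traffic_light_action("yellow", distance_m=50.0))   # stop
print(traffic_light_action("yellow", distance_m=10.0))   # go
```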

-- Decision trees, Random decision forests, Production rule systems, Inductive logic programming

All the rest are bottom-up, as far as I can tell.

Connectionists (Neuroscience) - modeled on how nervous systems work, including brains. The connections are between parts that do simple things. A big problem is the difficulty of interpreting its parameter values.

-- Artificial neural nets, Reinforcement learning, Deep learning

Bayesians (Statisticians) - do probabilistic modeling, though often combined with other types of models: symbolic, connectionist.

-- Hidden Markov chains, Graphical models, Causal inference
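As an illustration of the probabilistic style, here is a minimal forward pass through a tiny hidden Markov model (all numbers are made up): two hidden weather states, and whether an umbrella is observed each day.

```python
import numpy as np

states = ["rainy", "sunny"]
trans = np.array([[0.7, 0.3],      # P(next state | rainy)
                  [0.4, 0.6]])     # P(next state | sunny)
emit = np.array([[0.9, 0.1],       # P(umbrella, no umbrella | rainy)
                 [0.2, 0.8]])      # P(umbrella, no umbrella | sunny)
start = np.array([0.5, 0.5])

observations = [0, 0, 1]           # umbrella, umbrella, no umbrella

# alpha[i] = P(observations so far, current hidden state = i)
alpha = start * emit[:, observations[0]]
for obs in observations[1:]:
    alpha = (alpha @ trans) * emit[:, obs]

print("P(observation sequence) =", alpha.sum())
print("P(final state | observations) =", alpha / alpha.sum())
```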

Evolutionaries (Biologists) - "Looking at machine learning from a biologist stance, they are concerned with the evolution of an AI. They’re curious with how it grows, mutates…with how it becomes." and "Evolutionaries use genetic algorithms, and evolutionary programming. A common application of Evolutionary AI is on learning tasks."

Analogizers (Psychologists) - "Analogizers are usually the storytellers. They can create classes of entities. If an input, old or new, is identified as part of one of those classes, then Analogizers believe they can predict the outcome of the input as being like the outcome of that class." They like to use clustering algorithms.

Each approach has its usefulness. Combinations of these Machine Learning methods might be the answer to an AGI. Self-driving cars might learn to drive safely on the roads through an Evolutionary method, but they will use Connectionist methods to give the cars’ sensors sight. The car might get a huge boost in its user-driver interactions by utilizing the Analogizer’s way of classifying its drivers into types, Aggressive, Defensive, or Passive. Finally, there are rules to the road such as stopping at stop signs and staying between the lines, where methods from the traditional Symbolist tribe prove beneficial.
 
4 Types of Artificial Intelligence – BMC Software | Blogs -- levels of AI performance, roughly analogous to Abraham Maslow's hierarchy of needs.

That hierarchy:
  • Self-fulfillment needs:
    • Self-actualisation: achieving one's full potential, including creative activities
  • Psychological needs
    • Esteem needs: prestige, feeling of accomplishment
    • Belongingness & love needs: intimate relationships, friends
  • Basic needs
    • Safety needs: security, safety
    • Physiological needs: food, water, warmth, rest

Blogger Jonathan Johnson's sequence of types of AI:
  1. Reactive Machines
  2. Limited Memory
  3. Theory of Mind
  4. Self Aware
The first one is memoryless.

"Limited memory types refer to an A.I.’s ability to store previous data and/or predictions, using that data to make better predictions." II don't know why the blogger called that one's memory "limited", because one can imagine AI systems that collect large archives of data and manage them.

Theory of Mind - being able to infer what we are thinking and feeling.

Self Aware - human-scale, with consciousness.

One can go further in classifying the first two possibilities.
  • Memoryless system: output = function of input
  • Finite state machine/automaton: has an internal state: (output, new internal state) = function of (input, old internal state)
  • Pushdown automaton: has a pushdown stack where one can read and write from the top, add values to there, and remove values from there. Can implement context-free grammars, sets of production rules independent of context.
  • Turing machine: has a tape memory that can be moved back and forth. The maximum of feasible computability.
Strictly speaking, the third and fourth ones are defined for infinite memory, but the second one can act like the third and fourth ones to within resource limitations.
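A minimal Python sketch of the first two levels (the examples are hypothetical): a memoryless function, and a finite state machine whose output depends on both the input and an internal state.

```python
# Memoryless system: output is a pure function of the current input.
def memoryless(x: int) -> int:
    return x * x

# Finite state machine: (output, new state) = function of (input, old state).
# This one counts occurrences of the substring "ab" in a stream of characters.
def fsm_step(state: str, ch: str) -> tuple[str, bool]:
    if state == "start":
        return ("saw_a", False) if ch == "a" else ("start", False)
    if state == "saw_a":
        if ch == "b":
            return ("start", True)                  # report a match and reset
        return ("saw_a", False) if ch == "a" else ("start", False)
    raise ValueError(state)

state, matches = "start", 0
for ch in "abcaabab":
    state, hit = fsm_step(state, ch)
    matches += hit
print(matches)   # 3 occurrences of "ab"
```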
 
Language processing for Russian is also very good, because large numbers of very good computational linguists have worked on it in Russia and countries that deal with Russia.
How does syntax tagging work for Russian? Does it involve tagging words for noun case, verb aspect, etc.?
 
I get it, ChatGPT saves a lot of time on writing nice emails and essays.
AI driving cars ... not so much.

There's already AI driving cars. Saab Scania has had self-driving cars for five years now. They’re just undercover about it. If there's an accident they don’t want it known. I have inside information.

Industry wants to avoid the Frankenfood debacle

It's much more than ChatGPT. Much much more.
 
Language processing for Russian is also very good, because large numbers of very good computational linguists have worked on it in Russia and countries that deal with Russia.
How does syntax tagging work for Russian? Does it involve tagging words for noun case, verb aspect, etc.?

I don't know, because I haven't looked at any of their schemas. However, it would involve separating stems from suffixes and prefixes. Russia has one of the oldest linguistic traditions in Europe and was actually the birthplace of modern phonological theories. So they've been working on linguistic analysis of their language a very long time. I am familiar with a lot of the Soviet linguistic literature, which was largely inaccessible to Western linguists, and I even have a modest library of Russian linguistic works.
 
My best friend controls a bunch of high profile European alcohol brands. It's a bit complicated. But you could say that his job is to make sure already fancy or famous brands stay fancy and famous. It's not just marketing, since he controls how these brands are sold completely. Anyway, it's a high profile job with lots and lots of money floating about. These brands are worth billions. Many many billions.

He's just fired his copywriting staff. ChatGPT-4 does a better job. These are high profile jobs in the industry.
 
Sounds like an AI disaster on the horizon.

......
The November 16th issue of Nature has an article about ChatGPT: ChatGPT has entered the classroom: how LLMs could transform education. It reports that the latest version (GPT-4) can only answer one third of questions correctly in physical chemistry, physics, and calculus. Nevertheless, the article promotes the idea that ChatGPT should be brought into the classroom!
.......

 
Not quite a disaster, but people are stupid when it comes to the threat that AI really poses for humanity. The problem is that all LLMs do is look at relationships between symbols. They don't have any real model of reality in which to ground concepts, but they can discern patterns in groups of word tokens and summarize the content of text in interesting ways. If the textbase contains a lot of contradictory information, the technology isn't very good at sorting that out and distinguishing good from bad information. The summaries it regurgitates are usually going to be contaminated by the inaccuracies, mistakes, and biases of the authors of the material it was trained on. Human beings develop filters from experiences of the world, and those filters help them to prune out information they consider untrustworthy over time. LLMs are not agents that interact with reality and learn from experiences associated with those interactions. Interactions with humans can shape their responses in ways that might prove useful, but those interactions are ephemeral and not part of a process of mental development.

Students who rely on LLMs to do their homework for them are going to discover that one needs to understand when and where those programs are spewing out inaccuracies. The inaccuracies will expose them to some very disappointing evaluations from their teachers, who can spot those inaccuracies and probably figure out that the student is cheating.
 
Even worse, experience has demonstrated that AI will happily make up its own "facts". AI does not understand the concepts of truth, accuracy, or "I don't know."
 