
Historic Achievement: Microsoft researchers reach human parity in conversational speech recognition

You expect me to read that paper?

I don't - the paper was linked for anyone who would like rigorous information about what MS is doing in the field.
You posted it in your reply to my post. You clearly don't understand the issues here, and it's very annoying when people who don't understand start throwing around links without, it seems, even having read them themselves.
 

You said you could beat a computer doing conversational speech recognition by recording a video and listening to it over and over again, and I'm the one who doesn't understand? You must have some riveting conversations...

You think that applying ML methods which traditionally have not been used in speech recognition, to improve accuracy over previous methods, is an incremental improvement - I'm guessing in the same sense as a chess-playing robot which is able to process X% more plies - and that the problem they're trying to solve is an 'unfair comparison' because you don't like the problem. And I'm the one who doesn't understand?

Professional transcriptionists, whom MS compared their technology to, don't get infinite time to transcribe conversational speech. Sometimes they have to send the transcript immediately after the conversation has ended, and they don't have the privilege of recording it and listening to it repeatedly (for up to 40 years, even). Is that unfair? Sure - but it's the problem being solved. Time is not the irrelevant factor you seem to think it is.
 
Real time human transcribing has very little to do with voice recognition.
 
The primary reason for recording and then transcribing from that recording is so that the person doing the transcribing can stop the tape and have time to write down what they heard, not so they can replay it for understanding. Many of the errors in real-time transcription from shorthand notes come from translating the shorthand into English, even for the person who originally took the notes. Translating shorthand notes taken by someone else can be a daunting task.
 

My comment on the paper is that anyone with a sound statistics background who maintains a proficiency with APL would be in heaven in this area. All one would need to add is adaptive logic, for which Bayes might be a start. What is really needed is more Big Blue-ing - machines with large processing power and memory access - and everything will be revealed. IOW, brute force.
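To make the "Bayes as a start for adaptive logic" idea concrete, here is a toy sketch - in Python rather than APL, with invented numbers, and nothing from the MS paper - of Bayesian updating between two competing word hypotheses:

```python
# Toy Bayesian update: revise belief that a speaker said word A vs. word B
# as acoustic evidence arrives. All numbers are purely illustrative.

def bayes_update(prior, likelihood_a, likelihood_b):
    """Return posterior P(A | evidence) given prior P(A) and per-hypothesis likelihoods."""
    numerator = likelihood_a * prior
    return numerator / (numerator + likelihood_b * (1 - prior))

belief = 0.5  # start indifferent between the two words
# (likelihood under A, likelihood under B) for three successive acoustic frames
for l_a, l_b in [(0.8, 0.3), (0.6, 0.5), (0.9, 0.2)]:
    belief = bayes_update(belief, l_a, l_b)

print(round(belief, 3))  # -> 0.935
```

Real recognizers score thousands of hypotheses, and far more cleverly, but the updating step has the same shape: more data, more updates.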


I took a flyer at SR back in the day (1970-85). Then I went to things like LISP and b-trees when doing search and information work (knowledge engineering). Finally came back to lists and humans in the end. Wound up with GOMS tasks, since I was evaluating airplane operator systems.

I'm guessing we'll come back to human aided intelligence systems in the near future ... again.
 

My first impression is that I've seen all this before. Then I remembered. We did what you suggest back in the day (ought-70) in EEG analysis. Trained observers would record and score EEG. We compared those outcomes with frequency based tools and some came to believe humans were better. They weren't. Machines began having inference capabilities and soon made humans, with their human biases and beliefs confounding measures, obsolete. I think the same will happen here.

If you want an impartial tool you need to remove as much noise as possible. Believe me, humans are astoundingly noisy instruments.
 
They ARE brute-forcing everything now. Results are good, but it seems deep neural nets have a certain ceiling which does not let them reach human level. Yes, they can create superficial metrics like the one discussed here, where computers are better than humans, but in reality they are still way short of reliably replacing humans. My feeling is that brute force (deep NNs) will work eventually, but you need to go extremely deep, maybe 100 or 1000 times deeper than currently possible. Meanwhile, I think conventional approaches (using biological neural nets, also known as the human brain) can in principle solve particular AI problems better.
 
I haven't been in the AI Expert systems world for about 20 years now. Back in the day I could beat AI programmers in both price and utility with well chosen and linked lists. That day has passed.

The real problem is that the brain isn't an AI thingie or an expert-systems thingie; it's a functional human thingie. Its capacity as an AI system is limited by its inescapable wiring to a subjective desire structure. Obviously, 10 or 20 years down the road, computer speed, capacity, and inter-connectivity will be such that most material problems will be more cost-effectively resolved by them.

I knew that then and I clearly see it now. Our pharmacological information system restricts our capacity to solve material problems outside our native environment.
 

And the brain isn't a computer. The brain is a jury-rigged evolutionary mess that just works, because eventually a drunk monkey on a typewriter will write the complete works of Shakespeare.

I've also studied AI at uni and a problem with all of the attempts, as I see it, is that we're not building them for problem solving. We're building them for "thinking" and behaving exactly like a human. That is the baseline for what is considered intelligent. There's lots of problems with that. The biggest one being that it is a waste of time, since we already have biological brains that can do that. We certainly don't need an electronic one that also will suck in the same ways.
 
You do understand that the goal is to replace biological brains?
 

No, it really isn't. The goal is to create devices that can replace us in certain specific ways, or supplement us - whether that's to solve some specific problem, act as training wheels for humans, or predict the future. Ideally we want an AI that will outperform us, and then it's not a question of replacing. Then it's a question of designing a world to suit that superior intelligence.
 
Yes, really, the goal is to replace humans.
 

OK, I'm listening. Please explain what you mean by that.

- - - Updated - - -

A hammer mill needn't look like the blacksmith it replaces.

A useful AI needn't think like a human, for much the same reasons.

Exactly my point. I'd argue that an AI that thinks like a human isn't necessary nor wanted.
 
Here's a good demonstration of how to think when it comes to replacing human functions. The sooner we stop thinking of humans as intelligent the sooner we can start making robots and AI useful. Instead of seeing ourselves as the pinnacle of creation, we should instead see ourselves as irresponsible children who don't know what is good for us.

https://www.linkedin.com/pulse/mach...-conely?trk=hp-feed-article-title-editor-pick
 
If you want AI to interact with people, you need AI which thinks, or at least pretends to think, like them.
Imagine an AI universal translator which does not think like people, or a robo-lawyer who does not think like people. Or a robo-politician... OK, maybe not that; politicians don't think like normal human beings.
 
Your link has absolutely nothing to do with AI.
 

Imagine, instead of having a robot help an old lady cross the road, we engineer a solution so that the old lady doesn't have to cross the road to begin with.

We don't want all robots and AI to interact with humans. We want specialized interaction robots that do nothing other than interact with humans. And then we want those in turn to figure out which robots to use to solve problems. This BTW is how the computer you are writing this on works.

BTW, your example of the universal translator is broken. I recommend googling "Searle and the Chinese room". A universal translator does not need to understand the languages it translates between. It just needs to be able to mindlessly match symbols from two lists - but really fast.
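The "mindlessly match symbols from two lists" point can be made concrete with a toy lookup-table translator (the word pairs are invented for illustration; real translation needs far more context, but the room's point is that none of it requires understanding):

```python
# A "Chinese room" style translator: no understanding, just table lookup.
# The word pairs below are invented for illustration.
LEXICON = {"hello": "bonjour", "world": "monde", "cat": "chat"}

def translate(sentence):
    """Replace each known word with its counterpart; pass unknown words through."""
    return " ".join(LEXICON.get(word, word) for word in sentence.split())

print(translate("hello world"))  # -> bonjour monde
```

The table can be made arbitrarily large and sophisticated without the machinery ever understanding a word of either language.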

- - - Updated - - -


That sounded like a statement, and not a question. Do you want me to explain why it does (have something to do with AI)?
 
I did not say it needs to understand; I said it has to look like it understands. But one way of looking like it understands is actually understanding. And your Google argument is irrelevant, because as long as a machine passes the Turing test, for all intents and purposes it has a mind - certainly in the practical sense.
- - - Updated - - -


Yes, it was a statement.
 