You expect me to read that paper?
I don't - the paper was linked for anyone who would like rigorous information about what MS is doing in the field.
You posted it in your reply to my post. You clearly don't understand the issues here, and it's very annoying when people who don't understand start throwing around links without, it seems, even reading them themselves.
Real-time human transcription has very little to do with voice recognition.
You said you could beat a computer doing conversational speech recognition by recording a video and listening to it over and over again, and I'm the one who doesn't understand? You must have some riveting conversations...
You think that applying ML methods which traditionally have not been used in speech recognition, to improve accuracy over previous methods, is an incremental improvement - I'm guessing in the same sense as a chess-playing robot which is able to process X% more plies - and that the problem they're trying to solve is an 'unfair comparison' because you don't like the problem. And I'm the one who doesn't understand?
Professional transcriptionists, who MS compared their technology to, don't get infinite time to transcribe conversational speech. Sometimes they have to send the transcript immediately after the conversation has ended and they don't have the privilege of recording and repeatedly listening to it (for up to 40 years even). Is that unfair? Sure - but it's the problem being solved. Time is not the irrelevant factor you seem to think it is.
The primary reason for recording and then transcribing from that recording is so that the person doing the transcribing can stop the tape and have time to write what they heard, not so they can replay it for understanding. Many of the errors in real-time transcription from shorthand notes are in translating the shorthand to English, even for the person who originally took the shorthand notes. Translating shorthand notes taken by someone else can be a daunting task.
They ARE brute-forcing everything now. Results are good, but it seems deep neural nets have a certain ceiling which does not let them reach human level. Yes, they can create superficial metrics like the one discussed here, where computers are better than humans, but in reality they are still way short of reliably replacing humans. My feeling is that brute force (deep NN) will work eventually, but you need to go extremely deep, maybe 100 or 1000 times deeper than currently possible. Meanwhile, I think conventional approaches (using biological neural nets, also known as the human brain) can in principle solve particular AI problems better.
My comment on the paper is that anyone with a sound statistics background who maintains a proficiency with APL would be in heaven in this area. All that one would need to add is adaptive logic, for which Bayes might be a start. What is really needed is more Big Blue-ing - large processing and memory-accessing machines - and everything will be revealed. IOW, brute force.
I haven't been in the AI expert-systems world for about 20 years now. Back in the day I could beat AI programmers in both price and utility with well-chosen, linked lists. That day has passed.
The real problem is that the brain isn't an AI thingie or an expert-systems thingie; it's a functional human thingie. Its capacity as an AI system is limited by its inescapable wiring to a subjective desire structure. Obviously, 10 or 20 years down the road, computer speed, capacity, and inter-connectivity will be such that most material problems will be more cost-effectively resolved by them.
I knew that then and I clearly see it now. Our pharmacological information system restricts our capacity to solve material problems outside our native environment.
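As a rough illustration of the earlier remark that "Bayes might be a start" for adaptive logic, here is a minimal sketch of Bayesian updating: the system's belief adapts as evidence arrives rather than being fixed up front. The two hypotheses and the likelihood numbers are invented for the example and have nothing to do with the MS paper.

```python
def bayes_update(prior: float, like_true: float, like_false: float) -> float:
    """Return P(hypothesis | evidence) from a prior and the two likelihoods."""
    evidence = like_true * prior + like_false * (1.0 - prior)
    return like_true * prior / evidence

# Hypothetical hypothesis: the speaker said "recognize speech" rather than "wreck a nice beach".
belief = 0.5  # start undecided
# Each observation: (P(evidence | hypothesis true), P(evidence | hypothesis false))
observations = [(0.8, 0.4), (0.7, 0.5), (0.9, 0.2)]

for like_true, like_false in observations:
    belief = bayes_update(belief, like_true, like_false)
    print(f"updated belief: {belief:.3f}")
```

Whether piling more compute on top of this kind of updating amounts to the "brute force" described above is, of course, the open question.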
You do understand that the goal is to replace biological brains?
There are lots of problems with that. The biggest one being that it is a waste of time, since we already have biological brains that can do that. We certainly don't need an electronic one that will also suck in the same ways.
Yes, really, the goal is to replace humans.
No, it really isn't. The goal is to create devices that can replace us in certain specific ways, or supplement us - whether that's to solve some specific problem, act as training wheels for humans, or predict the future. Ideally we want an AI that will outperform us, and then it's not a question of replacing; it's a question of designing a world to suit that superior intelligence.
A hammer mill needn't look like the blacksmith it replaces.
A useful AI needn't think like a human, for much the same reasons.
If you want AI to interact with people, you need AI which thinks, or at least pretends to think, like them.
Here's a good demonstration of how to think when it comes to replacing human functions. The sooner we stop thinking of humans as intelligent, the sooner we can start making robots and AI useful. Instead of seeing ourselves as the pinnacle of creation, we should see ourselves as irresponsible children who don't know what is good for us.
https://www.linkedin.com/pulse/mach...-conely?trk=hp-feed-article-title-editor-pick
Imagine an AI universal translator which does not think like people, or a robo-lawyer who does not think like people? Or a robo-politician... OK, maybe not that, politicians don't think like normal human beings.
Imagine instead of having a robot helping an old lady to cross the road, we instead fix things so that the old lady doesn't have to cross the road to begin with.
We don't want all robots and AI to interact with humans. We want specialized interaction robots that do nothing other than interact with humans, and then we want those, in turn, to figure out which robots to use to solve problems. This, BTW, is how the computer you are writing this on works.
BTW, your example of the universal translator is broken. I recommend googling "Searle and the Chinese room". A universal translator does not need to understand the languages it translates between. It just needs to be able to mindlessly match symbols from two lists. But really fast.
I did not say it needs to understand, I said it has to look like it understands. But one way of looking like it understands is actually understanding. And your Google argument is irrelevant, because as long as a machine passes the Turing test, for all intents and purposes it has a mind, certainly in a practical sense.
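To make the "mindlessly match symbols from two lists" point concrete, here is a toy translator in the Chinese-room spirit: it maps symbols from one list to another with no understanding at all. The word pairs are invented for the example; whether this kind of lookup scales to real translation (syntax, context, ambiguity) is exactly what the exchange above is arguing about.

```python
# Toy "translator" in the Chinese-room spirit: pure symbol matching, no understanding.
# The word pairs below are made up for the example.
lookup = {
    "the": "le",
    "cat": "chat",
    "sleeps": "dort",
}

def translate(sentence: str) -> str:
    """Replace each known word with its paired symbol; pass unknown words through."""
    return " ".join(lookup.get(word, word) for word in sentence.lower().split())

print(translate("The cat sleeps"))  # -> le chat dort
```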
Your link has absolutely nothing to do with AI.
That sounded like a statement, and not a question. Do you want me to explain why it does (have something to do with AI)?
Yes, it was a statement.