It's a hard problem to solve, because when humans solve it, they do it by narrowing and defining the domain in advance, "the hard way". It's not as if we don't also have to do the work.
Narrow a human being down to a domain they haven't had years of training and work in, and they will sound like an idiot immediately, same as the computer. There are some great jokes about it in The IT Crowd, which I KNOW you've at least tried to watch (how could you not?).
This is what I'm talking about with "Chinese Room Humans". I am pretty well convinced that this model of internally ignorant word association is what is being leveraged by the Pols.
I know every joke in The IT Crowd forwards and backwards.
The Chinese Room is a philosophical problem, not a technical one. I believe the Chinese room can speak Chinese; Searle thinks it can't. But the result is the same. I also think that a computer program can have real emotions, i.e., as real as our emotions.
Being able to fluidly shift focus and domain of language is arguably creativity and novel problem solving. That's the goal. That said, we don't need machines to do that, since we can do it ourselves; it's more of a fun science project. Having them solve problems within a pre-determined domain is the real value of machine intelligence, and we're there already. Now it's more a question of how to implement it than the more basic question of how to do it at all.
A friend of mine just launched a machine learning fraud detection system for a major Danish bank. The system does all the routine bullshit, so that employees can focus on novel types of fraud. That's where we're at right now.
I do not think it's impossible for computers to become truly intelligent. I think it's only a matter of time. But nobody knows how that world will look or develop. It's too weird.