I was with you up until the last bit, about responsibility. Having trouble seeing how that works. Do you, for example, mean awarding moral responsibility? I might agree that it's something we (our systems) do, in an automatic sort of way, as part of the way they function, but not that it can be justified, morally, as an ought. Though that's tricky, because I'm not wholly convinced that you can't get an ought from an is.
Let me try to put it this way. If a robot malfunctioned, we might (if we're temporarily being reasonable and rational) get it fixed, or taken out of service, or destroyed, but we wouldn't blame the robot itself...
And that's the same place that I fall off that bus. Without some acceptance of free will, I don't see how responsibility makes any sort of sense. It's completely senseless and simply cruel to blame anyone for anything ever, or to hold people accountable for their actions and decisions. When a company pollutes a stream with improperly treated waste... it wouldn't make any sense at all to hold them accountable for the damage done - they had no ability to take any other action, they could only have done exactly what they did. Punishing them for something they had no control over at all is simply cruel.
Might as well go back to shunning left-handed people because they're left-handed. Or you know, insisting that gay people are evil because they chose to be gay!
It isn't necessarily senseless or irrational to allocate responsibility. It may even be necessary to do so. Therefore, imho, we can (and would) still have to allocate responsibility even if there were no fundamental or absolute moral case for doing so. We could think of it as either (a) pragmatic or (b) unavoidable. And the suggestion, sometimes made (not by you), that giving up belief in free will would mean that we could not or would not hold people responsible is of course untrue.
Now, to me, it's the particular way we allocate responsibility (specifically how much retribution or moral opprobrium we attach) that changes depending on our view of what capacities we understand ourselves to have. We'd be back to not blaming the malfunctioning robot, essentially.
I find this very, very tricky, not least because we aren't actually rational beings; we are a mixture of rational and emotional (and a mix of conscious and non-conscious). But that's one reason I like to get into it rather than, say, only arguing over terms (some of that is necessary, but it can stifle the subsequent considerations). It's intellectually juicy. Or perhaps marshy would be a better word.
I think it's fair to say that compatibilists (I'm not necessarily suggesting that's your preferred self-label) and incompatibilists generally agree on a lot of things when it comes to the (imo key) issue of personal responsibility. Sam Harris and Daniel Dennett are often at loggerheads, but they end up in roughly the same place on this, I think. In other words, their prognoses and prescriptions are similar: that we 'should' (to introduce a normative) change the way we think about free will, or rather (to dilute the normative) that we would generally be better off allowing our belief in it to weaken.
Some say they fear 'creeping exculpation' if free will were to be binned. I personally see creeping exculpation as an enlightened opportunity for personal betterment, if that's not being too optimistic (and I am not given to excessive optimism, so I reckon the outcomes would be mixed), or at least a potentially 'good' thing if dealt with and understood appropriately and accurately. I'm sort of totally fed up with the human desire for blame (including my own most of all) and not entirely sure it has to be that way, or at least not as often or as entrenched.
And it's already happening anyway (in certain places, to certain extents). And imo likely to continue. Gradually*. Which is good, imo. I wouldn't advocate drastic change. For one thing, our knowledge is insufficiently complete at this time. For another, drastic change tends to rock the boat, risking at least some social instability (and perhaps even personal angst). It's a good thing we are just having the equivalent of an engrossing pub chat about it here.
* And I think that future genetics is likely to have as much input as future neuroscience.