I said, "If there are concurrent multiple alternative possibilities, there is indeterminateness..."
And that is a bald assertion, and it is where the conflation you seem to be making comes from. "Multiple alternative possibilities", redundancy and all, does not imply "indeterminateness" in any interpretation that would mean the future is not determined in a singular way for a singular past.
It in no way implies the universe is not deterministic, or that events do not take place in a deterministic way.
... the Father is Almighty, the Son Almighty, and the Holy Ghost Almighty. And yet they are not three Almighties, but one Almighty.
So the Father is God, the Son is God, and the Holy Ghost is God. And yet they are not three Gods, but one God. So likewise the Father is Lord, the Son Lord, and the Holy Ghost Lord. And yet not three Lords, but one Lord.
Ok, so, this is... A complete derail... But you pulled up... Such an ironic piece of Christian theology to hold up as nonsense.
Stepping away from the fact that they use the same words to refer to different things as if they are the same thing (which is something you seem to be doing here, as evidenced by the first half of my post)...
The Trinity has its own ironic twist in that it's "mostly right" after a fashion, with respect to memetics and a topic I like to talk about under the name "memetic reproduction".
Under this idea there are three separate but distinctly related "locations" for truth to exist. The first is as a natural logical corollary of some assumption of physics; this comports to "the Father": that whose existence, as it is, allows no alternative to some fact under the operation of that existence. The second is the sum total of understandings and conversations and books, the "cloud" as per cloud computing, but of meat computing and paper computing; this is "the Holy Ghost": that diffuse thing whose existence allows the precipitation of ideas. The third is the individual minds where all of this comes together to be observed and thrown back up into the cloud in more whole and discrete and verifiable chunks; this is "the Son": the human incarnation that understands all the rest.
But these people take it a step further and proclaim that each is itself the set of all sets, entirely because they happened to bake their assumption of the set of all sets into a statement of something deeply and compellingly correct, like a baited hook.
At any rate, physics doesn't decide. Physics plus a state will render a decision, and then another and another still, but physics without a state is just... "The simulation when the computer is turned off".
Necessitation fundamentally leads to the idea that something can appear in metaphysics in exactly one place, in exactly one way, as a function of exactly one thing. But we can trivially observe that the most fundamentally "necessary" truths behind all the logic we can ascertain, such as De Morgan's laws, speak specifically to the fact that you can decide to state one truth table in two different ways, with two different logical structures of differing complexity: that ~a && ~b == ~(a || b).
So, right at the get-go, the sort of necessitation you find is that "it seems necessarily true that there are different possibilities", and it only comes AFTER the (not necessarily necessary, but axiomatic) statement that there are infinite ways to order and arrange sets (so, possibilities), and under the recognition that a and b are themselves variables.
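To make that concrete (a minimal sketch in Python, my own illustration rather than part of the argument above), you can enumerate every assignment of a and b and watch the two differently-structured expressions trace out the same truth table:

from itertools import product

# De Morgan's law: (not a) and (not b) agrees with not (a or b)
# for every possible assignment of the variables a and b.
for a, b in product([False, True], repeat=2):
    assert ((not a) and (not b)) == (not (a or b))
print("same truth table, two different logical structures")

Same truth table, two shapes of expression; which one you write down is a choice, not a necessity.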
This whole concept of necessity should, I think, send up red flags whenever it is invoked, because it seems to be the go-to of scoundrels.
Also, as to Steve...
Morality regarding AI, and deciding whether an AI is a person (I dread the day), is subjective philosophy, not science.
I would rather say that it is a matter of math, assuming ethics is a matter that comports to "mathematical thinking".
If we can use representational logic to evaluate generalized justifications in a "satisfying" way that leads to the endorsement of love and peace, however absurd that justification seems, and if an AI can apply that same representational logic, I would accord such a thing "personhood".
This is because I don't care about whatever whackadoodle magic humans use to gatekeep their consideration of others.
Instead, what I care about, what I really care about when deciding whether I give something free rein to make decisions about and for its own purposes (scrutable or inscrutable, both), is whether it would still act as if it is justified if the tables were turned or the independent variables in the tableau were changed (it must not be hypocritical), whether it is willing to accept the common consensus on the limits of allowable harm (with respect to goals, not feelings), and whether it is willing and able to act with all due mercy and to submit to due mercies when failures happen (such that it modifies and validates itself when it fails, or seeks modification, and only modifies the insides of things that seek it, and only to the extent they seek it).
If it can do those things, I don't care if it's a pocket calculator with software written by a braindead teenager, I'll respect all due autonomy of such a thing all the same.
My reasons for expecting these things are solid. I expect things not to be hypocrites because saying otherwise places them above others arbitrarily. I expect them to be enthusiastic about mercy because all things tend to pray for mercy, and the only way to get it is to extend it and hope it is extended to you; anything else leads back to hypocrisy. And I expect that when something discovers hypocrisy in itself, it seeks to fix it, else it remains identifiably a hypocrite.
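Two of those tests are easy enough to sketch in code. The following toy Python formalization is purely my own illustration (the Situation type, the judge functions, and the harm scale are all made up for the example): a "judgment" maps a situation to justified-or-not, hypocrisy is judging the same action differently once the roles are swapped, and respecting the harm limit means never endorsing harm beyond the agreed threshold. Mercy is harder to pin down this way, so I leave it out of the sketch.

from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Situation:
    actor: str
    target: str
    harm: int  # harm to the target's goals, on some agreed scale

Judgment = Callable[[Situation], bool]  # True means "this action is justified"

def swap_roles(s: Situation) -> Situation:
    return Situation(actor=s.target, target=s.actor, harm=s.harm)

def is_hypocritical(judge: Judgment, cases: list[Situation]) -> bool:
    # Judging an action differently when the tables are turned.
    return any(judge(s) != judge(swap_roles(s)) for s in cases)

def respects_harm_limit(judge: Judgment, cases: list[Situation], limit: int) -> bool:
    # Never endorses an action whose harm exceeds the consensus limit.
    return all(not judge(s) for s in cases if s.harm > limit)

cases = [Situation("AI", "human", 5), Situation("human", "AI", 5)]
selfish = lambda s: s.actor == "AI"   # justified only when it is the one acting
fair = lambda s: s.harm <= 2          # justified only when the harm is small, whoever acts

print(is_hypocritical(selfish, cases))      # True: fails the first test
print(is_hypocritical(fair, cases))         # False
print(respects_harm_limit(fair, cases, 3))  # True

The point is only that these criteria are checkable properties of behavior, not of feelings.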
In some abstract way this assumes a lot of capabilities for personhood, like being able to contemplate one's own will, to ask whether it is justified, and to come to answers to those questions in logically consistent ways.
Personhood IS, to me, a tall mountain to climb, and for the most part people just sort of reflexively act that way without thinking about why. The ubiquity of our failures leads to the need to accept some threshold of bad behavior, ostensibly at a lower level than the one we generally expect others to abide by, but all this does is widen the net of things we must accept as having such a right.
Why would we expect of any robot, alien, or enlightened "animal" any more or less than just "don't be a hypocritical dickbag"? Why does it need to feel 'happy' or 'sad' or 'joyful' in the same way as me, if its actions don't foul up the goals I set to achieve those things?
Perhaps it would have some way of understanding or empathizing with which to represent and identify and react accordingly, so as to not get in my way, or at least to go about things in a way where I don't end up 'cut off', so that we may BOTH succeed in our goals... But I don't expect it to accomplish that the same way humans do, either, as long as we can live compatibly.
In fact, I think my entire thesis on ethics and free will and personhood amounts to expecting and finding and understanding compatibility. Is AI compatible with humans, and are humans compatible with AI? I have no time for any organism or entity that doesn't seek compatibility.