The thing is, humans aren't mothers to animals
No, the thing is that AI, which might, debatably, have (or soon have) Intelligence, is in no way designed to have AE: Artificial Emotion.
Mothers put their children's interests ahead of their own (mostly, if not always), and pet owners look after their pets, because of their endocrine systems, not their brains. Is anyone designing AIs whose decisions are modified by floating in a sea of hormones, each of which has both positive and negative feedback loops controlling the levels of some of the others?
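That interlocking-feedback point is easy to state and easy to underestimate, so here is a deliberately crude toy model. It is a sketch, not endocrinology: the two "hormone" levels A and B and every rate constant are invented for illustration. B suppresses A while A drives B, and each relaxes toward a baseline; even this two-variable caricature shows that you cannot set one level without dragging the other along, which is the designer's-control problem in miniature.

```python
# Toy sketch of two coupled feedback loops, NOT real endocrinology.
# "Hormone" A is suppressed by B (negative feedback on A); A stimulates
# B (positive feedback on B); both relax toward invented baselines.

def step(a: float, b: float, dt: float = 0.01) -> tuple[float, float]:
    """One Euler step of the coupled loops (all constants made up)."""
    da = 0.5 * (1.0 - a) - 0.8 * a * b   # A relaxes to baseline; B suppresses A
    db = 0.3 * (0.2 - b) + 0.9 * a       # B relaxes to baseline; A boosts B
    return a + da * dt, b + db * dt

a, b = 1.5, 0.1                          # start with A spiking
for t in range(3001):
    a, b = step(a, b)
    if t % 500 == 0:
        print(f"t={t:4d}  A={a:.3f}  B={b:.3f}")
```

A real endocrine system has dozens of such loops wired into one another, plus a neural system modulating and being modulated by all of them, which is the scale of the thing being handwaved away.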
You are handwaving away a massive degree of genetic complexity, but far worse, you are completely ignoring several abstraction layers that sit between those genes and the behaviours to which they (eventually, mostly) lead.
We don't have genes that encode behaviour. We don't even have genes that encode neural structure. We have genes that encode proteins.
If there is doubt that we comprehend neurology well enough to understand how the brain works, it is nothing compared to the well-known, vast extent of our ignorance of the details of endocrinology, and our even worse grasp of the developmental biology that gets us from a single cell with a paired set of genetic "codes" to a fully developed neuro-endocrine system. And one of the few things we are sure of is that you cannot separate the neural from the endocrine.
One thing we do know about the endocrine system is that it is at least as important as the neural system in determining our behaviour.
You cannot "simply program" an AI to have emotions that its designer can control. We are so poorly in control of our own emotional states that most of us deny that they affect us at all; Our brains lie to us, telling us that we are reasonable and rational - but we are nothing of the sort, and we have evolved in an environment where being so would be disastrous for our survival prospects.
If we could develop a reasoning, logical, and rational AI, then we would discover what an intelligence without emotion really looks like, and my guess is it would look a lot more like Skynet than like Spock.
My hope is that instead we will just fuck things up by developing LLMs that are not only unintelligent but increasingly useless bullshit generators, until we realise that we can generate plenty of useless bullshit ourselves, and have gained nothing by automating the process.
AI is the latest management-driven fad. It is already showing signs of moving from the "spend shitloads of money to avoid being left behind by our competitors" stage into the "thank fuck we didn't invest as heavily as our competitors" stage.
As usual, those who did invest are deeply mired in the sunk cost fallacy, and are assuring everyone who will listen (mostly themselves) that the great breakthrough to profitability is almost here, and that just a few billion more will lead us to the promised land.