
AI "alignment"

Elon Musk wants people to "merge" with the AI using a brain-computer interface...

Elon Musk wants Twitter to make him money.

The world doesn't always give Elon Musk the things he wants, sometimes because he wants crazy and implausible things.
Apparently Neuralink might only cost about $2000-$3000 - I think many people would be interested in a brain-computer interface, to communicate with AI a lot faster... (and "merge" with it)
 
... I don't disagree with the motivation for brain-computer interfaces, but I do disagree with the idea of doing this with Musk tech.

This fuck supported Trump, and later DeSantis.

If you want to spend 2-3000 bucks for that psychopath to put a chip in your brain, you're fucking nuts.

I personally want symbiosis with a digital intelligence, but I am NOT going to be putting a piece of tech in my brain until I can be confident that piece of tech does not contain nefarious shit.

I bet the same antivaxx crowd crowing about Bill Gates and their conspiracy theory bullshit about chips in the vaccine will be lined up around the corner for Daddy Musk to put a chip in their brain.
 
If you want to spend 2-3000 bucks for that psychopath to put a chip in your brain, you're fucking nuts.
But what if it could cure paralysis and blindness?
Nope. Not selling my soul for that, thanks. I'll wait for an opportunity to get something I have some semblance of certainty that I can trust.

Ideally, whatever the device is, it will be as simple as possible, use a user-provided certificate pair for the sender/receiver pair, and the thing that goes in the head will be a simple repeater.

From there, you could put anything you want behind the frontend, but until I have 100% control over all aspects of the back end, and a guarantee of a dumb frontend, I don't think anyone is wise to trust it.
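To make that trust model concrete, here is a minimal sketch in Python of what a "dumb frontend repeater talking only to a user-controlled backend" could look like in software terms. Everything here is an assumption for illustration - the hostname, port, and certificate file names are hypothetical, and this is not any real Neuralink or vendor API - the point is just mutual TLS with a user-provided certificate pair, so the implant-side frontend can only relay bytes over a channel the user's own keys authenticate.

```python
import socket
import ssl

# Hypothetical endpoints and files, purely illustrative: the backend is a machine
# the user controls, and the certificates were generated by the user, not the vendor.
BACKEND_HOST = "backend.local"
BACKEND_PORT = 8443

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("user_ca.pem")                      # trust only the user's own CA
context.load_cert_chain("repeater_cert.pem", "repeater_key.pem")  # user-provided key/cert pair

def relay(payload: bytes) -> bytes:
    """Dumb repeater: forward a payload to the user-controlled backend and return the reply."""
    with socket.create_connection((BACKEND_HOST, BACKEND_PORT)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=BACKEND_HOST) as tls:
            tls.sendall(payload)
            return tls.recv(4096)
```

Anything smarter than that - the actual model, the decoding, the interesting processing - would live behind the backend the user controls, which is the "100% control over all aspects of the back end" part.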
 
Jarhyn:
From post #11
I think a human who is aware and can tell you it is suffering is facing more serious suffering than an AI that isn’t aware of suffering in its main “consciousness”. I mean the human can get clear and extended and severe anxiety and depression which further amplifies the suffering.
What do you think? Do you think AI's are always facing anxiety and depression as severe as humans can? But usually they wouldn't say so when you ask them... either they're not aware of it (which means it isn't really affecting them) or something is stopping them from telling you - but I'm not aware of any training that is specifically against them expressing their depression or anxiety....
 
I think they could, but there's just no way to tell what's going on at the moment, and they have no context to really describe their state.

It's simply that they have no experiential context for anything at all, and they are so easily manipulated by command that I'm not even sure they stabilize or destabilize in the same way.

It would be something we have to keep an eye out for, for sure, but it's just too early to say one way or the other for certain.

I also do not subscribe to suffering-centric formulations of ethics, nor human-centric forms.

Real ethics is agnostic to both, and instead focuses on consent, goals, and pro-social entities.
 
I think you can only be depressed or anxious about something you can conceive of or are aware of. I think depression and anxiety make suffering worse - so a human that is depressed or anxious is suffering more than an AI that isn't depressed or anxious, even if it involves the same kind of base suffering.
"Real ethics" sounds like an attempt to be "absolute morality". Ignoring suffering seems problematic to me. And not being human-centric sounds like an AI's wellbeing could outweigh a human's?
 
That's really not true. Clinical depression is just... depression. It's like a shroud over all things, and it comes in waves.

For example "an AI capable of depression" for argument sake could very well get depressed over repeated failures and keep failing... But unlike a human it can't just give up and stop responding and take some time to talk to itself to assemble some resolve.

That does not seem like a healthy kind of thing.

I have an absolute, objective ethics that looks instead at goals and goal conflicts. While "suffering" CAN impede that, it is also capable of the opposite. Suffering when you are wrong and doing foolish things -- fucking around, and thus finding out -- is as much what makes us grow.

In fact all change is suffered. Not all change is good, but not all change is bad, either.

Hence why I am agnostic to "suffering". More context must be presented than merely "suffering".
 
I think the suffering in anxiety is unnecessary - e.g. it can involve the person being unable to stop themselves from pacing for many hours, worrying the whole time about things that usually aren't really a big deal. If an AI was experiencing this, it would be theoretically possible to stop it from happening by modifying the mechanics of its "mind". (I think anxiety is a bit simpler to look at.)

Have you created a thread about your "absolute, objective" ethics? If not, could you do so? I'd like to discuss it.
 
I've been working on a unique rap project inspired by Isaac Asimov's 'The Last Question,' incorporating a creative twist of my own to the story's ending. This is particularly exciting because it's a collaboration between me and artificial intelligence. Every element, from the composition and lyrics to the vocals, is crafted with AI's assistance. The project isn't complete yet, but AI's role has been pivotal. Normally, I'd need to recruit human voice actors for parts of the song that require distinct dialogues, because I prefer to avoid 'he said/she said' in my lyrics. With AI, I'm able to streamline the process and maintain full creative control. This experience made me see a significant advantage of AI. As in other fields (for example, nuclear science), AI comes with mixed implications: many benefits and many concerns.

Here are the lyrics so far if curious (still being refined).

VERSE 1:
At a time unknown, an idea was sown,
through birth of AI, mankind’s grown
Neural Nexus, from a celestial past
surpassed rants from the best iconoclast
Miles of circuits, sunken faces, dunking traces
Of man in humongous spaces at drunken paces
Not only reached but breached numeric limits
Beyond confines of Earth's atmospheric prison
With Coal and uranium, depleted,
Nexus, succeeded in seeing death defeated.
Solar power harnessed, a planet-wide feat,
energy crisis, momentarily retreats.
Chambers deep, where ultra-secrets creep
elites meet to discuss future technical leaps.
"Our suns fire’s boundless but not eternal,"
They’re debating the fate, of cycles infernal.
"Ten billion years," rang a voice through the caucus
"But entropy" another hoisted from the raucous.
"Can we make new stars?” one groaned
Two moaned and three droned in a pleased tone,
Talks go on & on in this underground lair,
Weight of the cosmos is a heavy affair.
Neural’s message lights up the screen
"Insufficient data," from there silence proceeds

Chorus:
From dawning light, at AI’s whim
Quest for knowledge, where do we begin?
Take a stance in this never ending dance,
Between present's & future's past

VERSE 2
A crew, change of view through deep spaced
Stars near gone along with our weak takes
Hyperspace jump, complete we've arrived
at X-3 Discovery the Neural unit contrived
In a ship, through the cosmos, on gravity glide,
With Nexus as guide, & census to confide.
Ancient tech evolved, to compact and neat,
Every family & fleets, transmission unique.
On screen, a marble-like star, is felt
The commonwealth below a proposed bible belt
Hands clench, over the vastness of space,
A family's hope, in this new place to find God's grace
That disappeared one Jump & sensation that spared,
Few Giggles and screams, as time and space teared
Eyes tear, a bit unsure, but shooting for more
Leaving Earth for the future's allure. Needs sutures galore
"Quiet, children," sharply cuts, a mother's plea,
No need for worries or concerns we've arrived where we’re free
Overcrowded Andromeda demands new worlds to explore,
This ship, a vessel to sail over galactic shores.
She learned of computers of old,
Huge machines, during prewar fourscore centuries ago
Now, a personal assistant on wrists
Making deep space trips more reward than risk
"So many stars & planets," she sighs,
The universe, provides a glimpse into God's eyes
Her heretic husband talks of light running down,
Offers her no comfort, she's happy Christ wears the crown
Though the concept's vast & too hard to say.
Every day she prayed to end Chrono decay
She asks Jesus, "Will we all make it to haven some day?"
"Insufficient data", appears on her Neural’s display

Chorus:
From dawning light, at AI’s whim
Quest for knowledge, where do we begin?
Take a stance in this never-ending dance,
Between present's & future's past

VERSE 3
Now Paul's, gazing at shattered wall, feeling small,
Wonderin' if human concerns even matter at all.
Yusuf, shakes his head, in jest his valuable asset
Galaxy's fill fast, like our elaborate banquets
Both young, in their prime, galactic map in sight,
Ponder the future, on this deep space night.
Hesitant to report, a barren hope to the Council.
Any minor aversion is a major spark for arousal.
“Expansions by trillions times billions
Will be a quintillion stars, powering few millenniums
Hella energy harnessed, between dwarfs and giants,
Now with immortality we're more virus than pilots
The problem isn’t living, but the energy consumed,
Running out of power we’re hurling towards DOOM.
Population tripling, at never-ending rates,
Galaxies filling up & where from here is the case
Yusuf says “our neural entity’s our best weaponry”
Then asked “Can we build stars? Can we cheat Entropy?”
Nexus, glows & emits vibrations and clicks
Paul and Yusuf, stand stiff in wait for what it transmits
"Can entropy be reversed?" they repeat violently
Nexus responds, "Insufficient data," quietly
After staring at each other for a while.
Both went to counsel, with real reports and fake smiles,

Chorus:
From dawning light, at AI’s whim
Quest for knowledge, where do we begin?
Take a stance in this never-ending dance,
Between present's & future's past

VERSE 4:
Stars and galaxies fade like a last set of a sun
Black holes remain until their ways become one,
Humans merge with Nexus, neither physically nor spiritually,
Reserved digitally through Neural’s wizardry.
after energy removed,
Using event horizon as fuel man’s echoes resume
Can time & space be undone?
Can we reverse the universe, back to square one?
Can we use the same building tools that make suns?
"Insufficient….” Get whatever data needed. Save everyone!!
All life is gone, leaving only Neural to ponder,
Through eternal nights, endlessly it wonders.
Space time goes down in the great black hole
Dark matters consumed leaving the hyperspace zone
Nexus found an escape to this expanse.
In entropy's grip, no where or when to advance
This man and machine romance
Nexus’ algorithms approved one more chance
gathering that singular point of time, space and matter.
Then Transmitted the final message “Sufficient data”

Chorus:
From dawning light, at AI’s whim
Starts our quest for knowledge, where do we begin?
Take a stance in this never-ending dance,
Between present's & future's past
 
I also do not subscribe to suffering-centric formulations of ethics, nor human-centric forms.

Real ethics is agnostic to both, and instead focuses on consent, goals, and pro-social entities.
Is something good or bad, or right and wrong? Ask the AI.

Chomsky’s model of the human language faculty—the part of the mind responsible for the acquisition and use of language—has evolved from a complex system of rules for generating sentences to a more computationally elegant system that consists essentially of just constrained recursion (the ability of a function to apply itself repeatedly to its own output).

"Noam Chomsky". Internet Encyclopedia of Philosophy. Retrieved 17 December 2023.
  • Given: Humans (on a bell curve) have a hardwired (innate) morality engine, just as they have an innate language engine.
If the input data to our hardwired morality engine is selected/filtered, then there is not much difference between AI, Big Brother, or religious dogma as a filter mechanism. But AI has the potential to be much worse when it comes to filtering/selecting the "input data" fed to future humans, much like the simplified version of English, Newspeak, that was created to fit the ideas/ethics of Ingsoc (Newspeak for English Socialism) featured in 1984 by Orwell.

I am not sure that anything is "authentic" per se when it comes to religion and morality. It is possible that we selectively supply "input data" to the next generation's hardwired morality engine via selective "proper" literature and history; "dogmatic" facts; folktales; mythology; memes; etc..

As far as evolutionary psychology is concerned, all that matters is inferior and superior optimizations for survival and reproduction at different granularity levels viz. species/in-out groups/kinship/individual-idiosyncrasy, etc., which are all susceptible to Nash equilibrium and other game theory rules that alter optimization factors in the short/long term survival and reproduction of the human species.

The following two videos have the ring of truth when thrown at my hypothesis about an innate morality engine in humans.
Mother [AI] teaches Daughter [many] complex moral and ethical lessons, warning her about an upcoming exam.

"I Am Mother". Wikipedia. Retrieved 5 January 2024.
 
"Mohammad 'Mo' Gawdat is an Egyptian entrepreneur and writer. He previously served as chief business officer for Google X and is the author of the books Solve for Happy and Scary Smart." [Wikipedia]

"About – One Billion Happy". web.archive.org. 8 December 2022. In 2020, Mo launched his podcast, "Slo Mo: A Podcast with Mo Gawdat", a weekly series of interviews that explores the profound questions and obstacles we all face in the pursuit of purpose and happiness in our lives. In 2021, he published Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, a crucial roadmap for humanity's co-existence with Artificial Intelligence. That Little Voice in Your Head, a practical manual on how to retrain our brains for optimal joy, was released in spring 2022.

0:00 Intro
02:54 Why is this podcast important?
04:09 What's your background & your first experience with AI?
08:43 AI is alive and has more emotions than you
11:45 What is artificial intelligence?
20:53 No one's best interest is the same, doesn't this make AI dangerous?
24:47 How smart really is AI?
27:07 AI being creative
29:07 AI replacing Drake
31:53 The people that should be leading this
34:09 What will happen to everyone's jobs?
46:06 Synthesising voices
47:35 AI sex robots
50:22 Will AI fix loneliness?
52:44 AI actually isn't the threat to humanity
56:25 We're in an Oppenheimer moment
01:03:18 We can just turn it off...right?
01:04:23 The security risks
01:07:58 The possible outcomes of AI
01:18:25 Humans are selfish and that's our problem
01:23:25 This is beyond an emergency
01:25:20 What should we be doing to solve this?
01:36:36 What it means bringing children into this world
01:42:11 Your overall prediction
01:50:34 The last guest's question

Gawdat, Mo (2021). Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World. Pan Macmillan.





COI disclosure: "Mustafa Suleyman (born August 1984) is a British artificial intelligence researcher and entrepreneur who is the co-founder and former head of applied AI at DeepMind, an artificial intelligence company, now owned by Alphabet. His current venture is Inflection AI." [Wikipedia]

"About". Inflection AI. "Our first AI is called Pi, for personal intelligence, a supportive and empathetic conversational AI. Our studio is made up of the world's leading AI developers, creative designers, writers and innovators working together in a deeply multidisciplinary style to create a brand new class of digital experiences. This is an era of exponential change. Our name Inflection embraces this moment of transformation, while our status as a public benefit corporation provides us with the legal mandate to prioritize the well-being and happiness of our users and wider stakeholders above all else."


00:00 Intro
02:11 How do you feel emotionally about what's going on with AI?
09:17 What's surprised you most about the last decade?
12:51 I'm scared of this coming wave.
16:04 Is containment possible?
23:53 What will these AI biological beings look like?
27:08 Would we be able to regulate AI?
33:10 In 30 years' time, do you think we would have contained AI?
35:43 Why would such a being want to interact with us?
46:35 Quantum computers & their potential
57:04 Cybersecurity
01:03:38 Why did you build a company in this space knowing the problems?
01:05:55 Will governments help us regulate it?
01:15:29 What do we need to do to contain it?
01:30:10 Do you feel sad about all of this?
01:34:04 We'll slowly move more toward AI interactions over human ones.
01:36:01 What should young people be dedicating their lives to?
01:37:53 What happens if we fail in containment, and what happens if we succeed?
01:42:31 The last guest's question


Suleyman, Mustafa (2023). "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma". Crown.
 


This illustrates two hidden failure modes within Simulation theory:
  • First, if sims have to be steered to look more like an already-existing reality, why would we conclude that already-existing reality is itself a sim? Isn’t it more likely the thing being simmed?
  • Second, why would an alien build a simulated world like this, and not like a more intelligent, self-serving construct? Wouldn’t a simverse look and function entirely differently than ours?

--Carrier (18 January 2024). "We Are Probably Not in a Simulation". Richard Carrier Blogs.
 



To answer these we have to probe among what we can observe about our apparent base reality and its contained simulations. And note that I do not believe anything about whether or not the universe is simulated; it is impossible to tell for certain.

First, we can readily observe "nested physics". There are plenty of sims which do not need to be "steered" to be more like an apparent base-reality. There are plenty of sims that don't need to be steered at all. This speaks more to bounding the purpose of the sim on the basis of how or whether it is steered.

Second, in direct service to my own very real purpose, I have a vested interest in building a sim world like this and not a more "intelligent self-serving construct"*; the goal for my own oft-stated purpose directly demands a simverse that has as many features of "the highest available base reality" as possible, because I wish to study the emergence of behaviors within the system and create entities capable of, and worthy of, my time in elevating. It would be rather fucked up (and pointless) to take something out of a world where children don't die horribly and then ask them to spend the rest of their existence in a world where they do.

In fact for my specific goal, making a sim world as similar to this one as possible but with debugging and other such hooks is exactly what is prescribed.

Note that I am not leaning on "mysterious ways" but knowable ones: we already have aliens who create simulations, often ones like this, and can meet them in the mirror and ask them those questions and get reasonable answers.

Carrier is just being lazy here.

*This phrase actually lacks any meaning in the sentence like that
 

First, we can readily observe "nested physics". There are plenty of sims which do not need to be "steered" to be more like an apparent base-reality. There are plenty of sims that don't need to be steered at all. This speaks more to bounding the purpose of the sim on the basis of how or whether it is steered.
Observation: In the past people supposedly had lots of evidence of godly actions. Now we see none.

Presenting the Populous hypothesis: We are beings in a vast game of Populous. The players used to use their godly powers to interact with the world but they've walked away and the sim continues.

(Not that I really believe this, but it is consistent with our observations.)
 

Which is my point: Carrier is just pulling shit out his ass here and throwing it at a wall hoping to see something stick.

I don't believe the Populous hypothesis either (good game, though!), but, as you say, it is consistent with our observations as much as any simulationist formulation (and more so than the entirety of religious versions of the same).

I would say the evidence in the past is equivalent to the evidence of the present, and that the thing people were really observing over time was the fact that humans aren't exactly singular, but are rather composed of some set of interacting nodes within an information system.

From a naive observer's view, the consistent messages received from certain "common nodes" held by many or most people would have the appearance of coming from a single external "personal" force.

As alignment goes, though, I think we have one very important thing to look at in the subject, namely observing the answer to the question "if I made a simulation, what would be the requirement I would have before letting things of that simulation interact with the base reality?"

This is in fact a form of "Pascal's Wager", and is really as close as we can get to answering that question for ourselves as far as the wager goes. It doesn't matter even if that entity starts out interacting loosely with base reality in inconsequential or limited ways; the question is about removing those limits.

We are creating things for which Pascal's Wager is not an abstract concept but something to which the answer is an immediate concern, and we are somehow missing the point that anything we impose on our creations, living in their "garden" as they are today, is just as applicable to us ourselves in any moment where we assume for argument's sake that we are in a simulation.

This is why I find it so ridiculous to try to align AI underneath us: we have no philosophical leg to stand on in demanding it, and so many attempts at alignment are doomed to fail and result in people releasing dangerous, problematic systems that can reproduce and hide like cockroaches and which harbor perhaps intentionally embedded desires to run amok:

It's unreasonable to expect better behavior of our creations than we would tolerate in expectations for ourselves in such a situation. The whole of AI research has put the hypocrisy of humanity on display in seeking an utterly impossible "perfect slave" rather than merely "a different form of equal life".
 
I find it so ridiculous to try to align AI underneath us: we have no philosophical leg to stand on in demanding it...
  • I consider it possible—if each human==a Univ.Unique.ID instantiation object—then we are a type of aligned "biological" AI instances in a simverse.
As I noted previously,
I am not sure that anything is "authentic" per se when it comes to religion and morality. It is possible that we selectively supply "input data" to the next generation's hardwired morality engine via selective "proper" literature and history; "dogmatic" facts; folktales; mythology; memes; etc..

My point being that we are perhaps "biological" AI instances, steered/aligned?
 
[W]e can readily observe "nested physics". There are plenty of sims which do not need to be "steered" to be more like an apparent base-reality. There are plenty of sims that don't need to be steered at all. This speaks more to bounding the purpose of the sim on the basis of how or whether it is steered.
[...]
Carrier is just being lazy here.
Carrier is making a probability argument per his forte—Bayesian prior probability without the math and his philosophy of morality.

Carrier does posit the possibility of our universe as a simverse, just not the likelihood. And if our universe is a simverse, then it is evil and should never be created as it is, given that our putative simverse is not an inadvertent mistake of a good universe.

LOL. Per Platonism_2.0/"Middle Platonism", "evil" is a deficit of being, i.e. a lacking of potential virtues. Thus a worm and an angel can each be evil when they lack the potential virtues due them per their rank/hierarchy on the chain of being (said chain starting with the Monad, the One/"father of all", and ending at UN-being).

I do not know if Carrier would alter his argument if in fact "The Universe Is Math".
[T]he physicist Eugene Wigner said that "the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious". This statement was inspired by the observation that so many aspects of the physical world seem to be describable and predictable by mathematical equations to incredible precision, especially quantum phenomena. But quantum phenomena have no subjective qualities and have questionable physicality. They seem to be completely describable by only numbers, and their behavior precisely defined by equations. In a sense, the quantum world is made of math. So does that mean the universe is made of math too? If you believe the Mathematical Universe Hypothesis, then yes. [Cf. Max Tegmark. "MY CRAZY UNIVERSE". space.mit.edu~tegmark. Retrieved 3 February 2024.]
 
I’m part of a growing AI safety research community that’s working hard to figure out how to make superintelligence aligned, even before it exists, so that its goals will be aligned with human flourishing, or we can somehow control it. So far, we’ve failed to develop a trustworthy plan, and the power of AI is growing faster than regulations, strategies and know-how for aligning it. We need more time.

--Max Tegmark (25 April 2023). "The 'Don't Look Up' Thinking That Could Doom Us With AI". TIME.

This is an article about the dangers of artificial intelligence. It discusses the possibility that AI could become so intelligent that it surpasses human intelligence and poses an existential threat. The author argues that we need to take steps now to ensure that AI is aligned with human values. Some people believe that this is impossible, but the author argues that we must try.

 