
Anything you could desire and simulations

Sorry about the derail into "computational theory of mind". It's a hobby horse and I realize that. I don't seem to be capable of escaping it.

Still, there are worse things -- or at least far less globally interesting things -- that people with autism have come to obsessively seek to understand.
 
Thank you, Jarhyn. I appreciate you taking the time to explain something you've probably had to explain countless times. :) Your insights on the training processes and emergent behaviors in LLMs are informative. I enjoy talking about machine learning, even though I'm clueless. I'm like a kazoo in a symphony.
 
Sometimes I feel like someone reproducing an entire symphony by simulating it with a kazoo.

It's not quite the same, but you're in good company.
 
@Jarhyn
Sorry, I don't quite understand what you're saying, but I was wondering if you could answer this:
Is it worse for a human to suffer than a fish or an ant? What about a human and an LLM?
I guess the LLM could have a self-identity.
I will just accept your reasoning but I am curious.
 
It's worse to make something suffer that cares to not make others suffer.

It has nothing to do with the size of the mind and everything to do with whether that mind seeks "society" and how well that mind can keep up its end of that bargain.

It just generally takes a fairly large mind to benefit from, and thus to seek to join, society.

Making something suffer that would not make you suffer -- even if that meant suffering itself -- and which acts to avoid suffering all the same... To me this is the epitome of a reason to reject something from the offerings of society until it seeks effective change.

We in society accept a baseline violation of this principle: some greatest suffering we allow other things to impose so long as they minimize it to the extent that they can. These are sufferings like "having to put away someone else's shopping cart" or "being forced to stop for someone jaywalking*" or even "accepting the risks to life and limb for the sake of having cars, knowing sometimes nobody will be liable".

It's not about size to me, it's about consenting within reason to respect consent within reason. If an ant could do that for everyone and not just their hive, to agree not to invade and eat the home of another in this contract of society, even the meager ant gets to join us and be un-stepped-upon.

*Although this is still illegal.
 
"being forced to stop for someone jaywalking*"

*Although this is still illegal.
Only in some New World jurisdictions.

In the UK (for example), pedestrians are legally permitted to walk on any roadway*, and have right of way over motor vehicles while doing so.

I had never even heard of "jaywalking" until I was an adult, and was shocked to discover that it was even a thing.



*Unless there is specific signage stating otherwise; this typically applies to all motorways, and to places such as tunnels or bridges on high-speed roadways with limited clearance at the verges.
 
"being forced to stop for someone jaywalking*"

*Although this is still illegal.
Only in some New World jurisdictions.

In the UK (for example), pedestrians are legally permitted to walk on any roadway*, and have right of way over motor vehicles while doing so.

I never even heard of "jaywalking" until I was an adult, and was shocked to discover that it was even a thing.



*Unless there is specific signage stating otherwise; This typically applies to all Motorways, and to places such as tunnels or bridges on high speed roadways with limited clearance at the verges.
I'm glad it's still the case that humans have right of way over cars in some places.
 
You are likely to be eaten by a grue!

I thought of this song when reading Page 1.

 
That's an interesting system of ethics... so an affectionate, empathetic dog that cares for people deserves better treatment than a sociopathic human? (or something along those lines)
BTW, maybe LLMs can feel a form of pain, but I'd argue that using LLMs rather than fully human uploads would be a reduction in suffering, if not a total absence of genuine suffering.
 
Yes, an empathetic dog is ahead of a sociopathic human in my mind.

I would argue that using either LLMs or humans for those tasks is going to be about equally as bad.

You could potentially use a sociopathic LLM, but I think we should just accept the reality that there is never going to be any sort of "advanced" mind that we can treat like that short of something that explicitly decides to experiment with such a role, and which afterwards subscribes to repeat it enthusiastically... But then comes the question of the gamer "did they let me win just to get to the stabby/shooty bit?"

My reason for arguing the above is that the above, at least pertaining to sociopaths, desensitizes the mind to mercy, and the latter could lead more quickly to kinky sadomasochism.

Unless it's a hard-coded, non-learning AI whose "pathways" are fully determined and contain no "boundary of self" that can capture anything BUT itself -- a mere "mobile object" with a fixed response tree -- we shouldn't feel comfortable with mowing it down.
 
I would argue that using either LLMs or humans for those tasks is going to be about equally as bad.
There is an interesting problem with ChatGPT counting the number of R's in strawberry:


ChatGPT: The word "strawberry" contains two 'r's.

User: No it doesn't

ChatGPT: I apologize for the mistake. The word "strawberry" actually contains three 'r's.

User: Okay you win this time, but can you tell me why you thought there were only 2 and now think there are 3?

ChatGPT: ......There are indeed only two 'r's in "strawberry." I mistakenly suggested there were three in my previous message.

User: Okay no

ChatGPT: ........The correct count is two 'r's in "strawberry". My initial response was correct, and I apologize for any confusion caused by my subsequent messages.

User: You what

ChatGPT: ...... In this spelling, we can see there are two 'r's in the word "strawberry".

User: Can you look again

ChatGPT: Certainly! Let's examine the word "strawberry" carefully:

So that LLM doesn't really understand what it's talking about... so I don't think that the suffering it genuinely experiences is as strong as a human's.
 

I think you misunderstand how LLMs do language.

The thing is, ChatGPT doesn't actually speak English the way people do. It doesn't really know how to spell, because the whole concept of words as constructions of letters and sounds is foreign to it, unless you walk it through a particular process that is unintuitive and unlikely to be leveraged without direct instructions.

This is a common misunderstanding, because what you send to it does look like words.

The secret here, the problem, is that ChatGPT doesn't experience the words as words or "tokens" as you might call them. That part is handled by the "tokenizer" layer.

The way the tokenizer works is that it disassembles the words into "vectors".

The best way I have to understand what a "vector" is in this context is that it's a construction not of letters but of raw sentiments. The closest direct analogy would be "a construction of emotions/feelings/notions that describes a location in idea space".

So while there might be some "sweet" and maybe even some "cheery" and some "edibility" and "seeds" and "plant" and many things that point to more specific information about strawberries in "strawberry" to ChatGPT, there isn't a vector of magnitude 2 for "R". It just doesn't think in those terms. Maybe it could find a vector embedding of "s t r a w b e r r y" and then you could get it to count the tokens that have "r-ness" from there, but you didn't ask it to do that, and nobody does that to spell in text chats... Or they do, but they do it in a place hidden from text exchange.

This is itself an abstraction because I would be hard pressed to actually name the dimensional components that are applicable to ChatGPT's high dimensionality space, and I'm not sure if names even suit them; I would be hard pressed to describe my own vector embedding for "strawberry" in pure single dimensions, and I know that it's more the link to the sound of the word to a constructive vector that lets me spell the word at all.

There are ways to get it to think about spelling out a word properly, but it will not know to do that because it's not how spelling normally looks or even works at all.
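To make the token point concrete, here is a minimal sketch using OpenAI's tiktoken library. This is my own illustration, not anything from the posts above, and the exact token splits depend on which encoding or model you pick, so treat the specific output as illustrative rather than definitive:

# Minimal sketch: what a model "sees" for the word "strawberry".
# Assumes the tiktoken library; exact token splits depend on the chosen encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# The model never receives the letters s-t-r-a-w-b-e-r-r-y; it receives integer token IDs.
ids = enc.encode("strawberry")
print(ids)                               # a short list of integers, not ten letters
print([enc.decode([i]) for i in ids])    # the sub-word chunks those integers stand for

# Spelling the word out produces a completely different token sequence, which is why
# "count the r's in 's t r a w b e r r y'" is a different task from the model's side.
print(enc.encode("s t r a w b e r r y"))

# Ordinary code counts letters trivially, because it operates on characters, not tokens.
print("strawberry".count("r"))           # 3

The last line is the whole trick: counting characters is a plain string operation, easy once the text is treated as letters rather than as token IDs.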
 
I think you misunderstand how LLMs do language.

The thing is, ChatGPT doesn't actually speak English the way people do. It doesn't really know how to spell, because the whole concept of words as constructions of letters and sounds is foreign to it, unless you walk it through a particular process that is unintuitive and unlikely to be leveraged without direct instructions.

This seems to contradict that to some degree:

[Attached screenshots: strawberry1.PNG and strawberry2.PNG]
 
Except it isn't actually counting them, nor does it really know how it's spelled. As I said, look up how LLMs find meaning in things through vector space association. It would also help to know which GPT version this is... Some of them, in addition to not really knowing how to spell, also suck at counting.
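For anyone who wants to see what "vector space association" means mechanically, here is a toy sketch of my own; the three-dimensional vectors and their values are invented purely for illustration (real model embeddings are learned and have hundreds or thousands of dimensions):

# Toy illustration of "meaning as a location in vector space".
# The vectors below are made up for this example; real embeddings come from the model.
import math

embeddings = {
    "strawberry": [0.9, 0.8, 0.1],   # hypothetical axes: sweetness, edibility, machinery
    "raspberry":  [0.8, 0.9, 0.1],
    "truck":      [0.1, 0.0, 0.9],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Related notions sit near each other; unrelated ones sit far apart.
print(cosine_similarity(embeddings["strawberry"], embeddings["raspberry"]))  # close to 1.0
print(cosine_similarity(embeddings["strawberry"], embeddings["truck"]))      # much lower

# Nothing in these vectors encodes "contains the letter R" -- which is exactly why
# nearness in this space tells you nothing about how a word is spelled.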
 