• Welcome to the Internet Infidels Discussion Board.

How did human language originate?

Says you.

No.

Says clear logic.

A finite process can not slowly grow into an infinite process.
Any time you're thinking you can settle a synthetic question by clear logic rather than by observation, it's a red flag that you're probably making a mistake.

Oh great. This is just yet another 'untermensche doesn't grasp the concept of infinity' thread in disguise.

No thanks.

Oh great.

More ignorance and avoiding the topic at hand at all costs.

When exactly does your ability to comprehend new sentences end, if it is not infinite?

What magic configuration of words makes your ability to comprehend another sentence end?
"Exactly" how many atomic diameters high is the Eiffel Tower? Wait, you can't tell me? Whoa! It must be infinitely many atomic diameters high!

As you can see, inability to say exactly where something ends is a red herring. It's sufficient to put an upper bound on it. The Eiffel Tower is less than a googol atomic diameters high. Therefore it's finite. See how it works?

There are fewer than a million English words that you know. There are zero sentences in English more than a million words long that you are able to comprehend. Therefore, your ability to comprehend new sentences ends at some number of sentences less than 1,000,000^1,000,000.
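For concreteness, the size of that bound can be checked with a couple of lines of integer arithmetic (a sketch; both figures are of course only rough upper bounds, not measured values):

```python
# Upper bound on distinct word sequences a reader could ever meet:
# fewer than 10**6 known words, and no comprehensible sentence longer
# than 10**6 words, so fewer than (10**6) ** (10**6) sequences.
VOCAB = 10 ** 6      # generous upper bound on known English words
MAX_LEN = 10 ** 6    # generous upper bound on comprehensible sentence length

# (10**6) ** (10**6) == 10 ** (6 * 10**6), so the bound, written out in
# decimal, would have 6 * MAX_LEN + 1 digits -- vast, but finite.
digits_in_bound = 6 * MAX_LEN + 1
print(digits_in_bound)  # 6000001
```

A number with about six million digits is unimaginably large, but it is still an upper bound, which is all the finiteness argument needs.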
 
Bomb#20, that's resource limitations. untermensche's argument ignores resource limitations. In the absence of such limitations, there is an infinite number of possible sentences in many natural languages, likely most of them. This is because their grammar contains recursive features, and recursion allows infinite extent. Recursion appears here as the ability to construct sentence components that contain the same kind of components, like noun phrases that contain noun phrases.

Here is a simplified definition of a noun phrase in Backus-Naur form:
<noun-phrase> ::= <noun> | <adjective> <noun-phrase>

Note how <noun-phrase> appears in its own definition. That recursion makes it infinitely extensible.
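That rule can be turned directly into a recognizer; here is a minimal sketch, with made-up word lists for illustration, showing how the recursion in the rule becomes recursion in the code:

```python
# Recognizer for: <noun-phrase> ::= <noun> | <adjective> <noun-phrase>
# The word lists are illustrative placeholders, not a real lexicon.
NOUNS = {"berry", "drink", "banana"}
ADJECTIVES = {"rock", "listen", "green", "big"}

def is_noun_phrase(words):
    if not words:
        return False
    if len(words) == 1:
        return words[0] in NOUNS  # base case: a bare <noun>
    # recursive case: <adjective> followed by a smaller <noun-phrase>
    return words[0] in ADJECTIVES and is_noun_phrase(words[1:])

print(is_noun_phrase(["green", "banana"]))         # True
print(is_noun_phrase(["big", "green", "banana"]))  # True: any depth works
print(is_noun_phrase(["banana", "green"]))         # False
```

Because the function calls itself, the same two-line rule accepts adjective stacks of any length; no new rule is needed for each extra adjective.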

But let's consider how that might have started. Most animal language is "words" in isolation:
<noun-phrase> ::= <noun>

Chimpanzees can create two-word signs. Washoe, for instance, used such two-word phrases as "rock berry" (Brazil nut), "listen drink" (Alka-Seltzer), and "green banana" (cucumber).

So Washoe and some other apes could do
<noun-phrase> ::= <noun> | <adjective> <noun>

Adding more adjectives may give a grammar like this:
<noun-phrase> ::= <noun> | <adjective> <noun> | <adjective> <adjective> <noun>

But once one does that, it becomes easier to use a recursive version.
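The contrast can be made concrete with a tiny generator (the words are again invented placeholders): under the enumerated grammar, each extra adjective requires adding a new alternative by hand, while the single recursive rule already covers every depth:

```python
# One recursive rule, <noun-phrase> ::= <noun> | <adjective> <noun-phrase>,
# generates phrases of every depth; the enumerated version would need a
# separate hand-written alternative for each depth.
def noun_phrase(depth, adjective="big", noun="banana"):
    if depth == 0:
        return noun  # base case: bare <noun>
    return adjective + " " + noun_phrase(depth - 1, adjective, noun)

for d in range(4):
    print(noun_phrase(d))
# banana
# big banana
# big big banana
# big big big banana
```

So the recursive formulation is not an extra complication; once multi-adjective phrases are in use at all, it is the more economical description.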

So that's the big problem with Noam Chomsky's argument. He seems to think that it was a big jump to set up recursion when it isn't.
 
Looking for structure: Is the two-word stage of language development in apes and human children the same or different? | Patkowski | Studies in Second Language Learning and Teaching
Previously published corpora of two-word utterances by three chimpanzees and three human children were compared to determine whether, as has been claimed, apes possess the same basic syntactic and semantic capacities as 2-year old children. Some similarities were observed in the type of semantic relations expressed by the two groups; however, marked contrasts were also uncovered. With respect to the major syntactic mechanism displayed in two-word child language, namely word order, statistically significant differences were found in all three comparisons that were tested. These results indicate that chimpanzees do not exhibit the linguistic capacities of 2-year old children.

I'll quote where it describes what 2-year-old children are typically capable of.
As is well known, Brown (1973) characterized Stage I child language primarily in semantic terms, and found that a set of eight “prevalent” two-term semantic relations (agent + action; action + object; agent + object; action + locative; entity + locative; possessor + possession; entity + attribute; demonstrative + entity) could account for approximately 70% of Stage I production (p. 178). In further characterizing Stage I language, Brown also listed three “operations of reference” (nomination, recurrence, and denial), noted that the modalities of interrogation, negation, and the imperative have their beginnings in Stage I (p. 180), and further noted that “word order seems to be the major syntactic mechanism controlled in Stage I English” (p. 203).

Pacesova (1981) provides a very useful and succinct summary of the grammatical patterns of Stage I English that express Brown’s semantic relations:

All the children evidently work on the expression of subject-verb-object relationships. Words in these roles are combined in subject-verb, verb-object, subject-object and subject-verb-object. Other productive patterns for most of the children are noun-locative, adjective-noun and demonstrative pronoun-noun. Word order is fairly stable, though inversion may occur in emotional speech. Personal pronouns are as yet rare, or completely absent. The grammar lacks provisions for copulas, prepositions and numerals. Inflection is not utilized. The constructions are simple and consist mostly of two or three morphemes. (p. 24)
After describing the methods and results in detail,
So far, the data examined in this study have led to two conclusions: (a) no statistically significant differences were found in the frequency with which three chimpanzees and three 2-year-old children expressed five types of semantic relations in their two-word utterances, (b) significant differences in word ordering were found in the three patterns that were tested. The general conclusion, it was suggested, is that while human and nonhuman primates may share certain cognitive capacities, the ability to process language syntactically is not one of them.
There are other differences. Chimps never made any possessor-possessed sentences, though the human children studied made lots of them.
Another marked contrast involves the sheer level of linguistic production: It only took child transcripts representing from 1 to 3.5 hours of conversation to yield a two-word corpus comparable in size to the chimpanzee corpus which, it will be recalled, was gathered over 5 months. Question making in general, and wh-interrogation in particular, also provide stark contrasts.
Chimps seem to understand questions, but unlike human children, they never generate questions.

The chimps studied had made lots of two-word compounds, something that the human children seldom did.
... 144 or 23.5% of the total chimpanzee corpus of 614 two-word, lexigram-only utterance types presented in Table 3 above consisted of “conjoined actions” (e.g., GRAB PLAY), “conjoined attributes” (e.g., QUIET SCARE), “conjoined entities” (e.g., BANANA RAISIN), and “conjoined locatives” (e.g., GROUPROOM BEDROOM).
After describing chimps' very limited production of three-word sentences,
... both studies essentially conclude that the two-word stage of language development in children and apes is different, and lend support to the contention by Hauser et al. (2014) that “animal communication systems have thus far failed to demonstrate anything remotely like our systems of phonology, semantics, and syntax, and the capacity to process even artificially created stimuli is highly limited” (pp. 8-9).

So chimps don't get very far beyond single "words".
 
Linguistics 001 -- Lecture 20 -- First Language Acquisition

From that page, with names of stages simplified:
[table="class: grid"]
[tr]
[td]Stage[/td]
[td]Typical age[/td]
[td]Description[/td]
[/tr]
[tr]
[td]Babbling[/td]
[td]6-8 months[/td]
[td]Repetitive CV patterns[/td]
[/tr]
[tr]
[td]One-word stage[/td]
[td]9-18 months[/td]
[td]Single open-class words or word stems[/td]
[/tr]
[tr]
[td]Two-word stage[/td]
[td]18-24 months[/td]
[td]"Mini-sentences" with simple semantic relations[/td]
[/tr]
[tr]
[td]Telegraphic stage or early multiword stage[/td]
[td]24-30 months[/td]
[td]"Telegraphic" sentence structures of lexical rather than functional or grammatical morphemes[/td]
[/tr]
[tr]
[td]Later multiword stage[/td]
[td]30+ months[/td]
[td]Grammatical or functional structures emerge[/td]
[/tr]
[/table]

Stages of Acquisition of first Language expands on this list into later years. Note: these stages are for English learners. What has been done on language acquisition for other languages, especially grammatically very different ones?

So chimps make it only halfway into the two-word stage.
 
Linguistics 001 -- Communication: a Biological Perspective
Starts with "Where did human language come from, and why?"

Notes how the top of the voice box has moved down in human evolution. In chimps, it opens at the base of the nasal cavity, at the back of the tongue, while in us, it opens below the tongue. Babies of our species start out with the apelike configuration. While the lowered position is good for making lots of speech sounds, it makes us more vulnerable to choking.

After noting that we have big brains for our size, the page's authors then note how we are born premature by mammalian standards.

Then communication modes. The authors conclude that using sounds is both fast and long-range compared to its alternatives.

Then Robin Dunbar's theory. He's the one who came up with Dunbar's number for human group sizes: 150. He proposes that language emerged as an alternative to grooming, a way of keeping track of each other.

Then Terence Deacon's social-contract or negotiation theory.
Deacon then points out that human mating arrangements, though diverse across societies, share some characteristics that make our species nearly unique: "cooperative, mixed-sex social groups, with significant male care and provisioning of offspring, and relatively stable patterns of reproductive exclusion, mostly in the form of monogamous relationships." According to Deacon, "reproductive pairing is not found in exactly this pattern in any other species." The reason this pattern is not found, he argues, is that it's a recipe for sociosexual disaster: "the combination of provisioning and social cooperation produces a highly volatile social structure that is highly susceptible to disintegration."
This system is vulnerable to cheaters, men who make women pregnant without taking care of the resulting children.

Trying to keep such cheating from happening then provoked the development of language.
Needless to say, all of these proposals are speculative, although Deacon and Dunbar provide a considerable range of supporting fact and argument. There appears to be fairly general current agreement, at least, that humans are extensively adapted for language, and that establishment and maintenance of social structure was a key source of selective pressure in the evolutionary development of human linguistic adaptations.
 
Bomb#20, that's resource limitations. untermensche's argument ignores resource limitations. In the absence of such limitations, there is an infinite number of possible sentences in many natural languages,
But his argument doesn't merely ignore them -- it relies on equivocating about whether resource limitations are taken into account. Look at what happens if, instead of fudging the resource limitation issue, he were explicit about whether resource limits are taken into account. Then he'd get one of the following three arguments:

1. A human can produce and comprehend infinite statements. A finite process can not slowly grow into an infinite process. Therefore our ability did not arise by slow growth.

2. A human could produce and comprehend infinite statements if it weren't for resource limits. A finite process can not slowly grow into an infinite process. Therefore our ability did not arise by slow growth.

3. A human could produce and comprehend infinite statements if it weren't for resource limits. A finite process can not slowly grow into a process that would be infinite if it weren't for resource limits. Therefore our ability did not arise by slow growth.

So the question is, which of the above is a sound argument? And the answer is, none of them.

Argument 1 is unsound because the first premise is false. Argument 2 is unsound because it's formally invalid -- the conclusion doesn't follow from the premises. Argument 3 is unsound because the second premise is false.
 
I agree that he has not addressed resource limitations very well. They affect not only infinite spaces of possibilities, but also large finite ones.

So I think that untermensche's essential point is: what would happen in the absence of resource constraints?

My own position is that as one increases the complexity of one's language, it eventually becomes easier to describe some of it with recursive rules, and recursion is where untermensche's infinities come from. These rules need not be explicit, but implicit in the parsing and generating mechanisms.
 
Recursion may lead to hypothetical infinities; but in reality, the limit of human comprehension is not only finite, but is fairly small.

Real humans can only cope with a few cycles of recursion - more than a dozen is plenty for anyone.

Unless a dozen has been recently redefined as 'infinity', the idea that human language is infinite is laughable.
 
No.

Says clear logic.

A finite process can not slowly grow into an infinite process.
Any time you're thinking you can settle a synthetic question by clear logic rather than by observation, it's a red flag that you're probably making a mistake.

I'm not sure what a "synthetic" question is, but I know when people throw in unnecessary and meaningless jargon to try to dress up their bad ideas.

You must not comprehend the point. It is all derived from the most simple observations.

There is a process of animal communication. Animals use gesture and sound and other modalities to convey specific information to other members of the species. The sounds and the gestures are universal across the species.

This is a finite process. Somehow "programmed" into the brain the way nest building is "programmed" into birds.

Then there is the process of human language. Really a "program" to create a language.

It is not a finite process. In theory infinite different "languages" could arise, each containing infinite phrases or sentences. I agree understanding this takes some extrapolation from observation, but not much.

If you have a finite process you can add to it and add to it and add to it.

It will never become an infinite process.

For a finite process to become an infinite process fundamental change must occur.

"Exactly" how many atomic diameters high is the Eiffel Tower? Wait, you can't tell me? Whoa! It must be infinitely many atomic diameters high!

So you're saying that there is in theory some limit, not merely the limits of time and the effects of aging, to the number of books you could read.

After some number of books your brain will stop functioning.

Absurd!!!

With what do you back that clear nonsense up?
 
Recursion may lead to hypothetical infinities; but in reality, the limit of human comprehension is not only finite, but is fairly small.

Real humans can only cope with a few cycles of recursion - more than a dozen is plenty for anyone.

Unless a dozen has been recently redefined as 'infinity', the idea that human language is infinite is laughable.

"Human comprehension" is not the ordinary ability of humans to understand infinite sentences.

Or the clear understanding that infinite human "languages" were possible.

To not see you are dealing with something entirely different from a handful of squawks always meaning the same thing is astounding.

A real blindness.

A true limit to human comprehension.
 
Recursion may lead to hypothetical infinities; but in reality, the limit of human comprehension is not only finite, but is fairly small.

Real humans can only cope with a few cycles of recursion - more than a dozen is plenty for anyone.

Unless a dozen has been recently redefined as 'infinity', the idea that human language is infinite is laughable.

"Human comprehension" is not the ordinary ability of humans to understand infinite sentences.

Or the clear understanding that infinite human "languages" were possible.

To not see you are dealing with something entirely different from a handful of squawks always meaning the same thing is astounding.

A real blindness.

A true limit to human comprehension.

So, now you equivocate "not infinite" with "same as squawks"? Such hyperbole is unnecessary from a person with a sound argument. So I guess you will continue with the hyperbole.
 
Any time you're thinking you can settle a synthetic question by clear logic rather than by observation, it's a red flag that you're probably making a mistake.

I'm not sure what a "synthetic" question is, but I know when people throw in unnecessary and meaningless jargon to try to dress up their bad ideas.
"Synthetic" is the opposite of "analytic". An analytic question is a question about math, logic, the relationships between concepts, the definitions of the words you're using, and so forth. A synthetic question is a question about what's going on out in the real world.

You must not comprehend the point. It is all derived from the most simple observations.

There is a process of animal communication. Animals use gesture and sound and other modalities to convey specific information to other members of the species. The sounds and the gestures are universal across the species.

This is a finite process. Somehow "programmed" into the brain the way nest building is "programmed" into birds.

Then there is the process of human language. Really a "program" to create a language.
Yes. A finite program. You can get a finite program from a simpler finite program by taking small steps.

It is not a finite process. In theory infinite different "languages" could arise, each containing infinite phrases or sentences. I agree understanding this this takes some extrapolation from observation, but not much.
You have a theory that infinite different languages could arise. That's why you keep saying "in theory". How do you know your theory is right? What observations imply human language is an infinite process? You're right, it takes some extrapolation from observation. How much extrapolation? It takes infinite extrapolation from observation. That's how much you're calling "not much".

If you have a finite process you can add to it and add to it and add to it.

It will never become an infinite process.
Okay, if you say so. But "an infinite process" and "a finite process that's infinite in theory" are not the same thing. Therefore you don't have grounds to assume that what's true of one must be true of the other. What reason do you have to claim that "If you have a finite process you can add to it and add to it and add to it but it will never become a finite process that's infinite in theory."? That's not at all obvious.

In fact, the contrary is obvious: it's obvious that you can incrementally add to a finite process and eventually it will become a finite process that's infinite in theory. Consider the finite process of shooting rockets into the sky. You shoot one up at 5,000 mph. It hits the ground and stops. So you shoot one up at 10,000 mph. It hits the ground and stops. So you shoot one up at 15,000 mph. It hits the ground and stops. So you shoot one up at 20,000 mph. It hits the ground and stops. So you shoot one up at 25,000 mph. It never hits the ground. It just keeps going and going. In theory it will keep going for ever and you have incrementally reached an infinite process. Of course in practice something will interrupt the process; but we're talking about what happens in theory.
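As an aside, the 25,000 mph figure in that analogy is essentially Earth's escape velocity, and the threshold can be checked with a quick back-of-the-envelope calculation (standard gravitational parameter and mean radius, rounded):

```python
import math

GM = 3.986004418e14  # Earth's standard gravitational parameter, m^3/s^2
R = 6.371e6          # Earth's mean radius, m

# Escape velocity: the speed at which kinetic energy equals the
# gravitational binding energy, v = sqrt(2 * GM / R).
v_escape_ms = math.sqrt(2 * GM / R)
v_escape_mph = v_escape_ms / 0.44704  # 1 mph = 0.44704 m/s
print(round(v_escape_mph))            # roughly 25,000 mph
```

Below that threshold every launch, however fast, comes back down; at or above it the trajectory qualitatively changes, which is exactly the point of the analogy.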

"Exactly" how many atomic diameters high is the Eiffel Tower? Wait, you can't tell me? Whoa! It must be infinitely many atomic diameters high!

So you're saying that there is in theory some limit, not merely the limits of time and the effects of aging, to the number of books you could read.

After some number of books your brain will stop functioning.

Absurd!!!

With what do you back that clear nonsense up?
But I didn't say any such nonsense. You make up nonsense and impute it to me because you have no answer to what you actually read. I said there's a limit to how many different sentences you can understand. After some number of books, either the books will start repeating things you already read, or else they will start saying things you can't understand. You can't understand a one-million-word sentence, no matter how long you live, and no matter how successfully future medical science prevents your brain from deteriorating.
 
First evidence that birds tweet using grammar | New Scientist
Bengal finches have their own versions of such rules – known as syntax – says Kentaro Abe of Kyoto University, Japan. “Songbirds have a spontaneous ability to process syntactic structures in their songs,” he says.

To show a sense of syntax in the animals, Abe’s team played jumbled “ungrammatical” remixes of finch songs to the birds and measured the response calls.
They also found out that the birds had to learn grammatical rules, and that a certain region of the birds' brains was involved in recognizing these rules.

Journal article: Songbirds possess the spontaneous ability to discriminate syntactic rules : Nature Neuroscience : Nature Research

However, Birdsong neurolinguistics: songbird context-free grammar claim is premature

Whale song reveals sophisticated language skills | New Scientist
Humpback whales use their own syntax – or grammar – in the complex songs they sing, say researchers who have developed a mathematical technique to probe the mysteries of whale song.

...
Male humpback whales produce songs that last anywhere from about six to 30 minutes. These vocalisations vary greatly across seasons, and during breeding periods they are thought to help attract female partners. Their eerie sound and patterns have captured the attention of marine biologists for decades.
They analyzed some whale songs with software that looks for evidence of grammatical structure and hierarchy, and they found it.

Journal article: Information entropy of humpback whale songs
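The entropy method named in that paper's title can be illustrated in miniature. Here is a sketch computing the Shannon entropy of a symbol sequence; the "song units" below are invented placeholders, not real whale-song data:

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A highly repetitive "song" carries less information per unit
# than a varied one using the same number of units.
repetitive = list("aaaaaaab")
varied = list("abcdefgh")
print(shannon_entropy(repetitive))  # low: about 0.54 bits per symbol
print(shannon_entropy(varied))      # 3.0 bits: maximal for 8 symbols
```

Intermediate entropy values, neither rigidly repetitive nor random, are one signature of rule-governed structure, which is the kind of evidence the whale-song analysis looked for.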
 
Elephant-Language Research -- Infrasound, Forest-Elephant Language, An Elephant Dictionary?

Elephants Communicate in Sophisticated Sign Language, Researchers Say – National Geographic Society (blogs)

SpeakDolphin - Research Projects > The Discovery of Dolphin Language
Researchers in the United States and Great Britain have made a significant breakthrough in deciphering dolphin language in which a series of eight objects have been sonically identified by dolphins. Team leader, Jack Kassewitz of SpeakDolphin.com, ‘spoke’ to dolphins with the dolphin’s own sound picture words. Dolphins in two separate research centers understood the words, presenting convincing evidence that dolphins employ a universal “sono-pictorial” language of communication.

The team was able to teach the dolphins simple and complex sentences involving nouns and verbs, revealing that dolphins comprehend elements of human language, as well as having a complex visual language of their own. Kassewitz commented, “We are beginning to understand the visual aspects of their language, for example in the identification of eight dolphin visual sounds for nouns, recorded by hydrophone as the dolphins echolocated on a range of submersed plastic objects.”

Dolphins also name themselves:  Signature whistle, Dolphins Have "Names," Respond When Called

Most of this research has been done on bottlenose dolphins (Tursiops truncatus).

What Are Killer Whales Saying? – National Geographic Society (blogs)
What we don’t know: We don’t know if they have words or language. We think they have signature calls (names) and recognize each other. We do know they can hear each other over tens of miles (about 30 miles, though some large whales such as fin and blue whales can hear each other over hundreds of miles).
Killer whales (orcas, Orcinus orca) are the biggest of the dolphins.
 
Really cool articles there on Animal LANGUAGE. Thank you.

It's so ironic how creationists dismiss the beautiful complexities of Nature in favor of an 'unknowable' magic-making knowledge-stopper. And THEN they accuse the atheist of "believing in nothing"... while they allegedly are the ones with an "open mind". Hysterical how much reality they are missing... and dangerous, in many cases.
 
Returning to theories of the origin of language, many animal sounds fit the uh-oh theory, and some of them fit the hey-you theory. Dolphins' "audio pictures" of objects fit the bow-wow theory.

I've also found Alex (parrot). That African gray parrot understood about 100 words and was known to ask questions, something that even the most proficient chimpanzee has never been known to do. However, Alex seemed to be unusually skilled by parrot standards, or at least unusually trainable.

Wikipedia has an article, Talking bird, listing several species with members that have demonstrated an ability to imitate human speech. Parrots and related birds (Psittaciformes) are well-known for their abilities to imitate sounds, including human speech. Some songbirds (Passeriformes > Passeri) can also do that, like mockingbirds (Mimidae), named for their ability. This does not count as full-scale language ability, but it does show that these birds can learn to make different sounds in well-defined sequences.

Amniota
  • Sauropsida > Neoaves
    • Psittaciformes (parrots, parakeets, etc.)
    • Passeriformes > Passeri or Oscines (songbirds)
  • Synapsida > Eutheria
    • Afrotheria > Proboscidea (elephants)
    • Boreoeutheria
      • Euarchontoglires > Hominidae (great apes: humans, chimps, gorillas, orangutans)
      • Laurasiatheria > Cetacea
        • Odontoceti (toothed whales) > Delphinidae (oceanic dolphins)
          • Tursiops truncatus (bottlenose dolphin)
          • Orcinus orca (killer whale)
        • Mysticeti (baleen whales) > Megaptera novaeangliae (humpback whale)
So limited forms of language ability evolved several times.
 
"Human comprehension" is not the ordinary ability of humans to understand infinite sentences.

Or the clear understanding that infinite human "languages" were possible.

To not see you are dealing with something entirely different from a handful of squawks always meaning the same thing is astounding.

A real blindness.

A true limit to human comprehension.

So, now you equivocate "not infinite" with "same as squawks"? Such hyperbole is unnecessary from a person with a sound argument. So I guess you will continue with the hyperbole.

This has been looked at.

All known forms of animal communication involve finite "signals" that always "signify" the same thing.

If that sounds like human language to you then you do not have one.

Humans can make sense of sentences they have never encountered. An infinite amount of them.

There is nothing beyond aging and death which prevents it. The ability is infinite, even if the animal with the ability is not.
 
"Synthetic" is the opposite of "analytic". An analytic question is a question about math, logic, the relationships between concepts, the definitions of the words you're using, and so forth. A synthetic question is a question about what's going on out in the real world.

Like I said, unnecessary posturing. Worthlessness.

You can get a finite program from a simpler finite program by taking small steps.

How do two finite programs help us? Why even mention it?

And I'm not talking about a program. I'm talking about a natural process.

Whether it works like some "program" is unknown.

You have a theory that infinite different languages could arise. That's why you keep saying "in theory".

It's not a theory. It's a logical conclusion.

Please tell me what (in your theory) would prevent it.

What observations imply human language is an infinite process?

Why would you even think it is not?

Do you imagine you will pick up a book today and not be able to make sense of it?

Do you imagine that if it were possible to give you infinite books today somewhere along the line your ability to read would stop functioning?

And of course I'm not saying any one person can understand everything. But within their understanding infinite expressions could be understood. In theory infinite books could be read.

That is the human language capacity. An ability to deal with infinite bits of information.

To think it is some crude growth from animal communication is absurd.

But humans love to follow absurdity.
 
So limited forms of language ability evolved several times.

Extremely limited forms of animal communication have arisen.

Human language has arisen once.

But some whales and a few songbirds do have something similar in sound to human language (no meaning can be derived from either; we cannot converse with whales or songbirds), but obviously neither of these two systems could be a precursor to human language, which arose about 100,000 to 200,000 years ago.
 
So limited forms of language ability evolved several times.

Extremely limited forms of animal communication have arisen.
Limited compared to human language, yes.
Human language has arisen once.
With the possible exception of bottlenose dolphins and maybe also some other species in Delphinidae, like killer whales.
But some whales and a few songbirds do have something similar in sound to human language (no meaning can be derived from either; we cannot converse with whales or songbirds), but obviously neither of these two systems could be a precursor to human language, which arose about 100,000 to 200,000 years ago.
What justifies that date?

The singing-before-speech theory is a much nicer one than Noam Chomsky's quasi-creationist big-jump theory, IMO. It proposes that our ancestors went through a phase much like the one that parrots, songbirds, and humpback whales are in now. That is, being capable of making nontrivial sequences of sounds, but without giving those sounds much semantic content. That content would come later.
 