
I have a question: just how general is general intelligence?

No, it isn't. It's different processes and different parts of the brain.


Nope. You clearly have zero knowledge of neuroscience. Take a person who is a novice on both plate tectonics and a type of complex machinery. Have the person reason about a causal system and make predictions based upon hypothetical changes to one part of the system. Their brain activation will look very similar and similarly different from a person with a different IQ doing the same two tasks.

That's exactly why a well-constructed IQ test contains as many different kinds of mental tasks as possible - because the same individual has a different performance on the different tasks. How could that be the case if it was an identical process each time?

Wrong again. Well-constructed measures of g use only mental tasks that target similar basic aspects of information processing that are common to (general to) many specific tasks. For example, they do not throw in tasks of mentally rotating objects in space, or ability to discriminate acoustic sounds, or tests of creative novelty. The multiple tasks are selected on highly theoretical grounds related to the type of cognitive sub-processes they require.

The g score is not merely an average of these tests, but rather an extracted factor score that reflects only the ways in which the tasks are similar due to their reliance on the same sub-processes and brain regions. Any task will have sources of variance that are somewhat unique to it. That variance will not correlate with variance in the other measures, and thus is excluded from g. For example, a person may have had less exposure to language, so a reasoning test that requires logical manipulation of verbal tokens will capture some of that. But it also captures more basic abilities to manipulate mental tokens of any type, and thus it shares variance with a test like Raven's, which is all novel visual symbols with no need for verbal comprehension or responding. This shared variance is only what gets counted in g, and it is reflected in shared brain-region activity.

Some tasks capture g very well all by themselves, such as Raven's. Because it uses stimuli that are novel to almost everyone and does not rely upon verbal exposure or specialized knowledge, it is a purer measure of the basic ability to control attention while processing new information, recognize patterns and hold them in memory while you process more information, and then draw and verify predictive inferences. It correlates at about .80 with g computed from batteries of various tasks.
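As a minimal sketch of what "extracted factor score" means in practice (invented numbers and Python's scikit-learn, not any actual IQ battery):

[CODE]
# Simulate four tasks that each tap one shared ability (g) plus
# task-specific noise, then extract the single common factor.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 1000
g = rng.normal(size=n)                     # shared ability (not directly observed)
loadings = np.array([0.8, 0.7, 0.6, 0.5])  # how strongly each task taps g
scores = g[:, None] * loadings + rng.normal(size=(n, 4))  # observed task scores

fa = FactorAnalysis(n_components=1)
g_hat = fa.fit_transform(scores).ravel()   # the extracted factor score

# The factor score tracks the shared ability, not the task-specific noise.
print(round(abs(np.corrcoef(g, g_hat)[0, 1]), 2))  # high, roughly 0.9 here
[/CODE]

The point of the extraction is visible in the last line: the factor score correlates strongly with the simulated shared ability even though no single observed test measures it directly.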


It's because the kinds of mental processes that contribute to the variance in g are so basic and necessary for most forms of information processing, and for anything that could be called reasoning or problem solving, that any specific "real world" task involving reasoning and problem solving would be impacted by it.

That's certainly the assumption. However, in the absence of any kind of test or measure for 'basic' mental processes, IQ tests have to settle for ordinary task measurement, in as many ways as they can, so that it can be assumed the measure relates to something fundamental.

There are plenty of tests for basic mental processes. Researchers can and do regularly verify predictions based upon those hypothesized processes. They are able to predict the degree to which various tasks will correlate with each other, and their relative loading onto a g factor, based upon an analysis of each task and the kinds of sub-processes it would require. They also use computer simulations to test predictions based upon those presumed sub-processes and are able to model variance in human performance very well.


Given the marked absence of any kind of 'basic processing' module in the brain that would correspond to this measure of mental capacity, it's more often seen by the scientists involved as a convenient abstraction.

You know what has the greatest marked absence of any evidence? Domain-specific processing modules.
The IQ number itself is of course an abstraction rather than a quantity that directly corresponds to a specific brain feature. That is because the brain (and cognition) is not modular. That thinking is about 30 years out of date. There isn't even a "language" module of the brain, despite the persistence of this outdated notion in the media. The absence of a g module isn't evidence against g, but rather reflects the fact that the brain doesn't have modules. The shared mental processes in the tests used to compute the g factor arise (like all cognition) from a complex interaction of many aspects of various distributed but networked brain regions.
Going back to the two causal reasoning tasks on plate tectonics and machine processes: have the person go back and forth between reasoning causally about the two domains while you scan their brain. Then take the scans for each trial on each task and randomly pair them with each other. Neuroscience experts would not be able to tell whether the two images were from the same or different tasks, or which task they were on. In contrast, have them try to memorize the parts of the machine or think of creative novel uses for the machine, and their brain scans would look very different from when they are causally reasoning about the machine's processes. That's because it isn't the subject matter, domain, or topic that determines which brain regions are involved, but the basic sub-processes that are engaged in the task.
 
 Theory of multiple intelligences - Howard Gardner's 1983 theory:
  • Musical–rhythmic–harmonic
  • Visual–spatial
  • Verbal–linguistic
  • Logical–mathematical
  • Bodily–kinesthetic
  • Interpersonal
  • Intrapersonal
  • Naturalistic
  • Existential
IQ tests cover only some of these -- and cover only the easier-to-measure parts of them.

I've also found  g factor (psychometrics). As to whether it's a statistical artifact, one can test that hypothesis by seeing what one gets with uncorrelated test scores.
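As a rough sketch of that test (simulated scores, Python): run the same extraction on uncorrelated and on positively correlated data and compare how much variance the first factor carries.

[CODE]
# If g were a pure artifact of the extraction procedure, uncorrelated
# scores should yield an equally strong first factor. They don't.
import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 6
uncorrelated = rng.normal(size=(n, k))
shared = rng.normal(size=(n, 1))
correlated = 0.7 * shared + 0.7 * rng.normal(size=(n, k))

for name, data in [("uncorrelated", uncorrelated), ("correlated", correlated)]:
    # Largest eigenvalue of the correlation matrix = share of variance
    # carried by the first factor.
    top = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[-1]
    print(name, round(top / k, 2))
# uncorrelated: ~0.17 (no common factor); correlated: ~0.6
[/CODE]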
 
Maybe this article helps:

Abstract
''We hypothesized that individual differences in intelligence (Spearman's g) are supported by multiple brain regions, and in particular that fluid (gF) and crystallized (gC) components of intelligence are related to brain function and structure with a distinct profile of association across brain regions. In 225 healthy young adults scanned with structural and functional magnetic resonance imaging sequences, regions of interest (ROIs) were defined on the basis of a correlation between g and either brain structure or brain function. In these ROIs, gC was more strongly related to structure (cortical thickness) than function, whereas gF was more strongly related to function (blood oxygenation level-dependent signal during reasoning) than structure. We further validated this finding by generating a neurometric prediction model of intelligence quotient (IQ) that explained 50% of variance in IQ in an independent sample. The data compel a nuanced view of the neurobiology of intelligence, providing the most persuasive evidence to date for theories emphasizing multiple distributed brain regions differing in function.''
 
 Theory of multiple intelligences - Howard Gardner's 1983 theory:
  • Musical–rhythmic–harmonic
  • Visual–spatial
  • Verbal–linguistic
  • Logical–mathematical
  • Bodily–kinesthetic
  • Interpersonal
  • Intrapersonal
  • Naturalistic
  • Existential
IQ tests cover only some of these -- and cover only the easier-to-measure parts of them.


Something you missed from your same Wiki link is that Gardner's theory is not widely accepted in cognitive science, on the grounds that there is a severe lack of empirical evidence for it and that he abuses and distorts the meaning of "intelligence" to the point of making it meaningless, conflating specific knowledge with learning ability. In short, the things on his list that qualify as "intelligence" in the "ability to learn" sense do load onto g.

Lack of empirical evidence
According to a 2006 study many of Gardner's "intelligences" correlate with the g factor, supporting the idea of a single dominant type of intelligence. According to the study, each of the domains proposed by Gardner involved a blend of g, of cognitive abilities other than g, and, in some cases, of non-cognitive abilities or of personality characteristics.[6]

Linda Gottfredson (2006) has argued that thousands of studies support the importance of intelligence quotient (IQ) in predicting school and job performance, and numerous other life outcomes. In contrast, empirical support for non-g intelligences is lacking or very poor. She argued that despite this the ideas of multiple non-g intelligences are very attractive to many due to the suggestion that everyone can be smart in some way.[7]

A critical review of MI theory argues that there is little empirical evidence to support it:

To date there have been no published studies that offer evidence of the validity of the multiple intelligences. In 1994 Sternberg reported finding no empirical studies. In 2000 Allix reported finding no empirical validating studies, and at that time Gardner and Connell conceded that there was "little hard evidence for MI theory" (2000, p. 292). In 2004 Sternberg and Grigorenko stated that there were no validating studies for multiple intelligences, and in 2004 Gardner asserted that he would be "delighted were such evidence to accrue",[36] and admitted that "MI theory has few enthusiasts among psychometricians or others of a traditional psychological background" because they require "psychometric or experimental evidence that allows one to prove the existence of the several intelligences."[36][37]

The same review presents evidence to demonstrate that cognitive neuroscience research does not support the theory of multiple intelligences:

... the human brain is unlikely to function via Gardner’s multiple intelligences. Taken together the evidence for the intercorrelations of subskills of IQ measures, the evidence for a shared set of genes associated with mathematics, reading, and g, and the evidence for shared and overlapping "what is it?" and "where is it?" neural processing pathways, and shared neural pathways for language, music, motor skills, and emotions suggest that it is unlikely that each of Gardner’s intelligences could operate "via a different set of neural mechanisms" (1999, p. 99). Equally important, the evidence for the "what is it?" and "where is it?" processing pathways, for Kahneman’s two decision-making systems, and for adapted cognition modules suggests that these cognitive brain specializations have evolved to address very specific problems in our environment. Because Gardner claimed that the intelligences are innate potentialities related to a general content area, MI theory lacks a rationale for the phylogenetic emergence of the intelligences.[37]
 
Maybe this article helps:

Abstract
''We hypothesized that individual differences in intelligence (Spearman's g) are supported by multiple brain regions, and in particular that fluid (gF) and crystallized (gC) components of intelligence are related to brain function and structure with a distinct profile of association across brain regions. In 225 healthy young adults scanned with structural and functional magnetic resonance imaging sequences, regions of interest (ROIs) were defined on the basis of a correlation between g and either brain structure or brain function. In these ROIs, gC was more strongly related to structure (cortical thickness) than function, whereas gF was more strongly related to function (blood oxygenation level-dependent signal during reasoning) than structure. We further validated this finding by generating a neurometric prediction model of intelligence quotient (IQ) that explained 50% of variance in IQ in an independent sample. The data compel a nuanced view of the neurobiology of intelligence, providing the most persuasive evidence to date for theories emphasizing multiple distributed brain regions differing in function.''

For those inclined to misrepresent what this research means: the "multiple" being referenced is not supportive of Gardner-style multiple intelligences, but rather means that gF and gC are each the product of multiple interacting brain regions. That isn't surprising, since even singular tasks involve distributed activation, and notions of "modules" are wrong-headed and overly simplistic. Looking for the one spot in the brain that is "g" (the g-spot :) is like looking for the one spot in the body that makes some people run faster than others.
The most telling thing about the cited work is that it shows there are reliable networks of activation whose variance corresponds to variance in measured IQ.
IOW, brain scans of people taking the IQ test can be used to predict their relative scores. That would be impossible if IQ scores were the meaningless "statistical artifact" that some here are proclaiming them to be.
 
I thought it would be obvious that every aspect of 'conscious activity' - perception, thought, comprehension, etc - is the product of multiple interacting brain regions...
 
I thought it would be obvious that every aspect of 'conscious activity' - perception, thought, comprehension, etc - is the product of multiple interacting brain regions...

This is true, which is why the critique that some have offered of g, that it doesn't correspond to some particular physical "module" in the brain, is invalid. However, if g were just a statistical artifact, then there would be no predictable relationship between any multi-region activation pattern and computed g scores. Each person would have multiple interacting brain regions active while taking the test, but the pattern found in people with different g scores would be largely random. Only if g scores reflect the common variance across parts of the test, due to their shared reliance on particular neurological networks common to information processing, would the observed correlations occur.
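The logic here can be put as a permutation test. A sketch with simulated data (Python; the feature matrix is a stand-in for activation patterns, not real scans): if the pattern-score relationship were an artifact, shuffling the g scores should cost nothing.

[CODE]
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, v = 200, 40
patterns = rng.normal(size=(n, v))   # stand-in for activation features
g_scores = patterns[:, :8].sum(axis=1) + rng.normal(size=n)  # linked to pattern

real = cross_val_score(RidgeCV(), patterns, g_scores, cv=5).mean()
shuffled = cross_val_score(RidgeCV(), patterns, rng.permutation(g_scores), cv=5).mean()
print(round(real, 2), round(shuffled, 2))  # clearly positive R^2 vs. ~0 or below
[/CODE]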
 
Nope. You clearly have zero knowledge of neuroscience.

Hm.. If you say so. I think we're talking at cross purposes, but maybe I'm just ignorant?

Take a person who is a novice on both plate tectonics and a type of complex machinery. Have the person reason about a causal system and make predictions based upon hypothetical changes to one part of the system. Their brain activation will look very similar and similarly different from a person with a different IQ doing the same two tasks.

This sounds like a fascinating experiment, but I can't make much of your description (very similar and similarly different?). Can you give a link, or failing that, a citation, so we're all on the same page?

That's exactly why a well-constructed IQ test contains as many different kinds of mental tasks as possible - because the same individual has a different performance on the different tasks. How could that be the case if it was an identical process each time?

Wrong again. Well-constructed measures of g use only mental tasks that target similar basic aspects of information processing that are common to (general to) many specific tasks. For example, they do not throw in tasks of mentally rotating objects in space, or ability to discriminate acoustic sounds, or tests of creative novelty. The multiple tasks are selected on highly theoretical grounds related to the type of cognitive sub-processes they require.
...
This shared variance is only what gets counted in g, ...

Well sort of. The theory is highly influenced by the results of the scores, and the ideas of which tasks are related and unrelated are based principally on the results of these tests. What you're doing, in practice, is taking a set of tests where the results vary greatly just as I described, measuring the correlation, and then creating a conceptual category g that includes all of the correlated bits and none of the uncorrelated bits, thus creating a common element between them. The problem is that this isn't, in itself, evidence of the existence of g. You can perform a similar operation on any cross-correlated data set.
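To make that last point concrete, a toy sketch (invented, non-cognitive data, Python): two unrelated causes, not one shared process, generate the correlations, yet the same extraction still produces a leading "common factor".

[CODE]
import numpy as np

rng = np.random.default_rng(3)
n = 1000
motivation = rng.normal(size=n)   # affects tasks 1 and 2
anxiety = rng.normal(size=n)      # affects tasks 2 and 3
t1 = motivation + 0.5 * rng.normal(size=n)
t2 = 0.7 * motivation + 0.7 * anxiety + 0.5 * rng.normal(size=n)
t3 = anxiety + 0.5 * rng.normal(size=n)

R = np.corrcoef(np.column_stack([t1, t2, t3]), rowvar=False)
top = np.linalg.eigvalsh(R)[-1]
print(round(top / 3, 2))  # ~0.6 of the variance, with no single shared cause
[/CODE]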

The second problem is that, by pruning away various abilities and concepts from g, it becomes increasingly unclear what g is supposed to comprise. If it doesn't, for example, comprise creativity, specialist processing skills, and various other factors, then it becomes clear that the g you're measuring is not, as commonly understood, general intelligence.

It's because the kinds of mental processes that contribute to the variance in g are so basic and necessary for most forms of information processing, and for anything that could be called reasoning or problem solving, that any specific "real world" task involving reasoning and problem solving would be impacted by it.

That's certainly the assumption. However, in the absence of any kind of test or measure for 'basic' mental processes, IQ tests have to settle for ordinary task measurement, in as many ways as they can, so that it can be assumed the measure relates to something fundamental.

There are plenty of tests for basic mental processes. Researchers can and do regularly verify predictions based upon those hypothesized processes. They are able to predict the degree to which various tasks will correlate with each other, and their relative loading onto a g factor, based upon an analysis of each task and the kinds of sub-processes it would require.

Well sure, so can most teachers. All you need there is experience in giving tests. The problem remains that you're still labelling a test performance as indicating 'basic processing' based on performance scores. If you do 100 random tests, there will be correlations between some of them. You can label those correlations as being some kind of common element in task processing, but there is no way to distinguish whether that common element is actually a single cognitive process in its own right, or simply some other commonality of task performance.

Given the marked absence of any kind of 'basic processing' module in the brain that would correspond to this measure of mental capacity, it's more often seen by the scientists involved as a convenient abstraction.

You know what has the greatest marked absence of any evidence? Domain-specific processing modules.
Evidence against a different theory is not evidence for your own.

The absence of a g module isn't evidence against g,

I don't know that anyone is arguing for a g module. The problem of whether intelligence is unitary or multivariate is more a statistical question than a neurophysiological one.

The shared mental processes in the tests used to compute the g factor arise (like all cognition) from a complex interaction of many aspects of various distributed but networked brain regions.

Strictly speaking no, the g factor arises from the complex interaction of many performance characteristics. You're then assuming that there must be a shared mental process underlying this. It's the manifestation of these measured differences as a single shared process common to these tasks that is being disputed here. That, and whether this shared mental process can meaningfully be called general intelligence, now that so much has been stripped away from it in order to ensure the purity of the correlation.
 
Take a person who is a novice on both plate tectonics and a type of complex machinery. Have the person reason about a causal system and make predictions based upon hypothetical changes to one part of the system. Their brain activation will look very similar and similarly different from a person with a different IQ doing the same two tasks.

This sounds like a fascinating experiment, but I can't make much of your description (very similar and similarly different?). Can you give a link, or failing that, a citation, so we're all on the same page?

He means, you have two people with two IQs. Call them IQA person and IQB person. Now give them two different tasks that use different parts of the brain; call them Task1 and Task2.

When IQA does Task1 vs. Task2, her brain looks different in what's functioning ("lighting up" in the scan). The brain-use difference, which we can call TaskDIFF, represents the different way the brain is used because of the different type of task.

Now, when IQB does Task1 and Task2, her brain also has a very similar TaskDIFF. And at the same time, the difference between the two subjects exists.

So: IQATask1 is different from IQBTask1.
And IQATask2 is different from IQBTask2.
But IQATaskDIFF is very similar to IQBTaskDIFF.


Does that clear it up?



Kind of like imagining me and (Olympic gold medalist) Nastia Liukin doing a back handspring and a split jump. Her ability to do a back handspring compared to mine is very different! But her handspring vs. her split leap are different from each other in a very similar way to how my handspring and my split leap are different from each other (which end of our bodies points to the ceiling, whether our hands touch the ground). And they are "similarly different" in that both her handspring and her split leap are each individually MILES away from each of mine (height, extension, form, pointed toes, hang time, speed), even while it is clear when each of us is doing one move or the other. They are different from each other in similar ways to how mine are different from each other. And the skill of her handspring is as far advanced over my handspring as her split leap is over my split leap. "Similarly different".
 
Kind of like imagining me and Nastia Liukin doing a back handspring and a split jump. Her ability to do a back handspring compared to mine is very different! But her handspring vs. her split leap are different from each other in a very similar way to how my handspring and my split leap are different from each other. And they are "similarly different" in that both her handspring and her split leap are each individually MILES away from each of mine.

Ok, that makes more sense. I'd still like to see a reference, if only to help shore up my 'zero knowledge of neuroscience'.

I guess I'm not seeing why that would imply that a back handspring and a split jump involve the same muscle groups, neurones, etc.
 
Kind of like imagining me and Nastia Liukin doing a back handspring and a split jump. Her ability to do a back handspring compared to mine is very different! But her handspring vs. her split leap are different from each other in a very similar way to how my handspring and my split leap are different from each other. And they are "similarly different" in that both her handspring and her split leap are each individually MILES away from each of mine.

Ok, that makes more sense. I'd still like to see a reference, if only to help shore up my 'zero knowledge of neuroscience'.

I guess I'm not seeing why that would imply that a back handspring and a split jump involve the same muscle groups, neurones, etc.

The implication is that if they used completely different systems, one would frequently see differences in the gaps. For example, say we performed together and were scored.

If there is similarity of muscle/neural use, you'll see her get a 9.8 and a 9.7 while I get a 4.2 and a 4.1.
If there were completely separate muscle/neural use, you'll see her get a 9.8 and a 7.7 while I get, a 4.2 and a 6.8.
If the two of us can be really close on one move, but really far apart on another, then we can infer that one set of muscles/neurons is trained and in use while another is not. But if our score gaps are similar no matter what move we do, we can infer that similar muscle/neural support is in play and hers is better.
 
Put in the context of this thread on intelligence,

If I am twice as good as Sarah Palin at math and also music and also English language and also political acumen and also relationship building and also public speaking, then we can infer that some base THING is responsible for that broad and reliable gap. But if I am twice as good at one thing, ten times as good at another, half again as good at a third, and I fall below her in playing basketball, then we can conclude that there is no real inherent "g" or something that gives me my superiority in all of those tasks/challenges.

So the evidence given was that in tests of disparate tasks, the GAP is uniform. Hence common factor driving it.
And yeah, I'd want to read the study, too. ;)
 
Take a person who is a novice on both plate tectonics and a type of complex machinery. Have the person reason about a causal system and make predictions based upon hypothetical changes to one part of the system. Their brain activation will look very similar and similarly different from a person with a different IQ doing the same two tasks.

This sounds like a fascinating experiment, but I can't make much of your description (very similar and similarly different?).

Their brain activation on one causal task would look "very similar" to their activation on another task, and that activation would look different than that of another person doing the same tasks but who had a very different IQ level. IOW, causal reasoning uses common brain systems regardless of what topic the causal reasoning is about, rather than (as you claimed) "different parts of the brain" depending on whether it's about the causal system underlying earthquakes versus the workings of a complex machine.

Can you give a link, or failing that, a citation, so we're all on the same page?
It's a hypothetical, but the basic assumption is tested by essentially any fMRI study that shows reliable patterns when engaged in causal reasoning versus other tasks (like deciding whether two objects are "associated" but not causally related, like plate and cup). Here is one such study.

That's exactly why a well-constructed IQ test contains as many different kinds of mental tasks as possible - because the same individual has a different performance on the different tasks. How could that be the case if it was an identical process each time?

Wrong again. Well-constructed measures of g use only mental tasks that target similar basic aspects of information processing that are common to (general to) many specific tasks. For example, they do not throw in tasks of mentally rotating objects in space, or ability to discriminate acoustic sounds, or tests of creative novelty. The multiple tasks are selected on highly theoretical grounds related to the type of cognitive sub-processes they require.
...
This shared variance is only what gets counted in g, ...

Well sort of. The theory is highly influenced by the results of the scores, and the ideas of which tasks are related and unrelated are based principally on the results of these tests.

No. Which tests will relate to g can be predicted a priori, based upon theoretical models of what mental sub-processes are required to reliably reach a correct answer. Tasks that theoretically share the same sub-processes are predicted to correlate with each other, and the ones accurately predicted to load highly onto g (rather than onto other sets of correlated tasks) are those whose shared sub-processes rely the least on domain-specific prior knowledge and skills but the most on things like holding some new info active in memory while you process new stimuli so they can be compared. IOW, they are tasks that cannot be done by using knowledge or skills specific to them, because few people have such knowledge.

In fact, Raven's matrices is not some random task that people just happened to find was highly correlated with g scores computed across many other tasks. Raven's was specifically created to depend upon general thinking processes that would apply to many contexts, yet its test items do not include familiar contexts or stimuli, in order to prevent people from performing the task by just relying upon context-specific knowledge or skills. The fact that this test by itself was theoretically designed to, and does in fact, explain about 65% of the variance in a g factor computed from an array of IQ tests supports the underlying theory of basic general abilities that cut across contexts as the primary source of the observed inter-correlations that give rise to g.
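A quick arithmetic note on how the two figures quoted in this thread line up: variance explained is the squared correlation, so the .80 correlation mentioned earlier corresponds to 0.80^2 = 0.64, i.e. roughly the 65% figure here (assuming both numbers refer to the same Raven's-g relationship).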


The second problem is that, by pruning away various abilities and concepts from g, it becomes increasing unclear what g is supposed to comprise of. If it doesn't, for example, comprise creativity, specialist processing skills, and various other factors, then it becomes clear that the g you're measuring is not, as commonly understood, general intelligence.

The scientific concept of general intelligence has never been intended to refer to "all that is important for human cognition", which is the way you are using it and how most novices (mis)use it. Like all scientific concepts, it is made more meaningful by specifying what it excludes (what something is NOT is essential to the meaning of what something is). In fact, it is just nonsensical to think that "specialist skills" would be part of "general intelligence". By definition, they cannot be, because they are highly context-specific, as the word "specialist" implies. Also, how much information one has been exposed to is not any form of "ability" at all, or even any aspect of the person; rather, it speaks to features of the environment they happen to be in. Thus, any cognitive task that depends primarily upon whether one has been exposed to particular information cannot be conceived as reflecting any kind of ability, let alone intelligence, let alone general intelligence.
Since Spearman, all general intelligence research has fully acknowledged that there are factors that impact intellectual performance that are outside of g.
"General" does not mean all-inclusive or the only causal factor, but rather something which has some degree of impact across topics and domains. It merely distinguishes it from a concept such as "He has quality X that enhances the ease with which he learns about earthquakes but nothing else." Those concepts do not differ in whether the "X" is the sole determining factor in learning anything. They both merely imply that the "X" is a factor in learning something. They differ merely in the breadth of topics for which that "X" has some degree of impact.


It's because the kinds of mental processes that contribute to the variance in g are so basic and necessary for most forms of information processing, and for anything that could be called reasoning or problem solving, that any specific "real world" task involving reasoning and problem solving would be impacted by it.

That's certainly the assumption. However, in the absence of any kind of test or measure for 'basic' mental processes, IQ tests have to settle for ordinary task measurement, in as many ways as they can, so that it can be assumed the measure relates to something fundamental.

Sorry, but you are simply wrong and are ignoring the last 25 years of research on g. Theory is used to choose tasks on a priori grounds: theories that make validated predictions of how strongly various tasks will correlate with each other, and of which specific tasks will have the highest percentage of overlap between their variance and the variance shared by all the other tasks. The research also predicts which brain networks will activate for different tasks, and which networks will have variance that correlates with factor-analytic g scores (which is what the article DBT cited shows).

Research on g has advanced massively in the last 20 years, but it has never been the kind of theoretically blind "throw every possible task in the pool and see what correlates" approach you paint it as.


There are plenty of tests for basic mental processes. Researchers can and do regularly verify predictions based upon those hypothesized processes. They are able to predict the degree to which various tasks will correlate with each other, and their relative loading onto a g factor, based upon an analysis of each task and the kinds of sub-processes it would require.

Well sure, so can most teachers. All you need there is experience in giving tests. The problem remains that you're still labelling a test performance as indicating 'basic processing' based on performance scores. If you do 100 random tests, there will be correlations between some of them. You can label those correlations as being some kind of common element in task processing, but there is no way to distinguish whether that common element is actually a single cognitive process in its own right, or simply some other commonality of task performance.

Once again, they do not simply do post-hoc labeling based on already-known correlations. Intelligence researchers can and have designed tasks and predicted inter-test correlations based upon theories of what the tasks require and whether those steps depend upon shared basic information-processing systems. They also use these theories to predict brain activation patterns.
In addition, correlated tests would not show similar brain activation patterns if those correlations were spurious artifacts of non-common mental processes. They show shared reliance on the same brain systems only because they share influence from basic cognitive systems.


Given the marked absence of any kind of 'basic processing' module in the brain that would correspond to this measure of mental capacity, it's more often seen by the scientists involved as a convenient abstraction.

You know what has the greatest marked absence of any evidence? Domain-specific processing modules.
Evidence against a different theory is not evidence for your own.

You set up a straw man that information is processed in modules and thus that a general intelligence would require a "'basic processing' module in the brain".
I merely pointed out the lack of evidence for domain-specific processing modules in the brain, and the wrongness of your modularity premise; thus the lack of evidence of a corresponding processing module for g is completely meaningless and implies nothing about its neurological basis or its validity as a cognitive construct.

The absence of a g module isn't evidence against g,

I don't know that anyone is arguing for a g module.

No, they aren't, but you are creating a straw man that g presumes a corresponding "module in the brain" and then claiming that the lack of evidence for that module is evidence against g. That's nonsense.


The shared mental processes in the tests used to compute the g factor arise (like all cognition) from a complex interaction of many aspects of various distributed but networked brain regions.

Strictly speaking no, the g factor arises from the complex interaction of many performance characteristics. You're then assuming that there must be a shared mental process underlying this.

fMRI research shows beyond any doubt that there are shared brain networks underlying this, and the ability of theory to predict a priori which tasks will correlate under what conditions, and which overlap most with the shared g variance, gives strong evidence for the theories: not only that there are shared mental processes, but also what those processes are.

It's the manifestation of these measured differences as a single shared process common to these tasks that is being disputed here.
g does not need to be a "single" shared process. There may very well be, and likely are, multiple shared processes involved. That does not in any way correspond to the concept of "multiple intelligences". Intelligence is about executing complex mental processes, because even single mental "tasks" require many processes.
There are sets of processes involved in acquiring new knowledge that are common across many contexts and topics. They might not help your ability to track a moving object in space or paint a picture that emotionally moves people, because acquiring new knowledge is not the central aspect of those tasks. IOW, many skills and abilities are not a form of "intelligence", no matter how specific or general. Ability is a more abstract category that subsumes intelligence and other types of abilities. You are equating "intelligence" with "ability", and thus claim that "general intelligence" isn't valid because it does not meet the all-encompassing criterion that "general ability" would, of covering every type of performance. Intelligence can be specific to types of mental processes and yet still be general in that it applies to those processes regardless of the specific conceptual content or topic of the information being processed and learned.
 
This sounds like a fascinating experiment, but I can't make much of your description (very similar and similarly different?). Can you give a link, or failing that, a citation, so we're all on the same page?

He means, you have two people with two IQs. Call them IQA person and IQB person. Now give them two different tasks that use different parts of the brain; call them Task1 and Task2.

When IQA does Task1 vs. Task2, her brain looks different in what's functioning ("lighting up" in the scan). The brain-use difference, which we can call TaskDIFF, represents the different way the brain is used because of the different type of task.

Now, when IQB does Task1 and Task2, her brain also has a very similar TaskDIFF. And at the same time, the difference between the two subjects exists.

So: IQATask1 is different from IQBTask1.
And IQATask2 is different from IQBTask2.
But IQATaskDIFF is very similar to IQBTaskDIFF.

This also applies and is the same basic idea but sliced differently than what I meant. This corresponds more to the study that DBT linked.
I was responding to Togo's claim that a person reasoning about causality on topic A is using "different brain regions" than when they perform the same type of task of reasoning about causality, just on topic B. He implied this in response to my argument that the "not the real world" objection is nonsense, because the types of mental processes involved are quite similar in these tests, in schoolwork, and in "real world" problems. It is the topics that differ, but topic does not fundamentally alter the neural substrates involved in the task.
 
OK, this discussion is getting too large, particularly when our focus is supposedly the politics.

I've included a full response below, for those interested. As a brief summary:

-It's still not clear what the brain scan evidence is supposed to demonstrate.
-You're arguing that the theories of cognition are produced a priori, and then IQ tests provide evidence to support them. I disagree with you on this point, since the theories are developed in part in response to the tests, and the tests are tested and altered to produce the correct response profile. It would help if you could explain how a scientific theory could possibly be held a priori, or if you could give some clue as to which theories you're referring to.
-As such, there is still no evidence for g as anything other than a statistical construct.

There seems to be some confusion as to the g being discussed. I'm talking about Spearman's g, the positive manifold, supporting a theory of unitary intelligence such that human mental ability goes up and down based on a single common factor (or set of factors), rather than a multivariate theory of intelligence, in which human mental ability varies by multiple factors that vary independently. This is the basis for assigning IQ scores, the controversial declarations made around racial differences, and the public policy pronouncements made and so heavily criticised that some IQ researchers are claiming to be under siege or subject to some form of extraordinary censorship.

Ron appears to be trying to defend something much smaller: a g that's simply a set of processes that may be shared between some, but not all, tasks, thus forming the basis for a correlation between tasks. This doesn't correspond to the history of IQ testing as discussed, the criticisms of IQ tests and the validity of IQ, the title of the thread, or much else that has been discussed, but it may explain the disagreement.


This sounds like a fascinating experiment, but I can't make much of your description (very similar and similarly different?).

Their brain activation on one causal task would look "very similar" to their activation on another task, and that activation would look different than that of another person doing the same tasks but who had a very different IQ level. IOW, causal reasoning uses common brain systems regardless of what topic the causal reasoning is about, rather than (as you claimed) "different parts of the brain" depending on whether it's about the causal system underlying earthquakes versus the workings of a complex machine.

Can you give a link, or failing that, a citation, so we're all on the same page?


It's a hypothetical, but the basic assumption is tested by essentially any fMRI study that shows reliable patterns when engaged in causal reasoning versus other tasks (like deciding whether two objects are "associated" but not causally related, like plate and cup). Here is one such study.

Ok, I think I'm beginning to see the confusion. This is a study that shows neural activity in solving various cognitive tasks. How does this support either g or, since you're starting to distinguish the two, theories of unitary intelligence?


That's exactly why a well-constructed IQ test contains as many different kinds of mental tasks as possible - because the same individual has a different performance on the different tasks. How could that be the case if it was an identical process each time?

Wrong again. Well-constructed measures of g use only mental tasks that target similar basic aspects of information processing that are common to (general to) many specific tasks.

No, I'm talking about the process of constructing one. You start off with as many different tasks as possible, and then exclude the ones that don't cross-correlate strongly enough (or are impractical to carry out). You have two conflicting goals: trying to identify a common element between tasks, which means you want the correlation as high as possible, and trying to ensure that your common element covers as wide a range as possible, which means you want the broadest range of tasks you can have while keeping that correlation. 100% correlated tasks would have to be essentially identical, so you aim for an overall correlation of a particular amount, say 80%, and try to get as broad a range as possible within that.
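A toy sketch of that trade-off (invented data; the correlation floor and the greedy rule are illustrative, not any real test's procedure):

[CODE]
import numpy as np

rng = np.random.default_rng(4)
n, k = 500, 10
shared = rng.normal(size=(n, 1))
weights = np.linspace(0.9, 0.1, k)   # tasks tap the common element unevenly
scores = weights * shared + (1 - weights) * rng.normal(size=(n, k))

R = np.corrcoef(scores, rowvar=False)
keep = list(range(k))
floor = 0.3                          # minimum average inter-task correlation
while len(keep) > 2:
    sub = R[np.ix_(keep, keep)]
    avg = (sub.sum(axis=1) - 1) / (len(keep) - 1)  # each task's mean correlation
    worst = int(np.argmin(avg))
    if avg[worst] >= floor:
        break
    keep.pop(worst)                  # drop the least-correlated task

print("retained tasks:", keep)       # the broadest set that still coheres
[/CODE]

Raising the floor shrinks the retained set toward near-identical tasks; lowering it broadens coverage at the cost of cohesion, which is exactly the tension described above.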

For example, they do not throw in tasks of mentally rotating objects in space,

Yeah, but that's because of McFarland. And because it would have to be a carefully timed sub-task, and thus doesn't fit within the format.

or ability to discriminate acoustic sounds,

Doesn't fit the format

or tests of creative novelty.

Worst of all, doesn't have a right answer!

The multiple tasks are selected on highly theoretical grounds related to the type of cognitive sub-processes they require.

Here we go again - can I have a link, citation or reference to these 'highly theoretical grounds'? Which of the many theories of cognition are you leaning on here?

Togo said:
...
This shared variance is only what gets counted in g, ...

Well sort of. The theory is highly influenced by the results of the scores, and the ideas of which tasks are related and unrelated are based principally on the results of these tests.

No. Which tests will relate to g can be predicted a priori, based upon theoretical models of what mental sub-processes are required to reliably reach a correct answer. Tasks that theoretically share the same sub-processes are predicted to correlate with each other, and the ones accurately predicted to load highly onto g (rather than onto other sets of correlated tasks) are those whose shared sub-processes rely the least on domain-specific prior knowledge and skills but the most on things like holding some new info active in memory while you process new stimuli so they can be compared.

The theories are based on the experimental data. Seriously, if you don't think these theoretical models of loading onto g are based on measures of g, what do you think they're based on?

In fact, Raven's matrices is not some random task that people just happened to find was highly correlated with g scores computed across many other tasks. Raven's was specifically created to depend upon general thinking processes that would apply to many contexts, yet its test items do not include familiar contexts or stimuli, in order to prevent people from performing the task by just relying upon context-specific knowledge or skills.

They're a refinement of Stanford-Binet style tests with less emphasis on language ability. And no, they're not just random tests. They're the finalists from a very large selection of random tests that have been carefully tested for g-loading and cross-correlation, and progressively altered to produce a bell-curve result profile.

That's the basic issue with psychometric tests - they're fitted to produce results in a particular pattern. That's not always a problem, depending on the use you have for them, but what it means is that you absolutely cannot say this...

The fact that this test by itself was theoretically designed to, and does in fact, explain about 65% of the variance in a g factor computed from an array of IQ tests supports the underlying theory of basic general abilities that cut across contexts as the primary source of the observed inter-correlations that give rise to g.

...because it simply isn't true. For IQ testing, you don't slap together a whole load of questions based on some notion of how cognitive processing is put together and then act amazed when it turns out to be correlated. You measure the correlations first, put together a test on that basis, and then carefully test and rebalance your questions until you get the desired response profile.

Togo said:
The second problem is that, by pruning away various abilities and concepts from g, it becomes increasingly unclear what g is supposed to comprise. If it doesn't, for example, comprise creativity, specialist processing skills, and various other factors, then it becomes clear that the g you're measuring is not, as commonly understood, general intelligence.

The scientific concept of general intelligence has never been intended to refer to "all that is important for human cognition", which is the way you are using it and how most novices (mis)use it. Like all scientific concepts, it is made more meaningful by specifying what it excludes (what something is NOT is essential to the meaning of what something is).

g as Spearman originally described it was part of a theory of unitary intelligence, that is, the idea that intelligence is capable of being expressed as a single measure. Yes, idiots still try to push that idea today, even though specialised processing is well established. But restricting g to simply a common factor amongst a small handful of tasks begs the question of what exactly it is. Is IQ just a minor subset of cognitive processing, and if so, why assume even that's unitary? Why assume it's important at all?

These concepts are open to having their validity challenged. That's why they aren't universally supported amongst psychologists, let alone in the wider world.

"General" does not mean all inclusive or the only causal factor, but rather something which has some degree of impact across topics and domains. It merely distringuishes it from a concept such as "He has quality X that enhances the ease with which he learns about earthquakes but nothing else." Those concepts do not differ in whether the "X" is the sole determining factor in learning anything. They both merely imply that the "X" is a factor in learning something. They differ merely in the breadth of topics for which that "X" has some degree of impact.


Still no evidence that X actually exists, though. You can calculate an average man, but that doesn't mean you can find him. Similarly with g: you can calculate it as an extracted factor, but that doesn't mean it's an actual brain process, physical structure, or cognitive pattern.


Togo said:
It's because the kinds of mental processes that contribute to the variance in g are so basic and necessary for most forms of information processing, and for anything that could be called reasoning or problem solving, that any specific "real world" task involving reasoning and problem solving would be impacted by it.

That's certainly the assumption. However, in the absence of any kind of test or measure for 'basic' mental processes, IQ tests have to settle for ordinary task measurement, in as many ways as they can, so that it can be assumed the measure relates to something fundamental.

Sorry, but you are simply wrong and are ignoring the last 25 years of research on g.

I don't agree, I'm afraid. Please point to a test that measures g without measuring task performance.

Theory is used to choose tasks on a priori grounds: theories that make validated predictions of how strongly various tasks will correlate with each other, and of which specific tasks will have the highest percentage of overlap between their variance and the variance shared by all the other tasks.

Ok, now this may be a terminology issue, but you seem to be contradicting yourself on a key point. Are these theories validated (i.e., confirmed by experiment)? Or are they held a priori? They can't actually be both.

Assuming you agree they are confirmed by experiment, do you believe that they were applied to the construction of IQ tests without any kind of testing? Or do you agree that, yes, IQ tests are in fact constructed via a process that includes a lot of testing and balancing?

Having got that far, can you see why the correlations that occur within IQ tests might not be independent confirmation of the validity of the test?

Research on g has advanced massively in the last 20 years, but it has never been the kind of theoretically blind "throw every possible task in the pool and see what correlates" approach you paint it as.

So how did Spearman come up with the positive manifold, then? And what theoretical basis of IQ testing did Binet use? I'm pretty sure those were about finding correlations in tasks chosen largely at random, but if you have evidence otherwise, I'd be happy to hear it.


Given the marked absence of any kind of 'basic processing' module in the brain that would correspond to this measure of mental capacity, it's more often seen by the scientists involved as a convenient abstraction.

You know what has the greatest marked absence of any evidence? Domain-specific processing modules.
Evidence against a different theory is not evidence for your own.

You set up a straw man that information is processed in modules

No, I really didn't. That was your interpretation of what I said, but that's not the point I was making at the time, and I have no reason to push the idea of an entirely modular brain.

However, if you want g to be an inherent feature of the brain, then can you say how this inherent feature is manifest?

None of which answers the point I made, which is that there is no evidence that g is anything more than a statistical abstraction. You've mentioned brain scans, but you've not said how that would be relevant, and I did specifically ask you to explain this, both directly and in discussion with Rhea. If we're agreed that there isn't going to be a g module, or g neurological structure, or g circuit, then what exactly are the patterns of the scan going to show? Because it seems like they'd only show that similar tasks have similar patterns, which doesn't establish g at all.

It's the manifestation of these measured differences as a single shared process common to these tasks that is being disputed here.
g does not need to be a "single" shared process.

There isn't a statistical requirement, no. But if you're supporting Spearman's theory of g as a unitary measure of intelligence, and the idea of 'basic' or 'fundamental' processing, then there needs to be at least one process common to all the tasks, although obviously more than one would be more likely.

I don't see that the brain scan study you cited actually supports that.

There may very well be, and likely are, multiple shared processes involved. That does not in any way correspond to the concept of "multiple intelligences".

Multiple intelligences versus unitary intelligence is a question of univariance versus multivariance: is there a single factor underlying all tasks, or multiple factors that vary independently? Correlation alone does not support one over the other.

Intelligence can be specific to types of mental processes and yet still be general in that it applies to those processes regardless of the specific conceptual content or topic of the information being processed and learned.


Then it isn't IQ.

If it doesn't measure ability, and it's not a measure of potential, then it doesn't fit any of the models of IQ from Binet through to the modern day. A theory of unitary intelligence, as proposed by Spearman as the positive manifold, or g, and as supported by Jensen, is the theory that intelligence is a single ability. If you strip away everything that doesn't fit, creating a vastly reduced g, then yes, you'll get a powerful correlation. But that isn't unitary intelligence. That's multivariate intelligence, that is, an intelligence that varies across multiple dimensions, with all but one of the dimensions stripped out.

 
Maybe this article helps:

Abstract
''We hypothesized that individual differences in intelligence (Spearman's g) are supported by multiple brain regions, and in particular that fluid (gF) and crystallized (gC) components of intelligence are related to brain function and structure with a distinct profile of association across brain regions. In 225 healthy young adults scanned with structural and functional magnetic resonance imaging sequences, regions of interest (ROIs) were defined on the basis of a correlation between g and either brain structure or brain function. In these ROIs, gC was more strongly related to structure (cortical thickness) than function, whereas gF was more strongly related to function (blood oxygenation level-dependent signal during reasoning) than structure. We further validated this finding by generating a neurometric prediction model of intelligence quotient (IQ) that explained 50% of variance in IQ in an independent sample. The data compel a nuanced view of the neurobiology of intelligence, providing the most persuasive evidence to date for theories emphasizing multiple distributed brain regions differing in function.''

I'll add this more recent paper: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0117295
In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of large dataset with consistent scanning protocols for the IQ estimation, we integrate multiple datasets scanned from different sites with different scanning parameters and protocols. In this way, there is large variability in these different datasets. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject’s IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and also an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. All these results show the effectiveness of using imaging data for IQ prediction, which is rarely done in the field according to our knowledge.

It should be emphasized again that our work paves a new way for a research on predicting an infant’s future IQ score by using neuroimaging data, which can be a potential indicator for parents to prepare their child’s education if needed.
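For anyone curious what this kind of estimator looks like mechanically, here is a hedged sketch in Python's scikit-learn. The feature matrix and IQ scores below are simulated stand-ins; the paper's actual pipeline (dirty-model feature selection, site identification, multi-kernel SVR) is not reproduced.

[CODE]
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
X = rng.normal(size=(164, 50))   # stand-in for selected MRI features
y = 100 + 15 * X[:, :5].mean(axis=1) + rng.normal(scale=5, size=164)  # fake IQs

# Cross-validated SVR predictions, scored the same way as the paper:
# correlation and RMSE between true and estimated scores.
pred = cross_val_predict(SVR(kernel="rbf", C=10.0), X, y, cv=5)
r = np.corrcoef(y, pred)[0, 1]
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
print(f"correlation {r:.2f}, RMSE {rmse:.1f}")
[/CODE]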
 
I'll add this more recent paper: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0117295
In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of large dataset with consistent scanning protocols for the IQ estimation, we integrate multiple datasets scanned from different sites with different scanning parameters and protocols. In this way, there is large variability in these different datasets. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject’s IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and also an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. All these results show the effectiveness of using imaging data for IQ prediction, which is rarely done in the field according to our knowledge.

It should be emphasized again that our work paves a new way for a research on predicting an infant’s future IQ score by using neuroimaging data, which can be a potential indicator for parents to prepare their child’s education if needed.

Years ago, we had a craze: big SLR 35mm cameras. These cameras were clearly superior to those that came before, and they got bigger and bigger. Also, their actual physical appearance was different enough from other cameras that you could quickly, with a single glance at a picture of one, say to yourself, "Yup! That's a superior camera." I mention this here because it has bearing on this issue and MRI scans. In the case of the cameras mentioned, it was possible to look at a certain configuration of a camera, with an understanding of the totality of its function, and clearly assess its capabilities.

We are not there yet when it comes to brains and their images. We can recognize certain appearances that indicate low neural activity, assess the size of certain ganglia in the brain, and extrapolate theories on how well this or that portion of the brain can function in a given situation we think is a gauge of intelligence. We can find that persons who are extremely proficient in certain activities will show differences from the norm in specific areas of the brain. The problem is not in our ability to see and measure what is happening. It is in our attempt to define something we are calling "general intelligence." That problem does not go away regardless of how completely we can map the brain and its functions.

Our brains may have separate measurements in regard to specific tasks, and there can be correlations between brain scans and a person's performance on the task, but this still is not "general intelligence." It is a specific kind of intelligence, or perhaps we can call it a talent of some kind. I am not trying to be a spoiler here, but I do feel that to define and rate people in terms of their "general intelligence," one would have to assign subjective values to various talents, and this would skew the results in favor of a predetermined notion of what intelligence means.
 
We are not there yet when it comes to brains and their images.

For the IQ denialist, we will never be there.

According to the article you cited, we aren't there. The authors do mention that at no point did they use longitudinal data - i.e., they didn't test whether their measures of IQ were accurate, merely consistent. It's a very clever statistical model which, if we assume that IQ is related to the identified neurological features, will prove very useful in estimating their impact. I'm not sure what you think it proves, though.
 
For the IQ denialist, we will never be there.

According to the article you cited, we aren't there. The authors do mention that at no point did they use longitudinal data - i.e., they didn't test whether their measures of IQ were accurate, merely consistent. It's a very clever statistical model which, if we assume that IQ is related to the identified neurological features, will prove very useful in estimating their impact. I'm not sure what you think it proves, though.

Yeah, but the notion that people of differing IQs have differing brains is becoming harder to disprove.

New brain science shows poor kids have smaller brains than affluent kids

http://www.washingtonpost.com/local/education/new-brain-science-shows-poor-kids-have-smaller-brains-than-affluent-kids/2015/04/15/3b679858-e2bc-11e4-b510-962fcfabc310_story.html
 