This sounds like a fascinating experiment, but I can't make much of your description (very similar and similarly different?).
Their brain activation on one causal task would look "very similar" to their activation on another task, and that activation would look different from that of another person doing the same tasks but who had a very different IQ level. IOW, causal reasoning uses common brain systems regardless of what topic the causal reasoning is about, rather than (as you claimed) "different parts of the brain" depending on whether it's about the causal system underlying earthquakes versus underlying the working of a complex machine.
Can you give a link, or failing that, a citation, so we're all on the same page?
It's a hypothetical, but the basic assumption is tested by essentially any fMRI study that shows reliable patterns when subjects are engaged in causal reasoning versus other tasks (like deciding whether two objects are "associated" but not causally related, like plate and cup).
Here is one such study.
Ok, I think I'm beginning to see the confusion. This is a study that shows neural activity in solving various cognitive tasks. How does this support either g or, since you're starting to distinguish the two, theories of unitary intelligence?
That's exactly why a well-constructed IQ test contains as many different kinds of mental tasks as possible - because the same individual performs differently on the different tasks. How could that be the case if it was an identical process each time?
Wrong again. Well-constructed measures of g use only mental tasks that target similar basic aspects of information processing that are common to (general to) many specific tasks.
No, I'm talking about the process of constructing one. You start off with as many different tasks as possible, and then exclude the ones that don't cross-correlate strongly enough (or are impractical to carry out). You have two conflicting goals: trying to identify a common element between tasks, which means you want the correlation as high as possible, and trying to ensure that your common element covers as wide a range as possible, which means you want the broadest range of tasks you can while keeping that correlation. 100% correlated tasks would have to be essentially identical, so you aim for an overall correlation of a particular amount, say 80%, and try to get as broad a range as possible within that.
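To make concrete what I mean by that pruning process, here's a rough Python sketch. Everything in it - the toy scores, the loadings, the 0.6 threshold - is made up purely for illustration; it's the shape of the procedure, not anyone's actual test-construction pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_tasks = 500, 12

# toy scores: one common influence plus independent noise, for illustration only
common = rng.normal(size=(n_subjects, 1))
loadings = rng.uniform(0.3, 0.9, size=(1, n_tasks))
scores = common @ loadings + 0.5 * rng.normal(size=(n_subjects, n_tasks))

TARGET = 0.6  # stand-in for the desired mean inter-task correlation

tasks = list(range(n_tasks))
while len(tasks) > 2:
    corr = np.corrcoef(scores[:, tasks], rowvar=False)
    off_diag = corr[~np.eye(len(tasks), dtype=bool)]
    if off_diag.mean() >= TARGET:
        break
    # drop the task that correlates least, on average, with the rest
    avg = (corr.sum(axis=0) - 1) / (len(tasks) - 1)
    tasks.pop(int(np.argmin(avg)))

print(f"{len(tasks)} of {n_tasks} tasks retained")
```

The point of the sketch is just that the retained set is chosen by the correlations themselves, which is exactly the tension between breadth and cross-correlation I described.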
For example, they do not throw in tasks of mentally rotating objects in space,
Yeah, but that's because of McFarland. And because it would have to be a carefully timed sub-task, and thus doesn't fit within the format.
or ability to discriminate acoustic sounds,
Doesn't fit the format
or tests of creative novelty.
Worst of all, doesn't have a right answer!
The multiple tasks are selected on highly theoretical grounds related to the type of cognitive sub-processes they require.
Here we go again - can I have a link, citation or reference to these 'highly theoretical grounds'? Which of the many theories of cognition are you leaning on here?
Togo said:
...
This shared variance is only what gets counted in g, ...
Well, sort of. The theory is highly influenced by the results of the scores, and the ideas of which tasks are related and unrelated are based principally on the results of these tests.
No. Which tests will relate to g can be predicted a priori based upon theoretical models of what mental sub-processes are required to reliably reach a correct answer. Tasks that theoretically share the same sub-process are predicted to correlate with each other; the ones accurately predicted to load highly onto g (rather than onto other sets of correlated tasks) are those whose shared sub-processes rely the least on domain-specific prior knowledge and skills but the most on things like holding some new info active in memory while you process new stimuli so they can be compared.
The theories are based on the experimental data. Seriously, if you don't think these theoretical models of loading onto g are based on measures of g, what do you think they're based on?
In fact, Raven's matrices are not some random task that people just happened to find was highly correlated with g scores computed across many other tasks. Raven's were specifically created to depend upon general thinking processes that would apply to many contexts, and yet the test items do not include familiar contexts or stimuli, in order to prevent people from performing the tasks by just relying upon context-specific knowledge or skills.
They're a refinement of Stanford-Binet style tests with less emphasis on language ability. And no, they're not just random tests. They're the finalists in a very large selection of random tests that have been carefully tested for g-loading and cross-correlation, and progressively altered to produce a bell-curve result profile.
That's the basic issue with psychometric tests - they're fitted to produce results in a particular pattern. That's not always a problem, depending on the use you have for them, but what it means is that you absolutely can not say this...
The fact that this test by itself was theoretically designed to and does in fact explain about 65% of the variance in a g-factor computed from an array of IQ tests supports the underlying theory of basic general abilities that cut across contexts as the primary source of the observed inter-correlations that give rise to g.
...because it simply isn't true. For IQ testing you don't slap together a whole load of questions based on some notion of how cognitive processing is put together, and then act amazed when it turns out to be correlated. You measure the correlations first, put together a test on that basis, and then carefully test and rebalance your questions until you get the desired response profile.
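Purely as an illustration of what 'test and rebalance until you get the desired response profile' looks like in miniature - the response model, the item pool, and the pass-rate window below are all hypothetical, not any real test builder's procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_items = 2000, 40
ability = rng.normal(size=(n_people, 1))
difficulty = rng.uniform(-3, 3, size=(1, n_items))   # hypothetical item pool

# simple one-parameter logistic response model, for illustration only
p_correct = 1 / (1 + np.exp(-(ability - difficulty)))
responses = (rng.random((n_people, n_items)) < p_correct).astype(int)

# keep items whose pass rate is near 50%; this pushes the total-score
# distribution toward the symmetric bell curve test builders aim for
pass_rate = responses.mean(axis=0)
keep = (pass_rate > 0.3) & (pass_rate < 0.7)
totals = responses[:, keep].sum(axis=1)
skew = ((totals - totals.mean()) ** 3).mean() / totals.std() ** 3
print(f"{keep.sum()} items kept, score skewness = {skew:.2f}")
```

The selection step runs on the observed response data, which is the sense in which the final profile is fitted rather than discovered.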
Togo said:
The second problem is that, by pruning away various abilities and concepts from g, it becomes increasingly unclear what g is supposed to comprise. If it doesn't, for example, comprise creativity, specialist processing skills, and various other factors, then it becomes clear that the g you're measuring is not, as commonly understood, general intelligence.
The scientific concept of general intelligence has never been intended to refer to "all that is important for human cognition", which is the way you are using it and how most novices (mis)use it. Like all scientific concepts, it is made more meaningful by specifying what it excludes (what something is NOT is essential to all meaning of what something is).
g as Spearman originally described it was part of a theory of unitary intelligence. That is, the idea that intelligence was capable of being expressed as a single measure. Yes, idiots still try to push that idea today, even though specialised processing is well established. But restricting g to simply a common factor amongst a small handful of tasks begs the question of what it is exactly. Is IQ just a minor subset of cognitive processing, and if so, why assume even that's unitary? Why assume it's important at all?
These concepts are open to having their validity challenged. That's why they aren't universally supported amongst psychologists, let alone in the wider world.
"General" does not mean all inclusive or the only causal factor, but rather something which has some degree of impact across topics and domains. It merely distringuishes it from a concept such as "He has quality X that enhances the ease with which he learns about earthquakes but nothing else." Those concepts do not differ in whether the "X" is the sole determining factor in learning anything. They both merely imply that the "X" is a factor in learning something. They differ merely in the breadth of topics for which that "X" has some degree of impact.
Still no evidence that X actually exists, though. You can calculate an average man, but that doesn't mean you can find him. Similarly with g: you can calculate it as an extracted factor, but that doesn't mean it's an actual brain process, physical structure, or cognitive pattern.
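To spell out what 'extracted factor' means here, a minimal sketch on made-up data: the g-like number below is just the leading eigenvalue of a correlation matrix - a property of the statistics, not something located anywhere in a brain.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_tasks = 500, 8

# toy battery: hypothetical common factor plus noise, for illustration only
latent = rng.normal(size=(n_subjects, 1))
loadings = rng.uniform(0.4, 0.8, size=(1, n_tasks))
scores = latent @ loadings + rng.normal(size=(n_subjects, n_tasks))

corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)     # eigenvalues in ascending order
share = eigvals[-1] / eigvals.sum()    # share of variance on the first factor
print(f"first factor accounts for {share:.0%} of total variance")
```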
Togo said:
It's because the kinds of mental processes that contribute to the variance in g are so basic and necessary for most forms of information processing, and for anything that could be called reasoning or problem solving, that any specific "real world" task involving reasoning and problem solving would be impacted by it.
That's certainly the assumption. However, in the absence of any kind of test or measure for 'basic' mental processes, IQ tests have to settle for ordinary task measurement, in as many ways as they can, so that it can be assumed the measure relates to something fundamental.
Sorry, but you are simply wrong and are ignoring the last 25 years of research on g.
I don't agree, I'm afraid. Please point to a test that measures g without measuring task performance.
Theory is used to choose tasks on a priori grounds - theories that make validated predictions of how strongly various tasks will correlate with each other, and of which specific tasks will have the highest % of overlap between their variance and the variance shared by all the other tasks.
Ok, now this may be a terminology issue, but you seem to be contradicting yourself on a key point. Are these theories validated (i.e. confirmed by experiment)? Or are they held a priori? They can't actually be both.
Assuming you agree they are confirmed by experiment, do you believe that they were applied to the construction of IQ tests without any kind of testing, or do you agree that, yes, IQ tests are in fact constructed via a process that includes a lot of testing and balancing?
Having got that far, can you see why the correlations that occur within IQ tests might not be independent confirmation of the validity of the test?
Research on g has advanced massively in the last 20 years, but it has never been the kind of theoretically blind, "throw every possible task in the pool and see what correlates" approach you paint it as.
So how did Spearman come up with the positive manifold then? And what theoretical basis of IQ testing did Binet use? I'm pretty sure those were about finding correlations in tasks chosen largely at random, but if you have evidence otherwise, I'd be happy to hear it.
Given the marked absence of any kind of 'basic processing' module in the brain that would correspond to this measure of mental capacity, it's more often seen by the scientists involved as a convenient abstraction.
You know what has the greatest marked absence of any evidence? Domain-specific processing modules.
Evidence against a different theory is not evidence for your own.
You set up a straw man that information is processed in modules
No, I really didn't. That was your interpretation of what I said, but that's not the point I was making at the time, and I have no reason to push the idea of an entirely modular brain.
However, if you want g to be an inherent feature of the brain, then can you say how this inherent feature is manifest?
None of which answers the point I made, which is that there is no evidence that g is anything more than a statistical abstraction. You've mentioned brain scans, but you've not said how that would be relevant, and I did specifically ask you to explain this, both directly and in discussion with Rhea. If we're agreed that there isn't going to be a g module, or g neurological structure, or g circuit, then what exactly are the patterns of the scan going to show? Because it seems like they'd only show that similar tasks have similar patterns, which doesn't establish g at all.
It's the manifestation of these measured differences as a single shared process common to these tasks that is being disputed here.
g does not need to be a "single" shared process.
There isn't a statistical requirement, no. But if you're supporting Spearman's theory of g as a unitary measure of intelligence, and the idea of 'basic' or 'fundamental' processing, then there needs to be at least one process common to all the tasks, although obviously more than one would be more likely.
I don't see that the brain scan study you cited actually supports that.
There can very well be and likely is multiple shared processes involved. That does not in any way correspond to the concept of "multiple intelligences".
Multiple intelligences versus unitary intelligence is a question of univariance versus multivariance: whether there is a single factor underlying all tasks, or multiple factors. Correlation does not support one over the other.
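Here's a toy demonstration of that last point. The scores below are generated from two independent factors by construction, yet the correlation matrix still comes out all-positive - so a positive manifold on its own can't tell you whether one factor or several sit underneath. (All the numbers are made up for illustration.)

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_tasks = 1000, 6

# TWO independent factors by construction, each with all-positive loadings
f1 = rng.normal(size=(n_subjects, 1))
f2 = rng.normal(size=(n_subjects, 1))
load1 = rng.uniform(0.3, 0.7, size=(1, n_tasks))
load2 = rng.uniform(0.3, 0.7, size=(1, n_tasks))
scores = f1 @ load1 + f2 @ load2 + rng.normal(size=(n_subjects, n_tasks))

corr = np.corrcoef(scores, rowvar=False)
off_diag = corr[~np.eye(n_tasks, dtype=bool)]
print("all inter-task correlations positive:", bool((off_diag > 0).all()))
```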
Intelligence can be specific to types of mental processes and yet still be general, relative to whether it applies to those processes regardless of the specific conceptual content or topic of the information being processed and learned.
Then it isn't IQ.
If it doesn't measure ability, and it's not a measure of potential, then it doesn't fit any of the models of IQ from Binet through to the modern day. A theory of unitary intelligence, as proposed by Spearman as the positive manifold, or g, and as supported by Jensen, is the theory that intelligence is a single ability. If you strip away everything that doesn't fit, creating a vastly reduced g, then yes, you'll get a powerful correlation. But that isn't unitary intelligence. That's multivariate intelligence, that is, an intelligence that varies across multiple dimensions, with all but one of the dimensions stripped out.