• Welcome to the Internet Infidels Discussion Board.

Evidence of strong gender-bias in University hiring

We can treat people fairly without treating them right. Even though treating people fairly is the right thing to do, doing so isn't necessarily an instance of doing the right thing, so we must be careful not to be blinded by our urge to do what's right when treating people fairly may be the wrong thing to do--not that fair treatment itself is wrong, but that there are instances where we treat people fairly and yet wrongly.
 
I'm a little surprised at Toni's resistance to the obvious:

Here we report five hiring experiments in which faculty evaluated hypothetical female and male applicants, using systematically varied profiles disguising identical scholarship, for assistant professorships in biology, engineering, economics, and psychology. Contrary to prevailing assumptions, men and women faculty members from all four fields preferred female applicants 2:1 over identically qualified males with matching lifestyles (single, married, divorced), with the exception of male economists, who showed no gender preference. Comparing different lifestyles revealed that women preferred divorced mothers to married fathers and that men preferred mothers who took parental leaves to mothers who did not. Our findings, supported by real-world academic hiring data, suggest advantages for women launching academic science careers.

Clearly faculty in all four academic disciplines are sexist (with the exception of male economists who, unsurprisingly, are not). Unless one supposes that the faculty selected were "cherry-picked," this seems to be a valid sample of the discriminatory attitudes against men and in favor of women across a variety of academic fields.

And such discrimination is not unusual and is backed up by real world data:

Real-world data ratify our conclusion about female hiring advantage. Research on actual hiring shows female Ph.D.s are disproportionately less likely to apply for tenure-track positions, but if they do apply, they are more likely to be hired (16, 30–34), sometimes by a 2:1 ratio (31). These findings of female hiring advantage were especially salient in a National Research Council report on actual hiring in six fields, five of which are mathematically intensive, at 89 doctoral-granting universities (encompassing more than 1,800 faculty hires): “once tenure-track females apply to a position, departments are on average inviting more females to interview than would be expected if gender were not a factor” (ref. 16, p. 49). [See SI Appendix for descriptions of other audits of actual hiring that accord with this view, some dating back to the 1980s.] ... Thus, real-world hiring data showing a preference for women ... are consistent with our experimental findings.

And finally, contrary to Toni's minimization of the implications:

Although the point of entry into the professoriate is just one step in female faculty’s journey at which gender bias can occur, it is an extremely important one. Elsewhere we have examined subsequent factors in women’s versus men’s academic science careers, such as gender differences in remuneration, research productivity, citations, job satisfaction, and promotion, and concluded that with some exceptions, the academy is gender-fair (14).

Aside from the rationalizing excuse-makers, the evidence is clear. The only question is: what are the moralists going to do about protecting males from this pervasive bigotry?
 
Assuming this study is not bullshit and the effect is real, I think this could be a result of the perceived impression that women are better at and more willing to do the teaching. Faculty hiring is heavily tilted toward teaching. Anyway, as I mentioned before, there was a similar study which came to the completely opposite conclusion; I remember a heated discussion about it where feminists were accusing men of sexism. It's all so confusing.
 
Assuming this study is not bullshit and the effect is real, I think this could be a result of the perceived impression that women are better at and more willing to do the teaching.

Okay, but that is called sexism, and it would be no different from a 2:1 bias in favor of men based on the perception that men are more competent scientists. The applications used were extensive and contained lots of information about teaching capabilities and interests. IF the evaluators used gender-based stereotypes about teaching instead of actual applicant-specific data and information about teaching, then they engaged in sexism and stereotype-based discrimination. This is made worse by the lack of evidence that female University faculty are better at or more willing to teach at the University level.


Faculty hiring is heavily tilted toward teaching.
The studies used a large sample from the 3 tiers of 4-year Universities, which differ in how much teaching is involved in the job: schools that don't have grad programs or much research and are all about teaching undergrads with a 5-8 course load per year, schools that have Master's-level grad programs with a lower teaching load, and what are called "Research 1" Universities with PhD grad programs, where getting grants and doing research are primary and the teaching load is 1-2 classes per year (sometimes less). The data showed no effect of school type on the pro-female gender bias, meaning that perceived ability to perform teaching responsibilities had little to nothing to do with the effect.


Anyway, as I mentioned before, there was a similar study which came to the completely opposite conclusion; I remember a heated discussion about it where feminists were accusing men of sexism. It's all so confusing.

This is the only study (actually a set of 5 studies demonstrating high reliability and consistency) that has been posted on these boards where they actually experimentally manipulated, and thus tested, the independent causal impact of applicant gender on relative hiring rankings for University faculty.
So, it isn't all that confusing if you're not trying to invent reasons to dismiss the science.
 
Okay, but that is called sexism, and it would be no different from a 2:1 bias in favor of men based on the perception that men are more competent scientists. The applications used were extensive and contained lots of information about teaching capabilities and interests. IF the evaluators used gender-based stereotypes about teaching instead of actual applicant-specific data and information about teaching, then they engaged in sexism and stereotype-based discrimination. This is made worse by the lack of evidence that female University faculty are better at or more willing to teach at the University level.
I am not judging, I am merely explaining the data.
The fact is, different groups objectively have different distributions of skills, and taking that into account is not really sexism.
Faculty hiring is heavily tilted toward teaching.
The studies used a large sample from the 3 tiers of 4-year Universities, which differ in how much teaching is involved in the job: schools that don't have grad programs or much research and are all about teaching undergrads with a 5-8 course load per year, schools that have Master's-level grad programs with a lower teaching load, and what are called "Research 1" Universities with PhD grad programs, where getting grants and doing research are primary and the teaching load is 1-2 classes per year (sometimes less). The data showed no effect of school type on the pro-female gender bias, meaning that perceived ability to perform teaching responsibilities had little to nothing to do with the effect.
Are you quoting from the OP paper?
Universities with PhD and research programs are still mostly about teaching. That's where most of the money comes from.
All faculty members do the teaching. And undergrad teaching is basically babysitting nowadays.


Anyway, as I mentioned before, there was a similar study which came to the completely opposite conclusion; I remember a heated discussion about it where feminists were accusing men of sexism. It's all so confusing.

This is the only study (actually a set of 5 studies demonstrating high reliability and consistency) that has been posted on these boards where they actually experimentally manipulated, and thus tested, the independent causal impact of applicant gender on relative hiring rankings for University faculty.
So, it isn't all that confusing if you're not trying to invent reasons to dismiss the science.
All I am saying is there was another presumably good study which came to the opposite conclusion.
Obviously I am biased toward this OP study, but the fact that there are studies with the opposite conclusion says there are problems in this whole field.
 
I found the study rather strange. I have been on numerous search committees. We have never known the marital status of the applicants or whether they have children.

There were 5 studies; some had that info and some did not, and none of it notably altered the gender bias in favor of female applicants. The studies that included that info were those that did not use an actual CV for each candidate but rather a written narrative summary of the applicants supplied by a search committee who had met and interviewed the candidates. The info about spouses and kids was mentioned in passing within the context of the applicants' comments about potentially moving to the area, etc. In addition, it isn't uncommon for applicants to say things in their personal statements that imply this info, or to put something (like pics) linked on their webpages, which any thorough search committee member examines when deciding among the short-listed candidates. Regardless, the same result was obtained whether or not this info was there, and whether what was evaluated was CVs or narrative summaries by a search committee, showing the robustness, reliability, and rather context-independent nature of the pro-female bias.


In many cases, we don't even know the gender of the applicant (if the name is unisex or from a country that we are unfamiliar with).

The vast majority of the more common names imply a likely gender. In addition, these days it is rare for search committee members not to view a candidate's professional webpage, which almost always has pics, not to mention often links to personal interests. Also, the names of the candidates were not always included, because names are the most transparent way that gender is manipulated in studies of bias. In these studies, the gender of the applicant was implied by gendered pronouns in the search committee summaries. Again, these differences did not matter to the results.
 
I am not judging, I am merely explaining the data.
The fact is, different groups objectively have different distributions of skills, and taking that into account is not really sexism.

Again, there is no evidence of any objective difference in University teaching skill. In fact, the only data we really have are student ratings, which tend to favor male professors. Also, using group-level average differences to make a probability judgment when no individual-level data is available is one thing (though still sexism when you don't have any real data, just assumptions), but in this case the evaluators all had extensive, detailed information about teaching skill for each individual candidate. To ignore this in favor of a priori assumed differences between genders is the very definition of sexism.

The studies used a large sample from the 3 tiers of 4-year Universities, which differ in how much teaching is involved in the job: schools that don't have grad programs or much research and are all about teaching undergrads with a 5-8 course load per year, schools that have Master's-level grad programs with a lower teaching load, and what are called "Research 1" Universities with PhD grad programs, where getting grants and doing research are primary and the teaching load is 1-2 classes per year (sometimes less). The data showed no effect of school type on the pro-female gender bias, meaning that perceived ability to perform teaching responsibilities had little to nothing to do with the effect.
Are you quoting from the OP paper?
Universities with PhD and research programs are still mostly about teaching. That's where most of the money comes from.
All faculty members do the teaching. And undergrad teaching is basically babysitting nowadays.


Yes, the paper talks about "teaching-intensive" versus "research-intensive" Universities, and they actually know what they are talking about.
Research 1 Universities, are called that because of their intense research focus. The teaching load for new faculty is often 1-2 courses per year, which amounts to about 10% of their time. Their tenure is determined almost entirely by the research productivity and grant acquisition. The Universities get massive $$ from research grants that their faculty apply for and get. For every $1 that goes to the actual research, about 50 cents goes to the University more generally as "Indirect Cost Recovery". That means, a 5 million dollar grant by one faculty member brings 2.5 million into the Universities coffers, and there are nearly 300 such "Research Universities" whose faculty bring in a minimum of 40 million per year in Federal research grants. A huge % of the undergrad teaching at these schools is done by part-time "adjunct faculty" who cannot get tenure, are not hired via typical faculty searches and are just paid by the class each semester when the departments need someone. Prior teaching experience is barely a factor in tenure-track hiring decisions at such schools. In contrast, it is THE primary factor in hiring at "Liberal Arts" schools with no grad programs that often require 8 courses per year, leaving maybe 10% of the time to do research, without any serious expectations that faculty do much research. My friend just got hired at one such place and was told that 2-3 total papers in the next 5 years would get him tenure. In contrast, the Research University he got his PhD from wouldn't hire a person to even be considered for tenure in 5 years unless they already had 15 publications before they walked in the door, and another 15 in the next 5 years. In sum, being faculty at these schools and the requirements of their respective positions have very little in common.


All I am saying is there was another presumably good study which came to the opposite conclusion.
Obviously I am biased toward this OP study, but the fact that there are studies with the opposite conclusion says there are problems in this whole field.

There is not a "good study with the opposite conclusion". The only other study to use methods allowing for an experimental test of the causal impact of gender on tenure-track University hiring was done 18 years ago, and was limited to Psychology. The difference in recency means it can't be either the same or an opposite result, because it is a study of rapidly changing attitudes at a different time period. In addition, that single study had small sample sizes: each evaluator only examined a single job candidate, with only about 60 faculty evaluating each candidate (compared to 363 evaluators in Study 1 of the OP paper). Also, the effects of gender were mixed, depending on the objective strength of the applicants. When evaluating a highly competitive, objectively strong candidate with a realistic chance of getting hired at most Universities (as were the candidates in the OP study), they found no difference in the ratings of the candidate when the given name was male versus female. The male-named applicant only got higher ratings when the application was rather weak, representing a person fresh out of grad school with little experience or track record (zero papers under review or in preparation, and only 9 publications and 9 conference presentations, which is highly mediocre to below average for a tenure-track applicant in Psych).
In sum, the only other relevant finding is a single, non-replicated, small-sample study showing that 18 years ago there was no gender bias among strong candidates and a male bias among weak candidates. That doesn't even conflict with the current result, since each paper is about the attitudes and practices of its respective time, and the most comparable result (for the competitive candidate) merely showed no bias either way, not a bias opposite to the current studies'. Finally, each of the current studies has superior methods to that old one, and there are 5 of them showing cross-context replication.

The only other study done recently that wasn't an experiment, but at least controlled for rates of applying for jobs and other qualification confounds, produced a highly similar 2:1 bias in favor of hiring women to tenure-track positions. That is cited in the paper, as I mentioned earlier in the thread.
 
There were 5 studies; some had that info and some did not, and none of it notably altered the gender bias in favor of female applicants. The studies that included that info were those that did not use an actual CV for each candidate but rather a written narrative summary of the applicants supplied by a search committee who had met and interviewed the candidates. The info about spouses and kids was mentioned in passing within the context of the applicants' comments about potentially moving to the area, etc. In addition, it isn't uncommon for applicants to say things in their personal statements that imply this info, or to put something (like pics) linked on their webpages, which any thorough search committee member examines when deciding among the short-listed candidates. Regardless, the same result was obtained whether or not this info was there, and whether what was evaluated was CVs or narrative summaries by a search committee, showing the robustness, reliability, and rather context-independent nature of the pro-female bias.
In my experience, it is not common for candidates to mention such personal information in their applications, which makes me wonder about those studies. Extrapolating conclusions from such studies to real situations that do not contain such information seems a bit dodgy.


The vast majority of the more common names imply a likely gender.
That does not seem to be the case in my experience.

In addition, these days it is rare for search committee members not to view a candidate's professional webpage, which almost always has pics, not to mention often links to personal interests.
Not in my experience. We typically have 70 to 100 applicants, which makes such in-depth analysis extremely time-consuming.
Also, the names of the candidates were not always included, because names are the most transparent way that gender is manipulated in studies of bias. In these studies, the gender of the applicant was implied by gendered pronouns in the search committee summaries. Again, these differences did not matter to the results.
Maybe, maybe not. Given the results are not based on actual searches but on poorly simulated searches, I find the results not terribly convincing. Add in that some fields may wish to hire women in order to attract more female students, and I find the results even less convincing.
 
ronburgundy, you sound almost as if you were the author of the study.
I am going to continue to be suspicious of this whole field and its conclusions.
 
ronburgundy, you sound almost as if you were the author of the study.

No. I just carefully read and accurately understood the paper. Also, I am aware of the literature and am capable of and willing to evaluate this and other studies as a critical peer-reviewing scientist would. After suffering through countless claims based upon completely meaningless, improperly analyzed correlational data, it is a breath of fresh air to read a paper by researchers with at least minimal scientific competence and integrity. Had they found different results, such as opposing biases in different fields or negative effects of women taking parental leave in grad school, I would have thought the paper equally sound and informative. I was actually surprised that engineering faculty showed such bias in their private evaluations, without the social Affirmative Action pressure that making these rankings in front of your colleagues brings.


I am going to continue to be suspicious of this whole field and its conclusions.


You should be suspicious of all conclusions in all fields, including evolution. But if you want to be rational and not faith-based, then you should give weight to conclusions based upon the actual quality of their methodology and its ability to account for, counter, and rule out various objections and alternative explanations.
You are downgrading this study's conclusions because you have some vague memory of some past study that you presume reached an opposite conclusion, but cannot produce that study to confirm this. That is not a rational scientific approach. Odds are that you have some subjective probability that you assign to the issue of gender bias in academic hiring (even if it is currently at 50/50). As imperfect as these studies are, you cannot point to any empirical data that is more valid and methodologically rigorous that favors a different conclusion, and neither you nor anyone else in the thread has raised any valid objections to their methodology that are not countered by the variations in methods across the studies, all converging on the same finding. So while you can remain uncertain, it is the definition of irrational not to significantly shift your subjective probability toward there currently being a pro-female bias in academic hiring, given that the tally of experimental studies capable of testing the current bias is 5 to 0 in favor of a current female bias, plus 1 to 0 in correlational studies that actually control for application rates and other objective confounds with gender. That is not very ambiguous or unclear.
 
No. I just carefully read and accurately understood the paper. Also, I am aware of the literature and am capable of and willing to evaluate this and other studies as a critical peer-reviewing scientist would. After suffering through countless claims based upon completely meaningless, improperly analyzed correlational data, it is a breath of fresh air to read a paper by researchers with at least minimal scientific competence and integrity. Had they found different results, such as opposing biases in different fields or negative effects of women taking parental leave in grad school, I would have thought the paper equally sound and informative. I was actually surprised that engineering faculty showed such bias in their private evaluations, without the social Affirmative Action pressure that making these rankings in front of your colleagues brings.


I am going to continue to be suspicious of this whole field and its conclusions.


You should be suspicious of all conclusions in all fields, including evolution. But if you want to be rational and not faith-based, then you should give weight to conclusions based upon the actual quality of their methodology and its ability to account for, counter, and rule out various objections and alternative explanations.
You are downgrading this study's conclusions because you have some vague memory of some past study that you presume reached an opposite conclusion, but cannot produce that study to confirm this. That is not a rational scientific approach. Odds are that you have some subjective probability that you assign to the issue of gender bias in academic hiring (even if it is currently at 50/50). As imperfect as these studies are, you cannot point to any empirical data that is more valid and methodologically rigorous that favors a different conclusion, and neither you nor anyone else in the thread has raised any valid objections to their methodology that are not countered by the variations in methods across the studies, all converging on the same finding. So while you can remain uncertain, it is the definition of irrational not to significantly shift your subjective probability toward there currently being a pro-female bias in academic hiring, given that the tally of experimental studies capable of testing the current bias is 5 to 0 in favor of a current female bias, plus 1 to 0 in correlational studies that actually control for application rates and other objective confounds with gender. That is not very ambiguous or unclear.
I have not read and am not going to read this paper.
I will just try to trash it without reading it, using only what was quoted here.
Looks like they observed this "bias" in three fields and no bias in economics. Have they explained that?
What is so damn special about economics that makes economists objectively unbiased?
Were these people aware or suspicious that they were taking part in an experiment?
Maybe the economists were not aware, and the rest were and tried to overcompensate?
Have they thought about that?
Maybe economists have procedures in place to prevent this bias, and the experimenters were not aware of that?
In any case, without an explanation, that fact puts their conclusion in question; you have to understand that.
 
In my experience, it is not common for candidates to mention such personal information in their applications, which makes me wonder about those studies. Extrapolating conclusions from such studies to real situations that do not contain such information seems a bit dodgy.

They provide empirical evidence that most academics disagree with you and found the application materials highly realistic and plausible. They did a validation study in which a national sample of academics gave them feedback about the plausibility of the materials. In addition, as I have already explained many times, the information you find "uncommon" was not always included, so they have clear evidence that including it had no significant impact on the results. IOW, there is no need to rely on your unreasoned speculations about it, because there is direct empirical evidence that your speculations are wrong.

The vast majority of the more common names imply a likely gender.
That does not seem to be the case in my experience.

Wow. That comment should be linked in the Wiki for "Grasping at straws". If that is "your experience", then you live in an alternate reality from the 99.9% of humanity who by age 5 could reliably sort-by-gender the most common names in their culture (and most applicants come from the culture of the University).

In addition, these days it is rare for search committee members not to view a candidate's professional webpage, which almost always has pics, not to mention often links to personal interests.
Not in my experience. We typically have 70 to 100 applicants, which makes such in-depth analysis extremely time-consuming.

The vast majority of applicants are eliminated with little effort by the fact that they don't even have the proper degree, or their record is undeniably and extremely inferior to the top 10% of the pool. That leaves a handful of applicants whose websites would be visited to glean more info to help identify and rank a short list. Again, this is a moot point, since almost all adults can and do immediately and reliably associate a gender with the majority of the names that would be on an applicant list.

Also, the names of the candidates were not always included, because names are the most transparent way that gender is manipulated in studies of bias. In these studies, the gender of the applicant was implied by gendered pronouns in the search committee summaries. Again, these differences did not matter to the results.

Maybe, maybe not.

Nope, no maybe, just definitely not. It isn't a matter of opinion. They directly tested whether these factors mattered and showed that they did not. Like most opinions you express, this one is also directly refuted by scientific data.

Given the results are not based on actual searches but on poorly simulated searches,

They provide empirical data that most academics find the simulations realistic, and direct empirical tests showing that every aspect that you find "poor" had no impact on the results.


I find the results not terribly convincing.
Of course not, because they conflict with your purely faith-based a priori belief. Your prior use of and agreement with "research" on various topics employing far inferior and less valid empirical methods suggests that whether you are "convinced" has zero to do with reasoned evaluation of methods and 100% to do with whether you were already convinced prior to the research. Your absurd invention of the notion that most academics have no perception of applicant gender, even when they know the name, is itself highly valid evidence for the conclusion that you are willing to invent any excuse you can to not be "convinced".

Add in that some fields may wish to hire women in order to attract more female students, and I find the results even less convincing.

Great, yet more evidence that your being "convinced" is rooted in political stance and emotion, and has no basis in reasoned evaluation of the methods and data.
Whether the pro-female bias reliably and validly demonstrated by these studies is motivated by a political or educational agenda that you feel is positive has no bearing on the results themselves or their validity. The underlying motive for the bias is a completely separate question from the existence of the bias.
That said, the data show that even if some fields have such a motive, it is not plausibly the motive for the same bias in other fields, such as Psychology and Biology, which already have 50% female faculty and, in Psychology, 75% female students. You not only further demonstrated that you reject the results not on method but because you think they conflict with your a priori beliefs about educational goals; you also demonstrated that you have no concept of the logical and scientific difference between the 3 issues of 1) Is there a bias?; 2) What is the motivation for it?; and 3) Is there a politically/morally justifiable motive for it?
 
They provide empirical evidence that most academics disagree with you and found the application materials highly realistic and plausible....
Well, since there was no actual hiring (i.e. no actual consequences for the results), that is still unconvincing.

Wow. That comment should be linked in the Wiki for "Grasping at straws". If that is "your experience", then you live in an alternate reality from the 99.9% of humanity who by age 5 could reliably sort-by-gender the most common names in their culture (and most applicants come from the culture of the University).
In my area, most of the applicants come from other countries, and I am not familiar with how their names indicate gender.


The vast majority of applicants are eliminated with little effort by the fact that they don't even have the proper degree, or their record is undeniably and extremely inferior to the top 10% of the pool. That leaves a handful of applicants whose websites would be visited to glean more info to help identify and rank a short list. Again, this is a moot point, since almost all adults can and do immediately and reliably associate a gender with the majority of the names that would be on an applicant list.
You simply have no idea what goes on in my area. We get 70 to 100 applicants who meet the qualifications in the notice of vacancy. And as I mentioned above, a large portion of the applicants are foreign, and I am unfamiliar with how their names indicate gender.

They provide empirical data that most academics find the simulations realistic, and direct empirical tests showing that every aspect that you find "poor" had no impact on the results.
Totally irrelevant - there were no searches.


Of course not, because they conflict with your purely faith-based a priori belief. Your prior use of and agreement with "research" on various topics employing far inferior and less valid empirical methods suggests that whether you are "convinced" has zero to do with reasoned evaluation of methods and 100% to do with whether you were already convinced prior to the research. Your absurd invention of the notion that most academics have no perception of applicant gender, even when they know the name, is itself highly valid evidence for the conclusion that you are willing to invent any excuse you can to not be "convinced".
I find the result unconvincing because the trials were contrived and there were no actual searches or hires. It doesn't matter what other academics thought about the perceived realism - it was not real. No amount of posturing on your part can change that basic fact.


Great, yet more evidence that your being "convinced" is rooted in political stance and emotion, and has no basis in reasoned evaluation of the methods and data.
Whether the pro female bias reliably and validly demonstrated by these studies is motivated by a political or educational agenda that you feel is positive has no bearing on the results themselves or their validity. The underlying motive for the bias is a completely separate question from the existence of the bias.
That said, the data show that if some fields have such a motive, it is not plausibly the motive for the same bias in other fields, such as Psychology and Biology, which already have 50% female faculty and, in Psychology, 75% female students. You not only further demonstrated that you reject the results not on method but because you think they conflict with your a priori beliefs about educational goals; you also demonstrated that you have no concept of the logical and scientific difference between three issues: 1) Is there a bias? 2) What is the motivation for it? 3) Is there a politically/morally justifiable motive for it?
Sigh. First, if attracting more students to study one's field is a legitimate factor in hiring faculty, and if faculty believe (rightly or wrongly) that, all other things being equal, a woman is more likely to attract students than a man, then it is no more of a bias to prefer women over men than it is to prefer PhDs to non-PhDs.

Second, if that belief is held by faculty (even if it is invalid), then the proportion of women in the faculty or student body is not relevant.

Third, you have no idea what my a priori beliefs are. An intellectually honest person can find results unconvincing even when they confirm a priori beliefs.
 
No. I just carefully read and accurately understood the paper. Also, I am aware of the literature and capable of and willing to evaluate this and other studies as would a critical peer-reviewing scientist. After suffering through countless claims based upon completely meaningless, improperly analyzed correlational data, it is a breath of fresh air to read a paper by researchers with at least minimal scientific competence and integrity. Had they found different results, such as opposing biases in different fields or negative effects of women taking maternity leave in grad school, I would have thought the paper equally sound and informative. I was actually surprised that engineering faculty showed such bias in their private evaluations, without the social Affirmative Action pressure that comes with making these rankings in front of one's colleagues.


You should be suspicious of all conclusions in all fields, including evolution. But if you want to be rational and not faith-based, then you should give weight to conclusions based upon the actual quality of their methodology and its ability to account for, counter, and rule out various objections and alternative explanations.
You are downgrading this study's conclusions because you have some vague memory of a past study that you presume reached an opposite conclusion, but you cannot produce that study to confirm this. That is not a rational scientific approach. Odds are that you assign some subjective probability to the issue of gender bias in academic hiring (even if it is currently at 50/50). As imperfect as these studies are, you cannot point to any empirical data that is more valid and methodologically rigorous and that favors a different conclusion, and neither you nor anyone else in the thread has raised any valid objection to their methodology that is not countered by the variations in the methods across the studies, all converging on the same finding.

So while you can remain uncertain, it is the definition of irrational not to significantly shift your subjective probability toward there currently being a pro-female bias in academic hiring, given that the tally of experimental studies capable of testing the current bias is 5 to 0 in favor of a female bias, plus 1 to 0 in correlational studies that actually control for application rates and other objective confounds with gender. That is not very ambiguous or unclear.
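The "shift your subjective probability" point can be made concrete with a toy Bayesian sketch. The likelihood ratio of 3 per study is an assumed, illustrative number, not anything from the paper; the point is only that several independent results landing on the same side should move an initially agnostic probability a long way:

```python
def update(prior, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * LR
    odds = (prior / (1.0 - prior)) * likelihood_ratio
    return odds / (1.0 + odds)

p = 0.5                 # start agnostic about a pro-female bias
for _ in range(5):      # five experiments, all on the same side
    p = update(p, 3.0)  # LR = 3 is an assumed, illustrative value

print(round(p, 3))      # 0.996 under these made-up numbers
```

With any likelihood ratio above 1, five same-direction results compound multiplicatively in odds, which is why remaining at "50/50" after a 5-to-0 tally requires treating each study as carrying no evidential weight at all.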

I have not read and am not going to read this paper.
I will just try to trash it without reading it, using only what was quoted here.

Funny, when I first read that last line, I thought you were accurately mocking all the others in the thread raising invalid objections that are all empirically addressed and refuted by the paper itself. At least you admit to not actually reading it, but it's going to show in your invalid interpretations below. Also, if you aren't going to read a paper with the only experimental data on the question, then you clearly have little interest in forming a reason-based view on the issue. That is fine, to each their own. But to be honest with yourself and others, you must then admit either that you are forming a non-rational view based more in ideology than evidence, or that you have no interest in forming any view on it either way, which makes it odd that you are even looking at, let alone posting in, this thread.

Looks like they observed this "bias" in three fields and no bias in economics. Have they explained that?
What is so damn special about economics that makes economists objectively unbiased?

They did observe an overall bias in economics, and it was not statistically different from the other fields.

fromthearticle said:
"There was no evidence that women were preferred more often in some fields than others: women were strongly preferred in biology, engineering, economics, and psychology, with χ2 ranging from 3.89 to 19.17 and all Ps < 0.05."

What you are misinterpreting is that when they also divided each discipline by gender of the evaluator and analyzed the sub-groups separately, there was no significant bias among male faculty in economics but a still-strong 2:1 bias among female faculty in economics. Note that they did not find a statistically significant difference between the bias of male and female faculty in economics, nor a difference from other disciplines. What this means is that they ran follow-up analyses in which they broke the sample into sub-groups and analyzed the bias within each one separately, even though the more rigorous overall analysis failed to support there being a notable difference between genders or between disciplines. That isn't terrible on their part, because they fully acknowledge it and emphasize that the bias emerges overall across faculty gender and discipline. One problem with breaking the sample into sub-groups when the overall (aka "omnibus") test fails to support doing so is that you wind up analyzing much smaller sample sizes. Even though there were 363 people in that study, it so happened that male economists were the smallest sub-group, with only 31 people in it, so looking at it separately gives an unreliable estimate. That said, maybe that small-sample estimate is in fact a good population estimate for male econ faculty. It is plausible that male economists don't show a bias despite males in the other 3 disciplines doing so.
Why is a separate question from whether the bias exists (a distinction many in this thread fail to grasp). Economists typically show up as more conservative and less liberal than other disciplines, including the "hard" sciences. A study showed that 39% of econ faculty identify as "politically conservative", which is as high as Business faculty and notably higher than all other disciplines except Nursing (Physics and Math have only 11% "conservatives"). That also makes it likely that the "liberal" economists are relatively moderate in their liberalism. One would think that their conservatism would be especially likely to show up on the issue of hiring standards and affirmative action, since that is very much an economics issue, tied to "free market" solutions versus regulation and social engineering solutions to existing outcome inequalities. But notice they still did not show a pro-male bias, just not a pro-female bias.
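The small-subsample point above is easy to check numerically. A minimal sketch (the counts are hypothetical, chosen only to match a 2:1 split at the reported group sizes; the paper's actual cell counts may differ): an exact two-sided binomial test shows that a true 2:1 preference typically fails to reach significance in a cell of 31 evaluators, while the same rate is overwhelmingly significant at the full-study sample size.

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: total probability of all
    outcomes no more likely under the null than the observed k."""
    pmf = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    pk = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= pk * (1 + 1e-9))

# Hypothetical 2:1 split in a cell the size of the male-economist
# sub-group (21 of 31 preferring the female applicant):
small = binom_two_sided_p(21, 31)    # roughly 0.07, not significant

# The same 2:1 rate at the full-study size (242 of 363):
large = binom_two_sided_p(242, 363)  # far below 0.001

print(small > 0.05, large < 0.001)
```

So even if male economists shared the exact same 2:1 preference as everyone else, a cell of 31 would miss it at the conventional .05 threshold a substantial fraction of the time, which is why a null result in the smallest sub-group is weak evidence against the overall finding.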


Were these people aware/suspicious that they were taking part in the experiment?
They went to very great lengths to mask from the subjects that gender of the applicant was the hypothesized factor in the study, and they did follow-up interviews showing that few subjects suspected this. The methods by which they did this are complex (and actually quite clever).
If you're not interested enough in the facts of the study to read it, I can't be bothered to reproduce them for you here.

Maybe economists were not aware and the rest were and tried to overcompensate?

I think my explanation for the male economists above is more plausible, and it actually has prior data to support it. Why would male economists be less aware of the gender-bias nature of the study?

Maybe economists have procedures in place to prevent this bias and experimenters were not aware of that?
See above.

In any case, without an explanation, that fact puts their conclusion in question; you have to understand that.

See above. Your "fact" isn't a fact, and the actual fact (lack of bias among the sub-sample of male economists when analyzed separately) doesn't call their conclusion into question. They fully acknowledge this fact in their conclusions, but also note that this was the sole exception among the 8 sub-samples that didn't show the strong 2:1 female bias, and it didn't show an opposite bias, just no bias. Take 99% of the scientific results that you fully accept, break the sample up into relatively small sub-samples on various traits of the participants, and analyze each separately. The effect found overall will fail to emerge in some of those sub-samples. If that is your criterion for doubting results, then you need to doubt 99% of the findings you've ever heard in any area of science.
 
The reason I don't read it is because all these studies are invariably weak or simply trash. Regardless of their findings.
What you are misinterpreting is that when they also divided each discipline by gender of the evaluator, and analyzed them separately, there was no significant bias among male faculty in economics but a still at strong 2:1 bias among female faculty in economics. Note that they did not find a statically significant difference between the bias of male and female faculty in economics, nor a difference from other disciplines
That does not parse, try again.


I will also add, I really don't see how they can hide the fact that this is an experiment.
 