
Two metastudies: gun ownership rates vs violent crime rates

Underseer
http://annals.org/article.aspx?articleid=1814426

http://www.realclearscience.com/blo...een_gun_ownership_rates_and_higher_crime.html

This is a hot-button issue, and I'm hoping to shy away from the usual partisan hollering on guns. We've got two meta-analyses showing conflicting results, but what you should take away from both studies is that whatever conclusions we can draw on this topic are weak at best.

I bring this up because many of us are very concerned about poorly supported science claims. Even if you pick the metastudy that supports "your side" of this issue, reading it should lead you to conclude that, at best, it supports a very weak conclusion, and that strong claims about the correlation, or lack thereof, between gun ownership rates and gun crime rates should be avoided.

We spend a lot of time either yelling at anti-science people or yelling about anti-science people. If we're going to be consistent about things, then we should probably not focus on the question of whether or not gun availability correlates with crime rates, because any conclusion you try to draw either way will be a weak one. There are still plenty of other aspects of the gun control policy discussion to yell yourself hoarse about if that's what you want to do, but if we're going to yell at people for drawing poor conclusions about GMOs, vaccines, etc. from the available science, then we should probably avoid claiming that there is no correlation or that there is a correlation, because quite frankly the evidence is all over the place.
 
The problem here seems to be that those doing the studies ignore a basic scientific principle: correlation does not imply causation. Drawing conclusions from correlations is not science but a (propaganda) technique employed by advocates in support of their causes.
 

Yeah, self-defense gun ownership rates would tend to follow crime rates. Since you have a scenario with both A -> B and B -> A, correlations aren't evidence of which arrow is bigger.
 
Not only that, but both could be accounted for by another factor, or each could be accounted for by independent factors that have no relation whatsoever to each other.
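To make that concrete, here is a minimal Python sketch (my own illustration with invented numbers, not data from either study): data generated under "crime drives gun buying" and data generated under a shared third factor both yield the same kind of positive correlation, so the correlation by itself can't tell you which story is true.

```python
# Sketch with invented numbers: the same positive correlation arises
# whether B -> A (crime drives gun buying) or a lurking third factor C
# drives both. The correlation alone cannot distinguish the stories.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Story 1: reverse causation. Crime rises first; gun buying follows.
crime1 = rng.normal(size=n)
guns1 = 0.5 * crime1 + rng.normal(size=n)

# Story 2: common cause. C raises both; neither causes the other.
c = rng.normal(size=n)
guns2 = 0.5 * c + rng.normal(size=n)
crime2 = 0.5 * c + rng.normal(size=n)

print(np.corrcoef(guns1, crime1)[0, 1])  # ~0.45, positive
print(np.corrcoef(guns2, crime2)[0, 1])  # ~0.20, also positive
```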

ETA:
I thought Wiki might have a better description of the fallacy and they do:

http://en.wikipedia.org/wiki/Correlation_does_not_imply_causation

Correlation does not imply causation is a phrase used in science and statistics to emphasize that a correlation between two variables does not necessarily imply that one causes the other.[1][2] Many statistical tests calculate correlation between variables. A few go further and calculate the likelihood of a true causal relationship; examples are the Granger causality test and convergent cross mapping.

The counter assumption, that correlation proves causation, is considered a questionable cause logical fallacy in that two events occurring together are taken to have a cause-and-effect relationship. This fallacy is also known as cum hoc ergo propter hoc, Latin for "with this, therefore because of this", and "false cause". A similar fallacy, that an event that follows another was necessarily a consequence of the first event, is sometimes described as post hoc ergo propter hoc (Latin for "after this, therefore because of this").
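Since the excerpt mentions the Granger causality test, here is a minimal sketch of what running one looks like in Python with statsmodels, on made-up time series (the variables and numbers are illustrative assumptions, not anything from the studies above):

```python
# Granger causality sketch on synthetic series: does the second column
# help predict the first beyond the first's own past? Data invented.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
t = 200
x = rng.normal(size=t)
y = np.zeros(t)
for i in range(1, t):
    y[i] = 0.6 * x[i - 1] + rng.normal()  # y follows last period's x

# statsmodels tests whether column 2 "Granger-causes" column 1.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
print(res[1][0]["ssr_ftest"][1])  # lag-1 F-test p-value, tiny here
```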
 
From everything I've seen, having or not having guns is pretty much irrelevant to the crime rate. The only thing I've ever seen them do is make it more likely that someone will die if a crime occurs.
 
How about we start here?
Firearms cause an estimated 31 000 deaths annually in the United States (1).

I'm sure the "people kill people" people are having a nice laugh about this article's first sentence.

So much for correlation and causality, yes?

By the way, did anyone read that personal narrative about electrical stimulation and cancer in SA?

I'm thinking doctors are having trouble being doctors when the profit motive keeps erupting.
 
The problem here seems to be that those doing the studies ignore a basic scientific principle: correlation does not imply causation. Drawing conclusions from correlations is not science but a (propaganda) technique employed by advocates in support of their causes.

The articles both mention the things that are done to make sure this isn't happening. Are they doing enough? I'm too lazy to read each of the studies used in the metastudies to find out.

- - - Updated - - -


Yeah, self-defense gun ownership rates would tend to follow crime rates. Since you have a scenario with both A -> B and B -> A, correlations aren't evidence of which arrow is bigger.

One of the articles I cited in the OP directly addresses this.
 
Not only that, but both could be accounted for by another factor, or each could be accounted for by independent factors that have no relation whatsoever to each other.

Of course. It's just that there's a very obvious counter-explanation; this is worse than the typical example.
 
The articles both mention the things that are done to make sure this isn't happening. Are they doing enough? I'm too lazy to read each of the studies used in the metastudies to find out.
Not that I see. They have both only developed hypotheses based on the data (absolutely nothing wrong with that), but they present those hypotheses as conclusions (there is a hell of a lot wrong with that). Hypotheses mean nothing if not tested, and, unfortunately, it is difficult, if not impossible, to design such tests because of concern over serious breaches of ethics and/or the need for oppressive authoritarian governmental control. You may or may not be aware that the overwhelming majority of hypotheses fail once they are subjected to rigorous testing, and at least one (if not both) of these hypotheses must necessarily fail.

Designing tests to evaluate each hypothesis would be no problem if not for ethical considerations and Constitutional law.
 

Your argument makes no sense.

This is a metastudy that combines the conclusions of a large number of scientific studies, and you are claiming that all of those scientific studies failed to test their hypotheses?
 
I don't know what you call "testing their hypotheses," but I have never seen such a sociological study where all other variables (and there are possibly hundreds of factors that could account for the observed correlation) are held unchanged and only the factor they hypothesize to be the cause is changed, followed by observation to see whether what they hypothesize to be the effect actually happens.

What I see is studies starting with the belief that there is some specific cause and effect, then amassing societal data to find correlations (again, correlation does not imply causation). They then interpret the correlations based on their already existing beliefs. This is why we can get two mutually opposed "conclusions" from the same data set (with a lot of hand-waving justification on both sides): the starting hypotheses were not tested. In effect, what they started with is what they "conclude," and all those who hold the same beliefs nod their heads, thinking their belief was "proven," while all those who hold opposing beliefs shake their heads, no, and point to the opposite "conclusion" that agrees with their beliefs.
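For what it's worth, the "hold everything else unchanged" point is easy to dramatize in a toy Python simulation (entirely invented numbers, not either study's data): an observational correlation appears because of a confounder, while randomly assigning the supposed cause, which is what an actual experiment would do, makes it vanish.

```python
# Toy illustration: a confounder produces an observational correlation
# even when the hypothesized cause does nothing; randomizing the
# "exposure" (what a real experiment does) removes it. Numbers invented.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

confounder = rng.normal(size=n)              # some background factor
outcome = confounder + rng.normal(size=n)    # driven by it alone

# Observational world: exposure also tracks the confounder.
exposure_obs = confounder + rng.normal(size=n)
print(np.corrcoef(exposure_obs, outcome)[0, 1])   # ~0.5, looks "real"

# Experimental world: exposure assigned at random, all else unchanged.
exposure_rand = rng.normal(size=n)
print(np.corrcoef(exposure_rand, outcome)[0, 1])  # ~0.0, effect gone
```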

As I said, the overwhelming majority of hypotheses do not stand up to rigorous testing and have to be scrapped. It is unfortunate that rigorous testing is, for the most part, not possible in the social sciences, so we end up with a lot of "just-so stories" (hypotheses) being accepted as science.

Perhaps you could point to some study in the social sciences where the person doing the research "testing their hypothesis" found that their hypothesis was wrong. I don't mean where they changed their mind and accepted someone else's hypothesis because it sounded more reasonable. I mean where they were trying to verify their hypothesis and found it wrong themselves through their testing. There are an untold number of such revelations in the hard sciences - this is essentially how hard science is done, learning by proving your ideas wrong.
 

The hypothesis being tested in social science is the null hypothesis. Generally there are two types of study: those by individuals who have a model they would like to test, and those by individuals who want to extend current theory to new data. If the null is rejected, an explanation is provided for the differences found by the experimental methods which fits into current validated theory, usually a micro-theory extending current theory. Sometimes a new theory that can explain the data better than previous theories is presented. The latter is very, very rare and is subject to close review and demands for replication.

Studies that don't achieve significant results are generally not published, are published in journals of non-significant results, or are considered in the discussion where other parts of the study achieved significant results.
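That file-drawer effect can be demonstrated with a toy simulation (Python with invented numbers; a generic illustration, not a claim about these particular literatures): even with a true effect of zero, averaging only the studies that cleared p < 0.05 produces a published "effect" well above zero.

```python
# File-drawer sketch: true effect is zero, but if only studies reaching
# p < .05 are published, the published record shows a spurious effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
published = []
for _ in range(1_000):                   # 1,000 small studies
    a = rng.normal(size=30)              # "control" group
    b = rng.normal(size=30)              # "treatment": true effect is 0
    t_stat, p = stats.ttest_ind(a, b)
    if p < 0.05:                         # only significant -> journal
        published.append(b.mean() - a.mean())

print(len(published))                    # ~50 studies (5% by chance)
print(np.mean(np.abs(published)))        # average |effect| well above 0
```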
 
Not all meta-studies are created equal.

Science is not limited to falsification of causal claims. That fails to capture most of what goes on in every scientific field and is a form of radical Popperianism long ago rejected by most philosophers of science and practicing scientists as naively simplistic.

Examining covariance patterns can be science, and that is what the researchers here are doing.

The studies in question are testing for patterns of reliable covariance among events/states, namely legal "gun availability" and "crime". However, both constructs are broad, and the two studies differ greatly in what they are really studying. This is not unlike saying two biologists are studying "mammals", but one is studying dogs and the other cats, which can make all the difference. Thus, there is no contradiction between the findings. Mostly, they ask different questions, and where they ask the same question, they get the same answer.

The first study (which finds the positive covariance) is studying actual personal access to a gun, by ownership or cohabitation with the gun owner. The outcome they are studying is that person's later involvement in a homicide or suicide. They measure the variables at the level of each individual person, which greatly reduces many of the confounds that apply when the variables are measured at an aggregate level like town, state, etc. There are still third-variable confounds, but far fewer with this approach. Also, the homicides and suicides always occur later in time than the measured gun access, thus ruling out reverse causality.

The second study (which finds null results for some things and positive associations for others) is NOT studying any personal-level variables, and thus not studying actual access to guns. Rather, it relies only on studies that measure how many guns are owned within different geographic regions or time points, which has very minimal relationship to the variable of actual gun access by different people. IOW, it only relates to gun access if one presumes that a person who knows no one with guns has more access to guns simply because another guy across the state has a gun. In addition, two states with identical gun ownership rates will vary greatly in how proximal each person is to a gun, depending on factors like the number of people in the state, the average population density per square mile, and the exact way the population is distributed (in a few clusters or evenly dispersed). This isn't just an issue of confounds but of massive random measurement error, in which the actual plausible mechanism (personal access to a gun) is only a tiny percentage of the variance captured by what is being measured. This makes "null" results like those found very likely, regardless of whether gun access is correlated with (or directly causes) violence. In addition, the outcome variable is not homicide or suicide, but rather crime in general, including non-violent crimes (auto theft and drug use) and violent crimes where guns are frequently not involved (assault and rape).
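The measurement-error argument here is the standard attenuation effect, and a tiny Python sketch (invented numbers) shows the mechanism: a noisy proxy for the real variable drags a genuine correlation toward zero.

```python
# Attenuation sketch: an aggregate ownership rate is a noisy proxy for
# personal gun access, so even a genuine access-violence correlation
# shrinks toward a "null" when measured through the proxy. Invented data.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

access = rng.normal(size=n)                   # true personal access
violence = 0.5 * access + rng.normal(size=n)  # genuine relationship

proxy = access + 3.0 * rng.normal(size=n)     # signal buried in noise

print(np.corrcoef(access, violence)[0, 1])    # ~0.45
print(np.corrcoef(proxy, violence)[0, 1])     # ~0.14, looks near-null
```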
When he limits the analysis to homicide as the outcome, there is a positive correlation with gun ownership rates, and the studies that don't find one are those that don't actually test for one, but rather test for a correlation that is independent of many factors it is not reasonable to expect it to be independent of. IOW, adding "control variables" is wrongly presumed by many (including this researcher) to test whether the effects are "real". This is not true. While correlations that hold up when many control variables are added are more impressive, those that do not hold up are not thereby shown to be spurious, non-causal, or not "real".
It merely means that the control variables cannot be ruled out as alternative explanations, but also are not any more plausible as explanations than the main predictor. For example, imagine that low education makes a person more likely to buy a gun, and that gun enables a homicide that would not have otherwise occurred. The gun played a causal role in the homicide. Yet, if you control for their lower education, that gun-homicide relationship may drop to non-significant. Thus, such "null" results after controls are not any kind of evidence against the relationship, just a failure to rule out some alternative explanations for it, often because the variables make it near impossible to do so. What is telling is that the relationships are virtually never negative, even after the control variables are entered, and one would expect some negative relations if the true relationship were spurious and non-causal.
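A toy regression (Python with statsmodels; the variable names and coefficients are my own invented illustration) shows how this can happen: when a control variable almost fully determines the predictor, the predictor's standard error balloons and a real causal effect can test as non-significant.

```python
# Over-controlling sketch: gun genuinely causes the outcome, but its
# own upstream cause (education) is entered as a control; collinearity
# inflates the standard error and significance is often lost. Invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 200

education = rng.normal(size=n)              # upstream cause
gun = education + 0.2 * rng.normal(size=n)  # education strongly -> gun
outcome = 0.3 * gun + rng.normal(size=n)    # gun genuinely -> outcome

m1 = sm.OLS(outcome, sm.add_constant(gun)).fit()
m2 = sm.OLS(outcome,
            sm.add_constant(np.column_stack([gun, education]))).fit()

print(m1.pvalues[1])  # gun alone: typically p < .05
print(m2.pvalues[1])  # gun with the control: often p > .05 anyway
```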

The bottom line is that only the first meta-study examines studies that actually test whether personal access to guns predicts later personally involved gun violence, and it finds a highly consistent and strong association (a 3-fold increase) across many studies. The second study is basically a strawman attack, testing a claim that no one makes, namely whether being in a state with more guns predicts more crime in general, independently of many things that it makes little sense to think an actual causal relation would be independent of.
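For readers unfamiliar with how a meta-analysis pools its studies, here is a bare-bones fixed-effect inverse-variance pooling of log odds ratios in Python. The per-study odds ratios and standard errors below are invented placeholders, not the actual numbers from either paper.

```python
# Bare-bones fixed-effect meta-analysis: pool per-study log odds ratios,
# weighting each by the inverse of its variance. Placeholder data only.
import numpy as np

odds_ratios = np.array([2.5, 3.4, 2.9, 4.1, 3.0])  # hypothetical studies
se_log_or = np.array([0.30, 0.25, 0.40, 0.35, 0.20])

log_or = np.log(odds_ratios)
w = 1.0 / se_log_or**2                  # inverse-variance weights
pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = 1.0 / np.sqrt(np.sum(w))

print(np.exp(pooled))                                         # pooled OR, ~3
print(np.exp(pooled + 1.96 * pooled_se * np.array([-1, 1])))  # 95% CI
```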
 
Acting as a scientist does not make what one does science. I'm all in favor of people looking at and manipulating data in a manner where others can use common understandings anchored in measurable standards to find things out and to communicate those things to others. But that's not enough to be science.

As it stands now, we have three or four overarching models that can all be related to one another through experiment, standards, and data, as well as by the training and performance requirements for those who practice in those fields. Those are the sciences. Other disciplines that model themselves after those sciences, yet don't have data that can be tied to what we know of the physical world, may, in time, become sciences. Sure, their investigators work as scientists, but these disciplines are not sciences. That is true simply because they have not yet arrived at the point where they have an overarching theory that ties into existing physical theory.

For example, to the extent that psychologists in the neurosciences link to physics (psychophysics) and biology (nervous activity, function, and organization), they are scientists doing science in fields that, as yet, have no overarching theory beyond that of physics and biology. As for the others, historians who use anthropological and statistical methods are scientists using scientific tools in fields that are not sciences.
 