• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Isoprene correlates with movie ratings

Underseer

https://www.cnbc.com/2018/11/27/gen...p-threatens-to-cut-subsidies-for-company.html

[YOUTUBE]h-HAX3nWORI[/YOUTUBE]

Watson makes a good point about methodology (caveat: I'm not a scientist by any stretch of the imagination, so I'm far from an expert on methodology).

Good methodology
Based on our understanding of neuroscience, we think organic compound X in the air will correlate with movie rating. Let's show people a bunch of movies and measure the amount of organic compound X in the air at each showing.
Bad methodology
Hey, let's show people a bunch of movies, measure all of the organic compounds in the air, and arbitrarily pick the one that happens to correlate with the movie rating.

A scientific hypothesis needs to be falsifiable before you design the methodology.

The hypothesis in this case would be "hey, if we test for enough organic compounds in the air around a mass of humans, I bet we can find one that correlates with just about anything!"

Of course, this does not prove a causal link.
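The danger of picking the winning compound after the fact is easy to demonstrate with a quick simulation (a sketch with made-up noise data, not the study's actual measurements): even when every "compound" and every rating is pure random noise, the best of a hundred candidates still shows a respectable correlation.

```python
import random
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(42)
n_movies = 20        # hypothetical number of screenings
n_compounds = 100    # hypothetical number of candidate compounds, all pure noise

ratings = [random.gauss(0, 1) for _ in range(n_movies)]
compounds = [[random.gauss(0, 1) for _ in range(n_movies)]
             for _ in range(n_compounds)]

# Cherry-pick the compound with the strongest correlation, exactly as in
# the "bad methodology" scenario above.
best = max(abs(pearson(c, ratings)) for c in compounds)
print(f"best |r| among {n_compounds} random compounds: {best:.2f}")
```

With only 20 screenings and 100 candidates, the best correlation out of pure noise typically lands well above 0.5, which would look impressive if reported on its own.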

Trigger Warning
In the past, Rebecca Watson has argued that women are not mere objects that exist for the titillation and pleasure of men. Forum members of a more delicate nature (e.g. conservatives, libertarians, or anyone who uses terms like "social justice warrior") should think twice before clicking on the above video. I will not be held responsible for any rage or tears that result.
 

So, what if they rerun the experiment, this time measuring only the compound they found the correlation with in the first experiment, and they find that the correlation still exists? Would this not be following the "Good methodology" and yield a more valid conclusion?

Moose
 
So, what if they rerun the experiment, this time measuring only the compound they found the correlation with in the first experiment, and they find that the correlation still exists? Would this not be following the "Good methodology" and yield a more valid conclusion?

Moose
Indeed it would. In fact "Hey, let's show people a bunch of movies, measure all of the organic compounds in the air, and arbitrarily pick the one that happens to correlate with the movie rating." is not necessarily "bad methodology", depending on the purpose of the experiment and the conclusions drawn. If this is done as an exploratory experiment, and we refrain from concluding that this one correlation is significant, then there is no problem. Such an exploratory experiment is fine for identifying potential compounds of interest, which could then be experimented with. The part that is critically important is to understand the limitations of the experimental design.

Peez
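Moose's proposed re-run is essentially the exploratory/confirmatory split Peez describes, and a sketch (hypothetical noise data, plain Python) shows why it works: a compound cherry-picked from noise in one experiment almost always fails to correlate in fresh data.

```python
import random
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(7)

# Exploratory pass: 100 noise "compounds" over 20 screenings;
# cherry-pick whichever one best tracks the (also random) ratings.
ratings = [random.gauss(0, 1) for _ in range(20)]
compounds = [[random.gauss(0, 1) for _ in range(20)] for _ in range(100)]
rs = [pearson(c, ratings) for c in compounds]
best_idx = max(range(100), key=lambda i: abs(rs[i]))
print(f"exploratory |r| for picked compound: {abs(rs[best_idx]):.2f}")

# Confirmatory pass: fresh screenings, measuring ONLY that one compound.
# Since everything here is noise, the correlation should collapse.
new_ratings = [random.gauss(0, 1) for _ in range(50)]
new_compound = [random.gauss(0, 1) for _ in range(50)]
confirm_r = pearson(new_compound, new_ratings)
print(f"confirmatory |r| on fresh data: {abs(confirm_r):.2f}")
```

If the correlation survived the confirmatory pass instead, that would be genuine evidence of something real, which is Peez's point: the exploratory pass only nominates candidates.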
 
Odd, you'd think that box office returns would be a pretty accurate reflection of whether people like a movie.
 

Upon further experimentation, we've found that completely <sic> immersing people in isoprene for the duration of movies causes negative ratings to improve, while it causes positive ratings to decline.
 
Am I the only one who noticed that they used the existing (supposedly subjective and therefore flawed) ratings system as the benchmark against which their new (supposedly objective, and therefore reliable) measurements were tested for effectiveness?

If the old system works well enough to be used to benchmark the new system, why change? If it doesn't, then how do they know that the new system works at all?
 

The conclusion is that isoprene measurements should be used to rate movies in order to make movie ratings more impartial.

- - - Updated - - -

So, what if they rerun the experiment, this time measuring only the compound they found the correlation with in the first experiment, and they find that the correlation still exists? Would this not be following the "Good methodology" and yield a more valid conclusion?

Moose

It is more than possible for follow-up research to show that there is in fact a meaningful correlation and that the above is not some kind of fluke.

The hard part would be identifying and eliminating other variables that affect the outcome, given that the researchers don't actually know the mechanism by which this correlation happens.

Then again, follow-up research could also show that the above was just a fluke.
 
Correlations are not really what science is about, although we hear about correlations all the time.

Correlations are something for science to try to understand.

Causation is science: showing how one thing leads to another.

Correlations may or may not reflect causation, and on their own they are not yet science.
 
Isoprene and movie ratings?

One has to check for confounding factors, other possible sources of correlation. Variation in movie audiences is an obvious one. Different sorts of movies attract different sorts of moviegoers. This may cause isoprene differences directly, through what their bodies release, or indirectly, through what their preferred foods release. Different sorts of moviegoers may prefer different foods.
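The audience-mix confounder can be illustrated with a toy model (entirely hypothetical numbers): let one hidden variable, the fraction of kids in the audience, drive both the isoprene level and the rating, with no direct link at all between the two measured quantities.

```python
import random

random.seed(1)

# Hypothetical confounder: audience composition drives both the chemistry
# and the ratings; isoprene and rating never influence each other.
screenings = []
for _ in range(200):
    kids_fraction = random.random()                      # hidden confounder
    isoprene = 5 + 10 * kids_fraction + random.gauss(0, 1)   # ppb, made up
    rating = 6 + 3 * kids_fraction + random.gauss(0, 1)      # out of 10, made up
    screenings.append((isoprene, rating))

def pearson(pairs):
    """Pearson correlation over (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs)
    vy = sum((y - my) ** 2 for _, y in pairs)
    return cov / (vx * vy) ** 0.5

r = pearson(screenings)
print(f"isoprene vs rating: r = {r:.2f}")  # strong, yet neither causes the other
```

The measured correlation comes out strong even though, by construction, isoprene has zero causal effect on ratings and vice versa.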

Furthermore, this success with isoprene may be a case of p-hacking: selecting out something that was lucky enough to have a large-enough correlation. One has to look at other volatile organic compounds, and also the likes of hydrogen sulfide, something present in our flatulence. If one can detect isoprene, then one must be able to detect lots of other VOCs.
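The p-hacking arithmetic is worth spelling out (the test count here is hypothetical): screening many compounds at the usual α = 0.05 makes at least one false positive all but certain, which is why corrections such as Bonferroni shrink the per-test threshold.

```python
n_tests = 100      # hypothetical number of VOCs screened
alpha = 0.05       # conventional per-test significance level

# Chance of at least one false positive if each test uses alpha alone,
# assuming independent tests with no true effects.
family_wise = 1 - (1 - alpha) ** n_tests
print(f"P(>=1 false positive) without correction: {family_wise:.3f}")

# Bonferroni correction: each individual test must clear alpha / n_tests.
bonferroni = alpha / n_tests
print(f"Bonferroni per-test threshold: {bonferroni:.4f}")
```

With 100 independent tests, the uncorrected family-wise error rate is about 0.994, so finding "a" significant compound is practically guaranteed even when nothing real is there.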
 