
NASA engineers

Not all engineers are retards, but when you come across a really retarded idea, chances are good that an engineer is behind it.

And why are engineers doing physics here? Why do their "physics" have to be utter nonsense?
The same reason that every university with a decent school of science (and funding) wanted to try to duplicate the Fleischmann–Pons claims to have achieved cold fusion. If there was something to it then they would be in on the ground floor in describing a "new physics". If there was nothing to it then they would be noted as identifying the claim as nonsense.
The cold fusion fiasco was different, because the only reason replication was attempted by all those universities is that they trusted the PR before the actual paper was released. Afterwards they all said they would not have tried to replicate such BS.
Here we have clear BS from the get-go, and no actual scientists (that is, physicists) are even trying to replicate this atrocity, just engineers.
 
How do you measure errors in measurement, barbos?

To my knowledge, they come in two forms:
  1. Known margin of error in the equipment doing the measuring
  2. Differences in a large number of repeated measurements

Which of these do you believe NASA did not report to your satisfaction?

Both

With respect to item 2, they didn't take enough measurements to accurately calculate the error in repeated measurements. IIRC, the largest number of repetitions in any specific test was 5. That's not enough to statistically calculate error. It is unreasonable of you to be dissatisfied that NASA didn't calculate error in repeated measurements, when to do so would be statistically inappropriate. Thus, I suggest that you dismiss item #2 from consideration for your complaint.
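For concreteness, the repeated-measurement error in item 2 normally amounts to the sample standard deviation and the standard error of the mean. Here is a minimal sketch with five invented readings; the numbers are assumptions for illustration, not anything from the NASA tests:

Code:
from math import sqrt
from statistics import mean, stdev

# Hypothetical repeated readings of one quantity (invented numbers, not NASA data),
# just to show what "error from repeated measurements" looks like when reported.
readings = [91.2, 88.7, 95.4, 90.1, 92.6]

m = mean(readings)
s = stdev(readings)            # sample standard deviation (n-1 denominator)
sem = s / sqrt(len(readings))  # standard error of the mean

print(f"mean = {m:.1f}, sd = {s:.1f}, standard error of the mean = {sem:.1f}")

Whether an estimate like that deserves any trust at n = 5 is exactly the question argued below.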

That leaves item #1.

This is outside my area of expertise with respect to the specific test conducted. Speaking more generally, however, reporting the known margin of error in the equipment doing the measuring is only pertinent if the equipment error is material relative to the measurement being made. For example, if you're measuring distance with a grade-school ruler, and you're measuring something in sixteenths of an inch, then it would be reasonable to report equipment error... because the smallest unit of measurement and the level of accuracy that you're reporting are on a comparable scale. But if, for example, you're measuring something in yards, using a measuring tape with units that are accurate to a sixteenth of an inch... then reporting the error in the equipment is meaningless. The error is immaterial.
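To put rough numbers on the ruler-versus-tape analogy (the 1/16-inch resolution and the example lengths are assumptions for the illustration, not values from any report):

Code:
# Illustrative only: instrument resolution compared with the size of the
# quantity being measured. All numbers are assumed for the analogy.
INSTRUMENT_ERROR_IN = 1 / 16  # assumed resolution, in inches

examples = {
    "half-inch part": 0.5,        # inches
    "three-yard span": 3 * 36.0,  # inches
}

for name, length_in in examples.items():
    relative = INSTRUMENT_ERROR_IN / length_in
    print(f"{name}: {length_in:.1f} in +/- {INSTRUMENT_ERROR_IN:.4f} in "
          f"({relative:.2%} relative error)")

With these assumed figures the instrument error is about 12% of the half-inch measurement but well under 0.1% of the three-yard one, which is the sense in which it can be immaterial.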

Note that I do NOT know if this is the case. I don't know enough about the types of tests being conducted, nor of the equipment being used, to give anything remotely resembling an intelligent opinion on the matter. I am, however, pointing out that the lack of reported error margins does not necessarily indicate that the testers are "a bunch of retarded engineers". It does not necessarily indicate any lack of skill, intelligence, or care on their part. Without considerably more information, you cannot accurately make that assessment.
 
It's totally reasonable to demand that random (statistical) errors be provided.
Statistical errors are easy to estimate, and 5 tests is enough to estimate random errors.
Systematic errors, on the other hand, are sometimes very hard to estimate.
Papers are automatically rejected if a measurement comes with no errors, because a measurement without errors is meaningless.
Their "experiment" is a complete failure. But judging from the "data" they provided, it's safe to assume their errors are mostly systematic.
 
Without considerably more information, you cannot accurately make that assessment.

I dunno... he was able to determine that my link here did not say what I C&Ped, without even reading it. Don't underestimate.
 
Emily Lake:
With respect to item 2, they didn't take enough measurements to accurately calculate the error in repeated measurements. IIRC, the largest number of repetitions in any specific test was 5. That's not enough to statistically calculate error.
It seems to me that barbos has got his/her shorts in a knot about this topic, but it is true that an estimate reported without any indication of variance is pretty meaningless. Further, it is quite possible to estimate variance from 5 values (e.g., for the data set 1, 2, 3, 4, 5; the mean is 3.0 and the statistical variance is 2.5). If, for whatever reason, there is no way to report any indication of the variability of the estimate, then the estimate should not be reported at all as a result.

Peez
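Peez's numbers are easy to check with Python's standard library; the data set 1 through 5 below is just the example from the post above:

Code:
from statistics import mean, variance, stdev

data = [1, 2, 3, 4, 5]  # the example data set from the post above

print(mean(data))      # 3     -> the mean
print(variance(data))  # 2.5   -> sample variance (n-1 denominator)
print(stdev(data))     # ~1.58 -> sample standard deviation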
 
Without considerably more information, you cannot accurately make that assessment.

I dunno... he was able to determine that my link here did not say what I C&Ped, without even reading it. Don't underestimate.
I think it's safe to assume that a 4th-year CS graduate student cannot be considered an authoritative source in the field of climatology.
So your link should have been sent straight to the garbage based on that alone.
 
Emily Lake:
With respect to item 2, they didn't take enough measurements to accurately calculate the error in repeated measurements. IIRC, the largest number of repetitions in any specific test was 5. That's not enough to statistically calculate error.
It seems to me that barbos has got his/her shorts in a knot about this topic, but it is true that an estimate reported without any indication of variance is pretty meaningless. Further, it is quite possible to estimate variance from 5 values (e.g., for the data set 1, 2, 3, 4, 5; the mean is 3.0 and the statistical variance is 2.5). If, for whatever reason, there is no way to report any indication of the variability of the estimate, then the estimate should not be reported at all as a result.

Peez
Errors are the first thing students learn to put in their lab reports. And here we have NASA fucking engineers who don't care.
And frankly I would not bother wasting my time grading a lab report that does not provide errors; I'd simply give it a 1 or 2 depending on how good the handwriting is.

I am not a NASA engineer and have no way to determine the significance of their results on my own.
 
I dunno... he was able to determine that my link here did not say what I C&Ped, without even reading it. Don't underestimate.
I think it's safe to assume that a 4th-year CS graduate student cannot be considered an authoritative source in the field of climatology.
So your link should have been sent straight to the garbage based on that alone.

Hokay.... Exposed, here.

 
It's totally reasonable to demand that random (statistical) errors be provided.
Statistical errors are easy to estimate, and 5 tests is enough to estimate random errors.
barbos, I'm not entirely sure about this. Statistical error is a function of the standard deviation, and a standard deviation estimated from a sample of 5 is so uncertain as to be nearly meaningless. Statistically speaking, it has no real value. Several of their tests had a sample of 1, which means that error cannot be calculated at all. From a rigorous statistical point of view, error can't be calculated for any of the samples they took; there aren't enough measurements to do so with any credibility.
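One way to see how shaky an n = 5 standard deviation is: repeatedly draw five values from a distribution whose spread is known and watch how much the estimate bounces around. A minimal simulation sketch; the normal distribution and its parameters are assumptions chosen purely for illustration:

Code:
import random
from statistics import stdev

random.seed(0)
TRUE_MEAN, TRUE_SD = 10.0, 5.0  # assumed "true" distribution
N, TRIALS = 5, 10_000           # sample size under debate, number of simulated experiments

estimates = sorted(
    stdev([random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)])
    for _ in range(TRIALS)
)

low, high = estimates[int(0.025 * TRIALS)], estimates[int(0.975 * TRIALS)]
print(f"true sd = {TRUE_SD}")
print(f"middle 95% of the n={N} estimates runs from {low:.1f} to {high:.1f}")

With these assumed numbers the five-point estimate can land anywhere from roughly a third of the true spread to nearly double it; whether that makes the number "meaningless" or merely "wide" is the point being argued.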

Systematic errors, on the other hand, are sometimes very hard to estimate.
Papers are automatically rejected if a measurement comes with no errors, because a measurement without errors is meaningless.
Their "experiment" is a complete failure. But judging from the "data" they provided, it's safe to assume their errors are mostly systematic.
You've made this claim a few times now, that papers are automatically rejected if they contain no error measurements. I've provided justification for the lack of one type of measurement, and potential reasoning for the other. At this point, I'd like to ask that you provide evidence to support your claim that papers are automatically rejected if they don't include a measurement of error. Thank you.
 
barbos, I'm not entirely sure about this. Statistical error is a function of the standard deviation, and a standard deviation estimated from a sample of 5 is so uncertain as to be nearly meaningless. Statistically speaking, it has no real value. Several of their tests had a sample of 1, which means that error cannot be calculated at all. From a rigorous statistical point of view, error can't be calculated for any of the samples they took; there aren't enough measurements to do so with any credibility.
2 measurements is enough to estimate a random error; the estimate will itself have a large error, but it's enough.
5 is more than enough.
Systematic errors, on the other hand, are sometimes very hard to estimate.
Papers are automatically rejected if a measurement comes with no errors, because a measurement without errors is meaningless.
Their "experiment" is a complete failure. But judging from the "data" they provided, it's safe to assume their errors are mostly systematic.
You've made this claim a few times now, that papers are automatically rejected if they contain no error measurements. I've provided justification for the lack of one type of measurement, and potential reasoning for the other. At this point, I'd like to ask that you provide evidence to support your claim that papers are automatically rejected if they don't include a measurement of error. Thank you.

There is no justification for a lack of errors.
At the very least you pull one out of your ass and say "we pulled this error out of our ass and believe it's good"; without errors a measurement has no meaning at all.
It's the same as a measurement with no units: "the weight of the object is 13.7" is utterly meaningless.
 
Emily Lake:
With respect to item 2, they didn't take enough measurements to accurately calculate the error in repeated measurements. IIRC, the largest number of repetitions in any specific test was 5. That's not enough to statistically calculate error.
It seems to me that barbos has got his/her shorts in a knot about this topic, but it is true that an estimate reported without any indication of variance is pretty meaningless. Further, it is quite possible to estimate variance from 5 values (e.g., for the data set 1, 2, 3, 4, 5; the mean is 3.0 and the statistical variance is 2.5). If, for whatever reason, there is no way to report any indication of the variability of the estimate, then the estimate should not be reported at all as a result.

Peez

Allow me to clarify; you can calculate variance, but the credibility of that variance is very low. The variance calculated is meaningless from a statistical perspective. With only 5 samples, there is no indication of the actual distribution. Consider a normal distribution: it has long tails, and there is always a chance, however small, of getting a sample that is far out on one of the tails. So let's say you take a sample of {1, 5, 9, 21, 110}. You can calculate a mean of 29.2 and a standard deviation of 45.8, sure. You can do the calculation. But that sample isn't necessarily representative. It could easily turn out that the true mean is actually around 10, and the true standard deviation is around 5. And if you took a sample of 500, you would see that distribution emerge. But because you took only 5 samples, and one of them was far out on the right-hand tail, it is skewing your apparent results.

NASA can report that they got results of {a, b, c, d, and e}. It would be statistically irresponsible of them to calculate standard deviations and errors from a sample size that small.
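The arithmetic in that example checks out, and a short sketch shows how much leverage the single tail value has (the five numbers are the hypothetical sample above, not measurements from anywhere):

Code:
from statistics import mean, stdev

sample = [1, 5, 9, 21, 110]  # the hypothetical sample from the post above

print(f"mean = {mean(sample):.1f}, sd = {stdev(sample):.1f}")  # 29.2 and 45.8

# Recompute without the value far out on the right-hand tail, purely to show
# how much a single point moves the summary statistics in a sample of five.
trimmed = sample[:-1]
print(f"without 110: mean = {mean(trimmed):.1f}, sd = {stdev(trimmed):.1f}")

Dropping that one point pulls the mean from 29.2 down to 9.0, which is the sense in which a five-point summary can be dominated by a single draw from the tail.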
 
2 measurements is enough to estimate a random error; the estimate will itself have a large error, but it's enough.
5 is more than enough.
Not with any credibility. Have you never taken a statistics course? In my opinion it's worse to imply credibility where there is none, than to omit a calculation that would be non-credible. You're insisting that they include a number that is both meaningless and misleading simply for the sake of form?
 
2 measurements is enough to estimate a random error; the estimate will itself have a large error, but it's enough.
5 is more than enough.
Not with any credibility. Have you never taken a statistics course? In my opinion it's worse to imply credibility where there is none, than to omit a calculation that would be non-credible. You're insisting that they include a number that is both meaningless and misleading simply for the sake of form?
You have no clue, do you?
5 is more than enough.
And you still don't get that, for someone like me, their number without an error is utterly meaningless.
I am perfectly ready to trust them with the determination of errors, but it's a waste of paper to present a measurement without them.
 
It seems to me that barbos has got his/her shorts in a knot about this topic, but it is true that an estimate reported without any indication of variance is pretty meaningless. Further, it is quite possible to estimate variance from 5 values (e.g., for the data set 1, 2, 3, 4, 5; the mean is 3.0 and the statistical variance is 2.5). If, for whatever reason, there is no way to report any indication of the variability of the estimate, then the estimate should not be reported at all as a result.

Peez

Allow me to clarify; you can calculate variance, but the credibility of that variance is very low. The variance calculated is meaningless from a statistical perspective. With only 5 samples, there is no indication of the actual distribution. Consider a normal distribution - it has long tails, and there is always a chance, however small, of getting a sample that is far out on one of the tails. So let's say you take a sample of {1,5,9,21,110} You can calculate a mean of 29.2 and a standard deviation of 45.8, sure. You can do the calculation. But that sample isn't necessarily representative. It could easily turn out that the true mean is actually around 10, and the true standard deviation is around 5. And if you took a sample of 500, you would see that distribution emerge. But because you took only 5 samples, and one of them was far out on the right hand tail, it is skewing your apparent results.

NASA can report that they got results of {a,b,c,d,and e}. It would be statistically irresponsible of them to calculate standard deviations and errors from a sample size that small.
In statistics 5 is a large number.
 
Allow me to clarify; you can calculate variance, but the credibility of that variance is very low. The variance calculated is meaningless from a statistical perspective. With only 5 samples, there is no indication of the actual distribution. Consider a normal distribution - it has long tails, and there is always a chance, however small, of getting a sample that is far out on one of the tails. So let's say you take a sample of {1,5,9,21,110} You can calculate a mean of 29.2 and a standard deviation of 45.8, sure. You can do the calculation. But that sample isn't necessarily representative. It could easily turn out that the true mean is actually around 10, and the true standard deviation is around 5. And if you took a sample of 500, you would see that distribution emerge. But because you took only 5 samples, and one of them was far out on the right hand tail, it is skewing your apparent results.

NASA can report that they got results of {a,b,c,d,and e}. It would be statistically irresponsible of them to calculate standard deviations and errors from a sample size that small.
In statistics 5 is a large number.

Can you elaborate as to why you think that assertion is true?
 
It seems to me that barbos has got his/her shorts in a knot about this topic, but it is true that an estimate reported without any indication of variance is pretty meaningless. Further, it is quite possible to estimate variance from 5 values (e.g., for the data set 1, 2, 3, 4, 5; the mean is 3.0 and the statistical variance is 2.5). If, for whatever reason, there is no way to report any indication of the variability of the estimate, then the estimate should not be reported at all as a result.

Peez

Allow me to clarify; you can calculate variance, but the credibility of that variance is very low. The variance calculated is meaningless from a statistical perspective. With only 5 samples, there is no indication of the actual distribution. Consider a normal distribution - it has long tails, and there is always a chance, however small, of getting a sample that is far out on one of the tails. So let's say you take a sample of {1,5,9,21,110} You can calculate a mean of 29.2 and a standard deviation of 45.8, sure. You can do the calculation. But that sample isn't necessarily representative. It could easily turn out that the true mean is actually around 10, and the true standard deviation is around 5. And if you took a sample of 500, you would see that distribution emerge. But because you took only 5 samples, and one of them was far out on the right hand tail, it is skewing your apparent results.

NASA can report that they got results of {a,b,c,d,and e}. It would be statistically irresponsible of them to calculate standard deviations and errors from a sample size that small.

This.

The reality is that sometimes you shouldn't do the math.

And that 110 is more likely a measurement goof than random error anyway.


2 measurements is enough to estimate a random error; the estimate will itself have a large error, but it's enough.
5 is more than enough.
Not with any credibility. Have you never taken a statistics course? In my opinion it's worse to imply credibility where there is none, than to omit a calculation that would be non-credible. You're insisting that they include a number that is both meaningless and misleading simply for the sake of form?

I think the question should be whether he has understood a statistics course. I've seen all too many people who took a statistics course but did it by rote with no real understanding of how it applied to the real world.
 
The reality is that sometimes you shouldn't do the math.

And that 110 is more likely a measurement goof than random error anyway.

It's possible... but the tails of distributions do exist in actuality. Assuming that a large measurement is a goof isn't statistically sound either, unless you have a very large number of measurements and can confidently identify aberrations. Particularly if it's not normally distributed. Lognormals have extremely long right-hand tails, and a lot of the data that I work with is lognormally distributed. If I were to throw out really large data points on the assumption that they were measurement goofs, I would bias my results.
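A quick sketch of that last point. The lognormal parameters below are invented purely to produce a long right-hand tail; the bias from trimming shows up all the same:

Code:
import random
from statistics import mean

# Illustrative only: assumed lognormal data with a long right-hand tail.
random.seed(1)
data = [random.lognormvariate(0.0, 1.0) for _ in range(10_000)]

full_mean = mean(data)

# "Throw out" the top 1% on the assumption that they are measurement goofs.
cutoff = sorted(data)[int(0.99 * len(data))]
trimmed = [x for x in data if x <= cutoff]

print(f"mean of full sample    : {full_mean:.3f}")
print(f"mean after trimming 1% : {mean(trimmed):.3f}")

With these assumed parameters the trimmed mean comes out several percent low; in a long-tailed distribution the biggest values are part of the signal, and discarding them as goofs biases the result.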
 
The reality is that sometimes you shouldn't do the math.

And that 110 is more likely a measurement goof than random error anyway.

It's possible... but the tails of distributions do exist in actuality. Assuming that a large measurement is a goof isn't statistically sound either, unless you have a very large number of measurements and can confidently identify aberrations. Particularly if it's not normally distributed. Lognormals have extremely long right-hand tails, and a lot of the data that I work with is lognormally distributed. If I were to throw out really large data points on the assumption that they were measurement goofs, I would bias my results.

Sure, it's possible. However, given a distribution like that and no reason to expect something like a lognormal distribution, it's more likely an error.

You don't just throw them out, but you do consider them highly suspect if they do not otherwise fit the pattern.
 