
NASA engineers

General practice doctors and dentists are more like biological systems engineers. They maintain and troubleshoot (diagnose) bodies, just as an engineer would do on a system.

I observed that first-hand in intensive care, with a flesh-eating bacterial infection.

Out in the real world, 'scientist' and 'engineer' are job titles.

Someone with a science degree doing routine work in a lab gets to call himself or herself a scientist.

It sounds like someone is frustrated by the course of a career.
 
I was not planning to start a discussion about doctors.
I merely mentioned another case of people who are utterly unqualified for a job somehow ending up doing it.
I know civil engineers have some kind of certification, and in Canada, at least, I think it's a crime to call yourself an engineer (civil engineer) if you are not one. It's somewhat similar with medical doctors. There is no such thing for scientists; of course, scientists can't cause immediate damage the way doctors, or civil engineers who build bridges and such, can, so there is no need for a law. But still, every god damn electrician thinks he can say "virtual plasma" and get a freaking grant from the government.
I know a lot of actual physicists who work in what is called materials science, and there are a lot of people with engineering backgrounds who work in the same field. Yet there is an ocean of difference between their backgrounds. And when engineers start making a salad out of terms they have only heard on TV (probably a Star Trek episode), I can't help it.
Now, back to NASA: this is not the first and only case of NASA "science". The "Mono Lake aliens" incident was discussed on this very board too.
Of course, that lady at least was formally a scientist (an incompetent one, but a scientist), and her claims were not so outrageous, but still. Astrophysics at NASA is great, there is no question about it, but when things get a bit less related to the core of their business, it gets ridiculouser and ridiculouser.
 


My infinite-pointless-loop detector is giving me a time-to-abort indication... you have the last word.
 
steve_bnk, sorry, someone must have (incorrectly) recalibrated my "random crap is being thrown" detector. It has tripped on your posts multiple times so far, so please disregard my reaction to your random crap.

Anyhow, this thread is going its usual way: engineers are convinced that they are the smartest people on earth and can do anything, including science. Then they get pissed off when you present evidence to the contrary.
So keep having no errors in your "data", and keep putting bullshit like "virtual plasma" in your lab reports.
 
I'm neither an engineer nor a scientist. I have a degree in applied mathematics, and I work predominantly with statistics and data analysis, as well as building complex mathematical models for consumer analytics.

What camp does that put me in, barbos?
 
The applied mathematicians' camp?
And I now see why you (erroneously) thought that 5 was a small number.
 
In what universe of statistics is 5 a large number? What is your justification for insisting that my classification of 5 as a small sample is erroneous?
 
What camp does that put me in, barbos?


computer camp?
 
In what universe of statistics is 5 a large number? What is your justification for insisting that my classification of 5 as a small sample is erroneous?

Psychoacoustics? Never more than four experts are required for definitive bound-finding.

An Air Force or Navy performance study? Ever hear of the golden arm? Never mind; the entire squad follows.

Physics? Strength of signal: one signal more than five sigma above the noise. Discovery!

...'tis only in those fields where ring-of-Saturn effects have meaning, those social experiments based on unconfirmable stuff, where n=5 is too small.

Gotta have consensus, don't we...
 
I'm terribly sorry, fromderinside, but I don't understand the majority of your post. It contains a lot of references that don't have any meaning to me :(
 
In what universe of statistics is 5 a large number? What is your justification for insisting that my classification of 5 as a small sample is erroneous?
I think it may be a matter of someone who believes that mathematics is reality, rather than a tool used to understand reality.

A lot of statistical manipulation can be done with a sample of five, and for someone who believes that being able to manipulate numbers is all that matters, five samples are more than sufficient. However, for anyone wanting to understand the real world that the samples supposedly represent, five could be worse than useless, even misleading, unless the set the samples are drawn from is damned small.
 
If I am measuring a length with a micrometer, I might make 5 measurements and average them.

If I have a box of 100 chips, 90 red and 10 blue, a sample of 5 is pretty useless.

When making electrical measurements, the sample size is determined by the noise parameters, and the noise is not always Gaussian and stationary; its parameters may vary with time.

http://en.wikipedia.org/wiki/Signal_averaging

There is no magic '5'.
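The signal-averaging point can be sketched numerically. This is a minimal illustration, assuming independent Gaussian noise (the simplest case; as noted, real noise need not be Gaussian or stationary): the spread of an n-sample average shrinks roughly as 1/sqrt(n), which is why the right sample size depends on the noise, not on any magic number.

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 5.0   # underlying dc level being measured (hypothetical)
NOISE_SD = 0.2     # standard deviation of the additive noise

def measure(n):
    """Average n noisy samples of the same underlying value."""
    return statistics.fmean(random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n))

# The spread of the averaged estimate shrinks roughly as 1/sqrt(n):
for n in (1, 25, 400):
    estimates = [measure(n) for _ in range(2000)]
    print(n, round(statistics.stdev(estimates), 3))
```

With 25 samples the scatter of the estimate drops to about a fifth of the single-measurement noise; with 400, to about a twentieth.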
 

You raise an interesting point. Your post makes clear to me that I am approaching this from a biased position - I am viewing this from the perspective of someone who does random samples and deals with distributions on a daily basis. I completely neglected to think in terms of measurement.

Five measurements, when you have good instruments, seems perfectly reasonable to me, more than sufficient. Honestly, if you've got good instruments, then it seems like two measurements is plenty for most things (leaving aside questions of noise and stuff that is well outside my bailiwick).

But if you're working with a sample, a selection that is smaller than the population, a way to estimate reality... then five is probably not enough.

I'm certainly thinking of this in terms of my own background, which is definitely more closely aligned with the latter.
 

It depends on whether you are talking about sample size in terms of observations or in terms of observers. Many topics permit using repeated measures from a given observer. In a psychoacoustic experiment modeled with signal detection theory, four observers may each report several thousand observations over a range of signal magnitudes of one sort or another. An EEG experiment may also require multiple presentations to a single observer to achieve signals reliably above the background noise inherent in such recordings. In this latter case, observations are taken over a given interval after some condition is applied to the observer and recorded in millisecond samples over that interval.

A good start would be the Wikipedia article on sample size determination, which concentrates on conditions you will probably be familiar with. The examples I gave are from the family of repeated-measures studies, where observations rather than observers determine the minimum sample size.

As for golden arms, that is a problem we run up against in estimating many performance variables in the military. A golden arm is an acknowledged expert leader who so dominates his team's or squadron's process that getting useful results is very difficult for those of us studying such processes.
 
Emily,

It is all the same math.



Imagine a shipping container on a ship in port. It holds 1000 units of some product with a mean weight of 5 pounds and a standard deviation of 0.2 pounds. You measure each one's weight and plot each sequential value: an x-y scatter plot.

It will look like this.

[attached image: SCAT-PLT.jpeg, a scatter plot of the 1000 weight measurements]

Now imagine an electronic position sensor that puts out a DC voltage of 0 to 10 volts, scaled in volts per meter. The sensor is putting out a 5-volt signal corrupted by noise with a zero mean and a standard deviation of 0.2. Make 1000 electrical measurements (samples) and make a scatter plot. It will look like the one above.

The electrical scenario is a statistical sampling process, just like randomly sampling from a barrel of widgets.

Start at the first data point and average an increasing sample size. As the number of points averaged increases, the sample average will converge to the true mean value.

In one case it is the mean weight; in the other, the true DC value of a voltage. Same process.

Electronic instruments these days have averaging functions built in.
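The running-average experiment described above can be sketched in a few lines. This is only an illustration, assuming independent Gaussian noise with the stated mean of 5 and standard deviation of 0.2:

```python
import random
import statistics

random.seed(0)

TRUE_MEAN = 5.0  # true mean weight (or true dc voltage), as in the example
SD = 0.2         # standard deviation of the weights (or of the noise)

samples = [random.gauss(TRUE_MEAN, SD) for _ in range(1000)]

# Average an increasing sample size, starting from the first data point
running_avg = []
total = 0.0
for i, x in enumerate(samples, start=1):
    total += x
    running_avg.append(total / i)

# Early averages wander; with all 1000 points the average sits near the true mean
print(round(running_avg[9], 3), round(running_avg[-1], 3))
```

Whether the numbers are pounds or volts, the convergence behaves the same way.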
 
Accuracy, measurement variability, and resolution are three different parameters.

Making multiple measurements gives you a standard deviation, which says nothing about accuracy.

Accuracy is set by the combined measurement tolerances of the instruments, which are calibrated traceable to NIST primary standards.
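A toy illustration of the distinction, with a made-up calibration offset: a biased instrument can repeat beautifully (tiny standard deviation) while every reading is wrong by the same amount, and averaging cannot fix that.

```python
import random
import statistics

random.seed(2)

TRUE_LENGTH = 10.000  # hypothetical true length, in mm
BIAS = 0.050          # made-up calibration offset of the instrument
SD = 0.002            # instrument's random scatter

readings = [random.gauss(TRUE_LENGTH + BIAS, SD) for _ in range(5)]

variability = statistics.stdev(readings)            # small: readings repeat well
error = statistics.fmean(readings) - TRUE_LENGTH    # large: the offset remains

# More averaging shrinks the scatter but can never remove the calibration bias
print(round(variability, 4), round(error, 4))
```

Only calibration against a traceable standard exposes the offset; no amount of repetition does.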
 

Average, yes. Standard deviation, doesn't mean much.
 
Average, yes. Standard deviation, doesn't mean much.

Au contraire.

A Measure of Dispersion: The Standard Deviation:
http://www.fgse.nova.edu/edl/secure/stats/lesson2.htm

The standard deviation is important because, regardless of the mean, it makes a great deal of difference whether the distribution is spread out over a broad range or bunched up closely around the mean. For example, suppose you have two classes whose mean reading scores are the same. With only that information, you would be inclined to teach the two classes in the same way. But suppose you discover that the standard deviation of one of the classes is 27 and the other is 10, as in the examples we just finished working with. That means that in the first class (the one where the standard deviation is 27), you have many students throughout the entire range of performance.

There is another statistic every beginning psych student learns: kurtosis. Summarizing data isn't very meaningful if the data are spread abnormally. If the bulk of the data is clustered at two points, say the extremes of what is being measured, then the mean and median are meaningless, since the data are scarce there. The dispersion measures are nearly as useless, since they don't actually capture the two clusters at the extremes but instead signal one very broad distribution. Correlations with such data always come to the wrong conclusion. For instance, two drugs may each be useful at certain dose levels, or indicated at certain disease levels, but comparing them would be like comparing apples and oranges if one drug's data were clustered at one end of the regime spectrum while the other's were clustered at the other end. What would the mean, median, or standard deviation tell us? Nothing.

One should not conduct experiments if one has no real idea of the underlying distributions.
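The two-cluster point can be shown with simulated data (the cluster positions here are made up): the mean and median land in a region containing essentially no observations, and the standard deviation reports one broad spread rather than two tight clusters.

```python
import random
import statistics

random.seed(3)

# Made-up bimodal data: half the observations cluster near 1, half near 9
data = ([random.gauss(1.0, 0.1) for _ in range(500)]
        + [random.gauss(9.0, 0.1) for _ in range(500)])

mean = statistics.fmean(data)
median = statistics.median(data)
sd = statistics.stdev(data)

# Mean and median both land near 5, where essentially no observations exist,
# and the standard deviation suggests one broad spread, not two tight clusters
near_center = sum(1 for x in data if 4.0 < x < 6.0)
print(round(mean, 2), round(median, 2), round(sd, 2), near_center)
```

Every summary statistic here points at a region of the scale that the data never visit, which is exactly why the underlying distribution has to be checked first.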
 
But if you're working with a sample, a selection that is smaller than the population, a way to estimate reality... then five is probably not enough.

I was once working on a paper, and I had read another paper that, using only about 5 or 6 data points, found a strong correlation between two quantities of interest to me. Because of the strong correlation, the authors inferred a relationship between two things that I did not believe existed, though one could easily imagine a physical reason why there would be such a relationship. In the course of my work I expanded the sample to over 40 (I don't recall the exact number) data points, and these exhibited no correlation at all.

So, did they simply grab 5 or 6 data points that happened to fall on a line in a 'randomly' distributed sample? Likely, yes. I don't recall whether they calculated the probability that the result they got was consistent with a null result, given the small-number statistics, but clearly they should have. A few years later I happened to talk to one of the co-authors (an advisor to the student who wrote the original paper), and he admitted that there is indeed no correlation, but they had these data for other purposes and needed a project for a student.

A good example of when 5 is not enough.
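How often a "strong" correlation shows up by pure chance at n=5 can be checked with a quick simulation. This sketch assumes independent uniform draws and an arbitrary |r| > 0.8 threshold; the exact fraction depends on those assumptions, but the contrast with n=40 does not.

```python
import random
import statistics

random.seed(4)

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def strong_fraction(n, trials=20000, threshold=0.8):
    """Fraction of trials in which n independent points show |r| > threshold."""
    hits = 0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        ys = [random.random() for _ in range(n)]
        if abs(pearson_r(xs, ys)) > threshold:
            hits += 1
    return hits / trials

# Spurious 'strong' correlations are common at n=5 and vanish by n=40
print(strong_fraction(5), strong_fraction(40))
```

At n=5, a correlation that looks impressive appears by chance in a non-trivial fraction of trials; at n=40 it essentially never does, which matches the experience described above.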
 