
Group analysis and logical fallacies

Bigfield: Do you have an appropriate algorithm for morality?
Can you explain how this is relevant, and not a red herring?

You still have not told me how you would be able to avoid the "cognitives", as you call them, by merely programming them into your human resources computer.
I'm sorry that my responses are not timely enough for you.
 
For instance, the best way to select a job candidate may be to delegate the job to a computer, thus making it impossible for a human evaluator to make a sub-optimal decision based on either conscious or unconscious biases.

To the extent that a computer can have all the needed information, this is how it should be done. In practice, though, there are things the computer can't see.
 
1) Actors. If you are portraying multiple characters related by blood, you want to choose actors who are not obviously unrelated. Star Wars showed Luke as white, and we didn't see Vader's face at that time. Later, when we do see Vader, he needs to be approximately white.

2) Actors. When dealing with historical subjects, you often have to choose an actor who looks like the character. (Note that "historical" can cover more than actual history: if you're going to do a Nero Wolfe TV show, you need a fat guy to portray Nero Wolfe. While he never actually existed, the books are already written, and you shouldn't go against them.)

3) Models. Sometimes the color of what is being modeled restricts the color of the model.

I stand corrected. Race is relevant only when a particular race needs to be depicted, and even then the race only needs to be close enough.
 
For instance, the best way to select a job candidate may be to delegate the job to a computer, thus making it impossible for a human evaluator to make a sub-optimal decision based on either conscious or unconscious biases.

To the extent that a computer can have all the needed information, this is how it should be done. In practice, though, there are things the computer can't see.
It's not a perfect solution; it is merely better than relying on human judgement that, for example, is biased by a candidate's name.
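To make that concrete, here is a minimal sketch in Python of what blind screening could look like. The field names, weights, and sample data are all hypothetical; the point is only that identity fields are stripped before any score is computed, so a biased weighting of them is impossible by construction rather than by policy.

# Blind candidate screening sketch. IDENTITY_FIELDS and WEIGHTS are
# hypothetical; the mechanism is that identity fields are removed before
# scoring, so neither conscious nor unconscious bias can act on them.

IDENTITY_FIELDS = {"name", "age", "gender", "photo"}  # never scored

WEIGHTS = {  # hypothetical job-relevant features and their weights
    "years_experience": 2.0,
    "skills_matched": 3.0,
    "test_score": 1.5,
}

def blind_score(candidate):
    """Score a candidate using only job-relevant fields."""
    visible = {k: v for k, v in candidate.items() if k not in IDENTITY_FIELDS}
    return sum(w * float(visible.get(field, 0)) for field, w in WEIGHTS.items())

candidates = [
    {"name": "Candidate A", "years_experience": 6, "skills_matched": 4, "test_score": 82},
    {"name": "Candidate B", "years_experience": 3, "skills_matched": 5, "test_score": 90},
]

# Rank purely on the blind score. The weights are still a human choice,
# which is why this is better than, not free of, human judgement.
for candidate in sorted(candidates, key=blind_score, reverse=True):
    print(round(blind_score(candidate), 1))

Of course, as the list of exceptions above shows, this only works to the extent that the scored fields really capture what matters.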
 
Bigfield: Do you have an appropriate algorithm for morality? I noticed you mentioned it in your post immediately above this one. In my estimation, having read up on the so-called ecological fallacy and studied a few examples, it really should be called the statistical fallacy.
You still have not told me how you would be able to avoid the "cognitives", as you call them, by merely programming them into your human resources computer.
I only just realised I omitted a word from the post you refer to: I meant to say 'cognitive biases', i.e. that human reasoning is rife with such biases. I did not intend the word cognitive to be a noun. I have a habit of typing and re-typing my posts in a non-linear fashion and sometimes my grammar goes completely to shit.

Sorry for any confusion -- I hope the correction clarifies what I meant.
 