
How could a wrong logic affect mathematics?

Speakpigeon

I believe mathematical logic is wrong. I mean, really, really wrong. I mean actually all wrong. However, what I'm really worried about here, more generally, is the possible consequences for mathematics of using a formal logic that would be wrong.

This question is in fact quite difficult to assess. Nearly all mathematicians in fact use their logical intuition to prove theorems. Thus, they don't have to rely on any method of formal proof, and so it doesn't seem to matter whether mathematical logic is wrong. At the same time, most mathematicians probably receive a comprehensive training in formal logic, and I can indeed routinely spot problematic statements, presented as "obviously" true, being made by mathematicians when they discuss formal logic questions, suggesting that their logical sense may be adversely affected by their formal logic training. Yet, I'm not sure whether that actually affects the proofs mathematicians produce in their personal work.

It seems to me it's inevitable that it does. I know of specific proofs that are wrong in the sense that they are not something humans would normally accept. Mathematicians who accept them are obviously affected by their training in formal logic. However, these are proofs of logical formulas, not of mathematical theorems, and the latter are much more difficult to assess in this respect.

Yet, even if it is the case that actual proofs done by mathematicians using their intuition are adversely affected by their training in formal logic, I'm still not clear on what the consequences of that could be in practical terms.

One possible method to assess the possible consequences would be to compare proofs obtained using different methods of mathematical logic, such as relevance logics, intuitionistic logics, paraconsistent logics, etc. However, I can't find examples of mathematical theorems proved using these methods. Further, all these methods are weaker than standard, "classical", mathematical logic, meaning that they deem valid a smaller number of logical implications and therefore, presumably, would end up with a smaller subset of the theorems currently accepted by mathematicians. Which may be good or bad, but how do we know which?
EB
 
Can I ask what your background in mathematics and logic is?
 
Back in the early 20th century, Hilbert rhetorically asked at a conference whether all true mathematical propositions can be proven. The issue at the time was whether an inconsistency was lurking in the foundations of math.

I believe a systematic review from the bottom up was done in the 90s.

In the end, logic is consistent, which means valid: the conclusion follows from the premises. The actual proof of a logic is testing it in reality.

All math that has real-world usage is verified by usage. No inconsistency in any applied math has surfaced as yet. There is no way to say a problem will never arise. Same as with any physical science theory, the proof is in the results.

An abstract proof in any logic simply shows that, using the rules of the logic, the conclusion follows from the premises.

Try reading a short book, How To Read And Do Mathematical Proofs. I read it in the 80s. It will answer all your questions if you do the exercises. You will never understand it as a philosophical abstraction.
 
Can I ask what your background in mathematics

Can I ask what your background in intelligence is? :rolleyes:

I'm not discussing mathematics in general (only mathematical logic).

I'm not discussing whether mathematical theories are logically consistent (I'm sure they are).

I'm also confident all methods of logic proposed in mathematical logic are logically consistent.

What I dispute is that any method of logic proposed in mathematical logic could possibly correctly model the logic of human beings. My claim is based on the existence of the paradox of material implication, which has been at the foundation of mathematical logic since 1850. The paradox of material implication was discovered by mathematicians themselves and it is in all mathematical logic textbooks.

Here is an example:

All men are immortal; therefore all squares are circles.

This is considered a valid implication in mathematical logic just because it is false that all men are immortal. If you believe that it is not true that all men are immortal, you should accept the implication as valid. Me, I don't think that's how the human mind works. Nearly everybody would accept that this is just stupid, except possibly many mathematicians; but mathematicians are biased precisely because of their training in mathematical logic: all mathematicians are human beings and all human beings are very probably biased by their education, therefore...
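
To be concrete, here is a minimal sketch (Python, purely illustrative; the encoding of the sentences into atoms is mine) of the truth table mathematical logic uses for the material conditional:

Code:
# Minimal sketch of the classical material conditional: "p implies q"
# is defined as false only when p is true and q is false.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"p={p!s:5}  q={q!s:5}  p -> q = {implies(p, q)}")

# Both rows with p=False come out True: a false antecedent makes the
# conditional true whatever the consequent. That vacuous truth is what
# the "all men are immortal" example trades on.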

Another example:

Jo is a giraffe and Jo is not a giraffe; therefore Jo is an elephant.

This is again considered valid simply because the premises are contradictory and therefore necessarily false. See? It is valid to infer that Jo is an elephant...
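
And a minimal sketch of the classical validity test itself (Python; treating "Jo is a giraffe" and "Jo is an elephant" as independent atoms g and e is my simplification): an argument is valid iff no assignment of truth values makes all premises true and the conclusion false.

Code:
from itertools import product

def is_valid(premises, conclusion, n_atoms):
    """Classical validity by exhaustive search over truth assignments."""
    for row in product([True, False], repeat=n_atoms):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False  # counterexample row found
    return True

premise = lambda g, e: g and not g  # "Jo is a giraffe and Jo is not a giraffe"
conclusion = lambda g, e: e         # "Jo is an elephant"
print(is_valid([premise], conclusion, 2))
# True: no row makes the premise true, so no counterexample exists
# and the argument counts as (vacuously) valid.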

I have a few other reasons to dismiss mathematical logic as a joke but this one is plenty good enough.


To answer your question: my background in mathematics is minimal. I did two years in mathematics and physics at university and that was a long time ago.

I have answered your question so answer mine: What is your background in intelligence?

and logic is?

You ask about logic, not mathematical logic. My background in logic is that I am a human being. I don't see any good reason to assume that I would be less qualified to speak about logic than mathematicians. Further, my claims are quite minimal and it is rather obvious that they are correct. If you think not, please explain your thinking to me. I have been actively researching this subject for several months now and I haven't been able to find anyone who could provide a convincing counter-argument.

My background in logic is also Aristotle. You know, the guy who defined what people came to call "logic" for 2,500 years?

So, me, I read Aristotle and I see that his notion of the syllogism properly defines my own intuitive notion of what logical validity is. You should start from there.

Concerning mathematical logic, I had a class in mathematical logic when I was a university math student. It was my first encounter with formal logic and I'm quite sure I had never thought about logic as such previously. Looking at what the young teacher was telling us about formal logic, I just took it in initially without any problem. The conjunction, the disjunction, the negation. I was discovering the principle of defining these things through truth tables. No problem at all. In fact, all pretty damn obvious. Then came the turn of the material implication. I remember vividly: I looked at the truth table of the material implication and my brain just puked. I had no background in formal logic. There's no way I could have been biased in any way. And my reaction was entirely intuitive. I haven't changed since. My intuitive reaction is still the same. I have also found several times, in books written by professional logicians, the observation that many students couldn't understand the material implication. I myself didn't say anything at the time. It is likely that most students who have a problem don't say anything, so the problem is under-reported.

You can also test yourself. Look at any truth table of various conjunctions and disjunctions. You should be able to notice your own intuitive reaction: all good. Now look at the truth tables of various implications that are deemed valid in mathematics and try to understand how the result is obtained. Well, me, I don't understand. It just doesn't make any sense. You know, formal logic is supposed to be a model of our intuitive sense of logic. How could mathematical logic possibly be correct and yet contradict our sense of logic?!
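
If you want to run that self-test, here is a minimal sketch (Python again, purely illustrative) that prints the three tables side by side:

Code:
# Classical truth tables of conjunction, disjunction, and material
# implication, side by side, to compare against intuition.
print("p      q      | p AND q  p OR q   p -> q")
for p in (True, False):
    for q in (True, False):
        print(f"{p!s:6} {q!s:6} | {(p and q)!s:8} {(p or q)!s:8} "
              f"{((not p) or q)!s}")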

I understand, for example, the following argument: a square is a circle and all circles are round; therefore all squares are round. It's clearly valid even though we usually understand the premises and the conclusion to be false. So, I have no problem with validity in the case of false premises. But I certainly have a problem with the idea that an argument is valid because its premises are false. This is plain idiotic.

And yet mathematicians have been peddling this idiotic notion of validity since roughly Boole, who sort of invented the material implication, so roughly since 1850. So, it's now nearly 170 years that (most) mathematicians have grounded their work on logic on this idiotic definition. 170 years, that's maybe 7 or 8 generations of mathematicians. There are probably more than a million intellectual workers alive in the world today. They would all have been trained on this basis! Mathematicians doing theoretical work on mathematical logic are much fewer, but that's still probably a lot of people (plus computer scientists and philosophers). And yet, all they can do is material implication?! Whoa. I'm underwhelmed.

Please also note that mathematical logic is a branch of mathematics, not a method. Mathematical logic includes all sorts of different methods of logic: "classical", paraconsistent, multivalued, relevance, etc. Alternatives to mainstream, so-called "classical" logic are even more pathetic, so I won't discuss them here. There are too many of them to even know exactly how many there are. These methods adopt axioms and calculus principles that often contradict each other, so they can't possibly all be correct. So, in effect, it's not just me saying mathematical logic is junk. Many mathematicians themselves prove my point.

Please note that mathematicians have produced different, and contradictory, theories of geometry, i.e. curved and flat. These theories are all logically consistent but they still contradict each other. We all accept that these theories are just that: theories. Whether they are correct is an empirical matter and should be decided on empirical grounds, not on the say-so and authority of mathematicians, who in reality have no more expertise on this than any idiot alive today. Same for logic.
EB
 
Try reading a short book, How To Read And Do Mathematical Proofs. I read it in the 80s. It will answer all your questions if you do the exercises. You will never understand it as a philosophical abstraction.

Try reading my post.
EB
 
my background in mathematics is minimal. I did two years in mathematics and physics at university and that was a long time ago... My background in logic is that I am a human being. I don't see any good reason to assume that I would be less qualified to speak about logic than mathematicians.

Ok then.
 
Try reading a short book, How To Read And Do Mathematical Proofs. I read it in the 80s. It will answer all your questions if you do the exercises. You will never understand it as a philosophical abstraction.

Try reading my post.
EB

Same response as the dozens of times in the past when you asked the same question.

All logic is the same, mathematical or otherwise. If, and, or...

Symbolic systems vary: Boolean logic symbols in electronics or mathematical symbolic logic. It is all the same.

Logic can be valid in the sense that, by the rules, the conclusion follows from the propositions. That is true of an electronic logic circuit or a verbal syllogism.

The 'proof' of anything is how it is reflected in reality. There is a saying in software: the logic in code always works perfectly, it performs as written. The problem is when the logic does not do what the coder thinks it does. Formal logic can never be invalid or wrong when the rules are properly applied.

The application of logic can be wrong only when it is improperly applied to a problem.

I believe mathematical logic is wrong. I mean, really, really wrong. I mean actually all wrong. However, what I'm really worried about here, more generally, is the possible consequences for mathematics of using a formal logic that would be wrong.

The only proof is in testing. Could there be some problem in the rules of logic not yet manifested? There is no way to say. The Incompleteness Theorem may apply.

You really, really do not grasp logic. And you really do not seem to wasn't an actual answer.



Yes, I read your OP, did you?
 
The 'proof' of anything is how it is reflected in reality. (...)

Formal logic can never be invalid or wrong when the rules are properly applied.

Your two sentences here contradict each other. Could you rephrase?

The only proof is in testing. Could there be some problem in the rules of logic not yet manifested? There is no way to say.

So, you say here there could be a problem with logic, apparently, but then you just said formal logic can never be wrong. Sorry, but no one will understand you. Could you rephrase or explain yourself?

The Incompleteness Theorem may apply.

I'd love to see you explain this one.

You really, really do not grasp logic.

LOL.

And you really do not seem to wasn't an actual answer.

Sorry, I can't parse that.

Yes, I read your OP, did you?

Good, so explain to me what's wrong with it.
EB
 
There is no way to say conclusively that there is no problem lurking in the foundations. If you worked real-world problems instead of simple syllogisms you might understand that.

As a logician, are you familiar with the Incompleteness Theorem and its implications? It would be pointless to engage you on this. It has real-world implications in algorithms.

I'd say your understanding of logic is like understanding the syntax and grammar of a language but being unable to understand how to read and comprehend. You say mathematicians have it all wrong. Can you go from the general to a specific example where there is something wrong? Or are you just trying to elevate yourself?

What does A AND A! mean? What does A OR A! mean?
 
There is no way to say conclusively that there is no problem lurking in the foundations.

Yes? But it's you who said this:

- Formal logic can never be invalid or wrong when the rules are properly applied.

- The application of logic can be wrong only when it is improperly applied to a problem.


You seem unable to say anything without contradicting yourself at every turn.

If you worked real-world problems instead of simple syllogisms you might understand that.

What do you know of me?! You think I spent all my life considering "simple syllogisms"?! Are you for real?!

As a logician, are you familiar with the Incompleteness Theorem and its implications? It would be pointless to engage you on this. It has real-world implications in algorithms.

You don't have to explain anything. We all know you can't even explain yourself.

I'd say your understanding of logic is like understanding the syntax and grammar of a language but being unable to understand how to read and comprehend.

And how could you possibly know that?

You say mathematicians have it all wrong.

No, I didn't. I'm talking only of mathematical logic and only to the extent that it is understood or presented as a correct model of the logic of human reasoning. Boole talked of the "Laws of thought". Frege thought he could formalise mathematical proof. They had it all wrong.

Can you go from the general to a specific example where there is something wrong?

I already gave a specific example. It's a very well-known example, not even my own. So what's wrong with you?

Or are you just trying to elevate yourself?

You know, in most countries it's probably a crime to advocate the rape of children, don't you?

What does A AND A! mean? What does A OR A! mean?

There's no difficulty there. Even Boole could get that right. Now, please explain to us how the implication works... Nobody did, you know. So if you can do it, please show the world.
EB
 
Logic can never be wrong as logic when the rules are applied properly. As the saying goes in software: GIGO, garbage in, garbage out. Software always works exactly as it is coded.

Just because a syllogism is logically valid, i.e. the conclusion follows from the premises, does not mean the syllogism was properly constructed to solve a problem. GIGO again.

One of the consequences of the Incompleteness Theorem is that in an axiomatic system there are truths that cannot be proven within the system. Euclidean geometry is an example: a point is infinitely small and massless, a line is comprised of an infinite number of points. Not provable in geometry.

It has consequences. I write a program. I write a second program to validate the first program. How do I validate the second program?

Logic is the same. You cannot use logic as an absolute proof of logic. What you demonstrate is whether logical statements are valid within the system. Whether there can be a hidden flaw is not provable. That was in part the genesis of the Turing Machine.

In math there are the Laplace and Fourier transforms. They are everywhere in science and engineering. They rest on the idea that for any given function there is only one unique pair of transforms. There is a proof that says there is only one unique transform pair for any function. If an exception were found it could have serious consequences.

How do you prove the proof is correct and without flaw? The truth table of an AND function is axiomatic; it is not logically provable.
 
I believe mathematical logic is wrong. I mean, really, really wrong. I mean actually all wrong. (...) Which may be good or bad, but how do we know which?
EB

You said two geometries conflict. Which, and how? Link?

If you read a book on proofs and did the examples you would understand.

In calculus we learn integration by parts. There is a form but no rules or systematic method for applying it. It is part intuition from experience and part trial and error. There is no way to tell whether a given function can actually be solved through integration by parts.

It is the same with proofs. There is no structured method of any kind by which, given a theorem, a proof results.

Like integration by parts, it is part intuition from experience, knowledge of math, and trial and error.
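
To illustrate with a sketch (Python, assuming the sympy library is available; the example integral is mine): the form is fixed, integral of u dv = u*v - integral of v du, but choosing the split between u and dv is the trial-and-error part.

Code:
import sympy as sp

x = sp.Symbol("x")
# Integrate x*e^x by parts with the "lucky" split u = x, dv = e^x dx.
# The opposite split (u = e^x, dv = x dx) only makes the integral worse.
u, dv = x, sp.exp(x)
v = sp.integrate(dv, x)                # v = e^x
du = sp.diff(u, x)                     # du/dx = 1
by_parts = u * v - sp.integrate(v * du, x)
print(by_parts)                        # x*exp(x) - exp(x), i.e. (x - 1)*e^x
print(sp.integrate(x * sp.exp(x), x))  # sympy's direct answer agrees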

Computers are now being used to derive proofs. Millions of trial-and-error attempts can be run quickly. A quote attributed to AE: the greatest tool a scientist has is a wastebasket.

So in this case you are right: mathematicians can make mistakes. Mistakes can exist for years before being corrected. But that is not breaking news or a profound revelation. What is your point? We know that.

The question as to whether there is a way to prove all theorems by an algorithm goes back at least to the early 20th century. The question was part of the genesis of the Turing Machine, a universal algorithmic processor. It is also related to the derivation of the Incompleteness Theorem: no consistent logical axiomatic system can be complete. Consistent in a general sense means that no matter how the system is applied, for example geometry or algebra, the answer will always be the same.

I covered this stuff in a Theory of Computation class. I read a book on doing proofs so I could read the proofs of theorems in engineering textbooks. You may not think so, but engineering foundations contain many proofs. A lot of the control systems theorems and proofs were developed by a mathematician-engineer, Bode, at Bell in the 1930s. I used to have a copy of his book.

For me, all this was not an academic debate.
 
I asked the question because I knew you would not understand the question.

What does A AND B mean? It does not mean anything. It is a definition, defined by a truth table.

Given the logical definitions such as AND, OR, and the rest, can it be logically proven that combinations of functions in formal logic will never lead to erroneous results? If so, how?
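
For the propositional fragment there is in fact a standard answer, sketched below (Python, illustrative): truth-functional formulas can be checked exhaustively, so any finite combination of AND, OR, NOT and -> can be verified or refuted by enumerating every assignment. That settles nothing about richer axiomatic systems, which is where Gödel comes in.

Code:
from itertools import product

def is_tautology(formula, n_atoms):
    """True iff the formula holds under every truth assignment."""
    return all(formula(*row) for row in product([True, False], repeat=n_atoms))

implies = lambda p, q: (not p) or q
# Modus ponens as a formula: ((p -> q) AND p) -> q
modus_ponens = lambda p, q: implies(implies(p, q) and p, q)
print(is_tautology(modus_ponens, 2))         # True
print(is_tautology(lambda p, q: p or q, 2))  # False: fails when p = q = False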
 
Reminder

I believe mathematical logic is wrong. I mean, really, really wrong. I mean actually all wrong. (...) Which may be good or bad, but how do we know which?
EB
 
I asked the question because I knew you would not understand the question.

What does A AND B mean? It does not mean anything. It is a definition, defined by a truth table.

Given the logical definitions such as AND, OR, and the rest, can it be logically proven that combinations of functions in formal logic will never lead to erroneous results? If so, how?

Now, please explain to us how the implication works... Nobody did, you know. So if you can do it, please show the world.
EB

Go on, do that.
EB
 
There is something about the generally accepted definition of “valid” that I can’t quite shake. It initially struck me as a definition with too broad a scope, allowing in things it shouldn’t (like contradictions) while capturing what it should (being collectively exhaustive one layer down), meaning what would otherwise be sound should the argument also have true premises. It’s an overreach as far as definitions go, so it seems more of a conflation with an off-target insight than anything that resembles the ‘is’ of identity.

Apparently, according to mathematical logic, anything follows from a contradiction; it’s a good thing it doesn’t matter if that’s true, since it guarantees any such argument it’s used in is unsound; but we really should consider a forward approach (what’s included) rather than a backwards approach (everything not excluded) when tweaking the definition.

This is a stipulative definition for a reason. The surrounding construct is already in place. A deductive argument MUST be sound if it’s VALID and all premises are TRUE. The fact that a contradiction implies validity according to the definition of “validity” used is about as meaningful as if it were instead true that contradictions imply they’re not valid. That should mean something about whether the truth of “contradictions imply anything” is merely a function of satisfying a definition that is exclusionary (backwards).

I’m sorry guys. I just don’t have the right words. That has to sound muddled. It’s just that an implication through form makes sense when parts of the conclusion are to be found in the premises; it’s ONLY through definition that contradictions are allowed to imply anything. Had the definition been equally flawed, but from the opposite direction, our reasoning that contradictions imply (not nothing) but that nothing is true would still be based on the satisfaction of a definition.

Again, it’s not lexical but stipulative for a reason, and it’s slightly off target. It accomplishes our goal but is slightly too broad in that it captures and says of contradictions something that simply oughtn’t be considered true. What should be the case is that contradictions within deductive arguments fail to imply anything; I’m not sure if contradictions should render an argument invalid, but it should definitely not be valid (considering the distinction between not valid and invalid, should it be important).

I’m not sure how to reword the definition, but at the very least I could piecemeal it, sloppy as that might be: simply exclude contradictions within the definition.
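
For what it’s worth, the “anything follows from a contradiction” result is usually derived rather than directly stipulated. A sketch of the standard derivation (C. I. Lewis’s argument), with Q arbitrary:

1. P and not-P (premise: the contradiction)
2. P (from 1, conjunction elimination)
3. P or Q (from 2, disjunction introduction)
4. not-P (from 1, conjunction elimination)
5. Q (from 3 and 4, disjunctive syllogism)

So excluding contradictions from the definition means giving up at least one of these inference rules; relevance logics, for instance, reject disjunctive syllogism for exactly this reason.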
 
The way I look at it, valid means consistent with rules. A valid sentence in terms of syntax and grammar. You can have a sentence that is grammatically and syntactically correct, yet completely meaningless.

Lewis Carroll's Jabberwocky poem.

Semantics, of course.

In a discussion someone can have a valid argument, but it is not necessarily true. In a two-sided debate an independent observer may say both sides have valid arguments: given the hypotheses and premises, a valid conclusion follows on both sides. The question is which of the hypotheses and premises are true in reality.

A valid syllogism according to the rules of logic does not necessarily mean the conclusion is true in reality, only that the conclusion follows from the premises.

That is why pure linear logic is of limited use for complex real-world problems. Not all problems are reducible to syllogisms.
 
I asked the question because I knew you would not understand the question.

What does A AND B mean? It does not mean anything. It is a definition, defined by a truth table.

Given the logical definitions such as AND, OR, and the rest, can it be logically proven that combinations of functions in formal logic will never lead to erroneous results? If so, how?

Now, please explain to us how the implication works... Nobody did, you know. So if you can do it, please show the world.


EB

Go on, do that.
EB

Asked and answered. A definition has no meaning; it is a 'definition'. Formal logic is an arbitrary, rules-based system. If that was not enough for you then try harder to understand. I cannot connect the mental dots for you.

You do not appear to have actually done much reasoning or work with logic and math. Try the short book on proofs I recommended and work some of the problems. It will all become a lot clearer.

The questions you pose go a lot deeper. They touch on formal languages in the Theory of Computation and computer science. Computer science is the center of logic today. Formal logic in math is axiomatic, as in geometry: the system develops from a defined set of relations which are not necessarily provable. That is why I think the Incompleteness Theorem applies. There is no way, using logic, to prove that a particular set of logic will always be correct in a general sense.

If you do not understand Gödel then you are thinking in pre-20th-century philosophy. Aristotle thought that, given a set of basic principles, the universe could be deduced logically. Gödel says not. In electrical engineering Gödel is known in general. I read a book about him.

https://en.wikipedia.org/wiki/Axi

An axiom or postulate is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments. The word comes from the Greek axíōma (ἀξίωμα) 'that which is thought worthy or fit' or 'that which commends itself as evident.'[1][2]

The term has subtle differences in definition when used in the context of different fields of study. As defined in classic philosophy, an axiom is a statement that is so evident or well-established, that it is accepted without controversy or question.[3] As used in modern logic, an axiom is a premise or starting point for reasoning.[4]

As used in mathematics, the term axiom is used in two related but distinguishable senses: "logical axioms" and "non-logical axioms". Logical axioms are usually statements that are taken to be true within the system of logic they define (e.g., (A and B) implies A), often shown in symbolic form, while non-logical axioms (e.g., a + b = b + a) are actually substantive assertions about the elements of the domain of a specific mathematical theory (such as arithmetic). When used in the latter sense, "axiom", "postulate", and "assumption" may be used interchangeably. In general, a non-logical axiom is not a self-evident truth, but rather a formal logical expression used in deduction to build a mathematical theory. To axiomatize a system of knowledge is to show that its claims can be derived from a small, well-understood set of sentences (the axioms). There are typically multiple ways to axiomatize a given mathematical domain.

Any axiom is a statement that serves as a starting point from which other statements are logically derived. Whether it is meaningful (and, if so, what it means) for an axiom to be "true" is a subject of debate in the philosophy of mathematics

Mathematical logic

In the field of mathematical logic, a clear distinction is made between two notions of axioms: logical and non-logical (somewhat similar to the ancient distinction between "axioms" and "postulates" respectively).

Logical axioms

These are certain formulas in a formal language that are universally valid, that is, formulas that are satisfied by every assignment of values. Usually one takes as logical axioms at least some minimal set of tautologies that is sufficient for proving all tautologies in the language; in the case of predicate logic more logical axioms than that are required, in order to prove logical truths that are not tautologies in the strict sense.

https://plus.maths.org/content/goumldel-and-limits-logic

Gödel proved that the mathematical methods in place since the time of Euclid (around 300 BC) were inadequate for discovering all that is true about the natural numbers. His discovery undercut the foundations on which mathematics had been built up to the 20th century, stimulated thinkers to seek alternatives and generated a lively philosophical debate about the nature of truth. Gödel's innovative techniques, which could readily be applied to algorithms for computations, also laid the foundation for modern computer science.
 
So the answer is you don't have the beginning of a clue.
EB
 
I’m not sure how to reword the definition, but at the very least I could piecemeal it, sloppy as that might be: simply exclude contradictions within the definition.

The fact that their definition of validity is wrong shows mathematicians don't even understand logical validity. Mathematical logic is not even a restriction of logic. It's not logic at all.
EB
 