For the last decade, I've been using computer-based tools for stating and verifying deductive arguments for mathematical theorems, and for checking the correctness of computer programs, so I work with deductive logic quite intensively day to day. To be fair, the computer does a lot of the work, and is much better when it comes to fiddly details. I mostly think about the proofs a few levels up.
Ah, that's interesting. Any chance you could give us an example?
Specifically, I'm interested in the kind of typical deductive arguments you have to deal with.
It's pretty impenetrable, but here's a two-line snippet that is typical of the code I've written, a snippet which advertises that it's about logic:
Code:
assume "between P1 X P2 /\ between Q1 X Q2" at [4]
hence "~(P1 = P2) /\ ~(Q1 = Q2)" at [5] by [BET_NEQS]
You'll have to excuse the ASCII art. The software was designed before Unicode. "/\" is ∧ and means "logical and".
The snippet deduces that P1 is not equal to P2, and Q1 is not equal to Q2, given that X lies (strictly) between the respective pairs of points, using a lemma called "BET_NEQS". I was mostly sad about having to write this code, since I consider the matter obvious, but when you use computerised theorem provers, you often have to spell stuff out.
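If you'd like to see the same flavour of reasoning in something self-contained, here's a rough analogue sketched in Lean 4 (assuming Mathlib). To be clear, this is not my actual system, and it's a deliberate simplification: it models points as rationals on a line and strict betweenness as a chain of strict inequalities, which is enough to show what a BET_NEQS-style step looks like:

```lean
-- A toy model, not the real geometric setting: points are rationals on
-- a line, and "X strictly between A and B" just means A < X < B.
def Between (a x b : ℚ) : Prop := a < x ∧ x < b

-- The analogue of the BET_NEQS step: strict betweenness forces the
-- endpoints apart, because a < x and x < b give a < b, hence a ≠ b.
theorem between_ne {a x b : ℚ} (h : Between a x b) : a ≠ b :=
  ne_of_lt (lt_trans h.1 h.2)
```

Even here, the prover makes you spell out the transitivity step that a human would wave through without noticing.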
Modern science and modern mathematics were already well developed when modern logic got fully formalised, say around the 1930s. So, basically, before that, scientists and mathematicians had to rely on their logical intuition, and they seem to have done really well. I haven't found in the literature any grand claim by contemporary logicians that formal logic would be crucial to the current development of science and maths (I found just one very modest claim, which suggests that's all I'll ever find). It's my understanding that mathematical theorems are usually not even fully formalised, let alone demonstrated formally. Rather, there is a lot of reliance on intuition. I also expect that all demonstrations come down to A & B & C etc. imply Z. So, basically, formal proof would consist largely in making explicit the conjunction of premises. I don't see why you'd need modern logic at all for that. Do you?
This paragraph is all correct. Maths is almost entirely unformalised (thank god, given the current state of formalised mathematics!). But we pretty much work on the philosophy that, even if we almost never formalise, we potentially could, if we were so inclined, and we assume that our informal proofs would be a reasonable high-level account of the fully formalised proof.
That said, there are now a few high-profile cases where practising mathematicians have been forced to formalise. In these cases, the proofs are so subtle, and have so many case-splits, that you can't trust a human to check them reliably. This was the situation with the proofs of the Four Colour Theorem and the Kepler Conjecture, proofs that rely on big computations to eliminate cases, and which are way too laborious to check by hand. Mathematicians make mistakes, and we've had big cases of erroneous proofs getting through review (the first "proof" of the Four Colour Theorem was floating around for a decade before someone pointed out that it was broken). By contrast, a machine-verified proof is pretty much beyond all possible reproach.
So I would stress that formal logic and verification here are used to help you check proofs. Otherwise, you mostly figure out the proof using your sloppy ape brain, with good old-fashioned intuition. You just bring in the formal logic to vet your logical acumen. This, so I've read, is exactly how Frege intended formal logic: it wasn't the way to do maths. It was the way to vet maths. Indeed, my experience of working in formal logic tells me that formal logic is a really poor match for the way I actually think about maths problems. It's mostly for the computers, not for the humans.
Concerning the correctness of computer programmes, as I see it, the critical point is that the computer does exactly what the programmer intended it to do.
That almost never happens. What do you think a bug is?
I'm not sure how you could ever check that this is the case, other than by using stringent protocols for writing and testing programmes as they are developed, including identifying all possible states the programme could reach; often there are too many of those, and we can't even enumerate all of them in a reasonable amount of time. In other words, there's no formal method to prove a programme correct. Am I wrong on this?
EB
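Let me first make your state-space worry concrete with a toy sketch in Python. Everything in it is invented for illustration: the "system" is two processes that each read and then increment a shared counter without locking, and the checker brute-forces every reachable state, the way a model checker does in miniature. For a system this tiny the search is instant; the trouble you describe is that the state count grows exponentially with the size of the real system.

```python
from collections import deque

def explore(initial, transitions, invariant):
    """Breadth-first search over every reachable state.
    Returns the first state that violates the invariant, or None."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Toy system: two processes each do read-then-increment on a shared
# counter with no locking. A state is (counter, pc1, saved1, pc2, saved2),
# where each pc is "read", "write", or "done".
def transitions(state):
    counter, pc1, s1, pc2, s2 = state
    out = []
    if pc1 == "read":
        out.append((counter, "write", counter, pc2, s2))
    elif pc1 == "write":
        out.append((s1 + 1, "done", s1, pc2, s2))
    if pc2 == "read":
        out.append((counter, pc1, s1, "write", counter))
    elif pc2 == "write":
        out.append((s2 + 1, pc1, s1, "done", s2))
    return out

initial = (0, "read", None, "read", None)

# Desired property: once both processes finish, the counter is 2.
def invariant(state):
    counter, pc1, _, pc2, _ = state
    return not (pc1 == "done" and pc2 == "done") or counter == 2

bad = explore(initial, transitions, invariant)
print(bad)  # a lost-update interleaving: both done, counter stuck at 1
```

The search finds the classic lost-update bug (both processes read 0, both write 1) without anyone having to think of that interleaving in advance. That's the good news; the bad news, as you say, is that realistic systems have astronomically many states, which is why people turn to deductive proof rather than enumeration.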
For lots of programs, we already have engineering specifications. If you want to write a web server, there are a whole bunch of engineering documents which tell you what your program needs to do before it can be called a "conforming implementation." It's not that hard to rewrite these specifications in formal logic, after which you can use a machine verifier to prove that a given implementation really does conform to the specification.
There is a lot more code out there that could be matched to formal specifications, and it's usually the code that most needs it.
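To illustrate the spec-versus-implementation split with something much smaller than a web server: here is a made-up example where the "spec" is a logical predicate for sorting, written in Python rather than a prover's logic. A machine verifier would prove conformance for all inputs; here I merely check it exhaustively over a small finite domain, which shows the shape of the idea if not its full strength.

```python
from itertools import product

# The "specification": a purely logical statement of what sorting means.
# An output conforms iff it is ordered and is a permutation of the input.
def spec_sorted(inp, out):
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

# The candidate implementation under test: insertion sort.
def implementation(xs):
    result = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] <= x:
            i += 1
        result.insert(i, x)
    return result

# Exhaustive check over every list of length <= 3 drawn from {0, 1, 2}.
# A verifier would replace this loop with a proof covering all inputs.
for n in range(4):
    for xs in product(range(3), repeat=n):
        assert spec_sorted(list(xs), implementation(list(xs)))
print("all cases conform")
```

Notice that the spec says nothing about how to sort, only what counts as sorted. That separation is the whole point: the specification can be audited independently of any implementation, and the verifier's job is just to bridge the two.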