steve_bank
Diabetic retinopathy and poor eyesight. Typos ...
If you go through the link, you will see that it looks similar to formal logic.
https://en.wikipedia.org/wiki/Boolean_algebra
In mathematics and mathematical logic, Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0 respectively. Unlike elementary algebra, where the values of the variables are numbers and the prime operations are addition and multiplication, the main operations of Boolean algebra are the conjunction (and), denoted ∧, the disjunction (or), denoted ∨, and the negation (not), denoted ¬. It is thus a formalism for describing logical relations in the same way that elementary algebra describes numeric relations.
Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847), and set forth more fully in his An Investigation of the Laws of Thought (1854).[1] According to Huntington, the term "Boolean algebra" was first suggested by Sheffer in 1913,[2] although Charles Sanders Peirce in 1880 gave the title "A Boolian Algebra with One Constant" to the first chapter of his "The Simplest Mathematics".[3] Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics.[4]
History
Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as connected to the origins of both fields.[5] In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington, and others until it reached the modern conception of an (abstract) mathematical structure.[5] For example, the empirical observation that one can manipulate expressions in the algebra of sets by translating them into expressions in Boole's algebra is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets.
In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting, and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. In circuit engineering settings today, there is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used interchangeably.[6][7][8] Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for VLSI circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification.[9]
Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra. Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way.[10][11][12] Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first-order logic. Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics.[5] The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity.
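As a toy illustration of the SAT problem (not how real solvers work), a formula over n variables can be decided by simply trying all 2^n assignments. The function and formula names below are my own for illustration:

```python
from itertools import product

# Brute-force Boolean satisfiability: try every assignment.
# Real SAT solvers are far more sophisticated; this only
# illustrates the problem statement.

def satisfiable(formula, nvars):
    """Return a satisfying assignment as a tuple, or None if unsatisfiable."""
    for vals in product((False, True), repeat=nvars):
        if formula(*vals):
            return vals
    return None

# (x OR y) AND (NOT x OR NOT y): true when exactly one variable is true.
print(satisfiable(lambda x, y: (x or y) and (not x or not y), 2))
# x AND NOT x: a contradiction, so unsatisfiable.
print(satisfiable(lambda x: x and not x, 1))
```

The exponential blow-up of this search is exactly why SAT's NP-completeness matters.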
Basic operations
The basic operations of Boolean algebra are as follows:
AND (conjunction), denoted x∧y (sometimes x AND y or Kxy), satisfies x∧y = 1 if x = y = 1, and x∧y = 0 otherwise.
OR (disjunction), denoted x∨y (sometimes x OR y or Axy), satisfies x∨y = 0 if x = y = 0, and x∨y = 1 otherwise.
NOT (negation), denoted ¬x (sometimes NOT x, Nx or !x), satisfies ¬x = 0 if x = 1 and ¬x = 1 if x = 0.
Alternatively the values of x∧y, x∨y, and ¬x can be expressed by tabulating their values with truth tables as follows:
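The truth tables themselves did not survive the copy-paste, so here is a short Python sketch that computes and prints them, using 1 for true and 0 for false as in the article:

```python
# Truth tables for the three basic Boolean operations,
# with 1 standing for true and 0 for false.

def AND(x, y): return x & y   # conjunction: 1 only when x = y = 1
def OR(x, y):  return x | y   # disjunction: 0 only when x = y = 0
def NOT(x):    return 1 - x   # negation: swaps 0 and 1

print("x y | x∧y x∨y")
for x in (0, 1):
    for y in (0, 1):
        print(x, y, "|", AND(x, y), " ", OR(x, y))

print("x | ¬x")
for x in (0, 1):
    print(x, "|", NOT(x))
```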
Secondary operations
The three Boolean operations described above are referred to as basic, meaning that they can be taken as a basis for other Boolean operations that can be built up from them by composition, the manner in which operations are combined or compounded. Operations composed from the basic operations include the following examples:
x → y = ¬x ∨ y
x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y) = (x ∧ ¬y) ∨ (¬x ∧ y)
x ≡ y = ¬(x ⊕ y) = (x ∧ y) ∨ (¬x ∧ ¬y)
These definitions give rise to the following truth tables, giving the values of these operations for all four possible inputs.
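Since those truth tables are also missing from the paste, the three secondary operations can be built up from the basic ones exactly as defined above and tabulated in a few lines of Python:

```python
# Secondary operations composed from AND, OR, NOT,
# following the definitions above (1 = true, 0 = false).

def NOT(x):    return 1 - x
def AND(x, y): return x & y
def OR(x, y):  return x | y

def IMPLIES(x, y): return OR(NOT(x), y)                  # x → y
def XOR(x, y):     return AND(OR(x, y), NOT(AND(x, y)))  # x ⊕ y
def EQUIV(x, y):   return NOT(XOR(x, y))                 # x ≡ y

print("x y | x→y x⊕y x≡y")
for x in (0, 1):
    for y in (0, 1):
        print(x, y, "|", IMPLIES(x, y), " ", XOR(x, y), " ", EQUIV(x, y))
```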
Laws
A law of Boolean algebra is an identity such as x ∨ (y ∨ z) = (x ∨ y) ∨ z between two Boolean terms, where a Boolean term is defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions are unnecessary for the purposes to which the laws are put. Such purposes include the definition of a Boolean algebra as any model of the Boolean laws, and as a means for deriving new laws from old as in the derivation of x∨(y∧z) = x∨(z∧y) from y∧z = z∧y as treated in the section on axiomatization.
Monotone laws
Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication. In particular the following laws are common to both kinds of algebra:[14]
Associativity of ∨: x ∨ (y ∨ z) = (x ∨ y) ∨ z
Associativity of ∧: x ∧ (y ∧ z) = (x ∧ y) ∧ z
Commutativity of ∨: x ∨ y = y ∨ x
Commutativity of ∧: x ∧ y = y ∧ x
Distributivity of ∧ over ∨: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
Identity for ∨: x ∨ 0 = x
Identity for ∧: x ∧ 1 = x
Annihilator for ∧: x ∧ 0 = 0
The following laws hold in Boolean algebra, but not in ordinary algebra:
Annihilator for ∨: x ∨ 1 = 1
Idempotence of ∨: x ∨ x = x
Idempotence of ∧: x ∧ x = x
Absorption 1: x ∧ (x ∨ y) = x
Absorption 2: x ∨ (x ∧ y) = x
Distributivity of ∨ over ∧: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)
Taking x = 2 in the third law above (idempotence of ∧) shows that it is not an ordinary algebra law, since 2×2 = 4. The remaining five laws can be falsified in ordinary algebra by taking all variables to be 1; for example, in Absorption Law 1 the left-hand side would be 1(1+1) = 2 while the right-hand side would be 1, and so on.
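Because the two-element Boolean algebra is finite, every one of these laws can be verified by brute force over all assignments of 0 and 1 to the variables. A quick Python sketch of that check:

```python
from itertools import product

# Exhaustively check a law over the two-element Boolean algebra {0, 1}.
# `law` is a function of nvars arguments returning True when the
# identity holds for that assignment.

def check(law, nvars):
    return all(law(*vals) for vals in product((0, 1), repeat=nvars))

# A law shared with ordinary algebra:
assert check(lambda x, y, z: (x & (y | z)) == ((x & y) | (x & z)), 3)  # ∧ over ∨
# Laws that fail in ordinary algebra:
assert check(lambda x: (x | x) == x, 1)                                # idempotence of ∨
assert check(lambda x, y: (x & (x | y)) == x, 2)                       # absorption 1
assert check(lambda x, y, z: (x | (y & z)) == ((x | y) & (x | z)), 3)  # ∨ over ∧
print("all laws verified over {0, 1}")
```

Since a law in n variables has only 2^n cases, this exhaustive check is a complete proof for the two-element algebra.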
All of the laws treated so far have been for conjunction and disjunction. These operations have the property that changing either argument either leaves the output unchanged or changes it in the same way as the input. Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. Operations with this property are said to be monotone. Thus the axioms so far have all been for monotonic Boolean logic. Nonmonotonicity enters via complement.
Nonmonotone laws
The complement operation is defined by the following two laws.
Complementation 1: x ∧ ¬x = 0
Complementation 2: x ∨ ¬x = 1
All properties of negation including the laws below follow from the above two laws alone.[4]
In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, whence in both algebras it satisfies the double negation law (also called involution law):
Double negation: ¬(¬x) = x
But whereas ordinary algebra satisfies the two laws
(−x)(−y) = xy
(−x) + (−y) = −(x + y)
Boolean algebra satisfies De Morgan's laws:
De Morgan 1: ¬x ∧ ¬y = ¬(x ∨ y)
De Morgan 2: ¬x ∨ ¬y = ¬(x ∧ y)
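Both De Morgan laws, along with double negation, can be confirmed over {0, 1} in a few lines, in the same exhaustive style as before:

```python
from itertools import product

# Verify De Morgan's laws and double negation over {0, 1},
# with NOT implemented as 1 - x.

def NOT(x):
    return 1 - x

for x, y in product((0, 1), repeat=2):
    assert (NOT(x) & NOT(y)) == NOT(x | y)  # De Morgan 1
    assert (NOT(x) | NOT(y)) == NOT(x & y)  # De Morgan 2

for x in (0, 1):
    assert NOT(NOT(x)) == x                 # double negation

print("De Morgan and double negation hold")
```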