lpetrich
Contributor
Gottfried Wilhelm Leibniz (1646 - 1716) has a Google Doodle, as I write this.
He did a *lot* of work in mathematics.
Systems of linear equations
He expressed systems of linear equations in matrix form: A.x = b for matrix A, vector x to be found, and vector b.
He also worked on determinants of square matrices, a sort of sum of products of the matrix's elements. It turns out to be useful for solving systems of linear equations, and Leibniz discovered Cramer's rule: each component of x has this value:
det(A with that component's location replaced by b) / det(A)
where "det" means calculate a determinant. Though Cramer's rule is impractical for all but the smallest systems of linear equations, it is nevertheless valuable theoretically. If det(A) = 0, then matrix A is "singular", and a system of equations with it has either no solutions or infinitely many.
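A minimal sketch of Cramer's rule in Python (the helper names det and cramer are mine, not anything historical). It solves a small system exactly using rational arithmetic:

```python
# Cramer's rule sketch: det() uses Laplace (cofactor) expansion along
# the first row; cramer() replaces one column of A with b at a time.
from fractions import Fraction

def det(m):
    """Determinant by cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cramer(A, b):
    """Solve A.x = b by Cramer's rule; fails on a singular matrix."""
    d = det(A)
    if d == 0:
        raise ValueError("matrix is singular")
    x = []
    for i in range(len(A)):
        # Replace column i of A with b, then take the determinant.
        Ai = [row[:i] + [b[r]] + row[i+1:] for r, row in enumerate(A)]
        x.append(Fraction(det(Ai), d))
    return x

# 2x + y = 5 and x + 3y = 10 have the solution x = 1, y = 3.
print(cramer([[2, 1], [1, 3]], [5, 10]))
```

Using exact fractions sidesteps floating-point rounding, which is one of the few practical niches where Cramer's rule is still convenient.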
Leibniz worked out how to calculate determinants recursively, in terms of determinants of submatrices of one's matrix. Though theoretically correct, for an n-by-n matrix this algorithm takes O(n!) operations. Leibniz also did some Gaussian elimination for systems of linear equations, a method that requires only O(n^3) operations. It can also be used to compute determinants, with that same overall runtime.
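The contrast between the two approaches can be sketched as follows (function names are mine): recursive cofactor expansion, which is O(n!), versus Gaussian elimination, which reduces the matrix to triangular form in O(n^3) and reads the determinant off the diagonal:

```python
def det_laplace(m):
    """O(n!): recursive cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det_laplace([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(n))

def det_gauss(m):
    """O(n^3): reduce to upper-triangular form, multiply the diagonal."""
    m = [list(map(float, row)) for row in m]
    n = len(m)
    d = 1.0
    for i in range(n):
        # Partial pivoting for stability; a row swap flips the sign.
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if m[p][i] == 0.0:
            return 0.0          # singular matrix
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
print(det_laplace(A), det_gauss(A))   # both give 8
```

Even at n = 12 or so, the factorial version is already hopeless, while elimination handles matrices with thousands of rows.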
Functions
He had a general idea of a function: finding some value by doing some operations on some input value. He used the idea in geometry: a point (x,y) on a curve has y being a function of x, or else both x and y being functions of some parameter t.
Calculus
This is the mathematics of slopes of curves (differentiation) and areas underneath curves (integration). He invented it independently of Sir Isaac Newton, and he got involved in a nasty fight over who invented it. For integration, he used a big S, for "sum", and for differentials of quantities, he used the letter d. His notation became much more widely used than Newton's, though Newton's is still used in some cases, like a dot over something for time derivatives.
He used infinitesimals, quantities that are smaller than any nonzero number but are nevertheless nonzero. For instance:
df(x)/dx = (f(x+dx) - f(x))/(dx)
where dx is an infinitesimal. In the nineteenth century, Karl Weierstrass developed the notion of limits, thus making infinitesimals unnecessary.
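A numerical sketch of that difference quotient: dx cannot be a true infinitesimal in floating point, but shrinking it shows the limit idea at work (names are mine):

```python
def dfdx(f, x, dx):
    """The Leibniz-style difference quotient (f(x+dx) - f(x)) / dx."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x * x          # exact derivative at x = 3 is 6
for dx in (1e-1, 1e-3, 1e-6):
    print(dx, dfdx(f, 3.0, dx))
```

As dx shrinks, the quotient approaches 6; push dx much below about 1e-8, though, and floating-point cancellation starts to dominate, which is one reason the limit formulation, not literal infinitesimals, is what numerical work leans on.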
With calculus, I'm sure, he derived this formula for pi:
(pi/4) = 1 - 1/3 + 1/5 - 1/7 + 1/9 - ...
This is from arctan(x) = x - x^3/3 + x^5/5 - x^7/7 + ...
How to derive it: consider the integral of 1/(1+x^2) by x. Substitute x = tan(y), so that y = arctan(x) is its inverse. Since d(tan(y))/dy = 1 + tan^2(y), the integral becomes
integral of 1/(1 + tan^2(y)) * d(tan(y))/dy by y = integral of 1 by y = y = arctan(x)
Returning to the original integrand, 1/(1+x^2) = 1 - x^2 + x^4 - x^6 + ... as a geometric series. Integrating term by term over x gives arctan(x) = x - x^3/3 + x^5/5 - x^7/7 + ...
Setting x = 1, where the alternating series still converges, gives the series for pi/4, which is what we wanted to derive.
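The series can be summed numerically (the function name is mine). It converges painfully slowly: the error after n terms of the pi/4 series is below 1/(2n+1), so getting each extra digit costs about ten times more terms:

```python
# Partial sums of the Leibniz series 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
import math

def leibniz_pi(n):
    """Sum the first n terms of the series for pi."""
    return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(n))

print(leibniz_pi(100000), math.pi)
```

A hundred thousand terms only pins down pi to about four decimal places, which is why this series is a historical landmark rather than a practical algorithm.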