lpetrich
Contributor
The software that learned to play chess and Go has now been adapted to a problem in mathematics: multiplying matrices efficiently, where a matrix is a rectangular array of numbers or number-like entities. For multiplying an m*n matrix by an n*p matrix, the usual algorithm requires m*n*p multiplications and m*n*p additions; if one skips the first addition for each output value, the number of additions drops to m*(n-1)*p. Thus, for two 2*2 matrices, one needs 8 multiplications and only 4 additions. One can improve on that with the Strassen algorithm - it does some of the additions before the multiplications, and it needs only 7 multiplications, at the cost of 18 additions. If multiplication is much more computationally costly than addition, this rearrangement can be worthwhile.
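As a minimal sketch (the function name is my own, not from any of the articles), here is Strassen's 2*2 scheme: seven products m1..m7 formed from sums and differences of the entries, then recombined into the four entries of the result.

```python
def strassen_2x2(a, b):
    """Multiply two 2x2 matrices (lists of rows) with 7 multiplications
    instead of the usual 8, at the cost of extra additions/subtractions."""
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    # The seven Strassen products:
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombine into the product matrix:
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

Nothing in the scheme uses commutativity of multiplication, which is what lets the entries be submatrices rather than plain numbers.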
Breaking one's multiplicand matrices down into 2*2 arrays of submatrices, one can apply Strassen's algorithm to them, recursing on each submatrix multiplication. The asymptotic performance is then n^(log2 7) ~ n^2.8074 instead of n^3.
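The recursion is easy to count: each halving of the matrix size costs 7 recursive multiplications, so an n*n product (n a power of 2) takes 7^(log2 n) = n^(log2 7) scalar multiplications. A hypothetical helper to check the arithmetic:

```python
def strassen_mults(n):
    """Scalar multiplications used by fully recursive Strassen
    on an n x n matrix, for n a power of 2."""
    return 1 if n == 1 else 7 * strassen_mults(n // 2)

# Compare against the naive n**3 count: for n = 8,
# Strassen uses 7**3 = 343 multiplications versus 8**3 = 512.
```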
While 7 is provably the minimum number of multiplications for 2*2 matrix multiplication, one can do better for larger matrices, and DeepMind has adapted AlphaZero to search for more efficient algorithms: AlphaTensor.
- Discovering novel algorithms with AlphaTensor - DeepMind itself - "First extension of AlphaZero to mathematics unlocks new possibilities for research"
- Discovering faster matrix multiplication algorithms with reinforcement learning | Nature
- Artificial intelligence finds faster algorithms for multiplying matrices
- AlphaTensor: DeepMind's AI For Discovering Efficient Matrix Multiplication Algorithms – Weights & Biases - "DeepMind has revealed AlphaTensor, an extension of AlphaZero which searches for more efficient matrix multiplication algorithms by treating the search process like a game."
- How DeepMind's AlphaTensor AI Devised a Faster Matrix Multiplication - The New Stack - "The DeepMind team approached the matrix multiplication problem like a game, with AlphaTensor building upon its game-playing predecessor, AlphaZero."
- AI Reveals New Possibilities in Matrix Multiplication | Quanta Magazine - "Inspired by the results of a game-playing neural network, mathematicians have been making unexpected advances on an age-old math problem."