If you’re a computer, it turns out that the fastest way to multiply two numbers, especially two very large numbers, is not the grade school method of stacking the two numbers, multiplying each digit in the top number by each digit in the bottom number, and adding the results. Since 1960, mathematicians have been discovering ever faster multiplication methods, and recently a pair of mathematicians discovered one that is perhaps the fastest possible.
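To see why the grade school method gets slow, here’s a minimal Python sketch of it (the function name and the digit-by-digit bookkeeping are my own illustration, not from the post): every digit of one number is multiplied by every digit of the other, so the work grows roughly with the square of the number of digits.

```python
def grade_school_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers the way it's taught in school:
    each digit of one number is multiplied by each digit of the other,
    so an n-digit by n-digit product takes about n*n digit multiplications."""
    da = [int(d) for d in str(a)][::-1]  # digits, least significant first
    db = [int(d) for d in str(b)][::-1]
    result = 0
    for i, x in enumerate(da):
        for j, y in enumerate(db):
            result += x * y * 10 ** (i + j)  # shift each partial product into place
    return result

print(grade_school_multiply(1234, 5678) == 1234 * 5678)  # True
```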
Their method is a refinement of the major work that came before it. It splits up digits, uses an improved version of the fast Fourier transform, and takes advantage of other advances made over the past forty years. “We use [the fast Fourier transform] in a much more violent way, use it several times instead of a single time, and replace even more multiplications with additions and subtractions,” van der Hoeven said.
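The Harvey–van der Hoeven algorithm itself is far more involved, but the basic FFT idea it builds on can be sketched in a few lines: treat the digits as coefficients, multiply them all at once in the frequency domain, and then propagate carries. The sketch below is a toy illustration under my own assumptions (floating-point FFT via NumPy, base-10 digits), not their actual method, and it loses precision once the numbers get very large.

```python
import numpy as np

def fft_multiply(a: int, b: int) -> int:
    """Toy FFT-based multiplication: convolve the digit sequences in the
    frequency domain, then carry. Illustrative only; real fast-multiplication
    algorithms use exact arithmetic instead of floating-point FFTs."""
    da = [int(d) for d in str(a)][::-1]  # digits, least significant first
    db = [int(d) for d in str(b)][::-1]
    n = 1
    while n < len(da) + len(db):
        n *= 2                           # pad to a power of two for the FFT
    fa = np.fft.fft(da, n)
    fb = np.fft.fft(db, n)
    # Pointwise products in the frequency domain equal the convolution of digits.
    coeffs = np.rint(np.fft.ifft(fa * fb).real).astype(np.int64)
    # Propagate carries to turn the convolution back into base-10 digits.
    result, carry = 0, 0
    for i, c in enumerate(coeffs):
        carry, digit = divmod(int(c) + carry, 10)
        result += digit * 10 ** i
    return result + carry * 10 ** len(coeffs)

print(fft_multiply(123456789, 987654321) == 123456789 * 987654321)  # True
```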
What’s interesting is that, independently of these discoveries, computers have become a lot better at multiplication:
In addition, the design of computer hardware has changed. Two decades ago, computers performed addition much faster than multiplication. The speed gap between multiplication and addition has narrowed considerably over the past 20 years to the point where multiplication can be even faster than addition in some chip architectures.
(via @macgbrown, who passed this along after I posted this video on Russian multiplication)
Tags: mathematics