Matrix Multiplication Not Yet Doable In Quadratic Time

Last night, a paper appeared on the arXiv titled “A $\Theta(n^2)$ time matrix multiplication algorithm.”

At first glance, the paper seemed to score zero on Scott Aaronson’s list of Ten Signs a Claimed Mathematical Breakthrough is Wrong, and it came from a researcher who has worked extensively on the related problem of all-pairs shortest paths, so it looked exciting. Unfortunately, it actually scores one on Scott’s list: it contradicts known impossibility results. (Thanks to Yuval Filmus for the references.)

The first step in the algorithm is to show how to compute the inner product of two $3^n$-dimensional vectors using $n^{O(1)} \cdot 2^n$ multiplications. This contradicts a lower bound due to Pan (see here for the proof of a more general result) that computing the inner product of two $N$-dimensional vectors requires at least $N$ multiplications.
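
To spell out the gap: writing $N = 3^n$, we have $2^n = N^{\log_3 2} \approx N^{0.631}$ and $n^{O(1)} = N^{o(1)}$, so the claimed count of $n^{O(1)} \cdot 2^n$ multiplications is $N^{0.631 + o(1)}$, polynomially smaller than Pan’s lower bound of $N$.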

In addition, Raz has proved an $\Omega(n^2 \log n)$ lower bound on the size of arithmetic circuits for matrix multiplication, and, to the best of my understanding, the paper’s claims imply an arithmetic circuit of size $O(n^2)$. (Raz’s lower bound assumes that the circuit is not given access to large constants; as far as I can see, the formulas in Han’s paper do not require large constants.)
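
Concretely: Raz’s theorem gives a constant $\epsilon > 0$ such that every bounded-coefficient circuit for $n \times n$ matrix multiplication has size at least $\epsilon \, n^2 \log n$, while the paper’s claims would give circuits of size at most $C n^2$ for some constant $C$; once $n$ is large enough that $\epsilon \log n > C$, the two statements are incompatible.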

6 thoughts on “Matrix Multiplication Not Yet Doable In Quadratic Time”

  1. In the last paragraph there is a typo: a square is missing (needed to actually get the contradiction); the lower bound is $\Omega(n^2 \log n)$.

  2. I also took the paper seriously at first. The problem I found is that it claims to reduce multiplying two $n \times n$ matrices to multiplying an $n \times c$ matrix by a $c \times n$ matrix with $c \ll n$, and, importantly, the reduction is applied independently to the two matrices. But this would imply that the product has rank $\leq c$, which is impossible if both of the original matrices have full rank and $c < n$.

    I would argue that it does score higher on Scott's test.
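
    To illustrate the rank obstruction, here is a minimal NumPy sketch (the dimensions, seed, and random matrices are illustrative stand-ins, not taken from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, c = 8, 3  # hypothetical sizes with c << n

    # Compressing each factor independently to n x c and c x n
    # forces the claimed product to have rank at most c ...
    L = rng.standard_normal((n, c))
    R = rng.standard_normal((c, n))
    print(np.linalg.matrix_rank(L @ R))  # 3, i.e. at most c

    # ... but the product of two generic full-rank n x n matrices has rank n.
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    print(np.linalg.matrix_rank(A @ B))  # 8, i.e. full rank n
    ```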

  3. 1) Has the author retracted?
    2) Is there anything of interest anyway? An improvement on the known MM algorithms?

    Babai’s recent result-retraction still ended up leaving a result of interest.

    Alas, this is rare: most proofs that $P = NP$ or $P \neq NP$ have nothing of merit in them.

    Since the MM “result” is a more reasonable thing to have partial results on, are there any?

  4. Have you had a look at the newly updated version? The proof looks more plausible now.

  5. I know it has been many years, but I will comment anyway. I once had a similar idea, but I stopped investigating because of the following argument.

    If you have some data you want to compress, you could store it in a matrix $A$ (on the left). Then we multiply that data by the identity matrix (on the right). Preprocessing the row vectors of matrix $A$ with such an algorithm would mean we could compress any content (if it is big enough, even random data). Why? Because instead of storing the rows of matrix $A$, we would store the preprocessed information about the row vectors of $A$, which is less than $O(N)$ information. In this case it would be $O(\log n^{…})$; but even $O(n^{0.9})$ would be impossible. We do not even have to store the preprocessed columns of matrix $B$, because those can be computed at any time.

    So sending only the preprocessed data (of the left row vectors) to someone else would mean we could reconstruct matrix $A$ on the receiver’s side by multiplying it by the identity matrix.

    This is clearly not possible! It would mean that random data can be compressed.
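
    In counting terms (the standard incompressibility argument, made explicit): there are $2^n$ distinct 0/1 row vectors of length $n$, but only $2^{O(n^{0.9})}$ possible descriptions of size $O(n^{0.9})$, so for large $n$ no preprocessing map can be injective, and a typical random row cannot be recovered from its preprocessed form.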
