Automatic Algorithms Optimization via Fast Matrix Exponentiation (2015)

  • So, compilers can do this, and do do this, depending on the language and its guarantees.

    GCC and LLVM use a chains-of-recurrences algebra and form for symbolic analysis that could easily represent this (or be extended to). But they probably won't bother any time soon.
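    For readers unfamiliar with the formalism: a chain of recurrences represents a polynomial sequence as iterated additions, so a degree-k induction variable costs only k additions per loop iteration to evaluate or reason about. A minimal Python sketch (the flat-list encoding here is illustrative, not GCC's internal representation):

    ```python
    def cr_values(cr, count):
        """Yield successive values of the chain of recurrences
        {c0, +, c1, +, ..., +, ck}, encoded as the list [c0, c1, ..., ck].

        Each step does c0 += c1, c1 += c2, ..., so a degree-k polynomial
        sequence is produced with only k additions per iteration.
        """
        state = list(cr)
        for _ in range(count):
            yield state[0]
            for i in range(len(state) - 1):
                state[i] += state[i + 1]

    # i*i for i = 0..5 is the chain of recurrences {0, +, 1, +, 2}:
    list(cr_values([0, 1, 2], 6))  # [0, 1, 4, 9, 16, 25]
    ```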

    To understand why requires a little more detail first:

    The statement "However, compilers do not replace the computational algorithm, written by a human, by a more asymptotically efficient one" is mostly wrong.

    They generally do where they are allowed. They just aren't often allowed, or it's not worth it.

    There are languages that are playing with this even more heavily.

    But whether it's a good idea, eh.

    In numerics in particular, people often pick their algorithms very carefully for stability/etc reasons.

    For more generic things like sorts, a lot of this gets hidden in standard libraries anyway.

    For things nobody could really complain about, the compilers do idiom recognition of things like memcpys, string compares, etc and replace them.

    In any case, the problem is not recognizing and replacing the algorithm or idiom. The problem is deciding when doing so actually serves your users well.

    Going back up, when we first implemented scalar evolutions in GCC, we supported everything you can think of - closed form polynomial functions, matrices, mixers.

    In the end, though, we were spending a lot of time handling cases that were very rare, mostly optimal as written anyway (so not useful to the optimizer), and that still required exponential-time solving to do anything with.

    So a lot of that code was removed.

    Could you handle matrix exponentiation because it's fast? Sure.

    But it's unlikely to be worth it.
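    For reference, the transformation the article's title refers to rests on the standard Fibonacci matrix identity [[1,1],[1,0]]^n = [[F(n+1),F(n)],[F(n),F(n-1)]], which repeated squaring evaluates in O(log n) matrix multiplications. A few lines of Python sketch the textbook technique (this is not cpmoptimize's actual code):

    ```python
    def mat_mult(a, b):
        """Multiply two 2x2 matrices given as nested lists."""
        return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
                [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

    def mat_pow(m, n):
        """Raise a 2x2 matrix to the n-th power by repeated squaring."""
        result = [[1, 0], [0, 1]]  # identity
        while n:
            if n & 1:
                result = mat_mult(result, m)
            m = mat_mult(m, m)
            n >>= 1
        return result

    def fib(n):
        # [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
        return mat_pow([[1, 1], [1, 0]], n)[0][1]
    ```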

  • This was a hot topic 10-15 years ago. Then, over a few years, we went from "there has been little work applying computer algebra methods to program analysis and optimization" to "everything under the sun has already been done."

    It's been about 3 years since I looked at this stuff, and I'm having trouble remembering the names of the people in this area (lots of Europeans). Most of the action is in finding ever-larger fragments of programming languages they can solve. The one in this article handles what are called "arithmetic programs;" a bunch of people have worked on handling restricted classes of recursive programs.


  • Unfortunately it only works for toy examples.

    > cpmoptimize.recompiler.RecompilationError: Can't optimize loop: Unsupported instruction (LOAD_ATTR, 'b') at line 12 in fib, file "test.py"

  • It is funny to see the repeated-squaring algorithm every time it is rediscovered. Knuth, unsurprisingly, covered it in one of his earlier papers, presenting it in relation to addition chains and showing that plain repeated squaring is not always superior.

    I'm curious what all came of this effort, seeing it is marked 2015. It is also fun to see benchmarks of this method used in executing fib.
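    To make that point concrete, here is a small Python sketch counting the multiplications the binary (repeated-squaring) method performs for x^n. For x^15 it needs 6, while the addition chain 1, 2, 3, 6, 12, 15 reaches x^15 in only 5 multiplications:

    ```python
    def pow_binary_mults(n):
        """Count multiplications used by left-to-right binary
        exponentiation (square-and-multiply) to compute x^n."""
        mults = 0
        bits = bin(n)[2:]
        for bit in bits[1:]:  # leading bit is the initial x, costing nothing
            mults += 1        # square
            if bit == '1':
                mults += 1    # multiply by x
        return mults

    pow_binary_mults(15)  # 6, vs. 5 for the chain 1, 2, 3, 6, 12, 15
    ```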
