_ks3e 2 days ago

It's nice to see some high-performance linear algebra code done in a modern language! Would love to see more!

Is your approach specific to the case where the matrix fits inside cache, but the memory footprint of the basis causes performance issues? Most of the communication-avoiding Krylov works I've seen, e.g. [0,1], seem to assume that if the matrix fits, so will its basis, and so end up doing some partitioning row-wise for the 'large matrix' case; I'm curious what your application is.

[0] https://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-..., e.g. page 25. [1] https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-...

  • adgjlsfhk1 2 days ago

    You might be interested in ExponentialUtilities.jl then. Julia has a really unique ability to make high performance linear algebra look like the math. See https://github.com/SciML/ExponentialUtilities.jl (specifically src/kiops.jl and src/krylov_phiv.jl) for an example of a good matrix exponential operator in ~600 lines of code+comments.

    • zamalek 2 days ago

      I have massive hopes for Julia, especially for ML. What really held me back last I looked at it was a lack of cargo-tier tooling, has that changed?

      • adgjlsfhk1 a day ago

        When did you look, and what tooling was missing? Julia's package manager Pkg was pretty heavily inspired by cargo, and IMO it does a very good job. Also, in the past 2-3 years Juliaup (modeled after rustup) has become the primary way of installing and managing Julia versions.

Sesse__ 2 days ago

It seems the DNS servers for lukefleed.xyz are subtly misconfigured, causing occasional connectivity problems:

https://dns.squish.net/traverses/de494a9fe3310415f30369a9cb1...

Or more precisely, lukefleed.xyz has NS records pointing to ns[1234].afraid.org, and the DNS servers for _afraid.org_ are subtly misconfigured (one of the six nameservers for afraid.org is evergreen.v6.afraid.org, and since you are trying to look up something in afraid.org while still in the middle of resolving afraid.org itself, you need some extra “glue records” as part of the NS response, and those are missing for that specific server).

jkafjanvnfaf 2 days ago

How accurate is this two-pass approach in general? From my outsider's perspective, it always looked like most of the difficulty in implementing Lanczos was reorthogonalization, which will be hard to do with the two-pass algorithm.

Or is this mostly a problem when you actually want to calculate the eigenvectors themselves, and not just matrix functions?

  • lukefleed 2 days ago

    That's an interesting question. I don't have too much experience, but here's my two cents.

    For matrix function approximations, loss of orthogonality matters less than for eigenvalue computations. The three-term recurrence maintains local orthogonality reasonably well for moderate iteration counts. My experiments [1] show orthogonality loss stays below $10^{-13}$ up to k=1000 for well-conditioned problems, and only becomes significant (jumping to $10^{-6}$ and higher) around k=700-800 for ill-conditioned spectra. Since you're evaluating $f(T_k)$ rather than extracting individual eigenpairs, you care about convergence of $\|f(A)b - x_k\|$, not spectral accuracy. If you need eigenvectors themselves or plan to run thousands of iterations, you need the full basis, and the two-pass method won't help. Maybe methods like [2] would be more suitable?
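
    To be concrete about the quantity involved (standard Lanczos notation, nothing specific to my implementation): after k steps the approximation is

        $$x_k = \|b\|_2 \, V_k \, f(T_k) \, e_1,$$

    where $V_k = [v_1, \dots, v_k]$ is the Lanczos basis and $T_k$ is the $k \times k$ tridiagonal matrix from the recurrence. The first pass only has to produce $T_k$, so three vectors suffice; the second pass regenerates the $v_j$ to accumulate $V_k \, (f(T_k)\,e_1)$ one column at a time. That is also why mild loss of orthogonality is tolerable here but would hurt if you wanted accurate Ritz vectors.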

    [1] https://github.com/lukefleed/two-pass-lanczos/raw/master/tex...

    [2] https://arxiv.org/abs/2403.04390

vatsachak 2 days ago

I leafed through your thesis and will now set aside some time in the future to learn more about succinct data structures.

I hope you get your pay day, your blog is great!

  • lukefleed 2 days ago

    Thanks!! I'm currently working on expanding that work. I will post something for sure when it's done.

sfpotter 2 days ago

Nice result! Arnoldi is a beautiful algorithm, and this is a good application of it.

What are you using this for and why are you working on it?

I admit I'm not personally convinced of the value of Rust in numerics, but that's just me, I guess...

  • lukefleed 2 days ago

    Hi there, thanks! I started doing this for a university exam and got carried away a bit.

    Regarding Rust for numerical linear algebra, I kinda agree with you. I think that, theoretically, it's a great language for writing low-level "high-performance mathematics." That's why I chose it in the first place.

    The real wall is that the past four decades of research in this area have primarily been conducted in C and Fortran, making it challenging for other languages to catch up without relying heavily on BLAS/LAPACK and similar libraries.

    I'm starting to notice that more people are trying to move to Rust for this stuff, so it's worth keeping an eye on libraries like the one I used, faer.

    • sfpotter 2 days ago

      Nice. I'd be curious to see if this has already been done in the literature. It is a very nice and useful result, but it's also kind of an obvious one, so I have to assume people who work on computing matrix functions are aware of it... (This is not to take anything away from the hard work you've done! You may just appreciate having a reference to any existing work that is already out there.)

      Of course, what you're doing depends on the matrix being Hermitian, which reduces the upper Hessenberg matrix in the Arnoldi iteration to tridiagonal form. Trying to do a similar streaming computation on a general matrix is going to run into problems.
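
      (To spell out why, in standard notation rather than anything taken from the post: for a Hermitian $A$ the Arnoldi relation collapses to the three-term recurrence

        $$A v_j = \beta_{j-1} v_{j-1} + \alpha_j v_j + \beta_j v_{j+1},$$

      so each step only touches the two previous vectors and the coefficients fill a tridiagonal $T_k$. For a general $A$ you instead get $A v_j = \sum_{i=1}^{j+1} h_{ij} v_i$, which needs the whole basis at every step, and the streaming idea falls apart.)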

      That said... one area of numerical linear algebra research which is very active is randomized numerical linear algebra. There is a paper by Nakatsukasa and Tropp ("Fast and accurate randomized algorithms for linear systems and eigenvalue problems") which presents some randomized algorithms, including a "randomized GMRES" which IIRC is compatible with streaming. You might find it interesting trying to adapt the machinery this algorithm is built on to the problem you're working on.

      As for Rust, having done a lot of this research myself... there is no problem relying on BLAS or LAPACK, and I'm not sure this could be called a "wall". There are also many alternative libraries actively being worked on. BLIS, FLAME, and MAGMA are examples that come to mind... but there are so many more. Obviously Eigen is also available in C++. So, I'm not sure this alone justifies using Rust... Of course, use it if you like it. :)

      • lukefleed 2 days ago

        Sorry for the late answer.

        The blog post is a simplification of the actual work; you can check out the full report here [1], where I also reference the literature about this algorithm.

        On the cache effects: I haven't seen this "engineering" argument made explicitly in the literature either. There are other approaches to the basis storage problem, like the compression technique in [2]. Funny enough, the authors gave a seminar at my university literally this afternoon about exactly that.

        I'm also not familiar with randomised algorithms for numerical linear algebra beyond the basics. I'll dig into that, thanks!

        On the BLAS point, let me clarify what I meant by "wall": when you call BLAS from Rust, you're essentially making a black-box call to pre-compiled Fortran or C code. The compiler loses visibility into what happens across that boundary. You can't inline, can't specialise for your specific matrix shapes or use patterns, can't let the compiler reason about memory layout across the whole computation. You get the performance of BLAS, sure, but you lose the ability to optimise the full pipeline.
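
        A tiny sketch of what I mean (illustrative only, not code from the post; the ddot_ declaration in the comment assumes a linked Fortran-style BLAS):

          // A generic Rust kernel: the compiler sees the whole body, so it can
          // inline it into the caller, monomorphise it, and auto-vectorise it.
          fn dot(x: &[f64], y: &[f64]) -> f64 {
              x.iter().zip(y).map(|(a, b)| a * b).sum()
          }

          // The BLAS route is an opaque FFI declaration instead, e.g.
          //
          //   extern "C" {
          //       fn ddot_(n: *const i32, x: *const f64, incx: *const i32,
          //                y: *const f64, incy: *const i32) -> f64;
          //   }
          //
          // Only the signature is visible, so nothing can be inlined or
          // specialised across that call.

          fn main() {
              let x = vec![1.0_f64; 8];
              let y = vec![2.0_f64; 8];
              println!("{}", dot(&x, &y)); // prints 16
          }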

        Also, Rust's compilation model flattens everything into one optimisation unit: your code, dependencies, all compiled together from source. The compiler sees the full call graph and can inline, specialise generics, and vectorise across what would be library boundaries in C/C++. The borrow checker also proves at compile time that operations like our pointer swaps are safe and that no aliasing occurs, which enables more aggressive optimisations; the compiler can reorder operations and keep values in registers because it has proof about memory access patterns. With BLAS, you're calling into opaque binaries where none of this analysis is possible.
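
        For the pointer-swap point, a minimal sketch (again illustrative, not the post's actual code):

          use std::mem;

          // Three-vector cycling in a Lanczos-style recurrence: the &mut
          // references are guaranteed not to alias, and the "rotation" is two
          // pointer swaps rather than copies of length-n buffers.
          fn rotate(v_prev: &mut Vec<f64>, v_curr: &mut Vec<f64>, v_next: &mut Vec<f64>) {
              mem::swap(v_prev, v_curr); // old v_curr becomes the new v_prev
              mem::swap(v_curr, v_next); // old v_next becomes the new v_curr
          }

          fn main() {
              let (mut a, mut b, mut c) = (vec![0.0; 4], vec![1.0; 4], vec![2.0; 4]);
              rotate(&mut a, &mut b, &mut c);
              assert_eq!((a[0], b[0]), (1.0, 2.0));
          }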

        My point is that if the core computation just calls out to pre-compiled C or Fortran, you lose much of what makes Rust interesting for numerical work in the first place. That's why I hope to see more efforts directed towards expanding the Rust ecosystem in this area in the future :)

        [1] https://github.com/lukefleed/two-pass-lanczos/raw/master/tex...

        [2] https://arxiv.org/abs/2403.04390

        • sfpotter 2 days ago

          Thanks for clarifying.

          I think the argument you're making is compelling and interesting, but my two concerns with this are: 1) how does it affect compile time? and 2) how easy is it to make major structural changes to an algorithm?

          I haven't tried Rust, but my worry is that the extensive compile-time checks would make quick refactors difficult. When I work on numerical algorithms, I often want to try many different approaches to the same problem until I hit on something with the right "performance envelope". And usually memory safety just isn't that hard... the data structures aren't that complicated...

          Basically, I worry the extra labor involved in making Rust code work would affect prototyping velocity.

          On the other hand, what you're saying about compiling everything together at once, proving more about what is being compiled, enabling a broader set of performance optimizations to take place... That is potentially very compelling and worth exploring if the gains are big. Do you have any idea how big? :)

          This is also a bit reminiscent of the compile time issues with Eigen... If I have to recompile my dense QR decomposition (which never changes) every time I compile my code because it's inlined in C++ (or "blobbed together" in Rust), then I waste that compile time every single time I rebuild... Is that worth it for a 30% speedup? Maybe... Maybe not... Really depends on what the code is for.

          • spockz a day ago

            If the code is split into sufficiently small crates, compile times are not that big of a deal for iteration. There is also a faster development build profile, and I would think most of the time will be spent running the benchmark and checking perf for processor usage, dwarfing any time needed for compilation.
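
            For example, a workspace split like this (crate names are made up) keeps the numerical kernel in its own compilation unit, and the profile override keeps dependencies optimised even in debug builds:

              # Cargo.toml at the workspace root (hypothetical layout)
              [workspace]
              members = ["lanczos-core", "lanczos-cli", "benches"]

              # Fully optimise dependencies even in dev builds, so debug
              # iterations on your own crate stay fast and the kernels don't crawl.
              [profile.dev.package."*"]
              opt-level = 3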

    • uecker a day ago

      The advantage of having stuff in C and Fortran is that it can easily be used from other languages. I would also argue that your algorithm written in C would be far more readable.

    • adgjlsfhk1 2 days ago

      Have you looked into Julia at all? IMO it's a pretty great mix of performance but with a lot fewer restrictions than what Rust ends up with.

    • imtringued a day ago

      BLAS/LAPACK don't do any block level optimizations. Heck, they don't even let you define a fixed block sparsity pattern. Do the math yourself and write down all 16 sparsity patterns for a 2x2 block matrix and try to find the inverse or LU decomposition on paper.

      https://lukefleed.xyz/posts/cache-friendly-low-memory-lanczo...

      I mean just look at the saddle point problem you mentioned in that section. It's a block matrix with highly specific properties and there is no BLAS call for that. Things get even worse once you have parameterized matrices and want to operate on a series of changing and non-changing matrix multiplications. Some parts can be factorized offline.
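
      For reference (textbook block elimination, not tied to the post): a saddle-point matrix factors as

        $$\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
          = \begin{pmatrix} I & 0 \\ B A^{-1} & I \end{pmatrix}
            \begin{pmatrix} A & B^{T} \\ 0 & -B A^{-1} B^{T} \end{pmatrix},$$

      so the whole solve reduces to work with $A$ and the Schur complement $S = -B A^{-1} B^{T}$, and if $A$ doesn't change between solves, its factorisation can be reused. That is exactly the kind of structure a flat BLAS/LAPACK call can't see.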

chrisweekly 2 days ago

Fantastic post; I'm not much of a mathematician, but the writing and logical progression were so clearly articulated, I was able to follow the gist the whole way through. Kudos!

manbash 2 days ago

Nice work. I have gone through the fairly straightforward paper.

May I ask what you've used to confirm the cache hit/miss rate? Thanks!

  • lukefleed 2 days ago

    Thanks! I used perf to look at cache miss rates and memory bandwidth during runs. The measurements showed the pattern I expected, but I didn't do a rigorous profiling study (different cache sizes, controlled benchmarks across architectures, or proper statistical analysis).

    This was for a university exam, and I ran out of time to do it properly. The cache argument makes intuitive sense (three vectors cycling vs. scanning a growing n×k matrix), and the timing data supports it, but I'd want to instrument it more carefully in the future :)

gigatexal 2 days ago

the comments here might be a good precursor to defending your thesis -- good luck with that btw!