LGen: A Basic Linear Algebra Compiler


We introduce LGen, a compiler that produces performance-optimized basic linear algebra computations (BLACs) of fixed size. By "basic linear algebra" we mean computations on matrices, vectors, and scalars that are composed from matrix multiplication, matrix addition, transposition, and scalar multiplication.

Examples of BLACs include:

  1. Simple computations, such as y = Ax.
  2. Computations that closely match the BLAS interface, e.g., C = αAB^T + βC.
  3. Computations that need more than one BLAS call, e.g., γ = x^T(A+B)y + δ.
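As an illustration of the last case, the following scalar C routine gives the semantics of γ = x^T(A+B)y + δ. This is our own sketch for exposition (sizes, names, and row-major layout are assumptions), not code produced by LGen:

```c
#include <stddef.h>

/* Scalar reference for gamma = x^T (A + B) y + delta, with A and B of size
   m x n (row-major), x of length m, and y of length n.
   Hypothetical sketch for illustration, not LGen output. */
float blac_xtABy(size_t m, size_t n, const float *x,
                 const float *A, const float *B, const float *y, float delta)
{
    float gamma = delta;
    for (size_t i = 0; i < m; ++i) {
        float row = 0.0f;                      /* ((A + B) y)[i] */
        for (size_t j = 0; j < n; ++j)
            row += (A[i*n + j] + B[i*n + j]) * y[j];
        gamma += x[i] * row;                   /* accumulate x^T (...) */
    }
    return gamma;
}
```

Expressed with BLAS, this computation would require at least a matrix addition, a gemv, and a dot product; a compiler that sees the whole expression can instead emit one fused routine as above.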

The input to LGen is a BLAC including the (fixed) size of its operands. The output is an optimized C function, optionally using intrinsics for vectorization, that computes the BLAC.

Code Generation Overview

The picture below provides an overview of LGen's internal structure. LGen is designed after the version of Spiral that produces looped fixed-size code [3].

A valid input to LGen is a BLAC with input and output operands of fixed size. The BLAC is specified in a DSL that we call Linear algebra Language (LL). In the first step, a tiling of the computation is fixed. The resulting fully tiled computation is translated into a second DSL called Σ-LL, which is based on the Σ-SPL used in Spiral [3]. Σ-LL is still a valid mathematical representation of the original BLAC, but it makes loops and access functions explicit. At this level LGen performs loop-level optimizations such as loop merging and loop exchange. Next, the Σ-LL expression is translated into a C intermediate representation (C-IR) for code-level optimizations, such as loop unrolling and conversion to SSA form. Finally, the C-IR code is unparsed into C, and performance results are used in the autotuning feedback loop.
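To give a flavor of the loop-level rewriting done at the Σ-LL stage, consider y = (A+B)x: a naive translation first materializes the temporary T = A+B in one loop nest and then multiplies, while the merged form fuses both into a single nest, removing the temporary and one pass over the data. The contrast below is our own illustration (fixed sizes and names chosen for the sketch), not actual LGen output:

```c
#include <stddef.h>

enum { M = 4, N = 4 };  /* fixed operand sizes, chosen for this sketch */

/* Unmerged: temporary T = A + B, then y = T x (two loop nests, extra array).
   All matrices are row-major. */
void unmerged(const float *A, const float *B, const float *x, float *y)
{
    float T[M * N];
    for (size_t i = 0; i < M; ++i)
        for (size_t j = 0; j < N; ++j)
            T[i*N + j] = A[i*N + j] + B[i*N + j];
    for (size_t i = 0; i < M; ++i) {
        y[i] = 0.0f;
        for (size_t j = 0; j < N; ++j)
            y[i] += T[i*N + j] * x[j];
    }
}

/* Merged: the addition happens inside the multiplication loop,
   eliminating the temporary and the extra traversal. */
void merged(const float *A, const float *B, const float *x, float *y)
{
    for (size_t i = 0; i < M; ++i) {
        y[i] = 0.0f;
        for (size_t j = 0; j < N; ++j)
            y[i] += (A[i*N + j] + B[i*N + j]) * x[j];
    }
}
```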

Vectorization is done by decomposing the computation into a fixed set of ν-BLACs, pre-implemented once for every vector architecture.
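For example, with SSE and single precision (vector length ν = 4), one such pre-implemented building block is an addition of two 4 x 4 matrices. A possible intrinsics implementation is sketched below; this is our illustration of the idea, and the actual ν-BLAC code shipped with LGen may differ:

```c
#include <xmmintrin.h>  /* SSE intrinsics */

/* Sketch of a nu-BLAC for SSE, nu = 4: C = A + B on 4 x 4 single-precision
   matrices stored row-major. Each row is one 4-wide vector addition. */
void nu_add_4x4(const float *A, const float *B, float *C)
{
    for (int i = 0; i < 4; ++i) {
        __m128 a = _mm_loadu_ps(A + 4*i);
        __m128 b = _mm_loadu_ps(B + 4*i);
        _mm_storeu_ps(C + 4*i, _mm_add_ps(a, b));
    }
}
```

A larger BLAC is then vectorized by tiling it down to such ν x ν (and ν x 1) pieces and substituting the pre-implemented kernels.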

More details about LGen and its vectorization process can be found in [1].


The plots below show performance results for four classes of single-precision floating-point BLACs on an Intel Xeon X5680 with SSE 4.2 and 32 kB of L1 D-cache. In the first three cases we use matrices with narrow rectangular shapes (panels) or small squares (blocks). The panel sizes are either n x 4 or 4 x n, chosen to fit into the L1 D-cache. For the last case (micro-BLACs) the matrices are all n x n, with 2 ≤ n ≤ 10.

Case 1: Simple BLAC - y = Ax

Plots: A is n x 4; A is 4 x n.

Case 2: BLAC that closely matches BLAS - C = αAB + βC

Plots: A is n x 4, B is 4 x 4; A is 4 x 4, B is 4 x n; A is 4 x n, B is n x 4; A is n x 4, B is 4 x n.

Case 3: BLAC that needs more than one BLAS call - C = α(A0+A1)^T B + βC

Plots: A0 is 4 x n, B is 4 x 4; A0 is 4 x 4, B is 4 x n; A0 is n x 4, B is n x 4; A0 is 4 x n, B is 4 x n.

Case 4: Micro-BLACs - n x n, with 2 ≤ n ≤ 10

Plots: y = Ax; C = AB; δ = x^T A y.


  1. Nikolaos Kyrtatas, Daniele G. Spampinato and Markus Püschel 
    A Basic Linear Algebra Compiler for Embedded Processors 
    Proc. Design, Automation and Test in Europe (DATE), pp. 1054-1059, 2015
  2. Daniele G. Spampinato and Markus Püschel 
    A Basic Linear Algebra Compiler 
    Proc. International Symposium on Code Generation and Optimization (CGO), pp. 23-32, 2014
  3. Franz Franchetti, Frédéric de Mesmay, Daniel McFarlin and Markus Püschel 
    Operator Language: A Program Generation Framework for Fast Kernels 
    Proc. IFIP Working Conference on Domain Specific Languages (DSL WC), Lecture Notes in Computer Science, Springer, Vol. 5658, pp. 385-410, 2009
  4. Franz Franchetti, Yevgen Voronenko and Markus Püschel 
    Formal Loop Merging for Signal Transforms 
    Proc. Programming Languages Design and Implementation (PLDI), pp. 315-326, 2005

More information

Contact: Daniele Spampinato, [first].[last] AT inf.ethz.ch