Acceleration with Numba

We explore how the computation of cost functions can be dramatically accelerated with numba’s JIT compiler.

The run-time of iminuit is usually dominated by the execution time of the cost function. To get good performance, it is recommended to use array arithmetic and scipy and numpy functions in the body of the cost function. Python loops should be avoided, but if they are unavoidable, Numba can help. Numba can also parallelize numerical calculations to make full use of multi-core CPUs and even do computations on the GPU.
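
As a minimal sketch of what this means in practice (the function below is purely illustrative and is not used in the rest of this tutorial), an explicit Python loop becomes fast once it is decorated with numba's njit:

import numba as nb
import numpy as np


@nb.njit
def sum_of_squares(a):
    # an explicit Python loop; numba compiles it to machine code on the first call
    total = 0.0
    for i in range(len(a)):
        total += a[i] * a[i]
    return total


sum_of_squares(np.arange(10.0))  # the first call triggers JIT compilation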

Note: This tutorial shows how one can generate faster pdfs with Numba. Before you start to write your own pdf, please check whether one is already implemented in the numba_stats library. If you have a pdf that is not included there, please consider contributing it to numba_stats.
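
For example (a sketch, assuming numba_stats is installed), its distributions take explicit location and scale arguments and, unlike scipy.stats, can be called directly inside numba-jitted code:

import numba as nb
import numpy as np
from numba_stats import norm as ns_norm  # alias only used in this sketch


@nb.njit
def jitted_density(x, mu, sigma):
    # numba_stats pdfs can be called from jitted code, unlike scipy.stats
    return ns_norm.pdf(x, mu, sigma)


jitted_density(np.linspace(-3, 3, 11), 0.0, 1.0)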

[1]:
# !pip install matplotlib numpy numba scipy iminuit
from iminuit import Minuit
import numpy as np
import numba as nb
import math
from scipy.stats import expon, norm
from matplotlib import pyplot as plt
from argparse import Namespace

The standard fit in particle physics is the fit of a peak over some smooth background. We generate a Gaussian peak over exponential background, using scipy.

[2]:
np.random.seed(1)  # fix seed

# true parameters for signal and background
truth = Namespace(n_sig=2000, f_bkg=10, sig=(5.0, 0.5), bkg=(0.0, 4.0))
n_bkg = truth.n_sig * truth.f_bkg

# make a data set
x = np.empty(truth.n_sig + n_bkg)

# fill m variables
x[: truth.n_sig] = norm(*truth.sig).rvs(truth.n_sig)
x[truth.n_sig :] = expon(*truth.bkg).rvs(n_bkg)

# cut a range in x
xrange = np.array((1.0, 9.0))
ma = (xrange[0] < x) & (x < xrange[1])
x = x[ma]

plt.hist(
    (x[truth.n_sig :], x[: truth.n_sig]),
    bins=50,
    stacked=True,
    label=("background", "signal"),
)
plt.xlabel("x")
plt.legend();
[Figure: stacked histogram of the generated sample, showing the background and signal components]
[3]:
# ideal starting values for iminuit
start = np.array((truth.n_sig, n_bkg, truth.sig[0], truth.sig[1], truth.bkg[1]))


# iminuit instance factory, will be called a lot in the benchmarks below
def m_init(fcn):
    m = Minuit(fcn, start, name=("ns", "nb", "mu", "sigma", "lambd"))
    m.limits = ((0, None), (0, None), None, (0, None), (0, None))
    m.errordef = Minuit.LIKELIHOOD
    return m
[4]:
# extended likelihood (https://doi.org/10.1016/0168-9002(90)91334-8)
# this version uses numpy and scipy and array arithmetic
def nll(par):
    n_sig, n_bkg, mu, sigma, lambd = par
    s = norm(mu, sigma)
    b = expon(0, lambd)
    # normalisation factors are needed for pdfs, since x range is restricted
    sn = s.cdf(xrange)
    bn = b.cdf(xrange)
    sn = sn[1] - sn[0]
    bn = bn[1] - bn[0]
    return (n_sig + n_bkg) - np.sum(
        np.log(s.pdf(x) / sn * n_sig + b.pdf(x) / bn * n_bkg)
    )


nll(start)
[4]:
-103168.78482586428
[5]:
%%timeit -r 3 -n 1
m = m_init(nll)  # setup time is negligible
m.migrad();
157 ms ± 2.02 ms per loop (mean ± std. dev. of 3 runs, 1 loop each)

Let’s see whether we can beat that. The code above is already pretty fast, because numpy and scipy routines are fast, and we spend most of the time in those. But these implementations do not parallelize the execution and are not optimised for this particular CPU, unlike numba-jitted functions.

To use numba, in theory we just need to put the njit decorator on top of the function, but often that doesn’t work out of the box. numba understands many numpy functions, but not scipy. We must evaluate the code that uses scipy in ‘object mode’, which is numba-speak for calling into the Python interpreter.

[6]:
# first attempt to use numba
@nb.njit(parallel=True)
def nll(par):
    n_sig, n_bkg, mu, sigma, lambd = par
    with nb.objmode(spdf="float64[:]", bpdf="float64[:]", sn="float64", bn="float64"):
        s = norm(mu, sigma)
        b = expon(0, lambd)
        # normalisation factors are needed for pdfs, since x range is restricted
        sn = np.diff(s.cdf(xrange))[0]
        bn = np.diff(b.cdf(xrange))[0]
        spdf = s.pdf(x)
        bpdf = b.pdf(x)
    no = n_sig + n_bkg
    return no - np.sum(np.log(spdf / sn * n_sig + bpdf / bn * n_bkg))


nll(start)  # test and warm-up JIT
[6]:
-103168.78482586426
[7]:
%%timeit -r 3 -n 1 m = m_init(nll)
m.migrad()
1.02 s ± 70 ms per loop (mean ± std. dev. of 3 runs, 1 loop each)

It is even a bit slower. :( Let’s break the original function down by parts to see why.

[8]:
# let's time the body of the function
n_sig, n_bkg, mu, sigma, lambd = start
s = norm(mu, sigma)
b = expon(0, lambd)
# normalisation factors are needed for pdfs, since x range is restricted
sn = np.diff(s.cdf(xrange))[0]
bn = np.diff(b.cdf(xrange))[0]
spdf = s.pdf(x)
bpdf = b.pdf(x)

%timeit -r 3 -n 100 norm(*start[2:4]).pdf(x)
%timeit -r 3 -n 500 expon(0, start[4]).pdf(x)
%timeit -r 3 -n 1000 n_sig + n_bkg - np.sum(np.log(spdf / sn * n_sig + bpdf / bn * n_bkg))
543 µs ± 9.57 µs per loop (mean ± std. dev. of 3 runs, 100 loops each)
724 µs ± 139 µs per loop (mean ± std. dev. of 3 runs, 500 loops each)
129 µs ± 6.23 µs per loop (mean ± std. dev. of 3 runs, 1,000 loops each)

Most of the time is spent in norm and expon, which numba could not accelerate, and the total time is dominated by the slowest part.

This, unfortunately, means we have to do much more manual work to make the function faster, since we have to replace the scipy routines with Python code that numba can accelerate and run in parallel.

[9]:
# when parallel is enabled, also enable associative math
kwd = {"parallel": True, "fastmath": {"reassoc", "contract", "arcp"}}


@nb.njit(**kwd)
def sum_log(fs, spdf, fb, bpdf):
    return np.sum(np.log(fs * spdf + fb * bpdf))


@nb.njit(**kwd)
def norm_pdf(x, mu, sigma):
    invs = 1.0 / sigma
    z = (x - mu) * invs
    invnorm = 1 / np.sqrt(2 * np.pi) * invs
    return np.exp(-0.5 * z ** 2) * invnorm


@nb.njit(**kwd)
def nb_erf(x):
    y = np.empty_like(x)
    for i in nb.prange(len(x)):
        y[i] = math.erf(x[i])
    return y


@nb.njit(**kwd)
def norm_cdf(x, mu, sigma):
    invs = 1.0 / (sigma * np.sqrt(2))
    z = (x - mu) * invs
    return 0.5 * (1 + nb_erf(z))


@nb.njit(**kwd)
def expon_pdf(x, lambd):
    inv_lambd = 1.0 / lambd
    return inv_lambd * np.exp(-inv_lambd * x)


@nb.njit(**kwd)
def expon_cdf(x, lambd):
    inv_lambd = 1.0 / lambd
    return 1.0 - np.exp(-inv_lambd * x)


def nll(par):
    n_sig, n_bkg, mu, sigma, lambd = par
    # normalisation factors are needed for pdfs, since x range is restricted
    sn = norm_cdf(xrange, mu, sigma)
    bn = expon_cdf(xrange, lambd)
    sn = sn[1] - sn[0]
    bn = bn[1] - bn[0]
    spdf = norm_pdf(x, mu, sigma)
    bpdf = expon_pdf(x, lambd)
    no = n_sig + n_bkg
    return no - sum_log(n_sig / sn, spdf, n_bkg / bn, bpdf)


nll(start)  # test and warm-up JIT
[9]:
-103168.78482586426

Let’s see how well these versions do:

[10]:
%timeit -r 5 -n 100 norm_pdf(x, *start[2:4])
%timeit -r 5 -n 500 expon_pdf(x, start[4])
%timeit -r 5 -n 1000 sum_log(n_sig / sn, spdf, n_bkg / bn, bpdf)
The slowest run took 43.32 times longer than the fastest. This could mean that an intermediate result is being cached.
203 µs ± 363 µs per loop (mean ± std. dev. of 5 runs, 100 loops each)
The slowest run took 9.76 times longer than the fastest. This could mean that an intermediate result is being cached.
36.5 µs ± 44.9 µs per loop (mean ± std. dev. of 5 runs, 500 loops each)
The slowest run took 32.84 times longer than the fastest. This could mean that an intermediate result is being cached.
89.4 µs ± 151 µs per loop (mean ± std. dev. of 5 runs, 1,000 loops each)

Only a minor improvement for sum_log, but the pdf calculation was drastically accelerated. Since this was the bottleneck before, we also expect Migrad to finish faster now.

[11]:
%%timeit -r 3 -n 1
m = m_init(nll)  # setup time is negligible
m.migrad();
The slowest run took 37.64 times longer than the fastest. This could mean that an intermediate result is being cached.
245 ms ± 169 ms per loop (mean ± std. dev. of 3 runs, 1 loop each)

Success! We managed to get a big speed improvement over the initial code. This is impressive, but it cost us a lot of developer time. This is not always a good trade-off, especially if you consider that library routines are heavily tested, while you always need to test your own code in addition to writing it.

By putting these faster functions into a library, however, we would only have to pay the developer cost once. You can find those in the numba_stats library.
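
As a sketch of how the cost function above maps onto that library (assuming numba_stats is installed; its pdfs and cdfs take explicit location and scale arguments), one could write:

from numba_stats import norm as nbs_norm, expon as nbs_expon


def nll_numba_stats(par):
    n_sig, n_bkg, mu, sigma, lambd = par
    # normalisation factors for the restricted x range, as before
    sn = np.diff(nbs_norm.cdf(xrange, mu, sigma))[0]
    bn = np.diff(nbs_expon.cdf(xrange, 0.0, lambd))[0]
    spdf = nbs_norm.pdf(x, mu, sigma)
    bpdf = nbs_expon.pdf(x, 0.0, lambd)
    return (n_sig + n_bkg) - np.sum(np.log(spdf / sn * n_sig + bpdf / bn * n_bkg))

This should reproduce the value of nll(start) computed above, without us having to write or test the pdfs ourselves.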

Try to compile the functions again with parallel=False to see how much of the speed increase came from the parallelization and how much from the generally optimized code that numba generated for our specific CPU. On my machine, the gain was entirely due to numba.
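
One way to make that comparison (a sketch; it recompiles the original Python function of norm_pdf via the py_func attribute of the numba dispatcher, this time without the parallel option):

norm_pdf_serial = nb.njit(fastmath={"reassoc", "contract", "arcp"})(norm_pdf.py_func)
norm_pdf_serial(x, *start[2:4])  # warm-up JIT
%timeit norm_pdf_serial(x, *start[2:4])
%timeit norm_pdf(x, *start[2:4])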

In general, it is good advice to not automatically add parallel=True, because this comes with the overhead of breaking the data into chunks, distributing the chunks to the individual CPU cores, and finally merging everything back together. For large arrays this overhead is negligible, but for small arrays it can be a net loss.

So why is numba so fast even without parallelization? We can look at the assembly code generated.

[12]:
for signature, code in norm_pdf.inspect_asm().items():
    print(f"signature: {signature}\n{'-'*(len(str(signature)) + 11)}\n{code[:1000]}\n[...]")

signature: (array(float64, 1d, C), float64, float64)
----------------------------------------------------
        .text
        .file   "<string>"
        .section        .rodata.cst8,"aM",@progbits,8
        .p2align        3
.LCPI0_0:
        .quad   0x3ff0000000000000
.LCPI0_1:
        .quad   0x3fd9884533d43651
        .section        .rodata.cst16,"aM",@progbits,16
        .p2align        4
.LCPI0_2:
        .quad   8
        .quad   8
        .text
        .globl  _ZN8__main__8norm_pdfB3v22B148c8tJTC_2fWQAlbW1yBC0oR6GELEUMELYSPGrIQMVjAQniQcIXKQIMVwoOGKoQDDVQQR1NHAS2FQ9XgSs8w86AhbIsexNXqyfl5JUWJySXqOgrqiUXJBSC6CJi6i_2fOT1WuB9gFjEWI1AA_3d_3dE5ArrayIdLi1E1C7mutable7alignedEdd
        .p2align        4, 0x90
        .type   _ZN8__main__8norm_pdfB3v22B148c8tJTC_2fWQAlbW1yBC0oR6GELEUMELYSPGrIQMVjAQniQcIXKQIMVwoOGKoQDDVQQR1NHAS2FQ9XgSs8w86AhbIsexNXqyfl5JUWJySXqOgrqiUXJBSC6CJi6i_2fOT1WuB9gFjEWI1AA_3d_3dE5ArrayIdLi1E1C7mutable7alignedEdd,@function
_ZN8__main__8norm_pdfB3v22B148c8tJTC_2fWQAlbW1yBC0oR6GELEUMELYSPGrIQMVjAQniQcIXKQIMVwoOGKoQDDVQQR1NHAS2FQ9XgSs8w86AhbIsexNXqyfl5JUWJySXqOgrqiUXJBSC6CJi6i_2fOT1WuB9gFjEWI1AA_3d_3dE5ArrayIdLi1E1C7mutable7alignedEdd:
        .cfi_startproc
        pushq   %rbp
        .cfi_def_cfa_offset 16
        .cfi_offset %rbp, -16
        mov
[...]

This code section is very long, but the assembly grammar is very simple. Constants start with . and SOMETHING: is a jump label for the assembly equivalent of goto. Everything else is an instruction, with its name on the left and its arguments on the right.

The interesting instructions are those that end in pd and ps: these are SIMD instructions that operate on up to eight doubles at once. This is where the speed comes from.

[13]:
import re
from collections import Counter

for signature, code in norm_pdf.inspect_asm().items():
    print(f"signature: {signature}\n{'-'*(len(str(signature)) + 11)}")
    simd_instructions = re.findall(" *([a-z]+p[ds])\t*%", code)
    c = Counter(simd_instructions)
    print("SIMD instructions")
    for k, v in c.items():
        print(f"{k:10}: {v:5}")

signature: (array(float64, 1d, C), float64, float64)
----------------------------------------------------
SIMD instructions
vxorpd    :     2
vmovupd   :     8
vxorps    :     5
vmovups   :    10
vmovaps   :     9
vsubpd    :     2
vmulpd    :     3
vmovapd   :     4
vunpcklpd :     2
  • mov: copy values from memory to CPU registers and back

  • sub: subtract numbers

  • mul: multiply numbers

  • xor: binary xor, the compiler often inserts these to zero out memory

You can google all the other commands.

There is a lot of repetition, because the optimizer partially unrolled some loops to make them faster. Using unrolled loops only works if the remaining chunk of data is large enough. Since the compiler does not know the length of the incoming array, it also generates sections which handle shorter chunks and all the code to select which section to use. Finally, there is some code which does the translation from and to Python objects with corresponding error handling.

We don’t need to write SIMD instructions by hand; the optimizer does it for us, and in a very sophisticated way.