On occasion, scientific computations at double (or quadruple) precision are simply not sufficient. In lieu of the usual NumPy and SciPy, one can instead make use of mpmath or SymPy. For a large-scale calculation, one must appeal to parallelism and indeed distributed resources (e.g. Dask-distributed). We describe a package that delegates work to the appropriate library based on the calculation requirements at runtime.
A common idiom in the scientific community, when it comes to performing large-scale simulations, is to use Python as a kind of 'glue'. A mathematical setup is conceived to solve some problem of interest, which may then be rapidly prototyped by leveraging existing tools within packages such as NumPy or SciPy; missing 'low-level' algorithms may even be implemented as native Python code, whereupon attaching a single decorator leads to dramatically accelerated execution via the JIT tools offered by Numba. This cuts development time and code complexity, yet allows for execution speeds that are competitive with native C/Fortran.
On occasion, however, double (or quadruple) precision floating-point computations are not sufficient. Here one may make use of packages such as mpmath or SymPy. While it is sometimes possible to exploit caching in order to accelerate code in this new scenario, appealing to parallelism and indeed distributed resources (via Dask-distributed) is in general a superior path forward. It is thus our goal in this talk to describe a package that collects all of the above ideas, providing the user with simple, unified interfaces for library delegation based on calculation requirements at run-time. Time permitting, we delve into applications, showing the results of calculations involving (spinorial) tensorial fields evolved on the 2-sphere and constraint deformation in the context of numerical relativity.
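As a minimal illustration of working beyond double precision with mpmath (the specific quantity evaluated is our choice of example, not drawn from the talk): Ramanujan's constant exp(pi*sqrt(163)) is famously close to an integer, and the tiny discrepancy is invisible at double precision but trivially resolved at, say, 50 significant digits.

```python
from mpmath import mp, mpf, exp, pi, sqrt

# Set the working precision to 50 decimal digits; all subsequent
# mpmath operations are carried out at this precision.
mp.dps = 50

# exp(pi*sqrt(163)) differs from the nearest integer by roughly 7.5e-13,
# a gap that 64-bit floats (about 16 significant digits) cannot resolve
# at this magnitude (~2.6e17).
val = exp(pi * sqrt(mpf(163)))
print(val)
```

The same arithmetic written with NumPy scalars would silently round the result to an integer value, which is exactly the failure mode that motivates delegating such calculations to mpmath at run-time.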