Abstract
The numerical ill-conditioning associated with approximating an electron density by a convex sum of Gaussian- or Slater-type functions is overcome by using the (extended) Kullback–Leibler divergence to measure the deviation between the target and approximate densities. The optimized densities are non-negative and normalized, and they are accurate enough to be used in applications related to molecular similarity, the topology of the electron density, and numerical molecular integration. This robust, efficient, and general approach can be used to fit any non-negative normalized function (e.g., the kinetic energy density or the molecular electron density) to a convex sum of non-negative basis functions. We present a fixed-point iteration method for optimizing the Kullback–Leibler divergence and compare it to conventional gradient-based optimization methods. These algorithms are released through the free and open-source BFit package, which also includes an L2-norm-squared optimization routine applicable to any square-integrable scalar function.
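The fixed-point approach described above can be illustrated with a minimal one-dimensional sketch (not the BFit API; the grid, exponents, and update shown here are illustrative assumptions). A normalized target density is fit by a convex sum of normalized Gaussians, and the coefficients are updated by the multiplicative rule c_k ← c_k ∫ f g_k / m dr, which automatically preserves non-negativity and normalization while monotonically decreasing KL(f ∥ m):

```python
import numpy as np

# Hypothetical 1-D sketch of KL-divergence density fitting by fixed-point
# iteration (BFit itself works in 3-D and also optimizes the exponents).

r = np.linspace(-8.0, 8.0, 4001)
dr = r[1] - r[0]

def gaussian(r, alpha):
    """Normalized 1-D Gaussian with exponent alpha."""
    return np.sqrt(alpha / np.pi) * np.exp(-alpha * r**2)

# Target: an equal mixture of a sharp and a diffuse Gaussian (integrates to 1).
f = 0.5 * gaussian(r, 4.0) + 0.5 * gaussian(r, 0.25)

# Basis: normalized Gaussians with fixed (assumed) exponents; only the
# convex coefficients are optimized in this sketch.
alphas = np.array([0.1, 0.5, 1.0, 5.0])
g = np.stack([gaussian(r, a) for a in alphas])   # shape (K, n_points)

c = np.full(len(alphas), 1.0 / len(alphas))      # uniform convex start
for _ in range(200):
    m = c @ g                                    # current model density
    # Fixed-point update: preserves c_k >= 0 and sum(c) = 1 by construction,
    # since sum_k c_k_new = ∫ f (sum_k c_k g_k) / m dr = ∫ f dr = 1.
    c = c * np.sum(f * g / m, axis=1) * dr

m = c @ g
kl = np.sum(f * np.log(f / m)) * dr              # KL(f || m) on the grid
print("coefficients:", np.round(c, 4), "sum:", round(c.sum(), 6))
print("KL divergence:", kl)
```

Because each update multiplies the coefficients by non-negative integrals that average to one against the model, no explicit constraints are needed; this is the key advantage over naive least-squares fitting, which can produce negative or unnormalized densities.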