While running tests on fitsignal I noticed a slowdown in performance. A simple profiling of fitsignal revealed very slow performance for such a simple fit.
While there are other factors contributing to the slower performance, the time spent in bg_models.py is highly unexpected: these functions are only called to perform a simple mathematical operation and return a vector.
After closer inspection, I ran a separate benchmark to check whether calling the models in bg_models.py carries a significant overhead. It reveals a clear call overhead:
36717502 function calls (36716002 primitive calls) in 14.316 seconds
Ordered by: cumulative time
List reduced from 83 to 5 due to restriction <5>
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.003 0.003 14.325 14.325 <ipython-input-8-2bded830471e>:1(Bmodel)
500 0.012 0.000 14.278 0.029 bg_models.py:314(bg_exp)
500 0.007 0.000 14.264 0.029 bg_models.py:12(_parsargs)
500 0.001 0.000 14.244 0.028 inspect.py:1512(stack)
500 0.036 0.000 14.243 0.028 inspect.py:1484(getouterframes)
---------------------------------------------------------------------------------------
16002 function calls (14502 primitive calls) in 0.022 seconds
Ordered by: cumulative time
List reduced from 29 to 5 due to restriction <5>
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.002 0.002 0.022 0.022 <ipython-input-8-2bded830471e>:7(Bmath)
500 0.000 0.000 0.020 0.000 <__array_function__ internals>:2(linspace)
2000/500 0.003 0.000 0.020 0.000 {built-in method numpy.core._multiarray_umath.implement_array_function}
500 0.007 0.000 0.019 0.000 function_base.py:23(linspace)
500 0.000 0.000 0.004 0.000 <__array_function__ internals>:2(any)
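The profile traces the cost to inspect.stack() inside _parsargs, which builds a FrameInfo object for every frame on the call stack at each model call. A self-contained sketch (toy functions, not DeerLab's actual code) reproduces the effect:

```python
import inspect
import timeit

import numpy as np

def bg_with_stack(t, k):
    # Mimics a model whose argument parser calls inspect.stack()
    # on every invocation, as the profile above suggests.
    inspect.stack()  # expensive: resolves every frame on the call stack
    return np.exp(-k * np.abs(t))

def bg_plain(t, k):
    # Identical math without the introspection.
    return np.exp(-k * np.abs(t))

t = np.linspace(0, 3, 200)
slow = timeit.timeit(lambda: bg_with_stack(t, 0.05), number=500)
fast = timeit.timeit(lambda: bg_plain(t, 0.05), number=500)
print(f"with inspect.stack(): {slow:.3f} s   plain math: {fast:.3f} s")
```

The 500-call counts mirror the profiles above; the stack-inspecting variant is orders of magnitude slower while returning the same vector.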
I can reproduce these results for both dd_models.py and ex_models.py.
While this overhead is negligible for most operations within DeerLab, fitsignal relies heavily on calling these models each time a residual vector is computed during optimization, so the overhead propagates directly into the performance of the fit, as seen in the profile above.
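To illustrate why this multiplies (a toy one-parameter fit, not fitsignal itself): a least-squares optimizer evaluates the residual, and hence the model, on every iteration and finite-difference step, so at roughly 29 ms per call (14.3 s / 500 calls in the profile above) even a modest fit pays seconds of pure call overhead.

```python
import numpy as np
from scipy.optimize import least_squares

calls = {"n": 0}

def bg(t, k):
    # Stand-in background model; counts how often it is invoked.
    calls["n"] += 1
    return np.exp(-k * np.abs(t))

t = np.linspace(0, 3, 200)
rng = np.random.default_rng(0)
data = np.exp(-0.3 * np.abs(t)) + 0.01 * rng.normal(size=t.size)

# Each residual evaluation calls the model once, so any fixed per-call
# overhead is paid on every iteration of the optimizer.
least_squares(lambda p: bg(t, p[0]) - data, x0=[0.1])
print(calls["n"], "model calls for a one-parameter fit")
```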
My current approach has been to implement the model functions in such a way that they return a function handle, model, containing the basic math as part of the info dictionary, for example:
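A sketch of that approach (the "ModelFcn" key and the signatures are placeholders, not DeerLab's actual API): calling the model without arguments returns the info dictionary, which now also carries a handle to the bare math.

```python
import numpy as np

def bg_exp(t=None, param=None):
    def model(t, param):
        # Bare math only, no argument introspection: cheap to call in a loop.
        return np.exp(-param[0] * np.abs(np.atleast_1d(t)))

    if t is None and param is None:
        # No-argument call: return model metadata plus the function handle.
        return {
            "Parameters": ["decay rate"],
            "Start": np.array([0.35]),
            "ModelFcn": model,  # placeholder key name
        }

    # The ordinary call path behaves as before.
    return model(t, param)

# fitsignal would fetch the handle once up front...
fcn = bg_exp()["ModelFcn"]
# ...and call it directly inside every residual evaluation.
B = fcn(np.linspace(0, 3, 100), [0.35])
```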
This way, the function containing the basic math is requested once from the model in fitsignal and can then be called directly, without having to go through bg_models.py, ex_models.py, or dd_models.py and the corresponding model on every evaluation, avoiding the call overhead entirely.