The root finders in `scipy.optimize` give different answers across operating systems and Python versions, as we have seen in the unit tests in the `./tests/` directory. We have had tests that pass in GitHub Actions but fail locally, and vice versa, in each case because the compared objects fail the `np.allclose()` criteria.
We need to get the solvers in OG-Core to give the same answers to an adequate degree of precision across platforms. Here are some helpful references and proposed potential solutions.
Consistent numerical precision across platforms. This article discusses the relative numerical precision of Intel and Apple M1 chips on some famously difficult problems; the Intel and Apple M1 chips give the same wrong answers. The NumPy documentation for "Array types and conversions between types" lists the numerical types available in NumPy and says the following. We could try setting all inputs to our numerical routines to `np.float64` (double precision). Higher levels of precision exist, but I don't think we need them.
Since many of these [numerical data types] have platform-dependent definitions, a set of fixed-size aliases are provided (see Sized aliases).
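As a minimal sketch of the `np.float64` idea, a hypothetical wrapper (the name `solve_root` and its signature are illustrative, not anything in OG-Core) could cast the initial guess and any array arguments to double precision before calling the SciPy root finder, so every platform starts from identical bit patterns:

```python
import numpy as np
import scipy.optimize as opt


def solve_root(func, x0, args=()):
    # Hypothetical wrapper: force the initial guess (and any array args)
    # to np.float64 so platform-dependent default dtypes cannot creep in.
    x0 = np.asarray(x0, dtype=np.float64)
    args = tuple(
        np.asarray(a, dtype=np.float64) if isinstance(a, np.ndarray) else a
        for a in args
    )
    return opt.root(func, x0, args=args)


# Illustrative use: solve x**2 - 2 = 0
result = solve_root(lambda x: x ** 2 - 2.0, x0=[1.0])
print(result.x)
```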
Correctly scaled problem and different tolerance levels across solvers. This Stack Overflow question and answer discuss the importance of having a well-posed optimization problem that is correctly scaled, as well as the merits of setting the problem's tolerance to be sufficiently precise.
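To illustrate the tolerance point: setting the solver's stopping tolerance well below the comparison tolerance (`np.allclose` defaults to `rtol=1e-05`, `atol=1e-08`) leaves headroom for platform-level floating-point differences. A sketch on a toy scalar equation (not an OG-Core function):

```python
import numpy as np
import scipy.optimize as opt

# Toy equation cos(x) = x, whose root is near 0.739085
f = lambda x: np.cos(x) - x

# brentq accepts an explicit absolute stopping criterion via xtol
root_tight = opt.brentq(f, 0.0, 1.0, xtol=1e-12)

# scipy.optimize.root takes a single `tol`, forwarded to the chosen method
sol = opt.root(f, x0=0.5, tol=1e-12)
print(root_tight, sol.x)
```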
Try other solvers that might be more consistent. I can't find any non-SciPy root finders. I thought that cvxopt might be one, but it is really a minimizer.
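That said, a minimizer can in principle stand in for a root finder by driving the sum of squared residuals to zero. A minimal sketch of that idea, using `scipy.optimize.minimize` for illustration (the same trick would apply to cvxopt or any other minimizer; the system of equations here is made up):

```python
import numpy as np
import scipy.optimize as opt


def residuals(x):
    # Illustrative 2-equation system: x0 + 2*x1 = 3 and x0**2 = x1,
    # whose solution is x = (1, 1).
    return np.array([x[0] + 2.0 * x[1] - 3.0, x[0] ** 2 - x[1]])


# Minimizing ||residuals(x)||^2 turns root finding into minimization;
# at a true root the objective is exactly zero.
obj = lambda x: np.sum(residuals(x) ** 2)
res = opt.minimize(
    obj,
    x0=np.array([0.5, 0.5]),
    method="Nelder-Mead",
    options={"xatol": 1e-10, "fatol": 1e-10},
)
print(res.x, obj(res.x))
```

One caveat of this reformulation is that the minimizer can stop at a local minimum of the squared residual that is not a root, so the objective value at the solution should always be checked against zero.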
After thinking through this in light of some local test failures in `test_txfunc.py` in PR #836, one idea would be the following.
I was getting test failures for some of the different functional forms when running `test_txfunc.py::test_txfunc_est`. One could make this test robust across operating systems by providing a list of NumPy arrays, rather than a single NumPy array of parameter values, for each parameterization. The test would then work something like this:
```python
for i, v in enumerate(expected_tuple):
    if i != 2:  # the last element is the number of observations; it will not vary across platforms
        # v here is a list of arrays or scalars providing different values for different platforms
        test_pass = 0
        for j, v2 in enumerate(v):
            test_pass += np.allclose(test_tuple[i], v2)
        assert test_pass
    else:
        assert np.allclose(test_tuple[i], v)
```
Perhaps there's a more Pythonic way to do this, but hopefully the above conveys the gist of what I mean.
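One more Pythonic variant of the loop above would use `any()` over the per-platform candidates. A self-contained sketch (the fixture values here are made up; `expected_tuple` and `test_tuple` stand in for the actual objects in `test_txfunc.py`):

```python
import numpy as np

# Hypothetical fixtures: each entry of expected_tuple is either a list of
# platform-specific candidate arrays, or (for the observation count) a scalar.
test_tuple = (np.array([1.0, 2.0]), np.array([3.0]), 10)
expected_tuple = (
    [np.array([1.0, 2.0]), np.array([1.0, 2.0000001])],  # one array per platform
    [np.array([3.0])],
    10,  # number of observations, identical across platforms
)

for i, v in enumerate(expected_tuple):
    if i != 2:
        # Pass if the computed value matches any platform's expected array.
        assert any(np.allclose(test_tuple[i], v2) for v2 in v)
    else:
        assert np.allclose(test_tuple[i], v)
print("all comparisons passed")
```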
This is not as easy to do for the two other functions where I'm seeing issues (`test_txfunc.py::test_tax_func_loop` and `test_txfunc.py::test_tax_func_estimate`), because they read in pickle files with large tuples of results.
cc: @jdebacker