Enhance two integration tests #511
Conversation
Enhance test_append and test_array_creation:
1. add negative tests
2. add more test cases
3. refactor test code
for more information, see https://pre-commit.ci
tests/integration/test_append.py
Outdated
num.append(a, c, axis=1)

with pytest.raises(IndexError):
    num.append(a, a, axis=5)
Suggest splitting these up. Best practice is to have descriptively named tests that each check one thing. A class can be useful for grouping related test cases together:
class TestAppendErrors:
    def test_bad_dimension(self):
        # arrange
        msg = "All arguments to concatenate must have the same number of dimensions"
        with pytest.raises(ValueError, match=msg):
            num.append(a, b, axis=1)

    def test_bad_index(self):
        # arrange
        with pytest.raises(IndexError):
            num.append(a, a, axis=5)
The test output will look like TestAppendErrors:test_bad_dimension etc., and the class name can be conveniently utilized in test selection as well.
I'm also concerned about the AssertionError case. Assertions should be used to enforce internal program invariants and, generally speaking, should not be applied to user-supplied values. The reason: we can only make assertions about things we control, but we have no control over what users may pass. So I would not expect an AssertionError to be raised for any value passed to a public API. This seems like a situation where the function should be raising a ValueError instead. That's what numpy does:
In [6]: np.append(a, c, axis=1)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 np.append(a, c, axis=1)
File <__array_function__ internals>:180, in append(*args, **kwargs)
File ~/anaconda3/lib/python3.8/site-packages/numpy/lib/function_base.py:5392, in append(arr, values, axis)
5390 values = ravel(values)
5391 axis = arr.ndim-1
-> 5392 return concatenate((arr, values), axis=axis)
File <__array_function__ internals>:180, in concatenate(*args, **kwargs)
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 0, the array at index 0 has size 1 and the array at index 1 has size 10
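The principle above can be sketched in isolation. `append_checked` is a hypothetical helper written for illustration only (it is not part of numpy or cunumeric); it shows user input being validated explicitly with a ValueError rather than an assert:

```python
import numpy as np

def append_checked(arr, values, axis=None):
    # Hypothetical helper: validate user-supplied input explicitly
    # instead of relying on an assert statement.
    arr = np.asarray(arr)
    values = np.asarray(values)
    if axis is not None and arr.ndim != values.ndim:
        raise ValueError(
            "all arguments must have the same number of dimensions"
        )
    return np.append(arr, values, axis=axis)
```

Because the check raises ValueError, a caller (or a test using `pytest.raises(ValueError)`) sees the same exception type with or without `python -O`, whereas asserts are stripped under `-O`.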
@magnatelee can you comment?
@magnatelee It raises ValueError only in the eager execution test; in the other tests it raises AssertionError.
num.full((2, 3), [1])
with pytest.raises(AssertionError):
    num.full((2, 3), [10, 20, 30])
Suggest splitting these individual cases into individual tests, similar to the suggestion above. Then the shape could also be passed using pytest.mark.parametrize, which can be applied to the class:
@pytest.mark.parametrize('shape', shapes)
class TestCreationErrors:
    def test_empty(self, shape):
        with pytest.raises(ValueError):
            num.empty(shape)

    def test_zeros(self, shape):
        with pytest.raises(ValueError):
            num.zeros(shape)

    .....

# additional special case for full
def test_creation_full_assertion():
    num.full((2, 3), [1])
    with pytest.raises(AssertionError):
        num.full((2, 3), [10, 20, 30])
Though, my same concern above about AssertionError also applies here.
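A self-contained version of this pattern, using numpy in place of cunumeric so it runs anywhere (the shapes list here is illustrative, not the PR's actual list):

```python
import numpy as np
import pytest

SHAPES_NEGATIVE = [-1, (-1, 2, 3)]  # illustrative negative shapes

@pytest.mark.parametrize("shape", SHAPES_NEGATIVE)
class TestCreationErrors:
    # Every test method in the class runs once per shape; test ids
    # come out as TestCreationErrors::test_empty[-1] and so on.
    def test_empty(self, shape):
        with pytest.raises(ValueError):
            np.empty(shape)

    def test_zeros(self, shape):
        with pytest.raises(ValueError):
            np.zeros(shape)
```

Applying the decorator at class level avoids repeating it on every method, and new creation functions can be covered by adding a three-line method.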
It raises ValueError only in the eager execution test; in the other tests it raises AssertionError.
1. Create test class for negative testing
2. Refactor out test functions
3. Use parametrize
for more information, see https://pre-commit.ci
1. update run_test name to check_array_method
2. use parametrize for step zero cases of arange
class TestArrangeErrors:
    def test_negative_sizes(self):
        with pytest.raises(ValueError):
            # np.arange(-10) returns [] successfully
@magnatelee Is this difference in results between numpy and cunumeric here expected or a bug?
Note that in cases where there are currently expected failures, we can still have the test run (and avoid leaving commented-out code in the repo) by using @pytest.mark.xfail
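A minimal sketch of the xfail pattern being suggested. The function here is a made-up stand-in with a deliberate bug (it is not cunumeric code), so the expected failure is genuine:

```python
import pytest

def buggy_arange_length(stop):
    # Stand-in for the behavior under discussion: a correct version
    # would return 0 for a negative stop (np.arange(-10) is empty).
    return abs(stop)  # deliberate bug for illustration

@pytest.mark.xfail(reason="known discrepancy; tracked for a follow-up fix")
def test_negative_stop_is_empty():
    # The test stays in the suite and is reported as XFAIL rather
    # than a failure, instead of living on as commented-out code.
    assert buggy_arange_length(-10) == 0
```

When the underlying bug is fixed, the same test starts passing and pytest reports it as XPASS, which is a visible signal to remove the marker.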
I like this approach. If an xfail test switches to succeeding (when we fix the corresponding bug), will that show up as a test suite failure?
@manopapad yes it will which is a nice way to know in case you've accidentally fixed a bug :)
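That behavior can also be made a hard guarantee with `strict=True`; this is a sketch of the pytest mechanism, not this project's actual configuration:

```python
import pytest

@pytest.mark.xfail(strict=True, reason="expected to fail until the bug is fixed")
def test_strict_xfail():
    # With strict=True, pytest reports FAILED (not merely XPASS)
    # if this assertion unexpectedly succeeds, so an accidental
    # fix cannot slip by unnoticed.
    assert 1 + 1 == 3
```

Setting `xfail_strict = true` in the pytest ini options makes strict behavior the default for every xfail marker in the suite.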
@bryevdv I like this approach, too. For AssertionError raised by cuNumeric, do we need to expect ValueError as NumPy in our test code and make it xfail?
do we need to expect ValueError as NumPy in our test code and make it xfail?
That is certainly an option although if this is a very simple fix I'd say just fix it (to raise ValueError) in this PR if possible.
I think xfail is definitely appropriate for the numerical discrepancies, since those probably should be fixed in a dedicated PR.
AssertionError is fixed. It raises ValueError now. Other failures are added with xfail.
# output: num: array([ 1, -1, -3, -5, -7]),
#         np:  array([ 1. , -1.5, -4. , -6.5, -9. ])
# (1, -10, -2.5),
@magnatelee Is this difference in results between numpy and cunumeric here expected or a bug?
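For reference, the numpy side of the quoted discrepancy can be reproduced directly; this only demonstrates numpy's behavior, while the cunumeric output is taken from the comment above:

```python
import numpy as np

# numpy keeps the float step even with integer start/stop, producing
# the float sequence quoted in the comment: 1.0, -1.5, -4.0, -6.5, -9.0
result = np.arange(1, -10, -2.5)
```

The integer-valued output quoted for num suggests the step is being truncated somewhere, but that is an inference from the outputs shown, not a confirmed root cause.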
@@ -76,24 +85,38 @@ def test_full(value):
SHAPES_NEGATIVE = [
    -1,
    (-1, 2, 3),
    # num.array([2, -3, 4]),
    # it raises RuntimeError("Unable to find attachment to remove")
    # when the num.array is removed at the end as a global variable
@magnatelee Is this expected or an issue here? Thanks
AFAICT all the differences that @robinw0928 has pointed out above are real bugs, including the cases where cunumeric asserts-out instead of throwing a ValueError.
1. add pytest.mark.xfail for cases with expected failure
2. Small fix: replace assert with raising ValueError in deferred.py
for more information, see https://pre-commit.ci
LGTM!
cunumeric/deferred.py
Outdated
- assert result.shape[dim] == 1
+ if result.shape[dim] != 1:
+     raise ValueError(
+         f"Shape did not match along dimension {dim}"
Leave a space at the end of the string here, otherwise when they get joined it will look like this: "... dimension 3and the ..."
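A minimal illustration of the joining issue; the message pieces here are paraphrased, not the exact strings in deferred.py:

```python
dim = 3
# Python concatenates adjacent string literals at compile time;
# without a trailing space, "3" and "and" run together.
bad = f"Shape did not match along dimension {dim}" "and the other piece"
good = f"Shape did not match along dimension {dim} " "and the other piece"
```

The fix is simply to end the first fragment with a space (or join fragments with an explicit `" ".join(...)`).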
Fixed. Thanks. @manopapad After this review, can I merge the change?
* Enhance two integration tests
  Enhance test_append and test_array_creation
  1. add negative tests
  2. add more test cases
  3. refactor test code
* [pre-commit.ci] auto fixes from pre-commit.com hooks
  for more information, see https://pre-commit.ci
* Address comments
  1. Create test class for negative testing
  2. Refactor out test functions
  3. Use parameterize
* [pre-commit.ci] auto fixes from pre-commit.com hooks
  for more information, see https://pre-commit.ci
* Address comments - part2
  1. update run_test name to check_array_method
  2. use parameterize for step zero cases of arange
* Address comments - Part 3
  1. add pytest.mark.xfail for cases with expected failure
  2. Small Fix: replace Assert with raising ValueError in deferred.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
  for more information, see https://pre-commit.ci
* Address comments - fix a typo

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Enhance test_append and test_array_creation