Mitigate S3 test interference + Unlimited Dimensions in NCZarr #2755

Merged: 32 commits into Unidata:main, Oct 2, 2023

Conversation

DennisHeimbigner
Collaborator

This PR started as an attempt to add unlimited dimensions to NCZarr. It did that, but this exposed significant problems with test interference. So this PR is mostly about fixing -- well mitigating anyway -- test interference.

The problem of test interference is now documented in docs/internal.md. The solutions implemented here are also described in that document. The solution is somewhat fragile, but multiple cleanup mechanisms are provided. Note that this feature requires that the AWS command-line utility be installed.
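The details of the mitigation live in docs/internal.md and the netcdf-c test scripts, but the general idea of isolating concurrent S3 test runs can be sketched as follows. This is a hypothetical illustration only: the function names, bucket name, and prefix layout are invented for the example, not taken from the actual code.

```python
import os
import time
import uuid

# Hypothetical values standing in for the parameterized S3TESTBUCKET and
# S3TESTSUBTREE variables mentioned in this PR (not the real defaults).
S3TESTBUCKET = "example-bucket"
S3TESTSUBTREE = "netcdf-c"

def make_run_prefix():
    """Build a key prefix unique to this test run, so concurrent runs
    cannot clobber each other's S3 objects (PID + timestamp + UUID)."""
    return f"{S3TESTSUBTREE}/testdir_{os.getpid()}_{int(time.time())}_{uuid.uuid4().hex[:8]}"

def cleanup_command(prefix):
    """The AWS CLI invocation a cleanup pass would issue to delete every
    object this run created under its unique prefix."""
    return f"aws s3 rm --recursive s3://{S3TESTBUCKET}/{prefix}"

prefix = make_run_prefix()
print(cleanup_command(prefix))
```

The dependence on the `aws` CLI for cleanup is why the PR notes that the AWS command-line utility must be installed.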

Unlimited Dimensions.

The existing NCZarr extensions to Zarr are modified to support unlimited dimensions. NCZarr extends the Zarr metadata for the ".zgroup" object to include netcdf-4 model extensions. This information is stored in ".zgroup" as a dictionary named "_nczarr_group". Inside "_nczarr_group", there is a key named "dims" that stores information about netcdf-4 named dimensions. The value of "dims" is a dictionary whose keys are the dimension names. The value associated with each dimension name has one of two forms. Form 1 is a special case of form 2, and is kept for backward compatibility. Whenever a new file is written, it uses form 1 if possible, otherwise form 2.

  • Form 1: An integer representing the size of the dimension, which is used for simple named dimensions.
  • Form 2: A dictionary with the following keys and values:
    • "size" with an integer value representing the (current) size of the dimension.
    • "unlimited" with a value of either "1" or "0" to indicate if this dimension is an unlimited dimension.
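For concreteness, a ".zgroup" combining both forms might look like the following. This is a hand-written illustration based on the description above, not copied from an actual file: the dimension names and sizes are invented, and the exact encoding of the "unlimited" value should be checked against what the library actually writes.

```json
{
    "zarr_format": 2,
    "_nczarr_group": {
        "dims": {
            "lat": 91,
            "time": {"size": 0, "unlimited": 1}
        }
    }
}
```

Here "lat" uses form 1 (a simple named dimension of fixed size) and "time" uses form 2 (an unlimited dimension whose current size is zero).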

For unlimited dimensions, the size is initially zero; as variables extend the length of that dimension, the size value for the dimension increases. That dimension size is shared by all arrays referencing that dimension, so if one array extends an unlimited dimension, it is implicitly extended for all other arrays that reference that dimension. These are the standard semantics for unlimited dimensions.
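These shared-size semantics can be modeled in a few lines. The following is a toy Python sketch of the rule, not NCZarr code; all names are invented for the illustration.

```python
class Dim:
    """Toy model of a shared (possibly unlimited) dimension."""
    def __init__(self, name, size=0, unlimited=False):
        self.name, self.size, self.unlimited = name, size, unlimited

class Array:
    """Toy array referencing shared Dim objects along its axes."""
    def __init__(self, name, dims):
        self.name, self.dims = name, dims

    def write(self, index):
        # Writing at `index` along an unlimited first dimension extends it.
        # Because the Dim object is shared, every other array referencing
        # it sees the new size immediately.
        d = self.dims[0]
        if index >= d.size:
            if not d.unlimited:
                raise IndexError(f"{d.name} is fixed at size {d.size}")
            d.size = index + 1

time = Dim("time", size=0, unlimited=True)
a = Array("a", [time])
b = Array("b", [time])
a.write(9)          # extends "time" to size 10
print(time.size)    # b observes the extended size as well
```

Writing past the end of a fixed-size dimension raises an error instead, matching the usual netCDF behavior.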

Adding unlimited dimensions required a number of other changes to the NCZarr code-base. These included the following.

  • Partially refactored the slice handling code in zwalk.c to clean it up.
  • Added a number of tests for unlimited dimensions derived from the same test in nc_test4.
  • Added several NCZarr-specific unlimited tests; more are needed.
  • Added a test of endianness.

Misc. Other Changes

  • Modified libdispatch/ncs3sdk_aws.cpp to optionally support the AWS Transfer Utility mechanism. This is controlled by the `#define TRANSFER` macro in that file; it is disabled by default.
  • Parameterized both the standard Unidata S3 bucket (S3TESTBUCKET) and the netcdf-c test data prefix (S3TESTSUBTREE).
  • Fixed an obscure memory leak in ncdump.
  • Removed some obsolete unit-testing code and test cases.
  • Uncovered a bug in the netcdf-c handling of big-endian floats and doubles; not yet fixed. See tst_h5_endians.c.
  • Renamed some nczarr_test test cases to avoid name conflicts with nc_test4.
  • Modified the semantics of zmap#ncsmap_write to allow only total rewrites of objects.
  • Modified the semantics of zodom to properly handle stride > 1.
  • Added a truncate operation to the libnczarr zmap code.
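The zodom change concerns the odometer that walks multi-dimensional slices: with stride > 1, each index must advance by its own stride rather than by 1. A minimal sketch of that iteration pattern follows; it is illustrative only, not the libnczarr implementation, and the slice representation is invented for the example.

```python
def odometer(slices):
    """Yield index tuples for a list of (start, stop, stride) slices,
    odometer-style: the last dimension varies fastest, and each index
    advances by its own stride (the stride > 1 case fixed in zodom)."""
    index = [start for start, _, _ in slices]
    while True:
        yield tuple(index)
        # Increment like an odometer, rightmost digit first.
        for d in reversed(range(len(slices))):
            start, stop, stride = slices[d]
            index[d] += stride
            if index[d] < stop:
                break          # no carry needed
            index[d] = start   # wrap this digit and carry left
        else:
            return             # leftmost digit wrapped: iteration done

# e.g. rows 0 and 2, columns 1, 4, and 7
cells = list(odometer([(0, 3, 2), (1, 8, 3)]))
print(cells)  # six index tuples covering the 2x3 strided grid
```

Each yielded tuple is a multi-dimensional index into the chunked array, so correct stride handling here determines which cells a strided read or write actually touches.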

Note: This PR replaces PR #2736

WardF and others added 24 commits August 23, 2023 15:29
…mplicated by the fact that the 'official' method uses cmake.
…iscovered at configure time is functional. Still need to wire it into autotools.

WardF commented Sep 28, 2023

@DennisHeimbigner Does this have any impact on the work we're doing in #2741, or does it address the hang we were seeing? I'm worried about merging this and then having a lot of conflicts between the two, particularly since #2741 contains some cmake and autoconf configuration file fixes.


WardF commented Sep 28, 2023

The test interference fix is great but I think we're at a point where we should hold off on new features until we've fixed the errors we're already observing/that are blocking any new releases.

@DennisHeimbigner
Collaborator Author

I think the interference fixes are necessary since we are both going to be running S3 testing. Otherwise, we will have trouble figuring out whether S3 failures come from interference or from true problems.

As for #2741, I think this PR is ultimately compatible with that PR. There may be minor conflicts, but the CMakeLists.txt and configure.ac changes you made can probably safely override any conflicts with this PR.

Is #2741 in a state that I can attempt to merge it into a copy of this PR to see what conflicts arise?


WardF commented Sep 28, 2023

Hi Dennis, that sounds great; I was going to try that myself tomorrow XD. #2741 should be at a point where it can be merged, although it does not address the overarching 'why is this hanging on some platforms' issue. From a purely technical standpoint, it could be merged in with this PR, and then a new PR can be minted for the work on the awssdk-cpp related hangs.

@DennisHeimbigner
Collaborator Author

Ok, if you want to try the merge then go ahead and let me review it before final merge.

@DennisHeimbigner
Collaborator Author

Ok, the significant libnetcdf.settings differences with mine are below. None look particularly important.

25c25
< Extra libraries:	-lhdf5-shared -lhdf5_hl-shared -lzlib -lcurl_imp
---
> Extra libraries:	-lhdf5 -lhdf5_hl -lzlib -lblosc -lcurl_imp
55,57c55,57
< Logging:     		no
< SZIP Write Support:     no
< Standard Filters:       deflate bz2
---
> Logging:     		yes
> SZIP Write Support:     yes
> Standard Filters:       deflate szip blosc bz2


WardF commented Sep 29, 2023

Dennis, when running with -DENABLE_S3=TRUE -DENABLE_S3_INTERNAL=TRUE -DWITH_S3_TESTING=TRUE I'm seeing the following repeatable error:

1/1 Test #234: nczarr_test_run_interop ..........***Failed    0.12 sec
findplugin.sh loaded
final HDF5_PLUGIN_DIR=/Users/wfisher/Desktop/dennis-merge/netcdf-c/build/plugins
	o Running File Testcase:	power_901_constants	zarr	
gunzip: can't stat: /Users/wfisher/Desktop/dennis-merge/netcdf-c/nczarr_test/ref_zarr_test_data_2d.cdl.gz (/Users/wfisher/Desktop/dennis-merge/netcdf-c/nczarr_test/ref_zarr_test_data_2d.cdl.gz.gz): No such file or directory

I see the file nczarr_test/ref_zarr_test_data.cdl.gz, but no nczarr_test/ref_zarr_test_data_2d.cdl.gz. Is there a missing file? Or do I need to adjust the filename in the test?

@DennisHeimbigner
Collaborator Author

What I am curious about is where that .gz.gz extension comes from.

@DennisHeimbigner
Collaborator Author

In any case, there is a missing file. I have attached it.
ref_zarr_test_data_2d.cdl.gz

@DennisHeimbigner (Collaborator, Author) left a comment

This all looks good to me.


WardF commented Sep 29, 2023

Running tests, and it appears to be working on my end as well, on macOS and Linux. So we are where we were before, but with the fixes you've added. I'll test and figure out what's going on with the non-S3/Zarr-related failures on Windows; once I have that nailed down, hopefully we can confirm that running with S3_INTERNAL will work on Windows.


WardF commented Sep 29, 2023

And incidentally, well done with the changes you made to avoid race conditions in testing; that's a huge leap forward in terms of debugging and parallelism!

@WardF WardF merged commit 140cb83 into Unidata:main Oct 2, 2023
99 checks passed
@WardF WardF mentioned this pull request Oct 2, 2023
3 tasks
@DennisHeimbigner DennisHeimbigner deleted the s3interfere.dmh branch October 10, 2023 19:20