
Bump CCCL version to include cuda::std::span fix #631

Merged: 2 commits merged into rapidsai:branch-24.08 on Jun 24, 2024

Conversation

@sleeepyjack (Contributor) commented Jun 12, 2024

Description

This PR updates the CCCL version to include a fix for cuda::std::span, which is required for cuCollections to work properly with CCCL 2.5.0.

Most of the changes between the last CCCL version bump (#607) and this one were related to doc updates and unit test fixes, so I don't expect much functional impact for RAPIDS.

After this PR, we will likely have to bump the cuco version again to pick up the new changes.
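
For context only (the PR does not describe the failing case in detail): below is a minimal, hypothetical sketch of the general kind of cuda::std::span usage over device memory that cuCollections relies on. The kernel, names, and sizes are illustrative assumptions, not code from cuCollections or from the CCCL fix itself.

```cpp
// Hypothetical illustration only (not taken from cuCollections or the CCCL fix):
// constructing a cuda::std::span over device memory and using it in a kernel,
// the general kind of usage the span fix unblocks. fill_kernel and N are made up.
#include <cuda/std/span>
#include <thrust/device_vector.h>

#include <cstdio>

__global__ void fill_kernel(cuda::std::span<int> out)
{
  auto const idx = blockIdx.x * blockDim.x + threadIdx.x;
  if (idx < out.size()) { out[idx] = static_cast<int>(idx); }
}

int main()
{
  constexpr std::size_t N = 1024;
  thrust::device_vector<int> data(N);

  // Build a span from a raw device pointer and an element count.
  cuda::std::span<int> view{thrust::raw_pointer_cast(data.data()), N};

  fill_kernel<<<(N + 255) / 256, 256>>>(view);
  cudaDeviceSynchronize();

  std::printf("data[42] = %d\n", static_cast<int>(data[42]));
  return 0;
}
```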

CCCL PR:

CUCO PR:

RAPIDS PRs:

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.
  • The cmake-format.json is up to date with these changes.
  • I have added new files under rapids-cmake/
    • I have added include guards (include_guard(GLOBAL))
    • I have added the associated docs/ rst file and updated the api.rst

@bdice (Contributor) commented Jun 12, 2024

The diff is here: NVIDIA/cccl@fde1cf7...e21d607

I feel confident in merging this as soon as you think it's sufficiently tested @sleeepyjack @PointKernel. All of the commits up to those from yesterday (which, unfortunately, include the fix you need) were tested by the latest run of NVIDIA/cccl#1667.

@PointKernel (Member) commented Jun 12, 2024

Added the rmm PR as well (rapidsai/rmm#1584), since the raft failure (rapidsai/raft#2358) pointed to an rmm invocation:

Thread 1 "CORE_TEST" hit Catchpoint 1 (exception thrown), 0x00007fffb24824a1 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
(cuda-gdb) bt 5
#0  0x00007fffb24824a1 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#1  0x0000555555654a0a in rmm::mr::limiting_resource_adaptor<rmm::mr::device_memory_resource>::do_allocate(unsigned long, rmm::cuda_stream_view) ()
#2  0x00005555556c3d8f in void* cuda::mr::__4::_Resource_vtable_builder::_Alloc_async<rmm::mr::limiting_resource_adaptor<rmm::mr::device_memory_resource> >(void*, unsigned long, unsigned long, cuda::__4::stream_ref) ()
#3  0x00005555556becd1 in raft::Raft_WorkspaceResource_Test::TestBody() ()
#4  0x00005555557cfee1 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) ()
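
For readers unfamiliar with the adaptor in frame #1, here is a hedged, self-contained sketch of how rmm::mr::limiting_resource_adaptor is typically used and why its do_allocate can throw. This is not the raft test from the trace; the limit value, names, and raw-pointer constructor form are assumptions based on rmm releases around the time of this PR.

```cpp
// Hedged sketch only, not the raft test from the backtrace: how
// rmm::mr::limiting_resource_adaptor (frame #1 above) is typically used, and
// why its do_allocate throws when a request exceeds the configured limit.
// The 1 MiB limit and all names here are made up for illustration.
#include <rmm/cuda_stream_view.hpp>
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/limiting_resource_adaptor.hpp>

#include <cstdio>
#include <exception>

int main()
{
  rmm::mr::cuda_memory_resource upstream{};

  // Cap allocations served through this adaptor at 1 MiB.
  rmm::mr::limiting_resource_adaptor<rmm::mr::cuda_memory_resource> limited{&upstream, 1 << 20};

  try {
    // 2 MiB exceeds the limit, so the allocation throws; a catchpoint on
    // `throw` (as in the cuda-gdb session above) would stop inside do_allocate.
    void* p = limited.allocate(2 << 20, rmm::cuda_stream_view{});
    limited.deallocate(p, 2 << 20, rmm::cuda_stream_view{});
  } catch (std::exception const& e) {
    std::printf("allocation rejected: %s\n", e.what());
  }
  return 0;
}
```

When such an exception propagates out of do_allocate, it surfaces as the __cxa_throw frame at the top of a backtrace like the one above.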

@sleeepyjack (Contributor, Author) commented:

cugraph CI has some problems; I restarted it one more time, fingers crossed. Apart from that, all other projects seem to be fine with the change.

@sleeepyjack (Contributor, Author) commented Jun 20, 2024

cugraph now finally passes all unit tests as well. What's going on with the failing rapids-cmake tests? I can't make sense of them and would like a second pair of eyes to verify whether they are related to this change.

@sleeepyjack (Contributor, Author) commented:
All problems have been resolved. This PR is ready for review.

@bdice added the labels improvement (Improves an existing functionality) and non-breaking (Introduces a non-breaking change) on Jun 24, 2024
@bdice (Contributor) commented Jun 24, 2024

/merge

@rapids-bot (bot) merged commit 5d47273 into rapidsai:branch-24.08 on Jun 24, 2024 (16 checks passed)
@bdice (Contributor) commented Jun 24, 2024

Thanks @sleeepyjack!

@sleeepyjack deleted the bugfix/cccl-span branch on June 24, 2024 at 14:09