
Logical CUDA stream management #278

Open
makortel opened this issue Mar 11, 2019 · 1 comment
@makortel

In #100, the CUDAScopedContext either reuses the cuda::stream_t<> of an input CUDAProduct<T> or creates a new one, depending on whether the EDProducer author wants to queue the work on the same CUDA stream where the input was produced, or on a different one.

The reason to queue on the same stream is to mimic TBB flow graph's streaming_node (with a regular EDProducer simply queuing more work onto the input/output CUDA stream).

But if a product has two (or more) consumers whose work is independent, it would make sense to use different streams for their work to expose the parallelism to CUDA.

Having the EDProducer author decide whether to reuse or create a CUDA stream is suboptimal, since in general the author does not know the full DAG of modules that will be run. Therefore it would be better to make the reuse-vs-create decision in a place that actually knows the DAG.

(or we forget the "streaming mode")
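A minimal, language-agnostic sketch of one such scheduler-side policy (the function name, the DAG representation, and the "first consumer reuses the producer's stream, later consumers get a fresh one" rule are all illustrative assumptions, not the actual framework code):

```python
import itertools

def assign_streams(dag):
    """Assign a logical CUDA stream id to each module.

    dag: dict mapping module name -> list of its input modules,
    given in topological order (hypothetical representation).
    Policy sketch: the first consumer of a producer's output reuses
    the producer's stream (the "streaming mode"); every additional,
    independent consumer gets a new stream so CUDA can overlap them.
    """
    new_stream = itertools.count()
    stream_of = {}
    handed_on = set()  # producers whose stream was already passed to a consumer
    for module, inputs in dag.items():
        for producer in inputs:
            if producer not in handed_on:
                handed_on.add(producer)
                stream_of[module] = stream_of[producer]
                break
        else:
            # No reusable input stream: source module, or all inputs
            # already handed their stream to another consumer.
            stream_of[module] = next(new_stream)
    return stream_of

# Two independent consumers of one product:
dag = {
    "producer": [],
    "consumerA": ["producer"],
    "consumerB": ["producer"],
}
streams = assign_streams(dag)
# consumerA inherits the producer's stream; consumerB gets its own,
# so the two independent consumers' GPU work can run concurrently.
```

This only illustrates where the decision could live; a real implementation would run inside the framework scheduler, which sees the full module DAG.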

@makortel (Author)

#305 makes an attempt to automate the CUDA stream reuse/creation in a relatively simple way.
