
use memory of GPU for a process #466

Open
hainguyenct opened this issue May 11, 2017 · 3 comments
@hainguyenct

Hello everyone,
I have 2 GPUs with 12 GB of RAM each. I would like to run a single process that can use the RAM of both GPUs (for example, a process that needs 20 GB of memory).
If you have any information on this, please help me.
Thank you so much.

@wanghechong

Did you solve the problem?
@hainguyenct

@VincentSC

You cannot simply sum the GPU memories. The GPUs would need to share data, which usually goes over the PCI bus. Torch solves this differently.

First split your training data into two sets, since the per-set results can be merged afterwards. See torch/cutorch#42 for some pseudocode.
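A minimal NumPy sketch of that data-parallel idea (not the actual pseudocode from torch/cutorch#42; the names and shapes below are only illustrative): each "GPU" holds just its half of the batch, computes a partial gradient, and the two partial results are merged.

```python
# Sketch: split the batch in two, compute per-half gradients, merge them.
# This reproduces the full-batch gradient, so each GPU only needs memory
# for its own half of the data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))   # full batch of inputs
y = rng.standard_normal((1000, 1))    # targets
w = np.zeros((64, 1))                 # weights of a simple linear model

def grad(X_part, y_part, w):
    """Gradient of 0.5 * mean squared error for a linear model."""
    err = X_part @ w - y_part
    return X_part.T @ err / len(X_part)

# Each half of the data would live on its own GPU.
X0, X1 = np.split(X, 2)
y0, y1 = np.split(y, 2)

g0 = grad(X0, y0, w)        # computed on "GPU 0"
g1 = grad(X1, y1, w)        # computed on "GPU 1"
g_merged = (g0 + g1) / 2    # merge step (averaging the partial gradients)

assert np.allclose(g_merged, grad(X, y, w))
```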

@hainguyenct
Author

Hi @wanghechong, I converted my code to Keras, which supports multi-GPU training.
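For reference, a minimal sketch of what the Keras route looks like, assuming Keras 2.x where keras.utils.multi_gpu_model is available; the model, data shapes, and hyperparameters below are placeholders, not my actual code.

```python
# Sketch: replicate a model on 2 GPUs with Keras' multi_gpu_model.
# Each replica processes a slice of every batch and the sub-batch
# results are merged on the CPU.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

model = Sequential([
    Dense(256, activation='relu', input_shape=(1024,)),
    Dense(10, activation='softmax'),
])

parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(optimizer='adam', loss='categorical_crossentropy')

# Dummy data just to show the training call.
X = np.random.rand(10000, 1024).astype('float32')
y = np.random.rand(10000, 10).astype('float32')
parallel_model.fit(X, y, batch_size=512, epochs=1)
```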
