NVIDIA: fix auto adjustment
fix #2564

- add the dataset size to the maximum allowed memory usage
psychocrypt committed Nov 25, 2019
1 parent bcbd88b commit 5ef97da
Showing 1 changed file with 6 additions and 1 deletion.
xmrstak/backend/nvidia/nvcc_code/cuda_extra.cu
@@ -322,6 +322,12 @@ extern "C" int cuda_get_deviceinfo(nvid_ctx* ctx)
 		hashMemSize = std::max(hashMemSize, algo.Mem());
 	}
 
+	const size_t dataset_size = getRandomXDatasetSize();
+	/* Increase maxMemUsage by the dataset size because the upper limits
+	 * cover only the scratchpad and do not take the RandomX dataset into account.
+	 */
+	maxMemUsage += dataset_size;
+
 #ifdef WIN32
 	/* On Windows we use bfactor (splitting the slow kernel into smaller parts)
 	 * to avoid Windows killing long-running kernels.
@@ -346,7 +352,6 @@ extern "C" int cuda_get_deviceinfo(nvid_ctx* ctx)
 	size_t availableMem = freeMemory - (128u * byteToMiB) - 200u;
 	size_t limitedMemory = std::min(availableMem, maxMemUsage);
 
-	const size_t dataset_size = getRandomXDatasetSize();
 	if(limitedMemory <= dataset_size)
 		limitedMemory = 0;
 	else
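The reasoning behind the fix, as the hunks above show: limitedMemory is clamped to maxMemUsage, and the RandomX dataset size is then subtracted from the clamped value. Before this commit, maxMemUsage was a scratchpad-only limit, so subtracting the roughly 2 GiB dataset from it could leave little or no budget for scratchpads; growing the cap by the dataset size keeps the subtraction consistent. Below is a minimal standalone sketch of that budget arithmetic, not the actual cuda_extra.cu code: byteToMiB and the subtraction order mirror the diff, while the input values and the hard-coded dataset size are illustrative assumptions standing in for the driver query and getRandomXDatasetSize().

```cpp
// Minimal sketch of the adjusted memory budget, assuming illustrative inputs.
#include <algorithm>
#include <cstddef>
#include <cstdio>

int main()
{
	const std::size_t byteToMiB = 1024u * 1024u;

	// Hypothetical inputs: free device memory and the scratchpad-only
	// upper limit picked by the auto adjustment.
	const std::size_t freeMemory = 4096u * byteToMiB;
	std::size_t maxMemUsage = 2048u * byteToMiB;

	// Stand-in for getRandomXDatasetSize(); the RandomX dataset is
	// roughly 2080 MiB.
	const std::size_t dataset_size = 2080u * byteToMiB;

	// The fix: the cap covered only scratchpads, so grow it by the dataset.
	maxMemUsage += dataset_size;

	// Same shape as the second hunk: keep some headroom, clamp to the cap,
	// then carve the dataset out of what remains for the scratchpads.
	const std::size_t availableMem = freeMemory - (128u * byteToMiB);
	std::size_t limitedMemory = std::min(availableMem, maxMemUsage);
	limitedMemory = (limitedMemory <= dataset_size) ? 0 : limitedMemory - dataset_size;

	std::printf("scratchpad budget: %zu MiB\n", limitedMemory / byteToMiB);
	return 0;
}
```

With the fix, these inputs leave 1888 MiB for scratchpads. Without the `maxMemUsage += dataset_size` line, the same inputs clamp to 2048 MiB, the dataset subtraction drives the budget to 0, and no scratchpads fit, which is the kind of auto-adjustment failure this commit addresses.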
