bad performance(compare with cuMemcpy) on x86 system #286

Closed
nkflash opened this issue Nov 27, 2023 · 3 comments

Comments

@nkflash

nkflash commented Nov 27, 2023

CPU info:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7742 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4491.63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Virtualization features:
Virtualization: AMD-V
Caches (sum of all):
L1d: 4 MiB (128 instances)
L1i: 4 MiB (128 instances)
L2: 64 MiB (128 instances)
L3: 512 MiB (32 instances)
NUMA:
NUMA node(s): 8
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-255
Vulnerabilities:
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Srbds: Not affected
Tsx async abort: Not affected

IB/RDMA info (ibv_devinfo):
hca_id: mlx5_2
transport: InfiniBand (0)
fw_ver: 20.31.2006
node_guid: 043f:7203:00df:3db8
sys_image_guid: 043f:7203:00df:3db8
vendor_id: 0x02c9
vendor_part_id: 4123
hw_ver: 0x0
board_id: MT_0000000223
phys_port_cnt: 1
port: 1
state: PORT_ACTIVE (4)
max_mtu: 4096 (5)
active_mtu: 4096 (5)
sm_lid: 302
port_lid: 232
port_lmc: 0x00
link_layer: InfiniBand
hca_id: mlx5_5
transport: InfiniBand (0)
fw_ver: 20.31.2006
node_guid: 043f:7203:00df:4394
sys_image_guid: 043f:7203:00df:4394
vendor_id: 0x02c9
vendor_part_id: 4123
hw_ver: 0x0
board_id: MT_0000000223
phys_port_cnt: 1
port: 1
state: PORT_ACTIVE (4)
max_mtu: 4096 (5)
active_mtu: 4096 (5)
sm_lid: 302
port_lid: 233
port_lmc: 0x00
link_layer: InfiniBand

[image: gdrcopy vs. cuMemcpy benchmark results]

Is this a known issue?

@nkflash nkflash changed the title bad performance on x86 system bad performance(compare with cuMemcpy) on x86 system Nov 27, 2023
@pakmarkthub
Collaborator

Hi @nkflash,

GDRCopy is designed for small message sizes. Based on what you have shown, GDRCopy is faster than cuMemcpy in that regime. Can you elaborate on the issue?

Also, you may want to run the application with numactl -C <cpus> -l or numactl -N <nodes> -l. It will perform better if you pin it to CPU cores that are close to the GPU in the PCIe tree. The -l flag makes sure that host buffers are allocated on that NUMA node.
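
For example, assuming the GPU is attached near NUMA node 3 (the real affinity should be checked with nvidia-smi topo -m; the node index and the copybw test binary here are just placeholders), the launch would look like:

numactl -N 3 -l ./copybw

The same numactl prefix applies to any application that uses GDRCopy.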

@nkflash
Author

nkflash commented Nov 27, 2023

Thank you for your quick response.
Sure, I see that small sizes do better than large sizes. In this case I didn't bind to a specific CPU. If I do bind to the right CPU cores, would gdrcopy be comparable to cuMemcpy at large sizes?

BTW: why is gdrcopy only suitable for small message sizes?

@pakmarkthub
Collaborator

GDRCopy uses the CPU to push/pull data to/from the GPU. It has lower latency than going through the GPU's copy engine, which is what cuMemcpy uses (with some exceptions). However, the CPU is generally not as good as the copy engine at moving large amounts of data.
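
As a rough illustration of that path (a minimal sketch, not from this issue; error checking omitted, and it assumes the gdrdrv module is loaded and the buffer is GPU-page aligned), the "copy" is literally the CPU storing bytes through a BAR1 mapping of the GPU buffer:

/* Minimal GDRCopy host-to-device sketch: the CPU writes directly into a
 * BAR1 mapping of GPU memory. Build with -lgdrapi -lcuda. */
#include <cuda.h>
#include <gdrapi.h>
#include <string.h>

int main(void)
{
    const size_t size = GPU_PAGE_SIZE;   /* 64 KiB; pinning is GPU-page granular */
    CUdevice dev; CUcontext ctx; CUdeviceptr d_ptr;
    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);
    cuMemAlloc(&d_ptr, size);            /* device buffer to expose to the CPU */

    gdr_t g = gdr_open();                /* handle to the gdrdrv kernel module */
    gdr_mh_t mh;
    void *bar_ptr = NULL;
    gdr_pin_buffer(g, d_ptr, size, 0, 0, &mh);
    gdr_map(g, mh, &bar_ptr, size);      /* CPU-visible pointer to GPU memory */

    char src[4096];
    memset(src, 0xab, sizeof(src));
    gdr_copy_to_mapping(mh, bar_ptr, src, sizeof(src));  /* CPU pushes the data */

    gdr_unmap(g, mh, bar_ptr, size);
    gdr_unpin_buffer(g, mh);
    gdr_close(g);
    cuMemFree(d_ptr);
    cuCtxDestroy(ctx);
    return 0;
}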

To clarify, GDRCopy is not a replacement for cuMemcpy. The most efficient approach is to use GDRCopy for small transfers and cuMemcpy for large ones. You will need to find the crossover point between the two on each platform you run your application on, so that you know at which sizes to use GDRCopy.
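
For example, a hypothetical helper (not part of GDRCopy; the 64 KiB threshold is an assumption and should be replaced with the crossover measured by copylat/copybw on the target platform) could dispatch on size like this:

#include <cuda.h>
#include <gdrapi.h>

/* Assumed crossover size; tune per platform by benchmarking. */
#define GDRCOPY_H2D_THRESHOLD (64 * 1024)

/* Host-to-device copy: CPU path (GDRCopy) for small transfers, GPU copy
 * engine (cuMemcpyHtoD) for large ones. 'bar_ptr' is the gdr_map()'ed view
 * of the buffer at device address 'd_ptr'. */
static int copy_h2d(gdr_mh_t mh, void *bar_ptr, CUdeviceptr d_ptr,
                    const void *src, size_t size)
{
    if (size <= GDRCOPY_H2D_THRESHOLD)
        return gdr_copy_to_mapping(mh, bar_ptr, src, size);
    return cuMemcpyHtoD(d_ptr, src, size) == CUDA_SUCCESS ? 0 : -1;
}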
