Hacker News

Without lots of CPU cores and with a high-end NVIDIA card, your speedup expectations can be a bit higher: typically ~100x when comparing GPU-friendly algorithms against unoptimized (but native) CPU code, or ~10x when comparing against decently optimized code running on slower CPUs.

Generally I think a performance/cost comparison is more useful: take the price of the GPU and compare it against a CPU+RAM configuration of equivalent cost.
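A minimal sketch of that comparison, with made-up throughput numbers and prices purely for illustration (plug in your own benchmark results and hardware costs):

```python
# Hypothetical figures for illustration only; the function just divides
# measured throughput by hardware price.
def perf_per_dollar(throughput: float, price: float) -> float:
    """Throughput (e.g. items/s) per dollar of hardware cost."""
    return throughput / price

# Assumed example: a GPU that is 10x faster than a CPU box of similar cost.
gpu = perf_per_dollar(throughput=10_000, price=1_500)  # hypothetical GPU
cpu = perf_per_dollar(throughput=1_000, price=1_500)   # hypothetical CPU+RAM

print(f"GPU: {gpu:.2f} items/s per dollar")
print(f"CPU: {cpu:.2f} items/s per dollar")
print(f"Advantage: {gpu / cpu:.1f}x")
```

At equal price the perf/dollar ratio collapses to the raw speedup, which is exactly why equal-cost configurations make the comparison fair.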


