Message boards : Science : What else does the project need besides more computing power?
Joined: 12 Nov 22 · Posts: 5 · Credit: 4,387,274 · RAC: 5,288
While more computing power is always beneficial, what else does the project need most at this stage? Are there specific areas where volunteers can contribute beyond donating crunching power? For example, are there needs related to software development, outreach, documentation, or anything else that could help this project thrive?
Joined: 8 Jul 11 · Posts: 1366 · Credit: 613,343,017 · RAC: 744,696
> While more computing power is always beneficial, what else does the project need most at this stage?

It might be possible to improve the GPU app, but that would require a GPU expert who can figure out how to efficiently implement a multi-precision integer library. My current implementation gives speedups of 10 to 100 times over a CPU (depending on the GPU), but I hear a typical GPU can be over 1000 times faster than a CPU, so I wonder if there is room for improvement.

The first step might be to reach out to one of the other projects, like PrimeGrid, to see how they handle very large integers (~1000 bits).
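To give a feel for what a multi-precision integer library on a GPU involves, here is a minimal sketch (not the project's actual code; the `U1024` type and limb count are illustrative assumptions): a ~1000-bit unsigned integer stored as an array of 64-bit limbs, added on the device with explicit carry propagation, since no native type that wide exists.

```cuda
// Minimal sketch (not NumberFields@home code): a 1024-bit unsigned integer
// stored as sixteen 64-bit limbs, with device-side addition that propagates
// carries manually because GPUs have no native integer type this wide.
#include <cstdint>

constexpr int LIMBS = 16;                    // 16 * 64 = 1024 bits

struct U1024 {
    uint64_t limb[LIMBS];                    // limb[0] = least significant
};

// c = a + b (mod 2^1024); returns the carry out of the top limb.
__device__ uint64_t add_u1024(U1024 &c, const U1024 &a, const U1024 &b)
{
    uint64_t carry = 0;
    for (int i = 0; i < LIMBS; ++i) {
        uint64_t s = a.limb[i] + carry;
        uint64_t k = (s < carry);            // overflow from adding the old carry
        s += b.limb[i];
        k += (s < b.limb[i]);                // overflow from adding b's limb
        c.limb[i] = s;
        carry = k;
    }
    return carry;
}
```

Multiplication is where most of the cost (and the tuning opportunity) lives: the schoolbook method needs on the order of LIMBS² of these 64x64->128-bit partial products, each with its own carry handling.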
Joined: 4 Jan 25 · Posts: 13 · Credit: 40,882,444 · RAC: 553,189
I was wondering if you had come across this article previously (and if it's even relevant to NumberFields processing): Accelerate Large Linear Programming Problems with NVIDIA cuOpt.

Synopsis: The evolution of linear programming (LP) solvers has been marked by significant milestones over the past century, from Simplex to the interior point method (IPM). The introduction of primal-dual linear programming (PDLP) has brought another significant advancement.

If I remember correctly, back in the very early days of Seti@home, before GPU computing was a thing, Nvidia actually helped develop the first code to make use of their GPUs in order to promote CUDA for such work. While they won't do that these days, thanks to AI/LLMs being all the rage, a post on their developer boards asking for some insights might prove useful, possibly gaining the interest of someone who wants to work on their coding and optimisation skills.

Grant
Darwin NT, Australia.
Joined: 8 Jul 11 · Posts: 1366 · Credit: 613,343,017 · RAC: 744,696
> I was wondering if you had come across this article previously (and if it's even relevant to NumberFields processing).

Linear programming is for a certain class of problems that have linear relations, which we don't have here. Even if we did, the real bottleneck is with the large-precision integers, since GPUs don't have native data types for that, so you have to create your own (think creating 128-bit adders/multipliers from the 64-bit native versions).
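As a generic illustration of building 128-bit arithmetic from the native 64-bit operations (a sketch under my own naming, not code from the project), the snippet below represents a 128-bit value as two 64-bit halves, adds two such values with manual carry detection, and forms the full 128-bit product of two 64-bit operands using CUDA's `__umul64hi` intrinsic.

```cuda
// Generic sketch: a 128-bit unsigned integer built from two native 64-bit
// words, with addition and 64x64->128 multiplication done "by hand".
#include <cstdint>

struct u128 {
    uint64_t lo;
    uint64_t hi;
};

// r = a + b (mod 2^128)
__device__ u128 add128(u128 a, u128 b)
{
    u128 r;
    r.lo = a.lo + b.lo;
    uint64_t carry = (r.lo < a.lo);      // low-word overflow produces a carry
    r.hi = a.hi + b.hi + carry;
    return r;
}

// r = the full 128-bit product of two 64-bit operands
__device__ u128 mul64x64(uint64_t a, uint64_t b)
{
    u128 r;
    r.lo = a * b;                        // low 64 bits of the product
    r.hi = __umul64hi(a, b);             // high 64 bits via the CUDA intrinsic
    return r;
}
```

A full 128x128 multiply (mod 2^128) then combines three such partial products (lo*lo, plus the low halves of lo*hi and hi*lo shifted up by 64 bits), and the same pattern repeats for 256-bit and wider types.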
Joined: 4 Jan 25 · Posts: 13 · Credit: 40,882,444 · RAC: 553,189
> Linear programming is for a certain class of problems that have linear relations, which we don't have here. Even if we did, the real bottleneck is with the large-precision integers, since GPUs don't have native data types for that, so you have to create your own (think creating 128-bit adders/multipliers from the 64-bit native versions).

So, what's known as bignum arithmetic / arbitrary-precision arithmetic? (Some quick searches on those show results for libraries, but many of them appear to have little to no documentation...)

Edit: searches on CUDA __int128 return some articles on it being introduced in CUDA 11.5, but not much else.

Grant
Darwin NT, Australia.
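For what it's worth, here is a toy example of what the built-in 128-bit type looks like in device code, assuming a CUDA 11.5 or newer toolchain on a 64-bit platform where nvcc exposes `unsigned __int128`; the kernel and names are illustrative only.

```cuda
// Toy example (illustrative only): using unsigned __int128 in device code,
// assuming CUDA 11.5+ on a 64-bit host where nvcc exposes the type.
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

// Multiply two 64-bit values into a full 128-bit product, then split the
// result back into high and low 64-bit words for output.
__global__ void mul_kernel(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    unsigned __int128 p = (unsigned __int128)a * b;
    *lo = (uint64_t)p;
    *hi = (uint64_t)(p >> 64);
}

int main()
{
    uint64_t *d_hi, *d_lo, h_hi, h_lo;
    cudaMalloc(&d_hi, sizeof(uint64_t));
    cudaMalloc(&d_lo, sizeof(uint64_t));

    mul_kernel<<<1, 1>>>(~0ull, ~0ull, d_hi, d_lo);   // (2^64 - 1)^2

    cudaMemcpy(&h_hi, d_hi, sizeof(uint64_t), cudaMemcpyDeviceToHost);
    cudaMemcpy(&h_lo, d_lo, sizeof(uint64_t), cudaMemcpyDeviceToHost);
    printf("hi = %llx  lo = %llx\n",                  // fffffffffffffffe  1
           (unsigned long long)h_hi, (unsigned long long)h_lo);

    cudaFree(d_hi);
    cudaFree(d_lo);
    return 0;
}
```

Even so, the built-in type stops at 128 bits, so for the ~1000-bit integers mentioned above you still end up writing a limb-array type along the lines of the earlier sketches.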
Joined: 8 Jul 11 · Posts: 1366 · Credit: 613,343,017 · RAC: 744,696
> > Linear programming is for a certain class of problems that have linear relations, which we don't have here. Even if we did, the real bottleneck is with the large-precision integers, since GPUs don't have native data types for that, so you have to create your own (think creating 128-bit adders/multipliers from the 64-bit native versions).
>
> So, what's known as bignum arithmetic / arbitrary-precision arithmetic? (Some quick searches on those show results for libraries, but many of them appear to have little to no documentation...)

It has been several years since I've looked at it, so things may have changed. But I did implement an int128 and int256 class in CUDA, which did help for the earlier part of the calculation.
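To make the "int256 class" idea concrete, here is a rough hypothetical sketch of what such a class could look like (it is not the project's implementation, and every name in it is my own): an unsigned 256-bit type made of four 64-bit limbs with an overloaded `operator+`, so the kernel code can read like ordinary integer arithmetic while the carry handling stays in one place.

```cuda
// Hypothetical sketch (not the project's int256 class): an unsigned 256-bit
// integer as four 64-bit limbs with an overloaded operator+ usable on the
// device, so calling code reads like ordinary integer arithmetic.
#include <cstdint>

struct uint256 {
    uint64_t limb[4];                       // limb[0] = least significant

    __host__ __device__ uint256() : limb{0, 0, 0, 0} {}
    __host__ __device__ explicit uint256(uint64_t x) : limb{x, 0, 0, 0} {}

    // Addition mod 2^256; the carry out of the top limb is discarded.
    __host__ __device__ uint256 operator+(const uint256 &other) const
    {
        uint256 r;
        uint64_t carry = 0;
        for (int i = 0; i < 4; ++i) {
            uint64_t s = limb[i] + carry;
            uint64_t k = (s < carry);       // overflow from adding the carry
            s += other.limb[i];
            k += (s < other.limb[i]);       // overflow from adding the limb
            r.limb[i] = s;
            carry = k;                      // propagate to the next limb
        }
        return r;
    }
};

// With the operator defined, big-integer arithmetic inside a kernel
// looks like normal code.
__global__ void demo(uint256 *out, uint256 a, uint256 b)
{
    *out = a + b;
}
```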
Joined: 12 Nov 22 · Posts: 5 · Credit: 4,387,274 · RAC: 5,288
I can't help with the software, but could donations help? In any case, is there a page to donate to the project, or would it be better to donate to ASU?
Joined: 8 Jul 11 · Posts: 1366 · Credit: 613,343,017 · RAC: 744,696
> I can't help with the software, but could donations help? In any case, is there a page to donate to the project, or would it be better to donate to ASU?

Thanks for the offer, Luca! At this time, accepting donations is not feasible. Maybe in the future, if we end up hosting somewhere unaffiliated with ASU.