21) Message boards : Number crunching : Reduced credit per work unit (Message 3840)
Posted 2 May 2025
> Has anyone else noticed a drop in their RAC over the last 2 days?
I haven't looked at my RAC, so I don't know if it's dropped. But there was a power outage on April 30th at the building housing the server, and it took about 5 hours to restore. I keep about a day's worth of tasks on my computers, so I had plenty of work during the outage. If you only keep an hour or two of tasks, then you may have run out of work, and that would have affected your RAC. I also noticed the ETA on the batch status page went up 0.2 days (~5 hours). Much of that was recovered when users eventually returned the backlog of results, but it didn't completely recover, possibly because some users were starved of tasks during the outage (though there could be other explanations).
22) Message boards : Number crunching : Android support? (Message 3835)
Posted 19 Apr 2025
> When will this project get Android support?
When I or someone else finds the time to port the code. This was attempted years ago, and there were problems due to all the dependent libraries. Maybe the port would go more smoothly now.
23) Message boards : News : Web server upgrade tonight starting at 7pm MST (Message 3833)
Posted 19 Apr 2025
The system update has been completed. Please let me know if you see any issues with the website or the server in general. Thanks!
24) Message boards : News : Web server upgrade tonight starting at 7pm MST (Message 3832)
Posted 19 Apr 2025
> Is this a hardware upgrade? You got a new server for the project? Or are we just talking about updating the software?
Just the Apache software and whatever dependencies come with it.
25) Message boards : News : Web server upgrade tonight starting at 7pm MST (Message 3829)
Posted 18 Apr 2025
The upgrade shouldn't take too long, but I just wanted everyone to be aware.
26) Message boards : Science : What else does the project need besides more computing power? (Message 3827)
Posted 24 Mar 2025
> I can't help with the software, but could donations help? In any case, is there a page to donate to the project, or would it be better to donate to ASU?
Thanks for the offer, Luca! At this time, accepting donations is not feasible. Maybe in the future, if we end up hosting somewhere unaffiliated with ASU.
27) Message boards : Science : What else does the project need besides more computing power? (Message 3825)
Posted 14 Mar 2025
> Linear programming is for a certain class of problems that have linear relations, which we don't have here. Even if we did, the real bottleneck is with the large-precision integers, since GPUs don't have native data types for that, so you have to create your own (think creating 128-bit adders/multipliers from 64-bit native versions).
> So that's what's known as Bignum Arithmetic / Arbitrary-Precision Arithmetic? (Some quick searches on those show results for libraries, but many of them appear to have little to no documentation...)
It has been several years since I've looked at it, so things may have changed. But I did implement an int128 and int256 class in CUDA, which did help for the earlier part of the calculation.
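As a concrete illustration of the "create your own from 64-bit native versions" point, here is a minimal sketch of a 128-bit add built from two 64-bit limbs. The names are illustrative assumptions, not the project's actual int128/int256 classes, and the same body would also work as a CUDA `__device__` function.

```cpp
// Illustrative only: 128-bit unsigned add built from two native 64-bit adds.
#include <cstdint>

struct uint128 {
    uint64_t lo;   // least-significant 64 bits
    uint64_t hi;   // most-significant 64 bits
};

// c = a + b, with the carry out of the low limb propagated into the high limb.
inline uint128 add128(uint128 a, uint128 b) {
    uint128 c;
    c.lo = a.lo + b.lo;
    uint64_t carry = (c.lo < a.lo) ? 1 : 0;   // unsigned wraparound => carry out
    c.hi = a.hi + b.hi + carry;
    return c;
}
```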
28) Message boards : Science : What else does the project need besides more computing power? (Message 3823)
Posted 14 Mar 2025
> I was wondering if you had come across this article previously (and if it's even relevant to NumberFields processing).
Linear programming is for a certain class of problems that have linear relations, which we don't have here. Even if we did, the real bottleneck is with the large-precision integers, since GPUs don't have native data types for that, so you have to create your own (think creating 128-bit adders/multipliers from 64-bit native versions).
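The multiplier half of that statement, sketched for illustration: the upper 64 bits of a 64x64-bit product assembled from native 32x32->64 multiplies. CUDA exposes this directly as the __umul64hi() intrinsic; the portable version below only shows the principle and is not the project's code.

```cpp
// Illustrative only: high 64 bits of a 64x64-bit product, built from
// 32x32->64 native multiplies (schoolbook method with carry handling).
#include <cstdint>

inline uint64_t mulhi64(uint64_t a, uint64_t b) {
    const uint64_t mask = 0xffffffffULL;
    uint64_t aLo = a & mask, aHi = a >> 32;
    uint64_t bLo = b & mask, bHi = b >> 32;

    uint64_t p0 = aLo * bLo;                  // partial products
    uint64_t p1 = aLo * bHi;
    uint64_t p2 = aHi * bLo;
    uint64_t p3 = aHi * bHi;

    uint64_t mid   = p1 + (p0 >> 32);         // cannot overflow 64 bits
    uint64_t carry = mid >> 32;
    mid    = p2 + (mid & mask);
    carry += mid >> 32;

    return p3 + carry;                        // high 64 bits of a*b
}
```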
29) Message boards : Science : What else does the project need besides more computing power? (Message 3821)
Posted 14 Mar 2025
> While more computing power is always beneficial, what else does the project need most at this stage?
It might be possible to improve the GPU app, but that would require a GPU expert who can figure out how to efficiently implement a multi-precision integer library. My current implementation gives speedups between 10 and 100 times over a CPU (depending on the GPU), but I hear a typical GPU can be over 1000 times faster than a CPU, so I wonder if there is room for improvement. The first step might be to reach out to one of the other projects, like PrimeGrid, to see how they handle very large integers (~1000 bits).
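To give a feel for what multi-precision arithmetic at those ~1000-bit sizes involves, here is a minimal limb-based addition sketch. The limb count and names are illustrative assumptions, not taken from the project's library.

```cpp
// Illustrative fixed-width big-integer add: 16 limbs x 64 bits = 1024 bits,
// stored least-significant limb first, with ripple-carry propagation.
#include <cstdint>

constexpr int NUM_LIMBS = 16;   // ~1000-bit operands

inline void add1024(const uint64_t a[NUM_LIMBS], const uint64_t b[NUM_LIMBS],
                    uint64_t r[NUM_LIMBS]) {
    uint64_t carry = 0;
    for (int i = 0; i < NUM_LIMBS; ++i) {
        uint64_t s = a[i] + b[i];
        uint64_t c = (s < a[i]) ? 1 : 0;    // carry out of a[i] + b[i]
        r[i]  = s + carry;
        carry = c | ((r[i] < s) ? 1 : 0);   // carry from either addition
    }
    // On a GPU this loop would typically be unrolled; CUDA can also chain
    // native add-with-carry instructions instead of the comparisons above.
}
```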
30) Message boards : News : Batch plan (Message 3817)
Posted 13 Mar 2025
In preparation for the final row of the sf6 search, I will be dumping a relatively small number of tasks (~60k). These are for the 16x7 case. The first part of row 16 was run years ago with the inefficient app, so this quick search is necessary to get proper timing stats with the latest app. After that, it's back to row 15, which should keep us busy through the end of the summer.
31) Message boards : Number crunching : Website bug (Message 3816)
Posted 3 Mar 2025
> Well, it's now reporting "Warning: Undefined variable $v in /home/boincadm/projects/NumberFields/html/user/host_stats.php on line 93".
Looking into this further, my initial analysis appears to have been wrong. After fixing the undefined variable, it actually shows non-zero counts for the Darwin hosts. I think it is fixed this time.
32) Message boards : Number crunching : Website bug (Message 3813)
Posted 27 Feb 2025
> https://numberfields.asu.edu/NumberFields/host_stats.php
Thanks for reporting! This should now be fixed. The newer PHP standard sometimes conflicts with the older BOINC code. In this case, a null was being returned since there were no "darwin" hosts, and this led to the warning messages.
33) Message boards : Number crunching : End of 32bit CUDA support. (Message 3810)
Posted 26 Feb 2025
> I first read about this a week or two ago, but it looks like it's becoming a bigger issue than it first appeared to be.
All CUDA and OpenCL applications are 64-bit. The only 32-bit applications at NumberFields are CPU only.
34) Message boards : Number crunching : MacOS Apple Silicon ARM support? (Message 3806)
Posted 22 Feb 2025
> To answer the first thing, that's totally fair; it's just that I've seen other projects where it said Intel and ARM Macs, even though this was also Rosetta 2, or where the Apple Silicon Mac got both ARM and x86 tasks, with no option to choose which tasks to get.
I don't know much about the new Apple Silicon Macs or Rosetta. Is Rosetta automatically enabled within macOS? And I didn't realize the BOINC client was smart enough to request x86 tasks on an ARM platform.
35) Message boards : Number crunching : MacOS Apple Silicon ARM support? (Message 3804)
Posted 22 Feb 2025
Update: I don't think I should add an entry for ARM, since the code is x86_64 and it's Rosetta that's translating from x86_64 to ARM. Did you have to do anything special to get BOINC to download tasks? I'm surprised that BOINC would download the old Mac code (x86_64) given that the host is ARM.
36) Message boards : Number crunching : Windows 11 ARM support (Message 3803)
Posted 22 Feb 2025
No, but I've been considering porting the code. It may take a while, though...
37) Message boards : Cafe : Failed to initialize OpenCL on Fedora 41 with AMD RX 6950 XT (Message 3800)
Posted 30 Jan 2025
Yes, AMD cards on Linux have always been fickle. I had the same problem as you when I upgraded from Fedora 39 to 40. After several hours of messing with it, I just replaced my AMD card with an Nvidia card. I ran the Nvidia install script and everything works perfectly now.
38) Message boards : Number crunching : CUDA work units? (Message 3796)
Posted 28 Jan 2025
That link regarding the OpenCL compiler cache appears to be specific to the Intel implementation. I don't see anything similar in the OpenCL standard where I can tell it to save a cached copy of the compiled code. It looks like the app can get access to the compiled code, and then I could manually cache it. I'm not sure that's the optimal solution, but either way it will require some modifications to the application code. Given my limited time right now, I'm not sure I have the bandwidth to do that along with the subsequent testing and porting to all the OpenCL platforms (Windows/Linux and AMD/Nvidia/Intel). But I will put it near the top of the to-do list.
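For reference, a rough sketch of what "get access to the compiled code and manually cache it" could look like with the standard OpenCL API: clGetProgramInfo with CL_PROGRAM_BINARIES extracts the device binary after the first build, and clCreateProgramWithBinary reloads it on later runs. File handling and error paths are simplified here, and this is not the app's actual code.

```cpp
// Illustrative OpenCL program-binary caching (single-device case).
#include <CL/cl.h>
#include <cstdio>
#include <vector>

// After the first successful clBuildProgram(), extract the device binary
// and write it to a cache file.
bool save_program_binary(cl_program prog, const char* cachePath) {
    size_t binSize = 0;
    if (clGetProgramInfo(prog, CL_PROGRAM_BINARY_SIZES,
                         sizeof(binSize), &binSize, nullptr) != CL_SUCCESS || binSize == 0)
        return false;

    std::vector<unsigned char> bin(binSize);
    unsigned char* binPtr = bin.data();
    if (clGetProgramInfo(prog, CL_PROGRAM_BINARIES,
                         sizeof(binPtr), &binPtr, nullptr) != CL_SUCCESS)
        return false;

    FILE* f = fopen(cachePath, "wb");
    if (!f) return false;
    fwrite(bin.data(), 1, binSize, f);
    fclose(f);
    return true;
}

// On later runs, rebuild the program from the cached binary instead of the
// source, skipping the expensive compile step.
cl_program load_cached_program(cl_context ctx, cl_device_id dev,
                               const unsigned char* bin, size_t binSize) {
    cl_int binStatus = CL_SUCCESS, err = CL_SUCCESS;
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &dev, &binSize,
                                                &bin, &binStatus, &err);
    if (err != CL_SUCCESS || binStatus != CL_SUCCESS) return nullptr;

    // clBuildProgram is still required, but for a binary it only finalizes/links.
    if (clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr) != CL_SUCCESS) {
        clReleaseProgram(prog);
        return nullptr;
    }
    return prog;
}
```

In practice the cache would also need to be keyed on the driver version and device, since a stale binary will fail to load and the app would then fall back to compiling from source.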
39) Message boards : Number crunching : CUDA work units? (Message 3795)
Posted 25 Jan 2025
> I noticed that with my RTX 4060Ti Super running under Windows, the run times were slightly longer and the APR slightly lower than yours running under Linux; then I noticed the OpenCL vs. CUDA initialisation line in the stderr output files. I am getting only open_cl work units for my GTX 1650 Super under Win10.
That is a great point regarding the compiler cache, and I've often wondered about that - the first 20 seconds of each job on my AMD card is spent compiling the OpenCL code. This is something I will look into later when I get some free time. When I run the code offline, it always uses the cached version, so I had assumed maybe it was something that had to be changed in the BOINC manager.
40) Message boards : News : Support for Intel GPUs (Message 3790)
Posted 23 Jan 2025
Hey Grant - thanks for pointing out the issue with the Intel drivers. I didn't realize they were that bad. Hopefully they will get updated soon.

No, numBlocks is not directly related to VRAM, but the higher numBlocks is, the more GPU RAM will be needed.

The GPU lookup table has been discussed before. See, for example: https://numberfields.asu.edu/NumberFields/forum_thread.php?id=472&postid=2990#2990

To summarize, threadsPerBlock is the number of threads that run in lockstep. 32 works best for Nvidia cards, and unless I am mistaken, 64 worked best for AMD cards, at least on the one card I tested. I have no idea what the optimal value is for Intel cards; it would depend on their GPU architecture. NumBlocks is not as critical and can be increased until all of the available cores on the GPU are being utilized.

To answer your last question: yes, there will be threadsPerBlock*numBlocks threads running simultaneously within the GPU app.
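For the OpenCL builds, a hedged sketch of how those two parameters could map onto a kernel launch: threadsPerBlock becomes the local work size and threadsPerBlock*numBlocks the global work size, i.e. the total thread count mentioned above. The function and parameter names here are illustrative, not the app's.

```cpp
// Illustrative only: enqueue a 1-D kernel with numBlocks work-groups of
// threadsPerBlock work-items each (the OpenCL analogue of a CUDA grid).
#include <CL/cl.h>

cl_int launch_kernel(cl_command_queue queue, cl_kernel kernel,
                     size_t threadsPerBlock, size_t numBlocks) {
    size_t local  = threadsPerBlock;              // e.g. 32 (Nvidia) or 64 (AMD)
    size_t global = threadsPerBlock * numBlocks;  // total simultaneous threads
    return clEnqueueNDRangeKernel(queue, kernel, 1 /* work_dim */,
                                  nullptr, &global, &local,
                                  0, nullptr, nullptr);
}
```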