Posts by Link

41) Questions and Answers : Windows : Can't add tasks on BOINC 7.20.2 (Message 8445)
Posted 15 Apr 2023 by Link
Post:
Have you tried adding Moo in the BOINC Manager on that computer? What does the log say when you do that? The computer is not in the list of your computers yet.


In addition, I have an AMD Ryzen 5 5600G (full spec) processor with processing limits set to 70% of the CPUs and 95% of the time, on Win11 64-bit. Despite these limits, the processor is still running during my workload, and I'm concerned about the high temperature of 76°C (168.8°F).
Well, the limits say: use 70% of the processor cores and run 95% of the time. The setting "Suspend when computer is in use" is a different one, and you also need to set BOINC to "Run based on preferences" rather than "Run always" in the Activity menu. In general it should be fine to run 100% of the time even when you are using the computer for something else.


Can you please share your experience with temperature management? I've invested in some expensive blue thermal paste, but it's hard and sticky on the CPU. Additionally, I'm using outdated 2-fan water cooling.
There must be something really wrong with your cooling; this CPU has a TDP of 65 W, so a simple air cooler should be able to keep it well below the temperature you are seeing. What thermal paste is it? Expensive does not automatically mean good. Is there enough airflow in the case? Are the fans perhaps spinning too slowly? Is there much dust on or in the radiator? Is there enough water in the cooling loop, and is it mounted properly? It's always hard to guess without seeing and hearing the system.
42) Message boards : Number crunching : Laptop insanity , gpu errors galore (Message 8440)
Posted 3 Apr 2023 by Link
Post:
Has anyone seen a workunit reissued to the same computer after it got "error while computing"?

Yes, many times.
43) Message boards : Number crunching : Allow us to choose which version to run. (Message 8438)
Posted 29 Mar 2023 by Link
Post:
You can adjust the values for avg_ncpus and max_ncpus to whatever suits your computer best; if you for example need 1 CPU core per task, then set them both to 1. If you don't run CPU tasks at all, they don't matter and you can leave them as they are.
I see your computer uses 1 whole CPU core per task, so it's best to set avg_ncpus and max_ncpus to 1 like this:

  <avg_ncpus>1.00</avg_ncpus>
  <max_ncpus>1.00</max_ncpus>
44) Message boards : Number crunching : Allow us to choose which version to run. (Message 8437)
Posted 28 Mar 2023 by Link
Post:
OK, so run down your cache first, then:

1. Delete all files in your Moo! project dir except those 4:
dnetc_wrapper_1.5_x86_64-pc-linux-gnu__opencl_nvidia_101
dnetc521-linux-amd64-opencl
dnetc-gpu-1.3.ini
job-lin64-opencl-521.1.xml

2. Save this as app_info.xml in your Moo! project dir using a plain text editor. Make sure BOINC can read that file; Linux is a bit picky about permissions. I used "rw-rw-rw-" on my Android device and did not try with less than that, but "r--r--r--" should probably also work:
<app_info>
 <app>
  <name>dnetc</name>
  <user_friendly_name>Distributed.net Client</user_friendly_name>
 </app>
 <file_info>
  <name>dnetc_wrapper_1.5_x86_64-pc-linux-gnu__opencl_nvidia_101</name>
  <executable/>
 </file_info>
 <file_info>
  <name>dnetc521-linux-amd64-opencl</name>
  <executable/>
 </file_info>
 <file_info>
  <name>dnetc-gpu-1.3.ini</name>
 </file_info>
 <file_info>
  <name>job-lin64-opencl-521.1.xml</name>
 </file_info>
 <app_version>
  <app_name>dnetc</app_name>
  <version_num>105</version_num>
  <avg_ncpus>0.25</avg_ncpus>
  <max_ncpus>0.25</max_ncpus>
  <plan_class>opencl_nvidia_101</plan_class>
  <platform>x86_64-pc-linux-gnu</platform>
  <coproc>
   <type>NVIDIA</type>
   <count>1</count>
  </coproc>
  <file_ref>
   <file_name>dnetc_wrapper_1.5_x86_64-pc-linux-gnu__opencl_nvidia_101</file_name>
   <main_program/>
  </file_ref>
  <file_ref>
   <file_name>dnetc521-linux-amd64-opencl</file_name>
   <copy_file/>
  </file_ref>
  <file_ref>
   <file_name>dnetc-gpu-1.3.ini</file_name>
   <open_name>dnetc.ini</open_name>
   <copy_file/>
  </file_ref>
  <file_ref>
   <file_name>job-lin64-opencl-521.1.xml</file_name>
   <open_name>job.xml</open_name>
   <copy_file/>
  </file_ref>
 </app_version>
</app_info>


3. Set your cache very low and check if it works; if not, I will check the stderr of the errored-out tasks. But I don't think I missed anything, so it should work. If it doesn't, simply shut down BOINC, delete the app_info.xml file, start BOINC again and reset the project. Or remove and re-add Moo!, that clears all changes too.
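You can lower the cache in the Manager's computing preferences ("Store at least X days of work"); if you prefer doing it via a file, an untested sketch of a global_prefs_override.xml in the BOINC data directory would be something like:

<global_preferences>
 <work_buf_min_days>0.01</work_buf_min_days>           <!-- "store at least X days of work" -->
 <work_buf_additional_days>0.00</work_buf_additional_days>  <!-- "store up to an additional X days" -->
</global_preferences>

Then re-read the local prefs from the Manager's Options menu or simply restart BOINC.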

You can adjust the values for avg_ncpus and max_ncpus to whatever suits your computer best; if you for example need 1 CPU core per task, then set them both to 1. If you don't run CPU tasks at all, they don't matter and you can leave them as they are.
45) Message boards : Number crunching : Allow us to choose which version to run. (Message 8435)
Posted 27 Mar 2023 by Link
Post:
I need the <app_version> parts for Moo!; they have been cut off due to the length of the file.
They must look like this:
<app_version>
    <app_name>dnetc</app_name>
...
</app_version>

You should have a few of them, below all those <file> entries.
46) Message boards : Number crunching : Allow us to choose which version to run. (Message 8432)
Posted 26 Mar 2023 by Link
Post:
All the information is actually in your client_state.xml; the app_info.xml isn't much different. I even wrote one for Android.

For my GTX 275 and the cuda app (and the CPU app) the relevant part is:
<app>
    <name>dnetc</name>
    <user_friendly_name>Distributed.net Client</user_friendly_name>
    <non_cpu_intensive>0</non_cpu_intensive>
</app>
<file>
    <name>dnetc_wrapper_1.3_windows_intelx86__cuda31.exe</name>
    <nbytes>418304.000000</nbytes>
    <max_nbytes>0.000000</max_nbytes>
    <status>1</status>
    <executable/>
    <signature_required/>
    <file_signature>
2231a6cdbfc3e179ba0552ea2294dc182ddfa5e68681a0adc9f133582e6fd528

a84eaaa2a90a4584daef27a6c625013a4214195553982caf87b4636734eeeee1

aab9ef3e49ab13db461030c4412b0608addb5db5176b61753d1bdb2603691b6d

9778ca62866afb452770a91f24faab7b0c6dc9041232604f9ad73df3a234702a

.
</file_signature>
    <download_url>http://moowrap.net/download/dnetc_wrapper_1.3_windows_intelx86__cuda31.exe</download_url>
</file>
<file>
    <name>dnetc518-win32-x86-cuda31.exe</name>
    <nbytes>218624.000000</nbytes>
    <max_nbytes>0.000000</max_nbytes>
    <status>1</status>
    <executable/>
    <signature_required/>
    <file_signature>
4da380667940129ec8a3eb320b633e4843b68931cf8011d86ee81bf468726c9a
3afa6bf74046a4a19ea3502e5327565320426d1748461de9cc8e42aa46788456
0b3221df05a970f307298a566c105000686e7e717133bc1898b318fa5d42c03b
b9c801449e6a45e38f9a4dfaeeacdb72b1fb2f61458772d2d346f2f315a1c1cc
.
</file_signature>
    <download_url>http://moowrap.net/download/dnetc518-win32-x86-cuda31.exe</download_url>
</file>
<file>
    <name>dnetc-gpu-1.3.ini</name>
    <nbytes>447.000000</nbytes>
    <max_nbytes>0.000000</max_nbytes>
    <status>1</status>
    <signature_required/>
    <file_signature>
7847dd20c4b99cfe4273eab4efa481f1ec7930d9a66f1aed4bdb0c25cdd3f79b

541f9eb91814491f3bdd3a821d2b6fcede5614400876f71e7c618e7fb55d7561

f880f38ac93108d012e7f20282bbc780b9351fbec599ce59c5c5034d57daf2f0

e3f6de14905430e645ce714876476749952aefa7747f47751ddf8f4b8f6cc121

.
</file_signature>
    <download_url>http://moowrap.net/download/dnetc-gpu-1.3.ini</download_url>
</file>
<file>
    <name>job-cuda31-1.00.xml</name>
    <nbytes>229.000000</nbytes>
    <max_nbytes>0.000000</max_nbytes>
    <status>1</status>
    <signature_required/>
    <file_signature>
877551c57d9f79d3523cb80b0a103db90827bd12c4d62e6488fe1753aa4cc10e

b5a74565d9a8b44db03adeaac98db89f30294075026cee1a8549abc5f6857691

4964b7aa4e48896d21aa650e193e49b9497da3d645e038fca76db89cc2f0a481

8c7a35b2b21b80e14206de880dab99bb736464722de9cc0cbaaea09ef9cd61f1

.
</file_signature>
    <download_url>http://moowrap.net/download/job-cuda31-1.00.xml</download_url>
</file>
<file>
    <name>cudart32_31_9.dll</name>
    <nbytes>250472.000000</nbytes>
    <max_nbytes>0.000000</max_nbytes>
    <status>1</status>
    <signature_required/>
    <file_signature>
7085a76c800fff1c4085294fdbd0a39b76e01660c00dc0d059faf7a69ebe0650

88a27b4ace173d3ab46f7f3cd2b68f4c595082f79d317188abb2181a186561ee

50a47e750fd3ffabcd93d9652cd9fd7d18cc8ed2431f02f40fc6a6ad0c049b2a

1e1dbfab0711adae54bc4a1953316f4603faaf3916d1dd99293367b053e44918

.
</file_signature>
    <download_url>http://moowrap.net/download/cudart32_31_9.dll</download_url>
</file>
<file>
    <name>dnetc_wrapper_1.5_windows_x86_64.exe</name>
    <nbytes>1241600.000000</nbytes>
    <max_nbytes>0.000000</max_nbytes>
    <status>1</status>
    <executable/>
    <signature_required/>
    <file_signature>
17be8a6658802c0dc31215a2b142c1d1257a6d892a5e1b62487a4aab995c96d8

23b254cc25fcebf64fd274a60fe3bec3b597d35e6e5bace2ae4937822a051348

0cb28dd04488fe4dc31359459650524851f0dc8787022eebc7b7f049f13cfc3f

3dd8f8f28807e7b38f2463d6d245364de1d1b4eedf23093aea4d47bb712e3c5a

.
</file_signature>
    <download_url>http://moowrap.net/download/dnetc_wrapper_1.5_windows_x86_64.exe</download_url>
</file>
<file>
    <name>dnetc521-win64-amd64.exe</name>
    <nbytes>1554432.000000</nbytes>
    <max_nbytes>0.000000</max_nbytes>
    <status>1</status>
    <executable/>
    <signature_required/>
    <file_signature>
0b16e1207038d1ea61004d29d80229d9783fce1d995bcc55ae1b305180049591

7456871817d536ba1c48f2b174bc768c9903c1c10b8725bbc24b133dee2f8e65

b8e8f6be5cd12f8fff202d907c7832657d7be4e601f9b78d1a7450f25409b6d5

efa52afd6447a5289a5bca89c8670e520318cde6fe3c3da7e71969effb739f42

.
</file_signature>
    <download_url>http://moowrap.net/download/dnetc521-win64-amd64.exe</download_url>
</file>
<file>
    <name>dnetc-cpu-1.4.ini</name>
    <nbytes>524.000000</nbytes>
    <max_nbytes>0.000000</max_nbytes>
    <status>1</status>
    <signature_required/>
    <file_signature>
5a8b04a4f05981f0f7313f2990e902e2b48ee499135dceba6bae7e6ebb633665

1e766e263244f2d5dd2f9fbcfbb88946625d8026cba0f877dd2398779591ab74

29040f9690680fe8aca0404c287cbb10d0c22222496ca3ad08a6169ef83be109

adf0f4246293a13e495ac787593e66dd29f41d141a254ccbb368014864204b62

.
</file_signature>
    <download_url>http://moowrap.net/download/dnetc-cpu-1.4.ini</download_url>
</file>
<file>
    <name>job-win64-521.1.xml</name>
    <nbytes>250.000000</nbytes>
    <max_nbytes>0.000000</max_nbytes>
    <status>1</status>
    <signature_required/>
    <file_signature>
2431edcf113ea61856d4c2305d55f1cc3f610f3e17bd9b7fe9e7931a1f849ff5

e539023ef1c66944d9855dc878f4e49c1bb3fecd40ed0dc3b20734f19fd45a75

fb0200c39fc803624dae0bb0e1320006a46fbf65dda8a9df7ae31e488040869d

5117bc6029c3d1da8bb02fa857794a5fb1d29a9812591f52f735c31898a48214

.
</file_signature>
    <download_url>http://moowrap.net/download/job-win64-521.1.xml</download_url>
</file>
<app_version>
    <app_name>dnetc</app_name>
    <version_num>103</version_num>
    <platform>windows_intelx86</platform>
    <avg_ncpus>1.000000</avg_ncpus>
    <flops>71085192687.983200</flops>
    <plan_class>cuda31</plan_class>
    <api_version>6.13.12</api_version>
    <file_ref>
        <file_name>dnetc_wrapper_1.3_windows_intelx86__cuda31.exe</file_name>
        <main_program/>
    </file_ref>
    <file_ref>
        <file_name>dnetc518-win32-x86-cuda31.exe</file_name>
        <copy_file/>
    </file_ref>
    <file_ref>
        <file_name>dnetc-gpu-1.3.ini</file_name>
        <open_name>dnetc.ini</open_name>
        <copy_file/>
    </file_ref>
    <file_ref>
        <file_name>job-cuda31-1.00.xml</file_name>
        <open_name>job.xml</open_name>
        <copy_file/>
    </file_ref>
    <file_ref>
        <file_name>cudart32_31_9.dll</file_name>
        <copy_file/>
    </file_ref>
    <coproc>
        <type>NVIDIA</type>
        <count>1.000000</count>
    </coproc>
    <gpu_ram>33554432.000000</gpu_ram>
    <dont_throttle/>
</app_version>
<app_version>
    <app_name>dnetc</app_name>
    <version_num>105</version_num>
    <platform>windows_x86_64</platform>
    <avg_ncpus>1.000000</avg_ncpus>
    <flops>7498889328.256757</flops>
    <api_version>7.13.0</api_version>
    <file_ref>
        <file_name>dnetc_wrapper_1.5_windows_x86_64.exe</file_name>
        <main_program/>
    </file_ref>
    <file_ref>
        <file_name>dnetc521-win64-amd64.exe</file_name>
        <copy_file/>
    </file_ref>
    <file_ref>
        <file_name>dnetc-cpu-1.4.ini</file_name>
        <open_name>dnetc.ini</open_name>
        <copy_file/>
    </file_ref>
    <file_ref>
        <file_name>job-win64-521.1.xml</file_name>
        <open_name>job.xml</open_name>
        <copy_file/>
    </file_ref>
    <is_wrapper/>
</app_version>

If you post that part of your client_state.xml, I can write the app_info.xml file for you.
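As an untested sketch, the cuda31 entries above would map to an app_info.xml roughly like this, following the same pattern as the Linux one in message 8437 above (the files themselves should already be in the project directory once a cuda31 task has run):

<app_info>
 <app>
  <name>dnetc</name>
  <user_friendly_name>Distributed.net Client</user_friendly_name>
 </app>
 <file_info>
  <name>dnetc_wrapper_1.3_windows_intelx86__cuda31.exe</name>
  <executable/>
 </file_info>
 <file_info>
  <name>dnetc518-win32-x86-cuda31.exe</name>
  <executable/>
 </file_info>
 <file_info>
  <name>dnetc-gpu-1.3.ini</name>
 </file_info>
 <file_info>
  <name>job-cuda31-1.00.xml</name>
 </file_info>
 <file_info>
  <name>cudart32_31_9.dll</name>
 </file_info>
 <app_version>
  <app_name>dnetc</app_name>
  <version_num>103</version_num>
  <avg_ncpus>1.00</avg_ncpus>
  <max_ncpus>1.00</max_ncpus>
  <plan_class>cuda31</plan_class>
  <platform>windows_intelx86</platform>
  <coproc>
   <type>NVIDIA</type>
   <count>1</count>
  </coproc>
  <file_ref>
   <file_name>dnetc_wrapper_1.3_windows_intelx86__cuda31.exe</file_name>
   <main_program/>
  </file_ref>
  <file_ref>
   <file_name>dnetc518-win32-x86-cuda31.exe</file_name>
   <copy_file/>
  </file_ref>
  <file_ref>
   <file_name>dnetc-gpu-1.3.ini</file_name>
   <open_name>dnetc.ini</open_name>
   <copy_file/>
  </file_ref>
  <file_ref>
   <file_name>job-cuda31-1.00.xml</file_name>
   <open_name>job.xml</open_name>
   <copy_file/>
  </file_ref>
  <file_ref>
   <file_name>cudart32_31_9.dll</file_name>
   <copy_file/>
  </file_ref>
 </app_version>
</app_info>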
47) Message boards : Number crunching : Allow us to choose which version to run. (Message 8430)
Posted 26 Mar 2023 by Link
Post:
This has been requested many times, but unfortunately it has not been implemented yet. What you can do yourself is use the anonymous platform mechanism to force the OpenCL app. This should not cause any issues, since the applications here don't change frequently.

I posted a working app_info.xml in this message; it's for the ATI CAL app, but you can use it as a reference. The correct files, their names and the other information for your Nvidia OpenCL app can be found in your client_state.xml once you have processed an OpenCL task (you don't need to download or modify any files, you should already have them, so skip those steps).
48) Message boards : Number crunching : Project has no tasks available (Message 8424)
Posted 27 Feb 2023 by Link
Post:
Considering the huge number of tasks/workunits waiting for assimilation and file deletion, it's not just the proxy that isn't running. Those numbers should be close to 0.
49) Message boards : Number crunching : Missing blocks (Message 8422)
Posted 13 Feb 2023 by Link
Post:
This happens from time to time; so far the blocks have always appeared on distributed.net after a while.
50) Message boards : Number crunching : 6,000+ Credits for 10-minute Task? (Message 8420)
Posted 5 Jan 2023 by Link
Post:
Ok, it just seems like a lot of credits for a task that completes in 10 minutes or less. Since that host has switched over to MilkyWay tasks, my daily numbers are only about 1/4 of what they were. I know the "standardization" of credit has been a long-standing issue across the various projects.
Most Nvidia consumer GPUs have very poor FP64 performance, which is what Milkyway requires. Your GTX 1660 Ti has a DP:SP ratio of 1:32, so its 5.437 TFLOPS of FP32/INT32 (which is what is used here) translate into just 169.9 GFLOPS of FP64 (5437 / 32 ≈ 170). My significantly older GTX 275 has a 1:8 ratio, resulting in 84.24 GFLOPS FP64, half of what you have, while having only 673.9 GFLOPS FP32, so the difference in credits isn't that huge for me: I get around 1.29x here of what I get from Milkyway. GPUs with 1:5 and better ratios probably get more from Milkyway than from Moo!.


Even though the tasks show "Aborted by user", the exit status was "201 (0x000000C9) EXIT_MISSING_COPROC", so I suspect there's something wrong with the interaction of the OS/BOINC version and the old GeForce 320M graphics in that particular MacBook Pro.
That might explain why there are so many Apple hosts with that behavior.
51) Message boards : Number crunching : Progress? (Message 8414)
Posted 29 Dec 2022 by Link
Post:
Yes, I agree, a direct link to the current progress either from the "project" menu or a "science" menu (which is missing/disabled here) would be nice to have.
52) Message boards : Number crunching : Progress? (Message 8412)
Posted 28 Dec 2022 by Link
Post:
But it's rather confusing, I'm crunching for moo, not whatever distributed.net is.
No, you don't crunch for Moo; Moo does not have its own scientific research, it's just a bridge to distributed.net's RC5 tasks for BOINC users (yoyo@home did the same for OGR tasks, but that seems to be completed now).
53) Message boards : Number crunching : Progress? (Message 8410)
Posted 28 Dec 2022 by Link
Post:
Thanks, where did you find that? Stats in the computing menu takes me to a list of pages for user stats.
distributed.net -> Statistics -> stats.distributed.net -> RC5-72

I have an idea! Since the thing you're looking for is always in the last place you look, let's start at the other end.
I simply started at the page of the project that we are actually crunching for. ;-)
54) Message boards : Number crunching : Progress? (Message 8407)
Posted 28 Dec 2022 by Link
Post:
Current progress.

As of today, "The odds are 1 in 9,741 that we will wrap this thing up in the next 24 hours. (This also means that we'll hit 100% in 9,741 days at yesterday's rate.)"
55) Message boards : Number crunching : 6,000+ Credits for 10-minute Task? (Message 8406)
Posted 28 Dec 2022 by Link
Post:
They've been going since Dec 7, only on one host--but the only other tasks I've done recently are CPU on other hosts which all get 72 credit.
The "tiny" CPU tasks are bundles of 9 blocks from distributed.net, those "huge" 6000+ credits WUs are bundles of usually 768-823 blocks. You can see that in the WU name, dnetc_r72_1672198181_13_770 is for example a bundle of 770 blocks. We get here always 8 credits/block, so yes, 6000+ credits for such WU is right.

EDIT: and since I see this, I have to ask: why are you Apple users always aborting CUDA tasks instead of just disabling them in your project preferences if you don't want to use the GPU in that host for whatever reason? I see this a lot, and it's almost always Apple users who do that; I even started a thread about this issue a while ago.
56) Message boards : Number crunching : 2 GPUs for one task? (Message 8399)
Posted 25 Oct 2022 by Link
Post:
Since I have 8 Windows PCs with GPUs scattered across them, I can put Moo on single GPU systems. And AFAIK it's ok with 2 GPUs if they're not on the same card.
Yes, that seems to work for others in the top computers list.
57) Message boards : Number crunching : 2 GPUs for one task? (Message 8397)
Posted 25 Oct 2022 by Link
Post:
[coproc] ATI instance 0; 0.250000 pending for de_modfit_80_bundle5_3s_south_pt2_2_1666385652_2698675_0	
[coproc] ATI instance 0; 0.250000 pending for de_modfit_80_bundle5_3s_south_pt2_2_1666385652_2698719_0	
[coproc] ATI instance 0; 0.250000 pending for de_modfit_81_bundle5_3s_south_pt2_2_1666385652_2698929_0	
[coproc] ATI instance 0; 0.250000 pending for de_modfit_81_bundle5_3s_south_pt2_2_1666385652_2698850_0	
[coproc] ATI instance 0; 0.250000 pending for de_modfit_81_bundle5_3s_south_pt2_2_1666385652_2698857_0	
[coproc] ATI instance 0; 0.250000 pending for de_modfit_81_bundle5_3s_south_pt2_2_1666385652_2698886_0	
[coproc] ATI instance 0; 0.250000 pending for de_modfit_80_bundle5_3s_south_pt2_2_1666385652_2698609_0	
[coproc] ATI instance 0; 0.250000 pending for de_modfit_78_bundle5_3s_south_pt2_2_1666385652_2698193_0	

This seems to be some kind of BOINC thing, but since it works for Milkyway, it *should* work here too. So that's not the issue.

Unfortunately I'm out of ideas now. :-(

Your config seems to work everywhere except here, so it's some kind of issue either with the Moo wrapper or the dnetc client. You could of course ask on the BOINC forums, there are people with far more experience than me over there (point them to this thread, so they know what we have already tried and what we know), or you could try to PM the admin here, but he has not been active for a while.

If nothing helps, as a workaround you can limit Moo to only one concurrent GPU WU on that machine and run something else on the other GPU (see the config sketch after this list). You may also have to check whether you need to limit Moo to one specific GPU in that case, as we don't know:
1. Whether the Moo task running on GPU 1 makes GPU 0 unusable for anything, or just for another Moo task.
2. Whether Moo is simply unable to run at all on GPU 0 for whatever reason.
3. Whether Moo is unable to run on GPU 0 while GPU 1 is in use.
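I have not tried this with Moo myself, so take it as an untested sketch, but the usual BOINC way to do both parts of that workaround would be an app_config.xml in the Moo! project directory (to limit concurrent Moo tasks) plus an <exclude_gpu> entry in cc_config.xml in the BOINC data directory (to keep Moo off one specific device):

<!-- app_config.xml in the Moo! Wrapper project directory:
     allow at most one Moo task to run at a time -->
<app_config>
 <project_max_concurrent>1</project_max_concurrent>
</app_config>

<!-- cc_config.xml in the BOINC data directory:
     keep Moo! off device 0; use the project URL exactly as shown in BOINC Manager -->
<cc_config>
 <options>
  <exclude_gpu>
   <url>http://moowrap.net/</url>
   <device_num>0</device_num>
  </exclude_gpu>
 </options>
</cc_config>

After saving, re-read the config files from the Manager's Options menu (or restart BOINC) for the changes to take effect.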
58) Message boards : Number crunching : 2 GPUs for one task? (Message 8395)
Posted 24 Oct 2022 by Link
Post:
And the <coproc_debug> output?

But that really looks like some weird bug either in the Moo wrapper or the dnetc client: BOINC runs the 2nd task on GPU 0, but it seems to be stuck.
How come 0 (the first GPU according to Boinc) gets stuck and not 1? I would have expected the second GPU to be stuck. Although according to MSI Afterburner, it IS the second one (WTF?)
Well, those are two identical cards, and Afterburner might count them the other way around, but that's not the issue...

Can you post the same lines of <coproc_debug> output for Milkyway?

This is weird:
[coproc] Assigning ATI instance 0 to dnetc_r72_1666013338_12_768_1
[coproc] Assigning ATI instance 1 to dnetc_r72_1666605020_12_768_0
[coproc] ATI instance 0; 1.000000 pending for dnetc_r72_1666013338_12_768_1
[coproc] ATI instance 0; 1.000000 pending for dnetc_r72_1666605020_12_768_0
[coproc] ATI instance 0: confirming 1.000000 instance for dnetc_r72_1666013338_12_768_1
[coproc] ATI instance 1: confirming 1.000000 instance for dnetc_r72_1666605020_12_768_0

I'd expect to see there 1 on all.
59) Message boards : Number crunching : 2 GPUs for one task? (Message 8393)
Posted 24 Oct 2022 by Link
Post:
And the <coproc_debug> output?

But that really looks like some weird bug either in the Moo wrapper or the dnetc client: BOINC runs the 2nd task on GPU 0, but it seems to be stuck.
60) Message boards : Number crunching : 2 GPUs for one task? (Message 8390)
Posted 24 Oct 2022 by Link
Post:
I assume you want Moo tasks, not more MW? Coproc_debug on, Moo running.
Yes, now I want Moo.

