Message boards : Number crunching : Did the work unit limit change?
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13746 · Credit: 208,696,464 · RAC: 304
> I'm not interested in nefarious ways, I'm happy to take them as they come, but I read about the large amount of AP traffic that some people seem to get, and I never see those numbers.

Some people have a RAC that's 10 times yours, which means they are able to download a lot of work, since they can process all of it well before the 10-day deadline comes around.

Grant
Darwin NT
Tim · Joined: 19 May 99 · Posts: 211 · Credit: 278,575,259 · RAC: 0
Now I'm a bit down. Am I cheating somehow? I don't think so. Is it written somewhere that I must take MB WUs or AP WUs? I don't think so. I can do whatever I want with my preferences, asking no one what to do, because they ARE MINE. No one will tell me whether I download 800 APs or 800 MB WUs. Yes, I can take 800 AP tasks, and I will do it again, because the server allows it and it is legal. We all wanted 100 WUs per GPU, as I remember, and now we are complaining?

By the way... this machine is going to retire in about a month or two. We have a new build here at the office with dual Xeons and 8 GPUs. I will install SETI and run 1000 AP tasks.

And a question for petri33... if you were in 2nd position and I was in 3rd, would you have those questions?

Br
Tim
William · Joined: 14 Feb 13 · Posts: 2037 · Credit: 17,689,662 · RAC: 0
I think my sarcasm detector tingled. If I have learned anything in almost 4 years on these boards, it is that in an international community with lots of non-native speakers, the use of sarcasm is inadvisable. Usually the other person doesn't get it and it gets ugly. Or they do get it and it gets even uglier.

Can we fast-forward to a good credit system, please? The point being: if MB and AP credit weren't so unequal, then some people might not be tempted to stick to AP to boost their RAC, and other people who run exclusively AP for entirely different reasons would not be seen as credit whores. I'm not saying it is either way. To each his own, as long as you don't start doing silly stuff that requires tampering with BOINC core files. My opinion.

A person who won't read has no advantage over one who can't read. (Mark Twain)
ExchangeMan · Joined: 9 Jan 00 · Posts: 115 · Credit: 157,719,104 · RAC: 0
> Now I'm a bit down.

+1
jason_gee · Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0
As it happens, I *choose* to run only MultiBeam at the moment. Not that I don't appreciate Astropulse's approach scientifically, or the work on the applications over time (even though I DO think the stock CPU AP app qualifies as crud by now). It's more that I don't like fighting over bargain tables, like surfing fat people at a new discount store's opening sale. [Not exactly underweight myself these days, so those kinds of sporting events tend to be intimidating.]

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions
jason_gee · Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0
> ...Nothing wrong with that, is there? I don't think so.

Just try not to get trampled :)
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
> Am I cheating somehow?

They did not change the system to allow 100 tasks per GPU. The change was intended to allow the limit to be applied to each different type of GPU. So it could be 100 ATI, 100 iGPU, & 100 NVIDIA if a system had all 3 types of GPU installed. Then a GPU would not be "starved" for work.

It may be fixed at some point, or it may not. It just depends on whether the BOINC guys think it is working correctly. Which it obviously is not.

SETI@home classic workunits: 93,865 · CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
> They did not change the system to allow 100 tasks per GPU. The change was intended to allow the limit to be applied to each different type of GPU. So it could be 100 ATI, 100 iGPU, & 100 NVIDIA if a system had all 3 types of GPU installed. Then a GPU would not be "starved" for work. It may be fixed at some point, or it may not. It just depends on whether the BOINC guys think it is working correctly. Which it obviously is not.

From my POV that's not fair: if you have 3 different types of GPU you could have a 300 WU cache, so why should we, who have 4 identical GPUs on the same host, only have 100? No, that's wrong. The way it is actually "accidentally" working now is the right one: 100 per GPU, no matter the type of GPU.
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
> They did not change the system to allow 100 tasks per GPU. The change was intended to allow the limit to be applied to each different type of GPU. So it could be 100 ATI, 100 iGPU, & 100 NVIDIA if a system had all 3 types of GPU installed. Then a GPU would not be "starved" for work. It may be fixed at some point, or it may not. It just depends on whether the BOINC guys think it is working correctly. Which it obviously is not.

The number of tasks a project limits is separate from how BOINC is meant to function. Even with how it is currently functioning, systems with different kinds of GPUs can still run into GPU starvation, which is what they meant to fix. This change was implemented by the BOINC dev team, & the project admins may not even know that this is occurring.

Also, right now I am getting extra tasks on my systems that have 2 GPUs, ATI/iGPU, & I am only using the iGPU to run tasks. So it gets a really large cache of 200 tasks.
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
Could be, but if they keep it the way it's "accidentally" running now, it helps us all, not just the ones with different brands of GPUs. I could agree that we don't need days of cache anymore after the move to the colo, but we still need a cache that can hold out for a reasonable amount of time, and 100 WUs per host clearly doesn't do that. A 100 WU MB cache per GPU is enough for 6-8 hours on the high-end GPUs. Maybe less on the top ones like the Titan Black, but that is for another time.
Wedge009 · Joined: 3 Apr 99 · Posts: 451 · Credit: 431,396,357 · RAC: 553
> They did not change the system to allow 100 tasks per GPU. The change was intended to allow the limit to be applied to each different type of GPU.

> From my POV that's not fair: if you have 3 different types of GPU you could have a 300 WU cache, so why should we, who have 4 identical GPUs on the same host, only have 100?

Am I missing something here? I thought it had already been established that the increase in the task limit is on a per-GPU basis, regardless of vendor. In fact, as of the time of writing, I can verify that this is still the case for both my mixed-GPU hosts and my single-vendor hosts. So I see no reason for these complaints. (Note: not being angry here, just genuinely confused about why people appear to be upset.)

Edit: emphasis

Soli Deo Gloria
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
> Could be, but if they keep it the way it's "accidentally" running now, it helps us all, not just the ones with different brands of GPUs.

The main point is that the change is not helping those with different types of GPU at all. The change is broken with respect to what it was meant to accomplish.
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
> They did not change the system to allow 100 tasks per GPU. The change was intended to allow the limit to be applied to each different type of GPU.

If you read what the change was meant to do, it is not being accomplished. That is the only concern I really have. What is happening is that the server is taking (limit * number of GPUs) = new limit. What was meant to happen was that the limit would be applied to each type of GPU separately.

On one of my hosts, 5255585, I am only using the Intel GPU. I even set my computing preferences to say not to use ATI GPUs. However, it downloads 200 tasks for the Intel GPU. If I wanted to start using the HD5750, I would get the "reached limit" message.
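[Editor's note: the arithmetic HAL9000 describes can be sketched as follows. This is only an illustration of the two behaviours being discussed, not actual BOINC server code (which is C++); the function names are made up.]

```python
# Sketch of the observed vs. intended task-limit behaviour described above.
# Illustrative only -- not real BOINC scheduler code.

PER_TYPE_LIMIT = 100  # the project's in-progress limit for GPU tasks


def actual_limit(gpus):
    """What the server is observed doing: limit * total GPU count,
    handed out as one shared pool regardless of GPU type."""
    return PER_TYPE_LIMIT * len(gpus)


def intended_limit(gpus):
    """What the change notes describe: a separate 100-task bucket
    for each *type* of GPU present, so no type gets starved."""
    return {gpu_type: PER_TYPE_LIMIT for gpu_type in set(gpus)}


# e.g. HAL9000's two-GPU ATI/iGPU host, or a host with 2 NVIDIA + iGPU:
host = ["nvidia", "nvidia", "intel"]

print(actual_limit(host))    # 300 -- one pool; may all land on one type
print(intended_limit(host))  # 100 per type; the iGPU gets its own bucket
```

Under the observed behaviour the whole 300-task pool can be assigned to a single GPU type, which is exactly the starvation the change was supposed to prevent.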
Wedge009 · Joined: 3 Apr 99 · Posts: 451 · Credit: 431,396,357 · RAC: 553
I'm aware of your concerns that the new limits don't respect the GPU choices in the preferences. I was mainly wondering why some seem to think that the increase doesn't affect those with single-vendor hosts (which it does, at least for me).

As for the GPU preferences, perhaps the limit was not meant to (or cannot) consider those? After all, this task limit is artificial and, as I understand it, only meant to help the database cope with the huge volume of data (it is not to do with server bandwidth issues).
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
> I'm aware of your concerns that the new limits don't respect the GPU choices in the preferences. I was mainly wondering why some seem to think that the increase doesn't affect those with single-vendor hosts (which it does, at least for me).

As far as the limits applying with preferences goes, that is just an observation of mine. It is unknown if it should work that way. According to the change notes, those with a single type of GPU should not be seeing an increase, no matter how many GPUs they have in a host. I have not seen anyone posting otherwise.

> Previously, if a project specified a limit on GPU jobs in progress,

Yes, the limits were put in place to keep the db from crashing, as it was getting grumpy when there were too many results out in the field.
Wedge009 · Joined: 3 Apr 99 · Posts: 451 · Credit: 431,396,357 · RAC: 553
Okay, I missed the commit notes quoted there, as I wasn't really following this thread. Then what we're observing is clearly not what was intended.

By 'I have not seen anyone posting otherwise' I'm guessing you meant with respect to this change log, because on first read I thought you meant that you hadn't seen anyone posting about getting an increase in tasks on single-vendor systems.
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
> Okay, I missed the commit notes quoted there, as I wasn't really following this thread. Then what we're observing is clearly not what was intended.

I was thinking of this comment when I said "I have not seen anyone posting otherwise":

> I was mainly wondering why some seem to think that the increase doesn't affect those with single-vendor hosts (which it does, at least for me).

I have not seen anyone saying that they are still limited to 100 GPU tasks when they have multiple GPUs from a single vendor.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13746 · Credit: 208,696,464 · RAC: 304
> The main point is that the change is not helping those with different types of GPU at all.

It isn't? Pretty sure, from what I've read, those with different GPU types are getting extra work. The only issue appears to be that some people are getting more work allocated for their active GPUs because they have GPUs that haven't been allowed to process work. I.e. 2 discrete video cards are allowed work, the single on-die iGPU isn't allowed work, yet they are getting 300 WUs allocated.

> The change is broken with respect to what it was meant to accomplish.

True. But in this case that's a good thing, as it means everyone with an extra GPU benefits, not just those with different brands. Why should those with different brands of GPU be given extra work, and not those with several of the same brand of GPU?
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
> The main point is that the change is not helping those with different types of GPU at all.

They may be getting more work, but that doesn't mean the work is going to resources that are idle and could be processing it.

> The change is broken with respect to what it was meant to accomplish.

With 1, 2, 3, 4, or even 8 GPUs of the same type & 100 assigned tasks, all of the GPUs will be able to stay busy. On a system with NVIDIA & iGPU, the 100 tasks may all be assigned to the NVIDIA, leaving the iGPU idle. Now that system can download 200 GPU tasks, but they may all still be assigned to the NVIDIA, leaving the iGPU idle. It could also go the other way, and the system could assign 200 tasks to the iGPU instead of a much faster 780 Ti. Maybe stating it that way helps make more sense of how it isn't working? Broken code should be fixed. Who knows, maybe the guys in the lab have seen what is going on & have considered revising the limits based on what has been happening.

EDIT: Thinking about it, I think the ideal way to do what they are trying to implement would be to take the project limit and split it across the different vendor GPUs, probably based on their processing rate. So a system with a faster NVIDIA card and an iGPU might have a limit of, say, 70 for the NVIDIA & 30 for the iGPU. Then both would be able to be kept busy. However, it would need to honor the computing prefs, so if you disabled a GPU type there, it wouldn't split the limit for unused hardware.
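[Editor's note: HAL9000's suggested fix can be sketched as follows. This is a hypothetical illustration, not anything from the BOINC codebase; the function name, the rate figures, and the tie-breaking rule are all made up, and the 70/30 split simply mirrors the numbers in the post above.]

```python
# Sketch of the proposed fix: split the project limit across GPU types
# in proportion to their processing rate, skipping any type the user
# has disabled in their computing preferences. Illustrative only.

def split_limit(total_limit, gpu_rates, enabled):
    """gpu_rates: {gpu_type: relative processing rate}
    enabled: the set of GPU types allowed by the computing prefs."""
    usable = {t: r for t, r in gpu_rates.items() if t in enabled}
    if not usable:
        return {}  # no GPU types enabled -> no GPU tasks at all
    total_rate = sum(usable.values())
    limits = {t: int(total_limit * r / total_rate) for t, r in usable.items()}
    # hand any rounding leftovers to the fastest enabled type
    fastest = max(usable, key=usable.get)
    limits[fastest] += total_limit - sum(limits.values())
    return limits


# A faster NVIDIA card next to an iGPU, both enabled (rates chosen so
# the split matches the 70/30 example in the post above):
print(split_limit(100, {"nvidia": 70.0, "intel": 30.0}, {"nvidia", "intel"}))

# iGPU-only prefs, like the host mentioned earlier in the thread:
# the whole limit goes to the iGPU instead of being doubled.
print(split_limit(100, {"ati": 10.0, "intel": 5.0}, {"intel"}))
```

The key property is that the per-host total never exceeds the project limit, while idle-but-enabled types are always guaranteed a share and disabled types are excluded from the split entirely.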
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.