Panic Mode On (77) Server Problems?



Message boards : Number crunching : Panic Mode On (77) Server Problems?

juan BFB (Project donor)
Volunteer tester
Joined: 16 Mar 07
Posts: 5340
Credit: 298,398,714
RAC: 463,897
Brazil
Message 1292625 - Posted: 8 Oct 2012, 0:53:43 UTC - in response to Message 1292622.
Last modified: 8 Oct 2012, 0:54:19 UTC

"VLARS don't work well with NVIDIA GPUS"

That is not 100% true. They work just fine on my GTX450, GTX460s,
and GTX560Ti's.


VLARs work on all NVIDIA GPUs, but they take 4x to 8x more time to process, so it is a waste of resources if you process them on a GPU; hence VLARs -> CPU only in NVIDIA environments. You can see that in your completed WUs.
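
(Purely illustrative: a couple of lines of Python showing what the claimed 4x to 8x slowdown works out to; the 10-minute baseline for a normal task is an assumed figure, not one from this post.)

    # Back-of-the-envelope for the claimed "4x to 8x more time" (assumed baseline).
    normal_wu_minutes = 10                 # assumed GPU time for a normal task
    slowdown_low, slowdown_high = 4, 8     # slowdown factors claimed in this thread
    print(f"VLAR on GPU: roughly {normal_wu_minutes * slowdown_low} "
          f"to {normal_wu_minutes * slowdown_high} minutes")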
____________

bill
Joined: 16 Jun 99
Posts: 861
Credit: 23,670,015
RAC: 23,024
United States
Message 1292626 - Posted: 8 Oct 2012, 0:56:19 UTC - in response to Message 1292625.

"VLARS don't work well with NVIDIA GPUS"

That is not 100% true. They work just fine on my GTX450, GTX460s,
and GTX560Ti's.


VLARs work on all NVIDIA GPUs, but they take 4x to 8x more time to process, so it is a waste of resources if you process them on a GPU; hence VLARs -> CPU only in NVIDIA environments. You can see that in your completed WUs.


According to my NVIDIA GPUs, you are wrong.

Wiggo
Joined: 24 Jan 00
Posts: 7094
Credit: 95,184,597
RAC: 73,594
Australia
Message 1292629 - Posted: 8 Oct 2012, 0:59:19 UTC - in response to Message 1292622.

"VLARS don't work well with NVIDIA GPUS"

That is not 100% true. They work just fine on my GTX450, GTX460s,
and GTX560Ti's.

They may work fine until you wind up with too many of them being crunched at the same time, as I found out (when they get like that they seem to leave something behind that eventually clogs up the card's memory), but you'll also find in the end that they are not economical to run on NVIDIA at all.

Cheers.
____________

juan BFB (Project donor)
Volunteer tester
Joined: 16 Mar 07
Posts: 5340
Credit: 298,398,714
RAC: 463,897
Brazil
Message 1292632 - Posted: 8 Oct 2012, 1:07:11 UTC - in response to Message 1292626.


According to my NVIDIA GPUs, you are wrong.


You must have some kind of out-of-this-world GPUs, because it is common knowledge that NVIDIA GPUs are incredibly slow when processing VLARs. You can verify that in several threads on this forum.


____________

Grant (SSSF)
Joined: 19 Aug 99
Posts: 5831
Credit: 59,382,752
RAC: 47,299
Australia
Message 1292636 - Posted: 8 Oct 2012, 1:39:11 UTC - in response to Message 1292632.
Last modified: 8 Oct 2012, 1:39:45 UTC

You must have some kind of out-of-this-world GPUs, because it is common knowledge that NVIDIA GPUs are incredibly slow when processing VLARs. You can verify that in several threads on this forum.

That was the pre-Fermi GPUs.
My GTX 560Ti ran 2 at a time with no problems when they were coming through. Most of them didn't run much longer than the usual long-running WUs, but a few of them did blow out to over an hour, which was still more than 3 times faster than my E6600 can process them.
And even my GTX460 didn't get bogged down with them either, but I am running the Lunatics optimised applications. Running the stock application, it could be a whole different kettle of fish.
____________
Grant
Darwin NT.

bill
Joined: 16 Jun 99
Posts: 861
Credit: 23,670,015
RAC: 23,024
United States
Message 1292638 - Posted: 8 Oct 2012, 1:47:39 UTC - in response to Message 1292632.

I have previously posted my times for work units.
My NVIDIA GPUs process VLARs faster than my CPUs.
Just like Astropulse WUs process faster on qualified GPUs than on CPUs.
I can't help it if you and others are working off insufficient data.

I suggest you and others accept the facts that software improves, hardware improves, and old assumptions based on out-of-date data no longer apply to newer hardware.

And no, Wiggo, I've never had the problem that you refer to.

juan BFB (Project donor)
Volunteer tester
Joined: 16 Mar 07
Posts: 5340
Credit: 298,398,714
RAC: 463,897
Brazil
Message 1292644 - Posted: 8 Oct 2012, 1:59:36 UTC - in response to Message 1292638.
Last modified: 8 Oct 2012, 2:02:33 UTC

I have previously posted my times for work units.
My NVIDIA GPUs process VLARs faster than my CPUs.
Just like Astropulse WUs process faster on qualified GPUs than on CPUs.
I can't help it if you and others are working off insufficient data.

I suggest you and others accept the facts that software improves, hardware improves, and old assumptions based on out-of-date data no longer apply to newer hardware.

And no, Wiggo, I've never had the problem that you refer to.


I really don't agree with you. I have a lot of NVIDIA GPUs, from the 560 to the 690, and I use the latest x41z builds; on ALL of those GPUs a VLAR processes very slowly in comparison with a normal WU, and as far as I know there is no more up-to-date software or hardware.

I think you are confusing things. We are talking about the time to process a WU on a GPU, not on a CPU. Of course processing a VLAR on a GPU is faster than processing it on a CPU, but that is not the point.

On a 580/670 a normal WU takes 10 minutes to process, while a VLAR takes over 1.2 hours. That is the reason why VLARs are not sent to NVIDIA. The same VLAR will probably take about 1.5 hours to process on a normal CPU (not top of the line).

So the time and resources wasted if you try to process VLARs on an NVIDIA card are clear.
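
(To make the arithmetic explicit, here is a small Python sketch of the throughput argument using the numbers quoted above; the minutes are the thread's rough estimates, not benchmarks.)

    # Opportunity cost of running one VLAR on the GPU, using the numbers above.
    gpu_normal_min = 10    # normal WU on a GTX 580/670 (estimate from this post)
    gpu_vlar_min   = 72    # ~1.2 hours for a VLAR on the same GPU
    cpu_vlar_min   = 90    # ~1.5 hours for the same VLAR on a mid-range CPU core

    # While the GPU is busy with one VLAR it could have finished this many normal WUs:
    displaced_normal_wus = gpu_vlar_min / gpu_normal_min
    # The GPU is still faster than the CPU in absolute terms...
    gpu_speedup_over_cpu = cpu_vlar_min / gpu_vlar_min
    print(f"One GPU VLAR displaces ~{displaced_normal_wus:.1f} normal GPU WUs")
    print(f"The GPU is only {gpu_speedup_over_cpu:.2f}x faster than a CPU core on a VLAR")
    # ...which is why both sides can be right: the GPU beats the CPU on a single
    # VLAR, but total throughput is higher if the GPU sticks to normal WUs and
    # the CPU takes the VLARs.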
____________

bill
Joined: 16 Jun 99
Posts: 861
Credit: 23,670,015
RAC: 23,024
United States
Message 1292656 - Posted: 8 Oct 2012, 2:27:51 UTC - in response to Message 1292644.

I have previously posted my times for work units.
My NVIDIA GPUs process VLARs faster than my CPUs.
Just like Astropulse WUs process faster on qualified GPUs than on CPUs.
I can't help it if you and others are working off insufficient data.

I suggest you and others accept the facts that software improves, hardware improves, and old assumptions based on out-of-date data no longer apply to newer hardware.

And no, Wiggo, I've never had the problem that you refer to.


I really don't agree with you. I have a lot of NVIDIA GPUs, from the 560 to the 690, and I use the latest x41z builds; on ALL of those GPUs a VLAR processes very slowly in comparison with a normal WU, and as far as I know there is no more up-to-date software or hardware.

Then by that argument Astropulse WUs shouldn't be run on GPUs.

I think you are confusing things. We are talking about the time to process a WU on a GPU, not on a CPU. Of course processing a VLAR on a GPU is faster than processing it on a CPU, but that is not the point.

No, that is exactly my point. "VLARs don't work well with NVIDIA GPUs" is incorrect. If you want to say it takes more time to run a VLAR than a regular work unit on an NVIDIA GPU, I'll agree. It also takes more time to run an Astropulse on a GPU than a regular work unit. Do we stop sending Astropulses to GPUs then?

On a 580/670 a normal WU takes 10 minutes to process, while a VLAR takes over 1.2 hours. That is the reason why VLARs are not sent to NVIDIA. The same VLAR will probably take about 1.5 hours to process on a normal CPU (not top of the line).

You should actually try it and make note of the times. My VLARs are more than just a little faster on my GPUs than on my CPUs, and I can run 2 at a time and still be faster.


So the time and resources wasted if you try to process VLARs on an NVIDIA card are clear.


Your opinion. I don't agree. On my machine it takes less time and fewer resources to run a VLAR on my GPU than on my CPU.

juan BFB (Project donor)
Volunteer tester
Joined: 16 Mar 07
Posts: 5340
Credit: 298,398,714
RAC: 463,897
Brazil
Message 1292664 - Posted: 8 Oct 2012, 2:41:38 UTC - in response to Message 1292656.


Your opinion. I don't agree. On my machine it takes less time and fewer resources to run a VLAR on my GPU than on my CPU.


In that case you need to show us how to do that; a lot of people will be interested in your solution. As far as I know, there are a lot of people who have been trying to find this for years: how to run a VLAR fast on an NVIDIA GPU.
____________

bill
Joined: 16 Jun 99
Posts: 861
Credit: 23,670,015
RAC: 23,024
United States
Message 1292672 - Posted: 8 Oct 2012, 2:59:12 UTC - in response to Message 1292664.


Your opinion. I don't agree. On my machine it takes less time and fewer resources to run a VLAR on my GPU than on my CPU.


In that case you need to show us how to do that; a lot of people will be interested in your solution. As far as I know, there are a lot of people who have been trying to find this for years: how to run a VLAR fast on an NVIDIA GPU.


Define fast.

I can tell you that a GTX450, GTX460, or GTX560Ti (using Win 7 64 with all patches installed, Lunatics Win64 v0.40, BOINC 6.10.58, 6.10.60, 7.0.25, or 7.0.31, and my app_info set to run two instances per GPU) is considerably faster at running the same VLAR WU than either my E6600 or my Q9550 CPUs.
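
(For anyone curious how "two instances per GPU" is done: in app_info.xml the GPU share per task is set below 1. This is only a minimal, illustrative fragment; the real file written by the Lunatics installer has many more entries, and the version number, plan class and executable name below are placeholders.)

    <app_version>
        <app_name>setiathome_enhanced</app_name>
        <version_num>610</version_num>
        <plan_class>cuda_fermi</plan_class>
        <avg_ncpus>0.05</avg_ncpus>
        <max_ncpus>0.2</max_ncpus>
        <coproc>
            <type>CUDA</type>
            <count>0.5</count>   <!-- 0.5 GPU per task = two tasks per GPU -->
        </coproc>
        <file_ref>
            <file_name>MB_CUDA_app.exe</file_name>   <!-- placeholder name -->
            <main_program/>
        </file_ref>
    </app_version>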

Anything else would be an unsubstantiated guess on my part. But the above does give the lie to "VLARs don't run on NVIDIA GPUs". Others have seen similar results on their machines; just check past posts.

Horacio
Joined: 14 Jan 00
Posts: 536
Credit: 74,086,482
RAC: 68,521
Argentina
Message 1292675 - Posted: 8 Oct 2012, 3:19:11 UTC - in response to Message 1292672.


Your opinion. I don't agree. On my machine it takes less time and fewer resources to run a VLAR on my GPU than on my CPU.


In that case you need to show us how to do that; a lot of people will be interested in your solution. As far as I know, there are a lot of people who have been trying to find this for years: how to run a VLAR fast on an NVIDIA GPU.


Define fast.

I can tell you that a GTX450, GTX460, or GTX560Ti (using Win 7 64 with all patches installed, Lunatics Win64 v0.40, BOINC 6.10.58, 6.10.60, 7.0.25, or 7.0.31, and my app_info set to run two instances per GPU) is considerably faster at running the same VLAR WU than either my E6600 or my Q9550 CPUs.

Anything else would be an unsubstantiated guess on my part. But the above does give the lie to "VLARs don't run on NVIDIA GPUs". Others have seen similar results on their machines; just check past posts.


For SETI I have a host using a GTX680, one using a GT430, and another using a 9500 GT, all of them with Lunatics apps...
Recently, when VLARs were sent to NVIDIA GPUs, I had to abort all of them, not due to speed, nor because they were failing, but simply because while they were being crunched the hosts became so unresponsive and the video was lagging so badly that I was not able to use them for anything else...
The host with the GTX680 was the least affected, but on that host I run 2 VMs, and the processes running on those VMs were failing with I/O timeouts and other weird errors...
That's the main reason for the exclusion of VLARs on NVIDIA GPUs; if it were just a matter of performance, they surely wouldn't have spent even a minute setting up an exclusion mechanism...

Anyway, I agree that adding an option to opt in or out of VLARs on GPUs may be a better path for us, but it is more complex to set up and maintain on the server/lab side, where they already have more on their hands than they can handle...
____________

Grant (SSSF)
Joined: 19 Aug 99
Posts: 5831
Credit: 59,382,752
RAC: 47,299
Australia
Message 1292681 - Posted: 8 Oct 2012, 3:38:31 UTC - in response to Message 1292672.


But the above does give the lie to "VLARs don't run on NVIDIA GPUs".

I'm not aware of anyone saying they won't run; but when people first started crunching with GPUs & the stock applications, a VLAR on the GPU would make the machine in question extremely sluggish, sometimes to the point of being unresponsive.

For my GTX650Ti & GTX460 running the optimised applications, that isn't the case. But if running stock it may well still be the case.

With v7 making use of optimised code for the stock applications, the question of whether or not VLARs run on an NVIDIA GPU will most likely just become another bit of history.
____________
Grant
Darwin NT.

Slavac
Volunteer tester
Joined: 27 Apr 11
Posts: 1932
Credit: 17,952,639
RAC: 0
United States
Message 1292693 - Posted: 8 Oct 2012, 3:58:39 UTC - in response to Message 1292681.

10/7/2012 10:56:03 PM SETI@home Scheduler request completed: got 132 new tasks

0_o

Think the most I've ever seen before today was 70ish.
____________


Executive Director GPU Users Group Inc. -
brad@gpuug.org

Slavac
Volunteer tester
Joined: 27 Apr 11
Posts: 1932
Credit: 17,952,639
RAC: 0
United States
Message 1292695 - Posted: 8 Oct 2012, 3:59:27 UTC - in response to Message 1292693.

Is anyone still having issues with Auto Errored tasks? Basically tasks that error out before they're even downloaded or processed, usually with impossible deadlines?
____________


Executive Director GPU Users Group Inc. -
brad@gpuug.org

bill
Joined: 16 Jun 99
Posts: 861
Credit: 23,670,015
RAC: 23,024
United States
Message 1292698 - Posted: 8 Oct 2012, 4:10:28 UTC - in response to Message 1292675.

Wasn't the 9500 the old NVIDIA architecture that really did puke on VLARs?

The GT430 probably doesn't have enough power to do well; my GTX450 lags just a little bit if I have too many tabs open in my web browser, but that machine just crunches now, if I even bother to turn it on. Heat, power, summer, money, wife. Had to turn some stuff off till winter.

The 680 you would think wouldn't be a problem, but I read somewhere around here that the GTX 6xx architecture had been crippled in some way and that the equivalent GTX 5xx card actually outperformed it in some areas. If I got hold of a GTX680 I'd like to see how it does. Maybe for Christmas. Otherwise I don't know what to tell you.

bill
Joined: 16 Jun 99
Posts: 861
Credit: 23,670,015
RAC: 23,024
United States
Message 1292701 - Posted: 8 Oct 2012, 4:14:46 UTC - in response to Message 1292681.


But the above does give the lie to "VLARs don't run on NVIDIA GPUs".

I'm not aware of anyone saying they won't run; but when people first started crunching with GPUs & the stock applications, a VLAR on the GPU would make the machine in question extremely sluggish, sometimes to the point of being unresponsive.

I know my old GeForce 8800 GTS certainly puked on them.

For my GTX650Ti & GTX460 running the optimised applications, that isn't the case. But if running stock it may well still be the case.

Could be, I haven't run stock in a long time.

With v7 making use of optimised code for the stock applications, the question of whether or not VLARs run on an NVIDIA GPU will most likely just become another bit of history.


We can only hope.

Horacio
Joined: 14 Jan 00
Posts: 536
Credit: 74,086,482
RAC: 68,521
Argentina
Message 1292707 - Posted: 8 Oct 2012, 4:37:09 UTC - in response to Message 1292698.

Wasn't the 9500 the old NVIDIA architecture that really did puke on VLARs?

It's a pre-Fermi, so obviously it was completely expected to puke on the VLARs, but there are a lot of pre-Fermi GPUs like mine out there...

The GT430 probably doesn't have enough power to do well; my GTX450 lags just a little bit if I have too many tabs open in my web browser, but that machine just crunches now, if I even bother to turn it on. Heat, power, summer, money, wife. Had to turn some stuff off till winter.

Well, it's a low-profile, low-cost, low-power GPU, but it's Fermi, and while it is the little sister of the 400 series, it gives me the same RAC as (or more than) the 8 HT cores of the i7-2600 host it runs in...

The 680 you would think wouldn't be a problem, but I read somewhere around here that the GTX 6xx architecture had been crippled in some way and that the equivalent GTX 5xx card actually outperformed it in some areas. If I got hold of a GTX680 I'd like to see how it does. Maybe for Christmas. Otherwise I don't know what to tell you.

The only thing crippled on the high-end 600 series is the double-precision performance... (the mid- and low-end 600 GPUs are the ones that have some other things crippled, mostly memory bus width.)
Anyway, I think that on this host the VLARs were just the straw that broke the camel's back... I guess that if the host were just a cruncher I might not have noticed anything serious...

The point here is that filtering VLARs per GPU model is almost impossible without serious and complex recoding of the schedulers, and, worse, a lot of database access to see what GPU the requester has (the client software asks for work for NVIDIA, ATI or CPU, so the scheduler would need to query the database on each RPC to know which GPU the client has...).
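
(A purely hypothetical sketch, in Python rather than the project's actual C++ scheduler code, of why that is expensive: none of these names exist in BOINC or SETI@home, the point is only the extra per-request lookup of the host's GPU model.)

    # Hypothetical sketch (not real BOINC/SETI scheduler code) of per-model VLAR
    # filtering: the work request only names a resource class, so the scheduler
    # would need an extra host-table lookup on every RPC.
    from dataclasses import dataclass

    VLAR_FRIENDLY_MODELS = {"GTX 460", "GTX 560 Ti", "GTX 580", "GTX 670", "GTX 680"}

    @dataclass
    class Task:
        name: str
        is_vlar: bool

    @dataclass
    class WorkRequest:
        host_id: int
        resource: str          # the client only says "NVIDIA", "ATI" or "CPU"

    def assign_task(request: WorkRequest, queue: list[Task],
                    host_gpu_model: dict[int, str]) -> Task | None:
        for task in queue:
            if task.is_vlar and request.resource == "NVIDIA":
                # The costly part: an extra per-RPC database lookup, modelled
                # here as a dict, just to learn the exact GPU model.
                model = host_gpu_model.get(request.host_id, "")
                if model not in VLAR_FRIENDLY_MODELS:
                    continue    # skip VLARs for this host's GPU
            return task
        return None

    # Example: a 9500 GT host asking for NVIDIA work gets the non-VLAR task.
    queue = [Task("vlar_wu", True), Task("normal_wu", False)]
    print(assign_task(WorkRequest(1, "NVIDIA"), queue, {1: "9500 GT"}).name)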

The current workaround was intended as a temporary fix, with the hope of getting it solved at the app level... but the lack of time and resources, along with other priorities, has made the "temporary" thing last a bit longer than expected...
____________

S@NL Etienne Dokkum
Volunteer tester
Joined: 11 Jun 99
Posts: 162
Credit: 16,588,268
RAC: 26,703
Netherlands
Message 1292710 - Posted: 8 Oct 2012, 4:51:41 UTC - in response to Message 1292695.

Is anyone still having issues with Auto Errored tasks? Basically tasks that error out before they're even downloaded or processed, usually with impossible deadlines?


Yes, every now and then I get 10 or so WUs with a deadline of 5 minutes... Not as often in the last 3-4 days, though.

S@NL Etienne Dokkum
Volunteer tester
Joined: 11 Jun 99
Posts: 162
Credit: 16,588,268
RAC: 26,703
Netherlands
Message 1292714 - Posted: 8 Oct 2012, 4:57:25 UTC

Started accepting new work again this morning. Got 198 GPU tasks in half an hour, so the download side is working pretty well. Shame they're still all shorties...

Grant (SSSF)
Joined: 19 Aug 99
Posts: 5831
Credit: 59,382,752
RAC: 47,299
Australia
Message 1292725 - Posted: 8 Oct 2012, 5:50:34 UTC - in response to Message 1292695.

Is anyone still having issues with Auto Errored tasks? Basically tasks that error out before they're even downloaded or processed, usually with impossible deadlines?

Just had a look, and nothing on my task list since the 3rd.

Sent 3 Oct 2012 | 22:54:50 UTC, Time reported/Deadline 3 Oct 2012 | 23:01:09 UTC, Status Timed out - no response.
But I haven't had many resends since then either.

At the peak of the problems with uploading/getting work, probably 1 in 5 to 1 in 7 requests that resulted in work would have been resends.
____________
Grant
Darwin NT.


