Bitcoin GPU-based Mining Machines good for BOINC / SETI?

Message boards : Number crunching : Bitcoin GPU-based Mining Machines good for BOINC / SETI?

Previous · 1 . . . 62 · 63 · 64 · 65 · 66 · Next

Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2035353 - Posted: 3 Mar 2020, 1:36:10 UTC - in response to Message 2035343.  

Well that sucks. Do we have any other gpu-oriented BOINC projects that will run with a minimal PCIe load? Otherwise the entire experiment with Mining Motherboards becomes moot.

Tom


Einstein does quite well with it and low PCIe load.
Same with Milkyway

but in both cases, you're better off getting rid of your nvidia cards and moving to AMD, ideally a top FP64 (double precision) performer like the older AMD cards or a Radeon VII if you want to make serious contributions to Milkyway.
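To put rough numbers on the FP64 point, here's a tiny sketch. The ratios and TFLOPS figures below are typical spec-sheet values I'm assuming for illustration, not numbers from this thread; check your own card's datasheet:

```python
# Why FP64-strong cards matter for Milkyway: consumer GPUs run
# double precision at a small fraction of their FP32 rate.
def fp64_tflops(fp32_tflops, ratio):
    """Effective double-precision throughput from the FP32 peak
    and the card's FP64:FP32 ratio."""
    return fp32_tflops * ratio

# Assumed spec-sheet values, for illustration only:
print(fp64_tflops(13.4, 1 / 4))    # Radeon VII (1:4 ratio): ~3.35 TFLOPS FP64
print(fp64_tflops(6.5, 1 / 32))    # typical consumer nvidia (1:32): ~0.2 TFLOPS
```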

Einstein does OK with nvidia, but will use more power for the same work done vs an AMD card.

GPUGrid has a well optimized nvidia app, but needs more PCIe width: PCIe 3.0 x4 minimum, based on my tests.
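For a rough sense of what those link widths mean in bandwidth terms, a back-of-envelope sketch (the per-lane GB/s figures are approximate values I'm assuming, after encoding overhead, not numbers from this thread):

```python
# Approximate usable one-way GB/s per PCIe lane, by generation
# (assumed round figures; Gen1/Gen2 lose more to 8b/10b encoding).
GB_PER_LANE = {1: 0.25, 2: 0.5, 3: 0.985}

def pcie_bandwidth(gen, lanes):
    """Rough one-way bandwidth in GB/s for a PCIe link."""
    return GB_PER_LANE[gen] * lanes

# A mining riser is typically a single lane, far below Gen3 x4:
print(pcie_bandwidth(2, 1))   # ~0.5 GB/s
print(pcie_bandwidth(3, 4))   # ~3.9 GB/s
```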
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2035353
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13864
Credit: 208,696,464
RAC: 304
Australia
Message 2035444 - Posted: 3 Mar 2020, 7:51:13 UTC - in response to Message 2035343.  

Well that sucks. Do we have any other gpu-oriented BOINC projects that will run with a minimal PCIe load? Otherwise the entire experiment with Mining Motherboards becomes moot.
Why not give Collatz a go?
Lots of big numbers to be had there. The more GPUs the better.
Grant
Darwin NT
ID: 2035444
Profile Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 2035490 - Posted: 3 Mar 2020, 12:25:19 UTC - in response to Message 2035353.  

Well that sucks. Do we have any other gpu-oriented BOINC projects that will run with a minimal PCIe load? Otherwise the entire experiment with Mining Motherboards becomes moot.

Tom


Einstein does quite well with it and low PCIe load.
Same with Milkyway

but in both cases, you're better off getting rid of your nvidia cards and moving to AMD, ideally a top FP64 (double precision) performer like the older AMD cards or a Radeon VII if you want to make serious contributions to Milkyway.

Einstein does OK with nvidia, but will use more power for the same work done vs an AMD card.

GPUGrid has a well optimized nvidia app, but needs more PCIe width: PCIe 3.0 x4 minimum, based on my tests.


I have joined GPUGrid on the only box I currently have left running which has 3 Nvidia's plugged directly into the PCIe bus.

Tom
A proud member of the OFA (Old Farts Association).
ID: 2035490
Profile Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 2035491 - Posted: 3 Mar 2020, 12:27:39 UTC - in response to Message 2035444.  

Well that sucks. Do we have any other gpu-oriented BOINC projects that will run with a minimal PCIe load? Otherwise the entire experiment with Mining Motherboards becomes moot.
Why not give Collatz a go?
Lots of big numbers to be had there. The more GPUs the better.


I have just joined Collatz and reduced the Seti@Home resource setting to the same as most of the others.

Will see what shakes out. Still hope to get my BitCoin box up after some more of the local family disruptions have settled down.

Tom
A proud member of the OFA (Old Farts Association).
ID: 2035491
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13864
Credit: 208,696,464
RAC: 304
Australia
Message 2035612 - Posted: 4 Mar 2020, 10:46:11 UTC - in response to Message 2035491.  

I have just joined Collatz and reduced the Seti@Home resource setting to the same as most of the others.
Almost 20 million already. And almost triple your present Seti RAC.
Grant
Darwin NT
ID: 2035612
Ville Saari
Joined: 30 Nov 00
Posts: 1158
Credit: 49,177,052
RAC: 82,530
Finland
Message 2035621 - Posted: 4 Mar 2020, 11:14:02 UTC - in response to Message 2035612.  

I have just joined Collatz and reduced the Seti@Home resource setting to the same as most of the others.
Almost 20 million already. And almost triple your present Seti RAC.
Do they actually grant credit by the original definition of cobblestone?

If my understanding of what Collatz is doing is correct, then it would be purely large integer crunching, which isn't really what GPUs are designed for.
ID: 2035621
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14680
Credit: 200,643,578
RAC: 874
United Kingdom
Message 2035626 - Posted: 4 Mar 2020, 11:34:05 UTC - in response to Message 2035621.  

I have just joined Collatz and reduced the Seti@Home resource setting to the same as most of the others.
Almost 20 million already. And almost triple your present Seti RAC.
Do they actually grant credit by the original definition of cobblestone?

If my understanding of what Collatz is doing is correct, then it would be purely large integer crunching, which isn't really what GPUs are designed for.
I've just answered an idiot poster on the BOINC boards, who asserted much the same: "collatz ... does rate the gflops PPD correctly". No, it doesn't:

1) As you say, Collatz is a conjecture relating to integer maths. GPUs are probably very good at integer maths (hence their usage for coin mining) - but it's not a comparable value.
2) This (SETI) project did for a while make a valiant attempt to 'count' (a fairly rough approximation) the number of floating point operations needed to complete a task. BOINC itself doesn't even try, and in particular doesn't even attempt to benchmark GPUs: it simply reports the theoretical maximum peak value derived from architecture and clock speeds.
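That "theoretical maximum peak" is essentially just arithmetic on the spec sheet. A sketch of the kind of calculation involved (my own illustration, not BOINC's actual code; the GTX 1070 figures are assumed spec values):

```python
def peak_gflops(shaders, clock_mhz, flops_per_clock=2):
    """Theoretical peak: every shader doing a fused multiply-add
    (2 flops) on every clock. Real workloads never reach this."""
    return shaders * clock_mhz * flops_per_clock / 1000.0

# e.g. a GTX 1070 (assumed: 1920 shaders at ~1683 MHz boost):
print(peak_gflops(1920, 1683))   # ~6463 GFLOPS on paper
```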
ID: 2035626
Ville Saari
Joined: 30 Nov 00
Posts: 1158
Credit: 49,177,052
RAC: 82,530
Finland
Message 2035700 - Posted: 4 Mar 2020, 14:43:10 UTC - in response to Message 2035626.  

1) As you say, Collatz is a conjecture relating to integer maths. GPUs are probably very good at integer maths (hence their usage for coin mining) - but it's not a comparable value.
Most GPUs are very bad at integer math. They are fast compared to CPUs because of their massive parallelism, but still their integer performance is a fraction of their floating point performance.

When a GPU is designed, the transistors go where they are needed. GPUs are designed for rendering 3D graphics and that happens with floating point math. Integers are only used for indexing and such.
ID: 2035700
Profile Joseph Stateson Project Donor
Volunteer tester
Joined: 27 May 99
Posts: 309
Credit: 70,759,933
RAC: 3
United States
Message 2035761 - Posted: 4 Mar 2020, 17:35:07 UTC - in response to Message 2035700.  
Last modified: 4 Mar 2020, 17:45:09 UTC

Most GPUs are very bad at integer math


"slow", not "bad" : an "add" takes a single clock cycle. CPU clocks are faster than GPU clocks

They are fast compared to CPUs because of their massive parallelism


only when the problem can best be solved by using parallel algorithms.

but still their integer performance is a fraction of their floating point performance.


What if there is no hardware floating point, or only poor floating point support, such as no double precision? In any event, it is a lot faster to pull a trig value from a table using integers than to calculate it in floating point, no matter how fast the floating point hardware is.
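A minimal sketch of that lookup-table idea (the table size and indexing scheme are my own choices for illustration):

```python
import math

# Build a 1024-entry sine table once; afterwards a "sin" is just
# integer index arithmetic plus one array read, no trig evaluation.
N = 1024
SINE_TABLE = [math.sin(2 * math.pi * i / N) for i in range(N)]

def fast_sin(angle):
    """Wrap the angle to [0, 2*pi), map it to a table index."""
    idx = int(angle / (2 * math.pi) * N) % N
    return SINE_TABLE[idx]

# Accuracy is limited by the table step (~2*pi/1024 here).
print(abs(fast_sin(1.0) - math.sin(1.0)) < 0.01)  # True
```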

When a GPU is designed, the transistors go where they are needed


so do the gas and brake pedals when a car is designed. Some people only know where the gas pedal is.

GPUs are designed for rendering 3D graphics and that happens with floating point math. Integers are only used for indexing and such.


The first "GPU" was built for the Sony PlayStation. Not sure if it even had hardware floating point or if the floating point was implemented using registers. Registers are devices that store bits, usually 0..n-1 where, GUESS WHAT, n is an integer.

this is just my 2c opinion.
ID: 2035761
Profile Joseph Stateson Project Donor
Volunteer tester
Joined: 27 May 99
Posts: 309
Credit: 70,759,933
RAC: 3
United States
Message 2035873 - Posted: 5 Mar 2020, 0:30:46 UTC
Last modified: 5 Mar 2020, 0:38:22 UTC

Found a replacement for my busted X8DTL mobo. The "Xyratex" has 8 PCIe slots and takes the same dual Xeons and server RAM, which has gotten really cheap. Took a while to locate documents and specs for the mobo.

https://www.ebay.com/itm/Xyratex-0944037-02-Dual-Socket-1366-Server-System-Motherboard/392343147646?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2057872.m2749.l2649

Have been using my X8DTL recently without any GPUs, running LHC's "Atlas". One "Atlas" task requires 3 GB + 0.9 GB * number of threads for the "nbody". Hoping to run 3 GPUs plus an 18-thread "nbodied" Atlas task.
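That rule of thumb is easy to sanity-check. A throwaway sketch, assuming the 3 GB + 0.9 GB per thread figures quoted above:

```python
def atlas_ram_gb(threads):
    """RAM needed for one LHC Atlas task, per the rule of thumb:
    3 GB base plus 0.9 GB per thread."""
    return 3.0 + 0.9 * threads

print(atlas_ram_gb(18))   # ~19.2 GB for an 18-thread task
```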
ID: 2035873
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2035884 - Posted: 5 Mar 2020, 1:38:55 UTC - in response to Message 2035873.  

Looks like 7 PCIe slots. Where’s the 8th?
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2035884
Profile Joseph Stateson Project Donor
Volunteer tester
Joined: 27 May 99
Posts: 309
Credit: 70,759,933
RAC: 3
United States
Message 2035886 - Posted: 5 Mar 2020, 1:48:44 UTC - in response to Message 2035884.  
Last modified: 5 Mar 2020, 2:04:42 UTC

Looks like 7 PCIe slots. Where’s the 8th?


Miscounted: election discomfort from yesterday, when I found I could only vote once.

Pimping my old rig sans GPUs. Learned a lot building this. Do not plug risers in backwards, and do not buy tubing from Lowes that melts at 70C.

ID: 2035886
Profile Joseph Stateson Project Donor
Volunteer tester
Joined: 27 May 99
Posts: 309
Credit: 70,759,933
RAC: 3
United States
Message 2036169 - Posted: 6 Mar 2020, 3:13:53 UTC

Follow-up on my new motherboard, "Xyratex".

Linux from an old, different motherboard booted no problem. Try that, Windows 10!

Have an RX-560 in a slot with an x8-to-x16 adapter; works fine. All slots are Gen1, which is fine. I will not be using any 4-in-1 splitters, as the 7 slots are more than enough for my unused boards.

The backs of the slots are not cut out, so an adapter is needed if not using a riser. Temps are reported just fine, no need to re-run lm-sensors or any other 3rd-party software. All works fine.

BUT BUT BUT - the screw holes do not line up, not even close. Had to attach three 1x2 boards, 14 inches long, to the mobo with wood screws to let it rest on the frame of the mining rig. There are a lot fewer fan connectors, but I have enough molex fan splitters for all the fans, pump, etc.

Although only Gen1, at least the 6-core Xeons can hold their own against anything crunchable, unlike the G1840 Celeron that came with the TB85. That system is limited to Milkyway and ATI cards unless I $upgrade$ the CPU.
ID: 2036169
Profile Joseph Stateson Project Donor
Volunteer tester
Joined: 27 May 99
Posts: 309
Credit: 70,759,933
RAC: 3
United States
Message 2036412 - Posted: 7 Mar 2020, 2:48:12 UTC
Last modified: 7 Mar 2020, 2:51:21 UTC

Too late to edit my previous post, and I did not want to leave incorrect info. Xyratex 0944037 motherboard: PCIe Gen2 is supported after all. However, there are not enough Gen2 lanes to go around for all 7 slots. Slots can be picked and chosen for Gen1 or Gen2, so all slots can be used. There are 34 lanes, so I can add splitters for more cards.

After doing an upgrade, something broke in python3, but it is not worth fixing or looking into: I was unable to get the AMD enterprise Ubuntu driver to work with the S-series GPU boards I was planning to use for the Milkyway project. Will use Windows instead.

The motherboard has an AMI BIOS, but it does not have an embedded SLIC, so I may have to add one.
My one unused Win7 Home license (from a family 3-pack) does not support dual CPUs, so Pro is needed. Win7 x64 Pro installed and works fine, but it was a real pain, as only the "simple" updates worked. The required Service Pack 1 did not download, so I had to hunt for it, and I also had to disable driver signature enforcement before the AMD driver worked.

Either I pony up $ to activate, install a SLIC that might brick the system, or buy one of those cheap Win10 licenses Tom's Hardware recommended for $55 (PCdestination).
ID: 2036412
Profile Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 2036742 - Posted: 8 Mar 2020, 13:32:20 UTC

My current plan is to sell all my GTX 1060 3GB and GTX 1070 video cards to help pay down my PayPal revolving account.

The price on GTX 750 Ti's has gotten low enough on eBay that I should be able to get enough of them to attempt to stand up an 18-GPU system.

https://setiathome.berkeley.edu/show_host_detail.php?hostid=8894290 I took it offline so I could both temporarily lower my electricity bill and deal with the current slow-motion family disaster.

Later this spring I want to migrate into the basic 19-GPU mining chassis. I have enough PSUs. I have the MB and possibly the riser kits to do a test. If not, I will have to get some more.

By spending "only" $40 per video card for the test I can then plan a transition to higher performing, more efficient cards as my budget allows.

Currently, it looks like at least Einstein@Home can work with PCIe2 or lower without being handicapped. Some projects, like (apparently) GPUGrid, require full-bandwidth paths to perform at their highest processing speeds.

Darn but S@H was so much more appealing.

Tom
A proud member of the OFA (Old Farts Association).
ID: 2036742
Profile Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 2038910 - Posted: 19 Mar 2020, 12:40:50 UTC

My newest mining rack has arrived. Once I get time I will migrate "everything" from the 6 gpu rack to the 19 gpu rack and see how many riser cables I am going to need to order along with a heavier duty surge protector or two.

I think I have 16 gpus available for "the test". Then I have to decide how much of a power bill I want to pay :)

I can test to see if the OS and Seti@Home recognize the GPUs. But I may need to spend time relocating the box closer to my heavier power circuits before I try a "download and crunch" test.

If I take long enough I will end up over on E@H for the live testing.

Tom
A proud member of the OFA (Old Farts Association).
ID: 2038910
Profile Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 2039361 - Posted: 21 Mar 2020, 10:57:36 UTC - in response to Message 2038910.  


If I take long enough I will end up over on E@H for the live testing.


Got the non-production test finished. MB, OS and S@H recognized 16 gpus. So the next step (today) will be to relocate the box nearer to the power so I can "play" with it.

Tom
A proud member of the OFA (Old Farts Association).
ID: 2039361
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2039370 - Posted: 21 Mar 2020, 11:45:36 UTC - in response to Message 2039361.  

Sounds like you’ve now taken the record for the most GPUs on a single system, usurping Tbar.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2039370
Profile Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 2039550 - Posted: 21 Mar 2020, 22:54:39 UTC - in response to Message 2039370.  

Sounds like you’ve now taken the record for most number of GPUs on a single system. Usurping Tbar



Maybe. I still have to show it will crunch on 16 GPUs, as well as confirm I have enough power circuitry to support running them. I may take a page from your book and power-limit my GPUs to stay below circuit-breaker loads.
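The breaker math is simple enough to sketch. The assumptions here are mine, not from the thread: a US 120 V circuit, the 80% continuous-load rule, and roughly 100 W of non-GPU overhead:

```python
def max_gpu_watts(breaker_amps, n_gpus, volts=120, overhead_w=100):
    """Highest per-GPU power limit that keeps the whole rig under
    80% of the breaker's rating (continuous-load rule)."""
    budget = volts * breaker_amps * 0.8 - overhead_w
    return budget / n_gpus

# 16 GPUs on a single 20 A circuit:
print(max_gpu_watts(20, 16))   # 113.75 W each
```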

Tom
A proud member of the OFA (Old Farts Association).
ID: 2039550
Profile Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 2039580 - Posted: 22 Mar 2020, 1:07:45 UTC - in response to Message 2039550.  
Last modified: 22 Mar 2020, 1:28:39 UTC

Sounds like you’ve now taken the record for most number of GPUs on a single system. Usurping Tbar



Maybe, I still have to show it will crunch on 16 gpus. As well as confirm I have enough power circuitry to support running 16 gpus. I may take a page from your book and power-limit my gpus to stay below circuit-breaker loads.

Tom


Well, nothing has blown yet.

I have Einstein@Home as my standard backup project so it just cheerfully downloaded 16 gpu tasks and started crunching them.

--edit--
And it started the dreaded "output file time limit exceeded" errors. I have throttled the total number of GPUs down to 10 or so.
I have been playing with the CPU-per-GPU ratio. Not sure it has done any good.
--edit--

Tom
A proud member of the OFA (Old Farts Association).
ID: 2039580



 
©2025 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.