I read TFM; if I understand correctly, one gets 200 Cobblestones for crunching at 1 gigaflops for 86400 seconds (one day).
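Under that definition, the arithmetic can be sketched as below (the 200-credits-per-GFLOPS-day figure is from the standard Cobblestone definition; the helper name is my own):

```python
# Cobblestone credit: 200 credits for one day (86400 s) of work at 1 GFLOPS,
# i.e. credit = gigaflops * seconds * (200 / 86400)
def cobblestones(gigaflops: float, seconds: float) -> float:
    return gigaflops * seconds * 200.0 / 86400.0

print(cobblestones(1.0, 86400.0))  # 1 GFLOPS for a full day -> 200.0
```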
That said, can one get a more accurate comparison of work done using SETI WU types? Would a "SETI@home v7 7.00 windows_intelx86 (cuda32)" with "Average processing rate 166.4640465293" be comparable to another "SETI@home v7 7.00 windows_intelx86 (cuda32)" with "Average processing rate 125.94004295098"?
This is the same computer; the only differences are the HDD and the OS. Nvidia has a driver for use only with Win 8.1, driver version 326.01.
Win 8.1 with NVIDIA driver 326.01, OpenCL 1.01
Be a little wary of comparing operating systems at the moment with Cuda multibeam V7 (x41zc).
Backstage experimentation and research have verified that for mid to high angle range tasks (the ones sent to Cuda GPUs at the moment), some 20%-45% of the elapsed time (depending on system, driver, and other factors) can be attributed to PCI Express data transfers [i.e. no flops in them].
The general gist is that (for now) there are large numbers of small transfers across the PCI Express bus, and different Windows versions (or rather their WDDM driver models) handle these quite differently (Vista = WDDM 1.0, Win7 = 1.1, Win8 = 1.2, and 8.1 = 1.3).
So, reducing that to its simplest: one quarter to one half of any throughput comparison can come down to seemingly minor differences, and squeezing out those needless variations is part of the optimisation process, as opposed to a function of CreditNew, APR, or trying to compare Cuda revisions or operating systems.
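To put a rough number on that, here is a hypothetical sketch (the 600 s compute time is an invented figure, not a measurement): if two otherwise identical runs spend 20% versus 45% of their elapsed time in transfers, the slower run looks ~45% worse on elapsed time even though the actual compute work is the same.

```python
# Hypothetical sketch: identical compute work, different PCIe transfer overhead.
# If a fraction f of elapsed time goes to transfers (no flops in them),
# then elapsed = compute / (1 - f), and throughput (APR) scales as 1/elapsed.
COMPUTE_S = 600.0  # assumed pure-compute time in seconds (made-up value)

def elapsed(transfer_fraction: float) -> float:
    return COMPUTE_S / (1.0 - transfer_fraction)

low, high = elapsed(0.20), elapsed(0.45)   # the 20%-45% range quoted above
print(low, high, high / low)               # 750 s vs ~1091 s: ~1.45x slower
```

That 1.45x spread is in the same ballpark as the APR gap in the question (166.46 vs 125.94 is about 1.32x), which is why overhead differences alone can dominate such a comparison.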
"It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change."