Why the Difference in Credit for ATI vs. Nvidia GPU

Message boards : Number crunching : Why the Difference in Credit for ATI vs. Nvidia GPU
Warren B. Rogers

Joined: 31 May 13
Posts: 2
Credit: 71,176,287
RAC: 0
Message 5826 - Posted: 12 Jan 2014, 15:49:34 UTC

Hello everyone,

I just have a general question. I recently upgraded my video card from an ATI card to an Nvidia card and noticed that there is a huge difference in the amount of credit given. The ATI would get me around 6,000 credits where the Nvidia was getting around 1,500, but now I'm averaging about 256 credits. I've noticed that the run times for the Nvidia are longer than for the ATI, but there is less credit given. I would have thought it would be the opposite: longer run time, more credit. I'm not complaining; it's just a question that I have.

Thanks,

Warren
ID: 5826
mikey
Joined: 22 Jun 11
Posts: 2080
Credit: 1,843,042,064
RAC: 20,205
Message 5828 - Posted: 13 Jan 2014, 12:10:56 UTC - in response to Message 5826.  

Hello everyone,

I just have a general question. I recently upgraded my video card from an ATI card to an Nvidia card and noticed that there is a huge difference in the amount of credit given. The ATI would get me around 6,000 credits where the Nvidia was getting around 1,500, but now I'm averaging about 256 credits. I've noticed that the run times for the Nvidia are longer than for the ATI, but there is less credit given. I would have thought it would be the opposite: longer run time, more credit. I'm not complaining; it's just a question that I have.

Thanks,

Warren


I can't say exactly, but will start the process... your AMD GPU had about 800 shaders on it; your new GPU has about 768, so it is slightly different but similar. Nvidia does not call theirs shaders, and they work differently, but they are kinda sorta comparable. Each project uses its own programmers when writing the software to crunch the actual units; some favor AMD GPUs, others favor Nvidia GPUs, and here at Moo it is AMD that they favor. Meaning the software takes better advantage of the way AMD does things than the way Nvidia does things. It's kind of like Honda and Ferrari: similar but very different.

Your new Nvidia units are crunching in:
Run time 6,936.34
CPU time 3,889.56
Credit 1,536.00

while the old AMD card did them in:
Run time 3,375.13
CPU time 44.49
Credit 6,144.00

You can see that the AMD GPU did much more work on the GPU itself, while the Nvidia uses the CPU much more. More work means more credits, and since we are talking about GPU units, it makes sense. If you take 24 hours to finish the same unit I take 15 minutes to crunch, you do not get more credits just because it took you longer.
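Put in throughput terms, just doing the arithmetic on the two results quoted above (figures from this thread; the project's actual credit formula isn't shown here, and the helper name is mine):

```python
# Credits per hour of wall-clock run time for the two results above.

def credits_per_hour(credit, run_time_s):
    """Credit earned per hour of run time."""
    return credit / run_time_s * 3600

nvidia = credits_per_hour(1536.00, 6936.34)
amd = credits_per_hour(6144.00, 3375.13)

print(f"Nvidia: {nvidia:.0f} credits/h")   # roughly 800
print(f"AMD:    {amd:.0f} credits/h")      # roughly 6550
print(f"Ratio:  {amd / nvidia:.1f}x")      # roughly 8x in the AMD card's favor
```

So the credit gap is not about run time at all, but about how much work gets done per hour of it.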
ID: 5828
Warren B. Rogers

Joined: 31 May 13
Posts: 2
Credit: 71,176,287
RAC: 0
Message 5830 - Posted: 13 Jan 2014, 18:18:11 UTC - in response to Message 5828.  

Mikey,

Thanks for the information. I thought it was going to be something along those lines, where they favor one GPU over the other. I was just a little surprised at how great the difference was. I now understand that they look at the GPU time for credit rather than the combined GPU/CPU time.

Thank you,

Warren
ID: 5830
mikey
Joined: 22 Jun 11
Posts: 2080
Credit: 1,843,042,064
RAC: 20,205
Message 5831 - Posted: 13 Jan 2014, 23:47:17 UTC - in response to Message 5830.  

Mikey,

Thanks for the information. I thought it was going to be something along those lines, where they favor one GPU over the other. I was just a little surprised at how great the difference was. I now understand that they look at the GPU time for credit rather than the combined GPU/CPU time.

Thank you,

Warren


Projects like GpuGrid, Asteroids and PrimeGrid favor Nvidia GPUs, and in GpuGrid's and Asteroids' case can ONLY use Nvidia GPUs.
ID: 5831
loranger

Joined: 2 May 11
Posts: 1
Credit: 1,418,207
RAC: 0
Message 5955 - Posted: 9 Mar 2014, 22:22:42 UTC
Last modified: 9 Mar 2014, 22:29:22 UTC

Hi,

I am new to this project (crunching with Nvidia).
Such a great difference in credit for such a small difference in GPU/CPU time seems "crazy".
Each GPU card works on the same project, so same credit!

For example (0.2 CPU + 1 GPU):
run time 2072
cpu time 1050
credit: 256
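For what it's worth, the arithmetic on that result (figures from this post; a back-of-the-envelope sketch only, not the project's credit formula):

```python
# CPU-time share and credit rate for the example result above
# (run time 2072 s, CPU time 1050 s, credit 256).

run_time, cpu_time, credit = 2072.0, 1050.0, 256.0

cpu_share = cpu_time / run_time     # fraction of the run spent on the CPU
rate = credit / run_time * 3600     # credits per hour of run time

print(f"CPU share: {cpu_share:.0%}")   # about 51%
print(f"Rate: {rate:.0f} credits/h")   # about 445
```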
ID: 5955
mikey
Joined: 22 Jun 11
Posts: 2080
Credit: 1,843,042,064
RAC: 20,205
Message 5956 - Posted: 10 Mar 2014, 11:48:26 UTC - in response to Message 5955.  

Hi,

I am new to this project (crunching with Nvidia).
Such a great difference in credit for such a small difference in GPU/CPU time seems "crazy".
Each GPU card works on the same project, so same credit!

For example (0.2 CPU + 1 GPU):
run time 2072
cpu time 1050
credit: 256


No, it's all got to do with how WELL it works, not THAT it works. If I put you and a 2-year-old in the same room, you both might be able to stack some blocks, but you of course would do it MUCH better and faster, and therefore you would get many more cookies for it. When a project starts, it chooses to make itself work with either Nvidia cards or AMD cards, as the programming is MUCH different for each. It usually depends on the resources of the person or group running the project, who they know, and what the programmer is most familiar with. Then, when one kind is working, they kind of 'port' the programming over to the other kind, meaning it will work, but we are back to my example of you and the 2-year-old. Some projects are lucky and big enough to have access to programmers who know both kinds of GPUs, and then both kinds work very well. But that is FAR from the norm in my experience.

For instance, at Einstein I can take my AMD 7970, a near state-of-the-art GPU, and run a GPU unit, but it is taking me over 6 hours to finish a single unit. My Nvidia 560Ti, a 5-or-more-year-old GPU, can do the same unit in about 2.5 hours. The project favors Nvidia cards but does work with AMD cards.
ID: 5956
Bernt
Joined: 26 May 11
Posts: 568
Credit: 121,524,886
RAC: 0
Message 5973 - Posted: 16 Mar 2014, 9:02:15 UTC - in response to Message 5956.  

Mikey,
You are so informative and clearly explain this rotten world of Nvidia versus ATI.
ID: 5973
mikey
Joined: 22 Jun 11
Posts: 2080
Credit: 1,843,042,064
RAC: 20,205
Message 5974 - Posted: 16 Mar 2014, 12:31:28 UTC - in response to Message 5973.  

Mikey,
You are so informative and clearly explain this rotten world of Nvidia versus ATI.


Yeah, choosing one maker over the other is not always an easy choice. I chose AMD in the beginning and got most of my points here with them. I am now investing in Nvidia cards, though, as some other projects like them better.
ID: 5974
Link
Joined: 11 Feb 14
Posts: 114
Credit: 7,645,131
RAC: 3
Message 5980 - Posted: 18 Mar 2014, 22:45:01 UTC - in response to Message 5956.  
Last modified: 18 Mar 2014, 22:46:45 UTC

No, it's all got to do with how WELL it works, not THAT it works. If I put you and a 2-year-old in the same room, you both might be able to stack some blocks, but you of course would do it MUCH better and faster, and therefore you would get many more cookies for it. When a project starts, it chooses to make itself work with either Nvidia cards or AMD cards, as the programming is MUCH different for each. It usually depends on the resources of the person or group running the project, who they know, and what the programmer is most familiar with. Then, when one kind is working, they kind of 'port' the programming over to the other kind, meaning it will work, but we are back to my example of you and the 2-year-old. Some projects are lucky and big enough to have access to programmers who know both kinds of GPUs, and then both kinds work very well. But that is FAR from the norm in my experience.

For instance, at Einstein I can take my AMD 7970, a near state-of-the-art GPU, and run a GPU unit, but it is taking me over 6 hours to finish a single unit. My Nvidia 560Ti, a 5-or-more-year-old GPU, can do the same unit in about 2.5 hours. The project favors Nvidia cards but does work with AMD cards.

That's one part of the story. The second is that the chips are very different, and the calculations one application does might not perform as well on one of them as on the other, simply because the hardware design is not as good for that type of calculation.

The same applies to CPUs, btw. Just an example: my Pentium M and my Athlon X2 are both crunching SETI, both the Multibeam and the Astropulse applications. The Pentium is ~20% faster on Astropulse than the Athlon; the Athlon is ~20% faster on Multibeam. Not because the devs don't know how to optimize the applications for each of the CPUs, but in this particular case probably mostly because the Astropulse app benefits more from the larger L2 cache of the Pentium CPU.
ID: 5980
mikey
Joined: 22 Jun 11
Posts: 2080
Credit: 1,843,042,064
RAC: 20,205
Message 5981 - Posted: 19 Mar 2014, 11:15:51 UTC - in response to Message 5980.  

That's one part of the story. The second is that the chips are very different, and the calculations one application does might not perform as well on one of them as on the other, simply because the hardware design is not as good for that type of calculation.

The same applies to CPUs, btw. Just an example: my Pentium M and my Athlon X2 are both crunching SETI, both the Multibeam and the Astropulse applications. The Pentium is ~20% faster on Astropulse than the Athlon; the Athlon is ~20% faster on Multibeam. Not because the devs don't know how to optimize the applications for each of the CPUs, but in this particular case probably mostly because the Astropulse app benefits more from the larger L2 cache of the Pentium CPU.


That's true; if they were all the same, then we wouldn't need two different manufacturers. Different ways to get to the same end point lead to different speeds and credits.
ID: 5981
Link
Joined: 11 Feb 14
Posts: 114
Credit: 7,645,131
RAC: 3
Message 5983 - Posted: 19 Mar 2014, 19:00:17 UTC - in response to Message 5981.  

That's true; if they were all the same, then we wouldn't need two different manufacturers.

We need at least two different manufacturers to keep prices low and actually get something new every now and then, instead of the same chip they have been selling for 20 years. But that's another story.
ID: 5983
mikey
Joined: 22 Jun 11
Posts: 2080
Credit: 1,843,042,064
RAC: 20,205
Message 5984 - Posted: 20 Mar 2014, 11:13:33 UTC - in response to Message 5983.  

That's true; if they were all the same, then we wouldn't need two different manufacturers.

We need at least two different manufacturers to keep prices low and actually get something new every now and then, instead of the same chip they have been selling for 20 years. But that's another story.


That's VERY TRUE. Just look at the lack of Mac clones and the price they can get for their PCs as opposed to the Windows and Linux boxes out there. I often tell people looking for a new PC to get a Mac, especially if they are NOT PC savvy, as Macs have fewer viruses and other problems and the learning curve really ISN'T that steep, but costs often force them to choose among the many Windows clones.
ID: 5984



 
Copyright © 2011-2024 Moo! Wrapper Project