So I'm watching a TV show and playing ESO on ultra settings on my R9 280X, and it's running at 64C... Is that too warm, or is it OK? Fans are only at 31% too.
No, that is perfectly normal. GPUs naturally run hot. Hell, 64C is nothing; that's actually quite low. My GTX 680 can hit about 70-80C when I play BF4 on ultra.
Damn, the edit times are FAST! EDIT: But if you are seriously concerned about the temps, you can use a program such as AMD OverDrive to edit the fan speeds. It might even let you edit the fan curve and set fan speeds at certain temperatures; I'm not sure on that, as I don't own an AMD card at the moment. But if you do bump up the speeds, it's probably going to be quite loud, so keep that in mind.
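If you're wondering what a fan curve actually is, it's just a list of (temperature, fan speed) points that the driver interpolates between. Here's a rough sketch of the idea in Python; the points are made up for illustration, and this is not OverDrive's actual API, just the concept:

[code]
# Sketch of how a fan curve works: the driver linearly interpolates
# fan duty between user-defined (temp C, fan %) points.
# These points are hypothetical, not AMD OverDrive's actual defaults.
CURVE = [(30, 20), (50, 31), (64, 45), (80, 70), (90, 100)]

def fan_speed(temp_c):
    """Return fan duty (%) for a given GPU temperature (C)."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (t0, s0), (t1, s1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:
            # Linear interpolation between the two surrounding points
            return s0 + (s1 - s0) * (temp_c - t0) / (t1 - t0)

print(fan_speed(64))  # 45% with these made-up points
[/code]

A steeper curve means cooler temps but more noise; that's the whole trade-off.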
Seconded all ^
I'd add that it's not worth installing a program to control the fan curve, unless you enjoy doing that kind of stuff.
Thank you. I'm not super smart when it comes to temps and stuff; I'm just starting to learn all of this, so I wasn't sure if 147.2F, to be exact (64 × 9/5 + 32), was too high for it.
CPUs usually shut down between 90 and 100C; video cards should be about the same.
Yes, most CPUs and GPUs shut down at those temps, although not all do; it varies. Intel, I know, publicly releases what they call the TCASE for every processor they make. IMO you need to keep temps below 80C regardless of what your TCASE (max temp) is. The lower your temps, the longer the CPU/GPU will last. Hardware degrades over time, obviously, but heat accelerates that degradation. Heat is PC hardware's worst enemy (well, that and water, of course, lol). Anyway, I hope this information was helpful.
Those new AMD cards run hot, so 64 degrees is actually pretty good.
Geeze, you guys don't mind it getting that hot? I start freaking out if my stuff goes over 50C.
For CPUs, different core designs have different target temperatures; e.g., a Phenom II CPU will not want to go much higher than 70C, as it starts to lose stability (still stable at stock speeds, but unstable when overclocked). The latest Intel CPUs can generally handle up to 90C without too many issues (though that's not recommended for long-term use).
Most video cards are designed to handle 90C+, which makes the cooling more efficient: the larger the temperature delta between core temperature and ambient temperature, the more heat the heatsink moves per second. That is why, even though the heatsink cannot keep the CPU or GPU at room temperature, the chip stops heating up past a certain point; the heat being removed eventually matches the heat being produced.
If you design a device to run hot, then you can get away with a smaller heatsink. While different core designs waste different amounts of electricity as heat, GPUs tend to push out more heat than CPUs, especially at the upper end, where a video card can use 170-250 watts. If the core can maintain full stability at 90C, then companies can use the temperature delta to their advantage.
It is the reason why you may use a heatsink like the one below in order to cool a CPU that is pulling 95-120 watts,
but a video card like a 350-watt dual-GPU card (such as the 4870X2) can get away with something like this.
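To put some rough numbers on that delta argument: at steady state, the core settles at ambient plus power times the cooler's thermal resistance. A back-of-the-envelope sketch; all the wattages and C/W figures here are illustrative assumptions, not measured specs:

[code]
# Back-of-the-envelope lumped thermal model: at steady state,
# core temp = ambient + power * thermal resistance of the cooler.
# All numbers below are illustrative assumptions, not measured specs.

def equilibrium_temp(power_w, r_th_c_per_w, ambient_c=25.0):
    """Steady-state core temperature for a given heat load."""
    return ambient_c + power_w * r_th_c_per_w

# A big tower cooler (low thermal resistance) on a ~110 W CPU:
print(equilibrium_temp(110, 0.35))   # ~63.5 C

# A smaller, higher-resistance cooler on a 250 W GPU still works
# if the die is designed to sit at ~90 C:
print(equilibrium_temp(250, 0.26))   # ~90 C
[/code]

With those made-up numbers, a card designed to run at 90C can shed 250 watts through a cooler that would be hopeless for a CPU that has to stay in the 60s.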
Spot on. :)
Yeah, on my old 7950 GX2, the air between the two cards would reach 105C (the GPUs themselves usually only reached about 55-60C). This was essentially the prototype card for the 8800; it was basically two 7950s with a cross mount to make it one card. Each board had its own heatsink and fan.
The major difference between the 7950 GX2 and the 8800 was the heat. The 8800 was enclosed and the cards were inverted so that one heatsink could deal with both GPU cores, while the 7950 GX2 was open and was essentially just two independent boards glued together with a bus put in to merge the traffic.
As far as performance, they were equal. I could run any game at the same settings as the 8800, within +/- 5 FPS of it.
Because of the open design, the fans just couldn't push enough of the heat out the back on the bottom board, so a heat bubble would form between the two cards. The bottom heatsink fan would in turn start to draw air from this bubble, thus using hot air to try to cool.
However, even with peak usage putting the air temp around the card at 105C, and gaming at that temp for 2-8 hours, 4-6 days a week, the card still went 4 years before it died of heat stroke (well, presumably; I'm not actually sure why it died).
Just for a frame of reference: while the air between the two boards was 105C, the overall case air temp never rose above 40C (it usually hovered around 37-38), and the CPU temp never went above 45C.
Forgot to say: part of the reason mine lasted so long was probably because I had a 120mm case fan level with the GPU. Instead of having it blow hot air out, like it had been doing for years before that GPU, I swapped it to blow cool air in. This solved most of the heat problem, going from an average of 115-120C at peak to about 105C at peak.
For many video cards, the most common parts to fail are the passives, especially if the cooling design results in the heatsink exhausting hot air over the capacitors.
Most capacitors on video cards are rated for a max of 105C, and they can run at that temperature for a total of 1000 hours before failing. Below that temperature they will last much longer; this is why some motherboard makers will advertise 10,000 hours on their capacitors even though the capacitor model is a 1000-hour, 105C part.
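The usual rule of thumb behind that is that electrolytic cap life roughly doubles for every 10C below the rated temperature. A quick sketch of the math; the 1000-hour/105C rating is from above, and the doubling rule is the standard approximation, not a guarantee:

[code]
# Standard rule of thumb for electrolytic capacitors: rated life
# roughly doubles for every 10 C below the rated temperature.
def cap_life_hours(rated_hours, rated_temp_c, actual_temp_c):
    return rated_hours * 2 ** ((rated_temp_c - actual_temp_c) / 10)

# A 1000-hour, 105 C part running at 65 C:
print(cap_life_hours(1000, 105, 65))   # 16000 hours
[/code]

Which is how a "1000 hour" cap can honestly back a 10,000+ hour claim, as long as the cooling keeps it well below its rating.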
Another failure point is the PCB design and thermal stress. Some BGA devices will develop cold solder joints over time due to thermal stress; for example, the original Xbox 360 getting the red ring of death and needing a reflow in order to fix it. Some companies try to prevent this by using higher-quality PCBs, which better match the thermal characteristics of the components and do a better job of directing heat away from them, along with better direct cooling to prevent huge temperature swings. (Or take the dreaded HP DV6000 and DV4 laptops, which combined a bad package from NVIDIA that caused thermal stress, a bad PCB layout, and a single heatpipe shared between multiple components that all had different overheating temperatures, ending up in situations where the GPU would cook the chipset at full load.)
Here is the heatsink for a stock Radeon HD3850
The card ran at about 100-110C at full load (it did not die, though).