View Poll Results: Which option is the best for me? (READ THE OP BEFORE VOTING PLEASE!)

Voters: 2. You may not vote on this poll
  • 2nd R9 290X for Crossfire: 0 votes (0%)
  • new GTX980: 2 votes (100%)
  • new monitor: 0 votes (0%)
Results 1 to 13 of 13
  1. #1
    Boycott shampoo! Demand the REAL poo! ZeroThaHero's Avatar
    Rank
    Forum Member
    Division
    None
    Status
    Active
    Join Date
    Feb 2015
    Age
    37
    Posts
    115

    Planning next incremental upgrade, please help me decide!

    Last summer, I built myself a whole new rig... unfortunately, as is often the case in our wonderful world of PC hardware, I'm already starting to feel the strain on the machine (in a few, extremely limited situations). The primary performance pitfall I've been experiencing is in Battlefield 4 (a renowned system killer), specifically in busy urban fields of view with a relatively long draw distance and lots of "stuff" happening. For people who play and know the game and maps, the best examples I have are standing on a roof in Pearl Market, able to look down the length of one of the longer roads, or flying the scout helicopter around over the middle area of Flood Zone. In both of those circumstances, my typically stable 70-100 FPS consistently drops to the 45-55 range.

    For this reason, I'm considering a couple options for bolstering graphics processing. Right now, the two being most heavily considered are as follows:
    1. Obtaining a 2nd R9 290X, to run the cards in a CrossFire configuration
      • Pros:
        • Slightly cheaper than outright purchasing a GTX980, even accounting for the need to also get a higher-output power supply
        • This extensive benchmark testing chart indicates considerably better performance from the R9 290X CF configuration, when compared to a single GTX980
        • Time and effort saved in not having to do any software changes (drivers, etc.)
      • Cons:
        • Would need to replace power supply with higher output unit
        • Louder operating noise under load
        • Less future-proofing (what happens when I'm having the same issue in 12-18 months?)
    2. Replacing the R9 290X with a new GTX980
      • Pros:
        • Would not require power supply upgrade
        • Future proofing (leaves capability to add 2nd GTX980 in SLI, sometime down the road)
        • Could likely recoup $200-$300 of the upgrade cost by selling the R9 290X (along with the ~2 remaining years on the retailer's optional 3-year in-store immediate-replacement warranty that I purchased)
      • Cons:
        • This similar benchmark testing chart indicates only slight overall performance gains from the single-card alternative
        • Would require additional work to reconfigure drivers and applications, a process that rarely goes off without some horrible bug or crash



    However, I'm also a bit torn on my display configuration at the moment... after reading this PCGamer article on "The best gaming monitors", I'm starting to wonder if the time has come to start shopping for a replacement for my old Samsung 1200p 16:10 60Hz primary display. While the display still looks OK, I absolutely notice the degradation in color saturation and definition when I'm LANning with friends, sitting 6' away from one of their brand-new 1440p 16:9 144Hz monitors.

    When considering options for monitor purchase, I must specify a few things:
    • I'm not going to downgrade in screen real-estate to a 1080p screen. The stuff in my sights is either 16:9 1440p, or 21:9 1440p (whatever that bonkers resolution is, it looks amazing)
    • I'm also not bothering to consider anything in the 4k market. Pixel density on the 27"~34" screens is way too high for my blind old-man eyes to see stuff on my desktop, while any ~40"+ screen is just too bloody huge to be a PC monitor (too close to face in seated position, not being able to easily focus on both middle and edges of screen simultaneously defeats the purpose of having a gorgeous high-res for high-FOV FPS gaming)
    • I'm highly averse to the proprietary BS surrounding the green-man's "GSync" (which sounds like something awful that would be muttered by a gangsta'rapper)
    • I am, however, willing to wait until a suitable monitor that supports DisplayPort 1.2a Adaptive-Sync becomes available on the market
    • As I'm already experiencing intolerable FPS drops while playing at 1200p, should I hold off on massively increasing my pixel render count until I can do so simultaneously with a major graphics processing upgrade?
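For scale, the jump in pixel render count can be put in numbers; here's a quick comparison of the candidate resolutions against the current 1920x1200 panel (21:9 1440p is assumed to be the common 3440x1440):

```python
# Pixel counts for the resolutions under consideration, relative to the
# current 1920x1200 (16:10) display. 21:9 1440p is assumed to be 3440x1440.
RES = {
    "1920x1200 (16:10)": 1920 * 1200,
    "2560x1440 (16:9)":  2560 * 1440,
    "3440x1440 (21:9)":  3440 * 1440,
}

base = RES["1920x1200 (16:10)"]
for name, px in RES.items():
    print(f"{name}: {px:,} px ({px / base:.2f}x)")
# 1920x1200 (16:10): 2,304,000 px (1.00x)
# 2560x1440 (16:9): 3,686,400 px (1.60x)
# 3440x1440 (21:9): 4,953,600 px (2.15x)
```

So even the 16:9 option means rendering ~60% more pixels per frame, which is why holding off until a major GPU upgrade is a fair concern.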



    I'll likely only have $500~$750 (Canadian dollars, in the Canadian hardware market) in my budget for this year's incremental gaming-rig upgrade, and I'm having a hard time deciding which of the above would be the wisest course of action. Nerd army, PLS HALP!

  2. #2
    Criminal Lawyer is a redundancy ModJPB's Avatar
    Rank
    Forum Member
    Division
    None
    Status
    Active
    Join Date
    Jul 2013
    Location
    New Jersey
    Age
    42
    Posts
    242


    Or you could do the free option, which is to overclock. At such a high resolution you need some serious throughput. An overclocked R9 290X is not as fast as a GTX 980, but it is pretty close. If you overclock the CPU, DRAM, GPU, and video memory, I think you would be pleasantly surprised. Plus, with the money you saved, you could then get that fancy monitor you want.

  3. #3
    Keep honking. I'm reloading Mokona512's Avatar
    Rank
    Forum Member
    Division
    None
    Status
    Active
    Join Date
    Jul 2013
    Location
    New York
    Posts
    418


    Agreed, do not upgrade until you have confirmed that you are unable to achieve a higher stable overclock.

    Also, when you get a frame-rate drop, monitor your CPU and GPU usage. Some frame-rate drops are not due to a GPU bottleneck, and in those cases moving to SLI or CrossFire will only make the issue worse, as multiple GPUs add additional overhead on the CPU side.

    As for a monitor upgrade, I recommend going for a higher-quality panel, e.g. http://www.monoprice.com/Product?c_i...seq=1&format=2

    This way you can move from the sRGB color space toward the Adobe RGB color space. While the response time may not be quite as fast as a 144 Hz gaming display's, you will have better picture quality and accuracy, which will be very useful for photography and video production.

    Other than that, do not do any GPU upgrade until you have had the chance to monitor both GPU usage and the CPU usage of each core when the slowdown happens, to find out where the bottleneck really is.
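To make that bottleneck check concrete, here's a toy decision heuristic; the thresholds are illustrative, not from any particular tool, and the usage samples would come from whatever monitor you prefer (HWMonitor, Afterburner logging, etc.):

```python
# Toy heuristic for reading usage logs captured during a frame-rate drop:
# a single core pegged while the GPU loafs suggests a CPU-side bottleneck;
# a pegged GPU suggests the opposite. Thresholds are illustrative.

def likely_bottleneck(per_core_cpu, gpu_usage,
                      cpu_thresh=90.0, gpu_thresh=95.0):
    """per_core_cpu: usage % of each CPU core during the drop.
    gpu_usage: GPU usage % at the same moment.
    Returns "cpu", "gpu", or "unclear"."""
    cpu_pegged = max(per_core_cpu) >= cpu_thresh   # any one core maxed out
    gpu_pegged = gpu_usage >= gpu_thresh
    if gpu_pegged and not cpu_pegged:
        return "gpu"
    if cpu_pegged and not gpu_pegged:
        return "cpu"
    return "unclear"   # both pegged, or neither: keep digging

# One core pinned at 98% while the GPU sits at 55% -> a CPU-bound moment.
print(likely_bottleneck([35, 98, 40, 30], 55))  # -> cpu
```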

  4. #4
    Boycott shampoo! Demand the REAL poo! ZeroThaHero's Avatar


    Quote Originally Posted by AOD_ModJPB View Post
    Or you could do the free option, which is to overclock. At such a high resolution you need some serious throughput. An overclocked R9 290X is not as fast as a GTX 980, but it is pretty close. If you overclock the CPU, DRAM, GPU, and video memory, I think you would be pleasantly surprised. Plus, with the money you saved, you could then get that fancy monitor you want.
    Unfortunately, this road has been traveled, and slightly backtracked, already. My R9 290X is the ASUS DirectCUII, which comes out of the box with the ASUS software overclocking and profile app. The app has presets to clock the card up to 105% and 110%, IIRC; however, I was having some stability issues with it when I first started using the card. In a variety of graphically intensive titles, including BF4, the Metro series (on holy-shit vid settings), and Max Payne 3 (also on holy-shit mode), I would encounter intermittent graphical artifacting, with occasional CTDs or even full system locks. Once I disabled the ASUS app entirely, my stability went right back to 100%. I may try the app again now that the card has had a few big driver updates since I unboxed it in August of 2014, and see if that makes a big difference.


    Quote Originally Posted by AOD_Mokona512 View Post
    Agreed, do not upgrade until you have confirmed that you are unable to achieve a higher stable overclock.

    Also, when you get a frame-rate drop, monitor your CPU and GPU usage. Some frame-rate drops are not due to a GPU bottleneck, and in those cases moving to SLI or CrossFire will only make the issue worse, as multiple GPUs add additional overhead on the CPU side.

    As for a monitor upgrade, I recommend going for a higher-quality panel, e.g. http://www.monoprice.com/Product?c_i...seq=1&format=2

    This way you can move from the sRGB color space toward the Adobe RGB color space. While the response time may not be quite as fast as a 144 Hz gaming display's, you will have better picture quality and accuracy, which will be very useful for photography and video production.

    Other than that, do not do any GPU upgrade until you have had the chance to monitor both GPU usage and the CPU usage of each core when the slowdown happens, to find out where the bottleneck really is.
    I will run CPUID HWMonitor while I'm playing tonight, and I'll be sure to pay close attention to the readings when I'm experiencing instances of FPS loss. I'll try this with the vanilla clock settings, and I'll also give it a shot in conjunction with retrying the software overclocking, as described above.

    Sadly, regarding the monitor, I'll get pwnd by import duty if I buy something internationally. Besides the import tax and my preference to buy from a Canadian vendor, I'm still planning to wait for DisplayPort Adaptive-Sync (AMD FreeSync) compatible screens to hit the market before I really start shopping.

  5. #5
    Criminal Lawyer is a redundancy ModJPB's Avatar


    +1 that the CPU might be a bottleneck too. I have an AMD 8-core processor overclocked to 5 GHz with 2133 MHz RAM, and there are times in PlanetSide when the CPU is bottlenecking my frame rates.

    I never had any luck with auto-overclockers or premade profiles. The best way is to just get your hands dirty and do it yourself.

    Try using the MSI Afterburner program; it is updated faster and is often more compatible. After installing Afterburner, go into settings, enable all the adjustable voltages, and unlock and enable extended limits. Put the power limit up to the max, then slightly increase the voltage(s) and try overclocking. If you reach a point where it is not stable, check your temperatures; if the temperatures are good, raise the voltage more until it is stable. Keep pushing the overclock until you reach the maximum voltage and safe temperature that is still stable.

    It would be a shame to have bought the DirectCU version, which is meant for overclocking, and not be able to overclock it.

    Looking at reviews, your card with a slight overclock performs somewhere between a GTX 970 and a GTX 980.

  6. #6
    Keep honking. I'm reloading Mokona512's Avatar


    Yep. Also, never overclock the CPU from within Windows; always do it through the BIOS. It is more stable that way.
    For the GPU, MSI Afterburner is really good.

    Auto-overclockers for the CPU never work well (especially on the AMD side), as they do not take power limits into account. On some overclocking forums there have been users who blew a VRM because their auto-overclocker tried to give an FX-8350 1.6V (at that point you are pushing 300+ watts through the CPU).

    They also do not focus on stability; instead, they increase clock speeds and voltages until the system fails the power-on self-test (for the BIOS-based ones).

    Some of the OS-based ones slowly increase clock speeds and voltages until the system crashes, then fall back to the last successful multiplier + voltage combination.

    Auto-overclockers would not be so bad if the companies allowed you to limit the voltage range used, and then had the tool systematically find the highest multiplier, memory clock, and bus speed combination that still passes the power-on self-test. (While that would not guarantee stability, it would at least find the limit of the system before it fails to boot, giving you a good starting point to work down from.)


    Anyway, when doing a manual overclock, first look for reviews of the CPU and RAM to see what kinds of overclocks others are getting. When you start your own overclock, jump to around 70% of their overclock, then run Prime95 for about 30 minutes; if it passes, increase your overclock to 90% of the reviewer's and repeat. If it still passes, go up to the reviewer's overclock and stress test for at least 2 hours before attempting to go higher. When you reach a point where it fails, back down around 50-100 MHz and stress test for a full day.
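That stepping schedule works out to a simple calculation; the stock and reviewer clocks below are made-up example numbers, not from any specific review:

```python
# Compute the staged overclock targets described above: jump to ~70% of a
# reviewer's gain over stock, then 90%, then 100%, stress-testing at each
# stage before moving on. Example clocks are invented for illustration.

def oc_steps(stock_mhz, reviewer_oc_mhz, fractions=(0.70, 0.90, 1.00)):
    """Return the target clock (MHz) at each fraction of the reviewer's gain."""
    headroom = reviewer_oc_mhz - stock_mhz
    return [round(stock_mhz + f * headroom) for f in fractions]

# Stock 4000 MHz CPU, reviewer reached 4800 MHz:
print(oc_steps(4000, 4800))  # -> [4560, 4720, 4800]
```

Each target gets its Prime95 pass before moving to the next; when one fails, drop back 50-100 MHz and run the long test.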

  7. #7
    Boycott shampoo! Demand the REAL poo! ZeroThaHero's Avatar


    So, I took the time to play around with some stuff over the weekend, and did actually manage to get some improvement.

    Firstly, the CPU is definitely not the issue. Even in the most strenuous gameplay moments, no core peaked anywhere over 80%. The issue turned out to be AMD's PowerTune tech on the vid card; in the moments where I was getting frame loss on factory settings, the GPU was being downclocked by upwards of 100-150 MHz. When I set up a manual profile in Afterburner with PowerTune disabled, I was able to push the core clock to a stable 1120 MHz (reference +12%) with only +50mV applied to the VGPU. Additionally, instead of modulating the GPU clock under load, MSI AB modulated the VGPU as required, and the graph never actually flatlined at the set max.

    All of this combined netted me approximately ~20% gains in FPS, bringing the FPS range in those heavily taxing scenes I described up to a playable 60-75 FPS. The one downside so far has been the accompanying increase in required fan speeds to keep everything from cooking... over a 6h session on Saturday, I finished with average temps around ~80°C and fans at around 75%-80%, with occasional scenes nearing 90°C and fans at >90%. Previously, I was getting frame loss with temps hitting >90°C but fan speeds capped at 50%.

    Further tweaking is ongoing, and I'll likely report more developments later this week.

    Additionally, all of this OCing and temperature monitoring has led me to investigate some aftermarket cooling solutions, namely the Arctic Cooling Accelero Hybrid II and the NZXT Kraken G10. I'm a touch apprehensive about both products, for a few reasons. The first, which seems to be the more desirable product, unfortunately removes any passive airflow over the VRMs at the front of the board. The second doesn't include any solution for cooling the top of the card (although it may be compatible with the heatspreader plate that's already present on the DirectCUII?). Beyond the product-specific doubts, I'm also apprehensive about modifying my card and thereby voiding the warranty. While I'm extremely comfortable working inside PCs (I've been doing full custom builds for >15 years), I've never done anything involving 3rd-party GPU cooling. Would it be advisable to only do this type of modification with a proper anti-static mat/bracelet setup?

  8. #8
    Keep honking. I'm reloading Mokona512's Avatar


    I don't recommend products like the G10, as they do not properly cool the entire card.

    e.g., http://www.pugetsystems.com/labs/art...10-Review-527/

    Cooling VRMs from the back of a card can somewhat work, but it does not work well: these cards do not use heavy via stitching filled with solder to provide good thermal transfer through the PCB and into a backside heatsink.

  9. #9
    Keep honking. I'm reloading Mokona512's Avatar


    5 min edit limit :(

    Wanted to also add: given the pricing of the aftermarket cooling, it would be better to just buy a second video card used.

  10. #10
    Boycott shampoo! Demand the REAL poo! ZeroThaHero's Avatar


    Quote Originally Posted by AOD_Mokona512 View Post
    5 min edit limit :(

    Wanted to also add: given the pricing of the aftermarket cooling, it would be better to just buy a second video card used.
    I'm thinking the same... I'd rather not go out on a limb doing something potentially dangerous to my card :| thanks for the feedback though :)

  11. #11
    Criminal Lawyer is a redundancy ModJPB's Avatar


    The first simple thing to try is replacing the thermal paste. Most of the time, the manufacturer doesn't use the best there is. I use Antec Formula 7 because it is one of the best and I can find it on the shelf at any Staples store. A lot of people like IC Diamond, but you need to order it online. It will usually get you several degrees cooler.

    Another good option is to cut two 120mm/140mm holes in the side of your case and mount intake fans that blow directly on the card. This dropped my buddy's GPU temps by about 15°C.

    If you don't mind the noise, maybe change the fan profile to run at 100% when it hits 80°C.

  12. #12
    Boycott shampoo! Demand the REAL poo! ZeroThaHero's Avatar


    Unfortunately, reapplying the stock heatsink doesn't save me from the risk of voiding my warranty :( tamper-evident indicators and whatnot. Besides that, I've been running a fan profile that I've found quite effective at maintaining a balance between heat, noise, and performance.
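For the curious, a custom fan profile of this kind is just piecewise-linear interpolation between temperature/speed points; the points in this sketch are illustrative, not my exact curve:

```python
# Piecewise-linear fan curve: interpolate fan % between (temp_C, fan_%)
# points, clamping below the first point and above the last. The points
# here are illustrative only.
CURVE = [(40, 30), (60, 45), (75, 65), (85, 90), (95, 100)]

def fan_speed(temp_c, curve=CURVE):
    if temp_c <= curve[0][0]:
        return curve[0][1]           # below the curve: minimum speed
    if temp_c >= curve[-1][0]:
        return curve[-1][1]          # above the curve: maximum speed
    for (t0, s0), (t1, s1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            # linear interpolation within this segment
            return s0 + (s1 - s0) * (temp_c - t0) / (t1 - t0)

print(fan_speed(80))  # -> 77.5 (halfway between 65% at 75°C and 90% at 85°C)
```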

  13. #13
    Boycott shampoo! Demand the REAL poo! ZeroThaHero's Avatar

    So

    Quick Update:

    Acting on this suggestion,

    Quote Originally Posted by AOD_Mokona512 View Post
    5 min edit limit :(

    Wanted to also add: given the pricing of the aftermarket cooling, it would be better to just buy a second video card used.
    I've gotten an opportunity to purchase a used Gigabyte R9 290X WF OC for a decent price, and I'm just wondering a few things:
    1. Will I have compatibility issues with CrossFire between the two OEM vendor cards (my ASUS R9 290X Direct CUII OC, and the prospective purchase Gigabyte R9 290X Windforce OC)?
    2. If the above is true, am I able to flash the Gigabyte WF OC with the BIOS that's on the ASUS DCII OC?
    3. A quick Google search of the part number showed this thread on the first results page... should I be worried about the card? The seller and I will be meeting in person, and I'll be dropping the card into a tower to do a quick Unigine pass, so I can make sure it's at least hitting factory clocks without issues. I'd like to make sure that the handful of responses in that thread aren't overly representative of the quality and reliability of that OEM card as a whole.
    4. My motherboard has a bizarre PCIe slot configuration, with 3x PCIe x16 slots: one at 16x, one at 8x, one at 4x. The "main" slot runs at 16x but shares a bus with the secondary slot; when cards are installed in both, the mobo documentation says the bus splits the bandwidth evenly, effectively making both slots 8x. I've never implemented a multi-card setup before (and haven't run multi-GPU since my old GeForce 7950 GX2 about a decade ago), so I'm unsure if this upgrade will suffer greatly from the PCIe slot/bus configuration.
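As a rough sanity check on the x16-vs-x8 worry: the 290X is a PCIe 3.0 card, and PCIe 3.0 carries roughly 985 MB/s per lane per direction after 128b/130b encoding, so the split still leaves substantial bandwidth per card:

```python
# Approximate per-direction PCIe 3.0 bandwidth: ~985 MB/s per lane
# after 128b/130b encoding overhead.
def pcie_bw_gbs(lanes, per_lane_mbs=985):
    """Return rough one-direction bandwidth in GB/s for a given lane count."""
    return lanes * per_lane_mbs / 1000

print(pcie_bw_gbs(16))  # -> 15.76 (GB/s, full x16 slot)
print(pcie_bw_gbs(8))   # -> 7.88  (GB/s, after the x8/x8 split)
```

Contemporary PCIe scaling benchmarks generally showed only small single-digit percentage losses going from 3.0 x16 to x8 on cards of this class, so the slot split is unlikely to be the deciding factor.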


    I've managed to get my original ASUS DCII OC up to clocks of 1125/1418, gains (from the DCII OC factory 1050/1350 clocks) of roughly 7% on GPU and 5% on RAM (and +12.5%/+13.4% over reference clocks of 1000/1250). I don't have an issue bringing the DCII OC back nearer to its factory clocks, but I also need to make sure that the Gigabyte WF OC card is stable going from its factory 1040/1250 up to at least match the factory clocks on the DCII OC.
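Sanity-checking those percentages (note the RAM-vs-reference figure works out to about 13.4%):

```python
# Quick check of the clock-gain percentages quoted above.
def gain_pct(new, base):
    return round(100 * (new - base) / base, 1)

print(gain_pct(1125, 1050))  # GPU vs DCII OC factory -> 7.1
print(gain_pct(1418, 1350))  # RAM vs DCII OC factory -> 5.0
print(gain_pct(1125, 1000))  # GPU vs reference       -> 12.5
print(gain_pct(1418, 1250))  # RAM vs reference       -> 13.4
```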


 
