"As we don't do per-rail testing I don't have anything meaningful to add at this second, but it will be very interesting to see how AMD responds next week."
Why don't you do per-rail testing? It is always better [for us, consumers] to be able to get information from multiple sources so one doesn't have to take it at face value.
Short answer: we decided years ago that we'd rather run a closed case environment to accurately test how a card will behave in a case, to see how the cooling and noise levels hold up. Conversely, per-rail measurements are really only practical on an open testbed, where you can easily get at components and make changes on the fly.
Testing power draw on the PCI Express slot isn't that hard to do; you just need the right equipment and hardware. Even the guys at TecLab (in Brazil) were able to do this (in their AMD RX 480 tests) https://www.youtube.com/watch?v=nKcHR1qW3w4 If they can do it then why should Anandtech hesitate...
Seriously? They literally just wrote that it was on the external connector, and you blather on about testing from the PCIe slot. It's simple: you either use professional tools to test from what is effectively an external power supply (voiding the closed case test conditions), use consumer tools with mediocre reporting, or you simply use a closed case in a consumer setting to provide the information necessary to consumers.
Did you even watch the video? Here's a shortcut https://www.youtube.com/watch?v=nKcHR1qW3w4&t=... straight to 2 minutes 23 seconds. They did test the power draw from the PCI Express x16 slot. Here's the hint: they used a flat cable for the PCI Express signals from the slot to the graphics card, while on the power side they have thick cables. They also used an oscilloscope. That does qualify as a professional tool...
I haven't posted a comment on AnandTech in quite a few months, but your comments were able to force me to do just that.
Ryan said that they decided years ago to do the testing in a closed case, rather than an open testbed, for a variety of reasons that he listed that I'm not going to bother repeating. Read his comment.
TecLab was doing their testing in an open air testbed. It was definitely not in a closed case. That is the kind of testing AnandTech feels has several serious disadvantages.
You can disagree with AnandTech on those points... but you're just completely confused at the moment, not even realizing that you're not addressing their argument.
It's not about a closed case and/or open testbed issue. A much more detailed analysis would have been better. In the past Anandtech has provided some brilliant and detailed reviews on power consumption of various hardware, ranging from PC-based chips (like CPUs) to mobile SoCs. IMHO perhaps Anandtech should step it up a notch, because other review sites look like they have stepped up their game. Here's a tester who encountered problems with AMD's latest GPU https://www.youtube.com/watch?v=rhjC_8ai7QA - even though his AM2 motherboard was old, it was able to run a GTX 980 Ti, yet it kept shutting down with the RX 480. There was definitely an issue, and if Toms Hardware, PC Perspective and others hadn't performed those detailed tests then we would not have guessed the source of the problem...
You're negating the effects that heat has on components, and it won't represent a realistic real-world scenario. I get what you mean and you do have a point, it's just not in line with the test Anandtech wants to do.
I'm sure it's possible to build a setup so this can be done inside a case.
Anandtech could have performed dual testing, closed case and open testbed. It's not hard to do, plus it's quite cheap (using PCI Express risers like some of the reviewers did, for example http://ht4u.net/reviews/2016/amd_radeon_rx_480_rev... ). Furthermore, open testbed testing can be relevant to some people, like bitcoin miners. Alas, it's too late for some bitcoin miners https://bitcointalk.org/index.php?topic=1433925.ms... "No, it's an Asus P7P55-LX, it was the 1st rig I built. Ran for 3 years with 3 280x and non-powered risers. 6 hours with the 480s and poof!!" The total load from 3x AMD RX 480s fried the mainboard's ATX power connector (instead of the PCI Express x16 slot). Looks like using more than a single AMD RX 480 (on lower end mainboards) could quickly increase the likelihood of mainboard failure. Youtuber and reviewer (also fanboy) JokerSlunt has tested that also https://twitter.com/JokerSlunt/status/748909470382... quote "I got my first black screen power crash today using two 480s on an Asus X99 board"...
There is always a limited number of press kits with cards. They have limited time for testing before they ship it away, unless they are doing roundup testing, for which hardware is usually lent by distributors, not manufacturers (at least in my country).
That said, both consumer-oriented closed case testing and open testbed testing have their value. The first appeals to 98% of users, the latter to those who will either run in an open case or want to check how a card will behave with a waterblock slapped on. They made a decision; that is reason enough for AnandTech to test it that way.
Most test rigs reviewers use have an expensive (well built) high end mainboard, so the problem may not be apparent. However, for users with cheap lower end and/or older mainboards, this detailed extra information may be important. Just guess how many of those "98% of users" are using cheaper low end and/or older mainboards.
However, there is a good reason why most miners use powered risers, especially with multiple cards. x1 PCIE is only rated at 25W, so if you have a board with two x16 slots and four x1 slots and are running six cards, you could be asking the board to supply 450W (75W for each card times 6). No board is designed to supply that much, and even ATX24 with an 8-pin CPU connector is not really designed for that. Of course this guy only had three cards, but the max that board might be rated at is 150W (for the two PCIe x16 slots) plus 50W (for the two PCIE x1 slots) for a total of 200W, whereas three x16 cards running at spec (which the RX480 might not be, of course) would exceed that. Powered 3.0 risers at under $5 each would have saved that mobo. The guy was just lucky with his three 280X, as I would guess they draw most of their current from the PCIE power connectors, not the slot. If he had been running a different card, he'd probably have fried his mobo years ago.
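To make that budget math concrete, here's a minimal sketch of the arithmetic being described. The per-slot wattage figures are this comment's assumptions (the next reply disputes the 25W x1 figure), not measured values:

```python
# Back-of-envelope slot power budget, using this comment's assumptions:
# a card may pull up to 75W through its slot, while the board is only
# built to deliver ~75W per x16 slot and ~25W per x1 slot.
SLOT_BUDGET_W = {"x16": 75, "x1": 25}
CARD_SLOT_DRAW_W = 75  # worst-case per-card draw through the slot

def budget_vs_demand(board_slots, cards):
    supply = sum(SLOT_BUDGET_W[s] for s in board_slots)  # whole-board budget
    demand = CARD_SLOT_DRAW_W * cards                    # worst-case slot draw
    return supply, demand

# Six cards on a two-x16/four-x1 board, no powered risers:
print(budget_vs_demand(["x16"] * 2 + ["x1"] * 4, cards=6))  # (250, 450)
# The three-card case above (two x16 + two x1 board):
print(budget_vs_demand(["x16"] * 2 + ["x1"] * 2, cards=3))  # (200, 225)
```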
That PCI Express x1 slot is capable of supplying 75W, since it has the same power rail pins as a PCI Express x16 slot. The PCI Express Card Electromechanical Specification does mention that a PCI Express x1 card can draw up to a 75W limit. From https://pcisig.com/sites/default/files/specificati... quote: "A x1 standard height, full-length card is limited to a 10W maximum power dissipation at initial power up. When the card is configured for high power, by default, it must not exceed a 25W maximum power dissipation or optionally it must not exceed a 75W maximum power dissipation. A x4/x8 or a x16 standard height or low profile card is limited to a 25W maximum power dissipation at initial power up. When a card is configured for high power, it must not exceed a 75W maximum power dissipation."
This. Closed case vs. open testbed is relevant if you're writing about cooling, not power. They could test power this way to see what it's drawing at load and at idle.
What nobody seems to mention in these news posts is that several consumers have fried their motherboards' PCI-E slots with the RX 480. It's all over Reddit.
Ryan I can totally appreciate your position, but given this situation it might be time to consider making the investment? Come on you know you want a bench case anyway!
I totally don't buy that excuse. When did the addition of current sensors become rocket science again? If you ask a mainboard manufacturer nicely they might even do the modification for you...
Testing in a closed case is not incompatible with power rail testing. Put the card on a flex riser to give easy access to the power traces, and tap them there. The card itself remains inside the case, just in a lower slot. Same with the PEG power leads, which are much easier to splice into. Place testing equipment outside the case, routing test leads through a drive bay or PCIe slot blanking plate.
Or you could modify the board to place current sensors into the power rails, even current sensors on top of power carrying traces might work depending on the board layout.
So another 5 minutes to hook up the equipment and do a power draw test on a benchmark is too much of a hassle for someone whose main job is to do exactly that?! Now I know how the GTX 970 3.5GB issue slipped by without anyone noticing.
I agree that testing in a case is the proper way to test for a formal review, as that is how end users will be utilizing the cards. However, I would consider investigating this issue to be more of an exception, as it explores a very specific issue that needs a very specific test setup. I'd be interested in Anandtech running the same type of testing that measures power consumption at the slot level as another comparison point.
Not just Toms Hardware; PC Perspective as well. Listen to their explanation here https://www.youtube.com/watch?v=ZjAlrGzHAkI Perhaps Anandtech should start similar testing with graphics cards in the future...
Within the comments for the RX 480 preview, Ryan Smith said "The full RX 480 review will be in a few days. The full GTX 1080 review will be a couple of days after that. RX 480 would have been a full review today, but I managed to slice myself with the RX 480 on Tuesday and needed to resolve that first..."
Seems like they want to push out the RX 480 review before the 1080 review. Seems fishy to me.
They released 'previews' of both cards on day 1. Good try though. If you are coming to Anandtech for day 1 reviews you are doing it wrong. You generally get the most comprehensive review of any site a week or two post launch
The red cult is EVERYwhere, and inescapable :). Unfortunately over the years, their fanaticism has damaged AMD's brand more than a hundred such technical snafus combined. But then, that's what AMD's us-vs-them marketing has sown.
This comment is very true... some of the stuff I read in forums, with users talking about Intel backdoors in CPUs, etc., is just nonsense... the fanboyism gets out of hand... whatever happened to buying whatever is best for your money?
The opinions of forum commenters say nothing about the bias of the site. Can you cite anything specific where you believe AMD was given preferential treatment or NVIDIA was treated unfairly?
The forum moderators enforce the AMD bias. The site itself was heavily funded in the past by AMD which (combined with the forums) has given many people the impression that Anandtech as a whole is biased. However Anandtech was sold many years ago to the same company that owns Tom's Hardware and I doubt they have any bias of the sort. Why the forums are maintained in such an awful way is beyond me. As I said the reviews are generally pretty good here although it's a shame there are not as many as there used to be.
I explained above. I don't believe the site or Ryan are biased. The forums are very much so. I assume they are run separately from the site. Reading the circlejerk in the forums is entertaining at least.
Considering your posting habits, I could see where you might get that impression. You are brand loyal and constantly trashing AMD, so you focus in on responders, and sometimes even set the tone of the entire discussion.
Most here are not biased though (from what I can tell). They're hardware enthusiasts, not fanboys.
I, for the life of me, will never understand this "brand loyal" nonsense... I have owned many AMD setups back in the day when they were competitive... I built an AMD system for a friend who wanted custom but didn't want to pay a lot a few years back... I have owned many Nvidia video cards... today, I waited for the AMD RX 480 because I can't justify in my head spending $700 for a video card that will likely only be used to play one or two games over the next few years. People should just buy whatever is available to them at the best price point... I'm hoping AMD's Zen finally helps put the company back on track after so many years of disappointing CPU architectures.
Sorry but you've got no credibility due to your well known zealotry.
You were banned from these forums though, right? For intentionally misquoting people so you could try and win your pathetic fanboy disputes? Bit sad that you're still looking through the windows but aren't allowed back in. This is one of those cases where the problem is very much you.
I've heard many things in my life, but none as detached from reality as calling Anandtech "AMD biased". Also this is much ado about nothing, turns out it was just a bunch of trolls on reddit, spreading misinformation because Tom's Hardware didn't realize that the RX480 is drawing as much as a GTX960 from the PCIe.
No motherboards have fried, nor will they; this is perfectly normal. The only practical issue is that older motherboards (pre-2006) may shut down the computer under heavy load.
AMD's RX 480 is worse than the GTX 960, and PC Perspective explains the differences here http://www.pcper.com/reviews/Graphics-Cards/Power-... or better yet watch their highly detailed explanation (which includes the GTX 960) here https://www.youtube.com/watch?v=ZjAlrGzHAkI As for fried mainboards, there are already quite a few, especially this one from a bitcoin miner https://bitcointalk.org/index.php?topic=1433925.ms... quote "No, it's an Asus P7P55-LX, it was the 1st rig I built. Ran for 3 years with 3 280x and non-powered risers. 6 hours with the 480s and poof!!" Using 3x AMD RX 480s overloaded his mainboard's 24-pin ATX power connector...
I find it hard to believe AMD did not notice this before releasing the cards; this is, after all, something that is not a part of the "normal" reviewing process. If not for Tom's, who knows how long until this issue would have been noticed. In the quote above AMD states "we identified select scenarios where the tuning of SOME RX 480 boards was not optimal." I find it hard to believe that this issue is only affecting certain boards... Can that be the case? I'm sure this will be a non-issue with AIB cards, but as someone who has already purchased a reference RX 480, I am very concerned about this. I have an Asus Sabertooth board; is a board like that less vulnerable to this issue?
What I meant by it not being a part of the normal reviewing process is that maybe AMD did not expect anyone to catch it, or at least not this quickly, especially before AIB cards were out. After AIB cards are released I'm sure most people will downplay the issue, because I'm sure those will vastly outsell the reference cards, but that does not mean the issue can be ignored. I don't see AMD having too many options regarding this issue. A) They power throttle the card, costing performance, which will be a huge negative. I know I for one would probably not keep the card if it's going to perform any less, because I don't see it being an insignificant amount. (Hopefully I'm wrong.) B) They move more current draw from the slot to the 6-pin, in which case the card will still not be compliant with the spec. Though I admit I for one would not mind this solution, knowing how "over-built" those connectors are anyway.
Oddly enough I'm seeing a lot of reports about undervolting the 480 and actually reducing the power draw and increasing performance because the power/temp limit (one of them) was no longer hit. I wonder if it would be possible to do better on efficiency by dropping the voltages at stock.
The Sabertooths are beasts when it comes to power management. You have nothing to worry about. Keep in mind that the PCIE issue exists mostly for anyone using a PCIE 1.0 slot or a cheap modern motherboard. Even though modern mobos use 2.0 and 3.0 slots they can and will cut costs by taking out the feature that allows them to have higher wattage pulled through them.
AMD made a couple of gross miscalculations for their target audience.
Thanks. I was just talking with a friend and maybe someone here can also help. Is there any difference between PCIe 2.0 vs 3.0 when it comes to power delivery specifications? I have a friend using AMD 990fx and so has a 2.0 slot and has ordered a reference card, any potential issues he needs to worry about?
Nope. Power delivery has been unchanged since 1.0: 25W as the baseline for everything except 1x low profile and (optionally) 16x cards, consisting of up to 3A @ 3.3V (10W) and/or 2.1A @ 12V (25W). Full height 16x cards can optionally do 75W, made up of up to 3A @ 3.3V and 5.5A @ 12V (66W). The 3A @ 3.3V part of the spec is a holdover from the legacy PCI days, and was intended to make converting existing designs easier by just adding a PCI-PCIe 1x bridge and not requiring any major hardware changes.
Beyond that there are additional limits in the spec for low profile 1x and 16x cards. Low profile 1x cards are limited to up to 3A @ 3.3V and up to 0.5A @ 12V. Low profile 16x cards are supposed to be limited to 25W. I'm assuming these were added to the spec to allow OEMs to limit the size of the PSUs (and the thickness of the copper used in the power planes of the mobo itself) they put in small form factor systems, without having to worry about them being overloaded. TTBOMK there's not any way to enforce the lower limits, and I suspect a lot of higher performing low profile cards are massively out of spec; e.g. the GT 740 has a nominal 64W TDP, while the GT 730 and R7 240 (Newegg doesn't list any newer AMD low profile cards) are both above 30W (the 730 by a significant amount). It might have something to do with most of the nVidia cards being from Zotac and other low tier OEMs, if EVGA/MSI/ASUS/etc. weren't willing to risk the wrath of the spec committee gods by violating the spec that severely.
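Pulling the limits from these two comments into one place, here's a small sketch; it's a simplified encoding of what was quoted above, not an authoritative copy of the PCI-SIG CEM tables:

```python
# Simplified encoding of the PCIe CEM slot limits as quoted/described above.
# Tuples are (initial power-up W, default high-power W, optional max W).
# Consult the actual PCI-SIG CEM specification for authoritative values.
SLOT_LIMITS_W = {
    "x1 full height":  (10, 25, 75),  # 75W only in the optional high-power config
    "x1 low profile":  (10, 25, 25),  # 3A @ 3.3V plus 0.5A @ 12V
    "x4/x8":           (25, 75, 75),
    "x16 full height": (25, 75, 75),  # 3A @ 3.3V (~10W) + 5.5A @ 12V (66W)
    "x16 low profile": (25, 25, 25),
}

def max_slot_watts(form_factor: str, configured_high_power: bool = True) -> int:
    initial, _default, optional_max = SLOT_LIMITS_W[form_factor]
    return optional_max if configured_high_power else initial

print(max_slot_watts("x16 full height"))       # 75
print(max_slot_watts("x1 full height", False)) # 10
```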
So what you are saying is, it's the mobo manufacturers responsibility to overspec their boards so they can accommodate the out-of-spec PCIE power draw of the RX 480?
Did you even think about how ridiculous you sound?
How about if somebody runs over your car with a tank and he says "lol, your fault for not using an armored vehicle"?
The way mutantmagnet phrased it is rather strange, so I won't defend his statement, but yours is also ridiculous. In engineering, you *never* design things to precisely meet the spec. If you do, your work is destined to fail. You have to overengineer things by some margin, a safety buffer, for exactly when weird things happen, which they will. "Defensive design" is one term for it.
It doesn't justify AMD using the safety margin for their own ends, and they're planning to roll out a software update to fix this issue by the sounds of things, but it is unlikely to actually damage any but the cheapest of motherboards.
Wow, so the Asrock Extreme Gaming4 mobo can't run overclocked GPUs. A likely story. What's with all these tech-wienies and little girls freaking out over exceeding the PCIe spec? Here's some news for you, everything from the GTX 960 to just about every OCd GPU out there violates the PCIe spec - and you believe a gaming motherboard can't handle that.
If you ever overclocked, or used an overclocked GPU, you probably went way over the PCIe spec. Mobo manufacturers for the last 10 years at least have designed gaming mobos for OCd GPUs.
So duh. I've been running 2x OCd 7950s for 3 years, both sucking way more from the PCIe than the spec "allows", because: overclock.
No power issues, motherboards completely unaffected and working as intended. The PCIe standard is ignored every day by users all over the world with all sorts of GPUs and it just doesn't matter.
The only thing ridiculous is your ability to jump to conclusions. I was mostly pointing out that you now have to take into account what motherboard you're using if you decide to use a reference 480.
Everything else you stated is based on your imagination.
I actually read quite a bit about what's going on and there are a few things to note about it.
1) It's not actually an issue with the motherboard itself, as long as it's a good motherboard. Where the problem lies is the actual physical PCIe connector. If you have a motherboard which is Crossfire/SLI rated then you should be fine with a single card. The max spec is 75W per slot. Thus a SLI/Crossfire setup could theoretically pull 150W through the motherboard, causing damage, -but- since it's only pulling in the range of 80-90W, the limit you're hitting is the single slot pulling the power.
2) The '90 watt' number being reported is in general a -spike-, as in a single point based on the resolution of the oscilloscope doing the reading (many samples a second). If you've watched the power delivery of a GPU at fine resolution, it's all over the place, up and down. What's important is the average. If you look at the average, then yes, it's running a bit high, but not drastically so (maybe 5W), while a spike is what causes the 90W number. Should it be that way? Probably not; most cards draw 30-50W through the slot and the remainder through the PEG. (A quick numeric sketch of this spike-vs-average distinction follows after point 3.)
3) Can they issue a software fix? Hopefully. If they can issue a fix (either VBIOS or driver) which changes the balance of power to pull more through the PEG and less through the slot, that would be better. Yes, it's temporarily putting more power through the PEG, but if you look at actual wire capacities, an 18 gauge conductor can flow 10A @ 12V and 16 gauge can flow 20A @ 12V. That means a 6-pin 18AWG PEG (12V x 2, gnd x 3) is capable of a combined wattage of 12V * (10A + 10A) = 240W. Why it's only rated for 30% of its potential capacity is beyond me, unless the physical connection is the limiter. Still, even if they went with 50% of wire capacity, it should easily handle 120W.
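As flagged under point 2, here's a quick numeric sketch of points 2 and 3. The sample values are invented purely for illustration (they are not real scope captures), and whether the RX 480's readings were really spikes or a sustained average is exactly what the reply below disputes:

```python
# Point 2: a scope reports instantaneous samples, so one 90W sample is not
# the same as a sustained 90W draw. Sample values invented for illustration.
samples_w = [68, 72, 90, 65, 74, 88, 70, 66, 81, 71]

peak = max(samples_w)                      # the headline-grabbing number
average = sum(samples_w) / len(samples_w)  # what actually heats traces over time
print(f"peak {peak} W, average {average:.1f} W")  # peak 90 W, average 74.5 W

# Point 3: raw wire capacity of a 6-pin PEG with two 12V conductors at
# ~10A each (18AWG) dwarfs the connector's nominal 75W rating.
peg_wire_capacity_w = 12 * (10 + 10)
print(f"6-pin wire capacity ~{peg_wire_capacity_w} W vs 75 W rating")
```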
1. How many slots you have is irrelevant; there are 5 pins/traces in the PCIe slot that carry the power, and if all those pins/traces are overloaded on the motherboard by 30-40% for long periods of time, they're going to pop, or something around them is going to pop.
2. In PCPer's OC'd results it's running at 80-90 watts constantly. It's not a spike; the limit is 66 watts.
3. That's most likely the easiest solution, but it's still technically out of spec on the 6-pin connector; AIB partners will probably remedy this with an 8-pin connector. ATI should just scrap the reference design.
Your friend should worry about brand / model / retail price originally of his board. 990fx isn't enough to go on.
i.e.: Asus Sabertooth is fine. My M5A97 R2.0 has some quirks but has a lot of power controls and protections. MSI gaming boards and some Gigabyte gaming boards should also be fine. I only trust those three anyway, with MSI and Asus at the top, then Gigabyte, then ASRock as a very, very last resort. ASRock is okay for budget focused builds, but you have to be sure it's one of their highest end gaming builds, and even then I might not trust it. The only two boards I've heard of having issues so far are a Foxconn AM2 OEM board and some ASRock board (they didn't say which), but it had multiple PCI-E slots and burned them all out.
A beta driver for the 480 was just released, which will give a consumer more fine-tuned control until AMD updates the BIOS to run a bit differently, for those who paid under $80 for their mobos.
The card isn't even very efficient to begin with... And I'm not one to complain about that when the performance is there (after all, I had CF 6950 and currently have CF R9 290), but the 480 is basically on a bang/buck tightrope as it is.
Better, although, roughly in that ballpark. From nVidia's own (shitty FUDdy skewed scale) slides, 1060 is about 40% better at perf/w. But again, that's at "reference" 480 clock.
Note that even vs. AMD's previous gen, that's only about a 1.8x improvement, while Raja was claiming that 1.7x comes from the process shrink, and they pushed the claim further up to 2.8x by "optimizing architecture".
The slide that shows that number mentions it's the 470 vs the 270X, though.
Yeah it's interesting to see how AMD went for a very cheap reference card, where Nvidia went with higher grade designs. It doesn't make for a good PR comparison in the beginning with both cards being released so close to each other...
With Polaris unable to compete with GP 104 on performance they had to aim for the low price high volume segment of the market. At the same time I suspect the problematic power draw levels from the 480s suggests that they were expecting the reference design to run somewhat lower power, but either didn't quite get the perf/watt they were hoping for or decided to bump clocks when they saw where nVidia's cards were performing.
Hopefully they can adjust the relative power draw via a software update to clamp the max drawn through the mobo (the wiring in the PCIe cable is able to safely deliver a good amount of power beyond the nominal 75W rating) and limit the reputational damage. If they can do so the higher price points nVidia's going after with the 1060 should continue to protect AMD from direct competition for a bit longer. OTOH a cut down GP-106 will probably be launching under the GT 1050 brand before much longer and should be able to match the price and performance of the RX480 fairly closely.
Maxwell is terrible in DX12, which all future games will use. Also, efficiency doesn't matter, just price and performance. A GPU's job is running games, not being the most efficient. In fact, efficiency is really speed vs. price; there is no power efficiency that matters, since power draw does not matter to the end user.
Nvidia got destroyed here; they need to get a Pascal card out quickly in this price range, as they've already lost a gigantic amount of market share. Unfortunately, if rumors are true, the 1060 has terrible efficiency, as it's rumored to be a worthless 3GB card at $249, and $299 for the viable 6GB card. As you can see, AMD is far more efficient.
The rumor mill is talking about two different GP106 variants, one with 3gb of ram and the other with 6gb. Some are claiming that the 3gb model will launch as a $250 GT1060 with the 6gb model as a $300 variant. Others that the GT1060 will only come in 6gb models. Presumably in the latter case the 3GB model will be used for the GT1050.
If the 6GB model is only available at the $300 price point (and ofc assuming the 3gb one ends up bottlenecked by less VRAM), it'd be 50% more expensive than the RX480 for only about 15% -25% more performance (according to what are alleged to be leaked nVidia slides). If that's the case the GT1060 would be seriously overpriced for the relative performance gain it offers.
I'm hoping that they'll all be 6gb cards and the $250/300 price spread is custom cooled vs founders edition variants. This seems most likely to me; $250 for a 3GB card seems seriously overpriced today, when the 950 was available in a 4gb config for well under $200 a year ago.
I thought we were talking about efficiency, not value? I.e. the ability of the 480 to do what it promised within its power budget -- in particular, within the spec of the power supply connections.
Yep, end users don't care about power draw. That's why the GTX 750 Ti was the most bought card in the world from its release till sometime this year. Oh wait....
Oh, i don't think it has much to do with power draw. Most users don't even know what that is. You're forgetting how most components/computers are acquired.
Unfortunately with your last word you proved you are not civilized enough to have a conversation. Which also undermines your arguments somewhat. Pity...
The 750ti was popular because you could put one in any system regardless of the PSU because it didn't need external power connectors. That is literally the only reason. Not that it was a bad card, but that is why it sold so well and for so long.
I love it when desperate fanatics like you try to feed us the same AMD marketing bullshit only you gobble. We want to play here and now, honey, and AMD just doesn`t deliver.
Huh? RX 480 is literally TWICE as fast as the GTX 960, Nvidia's only competition in the 199-249 range...kinda weird to say it's walking a bang/buck tightrope
The 1060 will change all that, but it isn't out yet. Also, if Nvidia tries to get away with something like $249 for the 3GB 1060, which shouldn't even exist... then they are a joke. And knowing Nvidia, you know they have a good chance of trying to do that (and their hordes of fanboys will pay more for less product just to help Nvidia line its coffers; it's happened many times in the past).
Also I find all the sudden concern about efficiency funny. Pretty clear all the fanboys are suddenly so worried about efficiency when Nvidia has an edge there. A GPU's job is to run games not conserve watts. If you want to conserve watts go run an IGP, it'll blow away any Nvidia card in conserving watts.
As long as a card is under 300 watts (just because that seems to be the reasonable limit) I really couldn't care less how many watts it uses; it doesn't affect me in the slightest (assuming the cooling is good and the fans are quiet, which is a matter of engineering). What matters is price and performance, and the RX 480 blew away Nvidia there, until further notice.
You have absolutely no idea what's going on or what they're talking about in this article, do you?
I'll drop a hint for your convenience... this is NOT about power efficiency at all. This is about poor load-management, which can result in this GPU unevenly stressing the system to the point where your hardware, mainly the motherboard, can get damaged in such a way that you would not be able to claim your warranty.
Now if you wish to continue your ranting, please go right ahead - it may not quite help your agenda, but at least it is entertaining.
@D. Lister You should read the previous comments, the discussion derailed towards this subject. Please do keep your composure, you sound awful when you're heated up.
(wow, sounds like a regular reader... how flattering :D)
You're too kind, I can actually appear to be awful even when I'm not heated up. What is important is that we recognize our failings and work on them. :P
"You should read the previous comments, the discussion derailed towards this subject."
Oh I'm sorry, didn't mean to stumble into the path of your attempted derailment of the topic. Please go right ahead. I will try to keep out of your way.
That's somewhat true, but not to the same extent. It's true in terms of maintaining strict integrity of the data collected but it's not true in such a direct causal relationship as in this case. It's an important consideration that this is an unscheduled update that is being made less than a week after the launch of the card as a direct result of a problem with the card, and that there is reason to believe said problem would inflate the benchmarks of the card. A key word there is inflate. Because the performance change isn't the result of an optimization but rather a mis-optimization. AMD is making a change in the optimization for safety/compliance reasons. Benchmarks taken under unsafe/noncompliant conditions should be invalid.
Another more sinister way of looking at it providing another reason to re-benchmark: I'm not accusing AMD of purposefully pushing the card beyond specs for the purpose of benchmarking, but it's easy to see how a company could do exactly that and then later bring the card back to safe performance after the benchmarking period is over. Then, for the reason that it's an unusual situation and it could be used in such a way, benchmarks should really be re-run as a matter of good journalism.
AMD's death throes. AMD will be purchased by another company within the next few years. They may not even produce new drivers anymore after that. Be careful before buying AMD.
Apple already seems to be exclusively AMD for the Mac. Which is a crying shame, because I really don't like AMD graphics chips. ALL of the Macs at work that we bought in the 2011-2012 time frame with AMD chips in them have died. All the Macs with Intel-only or Nvidia graphics are still going strong. Waiting on Apple to update the MacBook Pro... but son-of-a-bitch, they are taking their sweet time. I'm convinced now that Tim Cook is as bad for Mac hardware as John Sculley was. The Mac went from being the tech darling of mobile computing to a laggard. How on Earth did this happen... Apple?
I'm sure Apple would be using Nvidia if they weren't in a contract to use AMD GPUs. Apple, at least on the handheld mobile side, has always targeted great performance as well as high efficiency simultaneously.
Yeah, it's pretty depressing. Particularly since in an extra dose of irony Nvidia has actually been pretty damn good about keeping up with Mac support, even though it's basically exclusively for the very, very tiny and dwindling niche of folks still using tower Mac Pros. But if you've got a 2009-2012 Mac Pro, you can install the latest Nvidia web drivers and then a 980 or Titan or whatever (I don't think they've added 1070/1080 yet, though it works under Windows) and actually have a more powerful graphics system then anything Apple has shipped in the following 6 years. What a shame.
I agree with you that Tim Cook has been really mediocre for the Mac. I'm not mad about them spending a lot of time and effort on iOS, but it shouldn't be a zero sum game with their resources. It's particularly frustrating when they've gone to extra effort to make something worse, like with the trashcan Mac Pro. If they'd literally taken the laziest, most highly profitable approach and just stuck with the same ancient case and merely bumped the internals it would have been a ton better and still made them more money too.
More than irritating, it's just plain confusing. We're all used to companies sometimes needing (or at least choosing) to make hard business decisions for the sake of money that we don't like, and that's just part of the markets. It's not fun, but the logic is clear. But it's a lot more frustrating when companies seem to go for own-goals and choose to make moves that are both bad *and* make no business sense. Rather than "argh" it becomes "!?!?!?!!??!?".
Borderline? AMD vs. Nvidia is the most tiresome troll bait in the tech world, having replaced Apple bashing now that Apple is powerful, rich, and ubiquitous.
The forum has a lot of doom and gloom "sky is falling" crud constantly posted about AMD and in the comments here it's usually more of the same.
The religious devotion to corporations is lame. Corporate staff changes frequently and people act like a corporation is an immutable deity to be worshiped or sided against.
I realize some of this is investors and employee astroturf but it's really boring. If only we could see the same level of passion go toward getting better-quality games, films, and other entertainment. I have a difficult time getting worked up about graphics cards since I don't even think most of the games are worth playing.
People are sick and tired of AMD doing criminally stupid things and constantly under delivering on their astronomical hype. It's thanks to AMDs incompetence that the PC market is what it is right now, with Intel actually RAISING the price of their second tier extreme edition CPU compared to the previous generation first tier as well as dropping a $1,700 consumer CPU, and NVIDIA easily being able to move $1,000 consumer GPUs and make a killing on a $700 mid-range chip that they could sell at $400 and make a large profit.
People enjoy watching AMD fail because they want them to die and go away so a company that isn't incompetent can take over and challenge these pseudo-monopolies. On top of that, AMD's hilariously delusional fanboys make hating them even easier.
People are sick and tired of Nvidia doing criminal things, anti-consumer behavior and good grief hearing about their honestly quite mediocre GPUs being hailed as awesome by kids spending their mommy's money is just eye-rollingly annoying.
This is very true. I wanted AMD to succeed for the longest time but now having seen them utterly fail over and over again I want someone to bury their executives in a cave, seize all their money, and hire a new board.
It is obvious to anyone who is not irrationally and hopelessly optimistic about AMD. Anyone with a neutral perspective who analyzes AMD using numbers and not emotions can see how poorly AMD is doing and has been doing for decades. AMD has made no important changes to corporate strategy or leadership and does not seem to plan such changes in the near future. Their downfall is all but inevitable. When the market was at its height, those who started warning about the upcoming subprime mortgage crisis were also dismissed as "trolls," but how right they were.
Yeah, it's a shame we won't see a MacBook Pro with Pascal anytime soon since that would have awesome battery life, but Polaris is the next best thing at this point. Maybe Apple likes their compute prowess for some reason too.
Slight undervolt, reduce power limit of the card so it hits max boost slightly less often. Strict compliance will be assured but at the cost of maybe 2-5% less performance.
I highly suspect the 8GB of fast GDDR5 is the culprit. They should have stayed with 4GB only. No need for 8GB with the resolutions this card can manage.
It COULD fry a cheap motherboard, or even a decent one if overclocked. The CPU getting damaged is unlikely. Although I wouldn't want to put an SSD in such a system without making sure the SSD has power failure protection. Still, AMD says it will be fixed in a couple of days, so let's see how it turns out.
Tbh AMD made a terrible decision with the 6-pin power plug... If they had just fitted an 8-pin and said 150W average with peaks under stress at 170W, all would be fine; at least they wouldn't be overloading other components.
An 8-pin connector would certainly be welcome, especially for a stable OC, but all by itself it won't be able to solve the 480's problem.
The core issue here is not a new one for AMD - bad software, in this case, at BIOS level. They (AMD) are hinting that a simple driver patch would fix this, but they may eventually have to offer a new BIOS version altogether, with proper voltage distribution and more pragmatic speeds.
At the end of the day, the 480 could actually end up exceeding Maxwell in performance/watt (irony), while having to settle somewhere between the 960 and the 970 in raw performance. The questions that are bothering me at this point are: how much of the 480's MSRP would have to be sacrificed over this? And what impact that sacrifice would have on RTG's future plans?
Let's think about it... Unless there is a BIOS patch that rewrites the power profiles, the drivers themselves won't be able to do that much. The likely scenario is lowering max frequencies, but I am not confident that would take 15W out of the budget, so maybe we are looking at a combination of slightly reduced voltages and frequencies. This is probably the least hurtful scenario.

The question remains, though: why did they draw that much over PCIe in the first place? Yep, the 6-pin is capped at 75W, but practically all PSUs today have a single rail for the whole 12V supply; unless you had a very bad one, 5W more per such a big pin would hardly be a stress, even when it is above spec. OTOH, I am sure I wouldn't want that overload running over my motherboard. It's probably within the safety margin, but we are talking about long term continuous stress, which is asking for trouble. And honestly, what does the average gamer do when he gets a shiny new toy to play with? Puts it in, takes a vacation and games for a week in a row - long term stress provided :-)

I still wonder how the card passed the specification certification tests. What worries me is the chance that the excessive power load was driven through the PCIe slot intentionally, since it's much harder to detect there than on a simple 6/8-pin power cord - AMD knowing about the issue and sneaking it in with "we'll fix it with the next driver and they'll never notice".
"The question remains though... why did they take that much over PCIe in the first place... yep, 6pin is 75W capped, but practically majority of PSUs today have single rail for whole 12V, unless you had a very bad one, 5W more per such a big pin would be hardly a stress, even when it is above specs."
Of course, the 6p/8p PEG cables take the power straight from the PSU and their (comparatively) much thicker twisted copper wires have a fairly massive headroom of hundreds of watts. The PCI express slots are primarily a data transfer interface, and take their power from the very fine traces printed on the motherboard. These parts are not made for sustaining such electrical abuse, and every hardware manufacturer is expected to know this very basic stuff.
"I still wonder how they passed specification tests certification for the card."
There are always loopholes to be found to circumvent standards and regulations, if one is motivated enough to look for them. Hopefully there would be necessary amendments and added vigilance, to ensure nobody tries something like this again.
"What worries me is the chance the excessive power load ran via PCIe was driven through there intentionally"
The only rational alternative is that engineers at AMD are unaware of the basics of electronics, which is far less likely. Probably the seriousness of consequences of this design choice didn't travel up the chain of command with the necessary impact.
"AMD knowing about the issue and sneaking it in with "we'll fix with next driver and they'll never notice""
...or, "we'll gradually normalize it in the later production batches, and by the time there is a noticeable difference in framerates, Vega would be ready for launch and the voices of the naysayers would go unheard in the festivities." Besides, even if someone pointed to a performance decrease and didn't get immediately destroyed by the fanboys, AMD could always find a scapegoat, like some Windows update, or developers favoring Nvidia, or this or that or the freakin' aliens. :(
Any type of "fix" for this would need to reduce performance right? To lower the power draw somewhat.
Is it safe to say that AMD is somewhat cheating their power numbers to make them look more efficient?
I believe it's safe to say nVidia is doing the same when their manufacturers list their cards (GTX 970 and 980) with only their reference power usage, when they are in fact highly overclocked cards using multiple power connectors and drawing much, much more.
My assumption is that fixing a problem with transient spikes is hardly in the same league as the 970 VRAMgate.
The market rewarded Nvidia for its cheating by making the 970 a best-seller. If people get upset about corporations cheating their customers they should look in the mirror.
To this day, Nvidia still does not list the proper specifications for the 970 on its website.
"My assumption is that fixing a problem with transient spikes is hardly in the same league as the 970 VRAMgate."
Your statement is correct, although the problem here isn't transient spikes, but a sustained overload. Still, unlike that 970 nonsense by Nvidia, this is fixable, at a slight performance cost.
The VRAM situation of the 970 had no effect on the performance of the card. The benchmark results published by review sites were all accurate and true. After the issue came to light, despite much searching, no one could verify instances under playable frame rates where the card was hamstrung by its memory configuration. Why? Because the memory system of the card, as it was, was engineered to be that way, and the card was well-engineered and balanced. It wasn't memory bandwidth or memory capacity bound under playable conditions. The only error was not properly informing the review sites of the memory architecture.
This RX 480 power draw controversy is a completely different, and much more serious, situation. AMD also failed to notify people of the situation, a situation which includes the total power draw of the card exceeding the TDP for extended periods of time and the power draw through the PCI-E slot exceeding the specifications of the standard (whether they, AMD, knew of the situation or not). However, the situation is both potentially damaging to a user's system and potentially invalidates the published benchmarks of the card. We'll know more in the next few days how much of an issue the latter is, but we already know the former is a concern.
"The VRAM situation of the 970 had no effect on the performance of the card."
My point was merely that this issue, problematic or not, was not something that could be sorted out with a patch, without redesigning the chip, or at least its memory subsystem. Which I suppose they eventually did, with Pascal.
"This RX 480 power draw controversy is a completely different, and much more serious, situation. "
But the 970 didn't have a problem at all. The only mistake made was a misrepresentation of the specs handed out in the press packet. If the correct information had been handed out people would have said "oh, that's clever" and hardly anyone would have given a shit.
If there was a similar problem with nVidia cards it would've been reported by now, just like the '3.5GB' debacle was discovered and shown to the entire world.
As I recall, it took several months for the enthusiast community to discover the problem. Meanwhile, Nvidia raked in cash from "4 GB" 970 sales. It seemed to be a particularly good value for SLI at the time since it was supposed to have the same amount of high speed VRAM as the overpriced 980.
After enthusiasts discovered the performance inconsistency on their own, Nvidia admitted the truth via this site -- which ran an article claiming that the extremely difficult to believe story (of the engineering team coming up with the design all by themselves and not telling anyone about it) is credible -- but still refuses to list the proper specification on its website.
Interesting, I hadn't known before that it only impacts the 8GB and therefore 8Gbps cards. Did people test the 4GB and find there was no issue? That's probably why there was so much confusion over whether this was normal or not.
And any changes to performance after the patch will definitely be interesting. Even if it goes fine though, what a PR mess.
It's not that much of a PR mess, only as big as people like yourself make it. In six months time it'll only matter to people trying to justify their purchases to themselves and to those people that spend more time arguing over which company is better than they use their graphics card.
Pro reviewers usually have high-quality test systems and much greater experience with hardware setups compared to an average buyer of a budget GPU. Hence it is much less likely that one of the reviewers would end up being the one with the damaged equipment.
"He also overclocked the card and played newer AAA titles for 7 hours straight. At least be honest about what you are linking."
So I am dishonest because I did not add details completely irrelevant to my point, that in your opinion, may have painted AMD in a slightly better light? lol
A few isolated cases, probably people pushing hardware too far but they can't admit they've fried it.
I recall when the 3770K came out, two reviewers burnt their sockets with too much voltage and one received a melted motherboard. It makes it onto Anandtech and suddenly it's a massive issue, every 3770K can burn its socket, it's a complete disaster and Intel should do a recall. Sigh.
I haven't had an AMD product since the 9800 Pro. I have my own mind. I love it when people make accusations, it just makes people like Michael Gay look bad.
Do you really want to add something that would increase the amount of work needed to do a review - and thus how long until after release it's published - even more?
Yes, deflecting the issue will fix the problem. Typical AMD tactic. Although you may not be an AMD rep, I have seen similar nonsense come from AMD reps when confronted about serious issues. This type of stubbornness and arrogance has infected AMD from top to bottom and is why AMD is a failing company.
So many times I have seen customer complaints to AMD about serious issues, and they just say "well, look at Intel/Nvidia doing this and that," even though my issue is with an AMD product. Are they suggesting I buy Intel/Nvidia instead? Maybe they are. If I kill someone and tell the judge "Charles Manson killed more," will he let me off the hook?
However, when consumers reward bad behavior from corporations as they did with the 970 it creates a climate where corporations feel that they can take advantage of people.
Nvidia's false specs/VRAM gambit paid off nicely. The market rewarded their corruption.
The vast majority of the GTX 970 cards that were sold were sold after the true memory subsystem architecture was well-known. What consumers rewarded was a well-engineered and highly valuable (from the consumers' point of view) card.
You can believe that NVIDIA purposefully tried to trick people into buying the card if you wish, but that narrative really doesn't make any sense. Don't throw it around like it's fact instead of your own (refutable) speculation.
"The vast majority of the GTX 970 cards that were sold were sold after the true memory subsystem architecture was well-known."
1) Red herring
2) Not "well-known"
Nvidia still hasn't corrected the specs on its own site. Many enthusiasts did not/do not understand the tech issue or the business ethics implications well.
"What consumers rewarded was a well-engineered and highly valuable (from the consumers' point of view) card."
Your spin won't change the fact that SLI sales in particular were extra-strong because the overpriced top model was supposed to have the same amount of VRAM. Nor will it change the fact that the 970 was sold under false pretenses, bait and switch, and continues to be.
As for speculation, I am wondering how anyone can possibly believe the story that the company's engineers created a product without telling anyone in management what they were making and what ended up being made. Sure, let's pretend that Nvidia's management is so useless that they don't direct product design, oversee their staff, and bother to know what their staff has made to sell. I have some nice swampland in Florida you may find highly valuable.
NV lied about nothing. Before and after the discovery the 970 is still technically a 56 ROP/4GB card, and the performance remained unchanged.
While AMD is stupid enough to launch an out-of-spec card, which is valid grounds for a recall, and hope no one cares or notices. Already at a pitiful 20% market share with terrible PR, and they still pull crap like this, but hey, it's all NV's fault.
@ StrangerGuy Umm... No. The original specs claimed 64 ROPs, 2MB of L2 cache, and a usable 4GB of full speed memory.
Real specs: 56 ROPs, 1.75MB of L2 cache, and 3.5GB of full speed memory; the other 0.5GB runs at about 1/7th the speed, so most games with an optimized driver won't even go over that 3.5GB threshold.
Do a little more research before you try and act like a white knight.
Oh, and don't get me wrong... at the time the GTX 970 was a good card for the price... NVidia just flat out lied their asses off to sell more on pre-order. They knew they could get away with it by blaming their PR dept.
That's what I feared: it's physically wired that way, such that the board essentially sees no difference between PCI-E power and 6-pin power, so I wonder what any software or even video BIOS fix could possibly "fix".
Why, AMD, why...Anyways, the gist of all this is...Just get a third party board with a better power system (and cooler while you're at it)
We need a roundup of which third party boards do what with the power, and if any follow AMD off the cliff
So you still reward AMD for crap like this? I personally will try to stay as far as possible from AMD products and the same goes for systems I build for other people.
That's the thing: if it's hard wired to draw 50/50 from the PCI-E slot and the 6-pin, reducing the total power to the advertised TDP isn't enough, and going below it would probably hurt performance. The only solution that would work is drawing less from PCI-E and more from the 6-pin, but it's not yet clear whether that's possible.
It's supposed to be a 150 W TDP card. Even if it were to be wired for 50/50 power draw, is there any problem with that if the slot can handle 75 W? If they tune the card to make sure there isn't sustained power draw above 150 W then there won't be sustained power draw above 75 W on the PCI-E slot.
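As a quick sanity check of that reasoning, here's a sketch using figures from elsewhere in this thread (150W board power, a 75W total slot limit, and the 5.5A/66W 12V slot allocation mentioned above):

```python
# Sanity check of the 50/50-split argument, with figures from this thread.
TDP_W = 150          # the card's rated board power
SLOT_TOTAL_W = 75    # total slot limit for a high-power x16 card
SLOT_12V_W = 66      # the 12V portion of that limit (5.5A @ 12V)

slot_draw = TDP_W * 0.5   # assumed hard-wired 50/50 split -> 75W via the slot

print(slot_draw <= SLOT_TOTAL_W)  # True: at the edge of the 75W total limit
print(slot_draw <= SLOT_12V_W)    # False: a GPU draws nearly everything on
                                  # 12V, so 75W via the slot busts the 66W
                                  # 12V allocation even at exactly 150W TDP
```

So even a card that never exceeds its 150W TDP can be out of spec on the slot's 12V pins if the split really is 50/50, which is why shifting draw toward the 6-pin matters more than simply capping total power.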
As you know, we repeatedly launch our GPU products with an oops, e.g. the noisy power component of Fury last year. Recently, a few outsiders found and told us about select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, some customers with LUCK may not have the problem, so it all depends on whether you are a LUCKY person or not, or praying before buying may help too.
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
181 Comments
Back to Article
Arnulf - Saturday, July 2, 2016 - link
"As we don't do per-rail testing I don't have anything meaningful to add at this second, but it will be very interesting to see how AMD responds next week."Why don't you do per-rail testing? It is always better [for us, consumers] to be able to get information from multiple sources so one doesn't have to take it at face value.
Ryan Smith - Saturday, July 2, 2016 - link
Short answer: we decided years ago that we'd rather run a closed case environment to accurately test how a card will behave in a case, to see how the cooling and noise levels hold up. Conversely, per-rail measurements are really only practical on an open testbed, that way you can easily get at components and make changes on the fly.BlueBlazer - Saturday, July 2, 2016 - link
Testing power draw on the PCI Express slot isn't that hard to do, just need the right equipment and hardware. Even the guys at TecLab (in Brazil) was able to this (in their AMD RX-480 tests) https://www.youtube.com/watch?v=nKcHR1qW3w4 If they can do it then why should Anandtech hesitate...lmcd - Saturday, July 2, 2016 - link
Seriously? They literally just wrote that it was on the external connector, and you blather on about testing from the PCIe slot. It's simple: you either use professional tools to test from what is effectively an external power supply (voiding the closed case test conditions), use consumer tools with mediocre reporting, or you simply use the closed-case in a consumer setting to provide the information necessary to consumers.BlueBlazer - Saturday, July 2, 2016 - link
Did you even watch the video? Here's a shortcut https://www.youtube.com/watch?v=nKcHR1qW3w4&t=... straight to 2 minutes 23 seconds. They did test the power draw from the PCI Express x16 slot. Here's the hint, they used a flat cable for PCI Express signals from he slot to the graphic card, while the on the power side they have thick cables. They also used an oscilloscope as well. That does qualify as a professional tool...coder543 - Saturday, July 2, 2016 - link
I haven't posted a comment on AnandTech in quite a few months, but your comments were able to force me to do just that.Ryan said that they decided years ago to do the testing in a closed case, rather than an open testbed, for a variety of reasons that he listed that I'm not going to bother repeating. Read his comment.
TecLab was doing their testing in an open air testbed. It was definitely not in a closed case. That is the kind of testing AnandTech feels has several serious disadvantages.
You can disagree with AnandTech on those points... but you're just completely confused at the moment, not even realizing that you're not addressing their argument.
BlueBlazer - Saturday, July 2, 2016 - link
Its not about whether its a closed case and/or open testbed issue. A much more detailed analysis would have been better. In the past Anandtech has provided some brilliant and detailed reviews on power consumption of various hardware ranging from PC-based chips (like CPUs) to mobile SoCs. IMHO perhaps Anandtech should step it up a notch. Looks like other review sites have stepped up their game. Here's a tester who encountered problems with AMD's latest GPU https://www.youtube.com/watch?v=rhjC_8ai7QA even though his AM2 motherboard was old but was able to run a GTX980Ti on it but keeps shutting down with RX480. Definitely there was an issue, and if Toms Hardware, PC Perspective and others hadn't performed those detailed tests then we would not have guessed the source of the problem...NeatOman - Saturday, July 2, 2016 - link
Your negating effects that heat have on components. And won't represent a realistic read world scenario. It get what you mean and you do have a point just that it's not in line with the test Anandtech wants to do.I'm sure it's possible to build a setup so this can be done inside a case.
BlueBlazer - Sunday, July 3, 2016 - link
Anandtech could have performed dual testing, closed case and open testbed. It's not hard to do, plus it's quite cheap (using PCI Express risers like some of the reviewers did, for example http://ht4u.net/reviews/2016/amd_radeon_rx_480_rev... ). Furthermore, open testbed testing can be relevant to some people, like bitcoin miners. Alas, too late for some bitcoin miners https://bitcointalk.org/index.php?topic=1433925.ms... "No, it's an Asus P7P55-LX, it was the 1st rig I built. Ran for 3 years with 3 280x and non-powered risers. 6 hours with the 480s and poof!!" The total load from 3x AMD RX 480s fried the mainboard's ATX power connector (instead of the PCI Express x16 slot). Looks like using more than a single AMD RX 480 (on lower end mainboards) could quickly increase the likelihood of mainboard failure. Youtuber and reviewer (also fanboy) JokerSlunt had tested that also https://twitter.com/JokerSlunt/status/748909470382... quote "I got my first black screen power crash today using two 480s on an Asus X99 board"...
Vatharian - Monday, July 4, 2016 - link
There is always a limited number of press kits with cards. Reviewers have limited time for testing before they ship the cards away, unless they are doing roundup testing, for which hardware is usually lent by distributors, not manufacturers (at least in my country).
That said, both consumer-oriented closed case testing and open testbed testing have their value. The first appeals to 98% of users; the latter is for those who will either run in an open case or want to check how a card will behave with a waterblock slapped on. They made a decision; that's reason enough for AnandTech to test it that way.
BlueBlazer - Monday, July 4, 2016 - link
Most test rigs reviewers use have an expensive (well-built) high end mainboard, so the problem may not be apparent. However, for users with cheap lower end and/or older mainboards, this detailed extra information may be important. Just guess how many of those "98% of users" are using cheaper low end and/or older mainboards.
KompuKare - Monday, July 4, 2016 - link
However, there is a good reason why most miners use powered risers, especially with multiple cards. A x1 PCIe slot is only rated at 25W, so if you have a board with two x16 slots and four x1 slots and are running six cards, you could be asking the board to supply 450W (75W for each card times six). No board is designed to supply that much, and even a 24-pin ATX connector with an 8-pin CPU connector is not really designed for that.
Of course this guy only had three cards, but the max that board might be rated at is 150W (for the two PCIe x16 slots) plus 50W (for the two PCIe x1 slots), for a total of 200W. Whereas three x16 cards running at spec (which the RX 480 might not be, of course) would exceed that (see the sketch below).
Powered 3.0 risers at under $5 each would have saved that mobo. The guy was just lucky with his three 280Xs, as I would guess they draw most of their current from the PCIe power connectors, not the slot. If he had been running a different card, he'd probably have fried his mobo years ago.
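A back-of-the-envelope sketch of that slot-budget arithmetic (a minimal Python sketch; the 75W/25W slot allowances are the nominal PCIe CEM figures discussed in this thread, and the six-card scenario is the hypothetical above):

```python
# Rough slot-power budget for a mining board, using the figures above.
# Worst case: every card pulls its full slot allowance through the board.
SLOT_LIMIT_W = {"x16": 75, "x1_high_power": 25}  # nominal PCIe CEM allowances

def board_demand(x16_cards: int, x1_cards: int) -> int:
    """Total wattage the motherboard must source through its slots."""
    return x16_cards * SLOT_LIMIT_W["x16"] + x1_cards * SLOT_LIMIT_W["x1_high_power"]

print(board_demand(6, 0))  # 450 W -- six cards on unpowered risers
print(board_demand(2, 2))  # 200 W -- the example board's plausible ceiling
# Three full-draw x16 cards alone want 225 W, already past that 200 W ceiling.
```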
BlueBlazer - Monday, July 4, 2016 - link
That PCI Express x1 slot is capable of supplying 75W, since it has the same power rail pins as a PCI Express x16 slot. The PCI Express Card Electromechanical Specification does mention that a x1 card can draw up to the 75W limit. From https://pcisig.com/sites/default/files/specificati... it quotes: "A x1 standard height, full-length card is limited to a 10W maximum power dissipation at initial power up. When the card is configured for high power, by default, it must not exceed a 25W maximum power dissipation or optionally it must not exceed a 75W maximum power dissipation. A x4/x8 or a x16 standard height or low profile card is limited to a 25W maximum power dissipation at initial power up. When a card is configured for high power, it must not exceed a 75W maximum power dissipation."
Morawka - Sunday, July 3, 2016 - link
This. Closed case vs. open testbed is relevant if you're writing about cooling, not power. They could test power this way to see what the card is drawing at load and at idle.
What nobody seems to mention in these news posts is that several consumers have fried their motherboards' PCIe slots with the RX 480. It's all over reddit.
Morawka - Sunday, July 3, 2016 - link
Fried their Motherboard Slots**
Lonyo - Sunday, July 3, 2016 - link
Power and heat are related. As a card gets hotter it uses more power.
MajorQuestion - Sunday, July 3, 2016 - link
I may be wrong, but the hall effect camp meter that they use to measure the current looks like it is one of those that is only accurate above 10A.
MajorQuestion - Sunday, July 3, 2016 - link
*clamp.
vanilla_gorilla - Saturday, July 2, 2016 - link
Ryan, I can totally appreciate your position, but given this situation it might be time to consider making the investment? Come on, you know you want a bench case anyway!
Murloc - Sunday, July 3, 2016 - link
AT is barely limping along and unable to post reviews on time; asking them to complicate their reviews is wishful thinking at this point.
Daniel Egger - Saturday, July 2, 2016 - link
I totally don't buy that excuse. When did the addition of current sensors become rocket science again? If you ask a mainboard manufacturer nicely they might even do the modification for you...
D. Lister - Saturday, July 2, 2016 - link
They aren't even doing their normal full reviews and you're asking them to do ADDITIONAL work? Yeah, that'll work.
edzieba - Saturday, July 2, 2016 - link
Testing in a closed case is not incompatible with power rail testing. Put the card on a flex riser to give easy access to the power traces, and tap them there. The card itself remains inside the case, just in a lower slot. Same with the PEG power leads, which are much easier to splice into. Place testing equipment outside the case, routing test leads through a drive bay or PCIe slot blanking plate.
Daniel Egger - Saturday, July 2, 2016 - link
Or you could modify the board to place current sensors into the power rails; even current sensors on top of power carrying traces might work depending on the board layout.
BlueBlazer - Sunday, July 3, 2016 - link
Aye, and it does not cost much at all; for example https://www.amazon.com/PCIE-Powered-Flexible-Exten... is quite cheap, and very little modification is required to tap the power lines. Other reviewers like HT4U tested using a PCI Express riser http://ht4u.net/reviews/2016/amd_radeon_rx_480_rev... It looks so simple: cut the power traces and tap into the power lines...
YazX_ - Saturday, July 2, 2016 - link
So another 5 minutes to hook up the equipment and do a power draw test during a benchmark is too much of a hassle for someone whose main job is to do that?! Now I know how the GTX 970 3.5GB issue slipped by without anyone noticing.
Kevin G - Saturday, July 2, 2016 - link
I agree that testing in a case is the proper way to test for a formal review, as that is how end users will be utilizing the cards. However, I would consider investigating this issue to be more of an exception, as it explores a very specific issue that needs a very specific test setup. I'd be interested in Anandtech running the same type of testing that measures power consumption at the slot level as another comparison point.
bug77 - Saturday, July 2, 2016 - link
Because lately they don't do much testing at all, that's why.
BlueBlazer - Saturday, July 2, 2016 - link
Not just Tom's Hardware, but PC Perspective as well. Listen to their explanation here https://www.youtube.com/watch?v=ZjAlrGzHAkI Perhaps Anandtech should start similar testing with graphics cards in the future...
PeckingOrder - Saturday, July 2, 2016 - link
Anandtech is AMD biased, they sweep anything negative towards them under the rug.
Impulses - Saturday, July 2, 2016 - link
lol?
PeckingOrder - Saturday, July 2, 2016 - link
Well, this issue is talked about so much that all they can really do is mention it and try to make it seem like it's not a big deal.
silverblue - Saturday, July 2, 2016 - link
...or they could've just not mentioned it at all, to fit with the bias you're harping on about.
Eden-K121D - Saturday, July 2, 2016 - link
LOL.
Wreckage - Saturday, July 2, 2016 - link
Well that does explain why they skipped the 1080 review while releasing the 480 review on day 1.
coder543 - Saturday, July 2, 2016 - link
They released a preview of the 480, and they released a preview of the 1080. They haven't fully reviewed either card AFAIK.
BMNify - Saturday, July 2, 2016 - link
Within the comments for the RX 480 preview, Ryan Smith said "The full RX 480 review will be in a few days. The full GTX 1080 review will be a couple of days after that. RX 480 would have been a full review today, but I managed to slice myself with the RX 480 on Tuesday and needed to resolve that first..."
Seems like they want to push out the RX 480 review before the 1080 review. Seems fishy to me.
Morawka - Sunday, July 3, 2016 - link
Yup, and they've had the 1080 for a full month longer, and the whitepapers to go with it.
Jtaylor1986 - Saturday, July 2, 2016 - link
They released 'previews' of both cards on day 1. Good try though. If you are coming to Anandtech for day-1 reviews you are doing it wrong. You generally get the most comprehensive review of any site a week or two post-launch.
jimbo2779 - Sunday, July 3, 2016 - link
That never used to be the case. It used to be day-1 in-depth reviews; now the reviews often don't come at all.
It is a shame.
Tegeril - Tuesday, July 5, 2016 - link
You're responding to Wreckage, the most Nvidia fanboy troll to ever grace the forums. It's not worth the effort.
medi03 - Saturday, July 2, 2016 - link
LOL
Yojimbo - Saturday, July 2, 2016 - link
I haven't noticed Anandtech being biased towards AMD.
Wreckage - Saturday, July 2, 2016 - link
The website was sponsored by AMD for a long time. The forums are 100% AMD biased. Ryan.....does a good job of reviews and I would not call him biased.
D. Lister - Saturday, July 2, 2016 - link
The red cult is EVERYwhere, and inescapable :). Unfortunately over the years, their fanaticism has damaged AMD's brand more than a hundred such technical snafus combined. But then, that's what AMD's us-vs-them marketing has sown.
ACE76 - Tuesday, July 5, 2016 - link
This comment is very true... some of the stuff I read in forums, with users talking about Intel backdoors in CPUs, etc., is just nonsense... the fanboyism gets out of hand... whatever happened to buying whatever is best for your money?
Yojimbo - Saturday, July 2, 2016 - link
The opinions of forum commenters say nothing about the bias of the site. Can you cite anything specific where you believe AMD was given preferential treatment or NVIDIA was treated unfairly?
Wreckage - Saturday, July 2, 2016 - link
The forum moderators enforce the AMD bias. The site itself was heavily funded in the past by AMD, which (combined with the forums) has given many people the impression that Anandtech as a whole is biased. However, Anandtech was sold many years ago to the same company that owns Tom's Hardware and I doubt they have any bias of the sort. Why the forums are maintained in such an awful way is beyond me. As I said, the reviews are generally pretty good here, although it's a shame there are not as many as there used to be.
Yojimbo - Saturday, July 2, 2016 - link
In what way do the forum moderators enforce an AMD bias?
Tegeril - Tuesday, July 5, 2016 - link
By banning him for his Nvidia trolling, I wager.
Makaveli - Saturday, July 2, 2016 - link
If Anandtech is so AMD-biased and you are clearly NV-biased, why do you keep coming back?
Wreckage - Saturday, July 2, 2016 - link
I explained above. I don't believe the site or Ryan are biased. The forums are very much so. I assume they are run separately from the site. Reading the circlejerk in the forums is entertaining at least.
just4U - Sunday, July 3, 2016 - link
Wreckage,
Considering your posting habits, I could see where you might get that impression. You are brand loyal and constantly trashing AMD, so you focus in on responders, and sometimes even set the tone of the entire discussion.
Most here are not biased though (from what I can tell). They're hardware enthusiasts, not fanboys.
ACE76 - Tuesday, July 5, 2016 - link
I, for the life of me, will never understand this "brand loyal" nonsense... I have owned many AMD setups back in the day when they were competitive... I built an AMD system for a friend who wanted custom but didn't want to pay a lot a few years back... I have owned many Nvidia video cards... today, I waited for the AMD RX 480 because I can't justify in my head spending $700 for a video card that will likely only be used to play one or two games over the next few years... people should just buy whatever is available to them at the best price point... I'm hoping AMD's Zen finally helps put the company back on track after so many years of disappointing CPU architectures.
Horza - Tuesday, July 5, 2016 - link
Sorry, but you've got no credibility due to your well-known zealotry.
You were banned from these forums though, right? For intentionally misquoting people so you could try and win your pathetic fanboy disputes? Bit sad that you're still looking through the windows but aren't allowed back in. This is one of those cases where the problem is very much you.
atlantico - Sunday, July 3, 2016 - link
I've heard many things in my life, but none as detached from reality as calling Anandtech "AMD biased". Also, this is much ado about nothing; it turns out it was just a bunch of trolls on reddit spreading misinformation, because Tom's Hardware didn't realize that the RX 480 draws about as much from the PCIe slot as a GTX 960.
No motherboards have fried, nor will they; this is perfectly normal. The only practical issue is that older motherboards (pre-2006) may shut down the computer under heavy load.
That. is. all. Wow. So drama. Much bias.
BlueBlazer - Sunday, July 3, 2016 - link
AMD's RX 480 is worse than the GTX 960, and PC Perspective explains the differences here http://www.pcper.com/reviews/Graphics-Cards/Power-... or better yet, watch their highly detailed explanation (which includes the GTX 960) here https://www.youtube.com/watch?v=ZjAlrGzHAkI As for fried mainboards, there have already been quite a few, especially this one from a bitcoin miner https://bitcointalk.org/index.php?topic=1433925.ms... quote "No, it's an Asus P7P55-LX, it was the 1st rig I built. Ran for 3 years with 3 280x and non-powered risers. 6 hours with the 480s and poof!!" Using 3x AMD RX 480s overloaded his mainboard's 24-pin ATX power connector...
shabby - Saturday, July 2, 2016 - link
AINT GOT TIME FOR THAT!
Murloc - Sunday, July 3, 2016 - link
They might just as well get the data from Tom's Hardware, since they're both Purch AFAIK.
Soon they will be posting whole reviews from Tom's, since they're unable to do their own.
SunnyNW - Saturday, July 2, 2016 - link
I find it hard to believe AMD did not notice this before releasing the cards; this is, after all, something that is not a part of the "normal" reviewing process. If not for Tom's, who knows how long until this issue would have been noticed.
In the quote above AMD states "we identified select scenarios where the tuning of SOME RX 480 boards was not optimal." I find it hard to believe that this issue is only affecting certain boards... Can that be the case?
I'm sure this will be a non-issue with AIB cards, but as someone who has already purchased a reference RX 480, I am very concerned about this. I have an Asus Sabertooth board; is a board like that less vulnerable to this issue?
SunnyNW - Saturday, July 2, 2016 - link
What I meant by it not being a part of the normal reviewing process is that maybe AMD did not expect anyone to catch it, or at least not this quickly, especially before AIB cards were out. After AIB cards are released I'm sure most people will downplay the issue, because those will vastly outsell the reference cards, but that does not mean the issue can be ignored.
I don't see AMD having too many options regarding this issue.
A) They power throttle the card, costing performance, which will be a huge negative. I know I for one would probably not keep the card if it's going to perform any less, because I don't see it being an insignificant amount. (Hopefully I'm wrong.)
B) They move more of the current draw from the slot to the 6-pin, in which case the card will still not be compliant with the spec. Though I admit I for one would not mind this solution, knowing how "over-built" those connectors are anyway.
euskalzabe - Saturday, July 2, 2016 - link
This. One of the few intelligent answers I've seen so far on this topic.
Drumsticks - Saturday, July 2, 2016 - link
Oddly enough, I'm seeing a lot of reports of undervolting the 480 actually reducing the power draw and increasing performance, because the power/temp limit (one of them) was no longer being hit. I wonder if it would be possible to do better on efficiency by dropping the voltages at stock.
mutantmagnet - Saturday, July 2, 2016 - link
The Sabertooths are beasts when it comes to power management; you have nothing to worry about. Keep in mind that the PCIe issue exists mostly for anyone using a PCIe 1.0 slot or a cheap modern motherboard. Even though modern mobos use 2.0 and 3.0 slots, they can and will cut costs by taking out the feature that allows them to have higher wattage pulled through them.
AMD made a couple of gross miscalculations for their target audience.
SunnyNW - Saturday, July 2, 2016 - link
Thanks. I was just talking with a friend and maybe someone here can also help. Is there any difference between PCIe 2.0 and 3.0 when it comes to power delivery specifications? My friend is using an AMD 990FX board, so he has a 2.0 slot, and he has ordered a reference card; are there any potential issues he needs to worry about?
lmotaku - Saturday, July 2, 2016 - link
Sorry for the redundancy/typos, no edit button and getting into my morning coffee. lol
DanNeely - Saturday, July 2, 2016 - link
Nope. Power delivery has been unchanged since 1.0: 25W as the baseline for everything except 1x low profile and (optionally) 16x cards, consisting of up to 3A @ 3.3V (10W) and/or 2.1A @ 12V (25W). Full height 16x cards can optionally do 75W, made up of up to 3A @ 3.3V and 5.5A @ 12V (66W). The 3A @ 3.3V part of the spec is a holdover from the legacy PCI days, and was intended to make converting existing designs easier by just adding a PCI-PCIe 1x bridge and not requiring any major hardware changes.
Beyond that there are additional limits in the spec for low profile 1x and 16x cards. Low profile 1x cards are limited to up to 3A @ 3.3V and up to 0.5A @ 12V. Low profile 16x cards are supposed to be limited to 25W. I'm assuming these were added to the spec to allow OEMs to limit the size of the PSUs (and thickness of the copper used in the power planes in the mobo itself) they put in small form factor systems without having to worry about them being overloaded. TTBOMK there's not any way to enforce the lower limits, and I suspect a lot of higher performing low profile cards are massively out of spec; e.g. the GT 740 has a nominal 64W TDP, while the GT 730 and R7 240 (Newegg doesn't list any newer AMD low profile cards) are both above 30W (the 730 by a significant amount). It might have something to do with most of the nVidia cards being from Zotac and other low-tier OEMs; perhaps EVGA/MSI/ASUS/etc. weren't willing to risk the wrath of the spec committee gods by violating the spec that severely.
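For reference, those allowances reduce to a few volts-times-amps products (a minimal sketch; the numbers are as quoted in the comment above, not an independent reading of the spec):

```python
# PCIe slot power allowances summarized above, computed as volts * amps.
RAILS = {
    "baseline 3.3V (3A)": 3.3 * 3.0,           # ~10 W
    "baseline 12V (2.1A)": 12.0 * 2.1,         # ~25 W
    "75W x16 card, 12V (5.5A)": 12.0 * 5.5,    # 66 W
    "low-profile x1, 12V (0.5A)": 12.0 * 0.5,  # 6 W
}
for rail, watts in RAILS.items():
    print(f"{rail}: {watts:.1f} W")
```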
StrangerGuy - Saturday, July 2, 2016 - link
So what you are saying is, it's the mobo manufacturers' responsibility to overspec their boards so they can accommodate the out-of-spec PCIe power draw of the RX 480?
Did you even think about how ridiculous you sound?
How about if somebody runs over your car with a tank and he says "lol, your fault for not using an armored vehicle"?
coder543 - Saturday, July 2, 2016 - link
The way mutantmagnet phrased it is rather strange, so I won't defend his statement, but yours is also ridiculous. In engineering, you *never* design things to precisely meet the spec. If you do, your work is destined to fail. You have to overengineer things by some margin, a safety buffer, for exactly when weird things happen, which they will. "Defensive design" is one term for it.
It doesn't justify AMD using the safety margin for their own ends, and they're planning to roll out a software update to fix this issue by the sounds of things, but it is unlikely to actually damage any but the cheapest of motherboards.
Morawka - Sunday, July 3, 2016 - link
If you call the Asrock Extreme Gaming4 a cheap motherboard then OK... that's a top-tier AMD board, and 3 people so far have burnt PCIe slots.
atlantico - Sunday, July 3, 2016 - link
Wow, so the Asrock Extreme Gaming4 mobo can't run overclocked GPUs. A likely story. What's with all these tech-weenies and little girls freaking out over exceeding the PCIe spec? Here's some news for you: everything from the GTX 960 to just about every OC'd GPU out there violates the PCIe spec, and you believe a gaming motherboard can't handle that.
Cool story brah.
atlantico - Sunday, July 3, 2016 - link
If you ever overclocked, or used an overclocked GPU, you probably went way over the PCIe spec. Mobo manufacturers for the last 10 years at least have designed gaming mobos for OC'd GPUs.
So duh. I've been running 2x OC'd 7950s for 3 years, both sucking way more from the PCIe than the spec "allows", because: overclock.
No power issues, motherboards completely unaffected and working as intended. The PCIe standard is ignored every day by users all over the world with all sorts of GPUs and it just doesn't matter.
mutantmagnet - Sunday, July 3, 2016 - link
The only thing ridiculous is your ability to jump to conclusions. I was mostly pointing out that you now have to take into account what motherboard you're using if you decide to use a reference 480.
Everything else you stated is based on your imagination.
bill.rookard - Saturday, July 2, 2016 - link
I actually read quite a bit about what's going on, and there are a few things to note about it.
1) It's not actually an issue with the motherboard itself, as long as it's a good motherboard. Where the problem lies is the actual physical PCIe connector. If you have a motherboard which is Crossfire/SLI rated, then you should be fine with a single card. The max spec is 75W per slot. Thus an SLI/Crossfire setup could theoretically pull 150W through the motherboard, causing damage, -but- since it's only pulling in the range of 80-90W, the limit you're hitting is the single slot pulling the power.
2) The '90 watt' number that's being reported is in general a -spike-, as in a single point based on the resolution of the oscilloscope doing the reading (many times a second). If you've watched the power delivery of a GPU at fine resolution, it's all over the place, up and down. What's important is the average. If you look at the average, then yes, it's running a bit high, but not drastically so (maybe 5W), while the spike is what causes the 90W number. Should it be that way? Probably not; most cards draw 30-50W through the slot and the remainder through the PEG.
3) Can they issue a software fix? Hopefully. If they can issue a fix (either VBIOS or driver) which changes the balance of power to pull more through the PEG and less through the slot, that would be better. Yes, it's temporarily putting more power through the connector, but if you look at actual wire capacities, an 18 gauge conductor can carry 10A @ 12V, and 16 gauge can carry 20A @ 12V. That means a 6-pin 18AWG PEG (12V x 2, gnd x 3) will be capable of a combined wattage of 12V * (10A + 10A) = 240W. Why it's only rated for 30% of its potential capacity is beyond me, unless it's the physical connection which is the limiter. Still, even if they went with 50% of wire capacity, it should easily handle 120W.
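A minimal sketch of that wire-capacity arithmetic (assuming the per-conductor ampacities quoted above and the two 12V conductors the comment assumes for a 6-pin PEG):

```python
# Raw 12V wire capacity of a 6-pin PEG cable vs. its nominal 75 W rating.
AMPACITY_A = {"18AWG": 10.0, "16AWG": 20.0}  # per-conductor amps, as quoted
V12 = 12.0
TWELVE_V_WIRES = 2  # the comment assumes two 12V conductors (plus grounds)

for gauge, amps in AMPACITY_A.items():
    capacity = V12 * amps * TWELVE_V_WIRES
    print(f"{gauge}: {capacity:.0f} W of raw wire capacity vs a 75 W rating")
# 18AWG -> 240 W, 16AWG -> 480 W: the connector, not the copper, is the limit.
```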
shabby - Saturday, July 2, 2016 - link
1. How many slots you have is irrelevant; there are 5 pins/traces in the PCIe slot that carry the power, and if all those pins/traces are overloaded on the motherboard by 30-40% for long periods of time, they're going to pop or something around them is going to pop.
2. In PC Perspective's OC'd results it's running at 80-90 watts constantly; it's not a spike. The limit is 66 watts.
3. That's most likely the easiest solution, but it's still technically out of spec on the 6-pin connector; AIB partners will probably remedy this with an 8-pin connector. ATI should just scrap the reference design.
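For scale, here is how far the sustained 80-90W readings land above the slot's 66W 12V allowance (a quick sketch using the numbers in the comments above):

```python
# Percent overload of the slot's 12V budget at the quoted OC readings.
SLOT_12V_LIMIT_W = 66.0  # 5.5 A across the slot's 12V pins

for draw_w in (80.0, 90.0):
    over_pct = (draw_w / SLOT_12V_LIMIT_W - 1) * 100
    print(f"{draw_w:.0f} W draw -> {over_pct:.0f}% over the 66 W limit")
# 80 W is ~21% over and 90 W is ~36% over, the ballpark of the 30-40% above.
```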
lmotaku - Saturday, July 2, 2016 - link
Your friend should worry about the brand / model / original retail price of his board. "990FX" isn't enough to go on.
E.g. the Asus Sabertooth is fine. My M5A97 R2.0 has some quirks but has a lot of power controls and protections. MSI gaming boards and some Gigabyte gaming boards should also be fine. Those are the only brands I trust anyway, MSI and Asus being at the top, then Gigabyte, then Asrock as a very, very distant last. Asrock is okay for budget-focused builds, but you have to be sure it's one of their highest-end gaming boards, and even then I might not trust it. The only two boards I've heard of having issues so far are a Foxconn AM2 OEM board and some Asrock board (they didn't say which), but it had multiple PCIe slots and burned them all out.
A beta driver for the 480 was just released, which will give consumers more fine-tuned control until they update their BIOS to run a bit differently, for those who paid under $80 for their mobos.
Impulses - Saturday, July 2, 2016 - link
This does not bode well for AMD; the last thing they need right now is a scandal, or to let this fester for days.
Impulses - Saturday, July 2, 2016 - link
The card isn't even very efficient to begin with... And I'm not one to complain about that when the performance is there (after all, I had CF 6950 and currently have CF R9 290), but the 480 is basically on a bang/buck tightrope as it is.
Eden-K121D - Saturday, July 2, 2016 - link
This card is at Maxwell-like efficiency.
medi03 - Saturday, July 2, 2016 - link
Better, although roughly in that ballpark.
From nVidia's own (shitty, FUDdy, skewed-scale) slides, the 1060 is about 40% better at perf/W.
But again, that's at "reference" 480 clock.
Note that even vs AMD's previous gen, that's only about a 1.8x improvement.
While Raja was claiming that 1.7x comes from the process shrink, and they pushed it further up to 2.8x by "optimizing architecture".
The slide that shows that number mentions it's 470 vs 270X, though.
euskalzabe - Saturday, July 2, 2016 - link
Yeah, it's interesting to see how AMD went for a very cheap reference card, whereas Nvidia went with higher grade designs. It doesn't make for a good PR comparison in the beginning, with both cards being released so close to each other...
DanNeely - Saturday, July 2, 2016 - link
With Polaris unable to compete with GP104 on performance, they had to aim for the low price, high volume segment of the market. At the same time, I suspect the problematic power draw levels from the 480s suggest that they were expecting the reference design to run at somewhat lower power, but either didn't quite get the perf/watt they were hoping for or decided to bump clocks when they saw where nVidia's cards were performing.
Hopefully they can adjust the relative power draw via a software update to clamp the max drawn through the mobo (the wiring in the PCIe cable is able to safely deliver a good amount of power beyond the nominal 75W rating) and limit the reputational damage. If they can do so, the higher price points nVidia's going after with the 1060 should continue to protect AMD from direct competition for a bit longer. OTOH a cut-down GP106 will probably be launching under the GT 1050 brand before much longer and should be able to match the price and performance of the RX 480 fairly closely.
Yojimbo - Saturday, July 2, 2016 - link
Polaris on a 14nm process has an efficiency comparable to Maxwell on a 28nm process. I think my statement gives a truer picture of the situation.
jwcalla - Saturday, July 2, 2016 - link
I believe that's the real take-away from the 480 release, even more important than this question of PCIe compliance.
bill4 - Saturday, July 2, 2016 - link
Maxwell is terrible in DX12, which all future games will use. Also, efficiency doesn't matter, just price and performance. A GPU's job is running games, not being most efficient. In fact efficiency is actually speed vs price; there is no power efficiency that matters, since power draw does not matter to the end user.
Nvidia got destroyed here; they need to get a Pascal card out quickly in this price range, as they've already lost a gigantic amount of market share. Unfortunately, if rumors are true, the 1060 has terrible efficiency, as it's rumored to be a worthless 3GB card at $249, with $299 for the viable 6GB card. As you can see, AMD is far more efficient.
Meteor2 - Saturday, July 2, 2016 - link
What rumours? There's nothing to suggest that the 1060 won't have the same excellent efficiency as the 1070 and 1080.
DanNeely - Saturday, July 2, 2016 - link
The rumor mill is talking about two different GP106 variants, one with 3GB of RAM and the other with 6GB. Some are claiming that the 3GB model will launch as a $250 GT1060 with the 6GB model as a $300 variant. Others say that the GT1060 will only come in 6GB models; presumably in the latter case the 3GB model will be used for the GT1050.
If the 6GB model is only available at the $300 price point (and ofc assuming the 3GB one ends up bottlenecked by less VRAM), it'd be 50% more expensive than the RX 480 for only about 15-25% more performance (according to what are alleged to be leaked nVidia slides). If that's the case, the GT1060 would be seriously overpriced for the relative performance gain it offers.
I'm hoping that they'll all be 6GB cards and the $250/$300 price spread is custom-cooled vs Founders Edition variants. This seems most likely to me; $250 for a 3GB card seems seriously overpriced today, when the 950 was available in a 4GB config for well under $200 a year ago.
Meteor2 - Sunday, July 3, 2016 - link
I thought we were talking about efficiency, not value? I.e. the ability of the 480 to do what it promised within its power budget -- in particular, within the spec of the power supply connections.
MapRef41N93W - Saturday, July 2, 2016 - link
Yep, end users don't care about power draw. It's why the GTX 750 Ti was the most-bought card in the world from its release till sometime this year. Oh wait....
Idiot.
Shodoman - Sunday, July 3, 2016 - link
Oh, I don't think it has much to do with power draw. Most users don't even know what that is. You're forgetting how most components/computers are acquired.
Unfortunately, with your last word you proved you are not civilized enough to have a conversation. Which also undermines your arguments somewhat. Pity...
fanofanand - Tuesday, July 5, 2016 - link
The 750 Ti was popular because you could put one in any system regardless of the PSU, since it didn't need external power connectors. That is literally the only reason. Not that it was a bad card, but that is why it sold so well and for so long.
Michael Bay - Sunday, July 3, 2016 - link
>muh future dx12 gaemz
I love it when desperate fanatics like you try to feed us the same AMD marketing bullshit only you gobble. We want to play here and now, honey, and AMD just doesn't deliver.
Morawka - Sunday, July 3, 2016 - link
On a 14nm process, while Nvidia did it on 28nm planar transistors. That should tell you something right there.
bill4 - Saturday, July 2, 2016 - link
Huh? The RX 480 is literally TWICE as fast as the GTX 960, Nvidia's only competition in the $199-249 range... kinda weird to say it's walking a bang/buck tightrope.
The 1060 will change all that, but it isn't out yet. Also, if Nvidia tries to get away with something like $249 for the 3GB 1060, which shouldn't even exist... then they are a joke. And knowing Nvidia, you know they have a good chance to try to do that (and their hordes of fanboys will pay more for less product just to help Nvidia line its coffers; it's happened many times in the past).
Also, I find all the sudden concern about efficiency funny. It's pretty clear the fanboys are suddenly so worried about efficiency because Nvidia has an edge there. A GPU's job is to run games, not conserve watts. If you want to conserve watts go run an IGP; it'll blow away any Nvidia card in conserving watts.
As long as a card is under 300 watts (just because that seems to be the reasonable limit), I couldn't care less how many watts it uses; it doesn't affect me in the slightest (assuming the cooling is good and the fans are quiet, which is a matter of engineering). What matters is price and performance, and the RX 480 blew away Nvidia there, until further notice.
D. Lister - Saturday, July 2, 2016 - link
@bill4
You have absolutely no idea what's going on or what they're talking about in this article, do you?
I'll drop a hint for your convenience... this is NOT about power efficiency at all. This is about poor load-management, which can result in this GPU unevenly stressing the system to the point where your hardware, mainly the motherboard, can get damaged in such a way that you would not be able to claim your warranty.
Now if you wish to continue your ranting, please go right ahead - it may not quite help your agenda, but at least it is entertaining.
Shodoman - Sunday, July 3, 2016 - link
@D. Lister
You should read the previous comments; the discussion derailed towards this subject.
Please do keep your composure, you sound awful when you're heated up.
D. Lister - Sunday, July 3, 2016 - link
"...you sound awful when you're heated up."(wow, sounds like a regular reader... how flattering :D)
You're too kind, I can actually appear to be awful even when I'm not heated up. What is important is that we recognize our failings and work on them. :P
"You should read the previous comments, the discussion derailed towards this subject."
Oh I'm sorry, didn't mean to stumble into the path of your attempted derailment of the topic. Please go right ahead. I will try to keep out of your way.
Geranium - Saturday, July 2, 2016 - link
That's why we see watercooling on efficient NVIDIA cards.
Yojimbo - Saturday, July 2, 2016 - link
So does them adjusting the GPU's tuning software invalidate the benchmarks published pre-adjustment?
Gigaplex - Saturday, July 2, 2016 - link
Any form of driver update will invalidate benchmark results.
Yojimbo - Saturday, July 2, 2016 - link
That's somewhat true, but not to the same extent. It's true in terms of maintaining strict integrity of the data collected, but not in such a direct causal relationship as in this case. It's an important consideration that this is an unscheduled update being made less than a week after the launch of the card, as a direct result of a problem with the card, and that there is reason to believe said problem would inflate the benchmarks of the card. A key word there is inflate, because the performance change isn't the result of an optimization but rather a mis-optimization. AMD is making a change in the optimization for safety/compliance reasons. Benchmarks taken under unsafe/noncompliant conditions should be invalid.
Another, more sinister, way of looking at it provides another reason to re-benchmark: I'm not accusing AMD of purposefully pushing the card beyond spec for the purpose of benchmarking, but it's easy to see how a company could do exactly that and then later bring the card back to safe performance after the benchmarking period is over. Then, because it's an unusual situation and it could be used in such a way, benchmarks should really be re-run as a matter of good journalism.
Weyoun0 - Saturday, July 2, 2016 - link
AMD's death throes. AMD will be purchased by another company within the next few years. They may not even produce new drivers anymore after that. Be careful before buying AMD.
vladx - Saturday, July 2, 2016 - link
Apple+AMD would make the perfect match. Imagine a healthy AMD with Apple's marketing.
TEAMSWITCHER - Saturday, July 2, 2016 - link
Apple already seems to be exclusively AMD for the Mac. Which is a crying shame, because I really don't like AMD graphics chips. ALL of the Macs at work that we bought in the 2011-2012 time frame with AMD chips in them have died. All the Macs with Intel-only or nvidia graphics are still going strong. Waiting on Apple to update the MacBook Pro... but son-of-a-bitch, they are taking their sweet time. I'm convinced now that Tim Cook is as bad for Mac hardware as John Sculley was. The Mac went from being the tech darling of mobile computing to a laggard. How on Earth did this happen... Apple?
kurahk7 - Saturday, July 2, 2016 - link
I'm sure Apple would be using nvidia if they weren't in a contract to use AMD GPUs. Apple, at least on the handheld mobile side, has always targeted great performance as well as high efficiency simultaneously.
zanon - Saturday, July 2, 2016 - link
Yeah, it's pretty depressing. Particularly since, in an extra dose of irony, Nvidia has actually been pretty damn good about keeping up with Mac support, even though it's basically exclusively for the very, very tiny and dwindling niche of folks still using tower Mac Pros. But if you've got a 2009-2012 Mac Pro, you can install the latest Nvidia web drivers and then a 980 or Titan or whatever (I don't think they've added the 1070/1080 yet, though they work under Windows) and actually have a more powerful graphics system than anything Apple has shipped in the following 6 years. What a shame.
I agree with you that Tim Cook has been really mediocre for the Mac. I'm not mad about them spending a lot of time and effort on iOS, but it shouldn't be a zero-sum game with their resources. It's particularly frustrating when they've gone to extra effort to make something worse, like with the trashcan Mac Pro. If they'd literally taken the laziest, most highly profitable approach and just stuck with the same ancient case and merely bumped the internals, it would have been a ton better and still made them more money too.
More than irritating, it's just plain confusing. We're all used to companies sometimes needing (or at least choosing) to make hard business decisions for the sake of money that we don't like; that's just part of the markets. It's not fun, but the logic is clear. But it's a lot more frustrating when companies go for own-goals and choose to make moves that are both bad *and* make no business sense. Rather than "argh" it becomes "!?!?!?!!??!?".
Makaveli - Saturday, July 2, 2016 - link
Unless you have any proof of this, posts like this are FUD and borderline trolling.
Oxford Guy - Saturday, July 2, 2016 - link
Borderline? AMD vs. Nvidia is the most tiresome troll bait in the tech world, having replaced Apple bashing now that Apple is powerful, rich, and ubiquitous.
The forum has a lot of doom-and-gloom "sky is falling" crud constantly posted about AMD, and in the comments here it's usually more of the same.
The religious devotion to corporations is lame. Corporate staff changes frequently and people act like a corporation is an immutable deity to be worshiped or sided against.
I realize some of this is investors and employee astroturf but it's really boring. If only we could see the same level of passion go toward getting better-quality games, films, and other entertainment. I have a difficult time getting worked up about graphics cards since I don't even think most of the games are worth playing.
MapRef41N93W - Saturday, July 2, 2016 - link
People are sick and tired of AMD doing criminally stupid things and constantly underdelivering on their astronomical hype. It's thanks to AMD's incompetence that the PC market is what it is right now, with Intel actually RAISING the price of their second-tier extreme edition CPU compared to the previous generation's first tier, as well as dropping a $1,700 consumer CPU, and NVIDIA easily being able to move $1,000 consumer GPUs and make a killing on a $700 mid-range chip that they could sell at $400 and still make a large profit.
People enjoy watching AMD fail because they want them to die and go away so a company that isn't incompetent can take over and challenge these pseudo-monopolies. On top of that, AMD's hilariously delusional fanboys make hating them even easier.
D. Lister - Saturday, July 2, 2016 - link
Exactly. Someone needs to put a leash on Intel/Nvidia, and unfortunately AMD isn't quite up to the challenge. :(
atlantico - Sunday, July 3, 2016 - link
People are sick and tired of Nvidia doing criminal things and behaving anti-consumer, and good grief, hearing their honestly quite mediocre GPUs hailed as awesome by kids spending their mommy's money is just eye-rollingly annoying.
sonicmerlin - Monday, July 4, 2016 - link
This is very true. I wanted AMD to succeed for the longest time, but now, having seen them utterly fail over and over again, I want someone to bury their executives in a cave, seize all their money, and hire a new board.
Weyoun0 - Saturday, July 2, 2016 - link
It is obvious to anyone who is not irrationally and hopelessly optimistic about AMD. Anyone with a neutral perspective who analyzes AMD using numbers and not emotions can see how poorly AMD is doing and has been doing for decades. AMD has made no important changes to corporate strategy or leadership and does not seem to plan such changes in the near future. Their downfall is inevitable. When the market was at its height, those who started warning about the upcoming subprime mortgage crisis were also dismissed as "trolls," but how right they were.
Diogenes5 - Sunday, July 3, 2016 - link
Your comment was so dumb it deserved a response.
Please don't reproduce, in the interests of evolution.
Idiot.
Michael Bay - Sunday, July 3, 2016 - link
I like how you always sign the end of your posts, dudebro!
Chaser - Saturday, July 2, 2016 - link
As soon as the GTX 1060 is released, this fanboi-fueled hype dream will come to an end.
quaz0r - Saturday, July 2, 2016 - link
Even though saying "whom" all the time might seem like it sounds smart, it is not always correct. :]
Leosch - Saturday, July 2, 2016 - link
You're right, "some sites" is in the nominative case there.
DonMiguel85 - Saturday, July 2, 2016 - link
Yeah, it's a shame we won't see a MacBook Pro with Pascal anytime soon, since that would have awesome battery life, but Polaris is the next best thing at this point. Maybe Apple likes their compute prowess for some reason too.
Geranium - Saturday, July 2, 2016 - link
Apple uses OpenCL, for which NVIDIA's support is not that good. Maybe that's why we may not see Pascal in the MacBook Pro.
FriendlyUser - Saturday, July 2, 2016 - link
Slight undervolt, reduce the power limit of the card so it hits max boost slightly less often. Strict compliance will be assured, but at the cost of maybe 2-5% less performance.
I highly suspect the 8GB of fast GDDR5 is the culprit. They should have stayed with 4GB only. There's no need for 8GB at the resolutions this card can manage.
jameskatt - Saturday, July 2, 2016 - link
Since the AMD GPU doesn't have an external power supply like nVidia uses, will it fry your motherboard and potentially your expensive Intel CPU?
Oxford Guy - Saturday, July 2, 2016 - link
I'm sure it will -- just before the sky falls.
D. Lister - Saturday, July 2, 2016 - link
It COULD fry a cheap motherboard, or even a decent one if overclocked. The CPU getting damaged is unlikely. Although I wouldn't want to put an SSD in such a system without making sure the SSD has power failure protection. Still, AMD says it will be fixed in a couple of days, so let's see how it turns out.
HollyDOL - Sunday, July 3, 2016 - link
Tbh, AMD made a terrible decision with the 6-pin power plug... If they had just used an 8-pin and said 150W average with peaks under stress at 170W, all would be fine; at least they wouldn't be overloading other components.
D. Lister - Sunday, July 3, 2016 - link
An 8-pin connector would certainly be welcome, especially for a stable OC, but all by itself it won't be able to solve the 480's problem.
The core issue here is not a new one for AMD: bad software, in this case at the BIOS level. They (AMD) are hinting that a simple driver patch will fix this, but they may eventually have to offer a new BIOS version altogether, with proper voltage distribution and more pragmatic speeds.
At the end of the day, the 480 could actually end up exceeding Maxwell in performance/watt (irony), while having to settle somewhere between the 960 and the 970 in raw performance. The questions that are bothering me at this point are: how much of the 480's MSRP will have to be sacrificed over this? And what impact will that sacrifice have on RTG's future plans?
HollyDOL - Sunday, July 3, 2016 - link
Let's think about it...
Unless there is a BIOS patch that rewrites the power profiles, the drivers themselves won't be able to do that much. A likely scenario is lowering max frequencies, but I am not confident that alone would take 15W out of the budget. So maybe we are looking at a combination of slightly reduced voltages and frequencies. This is probably the least hurtful scenario.
The question remains though... why did they take that much over PCIe in the first place? Yep, the 6-pin is capped at 75W, but practically speaking the majority of PSUs today have a single rail for the whole 12V supply; unless you had a very bad one, 5W more through such a big pin would hardly be a stress, even if it is above spec. Otoh, I am sure I wouldn't want that overload running over my motherboard; it's probably within the safety margin, but we are talking about long-term continuous stress, which is asking for trouble. And honestly, what does the average gamer do when he gets a shiny new toy to play with? Put it in, take a vacation and game a week in a row; long-term stress provided :-) I still wonder how the card passed the specification certification tests. What worries me is the chance that the excessive power load was driven through the PCIe slot intentionally, since it's much harder to detect there than on a simple 6/8-pin power cord, with AMD knowing about the issue and sneaking it in with "we'll fix it with the next driver and they'll never notice".
D. Lister - Sunday, July 3, 2016 - link
"The question remains though... why did they take that much over PCIe in the first place... yep, 6pin is 75W capped, but practically majority of PSUs today have single rail for whole 12V, unless you had a very bad one, 5W more per such a big pin would be hardly a stress, even when it is above specs."Of course, the 6p/8p PEG cables take the power straight from the PSU and their (comparatively) much thicker twisted copper wires have a fairly massive headroom of hundreds of watts. The PCI express slots are primarily a data transfer interface, and take their power from the very fine traces printed on the motherboard. These parts are not made for sustaining such electrical abuse, and every hardware manufacturer is expected to know this very basic stuff.
"I still wonder how they passed specification tests certification for the card."
There are always loopholes to be found to circumvent standards and regulations, if one is motivated enough to look for them. Hopefully there would be necessary amendments and added vigilance, to ensure nobody tries something like this again.
"What worries me is the chance the excessive power load ran via PCIe was driven through there intentionally"
The only rational alternative is that the engineers at AMD are unaware of the basics of electronics, which is far less likely. Probably the seriousness of the consequences of this design choice didn't travel up the chain of command with the necessary impact.
"AMD knowing about the issue and sneaking it in with "we'll fix with next driver and they'll never notice""
...or, "we'll gradually normalize it in the later production batches, and by the time there is a noticeable difference in framerates, Vega would be ready for launch and the voices of the naysayers would go unheard in the festivities." Besides, even if someone pointed to a performance decrease and didn't get immediately destroyed by the fanboys, AMD could always find a scapegoat, like some Windows update, or developers favoring Nvidia, or this or that or the freakin' aliens. :(
blzd - Saturday, July 2, 2016 - link
Any type of "fix" for this would need to reduce performance right? To lower the power draw somewhat.Is it safe to say that AMD is somewhat cheating their power numbers to make them look more efficient?
I believe it's safe to say nVidia does the same when their board partners list their cards (GTX 970 and 980) at only the reference power usage, when they are in fact highly overclocked cards using multiple power connectors and drawing much, much more.
Oxford Guy - Saturday, July 2, 2016 - link
My assumption is that fixing a problem with transient spikes is hardly in the same league as the 970 VRAMgate.
The market rewarded Nvidia for its cheating by making the 970 a best-seller. If people get upset about corporations cheating their customers, they should look in the mirror.
To this day, Nvidia still does not list the proper specifications for the 970 on its website.
D. Lister - Saturday, July 2, 2016 - link
"My assumption is that fixing a problem with transient spikes is hardly in the same league as the 970 VRAMgate."Your statement is correct, although the problem here isn't transient spikes, but a sustained overload. Still, unlike that 970 nonsense by Nvidia, this is fixable, at a slight performance cost.
Yojimbo - Sunday, July 3, 2016 - link
I think you guys have things backwards.
The VRAM situation of the 970 had no effect on the performance of the card. The benchmark results published by review sites were all accurate and true. After the issue came to light, despite much searching, no one could verify instances at playable frame rates where the card was hamstrung by its memory configuration. Why? Because the memory system of the card was engineered to be that way, and the card was well-engineered and balanced. It wasn't memory bandwidth or memory capacity bound under playable conditions. The only error was not properly informing the review sites of the memory architecture.
This RX 480 power draw controversy is a completely different, and much more serious, situation. AMD also failed to notify people of the situation, a situation which includes the total power draw of the card exceeding the TDP for extended periods of time and the power draw through the PCIe slot exceeding the specifications of the standard (whether AMD knew of the situation or not). However, this situation is both potentially damaging to a user's system and potentially invalidates the published benchmarks of the card. We'll know more in the next few days about how much of an issue the latter is, but we already know the former is a concern.
D. Lister - Sunday, July 3, 2016 - link
"The VRAM situation of the 970 had no effect on the performance of the card."My point was merely that this issue, problematic or not, was not something that could be sorted out with a patch, without redesigning the chip, or at least its memory subsystem. Which I suppose they eventually did, with Pascal.
"This RX 480 power draw controversy is a completely different, and much more serious, situation. "
No argument there (please see my earlier post http://www.anandtech.com/comments/10465/amd-releas... ). The problem is serious, but it is most likely fixable with a software update.
Yojimbo - Sunday, July 3, 2016 - link
But the 970 didn't have a problem at all. The only mistake made was a misrepresentation of the specs handed out in the press packet. If the correct information had been handed out, people would have said "oh, that's clever" and hardly anyone would have given a shit.
vladx - Saturday, July 2, 2016 - link
If there was a similar problem with nVidia cards it would've been reported by now, just like the '3.5GB' debacle was discovered and shown to the entire world.
Oxford Guy - Sunday, July 3, 2016 - link
As I recall, it took several months for the enthusiast community to discover the problem. Meanwhile, Nvidia raked in cash from "4 GB" 970 sales. It seemed to be a particularly good value for SLI at the time, since it was supposed to have the same amount of high-speed VRAM as the overpriced 980.
After enthusiasts discovered the performance inconsistency on their own, Nvidia admitted the truth via this site -- which ran an article (treating as credible the extremely hard-to-believe story that the engineering team came up with the design all by themselves and told no one) but still refuses to list the proper specification on its website.
brunosp - Saturday, July 2, 2016 - link
No problem https://www.youtube.com/watch?v=elgIUK0ond0
brunosp - Saturday, July 2, 2016 - link
http://wccftech.com/article/radeon-rx-480-reducing...
tipoo - Saturday, July 2, 2016 - link
Interesting, I hadn't known before that it only impacts the 8GB (and therefore 8Gbps) cards. Did people test the 4GB and find there was no issue? That's probably why there was so much confusion over whether this was normal or not.
And any changes to performance after the patch will definitely be interesting. Even if it goes fine, though, what a PR mess.
HomeworldFound - Saturday, July 2, 2016 - link
It's not that much of a PR mess, only as big as people like yourself make it. In six months' time it'll only matter to people trying to justify their purchases to themselves, and to those people who spend more time arguing over which company is better than they spend using their graphics cards.
Michael Bay - Sunday, July 3, 2016 - link
>burnt sockets
>not a mess
I guess in amdworld it isn't.
Oxford Guy - Sunday, July 3, 2016 - link
Have any legitimate professional reviewers had this happen, or just random people in forums who could easily be astroturf?
D. Lister - Sunday, July 3, 2016 - link
Pro reviewers usually have high-quality test systems and much greater experience with hardware setups compared to the average buyer of a budget GPU. Hence it is much less likely that one of the reviewers would end up being the one with the damaged equipment.
Nonetheless, this link is to the official AMD forum, and the guy has provided pictures:
>> https://community.amd.com/thread/202410
fanofanand - Tuesday, July 5, 2016 - link
He also overclocked the card and played newer AAA titles for 7 hours straight. At least be honest about what you are linking.
D. Lister - Tuesday, July 5, 2016 - link
"He also overclocked the card and played newer AAA titles for 7 hours straight. At least be honest about what you are linking."So I am dishonest because I did not add details completely irrelevant to my point, that in your opinion, may have painted AMD in a slightly better light? lol
HomeworldFound - Monday, July 4, 2016 - link
A few isolated cases, probably people pushing hardware too far who can't admit they've fried it.
I recall when the 3770K came out: two reviewers burnt their socket with too much voltage and one received a melted motherboard. It makes it onto Anandtech and suddenly it's a massive issue, every 3770K can burn its socket, it's a complete disaster and Intel should do a recall. Sigh.
HomeworldFound - Monday, July 4, 2016 - link
I haven't had an AMD product since the 9800 Pro. I have my own mind. I love it when people make accusations; it just makes people like Michael Gay look bad.
Michael Bay - Tuesday, July 5, 2016 - link
I love your cries of pain, please continue.
Geranium - Saturday, July 2, 2016 - link
I think the problem is related to an inefficient VRM. Do retail cards also have this kind of problem?
DanNeely - Saturday, July 2, 2016 - link
Do you really want to add something that would increase the amount of work needed to do a review - and thus how long after release it's published - even more?
MapRef41N93W - Saturday, July 2, 2016 - link
"Unprecedented 8Gbps for GDDR5" LOL.......... Typical AMD idiotic statement.D. Lister - Sunday, July 3, 2016 - link
Shhhh, nobody tell them about the 1070, it's funny this way. :p
Weyoun0 - Sunday, July 3, 2016 - link
AMD has excelled at frying motherboards for decades. Why stop now?
atlantico - Sunday, July 3, 2016 - link
Nvidia has excelled at bricking their GPUs with their shoddy drivers for decades, when they're not BSODing the system. Why stop now?
Weyoun0 - Sunday, July 3, 2016 - link
Yes, deflecting the issue will fix the problem. Typical AMD tactic. Although you may not be an AMD rep, I have seen similar nonsense come from AMD reps when confronted about serious issues. This type of stubbornness and arrogance has infected AMD from top to bottom, and it is why AMD is a failing company.
Weyoun0 - Sunday, July 3, 2016 - link
So many times I have seen customer complaints to AMD about serious issues, and they just say, "well, look at Intel/Nvidia doing this and that," even though my issue is with an AMD product. Are they suggesting I buy Intel/Nvidia instead? Maybe they are. If I kill someone and tell the judge "Charles Manson killed more," will he let me off the hook?
Oxford Guy - Sunday, July 3, 2016 - link
However, when consumers reward bad behavior from corporations, as they did with the 970, it creates a climate where corporations feel that they can take advantage of people.
Nvidia's false specs/VRAM gambit paid off nicely. The market rewarded their corruption.
Yojimbo - Sunday, July 3, 2016
The vast majority of the GTX 970 cards that were sold were sold after the true memory subsystem architecture was well-known. What consumers rewarded was a well-engineered and highly valuable (from the consumers' point of view) card.
You can believe that NVIDIA purposefully tried to trick people into buying the card if you wish, but that narrative really doesn't make any sense. Don't throw it around like it's fact instead of your own (refutable) speculation.
Oxford Guy - Tuesday, July 5, 2016
"The vast majority of the GTX 970 cards that were sold were sold after the true memory subsystem architecture was well-known."1) Red herring
2) Not "well-known"
Nvidia still hasn't corrected the specs on its own site. Many enthusiasts did not/do not understand the tech issue or the business ethics implications well.
"What consumers rewarded was a well-engineered and highly valuable (from the consumers' point of view) card."
Your spin won't change the fact that SLI sales in particular were extra-strong because the card was supposed to have the same amount of VRAM as the overpriced top model. Nor will it change the fact that the 970 was sold under false pretenses, a bait and switch, and continues to be.
As for speculation, I am wondering how anyone can possibly believe the story that the company's engineers created a product without anyone in management knowing what they were making and what ended up being made. Sure, let's pretend that Nvidia's management is so useless that they don't direct product design, oversee their staff, or bother to know what their staff has made to sell. I have some nice swampland in Florida you may find highly valuable.
Weyoun0 - Tuesday, July 5, 2016
A red herring is trying to blame Nvidia for AMD's spectacular failures.
Weyoun0 - Tuesday, July 5, 2016
Always leave it up to AMD religious zealots to turn an article about AMD into unrelated Nvidia bashing.
fanofanand - Tuesday, July 5, 2016
You will never convince the Nvidia employees who post here; it's not even worth trying.
Michael Bay - Sunday, July 3, 2016
The market rewarded nV for a great card. Damn, looking at the 480 here, it still is!
StrangerGuy - Tuesday, July 5, 2016
NV lied about nothing. Before and after the discovery, the 970 was still technically a 56 ROP/4GB card, and the performance remained unchanged.
Meanwhile AMD is stupid enough to launch an out-of-spec card that is valid grounds for a recall, hoping no one cared or noticed. Already at a pitiful 20% market share with terrible PR, and they still pull crap like this, but hey, it's all NV's fault.
Mottoman216 - Tuesday, July 5, 2016
@StrangerGuy Umm... no. The original specs claimed 64 ROPs, 2MB of L2 cache, and 4GB of usable full-speed memory.
Real specs: 56 ROPs, 1.75MB of L2 cache, and 3.5GB of full-speed memory. The other 0.5GB runs at roughly 1/7th the speed, so most games with an optimized driver won't even go over that 3.5GB threshold.
Do a little more research before you try and act like a white knight.
http://www.anandtech.com/show/8935/geforce-gtx-970...
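For anyone who wants to sanity-check that 1/7th figure, here's a minimal back-of-the-envelope sketch in Python. It assumes the configuration described in the AnandTech article linked above: a 256-bit bus of 7 Gbps GDDR5, split as seven 32-bit controllers behind the fast 3.5GB segment and one behind the slow 0.5GB segment.

# GTX 970 memory bandwidth split: 256-bit bus at 7 Gbps GDDR5, partitioned
# as 7x32-bit (fast 3.5GB segment) + 1x32-bit (slow 0.5GB segment).
GBPS_PER_PIN = 7                              # GDDR5 data rate per pin
BUS_WIDTH_BITS = 256                          # total memory bus width

total_bw = GBPS_PER_PIN * BUS_WIDTH_BITS / 8  # 224 GB/s across the whole bus
fast_bw = total_bw * 7 / 8                    # 196 GB/s for the 3.5GB segment
slow_bw = total_bw * 1 / 8                    # 28 GB/s for the 0.5GB segment

print(f"fast: {fast_bw:.0f} GB/s, slow: {slow_bw:.0f} GB/s")
print(f"slow segment runs at 1/{fast_bw / slow_bw:.0f} the speed of the fast one")

That's where the 1/7th comes from: 28 GB/s measured against the 196 GB/s fast segment, not against the full 224 GB/s figure on the spec sheet.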
Mottoman216 - Tuesday, July 5, 2016
Oh, and don't get me wrong... at the time the GTX 970 was a good card for the price... NVIDIA flat out lied their asses off to sell more on pre-order. They knew they could get away with it by blaming their PR dept.
D. Lister - Wednesday, July 6, 2016
I agree sir, Nvidia lied, no question. The only thing that could possibly make them worse is if they lied AND didn't deliver on performance.
tipoo - Sunday, July 3, 2016
Skip to 30 minutes, and then to 53 minutes, but this guy explains what's going on with the power circuitry: https://www.twitch.tv/buildzoid/v/75850933
That's what I feared: it's physically wired that way, such that the board essentially sees no difference between PCIe power and 6-pin power, so I wonder what any software or even video BIOS fix could possibly "fix".
Why, AMD, why... Anyway, the gist of all this is: just get a third-party board with a better power system (and a better cooler while you're at it).
We need a roundup of which third-party boards do what with the power, and whether any follow AMD off the cliff.
vladx - Sunday, July 3, 2016
So you still reward AMD for crap like this? I personally will try to stay as far away as possible from AMD products, and the same goes for systems I build for other people.
Gigaplex - Sunday, July 3, 2016
Worst case scenario, software or BIOS could underclock the card to reduce total power.
tipoo - Monday, July 4, 2016
That's the thing: if it's hard-wired to draw 50/50 from the PCIe slot and the 6-pin, reducing the total power to the advertised TDP isn't enough, and going below it would probably hurt performance. The only solution that would work is drawing less from the slot and more from the 6-pin, but it's not yet clear whether that's possible.
prisonerX - Tuesday, July 5, 2016
It's not hard-wired. It's easy to fix via software; they just reprogram the onboard IR3567B controller.
Yojimbo - Monday, July 4, 2016
It's supposed to be a 150W TDP card. Even if it were wired for 50/50 power draw, is there any problem with that if the slot can handle 75W? If they tune the card to make sure there isn't sustained power draw above 150W, then there won't be sustained power draw above 75W on the PCIe slot.
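This sub-thread really comes down to one number, so here is a rough per-rail sketch in Python. It assumes the roughly 50/50 split described in the stream linked above, treats the ~165W total draw reported by other sites as illustrative, and uses the PCIe CEM limit of 5.5A on the slot's +12V pins (66W; the familiar 75W figure includes a 3.3V rail that a GPU barely uses). The 1/3 slot share is a hypothetical rebalanced split, not AMD's actual fix.

# Per-rail arithmetic for the RX 480 question: how much 12V power the PCIe
# slot supplies for a given total board power and slot/6-pin split.
PCIE_SLOT_12V_LIMIT_W = 5.5 * 12   # 66W: the CEM spec's +12V allocation

def slot_draw_w(total_board_power_w, slot_share):
    # 12V power pulled through the slot under a fixed split.
    return total_board_power_w * slot_share

for total in (150, 165):           # rated TDP vs. ballpark measured draw
    for share in (0.5, 1 / 3):     # hard 50/50 split vs. a rebalanced split
        draw = slot_draw_w(total, share)
        verdict = "OK" if draw <= PCIE_SLOT_12V_LIMIT_W else "over spec"
        print(f"{total}W total, {share:.0%} from slot -> {draw:.1f}W ({verdict})")

Even a card held to its rated 150W puts 75W on the slot's 12V pins under a 50/50 split, above the 66W allocation, which is why shifting load toward the 6-pin matters more than simply capping total power.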
pavag - Sunday, July 3, 2016
Just minutes ago I found a GPU review on AnandTech from 19 years ago (for a video card named the Hercules Thriller 3D). Couldn't resist making the first comment!
I beat all of you, even 19 years after the article was published.
Gastec - Tuesday, September 6, 2016
I'd rather have a "Vote delete" button for the ones who brag about being the first to post a comment on a web page. But yeah, I understand your irony.
MATHEOS - Monday, July 4, 2016
Anand, come back, this is a shame.
sonicmerlin - Tuesday, July 5, 2016
It's Tuesday... so where are AMD's comments?
HighTech4US - Tuesday, July 5, 2016
M I A
vladx - Wednesday, July 6, 2016
Yep, AMD decided to sweep it under the rug, hoping most customers are too ignorant to know what blew their motherboards.
wow&wow - Tuesday, July 5, 2016
As you know, we repeatedly launch our GPU products with an "oops," e.g. the noisy power component of the Fury last year. Recently, a few outsiders found and told us about select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, some customers with LUCK may not have the problem, so it all depends on whether you are a LUCKY person or not; or praying before buying may help too.
prisonerX - Tuesday, July 5, 2016
Wow, so much hysteria over a simple bug in a new product. The AMD haters are rabid and certifiable.