extide - Tuesday, May 28, 2019 - link
Oh boy, looks like this will have PLX chips and thus be... super expensive. Otherwise looks like a great board.
DanNeely - Tuesday, May 28, 2019 - link
I think it could be done without a PLX. Do the 4 x16's all off the CPU, breaking down to 4/4/4/4 if all are used, m.2 #1 off the CPU, m.2 #2 and #3 off the south bridge, and then have 4 PCIe lanes left for misc onboard controllers.
The big gotcha there is that using a single addon card drops the GPU to only an x8. In reality this won't matter, but part of the internet rage brigade would still freak out. A single x4 to x8 PLX on the chipset would avoid that, but the rage brigade would instead freak out at the cost.
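For a sense of the arithmetic behind that layout, here is a minimal sketch assuming the usual 24-lane Ryzen 3000 budget; the bifurcation table is only an illustration of the split described above (whether the CPU will actually bifurcate that far is a separate question, as the next reply notes):

```python
# Rough sketch of the switchless layout described above; the bifurcation
# table is an assumption for illustration, not a confirmed board layout.
# Ryzen 3000 on AM4: 16 slot lanes + 4 NVMe lanes + 4 lanes to the chipset = 24.

def slot_widths(cards_installed):
    """Split the 16 CPU slot lanes across however many x16 slots are populated."""
    layouts = {1: [16], 2: [8, 8], 3: [8, 4, 4], 4: [4, 4, 4, 4]}
    return layouts[cards_installed]

for n in range(1, 5):
    widths = slot_widths(n)
    print(f"{n} card(s): " + "/".join(f"x{w}" for w in widths),
          f"(uses {sum(widths)} of the 16 slot lanes)")

# The remaining 8 CPU lanes: x4 to M.2 #1 and an x4 uplink to the X570 chipset,
# which would then fan out to M.2 #2/#3 and the misc onboard controllers.
```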
Death666Angel - Tuesday, May 28, 2019 - link
The x16 coming from the CPU can't be bifurcated beyond x8/x8.
mattlach - Tuesday, May 28, 2019 - link
If you are splitting them to 4x lanes, then they are no longer x16 :p
I'm watching this one. I need 16x for one GPU, two 4x NVMe drives and the ability to hook up one 8x Gen2 NIC. This is the only board thus far that seems like it may have even a chance of doing that. If it has a PCIe switch, I may just have to pay for it. I consider Ryzen useless without it. I still can't believe they handicapped the platform with so few PCIe lanes.
12 cores puts it squarely in the HEDT space, and without a new Threadripper announced, it is essentially AMD's latest HEDT offering, but without HEDT features like a 40+ PCIe lane count.
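Adding that wish list up against what AM4 exposes shows why a switch matters here; a quick back-of-envelope sketch using only the device widths listed above:

```python
# mattlach's device list vs. the AM4 CPU lane budget (rough arithmetic only).
devices = {"GPU": 16, "NVMe #1": 4, "NVMe #2": 4, "Gen2 NIC": 8}
wanted = sum(devices.values())   # 32 lanes of devices
available = 16 + 4               # 16 slot lanes + 4 NVMe lanes off the CPU
print(f"lanes wanted: {wanted}, general-purpose CPU lanes: {available}")
print(f"shortfall: {wanted - available} lanes, hence a PCIe switch, chipset lanes, or narrower links")
```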
rpg1966 - Tuesday, May 28, 2019 - link
He obviously meant x16 physical.
Are you saying you need x16 PCIe 4.0?
mattlach - Wednesday, May 29, 2019 - link
Nope. I don't care about Gen4 at all. I have no Gen4 devices.
8x Gen4 has the same amount of bandwidth as 16x Gen3, sure, but if you stick a 16x Gen3 GPU in an 8x Gen4 slot, guess what you get? 8x Gen3. Not 8x Gen4.
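Roughly speaking, a link trains to the lower generation and the narrower width of its two ends, which is why the Gen3 card lands at Gen3 x8. A small sketch with approximate per-lane throughput figures:

```python
# Approximate usable bandwidth per lane, per direction, in GB/s (after encoding overhead).
GB_PER_LANE = {2: 0.5, 3: 0.985, 4: 1.969}

def negotiated_link(device_gen, device_width, slot_gen, slot_width):
    """A link trains to the lower generation and the narrower width of the two ends."""
    gen = min(device_gen, slot_gen)
    width = min(device_width, slot_width)
    return gen, width, round(GB_PER_LANE[gen] * width, 1)

print(negotiated_link(3, 16, 4, 16))  # Gen3 x16 GPU in a Gen4 x16 slot -> (3, 16, 15.8)
print(negotiated_link(3, 16, 4, 8))   # same GPU in a Gen4 x8 slot      -> (3, 8, 7.9)
print(negotiated_link(4, 16, 4, 8))   # a Gen4 x16 card in that x8 slot -> (4, 8, 15.8)
```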
Kevin G - Tuesday, May 28, 2019 - link
They may not be PLX but Microsemi. They have been sampling PCIe 4.0 bridge chips for a while now and should already be in mass production.
FreckledTrout - Tuesday, May 28, 2019 - link
That big shroud going from the IO panel to cover some of the VRMs might have a PLX chip under the covers; it certainly is large enough space-wise.
3DoubleD - Tuesday, May 28, 2019 - link
There probably should be a bunch of asterisks next to the 4x PCIe 4.0 x16 and 3x M.2 Gen4 slots in the tech specs. The PCIe slots will run at less than x16 if more than the first PCIe x16 slot or M.2 slot is used. Add the 10 GbE card and fill all 3 M.2 PCIe 4.0 slots and you'll be down to PCIe 4.0 x4 on the top GPU slot (unless one or more of those M.2 slots are sharing the x4 PCIe 4.0 lanes going to the chipset...). Of course, if your #2 and #3 M.2 drives are idling and you have minimal 10 GbE network traffic, you should see more than x4 PCIe 4.0 bandwidth on the top slot GPU thanks to the magic of PLX chips... it's a very dynamic situation sharing 16 PCIe lanes.
Overall, it's a way better situation than when this was all PCIe 3.0 (double the bandwidth for everything), but evidently 24 lanes still require trade-offs. Sure, AMD will sell you a 12C/24T (and soon 16C/32T) CPU on AM4, but HEDT and consumer platforms will remain truly differentiated on PCIe lanes. Of course, if you can afford to fill out all of those PCIe and M.2 slots, the extra price for the HEDT platform is minuscule.
Anyway, not really a complaint about the number of PCIe lanes on AM4, just how they are marketing the board. It might appear like a high end HEDT board with 4x PCIe x16 slots for cheap, but it is not, and this is why Threadripper and the HEDT platform isn't going anywhere.
3DoubleD - Tuesday, May 28, 2019 - link
I should add that it IS very cool that you can have a PCIe 4.0 x8 GPU and THREE M.2 PCIe 4.0 x4 drives running at full speed + all of your chipset I/O unaffected.
You could probably throw the 10 GbE card in there and suffer very little performance degradation to your other PCIe devices, since it actually requires less than a full PCIe 4.0 x1 link for full speed operation. If the PLX chip is efficiently multiplexing the signal, you wouldn't notice much of a hit on the other devices.
This very flexible, efficient usage of lanes with a PLX chip is in stark contrast to the non-PLX situation where you have dedicated PCIe lane allotments for the expansion card slots (e.g. x8, x4, x4) and occupying additional M.2 slots requires running at less than PCIe 4.0 x4 and disabling chipset features (such as SATA ports).
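To put approximate numbers on the 10 GbE point:

```python
# Why a 10 GbE NIC barely dents a PCIe 4.0 link (approximate, per direction).
ten_gbe = 10 / 8      # 10 Gb/s line rate is about 1.25 GB/s
pcie4_x1 = 1.969      # usable GB/s of one PCIe 4.0 lane
pcie3_x1 = 0.985      # usable GB/s of one PCIe 3.0 lane
print(f"10 GbE needs ~{ten_gbe:.2f} GB/s")
print(f"PCIe 4.0 x1 offers ~{pcie4_x1:.2f} GB/s, so a single Gen4 lane is already enough")
print(f"PCIe 3.0 x1 offers ~{pcie3_x1:.2f} GB/s, so a Gen3 NIC needs x2 to avoid being the bottleneck")
```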
Jorgp2 - Tuesday, May 28, 2019 - link
The 10G card is most likely PCI-E 3, so it will take two lanes.
3DoubleD - Tuesday, May 28, 2019 - link
Well that is interesting, because if it is indeed PCIe 3.0, then yes, it will use 2 of the 8 physical lanes to the PLX chip, but I believe the PLX chip will then multiplex that traffic into the PCIe x16 link to the CPU, if that is indeed how those chips work (they seem to be a bit of a black box; there was an AnandTech article ages ago that speculated they work along the lines of a multiplexer chip).
If that is the case, using a PCIe 2.0 or 3.0 device doesn't necessarily rob you of PCIe bandwidth to the CPU if you use a PLX chip. In contrast, if you have a board without a PLX chip and dedicated lanes directly from the CPU, unused ports or non-PCIe 4.0 devices would technically be wasting available bandwidth.
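If the chip really does forward traffic rather than dedicating lanes end to end, the uplink-side arithmetic would look roughly like this; this is purely a sketch of that assumption, since the internals of these bridge chips aren't documented publicly:

```python
# Sketch of the assumption that the switch forwards traffic rather than dedicating
# lanes, so a slower downstream device only occupies its actual bandwidth on the
# Gen4 x16 uplink to the CPU.
GB_PER_LANE = {2: 0.5, 3: 0.985, 4: 1.969}
uplink = 16 * GB_PER_LANE[4]         # about 31.5 GB/s to the CPU

def worst_case_uplink_share(gen, width):
    """Most bandwidth a downstream device can occupy on the shared uplink."""
    return GB_PER_LANE[gen] * width

nic = worst_case_uplink_share(3, 2)  # 10 GbE card linking at Gen3 x2, per the comment above
print(f"10 GbE card occupies at most ~{nic:.1f} GB/s of a ~{uplink:.1f} GB/s uplink "
      f"({100 * nic / uplink:.0f}% in the worst case)")
```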
kithylin - Saturday, June 8, 2019 - link
I'm sorry but you're incorrect, and I had to correct you before someone else reads your comment and becomes confused. The MSI MEG X570 Godlike runs a PCIe 4.0 bridge chip. It will run at 16x-16x-8x-4x, even while using the two onboard m.2 ports that come off the CPU.
With this motherboard we can run two video cards @ 16x + 16x, while also running MSI's included dual-m.2 (4x+4x) card in the 3rd PCIe slot @ 8x, while also running the 10 gigabit ethernet card @ 4x in the 4th slot, while also using the two onboard PCIe 4.0 x4 m.2 slots, all at the same time. That includes up to 4 independent PCIe 4.0 NVMe drives in RAID at the same time.
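One thing worth adding: everything in that list can link at full width simultaneously because it hangs off the bridge chip, but the bridge still has a single x16 Gen4 uplink to the CPU, so simultaneous heavy traffic ends up sharing it. A rough sketch (which devices sit behind the switch versus on the CPU's own lanes is my assumption):

```python
# Downstream widths in the scenario above vs. the bridge chip's x16 Gen4 uplink.
# Assumes the four expansion slots hang off the switch and the onboard M.2 slots
# use the CPU's own lanes; the exact routing is an assumption.
GEN4_PER_LANE = 1.969   # approximate usable GB/s per lane, per direction

slots = {"GPU #1": 16, "GPU #2": 16, "dual-M.2 riser": 8, "10 GbE card": 4}
linked = sum(slots.values())   # 44 lanes of devices linked at full width
print(f"slot lanes linked behind the switch: {linked}, uplink lanes to the CPU: 16")
print(f"peak simultaneous demand: ~{linked * GEN4_PER_LANE:.0f} GB/s "
      f"vs ~{16 * GEN4_PER_LANE:.0f} GB/s of uplink")
# Every device trains at full width, but when several push data at once they
# contend for the shared uplink and the switch arbitrates between them.
```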
Cullinaire - Tuesday, May 28, 2019 - link
Nice scowl :)
creed3020 - Thursday, May 30, 2019 - link
I caught that too!