Acer’s premier gaming brand, Predator, is all about maximizing performance. In the modern era, that extends beyond gaming into content creation, streaming, video editing, and all the other workloads that drive the need for high performance. As we’ve seen several times over the years, just throwing more cores at the problem isn’t the solution: bottlenecks appear elsewhere in the system. Despite this, Acer is preparing a mind-boggling solution.

The Acer Predator X is a new dual-Xeon workstation, with ECC memory and multiple graphics cards, announced today at IFA 2018. The premise of the system is the multi-tasker who does everything: gaming, content creation, streaming, the lot. With this being one of Acer’s flagship products, we expect it to be specced to the hilt: maximum cores, maximum capacity. Therein lies the first rub: if Acer is going all out, this is going to cost something crazy:

  • 2 x Intel Xeon Platinum 8180 ($10,009 x 2)
  • 12 x 16 GB ECC RDIMM ($200 x 12)
  • 2 x NVIDIA Quadro RTX 8000 ($000s x 2)
  • Some Storage
  • Some big power supply
  • Some custom chassis

Of course, Acer is aiming this product at the next generation of processors (read: Cascade Lake-SP), and so none of the specifications have been locked in yet. However, there’s a fundamental aspect of dual-CPU systems that needs to be addressed.

Dual-CPU systems have what is known as Non-Uniform Memory Access (NUMA): although each CPU has direct access to its own pool of memory, without a NUMA-aware operating system or software in place, memory for a process running on one CPU can be allocated in the pool attached to the other CPU, adding latency to every access as requests traverse the socket-to-socket interconnect. We tested this way back in 2013, and the situation has not improved since. Most software assumes all the cores and memory are identical, so that additional latency causes performance to tank. Tank hard.
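On Linux, the usual mitigation is to pin a process to one socket’s cores before it allocates its working set, so that the kernel’s default first-touch policy places the pages in that socket’s local memory. A minimal sketch of the idea — the 28-cores-per-socket mapping is an assumption about how a dual Xeon 8180 box might enumerate, not a guarantee; real topology comes from `lscpu` or `numactl --hardware`:

```python
import os

# Assumed topology: cores 0-27 on socket 0, cores 28-55 on socket 1
# (one plausible enumeration for a dual 28-core Xeon 8180 machine).
SOCKET0_CORES = set(range(0, 28))

# Restrict this process to socket 0 *before* allocating the working
# set; fall back to whatever cores are actually available. Under
# Linux's first-touch policy, pages touched afterwards are backed by
# the local node, avoiding the remote-access latency described above.
target = (SOCKET0_CORES & os.sched_getaffinity(0)) or os.sched_getaffinity(0)
os.sched_setaffinity(0, target)

working_set = bytearray(16 * 1024 * 1024)  # touched here -> local pages
```

A Windows equivalent would go through `SetProcessAffinityMask` in the Win32 API rather than `sched_setaffinity`.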

Back in those 2013 articles, even scientific software was not built for multi-CPU hardware, and often performed worse than a single CPU with fewer cores. More recently, we’ve seen even single-socket systems with a NUMA-like environment, such as the 32-core Threadripper 2, show performance deficits against monolithic solutions. Only in very specific scenarios (lightweight ray tracing being the best example) does performance improve.

When I approached the person Acer put on stage to promote this new hardware for the Predator brand about these issues, he didn’t really have a clue what I was talking about. At first he confused it with having ECC, and describing the difference between bandwidth and latency seemed to go nowhere. If Acer wants to promote this as a Windows machine, which I’m 99.9% sure it will, it really needs some software wrapper in place to enumerate cores and apply core affinity. Otherwise people will shell out a lot of money for, in many cases, worse performance.
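Such a wrapper needn’t be complicated. A hedged sketch of the concept, using Linux’s `sched_setaffinity` for illustration (a Windows build would call `SetProcessAffinityMask` via the Win32 API instead; the core set and the `sleep` command are placeholders standing in for a real game executable):

```python
import os
import subprocess

def launch_pinned(cmd, cores):
    """Launch cmd restricted to the given CPU cores, keeping its
    threads (and, via first-touch, its memory) on one NUMA node."""
    proc = subprocess.Popen(cmd)
    os.sched_setaffinity(proc.pid, cores)  # pin before it ramps up
    return proc

# Placeholder command standing in for a game; pin it to whatever
# subset of the first eight cores this machine actually exposes.
cores = set(range(8)) & os.sched_getaffinity(0)
game = launch_pinned(["sleep", "0.1"], cores)
game.wait()
```

A production version would carry a per-game list of executable names, as one commenter below suggests, and would also need to route GPU work to the PCIe slots hanging off the same socket.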

But hey, maybe Acer is going after the VM gaming market? Right?

One thing I was told is that Acer will be offering configurable variants. So you might be able to use a pair of Xeon Silver instead. Or remove that piece of 'leather' from the front of the chassis.

Comments Locked



  • TheDoctor46 - Wednesday, August 29, 2018 - link

    Lol! Loved the way u described it, esp the leather thing;) what were they thinking!
  • boozed - Wednesday, August 29, 2018 - link

    Just the perfect amount of sarcasm
  • xchaotic - Wednesday, August 29, 2018 - link

    The situation has improved a lot on the software front, but I agree it's pretty hopeless / useless for gaming. This is for people with more money than sense.
  • Valantar - Wednesday, August 29, 2018 - link

    This would _barely_ make sense if they limited it to 6-8-core high clock speed CPUs (which ... well, Xeons aren't the latter, at least) and had software in place that automatically limited, say, CPU1 to games only (with an exhaustive list of games and their process and executable names, I suppose) and left everything else to CPU0. Of course you'd also (likely) need to make sure that the GPU(s) is(/are) seated in PCIe slots linked to the correct CPU at that point, making everything even more complicated. No thanks. Wouldn't buy this even if I could afford it.
  • alpha754293 - Wednesday, August 29, 2018 - link

    Depends on the motherboard.

    Supermicro boards are super robust (plus the built-in IPMI is nice so that you don't need a separate, external KVMoIP).

    It's too bad that most programs are actually written for multi-threading (e.g. OpenMP/SMP) rather than truly multi-processing (MPI).
  • RSAUser - Friday, August 31, 2018 - link

    I'd really like to know how one would move from MT to MPI, considering that highly multi-threaded workloads can spawn far more threads than there are CPU cores. It would also be terrible to scale, as you'd be targeting a specific configuration each time.

    Current solution is fine, the problem is more that tasks can only scale that much, and gaming currently still needs that one thread that deals with the game logic which requires a really fast single core.
  • duploxxx - Wednesday, August 29, 2018 - link

    just buy a HPinc z6 -z8 workstation and have the same....

    they should rather bring something new to the market, Threadripper based station for half the price i.s.o the Xeon Bullshit
  • alpha754293 - Wednesday, August 29, 2018 - link

    Consumer-grade hardware has a SIGNIFICANTLY higher rate of failure than server/enterprise-grade hardware.

    And AMD hasn't released the 2nd gen TR in their Epyc line. (Their Epyc line is still based on first gen TR.)
  • eek2121 - Sunday, September 2, 2018 - link

    Threadripper != EPYC. EPYC is a completely different stepping from Threadripper, features more PCIE lanes and more memory channels.
  • piroroadkill - Wednesday, August 29, 2018 - link

    What's the point?
    Just use Threadripper 2950X and call it a day...
