28 Comments

  • SurJector - Tuesday, June 5, 2007 - link

    I've just reread your article. I'm a little bit surprised by the power:
                  idle   load
    1 node  :     160    213
    2 nodes :     271    330
    increase:     111    117
    There is something wrong: loading the second node appears to add only 6W over leaving it idle (about 5.5W after PSU efficiency)? See the quick check below.

    Could it be that some power-saving options are not set on the second node (speedstep, or similar) ?

    Nice article though, I bet I'd love to have a rack of them for my computing farm. Either that or wait (forever ?) for the Barcelona equivalents (they should have better memory throughput).
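    For reference, a quick back-of-the-envelope check of the deltas questioned above (a minimal sketch in Python; the wattages are the ones quoted in the comment, nothing is added from elsewhere):

        # Wall-socket power figures quoted above, in watts
        one_node  = {"idle": 160, "load": 213}
        two_nodes = {"idle": 271, "load": 330}

        # What the second node adds at idle and under full load
        extra_idle = two_nodes["idle"] - one_node["idle"]   # 111 W
        extra_load = two_nodes["load"] - one_node["load"]   # 117 W

        # The suspicious number: loading the second node appears to cost only
        # extra_load - extra_idle = 6 W beyond its idle draw, while loading
        # the first node costs 213 - 160 = 53 W.
        print(extra_idle, extra_load, extra_load - extra_idle)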
  • Super Nade - Saturday, June 2, 2007 - link

    Hi,

    The PSU is built by Lite-On. I owned the PWS-0056 and it was built like a tank. Truly server-grade build quality.

    Regards,

    Super Nade, OCForums.
  • VooDooAddict - Tuesday, May 29, 2007 - link

    Here are the VMWare ESX issues I see. ... They basically compound the problem.

    - No Local SAS controller. (Already mentioned)
    - No Local SAS means booting from a SAN. That means you will use your only PCIe slot for a hardware SAN HBA, as ESX can't boot from software iSCSI.
    - Only dual NICs on board, and with the only expansion slot taken up by the SAN HBA (Fibre Channel or iSCSI) you already have a less than ideal ESX solution. --- ESX works best with a dedicated VMotion port, a dedicated Console port, and at least one dedicated VM port. With this setup you'll be limited to a dedicated VMotion port and a shared Console and VM port (see the sketch at the end of this comment).

    The other issue is of course the non-redundant power supply. While ESX does have a High Availability mode that restarts VMs from downed hardware, it restarts the VMs on other hardware; it doesn't preserve them. You could very easily lose data.

    Then probably the biggest issue ... support. Most companies dropping the coin on ESX server are only going to run it on a supported platform. With supported platforms from Dell, HP and IBM being comparatively priced, and the above issues, I don't see them winning ANY of the ESX server crowd with this unit.



    I could however see this as a nice setup for the VMware (free) Virtual Server crowd, using it for virtualized Dev and/or QA environments where low cost is a larger factor than production-level uptime.
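    To put the NIC shortage above in concrete terms, here is a tiny illustration (plain Python, not ESX tooling; the vmnic names and role labels are just placeholders):

        def assign(roles, nics):
            """Give each port role its own uplink while spares last; once only
            one NIC remains, every leftover role has to share it."""
            mapping = {nic: [] for nic in nics}
            for i, role in enumerate(roles):
                mapping[nics[min(i, len(nics) - 1)]].append(role)
            return mapping

        roles = ["VMotion", "Service Console", "VM traffic"]

        print(assign(roles, ["vmnic0", "vmnic1", "vmnic2"]))
        # {'vmnic0': ['VMotion'], 'vmnic1': ['Service Console'], 'vmnic2': ['VM traffic']}

        print(assign(roles, ["vmnic0", "vmnic1"]))
        # {'vmnic0': ['VMotion'], 'vmnic1': ['Service Console', 'VM traffic']}
        # With the PCIe slot holding the storage HBA, two onboard NICs force the
        # Console and VM networks onto a shared uplink, as described above.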
  • JohanAnandtech - Wednesday, May 30, 2007 - link

    Superb feedback. I feel however that you are a bit too strict on the dedicated ports. A dedicated Console port seems a bit exaggerated, and as you indicate, a shared Console/VMotion seems acceptable to me.
  • DeepThought86 - Monday, May 28, 2007 - link

    I thought it interesting to note how poor the scaling was on the web server benchmark when going from 1S to 2S 5345 (107 URLs/s to 164). However, the response times scaled quite well.

    Going from 307 ms to 815 ms (a factor of 2.65) with only a clock speed difference of 2.33 vs 1.86 (a factor of 1.25) is completely unexpected. Since the architecture is the same, how can a 1.25 factor in clock lead to a 2.65 factor in performance? Then I remembered you're varying TWO factors at once making it impossible to compare the numbers.... how dumb is that in benchmark testing??

    Honestly, it seems you guys know how to hook up boxes but lack the intelligence to actually select test cases that make sense, not to mention analyse your results in a meaningful way

    It's also a pity you guys didn't test with the AMD servers to see how they scaled. But I guess the article is meant to pimp Supermicro and not point out how deficient the Intel system design is when going from 4-cores to 8
  • JohanAnandtech - Tuesday, May 29, 2007 - link

    quote:

    Since the architecture is the same, how can a 1.25 factor in clock lead to a 2.65 factor in performance? Then I remembered you're varying TWO factors at once making it impossible to compare the numbers.... how dumb is that in benchmark testing??


    I would ask you to read my comments again. Webserver performance cannot be measured by one single metric unless you can keep response time exactly the same; in that case you could measure throughput. However, in the real world response time is never the same, and our test simulates real users. The reason for this "superscaling" of response times is that the slower configurations have to work with a backlog (see the small queueing sketch at the end of this comment). Like it or not, that is what you see on a webserver.

    quote:

    It's also a pity you guys didn't test with the AMD servers to see how they scaled


    We have done that already here for a number of workloads:
    http://www.anandtech.com/cpuchipsets/intel/showdoc...

    This article was about introducing our new benchmarks, and investigating the possibilities of this new Supermicro server. Not every article can be an AMD vs Intel article.

    And I am sure that 99.9% of the people who will actually buy a Supermicro Twin after reading this review will be very pleased with it, as it is an excellent server for its INTENDED market. So there is nothing wrong with giving it positive comments as long as I show the limitations.
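    To illustrate the backlog effect described above, a toy single-queue (M/M/1) calculation (the request rates are made-up numbers purely for illustration, not figures from the article):

        def mm1_response_time(arrival_rate, service_rate):
            """Average response time of a single-queue server; it blows up as the
            arrival rate approaches the service rate (i.e. a backlog builds)."""
            assert arrival_rate < service_rate, "overloaded: the backlog grows forever"
            return 1.0 / (service_rate - arrival_rate)

        load = 100.0                                    # offered requests per second
        slow = mm1_response_time(load, 110.0)           # slower box: barely keeps up
        fast = mm1_response_time(load, 110.0 * 1.25)    # a box only 25% faster, same load

        print(f"{slow*1000:.0f} ms vs {fast*1000:.0f} ms, ratio {slow/fast:.2f}")
        # 100 ms vs 27 ms, ratio 3.75 -- far more than the 1.25x difference in speed,
        # which is the "superscaling" of response times seen in the benchmark.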



  • TA152H - Tuesday, May 29, 2007 - link

    Johan,

    I think it's even better that you didn't bring in the AMD/Intel nonsense, because it tends to take focus away from important companies like Supermicro. A lot of people aren't even aware of this company, and it's an extremely important company that makes extraordinary products. Their quality is unmatched, and although they are more expensive, it is excellent to have the option of buying a top-quality piece. It's almost laughable, and a bit sad, when people call Asus top quality, or a premium brand. So, if nothing else, you brought an often ignored company into people's minds. Sadly, on a site like this where performance is what is generally measured, if you guys reviewed the motherboards, they would appear to be mediocre products at best. So, your type of review helps put things in their proper perspective; they are a very high quality, reliable, innovative company that is often overlooked, but has a very important role in the industry.

    Now, having said that (you didn't think I could be exclusively complimentary, did you?), when are you guys going to evaluate Eizo monitors??? I mean, how often can we read articles on junk from Dell and Samsung, et al., wondering what the truly best monitors are like? Most people will buy mid-range to low-end (heck, I still buy Samsung monitors and Epox motherboards sometimes because of price), but I also think most people are curious about how the best performs anyway. But, let's give credit where it's due: it was nice seeing Supermicro finally get some attention.
  • DeepThought86 - Monday, May 28, 2007 - link

    Also, looking at your second benchmark I'm baffled that you didn't include a comparison of 1x E5345 vs 2x E5345 or 1x E5320 vs 2x E5320 so we could see scaling. You just have a comparison of Dual vs 2N, where (duh!) the results are similar.

    Sure, there's 1x5160 vs 2x5160, but since the number of cores is half we can't see if memory performance is a bottleneck. Frankly, if Intel had given you instructions on how to explicitly avoid showing FSB limitations in server applications, they couldn't have done a better job.

    Oh wait, looks like 2 Intel staffers helped on the project! BIG SURPRISE!
  • yacoub - Monday, May 28, 2007 - link

    http://images.anandtech.com/reviews/it/2007/superm...
    Looks like the top DIMM is not fully seated? :D
  • MrSpadge - Monday, May 28, 2007 - link

    Nice one.. not everyone would catch such a fault :)

    MrS
  • JohanAnandtech - Monday, May 28, 2007 - link

    Those DIMM slots are empty :-)
  • yacoub - Monday, May 28, 2007 - link

    ohhh hahah thought they were filled with black DIMMs :D
  • yacoub - Monday, May 28, 2007 - link

    Also on page 8:

    quote:

    In comparison, with 2U servers, we save about 130W or about 30% thanks to Twin 1U system

    You should remove that first comma. It was throwing me off because the way it reads it sounds like the 2U servers save about 130W but then you get to the end of the sentence and realize you mean "in comparison with 2U servers, we save about 130W or about 30% thanks to Twin 1U". You could also say "Compared with 2U servers, we save..." to make the sentence even more clear.

    Thanks for an awesome article, btw. It's nice to see these server articles from time to time, especially when they cover a product that appears to offer a solid TCO and a strong showing against the competition from big names like Dell.
  • JohanAnandtech - Monday, May 28, 2007 - link

    Fixed! Good point
  • gouyou - Monday, May 28, 2007 - link

    The part about InfiniBand's performance scaling much better as you increase the number of cores is really misleading.

    The graph is mixing cores and nodes, so you cannot tell anything. We are in an era where a server has 8 cores: the scaling is completely different, as it will depend less on the network. BTW, is the graph made for single-core servers? Dual-core?
  • MrSpadge - Monday, May 28, 2007 - link

    Gouyou, there's a link called "this article" in the part on InfiniBand which answers your question. In the original article you can read that they used dual 3 GHz Woodcrests.

    What's interesting is that the difference between InfiniBand and GigE is actually more pronounced for the dual-core Woodcrests compared with the single-core 3.4 GHz P4s (at 16 nodes). The explanation given is that the faster dual-core CPUs need more communication to sustain performance (a rough illustration follows this comment). So it seems like their algorithm uses no locality optimizations to exploit the much faster communication within a node.

    @BitJunkie: I second your comment, very nice article!

    MrS
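    A rough way to see why faster nodes make the interconnect matter more (the timings below are invented purely for illustration, not measurements from the linked article):

        def efficiency(compute_s, comm_s):
            """Fraction of each iteration spent on useful work when the
            communication phase cannot be overlapped with computation."""
            return compute_s / (compute_s + comm_s)

        comm = {"GigE": 0.020, "InfiniBand": 0.002}     # seconds per data exchange (assumed)

        for cpu, compute in [("slower single-core", 0.200), ("faster dual-core", 0.080)]:
            gap = efficiency(compute, comm["InfiniBand"]) - efficiency(compute, comm["GigE"])
            print(f"{cpu}: InfiniBand advantage {gap:.1%}")
        # The fixed network overhead eats a bigger share of each (shorter) compute step,
        # so the gap between GigE and InfiniBand widens as the CPUs get faster.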
  • BitJunkie - Monday, May 28, 2007 - link

    Nice article, I'm most impressed by the breadth and the detail you drilled into - also the clarity with which you presented your thinking / results. It's always good to be stretched, and this is a great example of how to approach things in a structured, logical way.

    Don't mind the "it's an enthusiast site" comments. Some people will be stepping outside their comfort zone with this and won't thank you for it ;)
  • JohanAnandtech - Monday, May 28, 2007 - link

    Thanks, very encouraging comment.

    And I guess it doesn't hurt that the "enthusiast" is reminded that "PCs" can also be fascinating in a role other than "hardcore gaming machine" :-). Many of my students need the same reminder: being an ITer is more than booting Windows and your favorite game. My 2-year-old daughter can do that ;-)
  • yyrkoon - Monday, May 28, 2007 - link

    It is however nice to learn about InfiniBand. This is a technology I have been interested in for a while now, and I was under the impression it was not going to be implemented until PCIe v2.0 (maybe I missed something here).

    I would still rather see this technology in the desktop-class PC, and if this is yet another enterprise-driven technology, then people such as myself, who were hoping to use it for decent home networking (remote storage), are once again left out in the cold.
  • yyrkoon - Monday, May 28, 2007 - link

    quote:

    And I guess it doesn't hurt that the "enthusiast" is reminded that "PCs" can also be fascinating in a role other than "hardcore gaming machine" :-). Many of my students need the same reminder: being an ITer is more than booting Windows and your favorite game. My 2-year-old daughter can do that ;-)


    And I am sure every gamer out there knows what iSCSI *is* . . .

    Even in 'IT' a 16-core 1U rack is a specialty system, and while they may be semi-common in the load balancing/failover scenario (or maybe even used extensively in parallel processing, yes, and even more possible uses . . .), they are still not all that common compared to the 'standard' server. Recently, a person that I know deployed 40k desktops / 30k servers for a large company, and wouldn't you know it, not one had more than 4 cores . . . and I have personally contracted work from TV/radio stations (and even the odd small ISP), and outside of the odd 'Toaster', most machines in these places barely use 1 core.

    I too find technologies such as 802.3ad link aggregation, iSCSI, AoE, etc. interesting, and sometimes like playing around with things like openMosix or the latest/hottest Linux distro, but at the end of the day, other than experimentation, these things typically do not entertain me. Most of the above, and many other technologies for me, are just a means to an end, not entertainment.

    Maybe it is enjoyable staring at a machine of this type, not being able to use it to its full potential outside of the workplace? Personally I would not know, and honestly I really do not care, but if this is the case, perhaps you need to take notice of your 2-year-old daughter, and relax once in a while.

    The point here? The point being: perhaps *this* 'gamer' you speak of knows a good bit more about 'IT' than you give him credit for, and maybe even makes a fair amount of cash at the end of the day while doing so. Or maybe I am a *real* hardware enthusiast, who would rather be reading about technology, instead of reading yet another 'product review'. Especially since any person worth their paygrade in IT should already know how this system (or anything like it) is going to perform beforehand.
  • JohanAnandtech - Monday, May 28, 2007 - link

    Hey, you certainly gave the wrong impression of yourself with that first post, not our fault.

    Anyway, NLB, InfiniBand and rendering farms are no more exotic than 802.3ad link aggregation. So I am definitely glad that you and a lot of people want to look beyond the typical gaming technology.

    quote:

    Most of the above, and many other technologies for me, are just a means to an end, not entertainment.


    That is somewhat in contradiction with being an "enthusiast", as "enthusiast" means that technology is a little more than just a tool.

    quote:

    *this* 'gamer' you speak of knows a good bit more about 'IT' than you give him credit for,


    Yep, but why hide it ?

    quote:

    am a *real* hardware enthusiast, who would rather be reading about technology, instead of reading yet another 'product review'.


    Well, it is hardly about the product alone, as we look into NLB and network rendering, which is exactly using the technology as a means to an end.

    While I do get the point of your second post, your first post doesn't make any sense: 1) this kind of server should never be turned into an iSCSI device: there are servers that can have more memory and have - more importantly - a much better storage subsystem. 2) you give the impression that an enthusiast site should not talk about datacenter-related stuff.

    Hey man, my purpose here is certainly not making fun of you. You seem like a person that can give a lot better feedback than you did in your first post. By all means do that :-)


    quote:

    Especially since any person worth their paygrade in IT should already know how this system (or anything like it) is going to perform beforehand.


    A lot of data administrators are very capable, certified people in the world of networking and server OSes. But very few know their hardware or can decently size it. I read a book from O'Reilly about datacenters a while ago. The stuff about the electrical and networking parts of datacenters was top notch. The parts about storage, load balancing and sizing were very average. And I believe a lot of people are in that case.
  • yyrkoon - Monday, May 28, 2007 - link

    quote:

    A lot of data administrators are very capable, certified people in the world of networking and server OSes. But very few know their hardware or can decently size it. I read a book from O'Reilly about datacenters a while ago. The stuff about the electrical and networking parts of datacenters was top notch. The parts about storage, load balancing and sizing were very average. And I believe a lot of people are in that case.


    Well I suppose you are right to an extent here, maybe I like hardware so much that I tend to spend more time 'researching' different hardware?

    The last thing I really want to convey is that I know EVERYTHING, because if I actually thought this, I would most likely be delusional (this goes for everyone, not just myself, and no, I am not pointing any fingers, I am just saying that perhaps I come off as a know-it-all, but I really don't know it all).

    Anyhow, my original post was more of a joke, with the serious part being that if somehow this equipment landed itself in my home, I would actually do with it as I said. I do not work in a data center, but I do contract work for small businesses a lot, mostly for media broadcasters, and the occasional home PC when business as such presents itself, so obviously there are things the datacenter monkeys know that I do not.

    All that being said, I cannot hide the fact that home PC hardware is where my enthusiasm stems from, concerning technology. I see great things for technologies like InfiniBand and SAS, but these technologies are all but useless in the home because they are being driven by enterprise consumers, who usually don't care about 'reasonably' priced hardware that performs well in the home environment.

    As I stated before, I have been following the PCIe v2.0 technology for a bit now, and I was under the impression that PCIe-PCIe direct communications were not going to be implemented outside of PCIe v2.0, and would have a good chance of being reasonably priced enough to be used in the home (on a smaller scale, say 4x channels, instead of the potential of 32x channels). Now, I am disappointed to see that while it may improve server performance, this is going to be used as an excuse to bleed home users dry of cash. Just like SAS: hardware-wise it is comparable in price to, say, FireWire once you pass a certain HDD count threshold, but standalone expanders (without a 2.5" form-factor removable drive bay, or an LSI-built 1U or greater rack) are non-existent (or at least, I personally have not been able to find any). This means people like me, who want to build SoHo-like storage for personal or small business use, get left out in the cold, AGAIN. Wouldn't you like to have a small server at home capable of delivering decent disk throughput/access speeds (i.e. external to your desktop PC) for a reasonable price? I know I would.

    All in all, I find the hardware interesting, yet find myself disappointed from the home-use aspect. So this is why I can hardly be excited by such news.
  • TA152H - Monday, May 28, 2007 - link

    Johan,

    I think labels are bad in general, and "it's an enthusiast site" is more often an excuse than a valid explanation for some of the choices. It gets annoying, because there is no reason why you can't do both. If someone doesn't want to read your articles on a topic, they can skip them, right?

    However, I have a few issues, naturally :P. Not that you weren't complimentary towards Supermicro, but I'm not sure you carried it quite far enough. Comparing Supermicro to Dell is kind of insulting to Supermicro. Also, you seemed to leave out that Supermicro sells motherboards, cases, power supplies, etc. as standalone pieces, and they are considered by most professionals to be the best motherboards made, as well as being supported extremely well. You can't kill these motherboards; I still run a P6DLE (440LX!) that I want to upgrade but it just won't die. They never do, and the components and fit and finish are absolutely top notch. Now before someone who likes Intel screams at me, they make excellent motherboards too and are extremely high quality as well. But, Intel doesn't make motherboards for AMD, Supermicro does. And if you're building a server, and want AMD, do you really want some junk from Taiwan? Sure, they're cheap, but you buy Asus or Tyan and you're whistling past the graveyard with that rubbish. On top of this, you can even buy Supermicro motherboards that are not server motherboards (my first was a desktop one, P5MMA, and it still works as a print server). There are plenty of white boxes sporting Supermicro motherboards, and some companies build their own in-house with Supermicro components. So, their market share is considerably higher than just those sold as complete servers.

    Also, your insistence on a redundant power supply, I think, misses the point completely. You can buy Supermicros with redundant power supplies, and if that's what you wanted, review one of them. This was made for a different purpose, and you absolutely do NOT want it. That would defeat the purpose, so saying that they should add it is kind of silly. Saving 100 watts is absolutely enormous, especially when getting something inexpensive, and from the most reputable company in server motherboards. By the way, have you ever killed a Supermicro power supply? I haven't, and I do try. So, yes, maybe power supplies fail more often for the cheap companies, but I think the failure rate for Supermicro is very low. But realistically, you have to consider if you can tolerate it at all. If you can't, get one of their other products. If you can, then the power savings are incredible. It's nice to have both choices, isn't it?

    If you want the best and can pay, Supermicro is the way to go. When they are inexpensive and have excellent power use characteristics, it's almost irresistible if it is the type of product you want. Dell??? Oh my. You'd have to be crazy.
  • JohanAnandtech - Monday, May 28, 2007 - link

    So much feedback, thanks! It makes these horribly long undertakings called "server articles" much more rewarding, even if you don't agree.

    Just leave your first name next time, I like to address you in a proper way.

    Anyway, your feedback

    quote:

    Also, your insistence on a redundant power supply, I think, misses the point completely. You can buy Supermicros with redundant power supplies, and if that's what you wanted, review one of them.


    Read my article again, and you'll understand my POV better. I work with a lot of SMBs, like MCS, and they like to run their webserver in NLB for failover reasons. For that, the Supermicro Twin could be wonderful: pay half as much in colocation costs as normal. And notice that I did remark that the percentage of downtime due to a PSU failure is relatively small, and that it is probably an acceptable risk for many SMBs.

    I just hope I can challenge Supermicro enough to get two PSUs in there. The Twin is an excellent idea, and it would be nice to have it as a high-availability solution too. So no, just reviewing another Supermicro server won't cut it: you double the rack space needed.

  • TA152H - Monday, May 28, 2007 - link

    Johan,

    I just thought of a few things after writing that, and have moved closer to your point of view. Oh, my name is Rich, sorry, I forgot to mention that in the first one.

    Maybe it isn't possible to create a server with the same feature set as this one, in 1U, with a redundant power supply. My first reaction was that when you asked for this, you were willing to put up with something bigger, which is an option. Another option is to impose limitations on the server, and still fit it in 1U. By removing some features, and imposing limitations, for example on what processors can be used, you might be able to do it. Not only could you reduce the motherboard size, but you could also reduce the power supply if you can safely say that fewer watts will be used. And it's significant, because you multiply it by two, or four in the case of processors. If you lower the acceptable power use for the processor by, say, 30 watts, you reduce the power supply by 120 watts, so it's significant. If you make it ULV, you could realize some very serious savings, as well as reduce cooling issues. Between this and removing some features, they might be able to significantly reduce power use, and at the same time make the motherboard a little smaller.

    On the other hand, I think SAS would complicate things, and might be why they left it out. I don't think you can get everything in this type of box right now, but maybe another choice would be to create more choices by leaving certain things out, and allowing other things (redundant power supply) to be put in. Of course, I don't know for sure if it's possible, but it might be.
  • MrSpadge - Monday, May 28, 2007 - link

    Rich,

    that's roughly what I thought as well when I read your first post: "It's too cramped, they can't get 2 x 900 W with high quality in there." Then I was about to post that it might be possible to get away with two smaller PSUs, and started to name the power-consuming devices: 4 x 120 W CPUs, 32 x 5-7 W FB-DIMMs, 4 x 10 W HDDs, plus something for the chipsets and some loss in the CPU core voltage converters. And I realized that even 700W is probably not enough for this hardware, so I scrapped my post (a rough tally is sketched after this comment).

    To reduce power consumption they'd really have to constrain the choice of CPUs, and maybe limit each machine to 8 FB-DIMMs, which is still a lot of memory. The 80W 2.33 GHz quad Xeon may be the best candidate for this. One could also think about using either 2.5" 7200 rpm notebook drives (uhh..) or Seagate Savvios. Less cost-effective than 3.5" SATA, but you save some more space and power.

    MrS
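    Roughly totalling the figures above (the chipset/miscellaneous and converter-loss numbers are guesses added only to complete the sum, not values from the article):

        components = {
            "CPUs":     4 * 120,   # four 120 W quad-core Xeons (two per node)
            "FB-DIMMs": 32 * 6,    # 32 modules at roughly 5-7 W each
            "HDDs":     4 * 10,    # four 3.5" drives
            "chipsets": 2 * 30,    # assumed chipset/fans/misc budget per node
        }
        subtotal = sum(components.values())   # 772 W
        delivered = subtotal / 0.90           # allow ~10% for on-board converter losses (assumed)
        print(subtotal, "W ->", round(delivered), "W the PSUs would have to deliver")
        # Around 860 W: uncomfortably close to the single 900 W unit's rating, which is
        # why fitting a redundant pair into the same 1U enclosure looks so difficult.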
  • TA152H - Monday, May 28, 2007 - link

    Johan,

    I think I know where we disagree. I do understand your point, but I don't think what you're asking for is possible. The problem is, how do you get two power supplies of that rating into a case that size? Consider, if you will, that you still need a massive power supply to be able to handle both, and the case is already crammed pretty well. To get two dual-socket motherboards in there in the first place and to create such an attractive product is already a great accomplishment. I just don't think you can stuff two 900 watt power supplies in there. Addressing a challenge and beating the laws of physics are two different things; I think, right now, it is a bridge too far (a reference to a historic battle in your country, I hope you appreciate it :P). Even if they do get a twin in there, which again, I think is nearly impossible, you'd lose some power efficiency and add some cost. So, it wouldn't be painless. But, you know, it would make for a good choice to have, so I'm not against it. Both would be attractive. But, I just don't see how they could do it. I don't even know of any company that has a 2 x Dual available in 1U yet, so it's not trivial. Of course, I can't say for sure it's impossible; it's not like I really know enough to.
  • yyrkoon - Monday, May 28, 2007 - link

    Take out all but 2-4 cores, slap in another 16 GB of RAM for a total of 32 GB, use Windows 2003 iSCSI to export a 31 GB RAM disk, and you have a decently fast, very low-latency 'disk'. All for 2000x the cost of a similarly sized HDD ;) Weeeeeeeeee!

    Sorry guys, I thought this was an 'enthusiast' site, and was briefly confused :P If I had one of the mentioned systems, this is probably fairly accurate as to how I would use it . . .
