9 Comments
Elchanan Haas - Wednesday, December 9, 2020 - link
I believe OpenAI created GPT3, not Google.
p1esk - Wednesday, December 9, 2020 - link
“SambaNova have their own graph optimizer and compiler, enabling customers currently using PyTorch or TensorFlow to have their workloads recompiled for the hardware in less than an hour”

What? For every training run?
anandam108 - Thursday, December 10, 2020 - link
I think it would be once for a new model architecture.
webdoctors - Wednesday, December 9, 2020 - link
There are open benchmark competitions like MLPerf. Why can't companies just be honest and throw their results there so customers can get clear comparisons, rather than all this random marketing nonsense? Seems ridiculous to keep reinventing the wheel.
https://mlperf.org/
Kresimir - Friday, December 11, 2020 - link
MLPerf is now MLCommons, and SambaNova Systems is a founding member.
https://mlcommons.org/en/
opticalai - Wednesday, April 14, 2021 - link
Can you point to the submission of results?
quorm - Wednesday, December 9, 2020 - link
Stellar cartography, huh. I'm guessing there's a tiny bit more money in analyzing images from the satellites pointed at Earth.
PaulHoule - Monday, December 14, 2020 - link
The reason you see so much competition for inference accelerators as opposed to training accelerators is that commercially viable AI involves doing inference many thousands or millions of times for every time you train.

Because so many people are learning ML, training is front-of-mind, but from an economic viewpoint it makes no sense to train on 10,000 examples and then infer on only 10 examples. (Unless the product is that you learned how to look up the answers in the PyTorch manual.)
It is a tricky market to be in. The Department of Energy has sufficient spend on HPC (which that machine must crush) that it can finance this sort of thing, but a focus on a big customer like that will make you absolutely blind to the needs of anyone else.
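[Editor's note: the training-vs-inference economics above can be sketched with a back-of-envelope calculation. All numbers here are invented for illustration; `lifetime_compute` is a hypothetical helper, not anything from SambaNova or the article.]

```python
# Illustrative lifetime compute for a deployed model, in arbitrary units.
# Assumption: a forward pass costs ~1 unit per example; a training step
# (forward + backward) costs ~3 units per example.

def lifetime_compute(train_units_per_example, train_examples, train_epochs,
                     infer_units_per_example, infer_requests):
    """Return (total training units, total inference units)."""
    training = train_units_per_example * train_examples * train_epochs
    inference = infer_units_per_example * infer_requests
    return training, inference

# Train once on 10,000 examples for 10 epochs, then serve 100M requests.
train, infer = lifetime_compute(
    train_units_per_example=3, train_examples=10_000, train_epochs=10,
    infer_units_per_example=1, infer_requests=100_000_000)

print(f"training: {train:,} units, inference: {infer:,} units")
print(f"inference is {infer / train:.0f}x the training compute")
```

Under these made-up but not unreasonable numbers, inference dominates lifetime compute by hundreds of times, which is the economic argument for inference accelerators.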
TomWomack - Tuesday, December 15, 2020 - link
If I wanted to invert a very large sparse binary matrix (that is, to factor a large number, an activity that I hear some people do out of other than purely mathematical interest) this looks a lot like the hardware I would want. (Cerebras looks maybe even more like it, but the internal memory on its wafers isn't anything like enough)
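[Editor's note: the linear-algebra stage of integer factoring that the comment alludes to (block Lanczos / block Wiedemann in the number field sieve) is dominated by sparse matrix-vector products over GF(2). A toy sketch of that kernel, purely for illustration:]

```python
# Toy sparse matrix-vector product over GF(2). Each row of the matrix is
# stored as a list of set-column indices; a vector is a Python int used
# as a bitmask. This illustrates the memory-bound kernel only; it is not
# production factoring code.

def gf2_matvec(rows, v):
    """Compute y = A.v over GF(2): bit i of y is the parity of the
    overlap between row i's set columns and the set bits of v."""
    y = 0
    for i, cols in enumerate(rows):
        bit = 0
        for c in cols:
            bit ^= (v >> c) & 1  # XOR accumulates parity mod 2
        y |= bit << i
    return y

# 3x3 example: A = [[1,1,0],[0,1,1],[1,0,1]], v = (1,1,0)
rows = [[0, 1], [1, 2], [0, 2]]
print(bin(gf2_matvec(rows, 0b011)))
```

The access pattern is random reads across a vector far larger than any cache, which is why the comment's point about on-chip memory capacity matters: the whole iterate has to live close to the compute for this to be fast.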