The thing about running benchmarks is that results vary somewhat from run to run, and sometimes there are outliers. That makes it easy to cherry-pick: compare your best run against your competitor's worst run and you can manufacture a lead that isn't really there.
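To see how much that kind of cherry-picking can distort a comparison, here is a minimal Python sketch. It is purely illustrative; the throughput figure, noise level, and run count are made-up assumptions. It simulates two systems with identical true performance and shows that best-run-vs.-worst-run still produces an apparent lead.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: two systems with the SAME true throughput
# (100 samples/sec), each benchmarked 10 times with ~5% run-to-run noise.
def run_benchmark(true_mean=100.0, noise_sd=5.0, runs=10):
    return [random.gauss(true_mean, noise_sd) for _ in range(runs)]

ours = run_benchmark()
theirs = run_benchmark()

# Honest comparison: median of all runs on each side -- close to 1.0x.
print(f"median vs. median: {statistics.median(ours) / statistics.median(theirs):.2f}x")

# Cherry-picked comparison: our best run vs. their worst run --
# the same data now shows an apparent lead that isn't real.
print(f"best vs. worst:    {max(ours) / min(theirs):.2f}x")
```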
MLPerf was set up by industry and academia to provide a fair, apples-to-apples comparison for benchmarking AI workloads. You run the benchmarks on your own hardware and software, tuning to squeeze out your best results, and then submit those results to the MLPerf consortium for peer review.
The results are published quarterly, and many companies have submitted. For example, Dell has submitted results for its EPYC-based hardware, and $Alphabet(GOOG)$, $Intel(INTC)$, and $NVIDIA Corp(NVDA)$ are among those that have submitted results.
Notably absent to date is any submission for $Advanced Micro Devices(AMD)$'s MI300 chips. For all the bravado and performance claims, AMD has yet to submit results. Perhaps the chips are great, maybe not. It seems risky to put so much faith in hardware that AMD is not yet ready to showcase to the world.