Benchmarking ML with MLCommons w/ Peter Mattson - #434
Published: Dec 7, 2020 20:40 • 1 min read • Practical AI
Analysis
This article from Practical AI discusses MLCommons and MLPerf and their role in accelerating machine learning innovation. It features an interview with Peter Mattson, a key figure in both organizations. The conversation covers the purpose of the MLPerf benchmarks, which measure ML model performance, including training and inference speed. It also touches on the importance of addressing ethical considerations such as bias and fairness in ML, and how MLCommons is tackling this through datasets like "People's Speech." Finally, it explores the challenges of deploying ML models and how tools like MLCube can simplify the process for researchers and developers.
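To make the benchmarking idea concrete, the sketch below times repeated inference calls and reports tail latencies, the kind of percentile metrics MLPerf's inference results emphasize. It is an illustrative stand-in rather than the official MLPerf LoadGen harness, and `model_fn` and `sample` are hypothetical placeholders for whatever model and input you want to measure.

```python
import time
import statistics

def measure_inference_latency(model_fn, sample, warmup=10, iterations=100):
    """Time repeated single-sample inference calls and report latency stats.

    Illustrative sketch only; not the official MLPerf LoadGen harness.
    `model_fn` and `sample` are placeholders for a real model and input.
    """
    # Warm-up runs let caches, JIT compilers, etc. settle before timing.
    for _ in range(warmup):
        model_fn(sample)

    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        model_fn(sample)
        latencies.append(time.perf_counter() - start)

    # MLPerf inference results emphasize tail latency, not just the mean.
    latencies.sort()
    return {
        "mean_s": statistics.mean(latencies),
        "p90_s": latencies[int(0.90 * len(latencies)) - 1],
        "p99_s": latencies[int(0.99 * len(latencies)) - 1],
    }

if __name__ == "__main__":
    # Stand-in "model": a trivial computation, purely for demonstration.
    result = measure_inference_latency(lambda x: sum(i * i for i in range(x)), 10_000)
    print(result)
```

Reporting p90/p99 rather than only the mean mirrors how latency-sensitive MLPerf inference scenarios are judged, where a few slow outliers matter as much as average speed.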
Key Takeaways
- MLCommons and MLPerf are key organizations for advancing machine learning.
- MLPerf provides standardized benchmarks for measuring ML model performance.
- Ethical considerations like bias and fairness are being addressed through datasets like "People's Speech."
Reference
“We explore the target user for the MLPerf benchmarks, the need for benchmarks in the ethics, bias, fairness space, and how they’re approaching this through the ‘People’s Speech’ datasets.”