Data science today is a lot like the Wild West: there’s endless opportunity and excitement, but also a lot of chaos and confusion. If you’re new to data science and applied machine learning, evaluating a machine-learning model can seem pretty overwhelming. Now you have help. With this O’Reilly report, machine-learning expert Alice Zheng takes you through the model evaluation basics.
In this overview, Zheng first introduces the machine-learning workflow, and then dives into evaluation metrics and model selection. The latter half of the report focuses on hyperparameter tuning and A/B testing, which may benefit more seasoned machine-learning practitioners.
With this report, you will:
- Learn the stages involved in developing a machine-learning model for use in a software application
- Understand the metrics used for supervised learning models, including classification, regression, and ranking
- Walk through evaluation mechanisms, such as hold-out validation, cross-validation, and bootstrapping (see the brief sketch after this list)
- Explore hyperparameter tuning in detail, and discover why it’s so difficult
- Learn the pitfalls of A/B testing, and examine a promising alternative: multi-armed bandits
- Get suggestions for further reading, as well as useful software packages
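To give a flavor of the evaluation mechanisms mentioned above, here is a minimal sketch using scikit-learn (this is an illustration under that assumption, not code from the report itself): it compares a single hold-out split against 5-fold cross-validation on a toy classification dataset.

```python
# Sketch: hold-out validation vs. k-fold cross-validation with scikit-learn.
# Assumes scikit-learn is installed; not taken from the report itself.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Hold-out validation: fit on 80% of the data, score on the held-out 20%.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
holdout_accuracy = model.fit(X_train, y_train).score(X_test, y_test)

# 5-fold cross-validation: average accuracy over five train/test splits,
# which gives a lower-variance estimate than a single hold-out split.
cv_scores = cross_val_score(model, X, y, cv=5)

print(f"hold-out accuracy:  {holdout_accuracy:.3f}")
print(f"5-fold CV accuracy: {cv_scores.mean():.3f} (+/- {cv_scores.std():.3f})")
```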