{"id":606583,"date":"2025-07-14T19:41:39","date_gmt":"2025-07-15T00:41:39","guid":{"rendered":"https:\/\/towardsdatascience.com\/?p=606583"},"modified":"2025-07-14T19:41:53","modified_gmt":"2025-07-15T00:41:53","slug":"accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need","status":"publish","type":"post","link":"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/","title":{"rendered":"Accuracy Is Dead: Calibration, Discrimination, and Other Metrics You Actually Need"},"content":{"rendered":"\n<p class=\"wp-block-paragraph\"><mdspan datatext=\"el1752539982542\" class=\"mdspan-comment\">Accuracy is often the metric<\/mdspan> we, data scientists, cite the most \u2014 but also the most misleading.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">It was long ago that we found out that models are developed for far more than just making predictions. We create models to make decisions, and that requires trust. And relying on the accuracy is simply not enough.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">In this post, we&#8217;ll see why and we&#8217;ll check other alternatives, more advanced and tailored to our needs. As always, we&#8217;ll do it following a practical approach, with the end goal of deep diving into evaluation beyond standard metrics. 
<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Here&#8217;s the table of contents for today&#8217;s read:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">Setting Up the Models<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Classification: Beyond Accuracy<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Regression: Advanced Evaluation<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Conclusion<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">Setting Up the Models<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Accuracy makes more sense for classification algorithms rather than regression tasks&#8230; Hence, not all problems are measured equally.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">That&#8217;s the reason why I&#8217;ve decided to tackle both scenarios \u2014 the regression and the classification ones \u2014 separately by creating two different models. <\/p>\n\n\n\n<p class=\"wp-block-paragraph\">And they&#8217;ll be very simple ones, because their performance and application isn&#8217;t what matters today:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><strong>Classification<\/strong>: Will a striker score in the next match?<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>Regression<\/strong>: How many goals will a player score?<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">If you&#8217;re a recurrent reader, I&#8217;m sure that the use of football examples didn&#8217;t come as a surprise.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Note<\/strong>: Even though we won&#8217;t be using accuracy on our regression problem and this post is thought to be more focused on that metric, I didn&#8217;t want to leave these cases behind. 
So that&#8217;s why we&#8217;ll be exploring regression metrics too.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Again, because we don&#8217;t care about the data or the performance here, let me skip all the preprocessing and go straight to the models themselves:<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">from sklearn.linear_model import LogisticRegression\nfrom sklearn.ensemble import GradientBoostingRegressor\n\n# Classification model\nmodel = LogisticRegression()\nmodel.fit(X_train_scaled, y_train)\n\n# Gradient boosting regressor\nmodel = GradientBoostingRegressor()\nmodel.fit(X_train_scaled, y_train)<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">As you can see, we stick to simple models: logistic regression for the binary classification, and gradient boosting for the regression.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Let&#8217;s compute the metrics we&#8217;d usually check:<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">from sklearn.metrics import accuracy_score\n\n# Classification\ny_pred = model.predict(X_test_scaled)\naccuracy = accuracy_score(y_test, y_pred)\n\nprint(f&quot;Test accuracy: {accuracy:.2%}&quot;)<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">The printed accuracy is 92.43%, which is honestly way higher than what I&#8217;d have expected. Is the model really that good?<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">import numpy as np\nfrom sklearn.metrics import mean_squared_error\n\n# Regression\ny_pred = model.predict(X_test_scaled)\n\nrmse = np.sqrt(mean_squared_error(y_test, y_pred))\n\nprint(f&quot;Test RMSE: {rmse:.4f}&quot;)<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">I got an RMSE of 0.3059. Not that good. 
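<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">An RMSE figure on its own is hard to judge; one quick sanity check is to compare it against a naive baseline that always predicts the mean. A minimal sketch with made-up goal counts (the arrays below are hypothetical, not our actual data):<\/p>\n\n\n\n

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Hypothetical goal counts: most players score 0, a few score 1 or 2
y_test = np.array([0, 0, 0, 1, 0, 2, 0, 0, 1, 0])
y_pred = np.array([0.1, 0.2, 0.0, 0.6, 0.1, 1.1, 0.3, 0.0, 0.5, 0.2])

model_rmse = np.sqrt(mean_squared_error(y_test, y_pred))

# Naive baseline: always predict the mean outcome
baseline = np.full_like(y_pred, y_test.mean())
baseline_rmse = np.sqrt(mean_squared_error(y_test, baseline))

print(f"Model RMSE:    {model_rmse:.4f}")
print(f"Baseline RMSE: {baseline_rmse:.4f}")
```

\n\n\n\n<p class=\"wp-block-paragraph\">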
But is it enough to discard our regression model?<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">We need to do better.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Classification: Beyond Accuracy<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Too many data science projects stop at accuracy, which is often misleading, especially with imbalanced targets (e.g., scoring a goal is rare).<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">To evaluate whether our model <em>really<\/em> predicts &#8220;Will this player score?&#8221;, here are other metrics we should consider:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><strong>ROC-AUC<\/strong>: Measures the ability to rank positives above negatives. Threshold-insensitive, but says nothing about calibration.<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>PR-AUC<\/strong>: The Precision-Recall curve is essential for rare events (e.g., scoring probability). It focuses on the positive class, which matters when positives are scarce.<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>Log Loss<\/strong>: Punishes overconfident wrong predictions. Ideal for comparing calibrated probabilistic outputs.<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>Brier Score<\/strong>: Measures the mean squared error between predicted probabilities and actual outcomes. 
Lower is better, and it&#8217;s interpretable as a measure of overall probability calibration.<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>Calibration Curves<\/strong>: A visual diagnostic to see if predicted probabilities match observed frequencies.<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">We won&#8217;t test all of them now, but let&#8217;s briefly touch upon ROC-AUC and Log Loss, probably the most used after accuracy.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-heading-6-font-size\">ROC-AUC<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">ROC-AUC, or <em>Receiver Operating Characteristic &#8211; Area Under the Curve<\/em>, is a popular metric that measures the area under the ROC curve, which plots the True Positive Rate (TPR) against the False Positive Rate (FPR).<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Simply put, the ROC-AUC score (ranging from 0 to 1) sums up how well a model&#8217;s scores discriminate between positive and negative instances across all classification thresholds.&nbsp;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">A score of 0.5 indicates random guessing, and 1 indicates perfect performance.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Computing it in Python is easy:<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">from sklearn.metrics import roc_auc_score\n\n# Predicted probabilities for the positive class\ny_proba = model.predict_proba(X_test_scaled)[:, 1]\n\nroc_auc = roc_auc_score(y_test, y_proba)<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">Here, y_test contains the real labels and y_proba contains our model&#8217;s predicted probabilities. In my case the score is 0.7585, which is relatively low compared to the accuracy. But how can this be possible, if we got an accuracy above 90%?<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Context: We&#8217;re trying to predict whether a player will score in a match or not. 
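<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Before unpacking that, a tiny synthetic sketch (random, hypothetical labels and scores \u2014 not our actual data) shows how accuracy and ROC-AUC can tell opposite stories when classes are imbalanced:<\/p>\n\n\n\n

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical imbalanced labels: roughly 10% positives (goals)
y_true = (rng.random(1000) < 0.10).astype(int)

# A lazy model: low scores for everyone, unrelated to the labels
y_proba = rng.random(1000) * 0.2
y_pred = (y_proba >= 0.5).astype(int)  # always 0 at the default threshold

print(f"Accuracy: {accuracy_score(y_true, y_pred):.2%}")  # high: just the base rate
print(f"ROC-AUC:  {roc_auc_score(y_true, y_proba):.3f}")  # near 0.5: no discrimination
```

\n\n\n\n<p class=\"wp-block-paragraph\">By always predicting &#8220;no goal&#8221;, the lazy model posts an accuracy close to the share of negatives, yet its scores carry no signal \u2014 which is exactly what an AUC near 0.5 exposes.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">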
The &#8220;problem&#8221; is that this is highly imbalanced data: most players won&#8217;t score in a match, so our model learns that predicting a 0 is almost always right, without really learning anything about the data itself. <\/p>\n\n\n\n<p class=\"wp-block-paragraph\">It can&#8217;t capture the minority class correctly, and accuracy simply doesn&#8217;t show us that.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-heading-6-font-size\">Log Loss<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">The logarithmic loss, cross-entropy or, simply, log loss, is used to evaluate performance when the model outputs probabilities. It measures, on a logarithmic scale, the difference between the predicted probabilities and the actual (true) values.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Again, we can do this with a one-liner in Python:<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">from sklearn.metrics import log_loss\n\nlogloss = log_loss(y_test, y_proba)<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">As you&#8217;ve probably guessed, the lower the value, the better. A 0 would be the perfect model. 
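<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">To see how harshly it treats overconfidence, compare two hypothetical prediction sets that are both wrong about the only scorer \u2014 one hedged, one nearly certain (the probabilities below are made up):<\/p>\n\n\n\n

```python
from sklearn.metrics import log_loss

# One goal among four hypothetical player-matches
y_true = [1, 0, 0, 0]

cautious = [0.4, 0.2, 0.2, 0.2]        # wrong about the scorer, but hedged
overconfident = [0.01, 0.2, 0.2, 0.2]  # nearly certain the scorer won't score

ll_cautious = log_loss(y_true, cautious)
ll_overconfident = log_loss(y_true, overconfident)

print(f"Cautious:      {ll_cautious:.3f}")
print(f"Overconfident: {ll_overconfident:.3f}")
```

\n\n\n\n<p class=\"wp-block-paragraph\">Only one probability changed, yet the overconfident version&#8217;s loss is several times larger.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">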
In my case, I got a 0.2345.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">This one is also affected by class imbalance: log loss penalizes confident wrong predictions very harshly and, since our model predicts a 0 most of the time, the matches in which a goal was indeed scored weigh heavily on the final score.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Regression: Advanced Evaluation<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Accuracy makes no sense in regression, but we have a handful of interesting metrics to evaluate the problem of how many goals a player will score in a given match.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">When predicting <strong>continuous outcomes<\/strong> (e.g., expected minutes, match ratings, fantasy points), simple RMSE\/MAE is a start\u2014but we can go much further.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Other metrics and checks:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\"><strong>R\u00b2<\/strong>: Represents the proportion of the variance in the target variable explained by the model.<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>RMSLE<\/strong>: Penalizes underestimates more and is useful if values vary exponentially (e.g., fantasy points).<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>MAPE \/ SMAPE<\/strong>: Percentage errors, but beware divide-by-zero issues.<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>Quantile Loss<\/strong>: Train models to predict intervals (e.g., 10th, 50th, 90th percentile outcomes).<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><strong>Residual vs. 
Predicted<\/strong> <strong>(plot)<\/strong>: Check for heteroscedasticity.<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">Again, let&#8217;s focus on a subgroup of them.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-heading-6-font-size\">R\u00b2 Score<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Also called the coefficient of determination, it compares a model&#8217;s error to the baseline error. A score of 1 is a perfect fit, a 0 means the model does no better than predicting the mean, and a value below 0 means it&#8217;s worse than predicting the mean.<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">from sklearn.metrics import r2_score\n\nr2 = r2_score(y_test, y_pred)<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">I got a value of 0.0557, which is pretty close to 0&#8230; Not good.<\/p>\n\n\n\n<h3 class=\"wp-block-heading has-heading-6-font-size\">RMSLE<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">The <em>Root Mean Squared Logarithmic Error<\/em>, or RMSLE, measures the square root of the average squared difference between the log-transformed predicted and actual values. This metric is useful when:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">We want to penalize under-predictions more heavily than over-predictions.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Our target variable is skewed (the log transform reduces the impact of large outliers).<\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">import numpy as np\nfrom sklearn.metrics import mean_squared_log_error\n\nrmsle = np.sqrt(mean_squared_log_error(y_test, y_pred))<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">I got a 0.19684. Since the error is measured on log-transformed values, it isn&#8217;t an error in goals; it roughly corresponds to a typical relative error of about 20%. 
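<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">RMSLE&#8217;s asymmetry is easy to verify: for the same absolute miss, under-predicting costs more than over-predicting. A single-value, made-up example:<\/p>\n\n\n\n

```python
import numpy as np
from sklearn.metrics import mean_squared_log_error

actual = [2.0]  # hypothetical: the player scored 2 goals

# Same absolute error of 1 goal, in opposite directions
under = np.sqrt(mean_squared_log_error(actual, [1.0]))
over = np.sqrt(mean_squared_log_error(actual, [3.0]))

print(f"Under-prediction RMSLE: {under:.4f}")
print(f"Over-prediction RMSLE:  {over:.4f}")
```

\n\n\n\n<p class=\"wp-block-paragraph\">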
It&#8217;s not that big, but given that our target variable ranges between 0 and 4 and is highly skewed towards 0, it&#8217;s far from negligible. <\/p>\n\n\n\n<h3 class=\"wp-block-heading has-heading-6-font-size\">Quantile Loss<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Also called Pinball Loss, it can be used to evaluate how well a quantile regression model&#8217;s predicted quantiles perform. If we build a quantile model (a GradientBoostingRegressor with quantile loss), we can test it as follows:<\/p>\n\n\n\n<pre class=\"wp-block-prismatic-blocks\"><code class=\"language-python\">from sklearn.ensemble import GradientBoostingRegressor\nfrom sklearn.metrics import mean_pinball_loss\n\nalpha = 0.9\nq_model = GradientBoostingRegressor(loss=&quot;quantile&quot;, alpha=alpha)\nq_model.fit(X_train_scaled, y_train)\ny_pred_quantile = q_model.predict(X_test_scaled)\n\nq_loss = mean_pinball_loss(y_test, y_pred_quantile, alpha=alpha)\n<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">Here, with alpha 0.9 we&#8217;re trying to predict the 90th percentile. My quantile loss is 0.0644, which is very small in relative terms (~1.6% of my target variable range).<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">However, distribution matters: most of our <em>y_test<\/em> values are 0, and we need to interpret the result as &#8220;<em>on average, our model&#8217;s error in capturing the upper tail is very low<\/em>&#8221;.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">That might look impressive given the 0-heavy target. But, precisely because most outcomes are 0, a low pinball loss is easy to achieve, so other metrics like the ones we saw above should be used to assess whether our model is in fact performing well.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Building predictive models goes far beyond simply achieving &#8220;good accuracy.&#8221;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">For <strong>classification<\/strong> tasks, you need to think about imbalanced data, probability calibration, and real-world use cases like pricing or risk management.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">For <strong>regression<\/strong>, the goal is not just minimizing error 
but understanding uncertainty\u2014vital if your predictions inform strategy or trading decisions.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Ultimately, true value lies in:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">Carefully curated, temporally valid features.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Advanced evaluation metrics tailored to the problem.<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Transparent, well-visualized comparisons.<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">If you get these right, you\u2019re no longer building \u201cjust another model.\u201d You\u2019re delivering robust, decision-ready tools. And the metrics we explored here are just the entry point.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A deep dive into advanced evaluation for data scientists<\/p>\n","protected":false},"author":18,"featured_media":606584,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"is_member_only":false,"sub_heading":"A deep dive into advanced evaluation for data scientists","footnotes":""},"categories":[44],"tags":[749,12377,1120,4793,607],"sponsor":[],"coauthors":[30752],"class_list":["post-606583","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-data-science","tag-classification","tag-evaluation-metrics","tag-model-evaluation","tag-predictive-algorithm","tag-regression"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Accuracy Is Dead: Calibration, Discrimination, and Other Metrics You Actually Need | Towards Data Science<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/\" \/>\n<meta 
property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Accuracy Is Dead: Calibration, Discrimination, and Other Metrics You Actually Need | Towards Data Science\" \/>\n<meta property=\"og:description\" content=\"A deep dive into advanced evaluation for data scientists\" \/>\n<meta property=\"og:url\" content=\"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/\" \/>\n<meta property=\"og:site_name\" content=\"Towards Data Science\" \/>\n<meta property=\"article:published_time\" content=\"2025-07-15T00:41:39+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-07-15T00:41:53+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/afif-ramdhasuma-RjqCk9MqhNg-unsplash-1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1920\" \/>\n\t<meta property=\"og:image:height\" content=\"1080\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Pol Marin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@TDataScience\" \/>\n<meta name=\"twitter:site\" content=\"@TDataScience\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Pol Marin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/\"},\"author\":{\"name\":\"TDS Editors\",\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/person\/f9925d336b6fe962b03ad8281d90b8ee\"},\"headline\":\"Accuracy Is Dead: Calibration, Discrimination, and Other Metrics You Actually Need\",\"datePublished\":\"2025-07-15T00:41:39+00:00\",\"dateModified\":\"2025-07-15T00:41:53+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/\"},\"wordCount\":1339,\"publisher\":{\"@id\":\"https:\/\/towardsdatascience.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/afif-ramdhasuma-RjqCk9MqhNg-unsplash-1.jpg\",\"keywords\":[\"Classification\",\"Evaluation Metrics\",\"Model Evaluation\",\"Predictive Algorithm\",\"Regression\"],\"articleSection\":[\"Data Science\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/\",\"url\":\"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/\",\"name\":\"Accuracy Is Dead: Calibration, Discrimination, and Other Metrics You Actually Need | Towards Data 
Science\",\"isPartOf\":{\"@id\":\"https:\/\/towardsdatascience.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/afif-ramdhasuma-RjqCk9MqhNg-unsplash-1.jpg\",\"datePublished\":\"2025-07-15T00:41:39+00:00\",\"dateModified\":\"2025-07-15T00:41:53+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/#primaryimage\",\"url\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/afif-ramdhasuma-RjqCk9MqhNg-unsplash-1.jpg\",\"contentUrl\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/afif-ramdhasuma-RjqCk9MqhNg-unsplash-1.jpg\",\"width\":1920,\"height\":1080,\"caption\":\"Image by Afif Ramdhasuma in Unsplash\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/towardsdatascience.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Accuracy Is Dead: Calibration, Discrimination, and Other Metrics You Actually 
Need\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/towardsdatascience.com\/#website\",\"url\":\"https:\/\/towardsdatascience.com\/\",\"name\":\"Towards Data Science\",\"description\":\"Publish AI, ML &amp; data-science insights to a global community of data professionals.\",\"publisher\":{\"@id\":\"https:\/\/towardsdatascience.com\/#organization\"},\"alternateName\":\"TDS\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/towardsdatascience.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/towardsdatascience.com\/#organization\",\"name\":\"Towards Data Science\",\"alternateName\":\"TDS\",\"url\":\"https:\/\/towardsdatascience.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/02\/tds-logo.jpg\",\"contentUrl\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/02\/tds-logo.jpg\",\"width\":696,\"height\":696,\"caption\":\"Towards Data Science\"},\"image\":{\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/TDataScience\",\"https:\/\/www.youtube.com\/c\/TowardsDataScience\",\"https:\/\/www.linkedin.com\/company\/towards-data-science\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/person\/f9925d336b6fe962b03ad8281d90b8ee\",\"name\":\"TDS Editors\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/person\/image\/23494c9101089ad44ae88ce9d2f56aac\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g\",\"caption\":\"TDS 
Editors\"},\"description\":\"Building a vibrant data science and machine learning community. Share your insights and projects with our global audience: bit.ly\/write-for-tds\",\"url\":\"https:\/\/towardsdatascience.com\/author\/towardsdatascience\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Accuracy Is Dead: Calibration, Discrimination, and Other Metrics You Actually Need | Towards Data Science","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/","og_locale":"en_US","og_type":"article","og_title":"Accuracy Is Dead: Calibration, Discrimination, and Other Metrics You Actually Need | Towards Data Science","og_description":"A deep dive into advanced evaluation for data scientists","og_url":"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/","og_site_name":"Towards Data Science","article_published_time":"2025-07-15T00:41:39+00:00","article_modified_time":"2025-07-15T00:41:53+00:00","og_image":[{"width":1920,"height":1080,"url":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/afif-ramdhasuma-RjqCk9MqhNg-unsplash-1.jpg","type":"image\/jpeg"}],"author":"Pol Marin","twitter_card":"summary_large_image","twitter_creator":"@TDataScience","twitter_site":"@TDataScience","twitter_misc":{"Written by":"Pol Marin","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/#article","isPartOf":{"@id":"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/"},"author":{"name":"TDS Editors","@id":"https:\/\/towardsdatascience.com\/#\/schema\/person\/f9925d336b6fe962b03ad8281d90b8ee"},"headline":"Accuracy Is Dead: Calibration, Discrimination, and Other Metrics You Actually Need","datePublished":"2025-07-15T00:41:39+00:00","dateModified":"2025-07-15T00:41:53+00:00","mainEntityOfPage":{"@id":"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/"},"wordCount":1339,"publisher":{"@id":"https:\/\/towardsdatascience.com\/#organization"},"image":{"@id":"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/#primaryimage"},"thumbnailUrl":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/afif-ramdhasuma-RjqCk9MqhNg-unsplash-1.jpg","keywords":["Classification","Evaluation Metrics","Model Evaluation","Predictive Algorithm","Regression"],"articleSection":["Data Science"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/","url":"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/","name":"Accuracy Is Dead: Calibration, Discrimination, and Other Metrics You Actually Need | Towards Data 
Science","isPartOf":{"@id":"https:\/\/towardsdatascience.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/#primaryimage"},"image":{"@id":"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/#primaryimage"},"thumbnailUrl":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/afif-ramdhasuma-RjqCk9MqhNg-unsplash-1.jpg","datePublished":"2025-07-15T00:41:39+00:00","dateModified":"2025-07-15T00:41:53+00:00","breadcrumb":{"@id":"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/#primaryimage","url":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/afif-ramdhasuma-RjqCk9MqhNg-unsplash-1.jpg","contentUrl":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/afif-ramdhasuma-RjqCk9MqhNg-unsplash-1.jpg","width":1920,"height":1080,"caption":"Image by Afif Ramdhasuma in Unsplash"},{"@type":"BreadcrumbList","@id":"https:\/\/towardsdatascience.com\/accuracy-is-dead-calibration-discrimination-and-other-metrics-you-actually-need\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/towardsdatascience.com\/"},{"@type":"ListItem","position":2,"name":"Accuracy Is Dead: Calibration, Discrimination, and Other Metrics You Actually Need"}]},{"@type":"WebSite","@id":"https:\/\/towardsdatascience.com\/#website","url":"https:\/\/towardsdatascience.com\/","name":"Towards Data Science","description":"Publish 
AI, ML &amp; data-science insights to a global community of data professionals.","publisher":{"@id":"https:\/\/towardsdatascience.com\/#organization"},"alternateName":"TDS","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/towardsdatascience.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/towardsdatascience.com\/#organization","name":"Towards Data Science","alternateName":"TDS","url":"https:\/\/towardsdatascience.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/towardsdatascience.com\/#\/schema\/logo\/image\/","url":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/02\/tds-logo.jpg","contentUrl":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/02\/tds-logo.jpg","width":696,"height":696,"caption":"Towards Data Science"},"image":{"@id":"https:\/\/towardsdatascience.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/TDataScience","https:\/\/www.youtube.com\/c\/TowardsDataScience","https:\/\/www.linkedin.com\/company\/towards-data-science\/"]},{"@type":"Person","@id":"https:\/\/towardsdatascience.com\/#\/schema\/person\/f9925d336b6fe962b03ad8281d90b8ee","name":"TDS Editors","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/towardsdatascience.com\/#\/schema\/person\/image\/23494c9101089ad44ae88ce9d2f56aac","url":"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g","caption":"TDS Editors"},"description":"Building a vibrant data science and machine learning community. 
Share your insights and projects with our global audience: bit.ly\/write-for-tds","url":"https:\/\/towardsdatascience.com\/author\/towardsdatascience\/"}]}},"distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"TDS Contributor Portal","distributor_original_site_url":"https:\/\/contributor.insightmediagroup.io","push-errors":false,"_links":{"self":[{"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/posts\/606583","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/users\/18"}],"replies":[{"embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/comments?post=606583"}],"version-history":[{"count":0,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/posts\/606583\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/media\/606584"}],"wp:attachment":[{"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/media?parent=606583"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/categories?post=606583"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/tags?post=606583"},{"taxonomy":"sponsor","embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/sponsor?post=606583"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/coauthors?post=606583"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}