Whitening and standardization
<!DOCTYPE html> <html lang="en-US"> <head> <meta charset="UTF-8" />
<meta name="viewport" content="initial-scale=1.0, maximum-scale=1.0, user-scalable=yes" />
<title>How to Use Data Scaling to Improve Deep Learning Model Stability and Performance</title>
<link rel="canonical" href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/" />
<meta name="author" content="Jason Brownlee" />
<meta property="article:published_time" content="2019-02-03T18:00:34+00:00" />
<meta property="article:modified_time" content="2020-08-25T00:18:53+00:00" />
<meta property="og:title" content="How to Use Data Scaling to Improve Deep Learning Model Stability and Performance - Machine Learning Mastery" />
<meta property="og:description" content="Deep learning neural networks learn how to map inputs to outputs from examples in a training dataset. The weights of the model are initialized to small random values and updated via an optimization algorithm in response to estimates of error on the training dataset. Given the use of small weights in the model and the use of error between predictions and expected" />
<meta property="og:url" content="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/" />
<meta property="og:image" content="https://machinelearningmastery.com/wp-content/uploads/2018/11/Box-and-Whisker-Plots-of-Mean-Squared-Error-With-Unscaled-Normalized-and-Standardized-Input-Variables-for-the-Regression-Problem.png" />
<meta name="twitter:card" content="summary_large_image" />
</head> <body class="post-template-default single single-post postid-6939 single-format-standard chrome alt-style-default two-col-left width-960 two-col-left-960">
<header id="header" class="col-full">
<a href="https://machinelearningmastery.com/" title="Making developers awesome at machine learning"><img width="480" height="80" src="https://machinelearningmastery.com/wp-content/uploads/2019/09/Header_smaller_text_better-1.png" alt="Machine Learning Mastery" /></a> <a href="https://machinelearningmastery.com/">Machine Learning Mastery</a> Making developers awesome at machine learning
</header>
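The article body itself is not preserved in this capture; only the title, the description excerpt, and the "Whitening and standardization" heading indicate the topic. As a minimal sketch of the two rescalings those refer to, here are normalization (rescaling onto [0, 1]) and standardization (zero mean, unit variance) written in plain NumPy; this is an illustration under the usual definitions, not the post's own code, which is absent from this chunk:

```python
import numpy as np

# a toy input column with a large, shifted range
X = np.array([50.0, 60.0, 80.0, 100.0])

# normalization: x' = (x - min) / (max - min), maps the column onto [0, 1]
normalized = (X - X.min()) / (X.max() - X.min())

# standardization: x' = (x - mean) / std, gives zero mean and unit variance
standardized = (X - X.mean()) / X.std()

print(normalized)            # [0.  0.2 0.6 1. ]
print(standardized.mean())   # ~0.0
print(standardized.std())    # ~1.0
```

In practice the same transforms are available as `MinMaxScaler` and `StandardScaler` in scikit-learn, which also remember the fitted statistics so the identical scaling can be applied to test data.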
<footer id="footer" class="col-full">
© 2021 Machine Learning Mastery Pty. Ltd. All Rights Reserved.
</footer>
</body> </html>
<script type="f3886dae12b0536ad361ea93-text/javascript">window.lazyLoadOptions={elements_selector:"iframe[data-lazy-src]",data_src:"lazy-src",data_srcset:"lazy-srcset",data_sizes:"lazy-sizes",class_loading:"lazyloading",class_loaded:"lazyloaded",threshold:300,callback_loaded:function(element){if(element.tagName==="IFRAME"&&element.dataset.rocketLazyload=="fitvidscompatible"){if(element.classList.contains("lazyloaded")){if(typeof window.jQuery!="undefined"){if(jQuery.fn.fitVids){jQuery(element).parent().fitVids()}}}}}};window.addEventListener('LazyLoad::Initialized',function(e){var lazyLoadInstance=e.detail.instance;if(window.MutationObserver){var observer=new MutationObserver(function(mutations){var image_count=0;var iframe_count=0;var rocketlazy_count=0;mutations.forEach(function(mutation){for(i=0;i<mutation.addedNodes.length;i++){if(typeof mutation.addedNodes[i].getElementsByTagName!=='function'){continue} if(typeof mutation.addedNodes[i].getElementsByClassName!=='function'){continue} images=mutation.addedNodes[i].getElementsByTagName('img');is_image=mutation.addedNodes[i].tagName=="IMG";iframes=mutation.addedNodes[i].getElementsByTagName('iframe');is_iframe=mutation.addedNodes[i].tagName=="IFRAME";rocket_lazy=mutation.addedNodes[i].getElementsByClassName('rocket-lazyload');image_count+=images.length;iframe_count+=iframes.length;rocketlazy_count+=rocket_lazy.length;if(is_image){image_count+=1} if(is_iframe){iframe_count+=1}}});if(image_count>0||iframe_count>0||rocketlazy_count>0){lazyLoadInstance.update()}});var b=document.getElementsByTagName("body")[0];var config={childList:!0,subtree:!0};observer.observe(b,config)}},!1)</script><script data-no-minify="1" async src="https://machinelearningmastery.com/wp-content/plugins/wp-rocket/assets/js/lazyload/16.1/lazyload.min.js" type="f3886dae12b0536ad361ea93-text/javascript"></script><script src="https://machinelearningmastery.com/wp-content/cache/min/1/faa2bb2b045a56fea4ed2ea21d2cc719.js" data-minify="1" defer 
type="f3886dae12b0536ad361ea93-text/javascript"></script><script src="/cdn-cgi/scripts/7d0fa10a/cloudflare-static/rocket-loader.min.js" data-cf-settings="f3886dae12b0536ad361ea93-|49" defer=""></script></body> </html>
Wonbin February 13, 2019 at 6:03 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-468125" title="Direct link to this comment">#</a>
Thank you for this helpful post for beginners!
Could you please provide more details about the steps of “using the root mean squared error on the unscaled data” to interpret the performance in a specific domain?
Would it be like this??
———————————————————–
1. Finalize the model (based on the performance being calculated from the scaled output variable)
2. Make predictions on test set
3. Invert the predictions (to convert them back into their original scale)
4. Calculate the metrics (e.g. RMSE, MAPE)
———————————————————–
Waiting for your reply! Cheers mate!
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> February 14, 2019 at 8:39 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-468241" title="Direct link to this comment">#</a>
Correct.
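Those four steps can be sketched roughly as follows, with hypothetical toy values standing in for a real finalized model's predictions:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

# hypothetical target values in their original scale
y_train = np.array([[10.0], [20.0], [30.0], [40.0]])
y_test = np.array([[15.0], [35.0]])

# scaler fitted on the training targets only
scaler = MinMaxScaler()
y_train_scaled = scaler.fit_transform(y_train)

# stand-in for the finalized model's scaled predictions on the test set
yhat_scaled = np.array([[0.2], [0.8]])

# step 3: invert the predictions back to their original scale
yhat = scaler.inverse_transform(yhat_scaled)

# step 4: calculate the metric in the original units
rmse = np.sqrt(mean_squared_error(y_test, yhat))
print(rmse)
```

The RMSE is then directly interpretable in the units of the problem domain.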
ajebulon April 30, 2019 at 2:44 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-483399" title="Direct link to this comment">#</a>
Really nice article! I have some quick questions.
If I have multiple input columns, each with a different value range (it might be [0, 1000], or even one-hot-encoded data), should they all be scaled with the same method, or can each be processed differently?
For example:
– input A is normalized to [0, 1],
– input B is normalized to [-1, 1],
– input C is standardized,
– one-hot-encoded data is not scaled
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> May 1, 2019 at 6:58 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-483508" title="Direct link to this comment">#</a>
Yes, typically it is a good idea to scale all columns to have the same range. Perhaps start with [0,1] and compare others to see if they result in an improvement.
mk123qwe February 19, 2019 at 5:38 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-469330" title="Direct link to this comment">#</a>
We want standardized inputs and no scaling of the outputs, but the output values are not in (0, 1). Will the predictions be inaccurate?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> February 20, 2019 at 7:51 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-469398" title="Direct link to this comment">#</a>
I don’t follow, are what predictions accurate?
yingxiao kong February 28, 2019 at 8:17 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-471145" title="Direct link to this comment">#</a>
Hi Jason,
Your experiment is very helpful for me to understand the differences between the methods; I have done similar things myself. I always standardize the input data, and I have compared the results between unstandardized and standardized targets. The plots show that with standardized targets the network seems to work better. However, I have a question: suppose the standard deviation of my target is 300; then the MSE will be strongly decreased after the standard deviation is fixed to 1. So shall we multiply the MSE by the original std in order to get the MSE in the original target value space?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> February 28, 2019 at 2:32 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-471186" title="Direct link to this comment">#</a>
You can invert the standardization by multiplying by the stdev and then adding the mean.
I also have an example here using sklearn:
<a href="https://machinelearningmastery.com/machine-learning-data-transforms-for-time-series-forecasting/" rel="nofollow ugc">https://machinelearningmastery.com/machine-learning-data-transforms-for-time-series-forecasting/</a>
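As a quick sketch of that inversion with toy values: multiplying by the stdev and then adding the mean reproduces exactly what the sklearn scaler's inverse_transform does.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# hypothetical target with a large standard deviation
y = np.array([[100.0], [400.0], [700.0]])

scaler = StandardScaler()
y_scaled = scaler.fit_transform(y)

# manual inversion: multiply by the stdev, then add the mean
y_restored = y_scaled * scaler.scale_ + scaler.mean_

print(np.allclose(y_restored, y))
```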
Beato March 11, 2019 at 3:51 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-473491" title="Direct link to this comment">#</a>
Hi Jason,
My data includes categorical and continuous variables. Could I encode the categorical data as 1, 2, 3, …, standardize it, and feed it into the neural network models for classification? Or do I need to transform the categorical data with one-hot encoding (0/1)? I have been confused about this. Thanks
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> March 11, 2019 at 6:53 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-473523" title="Direct link to this comment">#</a>
Yes, perhaps try it and compare results?
Bart March 16, 2019 at 5:23 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-474594" title="Direct link to this comment">#</a>
Hi Jason, I have a specific question regarding the normalization (min-max scaling) of the output value. Usually you are supposed to use normalization only on the training data set and then apply those statistics to the validation and test sets. Otherwise you would feed the model, at training time, information about the world it shouldn’t have access to. (The Elements of Statistical Learning: Data Mining, Inference, and Prediction, p. 247)
But, for instance, my output value is a single percentage ranging over [0, 100%] and I am using the ReLU activation function in my output layer. I know for sure that in the “real world” of my problem statement I will get samples ranging from 60–100%, but my training sample size is too small and does not contain data points covering all possible output values. So here comes my question: should I stay with my initial statement (normalization only on the training data set), or should I use the maximum possible value of 100% as the max() value of the normalization step? The latter would contradict the literature. Best regards, Bart
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> March 16, 2019 at 8:02 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-474629" title="Direct link to this comment">#</a>
Correct.
I would recommend a sigmoid activation in the output.
I would then recommend interpreting the 0-1 scale as 60-100 prior to model evaluation.
Does that help?
Bart March 17, 2019 at 1:37 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-474728" title="Direct link to this comment">#</a>
I’m not quite sure what you mean by your second recommendation. How would I achieve that?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> March 17, 2019 at 6:24 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-474757" title="Direct link to this comment">#</a>
You can project the scale of 0-1 to anything you want, such as 60-100.
First rescale to a number between 0 and 40 (value * 40) then add the min value (+ 60)
result = value * 40 + 60
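As a one-line helper (the 60 and 100 bounds are specific to Bart's problem; any range works):

```python
def project(value, lo=60.0, hi=100.0):
    # rescale a 0-1 model output to the lo-hi range of the domain
    return value * (hi - lo) + lo

print(project(0.0), project(0.5), project(1.0))
```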
Mike March 25, 2019 at 1:04 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-476186" title="Direct link to this comment">#</a>
Dear Jason, thank you for the great article.
I am wondering if there is any advantage to using StandardScaler or MinMaxScaler over scaling manually. I could calculate the mean and std (or min and max) of my training data and apply them with the corresponding formula for standard or min-max scaling.
Would this approach produce the same results as StandardScaler or MinMaxScaler, or are the sklearn scalers special?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> March 25, 2019 at 6:46 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-476218" title="Direct link to this comment">#</a>
Yes, it is reliable, bug-free code all wrapped up in a single class – making it harder to introduce new bugs.
Same results as manual scaling, provided you coded the manual scaling correctly.
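A quick check on toy data that manual scaling matches the sklearn scalers, assuming the statistics are computed the same way:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0], [3.0], [5.0]])  # hypothetical training column

# manual min-max scaling
manual_minmax = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(np.allclose(manual_minmax, MinMaxScaler().fit_transform(X)))

# manual standardization (note: StandardScaler uses the population std, ddof=0)
manual_std = (X - X.mean(axis=0)) / X.std(axis=0)
print(np.allclose(manual_std, StandardScaler().fit_transform(X)))
```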
Magnus May 9, 2019 at 8:32 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-484599" title="Direct link to this comment">#</a>
Dear Jason,
I have a few questions about the section “Data normalization”. You mention that we should estimate the max and min values and use them to normalize the training set to e.g. [-1, 1]. But what if the max and min values are in the validation or test set? Then I might get values of e.g. [-1.2, 1.3] in the validation set. Do you consider this incorrect or not?
Another approach is then to make sure that the min and max values for all parameters are contained in the training set. What are your thoughts on this? Is this the way to do it? Or should we use the max and min values for all data combined (training, validation and test sets) when normalizing the training set?
For the moment I use the MinMaxScaler and fit_transform on the training set and then apply that scaler on the validation and test set using transform. But I realise that some of my max values are in the validation set. I suppose this is also related to network saturation.
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> May 10, 2019 at 8:16 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-484663" title="Direct link to this comment">#</a>
Perhaps estimate the min/max using domain knowledge. If new data exceeds the limits, snap it to the known limits – or don’t; test and see how the model is impacted.
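A minimal sketch of the "snap to known limits" idea, assuming for illustration that domain knowledge says the variable lies in [0, 100]:

```python
import numpy as np

# domain-derived limits (an assumption for illustration)
KNOWN_MIN, KNOWN_MAX = 0.0, 100.0

def normalize(x):
    # clip out-of-range values to the known limits, then min-max scale
    x = np.clip(x, KNOWN_MIN, KNOWN_MAX)
    return (x - KNOWN_MIN) / (KNOWN_MAX - KNOWN_MIN)

print(normalize(np.array([-5.0, 50.0, 120.0])))
```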
Regardless, the training set must be representative of the problem.
youssef May 30, 2019 at 9:33 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-487085" title="Direct link to this comment">#</a>
Hello Jason, I am a huge fan of your work! Thank you so much for your insightful tutorials. You are a life saver! I have a small question, if I may:
I am trying to fit spectrograms into a CNN in order to do some classification tasks. Unfortunately, each spectrogram is around a (3000, 300) array. Is there a way to reduce the dimensionality without losing too much information?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> May 30, 2019 at 2:50 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-487103" title="Direct link to this comment">#</a>
Ouch, perhaps start with simple downsampling and see what effect that has?
Muktamani July 4, 2019 at 8:54 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-491581" title="Direct link to this comment">#</a>
Hi Jason,
It was always good and informative to go through your blogs and your interaction with comments by different people all across the globe.
I have question regarding the scaling techniques.
As you explained about scaling:
Case 1:
# create scaler
scaler = StandardScaler()
# fit scaler on training dataset
scaler.fit(trainy)
# transform training dataset
trainy = scaler.transform(trainy)
# transform test dataset
testy = scaler.transform(testy)
In this case the mean and standard deviation used for train and test remain the same.
What I approached is:
Case 2:
# create scaler
scaler_train = StandardScaler()
# fit scaler on training dataset
scaler_train.fit(trainy)
# transform training dataset
trainy = scaler_train.transform(trainy)
# create a second scaler
scaler_test = StandardScaler()
# fit scaler on test dataset
scaler_test.fit(testy)
# transform test dataset
testy = scaler_test.transform(testy)
Here the mean and standard deviation of the train data and the test data are different, so the model may find the test data completely unknown and new; whereas in the first case, where the same mean and standard deviation are applied to train and test, the test data is “known” to the model (known in terms of receiving the same mean and standard deviation treatment).
Jason, can you guide me on whether my logic is good to go with case 2, or shall I consider case 1?
Or, if the logic is wrong, you can also say that and explain.
(I also applied the same for min-max scaling, i.e. normalization, if I choose that.)
Again thanks Jason for such a nice work !
Happy Learning !!
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> July 5, 2019 at 8:06 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-491669" title="Direct link to this comment">#</a>
I recommend fitting the scaler on the training dataset once, then apply it to transform the training dataset and test set.
If you fit the scaler using the test dataset, you will have data leakage and possibly an invalid estimate of model performance.
ICHaLiL July 6, 2019 at 12:13 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-491776" title="Direct link to this comment">#</a>
Hi Jason,
I’m working on a sequence2sequence problem. The input’s max and min points are around 500 and 300, while the output’s are around 200 and 0. If I want to normalize them, should I use different scalers? For example:
scx = MinMaxScaler(feature_range = (0, 1))
scy = MinMaxScaler(feature_range = (0, 1))
trainx = scx.fit_transform(trainx)
trainy = scy.fit_transform(trainy)
or should I scale them with same scale like below?
sc = MinMaxScaler(feature_range = (0, 1))
trainx = sc.fit_transform(trainx)
trainy = sc.fit_transform(trainy)
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> July 6, 2019 at 8:40 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-491833" title="Direct link to this comment">#</a>
Yes, using a separate transform for inputs and outputs is a good idea. Otherwise, have them all as separate columns in the same matrix and use one scaler, but the column order for transform/inverse_transform will always have to be consistent.
Brent July 12, 2019 at 6:55 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-492643" title="Direct link to this comment">#</a>
Hi Jason,
Confused about one aspect, I have a small NN with 8 independent variables and one dichotomous dependent variable. I have standardized the input variables (the output variable was left untouched). I have both trained and created the final model with the same standardized data. However, the question is, if I want to create a user interface to receive manual inputs, those will no longer be in the standardized format, so what is the best way to proceed?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> July 13, 2019 at 6:53 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-492702" title="Direct link to this comment">#</a>
You must keep the objects used to prepare the data, or the coefficients those objects use (the mean and stdev), so that you can prepare new data identically to the way the data was prepared during training.
Does that help?
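One common way to keep those objects around is to persist the fitted scaler alongside the model; a sketch using joblib (which ships with scikit-learn's ecosystem), with toy data:

```python
import joblib
import numpy as np
from sklearn.preprocessing import StandardScaler

# at training time: fit the scaler and save it next to the model
X_train = np.array([[1.0], [2.0], [3.0]])
scaler = StandardScaler().fit(X_train)
joblib.dump(scaler, "scaler.joblib")

# later, in the user interface: load it and prepare manual input identically
loaded = joblib.load("scaler.joblib")
prepared = loaded.transform(np.array([[2.5]]))
print(prepared)
```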
Brent July 15, 2019 at 10:58 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-492925" title="Direct link to this comment">#</a>
Thank you, that makes perfect sense.
cgv July 21, 2019 at 11:27 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-493673" title="Direct link to this comment">#</a>
Hi Jason,
I have built an ANN model and scaled my inputs and outputs before feeding them to the network. I measure the performance of the model by r2_score. My output variable is height. When the output variable is in metres my r2_score is 0.98, but when it is in centimetres my r2_score is 0.91. Why is there a difference in r2_score, given that the output variable is scaled before being fed to the network?
Thanks in advance
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> July 22, 2019 at 8:27 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-493717" title="Direct link to this comment">#</a>
Good question, this is why it is important to test different scaling approaches in order to discover what works best for a given dataset and model combination.
madhuri August 29, 2019 at 3:44 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-498877" title="Direct link to this comment">#</a>
Hi Jason,
I am working on a sequence-to-data prediction problem in which I am normalizing both the input and the output.
Once the model is trained, to get the actual output in real time I have to de-normalize, and when I do, the error increases by the same factor I used for normalization.
Let’s say the normalized predicted output is 0.1 and the error of the model is 0.01.
The de-normalized predicted output becomes 0.1 * 100 = 10, and after de-normalizing, the error becomes 0.01 * 100 = 1.
So, what is the solution to eliminate this kind of problem in regression?
Thanks
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> August 29, 2019 at 6:16 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-498916" title="Direct link to this comment">#</a>
What problem exactly?
madhuri August 29, 2019 at 8:38 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-498975" title="Direct link to this comment">#</a>
The problem is that after de-normalization of the output, the error difference between the actual and predicted output is scaled up by the normalization factor (max – min). So I want to know what can be done to make the error difference the same for both the de-normalized and the normalized output.
Thanks
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> August 30, 2019 at 6:18 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-499030" title="Direct link to this comment">#</a>
I don’t understand, sorry.
joshBorrison October 7, 2019 at 5:08 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-504531" title="Direct link to this comment">#</a>
Hi Jason,
Do I have to use only one normalization formula for all inputs?
For example: I have 5 inputs [inp1, inp2, inp3, inp4, inp5] where I can estimate max and min only for [inp1, inp2]. So can I use
y = (x – min) / (max – min)
for [inp1, inp2] and
y = x/(1+x)
for [inp3, inp4, inp5]?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> October 8, 2019 at 7:53 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-504621" title="Direct link to this comment">#</a>
Yes, it is applied to each input separately – assuming they have different units.
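Mixing per-column transforms is straightforward; a sketch with made-up ranges for inp1 and inp2 (the 0–10 and 0–100 limits are assumptions for illustration):

```python
import numpy as np

def minmax(x, lo, hi):
    return (x - lo) / (hi - lo)

def squash(x):
    # y = x / (1 + x): maps non-negative inputs into [0, 1) without a known max
    return x / (1.0 + x)

# one hypothetical sample [inp1, inp2, inp3, inp4, inp5]
X = np.array([[5.0, 50.0, 3.0, 7.0, 0.5]])

scaled = np.column_stack([
    minmax(X[:, 0], 0.0, 10.0),    # inp1: known range, assumed 0-10
    minmax(X[:, 1], 0.0, 100.0),   # inp2: known range, assumed 0-100
    squash(X[:, 2]),               # inp3: no reliable max
    squash(X[:, 3]),               # inp4
    squash(X[:, 4]),               # inp5
])
print(scaled)
```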
shiva November 13, 2019 at 4:17 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-510336" title="Direct link to this comment">#</a>
Hi Jason
What if I scale the word vectors (GloVe) before exposing them to the LSTM?
Would it affect the accuracy of the results, or would it maintain the semantic relations between words?
Thank you a lot.
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> November 13, 2019 at 5:53 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-510377" title="Direct link to this comment">#</a>
I don’t think so. Try it and see?
Murilo Souza November 14, 2019 at 12:35 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-510516" title="Direct link to this comment">#</a>
Hello, I was trying to normalize / inverse-transform my data, but I got an error that I think is due to the resize I did on my input data. Here’s my code:
import numpy as np
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import time as time
import matplotlib.pyplot as plt
import pydot
import csv as csv
import keras.backend as K
from sklearn.preprocessing import MinMaxScaler
# Downloading data
!wget <a href="https://raw.githubusercontent.com/sibyjackgrove/CNN-on-Wind-Power-Data/master/MISO_power_data_classification_labels.csv" rel="nofollow ugc">https://raw.githubusercontent.com/sibyjackgrove/CNN-on-Wind-Power-Data/master/MISO_power_data_classification_labels.csv</a>
!wget <a href="https://raw.githubusercontent.com/sibyjackgrove/CNN-on-Wind-Power-Data/master/MISO_power_data_input.csv" rel="nofollow ugc">https://raw.githubusercontent.com/sibyjackgrove/CNN-on-Wind-Power-Data/master/MISO_power_data_input.csv</a>
# Trying normalization
batch_size = 1
valid_size = max(1, np.int(0.2 * batch_size))
df_input = pd.read_csv('./MISO_power_data_input.csv', usecols=['Wind_MWh', 'Actual_Load_MWh'], chunksize=24*(batch_size+valid_size), nrows=24*(batch_size+valid_size), iterator=True)
df_target = pd.read_csv('./MISO_power_data_classification_labels.csv', usecols=['Mean Wind Power', 'Standard Deviation', 'WindShare'], chunksize=batch_size+valid_size, nrows=batch_size+valid_size, iterator=True)
for chunk, chunk2 in zip(df_input, df_target):
    InputX = chunk.values
    InputX = np.resize(InputX, (batch_size+valid_size, 24, 2, 1))
    print(InputX)
    InputX.astype('float32', copy=False)
    InputY = chunk2.values
    InputY.astype('float32', copy=False)
    print(InputY)
    # create scaler
    scaler = MinMaxScaler()  # define limits for normalizing data
    normalized_input = scaler.fit_transform(InputX)  # normalize input data
    normalized_output = scaler.fit_transform(InputY)  # normalize output data
    print(normalized_input)
    print(normalized_output)
    inverse_output = scaler.inverse_transform(normalized_output)  # inverse transformation of output data
    print(inverse_output)
The error:
“ValueError: Found array with dim 4. MinMaxScaler expected <= 2."
Do you have any idea how I can fix this? I would really rather not change the resize command at the moment.
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> November 14, 2019 at 8:04 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-510565" title="Direct link to this comment">#</a>
Perhaps this will help:
<a href="https://machinelearningmastery.com/machine-learning-data-transforms-for-time-series-forecasting/" rel="nofollow ugc">https://machinelearningmastery.com/machine-learning-data-transforms-for-time-series-forecasting/</a>
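One common way around the dimension error (a sketch, not specific to that dataset) is to collapse the 4D array to 2D with one column per feature, scale it, then restore the shape the network expects:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# a 4D array shaped like the resized input: (samples, 24, 2, 1)
X = np.random.rand(2, 24, 2, 1)

# collapse to 2D so each of the 2 features becomes a column
flat = X.reshape(-1, 2)
scaler = MinMaxScaler()
flat_scaled = scaler.fit_transform(flat)

# restore the 4D shape expected by the network
X_scaled = flat_scaled.reshape(X.shape)
print(X_scaled.shape)
```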
Murilo Souza November 15, 2019 at 12:44 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-510656" title="Direct link to this comment">#</a>
Is there any way I can do the inverse transform inside the model itself? Because, for example, the MSE reported at the end of each epoch would otherwise be on the “wrong” scale.
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> November 15, 2019 at 7:53 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-510694" title="Direct link to this comment">#</a>
Yes, you could wrap the model in a sklearn pipeline.
Or wrap the model in your own wrapper class.
Mariana Costa April 29, 2021 at 11:11 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-607456" title="Direct link to this comment">#</a>
Does it improve the net to do this?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> April 30, 2021 at 6:06 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-607501" title="Direct link to this comment">#</a>
No, but it may be helpful when coding.
Jules Damji November 14, 2019 at 6:55 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-510545" title="Direct link to this comment">#</a>
Hey Jason,
I love this tutorial. I was wondering if I could get your permission to use this tutorial, convert all its experimentation and tracking to use MLflow, and include it in the tutorials I teach at conferences.
It’s a fitting example of how you can use MLflow to track different experiments and visually compare the outcomes.
All the credit will be given to you as the source and inspiration. You can see some of the examples here: <a href="https://github.com/dmatrix/spark-saturday/tree/master/tutorials/mlflow/src/python" rel="nofollow ugc">https://github.com/dmatrix/spark-saturday/tree/master/tutorials/mlflow/src/python</a>.
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> November 14, 2019 at 8:07 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-510572" title="Direct link to this comment">#</a>
Thanks!
No problem as long as you clearly cite and link to the post.
jules Damji November 14, 2019 at 2:45 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-510615" title="Direct link to this comment">#</a>
Thanks, I will certainly include the original link and plug your book too, along with your site as an excellent resource of tutorials and examples to learn from.
Cheers
Jules
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> November 15, 2019 at 7:42 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-510683" title="Direct link to this comment">#</a>
Thanks Jules.
Hanser November 28, 2019 at 8:13 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-512747" title="Direct link to this comment">#</a>
Amazing content Jason! I was wondering if it is possible to apply different scalers to different inputs based on their original characteristics? I am asking because, as you mentioned in the tutorial, “Differences in the scales across input variables may increase the difficulty of the problem being modeled.” Therefore, if I use a standard scaler on one input and a normalizing scaler on another, it could be bad for gradient descent.
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> November 28, 2019 at 8:16 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-512749" title="Direct link to this comment">#</a>
Thanks!
Yes, perhaps try it and compare the results to using one type of scaling for all inputs.
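One way to try it (a minimal sketch assuming scikit-learn; the data is illustrative) is a ColumnTransformer, which applies a different scaler to each column group:

```python
# Sketch: ColumnTransformer applies a different scaler to each group of
# input columns, making it easy to compare against a single scaler.
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])

ct = ColumnTransformer([
    ("minmax", MinMaxScaler(), [0]),      # normalize the first input
    ("standard", StandardScaler(), [1]),  # standardize the second input
])
X_scaled = ct.fit_transform(X)
print(X_scaled[:, 0])  # first column now in [0, 1]
print(X_scaled[:, 1])  # second column now has mean 0
```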
Riyaz Pasha December 9, 2019 at 9:26 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-514349" title="Direct link to this comment">#</a>
Hi Jason,
I am solving a regression problem and my accuracy after normalizing the target variable is 92%, but I have a doubt about scaling the target variable. Can you elaborate on scaling the target variable?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> December 10, 2019 at 7:30 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-514403" title="Direct link to this comment">#</a>
You cannot calculate accuracy for regression. You must calculate error.
More details here:
<a href="https://machinelearningmastery.com/faq/single-faq/how-do-i-calculate-accuracy-for-regression" rel="nofollow ugc">https://machinelearningmastery.com/faq/single-faq/how-do-i-calculate-accuracy-for-regression</a>
FAIZ December 30, 2019 at 7:34 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-516662" title="Direct link to this comment">#</a>
Hi Jason Sir!
My data range is variable, e.g. -1500000, 0.0003456, 2387900, 23, 50, -45, -0.034. What should I do? I want to use MLP, 1D-CNN and SAE.
THANKS
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> December 31, 2019 at 7:31 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-516700" title="Direct link to this comment">#</a>
Perhaps try normalizing the data first?
Faiz January 2, 2020 at 12:28 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-516827" title="Direct link to this comment">#</a>
I tried different types of normalization but got data type errors. I used “MinMaxScaler” and also (X-min(X))/(max(X)-min(X)), but it can’t process. I want to know about the tf.compat.v1.keras.utils.normalize() command: what does it actually do? Thanks
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> January 2, 2020 at 6:42 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-516854" title="Direct link to this comment">#</a>
I don’t have a tutorial on that, perhaps check the source code?
BNB January 30, 2020 at 3:32 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-519745" title="Direct link to this comment">#</a>
Hi Jason
I have a question about the normalization of data. Samples from the population may be added to the dataset over time, and the attribute values for these new objects may then lie outside those you have seen so far. One possibility to handle new minimum and maximum values is to periodically renormalize the data after including the new values. Is there any normalization approach without renormalization?
Thanks,
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> January 30, 2020 at 6:56 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-519769" title="Direct link to this comment">#</a>
Yes, re-normalizing is one approach.
Clipping values to historical limits is another.
Perhaps try a few methods and see what makes sense for your project?
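The clipping approach can be sketched like this (assuming NumPy; the numbers are illustrative): fix the limits from the data seen so far, and clip anything outside them, so new extremes never require re-normalizing old data.

```python
# Sketch: normalize with fixed historical limits and clip anything outside
# them, so new extreme values never require re-normalizing old data.
import numpy as np

train = np.array([10.0, 20.0, 30.0, 40.0])
lo, hi = train.min(), train.max()            # historical limits

def normalize(x):
    x = np.clip(x, lo, hi)                   # snap out-of-range values to the limits
    return (x - lo) / (hi - lo)

new = np.array([5.0, 25.0, 50.0])            # 5 and 50 lie outside the training range
print(normalize(new))                        # 0.0, 0.5, 1.0
```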
<a href='http://None.' rel='external nofollow ugc' class='url'>Tajik</a> February 19, 2020 at 1:15 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-522214" title="Direct link to this comment">#</a>
Hi Jason
Should we use “standard_deviation = sqrt( sum( (x – mean)**2 ) / count(x))” instead of “standard_deviation = sqrt( sum( (x – mean)^2 ) / count(x))”?
Does the “^” sign represent exponentiation in Python, and is it fine not to subtract 1 from count(x) (in order to make it the std of a sample distribution, unless we have 100% observation of a population)?
Thank you
Best
Tajik
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> February 19, 2020 at 8:06 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-522269" title="Direct link to this comment">#</a>
^ means superscript (e.g. exponent) in latex and excel.
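In Python specifically, exponentiation is written with ** (the ^ operator is bitwise XOR), and NumPy's ddof argument controls the population-vs-sample question. A small sketch of both formulas (the data is illustrative):

```python
# Sketch: ** is Python's exponentiation operator, and ddof switches between
# the population and sample standard deviation.
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
mean = x.mean()

# divide by count(x): population standard deviation
pop_std = np.sqrt(np.sum((x - mean) ** 2) / len(x))
# divide by count(x) - 1: sample standard deviation
sample_std = np.sqrt(np.sum((x - mean) ** 2) / (len(x) - 1))

print(pop_std)     # 2.0 for this data (matches np.std(x), i.e. ddof=0)
print(sample_std)  # slightly larger; matches np.std(x, ddof=1)
```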
Peter February 22, 2020 at 6:18 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-522703" title="Direct link to this comment">#</a>
Hi Jason,
Very helpful post as always! I am slightly confused regarding the use of the scaler object though. In my scenario…
If I have a set of data that I split into a training set and validation set, I then scale the data as follows:
scaler = MinMaxScaler()
scaledTrain = scaler.fit_transform(trainingSet)
scaledValid = scaler.transform(validationSet)
I then use this data to train a deep learning model.
My question is, should I use the same scaler object, which was created using the training set, to scale my new, unseen test data before using that test set for predicting my model’s performance? Or should I create a new, separate scaler object using the test data?
Thanks in advance
Michael
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> February 22, 2020 at 6:40 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-522738" title="Direct link to this comment">#</a>
Yes.
Any data given to your model MUST be prepared in the same way. You are defining the expectations for the model based on how the training set looks.
Use the same scaler object – it knows – from being fit on the training dataset – how to transform data in the way your model expects.
Peter February 22, 2020 at 7:24 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-522746" title="Direct link to this comment">#</a>
Awesome! Thanks so much for the quick response and clearing that up for me.
Very best wishes.
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> February 23, 2020 at 7:19 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-522840" title="Direct link to this comment">#</a>
You’re welcome.
Mike March 10, 2020 at 2:21 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-525070" title="Direct link to this comment">#</a>
Hi Jason,
Thank you for the tutorial. A question about the conclusion: I find it surprising that standardization did not yield better performance compared to the model with unscaled inputs. Shouldn’t standardization provide better convergence properties when training neural networks? It’s also surprising that min-max scaling worked so well. If all of your inputs are positive (i.e. in [0, 1] in this case), doesn’t that mean ALL of your weight updates at each step will have the same sign, which leads to inefficient learning?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> March 11, 2020 at 5:16 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-525143" title="Direct link to this comment">#</a>
Not always. It really depends on the problem and the model.
Zeynep newby May 15, 2020 at 8:59 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-534865" title="Direct link to this comment">#</a>
Hi Jason,
I am an absolute beginner into neural networks and I appreciate your helpful website. In the lecture, I learned that when normalizing a training set, one should use the same mean and standard deviation from training for the test set. But I see in your codes that you’re normalizing training and test sets individually. Is that for a specific reason?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> May 16, 2020 at 6:09 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-534911" title="Direct link to this comment">#</a>
The example correctly fits the transform on the training set then applies the transform to train and test sets.
If we don’t do it this way, it will result in data leakage and in turn an optimistic estimate of model performance.
Zeynep newby May 15, 2020 at 9:04 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-534866" title="Direct link to this comment">#</a>
Hi again,
since I saw another comment with the same question as mine, I noticed that you actually have done exactly what I expected. Since I am not familiar with the syntax yet, I got it wrong. Thanks very much!
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> May 16, 2020 at 6:10 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-534912" title="Direct link to this comment">#</a>
No problem!
Ask questions anyway, even if you’re not sure. The tutorials are really just the starting point in a conversation.
Isaac May 17, 2020 at 3:25 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-535093" title="Direct link to this comment">#</a>
Hi Jason, I am a beginner in ML and I am having an issue with normalizing.
I am developing a multivariate regression model with three inputs and three outputs.
The three inputs are in the range of [700 1500] , [700-1500] and [700 1500]
The three outputs are in the range of [-0.5 0.5] , [-0.5 0.5] and [700 1500]
I have normalized everything in the range of [-1 1].
The loss at the end of 1000 epochs is on the order of 1e-4, but I am still not satisfied with the fit of the model. Since the loss function is based on normalized target variables and normalized predictions, its value is very small from the first epoch.
Is there a way to bring the cost further down?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> May 18, 2020 at 6:08 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-535157" title="Direct link to this comment">#</a>
Yes, the suggestions here will help you improve your model:
<a href="https://machinelearningmastery.com/start-here/#better" rel="nofollow ugc">https://machinelearningmastery.com/start-here/#better</a>
Victor Yu June 9, 2020 at 11:43 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-538831" title="Direct link to this comment">#</a>
Hi Jason,
I wonder how you apply scaling to batched data? Say we batch-load from tfrecords; do we fit a scaler for each batch? If so, then the final scaler is fit on the last batch, and that is what will be used for the test data? Also, if the batch is small, the scaler seems volatile, especially for MinMax. Would like to hear your thoughts, since in a lot of practical settings it’s nearly impossible to load huge data into the driver to do scaling.
Thanks!
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> June 10, 2020 at 6:16 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-538889" title="Direct link to this comment">#</a>
Scaling is fit on the training set, then applied to all data, e.g. train, test, val.
Victor Yu June 10, 2020 at 12:10 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-538935" title="Direct link to this comment">#</a>
The entire training set? What if the entire training set is too big to load into memory? Even when doing batch training, do you still scale the entire training set first and then do batch training? That seems pretty inefficient.
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> June 10, 2020 at 1:25 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-538949" title="Direct link to this comment">#</a>
You can use a generator to load the data step by step, only keep in memory what you can/need.
More suggestions here:
<a href="https://machinelearningmastery.com/faq/single-faq/how-to-i-work-with-a-very-large-dataset" rel="nofollow ugc">https://machinelearningmastery.com/faq/single-faq/how-to-i-work-with-a-very-large-dataset</a>
Victor Yu June 11, 2020 at 10:53 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-539080" title="Direct link to this comment">#</a>
Yes, that’s my question. When doing batch training, do you fit (or re-fit) a scaler on each batch? If so, it seems the final scaler that will be used for scoring is fit on the final batch. Do you see any issue with that especially when batch is small? Thanks
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> June 11, 2020 at 1:31 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-539088" title="Direct link to this comment">#</a>
You could, this is what batch norm does.
Or you can estimate the coefficients used in scaling up front from a sample of training data. Or some other way you prefer.
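A sketch of estimating the scaling statistics without loading everything at once (assuming scikit-learn; the random batches stand in for data read from disk): StandardScaler supports partial_fit(), which accumulates mean/variance one batch at a time.

```python
# Sketch: when the training data cannot fit in memory, StandardScaler can
# accumulate its statistics incrementally with partial_fit(), one batch at
# a time, so the final scaler reflects ALL batches rather than the last one.
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
batches = [rng.normal(50.0, 5.0, size=(100, 3)) for _ in range(10)]

scaler = StandardScaler()
for batch in batches:          # first pass: update mean/variance estimates
    scaler.partial_fit(batch)

# The fitted scaler can now transform any batch (train, test, or new data).
scaled = scaler.transform(batches[0])
print(scaler.mean_.round(1))   # close to [50. 50. 50.]
```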
Najeh June 19, 2020 at 6:32 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-540105" title="Direct link to this comment">#</a>
Hi Jason,
In deep learning, as in machine learning, should data be transformed into a tabular format? If yes or no, why?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> June 19, 2020 at 1:08 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-540131" title="Direct link to this comment">#</a>
Input data must be vectors or matrices of numbers, this covers tabular data, images, audio, text, and so on.
Julie June 24, 2020 at 10:29 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-540919" title="Direct link to this comment">#</a>
Hello Jason,
I used your method (I standardized my outputs and normalized my inputs with MinMaxScaler()) but I keep having the same issue: when I train my neural network with 3200 samples and validate with 800, everything is alright and I have R2 = 99%, but when I increase the training/validation set size, R2 decreases, which is weird; shouldn’t it be even higher? Do you think it has something to do with the scaling of the data?
Thank you !
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> June 25, 2020 at 6:17 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-540974" title="Direct link to this comment">#</a>
It might be interesting to perform a sensitivity analysis on model performance vs train or test set size to understand the relationship.
Munaf February 23, 2021 at 4:30 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-598424" title="Direct link to this comment">#</a>
Sir, how can I normalize real-time data and scale it between -150 and 150? The data arrive at 5-minute intervals.
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> February 23, 2021 at 6:25 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-598446" title="Direct link to this comment">#</a>
Perhaps use the MinMaxScaler if you’re having trouble:
<a href="https://machinelearningmastery.com/standardscaler-and-minmaxscaler-transforms-in-python/" rel="nofollow ugc">https://machinelearningmastery.com/standardscaler-and-minmaxscaler-transforms-in-python/</a>
David July 4, 2020 at 1:29 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-542748" title="Direct link to this comment">#</a>
Hi sir,
I have a NN with 6 input variables and one output; I employed MinMaxScaler for the inputs as well as the outputs. My approach was applying the scaler to my whole dataset and then splitting it into training and testing datasets. As I don’t know the know-how, is my approach wrong?
Currently the problem I am facing is that my actual outputs are positive values, but after unscaling the NN predictions I am getting negative values. I tried changing the feature range and the NN still predicted negative values, so how can I solve this?
Y1=Y1.reshape(-1, 1)
Y2=Y2.reshape(-1, 1)
TY1=TY1.reshape(-1, 1)
TY2=TY2.reshape(-1, 1)
scaler1 = MinMaxScaler(feature_range=(0, 1))
rescaledX= scaler1.fit_transform(X)
rescaledTX=scaler1.fit_transform(TX)
scaler2 = MinMaxScaler(feature_range=(0, 2))
rescaledY1 = scaler2.fit_transform(Y1)
scaler3 = MinMaxScaler(feature_range=(0, 2))
rescaledY2 = scaler3.fit_transform(Y2)
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> July 5, 2020 at 6:54 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-542829" title="Direct link to this comment">#</a>
First, perhaps confirm that there is no bug in your code.
Second, it is possible for the model to predict values that get mapped to a value out of bounds. You could use an if-statement to snap them to the required range, or use a model that forces predictions to the required range.
TAMER A. FARRAG July 30, 2020 at 9:42 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-546517" title="Direct link to this comment">#</a>
Thanks a lot,
My question is:
I finished training my model, using normalized data for the inputs and outputs.
My problem now is that when I need to use this model, I do the following:
1- I load the model
2- normalize the inputs
3- use model to get the outputs (predicted data)
how do I denormalize the output of the model? I don’t have the MinMaxScaler for the output.
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> July 30, 2020 at 1:45 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-546534" title="Direct link to this comment">#</a>
You can call inverse_transform() on the scaler object for the predictions to get the data back to the original scale.
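For example (a minimal sketch; the target values and predictions are illustrative):

```python
# Sketch: if the target was scaled before training, call inverse_transform()
# on the same scaler to map predictions back to the original units.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

y = np.array([[10.0], [20.0], [30.0]])
target_scaler = MinMaxScaler()
y_scaled = target_scaler.fit_transform(y)          # targets now in [0, 1]

# Suppose the model predicted these scaled values...
yhat_scaled = np.array([[0.5], [1.0]])
yhat = target_scaler.inverse_transform(yhat_scaled)
print(yhat.ravel())                                # approximately [20. 30.]
```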
TAMER A. FARRAG July 30, 2020 at 7:21 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-546563" title="Direct link to this comment">#</a>
Thanks for the fast reply,
I think my question is not clear to you.
Imagine that I finish the training phase and save the trained model named “model1”.
I send the “model1” file to a friend and he tries to use it, he will normalize the inputs and get the outputs. In this case, he doesn’t have the scaler object to recover the original values using inverse_transform().
my problem is similar to: <a href="https://stackoverflow.com/questions/37595891/how-to-recover-original-values-after-a-model-predict-in-keras" rel="nofollow ugc">https://stackoverflow.com/questions/37595891/how-to-recover-original-values-after-a-model-predict-in-keras</a>
but the answer doesn’t use the scaler object; it depends on a manual normalization process.
Thanks for your time
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> July 31, 2020 at 6:15 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-546618" title="Direct link to this comment">#</a>
Save the scaler object as well:
<a href="https://machinelearningmastery.com/how-to-save-and-load-models-and-data-preparation-in-scikit-learn-for-later-use/" rel="nofollow ugc">https://machinelearningmastery.com/how-to-save-and-load-models-and-data-preparation-in-scikit-learn-for-later-use/</a>
You are developing a “modeling pipeline”, not just a predictive model.
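A sketch of persisting the scaler (pickle is one option; the filename is illustrative), so whoever receives the model can also invert the scaling:

```python
# Sketch: save the fitted scaler alongside the model so the receiving side
# can invert the scaling. The filename here is illustrative.
import pickle
import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler().fit(np.array([[0.0], [100.0]]))
with open("target_scaler.pkl", "wb") as f:
    pickle.dump(scaler, f)                  # ship this file with the model

# On the receiving side: load the scaler and invert the predictions.
with open("target_scaler.pkl", "rb") as f:
    loaded = pickle.load(f)
yhat_original = loaded.inverse_transform(np.array([[0.25]]))
print(yhat_original)                        # back in the original units
```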
Mel August 15, 2020 at 9:09 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-549367" title="Direct link to this comment">#</a>
Hi Jason,
Do you know of any textbooks or journal articles that address the input scaling issue as you’ve described it here, in addition to the Bishop textbook? I’m struggling so far in vain to find discussions of this type of scaling, when different raw input variables have much different ranges. Instead I’m finding plenty of mentions in tutorials and blog posts (of which yours is one of the clearest), and papers describing the problems of scale (size) variance in neural networks designed for image recognition.
Thanks!
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> August 15, 2020 at 1:26 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-549393" title="Direct link to this comment">#</a>
Not really, practical issues are not often discussed in textbooks/papers.
Maybe “Neural Smithing”? Maybe Bishop’s later book?
Munisha Bansal September 29, 2020 at 5:44 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-565174" title="Direct link to this comment">#</a>
Hi Jason,
Thank you very much for the article. I wanted to understand the following scenario
I have a mix of categorical and numerical inputs. I can normalize/standardize the numerical inputs and the numerical output variable.
But one of the categorical variables has a high number of categories, ~3000, so I use a label encoder (not one-hot encoding) and then an embedding layer. How can I achieve scaling in this case?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> September 30, 2020 at 6:24 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-565223" title="Direct link to this comment">#</a>
You can separate the columns and scale them independently, then aggregate the results.
Hamed October 22, 2020 at 1:58 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-569739" title="Direct link to this comment">#</a>
Hi Jason,
I really enjoyed reading your article. My CNN regression network takes a binary image as input, in which the background is black and the foreground is white. The ground truth associated with each input is an image with color range from 0 to 255, which is normalized between 0 and 1.
The network can almost detect edges and background, but in the foreground all the predicted values are almost the same. Do you have any idea what the solution is?
I appreciate in advance.
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> October 22, 2020 at 6:45 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-569783" title="Direct link to this comment">#</a>
Thanks.
Perhaps these tips will help you improve the performance of your model:
<a href="https://machinelearningmastery.com/start-here/#better" rel="nofollow ugc">https://machinelearningmastery.com/start-here/#better</a>
walid November 5, 2020 at 11:50 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-573097" title="Direct link to this comment">#</a>
Hi jason, how are you?
I have data with input X (a matrix of real values) and output y (a matrix of real values).
I tried to normalize X and y:
scaler1 = Normalizer()
X = scaler1.fit_transform(X)
scaler2 = Normalizer()
y = scaler2.fit_transform(y)
I get a good result with the Normalizer transform, as shown by: <a href="https://ibb.co/bQYCkvK" rel="nofollow ugc">https://ibb.co/bQYCkvK</a>
At the end I tried to get the predicted values: yhat = model.predict(X_test).
The problem here is that yhat is not the original data; it is transformed data, and there is no inverse for Normalizer.
I tried to use MinMaxScaler in order to do the inverse operation (invyhat = scaler2.inverse_transform(yhat)) but I get big numbers compared to the y_test values that I want.
I tried to normalize just X, and I got a worse result compared to the first one.
Could you please help me?
example of X values : 1006.808362,13.335140,104.536458 …..
289.197205,257.489613,106.245104,566.941857…..
.
example of y values: 0.50000, 250.0000
0.879200,436.000000
.
.
this is my code:
X = dataset[:,0:20]
y = dataset[:,20:22]
scaler1 = Normalizer()
X = scaler1.fit_transform(X)
scaler2 = Normalizer()
y = scaler2.fit_transform(y)
X_train = X[90000:,:]
X_test = X[:90000,:]
y_train = y[90000:,:]
y_test = y[:90000,:]
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# define the keras model
model = Sequential()
# input layer
model.add(Dense(20, input_dim=20, activation='relu', kernel_initializer='normal'))
# hidden layer
model.add(Dense(7272, activation='relu', kernel_initializer='normal'))
model.add(Dropout(0.8))
# output layer
model.add(Dense(2, activation='linear'))
opt = Adadelta(lr=0.01)
# compile the keras model
model.compile(loss='mean_squared_error', optimizer=opt, metrics=['mse'])
# fit the keras model on the dataset
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20, verbose=0)
# evaluate the model
_, train_mse = model.evaluate(X_train, y_train, verbose=0)
_, test_mse = model.evaluate(X_test, y_test, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_mse, test_mse))
yhat = model.predict(X_test)
# plot loss during training
pyplot.title('Loss / Mean Squared Error')
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> November 6, 2020 at 5:57 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-573152" title="Direct link to this comment">#</a>
Sorry to hear that you’re having trouble, perhaps some of these tips will help:
<a href="https://machinelearningmastery.com/faq/single-faq/can-you-read-review-or-debug-my-code" rel="nofollow ugc">https://machinelearningmastery.com/faq/single-faq/can-you-read-review-or-debug-my-code</a>
Carlos November 17, 2020 at 9:18 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-575881" title="Direct link to this comment">#</a>
Hi Jason, first, thanks for the wonderful article. I have a small doubt. By normalizing my data and then dividing it into training and testing sets, all samples will be normalized. But in the case of a real application, where I have an input given by the user, do I need to put it together with all the data and normalize it so that it has the same pattern as the other data? What would be the best alternative?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> November 17, 2020 at 12:56 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-575928" title="Direct link to this comment">#</a>
Good question.
No, save the scaler object or coefficients used for scaling along with the model and use them to prepare new data in the future. More here:
<a href="https://machinelearningmastery.com/how-to-save-and-load-models-and-data-preparation-in-scikit-learn-for-later-use/" rel="nofollow ugc">https://machinelearningmastery.com/how-to-save-and-load-models-and-data-preparation-in-scikit-learn-for-later-use/</a>
<a href='http://www.iqvia.com' rel='external nofollow ugc' class='url'>Chris</a> December 3, 2020 at 2:41 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-580301" title="Direct link to this comment">#</a>
Hi Jason, what is the best way to scale NaNs when you need the model to generate them? I am creating a synthetic dataset where NaNs are a critical part. In one case we have people with no corresponding values for a field (truly missing), and in another case we have missing values but want to replicate the fact that values are missing. I tried filling the missing values with the negative sys.max value, but the model tends to spread values between the real data’s negative limit and the max limit, instead of treating the max value as an outlier. In another case, it seems to ignore that value and always generates values within the real data range, resulting in no generated NaNs. I enjoyed your book and look forward to your response.
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> December 3, 2020 at 8:21 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-580371" title="Direct link to this comment">#</a>
You cannot scale a NaN; you must first replace it with a value, a process called imputation.
If you want to mark missing values with a special value, either mark and then scale, or exclude those rows from the scaling process and impute after scaling. The latter sounds better to me.
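A rough sketch of the second option, assuming a single numeric column and an arbitrary sentinel value of -1:

```python
# Scale only the observed values, then fill the missing entries with a
# sentinel afterwards, so the sentinel never distorts the scaler's fit.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

x = np.array([[1.0], [2.0], [np.nan], [4.0]])
mask = np.isnan(x[:, 0])                 # which rows are missing

scaler = MinMaxScaler()
scaled = x.copy()
scaled[~mask] = scaler.fit_transform(x[~mask])  # scale observed values only
scaled[mask] = -1.0                             # impute after scaling
```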
Luke Mao January 6, 2021 at 4:13 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-591809" title="Direct link to this comment">#</a>
Thanks Jason for the blog post.
One question:
is it necessary to apply feature scaling for linear regression models as well as MLPs?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> January 6, 2021 at 6:32 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-591844" title="Direct link to this comment">#</a>
Yes, it is a good idea to scale input data prior to modeling for models that use a weighted sum of inputs, like neural nets and regression models.
Lu Mao January 6, 2021 at 9:00 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-591999" title="Direct link to this comment">#</a>
Thanks Jason. May I ask a follow-up question: what is your view on whether it is wrong to scale only the input and not the output?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> January 7, 2021 at 6:16 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-592055" title="Direct link to this comment">#</a>
It depends on the data and model.
Do whatever results in the best performance for your prediction problem.
Nisarg Patel January 25, 2021 at 1:23 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-594313" title="Direct link to this comment">#</a>
Sir, I have a problem.
When normalizing a dataset, the resulting data will have a minimum value of 0 and a
maximum value of 1. However, the dataset we work with in data mining is typically a
sample of a population. Therefore, the minimum and maximum for each of the attributes
in the population are unknown.
Samples from the population may be added to the dataset over time, and the attribute
values for these new objects may then lie outside those you have seen so far. One
possibility to handle new minimum and maximum values is to periodically renormalize
the data after including the new values. Your task is to think of a normalization scheme
that does not require you to renormalize all of the data. Your normalization approach has
to fulfill all of the following requirements:
– all values (old and new) have to lie in the range between 0 and 1
– no transformation or renormalization of the old values is allowed
Describe your normalization approach.
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> January 25, 2021 at 1:31 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-594316" title="Direct link to this comment">#</a>
Perhaps you can use domain knowledge to estimate a broader min and max range prior to scaling.
Perhaps you can clip values to a pre-defined range prior to scaling.
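The second option might look like this sketch, where the fixed min/max of 0 to 100 is an assumed domain range:

```python
# Normalize against a fixed, domain-chosen range and clip new values
# into it, so old values never need to be re-normalized.
import numpy as np

DATA_MIN, DATA_MAX = 0.0, 100.0          # assumed domain range

def normalize(values):
    # clip out-of-range values, then apply the fixed min/max scaling
    clipped = np.clip(values, DATA_MIN, DATA_MAX)
    return (clipped - DATA_MIN) / (DATA_MAX - DATA_MIN)

old = normalize(np.array([10.0, 50.0]))  # always in [0, 1]
new = normalize(np.array([120.0]))       # out-of-range value clips to 1.0
```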
J March 10, 2021 at 5:43 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-600398" title="Direct link to this comment">#</a>
Hi Jason!
Thank you so much for this great post 🙂
I have one question I hope you could help with:
Why do we need to conduct 30 model runs in particular? I do understand the idea, but why exactly 30?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> March 10, 2021 at 6:28 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-600404" title="Direct link to this comment">#</a>
30 is often used to create a large enough sample that we can apply statistical methods, and so that estimated statistics like the mean and standard deviation are not too noisy.
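As a sketch, where evaluate_model() is a hypothetical stand-in for fitting and scoring a network once:

```python
# Repeat an experiment n times and summarize the sample of scores with
# mean and standard deviation to smooth out run-to-run variance.
from numpy import mean, std
from numpy.random import seed, normal

def evaluate_model():
    # a real version would fit the model and return its test error;
    # here a random draw stands in for one noisy evaluation
    return normal(loc=0.05, scale=0.01)

seed(1)
scores = [evaluate_model() for _ in range(30)]
summary = (mean(scores), std(scores))
```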
Maha March 16, 2021 at 7:14 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-601009" title="Direct link to this comment">#</a>
Thanks Jason
I have a few questions that confuse me:
Is the scaling of the input data done on the whole dataset, or on each sample of the dataset separately?
Scaling is done after dividing the data into training and test sets, yes?
If I normalize the inputs and outputs manually, should I save the max and min values so I can normalize inputs and denormalize outputs for future predictions?
If the outputs contain two different ranges of variables, is the same normalization effective, or should I do something further, for example two different normalizations?
Thanks in advance
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> March 16, 2021 at 7:59 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-601015" title="Direct link to this comment">#</a>
Data scaling, and all data pre-processing should be fit on the training set and applied to the training set, validation set and test sets in order to avoid data leakage. You can learn more about this here:
<a href="https://machinelearningmastery.com/data-preparation-without-data-leakage/" rel="nofollow ugc">https://machinelearningmastery.com/data-preparation-without-data-leakage/</a>
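A minimal sketch of the pattern with scikit-learn (synthetic data for illustration):

```python
# Fit the scaler on the training split only, then apply that same
# transform to both train and test, so no test statistics leak in.
from numpy import array
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y = array([0, 1, 0, 1, 0, 1])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=1)

scaler = StandardScaler()
scaler.fit(X_train)                  # statistics from training data only
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)  # test data uses training statistics
```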
Maha March 16, 2021 at 9:20 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-601019" title="Direct link to this comment">#</a>
Many thanks for that. I have read the article you mentioned and understood that, to avoid data leakage, I should split the data first and get the scale from the training set.
But I have another question:
My dataset contains four vectors [x1 x2 x3 x4], where each has, for example, 100 values: x1 = [value1 ... value100], x2 = [value1 ... value100], ...
So my training data may be 400 x number of samples.
But the ranges of these values vary: x1, x2 and x3 have values on the order of 1e-04, for example [-4.7338e-04 to -1.33e-04], while x4 has values on the order of 1e-02, for example [-1.33e-02 to 3.66e-02].
Similarly, some of the output values are in the range [-0.0698 to 0.06211] and others in the range [-3.1556 to 3.15556].
Sorry for the long description, but what scaling would you recommend? Would min/max normalization of the inputs and outputs be suitable, or do I need any other preparation?
Many thanks to you.
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> March 17, 2021 at 5:55 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-601133" title="Direct link to this comment">#</a>
I recommend starting with normalization. Perhaps try standardization if the variables look like they have a gaussian probability distribution.
Maha April 3, 2021 at 3:27 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-603221" title="Direct link to this comment">#</a>
Is normalization and standardization done on the whole data or on each row of samples? For example, in standardization, do we compute the mean of the whole dataset and subtract it from each element, or do we treat each row of the dataset separately and use its own mean?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> April 3, 2021 at 5:35 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-603243" title="Direct link to this comment">#</a>
No, data preparation is typically fit on the training set and then applied to both the train and test datasets.
Carlos May 2, 2021 at 11:12 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-607743" title="Direct link to this comment">#</a>
Hi Jason,
I have a question.. I hope you have time to answer it…
If I scale/normalize the input data, the output label (calculated) will also be generated scaled/normalized, correct?
And in order to calculate the output error, the expected label should be scaled as well, correct?
In other words, should I scale both the data and the labels?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> May 3, 2021 at 4:55 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-607817" title="Direct link to this comment">#</a>
Scaling input is a good idea, depending on the data and choice of model.
If the target is numeric (e.g. regression), then scaling the target is a good idea, depending on the data and choice of model.
If the target was scaled, then the scaling must be inverted on the prediction and the test data before calculating an error metric.
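A sketch of inverting target scaling before scoring, with made-up numbers standing in for model predictions:

```python
# Invert the target scaling on both predictions and test targets so
# the error metric is computed in the original units.
from numpy import array
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import MinMaxScaler

y_train = array([[100.0], [200.0], [300.0]])
scaler = MinMaxScaler()
scaler.fit(y_train)                          # fit on training targets

y_test_scaled = scaler.transform(array([[150.0], [250.0]]))
yhat_scaled = array([[0.3], [0.7]])          # pretend model predictions

# invert both back to the original units before scoring
y_test = scaler.inverse_transform(y_test_scaled)
yhat = scaler.inverse_transform(yhat_scaled)
error = mean_absolute_error(y_test, yhat)    # error in original units
```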
Israel May 6, 2021 at 10:03 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-608235" title="Direct link to this comment">#</a>
Hi Jason,
I’m new to deep learning. I tried to implement a CNN regression model with multiple input image chips of 31 channels (raster image/TIFF format) and a numeric target variable. But the result I got is quite weird because it gives 100% accuracy (r2_score). I also noticed that during training the loss/val loss output values were all zeros, and training was pretty fast considering I fed over 5000 images into the network, so I feel the network isn’t really learning anything.
I want to ask if this could be a result of data scaling? My image chip pixel values are decimals (floats) between 0 and 1 (all the image chips are less than 1), while my target variable is continuous between 0 and 160 (integer).
Do you think I need to perform some sort of normalization or standardization of my data?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> May 7, 2021 at 6:26 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-608282" title="Direct link to this comment">#</a>
Perhaps try scaling the data and see if it makes a difference.
<a href='https://acehl.org/' rel='external nofollow ugc' class='url'>JG</a> May 9, 2021 at 6:00 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-608515" title="Direct link to this comment">#</a>
Hi Jason,
Great Tutorial! Thank you very much.
Very clear explanation of why scaling inputs and outputs is necessary!
I am introducing your tutorial to a friend of mine who is very interested in following you.
regards
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> May 10, 2021 at 6:18 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-608599" title="Direct link to this comment">#</a>
Thanks!
Phil July 16, 2021 at 4:00 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-616693" title="Direct link to this comment">#</a>
Hi Jason,
I’m currently training an MLP and I have 9 metric features and 3 binary features coded as 0/1.
So I have decided to standardize only the 9 metric features and leave the binary features untouched.
Is this approach okay, or should I standardize the binary features as well, so they have a mean near zero and an sd of 1?
Cheers
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> July 16, 2021 at 5:29 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-616721" title="Direct link to this comment">#</a>
It sounds strange to me that you would standardize binary features. Often they would be excluded from any scaling operation.
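For example, with scikit-learn's ColumnTransformer you can scale only the numeric columns and pass binary columns through untouched (the column indices here are illustrative):

```python
# Standardize the numeric columns and leave the binary column as-is.
from numpy import array
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

# columns 0-1 are numeric, column 2 is binary 0/1
X = array([[10.0, 1.5, 0],
           [20.0, 2.5, 1],
           [30.0, 3.5, 1]])

ct = ColumnTransformer(
    [('num', StandardScaler(), [0, 1])],
    remainder='passthrough')         # binary column passes through
X_scaled = ct.fit_transform(X)       # scaled columns first, then binary
```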
voloddia August 2, 2021 at 1:01 pm <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-619500" title="Direct link to this comment">#</a>
“You must ensure that the scale of your output variable matches the scale of the activation function (transfer function) on the output layer of your network.”
I don’t understand this point.
First, the output layer often has no activation function, or in other words, identity activation function which has arbitrary scale.
Second, normalization and standardization are only linear transformations.
Therefore, is it true that normalization/standardization of output is almost always unnecessary? If not, why?
<a href='http://MachineLearningMastery.com' rel='external nofollow ugc' class='url'>Jason Brownlee</a> August 3, 2021 at 4:49 am <a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/#comment-619602" title="Direct link to this comment">#</a>
This was critical in the olden days of sigmoid and tanh. These days, normalizing or standardizing is sufficient.
It’s critical because large inputs cause large weights which leads to an unstable network, in general.