12 12 2016
Why not to use NODE_ENV for defining environments
Recently I’ve come across a couple of node.js projects which use NODE_ENV to define the environment in the Development, Testing, Acceptance and Production (DTAP) pipeline. At first sight this looks like a good idea, but I would advise against it.
What we often see is that npm modules treat NODE_ENV as either ‘production’ or anything else. When NODE_ENV is set to ‘production’, less is logged, the code is optimized for performance and some other things are disabled, which makes it ‘real production code’. React.js is one example of a library that does this throughout its codebase. Based on that, I see developers define NODE_ENV as ‘testing’, ‘acceptance’ and ‘production’ to get more logging in the test environments and less logging plus better performance in production. In my opinion, this is one of the things you should not do.
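To make that concrete, here is a simplified sketch of the convention such modules follow internally (the pattern, not actual React source): development-only warnings live behind a NODE_ENV check, so the only value that matters to the library is ‘production’ versus everything else.

  // Simplified sketch of the pattern many npm modules use internally.
  // The only distinction the library cares about is 'production' vs anything else.
  function warnIf(condition, message) {
    if (process.env.NODE_ENV !== 'production') {
      // Verbose checks and warnings exist only outside production;
      // bundlers and minifiers can drop this whole branch in production builds.
      if (condition) {
        console.warn('Warning: ' + message);
      }
    }
  }

  module.exports = { warnIf };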
When code moves through the DTAP pipeline you want it to be as similar as possible at every stage. It is not without reason that there are ‘testing’ and ‘acceptance’ stages besides ‘production’. By branching inside the code and splitting it into ‘development|testing|acceptance’ code and ‘production’ code, you can no longer guarantee that the code which runs in the DTA environments will behave the same way in ‘production’. Because of these subtle differences, bugs can pop up in places where you don’t expect them.
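As a made-up illustration of such a subtle difference: imagine a module that only caches results when NODE_ENV is ‘production’. The caching code path is never exercised in the D, T and A stages, so a bug in it shows up for the first time in production.

  // Hypothetical module with a production-only code path.
  // D, T and A run with NODE_ENV !== 'production', so the cache branch
  // is never tested there – a bug in it surfaces only in production.
  const cache = new Map();

  function getUser(id, loadFromDb) {
    if (process.env.NODE_ENV === 'production' && cache.has(id)) {
      return cache.get(id);
    }
    const user = loadFromDb(id);
    if (process.env.NODE_ENV === 'production') {
      cache.set(id, user);
    }
    return user;
  }

  module.exports = { getUser };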
The second reason is the extra logging you get in non-‘production’ mode. You might say – that’s exactly what I want in my DTA environments – but I would argue against it. By making an explicit distinction between DTA and P you split your release/debug process into two different threads: debugging production code and debugging verbosely logged code. If debugging production code is not part of your daily workflow, you will probably be stuck much longer when things go wrong there. People mostly learn by doing something regularly, so how could we learn to trace and debug ‘production’ code if that never happens in the daily workflow? Also, don’t forget that a production bug must be solved MUCH faster than any other bug, which makes it even more important to learn this early in the development process.
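For clarity, this is the kind of NODE_ENV-driven log configuration I am arguing against (a made-up sketch, not code from a real project):

  // Anti-pattern: NODE_ENV doubles as a DTAP stage selector for log verbosity.
  // Production becomes the one place with the least information,
  // and the one place where you never practise debugging.
  const logLevels = {
    development: 'debug',
    testing: 'debug',
    acceptance: 'info',
    production: 'error'
  };

  const logLevel = logLevels[process.env.NODE_ENV] || 'debug';
  console.log('Using log level:', logLevel);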
The last reason not to use NODE_ENV for defining environments applies only to isomorphic apps, but that doesn’t make it less important to me. If you stick to NODE_ENV, you will also have to use it in the client code. Be honest: if (NODE_ENV === 'acceptance') looks weird in the client, doesn’t it? There is no Node in the browser, so it makes no sense there.
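The way NODE_ENV usually ends up in the browser at all is through the bundler. Assuming a webpack build (other bundlers have equivalents), something like webpack.DefinePlugin replaces process.env.NODE_ENV with a string literal at build time, so a server-side variable suddenly drives client behaviour:

  // webpack.config.js – sketch of how NODE_ENV typically leaks into client code.
  const webpack = require('webpack');

  module.exports = {
    entry: './src/client.js',
    plugins: [
      // Replaces every process.env.NODE_ENV in the client bundle
      // with the literal value it had on the build machine.
      new webpack.DefinePlugin({
        'process.env.NODE_ENV': JSON.stringify(process.env.NODE_ENV)
      })
    ]
  };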
Here is my rule of thumb. First of all, keep your code as similar as possible across all environments and keep NODE_ENV always set to ‘production’. Second, if you do have to differentiate, introduce a new variable for your environment, such as APP_ENV or CODE_ENV – you name it. For example, we used APP_ENV for defining our environments, because we used a shared log DB and needed a way to know which environment each entry came from.
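A minimal sketch of that split (the config.js module and the log-tag shape are illustrative, not our actual code): NODE_ENV stays ‘production’ everywhere except local development, while APP_ENV names the DTAP stage and is the only variable our own code branches on.

  // config.js – NODE_ENV is left to the dependencies, APP_ENV names the stage.
  const APP_ENV = process.env.APP_ENV || 'development';

  module.exports = {
    appEnv: APP_ENV,
    // e.g. tag entries in the shared log DB with the stage they came from
    logTags: { environment: APP_ENV }
  };

Starting the acceptance deployment then looks like NODE_ENV=production APP_ENV=acceptance node server.js, and every npm dependency still believes it is running in production.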
I recently experienced this exact thing. We committed a change that moved a dependency into devDependencies. We deployed to the “staging” environment, which had NODE_ENV set to staging, and everything worked fine, because npm installs devDependencies when NODE_ENV is anything other than production. We then released to production, which had NODE_ENV set to production, and our app broke due to the missing dependency: with NODE_ENV=production, npm skips devDependencies during install.
Always remember to decouple NODE_ENV from your application’s specific environment. Set NODE_ENV to development on local machines and to production in all "online" stages.