24 11 2016
by Maks Nemisj | javascript |
Today JSON is widely used in different corners of software development. It's used as a data format, as configuration, or even as an in-memory database. At my current company we also use it as a configuration format. The more I use it, the more I have the feeling that it's "a bit" inconsistent and "raw". You would expect that it would be enough to do JSON.parse(str) and all problems would be solved, but that's not true. Read further to find out the real truth.
json-safe-parse
The first weird thing is the fact that JSON allows you to override properties inherited from the native Object prototype. I will not go deeper into this topic, but thanks to the json-safe-parse library, we can sleep well and not think about this problem anymore. If you want to read more about it, read the library's documentation.
Next.
Objects
You know what JSON stands for, right? It is "JavaScript Object Notation", which, in my opinion, means that JSON should represent an object, like {}, [] or null. At least this is what I thought until a couple of days ago, when I found out the truth. It appears that arrays, objects and null are not the only values which can be parsed as JSON; there is something else. Let's have a look at the following code snippet and think about what it would give us:
const zero = JSON.parse('0');
const truth = JSON.parse('true');
When executing this code I found out that the zero variable will be a real 0 number and the truth variable will be a real boolean true. This leads to the following statement: numbers and booleans are also part of the JSON spec. Which makes sense, since in JavaScript everything is an object, right? But it's not quite like that for JSON.parse. At the same time, an empty string – "" – is NOT valid JSON: JSON.parse('') will happily throw an Error, and that is something that brings BIG confusion into my head!!! Why? Because now, to parse a JSON configuration, I need to think in "special cases". Let's have a look.
We use JSON as a configuration object and store it as a string in the DB. Imagine now that someone puts 'true' into the field where the JSON is stored. The code which parses that JSON will not throw any error, since JSON.parse("true") is perfectly valid. Though later on, somewhere else in the code, it could throw an error, since it's a different type. Imagine now that a 'null' value ends up in the DB. In that case JSON.parse("null") could lead to errors in even more places, e.g. when using Object.keys(json). It's not possible to enumerate a null value, so it will just break. But that's not all. Don't forget about "empty" strings which are not really empty – like " " – which will pass a logical check such as if (str !== '').
All this brought me to the following snippet, which I now use when I want to parse JSON into a configuration object:
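Something along these lines (a minimal sketch; the function name and the decision to return an empty object for empty input are mine):
function parseConfiguration(str) {
  // Empty or whitespace-only strings mean "no configuration", not misconfiguration
  if (typeof str !== 'string' || str.trim() === '') {
    return {};
  }

  // you could swap in json-safe-parse here (see above)
  const json = JSON.parse(str);

  // 'true', '0', '"text"' and 'null' parse just fine, but they are not configuration objects
  if (typeof json !== 'object' || json === null) {
    throw new Error('Configuration must be a JSON object');
  }

  return json;
}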
First of all, it’s always checking that the parsed json is an object
object. Secondly it dismisses empty strings or null values, since it’s not misconfiguration, but merely an emptiness of configuration. As last it trims empty strings, since it’s possible to have value like " "
, which is just should be ignored.
configuration, javascript, json, parse
23 11 2016
by Maks Nemisj | javascript |
It’s going to be a short one, but powerful.
Do you remember that I previously wrote about why getters/setters are a bad idea in JavaScript? I haven't changed my mind, I still think so, but now I have found one valid place where I can and DO want to use them. You will never guess. (just kidding)
Unit tests. Nowadays I write unit tests and use getters for testing my code. It appears we have a lot of these else/if statements where boolean values are checked, something like this:
function doSomething(options) {
  if (!options.hasZork) {
    return;
  } else if (options.hasBork) {
    return;
  }
}
And this is exactly the place where I can now use getters to test whether hasZork has been checked or not. It helps me to protect my API and ensure that all these logical branches are tested:
const sinon = require('sinon');

// hasZork must return a truthy value, otherwise the hasBork branch is never reached
const hasZork = sinon.spy(() => true);
const hasBork = sinon.spy(() => true);

const options = {
  get hasZork() { return hasZork(); },
  get hasBork() { return hasBork(); }
};
doSomething(options);
// assert that both hasZork and hasBork have been called
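For completeness, the assertion itself could use sinon's assert helpers, for example:
sinon.assert.called(hasZork);
sinon.assert.called(hasBork);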
I promised you, it will be a short one. The End!
6 09 2016
by Maks Nemisj | javascript |
Do you know what the result of the following code would be when executed in node.js without Babel or any other transpiling?
const s = 'Maks';

for (var i = 0; i < s.length; i++) {
  const ch = s.charAt(i);
  console.log('ch:' + ch);
}
example.js
Do you know the result? Think hard …
And one more time
Well, it appears that it depends on the version of node.js and (guess what) on the use strict directive.
Without use strict the result varies:
node: v4.4.7 - MMMM
node: v5.12.0 - MMMM
node: v6.3.1 - Maks
Though, if you prepend use strict at the top of the file and run it again, all node versions return the same result:
node: v4.4.7 - Maks
node: v5.12.0 - Maks
node: v6.3.1 - Maks
I thought that JavaScript differences existed only between browsers, but it appears that's not the case. The reason is that older versions of V8 treat const in sloppy mode with the legacy, function-scoped semantics: re-assignments (including the re-declaration in each loop iteration) are silently ignored, so ch keeps its first value "M". With use strict, const gets the ES2015 block-scoped, per-iteration behaviour.
https://gist.github.com/nemisj/d3250dd8e1dbe1a1cbaa3ba8481c38ce
execution, javascript, node.js
27 06 2016
by Maks Nemisj | javascript |
When preparing an application for deployment to a production environment, I want to ensure that everything is properly logged, especially things which are unexpected in the code. That's why, I think, proper error handling is crucial to any application.
This is a short article on how to do error handling when going live with an isomorphic web app using React.js, Express.js and Fluxible. Probably after seeing the word Fluxible you might think: "I don't use Fluxible, this is irrelevant to me". Still, hold on, since most of the points can be applied to any isomorphic SPA based on Express and React.
Step 1: Rendering
The first step is to prevent the initial rendering from breaking. It is the place in the code where the render() method is called on react-dom and renderToStaticMarkup() on react-dom/server.
Example code for the browser:
import ReactDOM from 'react-dom';

try {
  ReactDOM.render();
} catch (e) {
  console.error(e);
}
and one for the server:
import ReactServer from 'react-dom/server';

try {
  ReactServer.renderToStaticMarkup();
} catch (e) {
  console.error(e);
}
In case you use promises in your code base, there is no need to put try/catch statements around these methods. Instead, use the catch() function. The code below will clarify it:
import ReactServer from 'react-dom/server';

(...some code before)
  .then(() => {
    ReactServer.renderToStaticMarkup();
  })
  .catch((e) => {
    console.error(e);
  });
Step 2: Express
After the rendering of React is fixed, there are other things which might go wrong on the server. For example, things might break before the rendering. If you use Express.js, you can catch them using a special error-handling middleware: http://expressjs.com/en/guide/error-handling.html
This middleware should be placed after all the other middlewares:
import express from 'express';

const server = express();

server.use((req, res) => {
  // some rendering code
});

server.use((req, res) => {
  // some other handler
});

//// error middleware is HERE:
//
server.use((err, req, res, next) => {
  console.error(err);
});
//
////
As you can see, this middleware expects to receive 4 arguments: the first one is the err object and all the others are the same as in a normal middleware.
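Note that the handler above only logs. In a real application you will most likely also want to finish the response, otherwise the request hangs; a possible sketch:
server.use((err, req, res, next) => {
  console.error(err);
  res.status(500).send('Internal Server Error');
});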
Step 3: Global error handler
Besides the specific error handlers, there are also two global places for intercepting errors which can be used:
Step 3.1: Node.js
Node.js has a global error hook to catch errors inside asynchronous flows. Such errors might occur when you come back from I/O inside a callback and the code is not wrapped in try {} catch () {}. Example:
import superagent from 'superagent';

export default (url) => {
  return new Promise((resolve, reject) => {
    superagent
      .get(url)
      .end((err, result) => {
        // at this place, if an error occurs, the global hook can help
        return resolve(result);
      });
  });
};
To set up the global error hook, use the uncaughtException event:
process.on('uncaughtException', (err) => {
  console.log(err);
});
A lot of people advise against using this hook, but I propose doing it if you use different logging than console.error. At least you can catch the error using your logger and then terminate the process:
process.on('uncaughtException', (err) => {
  logger.error(err);
  process.exit(1);
});
If you use Promises in your code base, or maybe some of your dependencies use them, there is another event available: unhandledRejection. This one will catch rejected promises which have no .catch() handler attached. Example:
(some code)
  .then(() => {
    // at this place, if an error occurs, unhandledRejection might help
  });
Here is the hook to use:
process.on('unhandledRejection', (reason, p) => {
  console.error(reason);
});
A small note for those who use the source-map-support npm package: in order to make the uncaughtException hook work, you have to disable the module's own exception handling in its configuration:
require("source-map-support").install({
handleUncaughtExceptions: false
});
Step 3.2: Browser
When code is running inside the browser, there is another way to catch unhandled errors. Such errors might occur not only while fetching data, but also, for example, inside browser events like mouse clicks, key presses and scrolls. To set up the error handler, use window.onerror:
window.onerror = function onError(message, source, line, col, err) {
  console.error(err || new Error(message));
};
Be careful with the non-production build of React. It appears that React intercepts unhandled errors with ReactErrorUtils and will give you Script error instead of a meaningful error. When you build React for production, all will be fine.
Step 4: Fluxible
Fluxible has its own way of handling errors. Whenever you use executeAction, errors will be caught by Fluxible itself, which means they won't appear in any of the above places. In case you want to get the error and do something with it, use componentActionErrorHandler when constructing the Fluxible instance:
new Fluxible({
  componentActionErrorHandler(context, fluxibleError) {
    // fluxibleError has an err property inside, which is the native Error
    console.error(fluxibleError.err);
  }
});
Step 4.1: Services
It’s not a separate hook, but a friendly reminder. Do something with your errors inside the services, when fetching data. I have noticed that it is one of the points where people forget to do the error handling.
Step 5
Whenever you use a framework or library, don't hesitate to look into its documentation; maybe it has its own way of handling errors. Please do not leave your errors unattended, like luggage at the airport.
error, express.js, expressjs, fluxible, isomorphic, javascript, node.js, nodejs, react.js, reactjs
28 04 2016
by Maks Nemisj | javascript |
If you've decided to move React components to ES6/ES2015 syntax, you might have found out that defining propTypes and contextTypes is not as seamless as it was. Babel@6.7.7 doesn't yet support static properties on classes, and the most obvious way to use propTypes is to append them to the class at the end:
class SomeComponent extends React.Component {
  render() {
  }
}

SomeComponent.propTypes = {
  text: React.PropTypes.string
};
Though there is a little trick to do it inline in the class. Thankfully, babel@6.7.7 supports static getters and setters, which we can use for that:
class SomeComponent extends React.Component {
  static get propTypes() {
    return {
      text: React.PropTypes.string
    };
  }

  render() {
  }
}
The same applies to contextTypes.
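For example (the router key here is just an illustration, not something your component necessarily needs):
class SomeComponent extends React.Component {
  static get contextTypes() {
    return {
      router: React.PropTypes.object
    };
  }
  // ...
}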
Now you can choose which method to use 🙂
es2015, es6, javascript, react.js, reactjs
26 04 2016
by Maks Nemisj | javascript |
If you have moved to Ubuntu 16.04, you may find out that your old ViM setup is not working – some plugins are broken. This is due to the change of the Python interpreter for ViM ( https://wiki.ubuntu.com/XenialXerus/ReleaseNotes#VIM_defaults_to_python3 )
To fix this you have to use a different vim package, like vim-gnome-py2. If you're like me and use the ncurses version of vim, you're better off with the vim-nox-py2 package.
sudo apt install vim-nox-py2
sudo update-alternatives --set vim /usr/bin/vim.nox-py2
sudo update-alternatives --set vi /usr/bin/vim.nox-py2
That should fix broken plugins.
plugins, vim, vimscript
4 01 2016
by Maks Nemisj | javascript |
I still have to get used to these new arrow functions and the implicit return statement. If you're unfamiliar with them, here is the doc – https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions
Look at these two 'almost' identical pieces of code and think: what is the difference between them?
code-one.js
function run(context) {
return methodOne()
.then(() => doIt(context) );
}
and this one
code-two.js
function run(context) {
return methodOne()
.then(() => { doIt(context); });
}
They look the same, except that one will work correctly and the other one won't. Whenever an arrow function has curly braces, it expects statements in the body; whenever there are no curly braces, it treats the body as an expression and applies an implicit return to it. So the first version returns the promise from doIt(context) and the chain waits for it, while the second returns undefined and the result of doIt(context) is silently dropped.
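A stripped-down illustration of the rule (nothing project-specific here):
const implicit = () => 42;      // expression body: implicitly returns 42
const block = () => { 42; };    // block body: no return statement, so returns undefined

implicit(); // 42
block();    // undefined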
It's quite easy to overlook this when doing a code review or writing code at two o'clock in the morning. That's why I suggest not writing implicit code like this. Stick to the explicit return statement and you're safe. I know it's longer to write, but remember: "Time saved by less typing is not comparable to the time spent debugging this code."
You’ve been warned.
ecma2015, functions
6 10 2015
by Maks Nemisj | javascript |
UPDATE (15 May 2020): I see a lot of comments regarding TypeScript and that there is no issue with setters/getters while using static typing. Of course, you can safely use getters/setters in an environment which guarantees static type checks, but this article is about vanilla JavaScript. It is my opinion on why I think this feature shouldn't have arrived in vanilla JavaScript in the first place.
As you know, getters and setters have already been part of JavaScript for some time. They're widely supported in all major browsers, even starting at IE8.
I don't think this concept is wrong in general, but I do think it's not very well suited for JavaScript. It might look like getters and setters save time and simplify your code, but actually they bring hidden errors which are not obvious at first glance.
How do getters and setters work
First, a small recap of what these things are:
Sometimes it is desirable to allow access to a property that returns a dynamically computed value, or you may want to reflect the status of an internal variable without requiring the use of explicit method calls.
To illustrate how they work, let's look at a person object which has two properties, firstName and lastName, and one computed value, fullName.
var person = {
  firstName: "Maks",
  lastName: "Nemisj"
};
The computed value fullName would return a concatenation of both firstName and lastName.
Object.defineProperty(person, 'fullName', {
  get: function () {
    return this.firstName + ' ' + this.lastName;
  }
});
To get the computed value of fullName, there is no more need for awful braces like person.fullName(); a simple var fullName = person.fullName can be used.
The same applies to setters; you can set a value through a function as well:
Object.defineProperty(person, 'fullName', {
  set: function (value) {
    var names = value.split(' ');
    this.firstName = names[0];
    this.lastName = names[1];
  }
});
Usage is just as simple as with the getter: person.fullName = 'Boris Gorbachev'. This will call the function defined above and split 'Boris Gorbachev' into firstName and lastName.
Where is the problem
You may be thinking: "Hey, I like setters and getters, they feel more natural, just like JSON." You're right, they do, but let's step back for a moment and look at how fullName would have worked before getters and setters.
For getting a value we would use something like getFullName(), and for setting a value person.setFullName('Maks Nemisj') would be used.
And what would happen if the name of the function were misspelled and person.getFullName() were written as person.getFulName()?
JavaScript would give an error:
person.getFulName();
^
TypeError: undefined is not a function
This error is triggered at the right place and at the right moment. Accessing a non-existing function of an object will trigger an error – that's good.
Now let's see what happens when the setter is used with the wrong name:
person.fulName = 'Boris Gorbachev';
Nothing. Objects are extensible and can have dynamically assigned keys and values, so no error will be thrown at runtime.
Such behavior means that errors might become visible somewhere in the user interface, or maybe when some operation is performed on the wrong value, but not at the moment when the real typo occurred. Tracing errors which originated in the past but only show up later in the code flow is "so fun".
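To make it concrete, using the person object from above:
person.fulName = 'Boris Gorbachev'; // typo: no error at all
console.log(person.fullName);       // still 'Maks Nemisj'
console.log(person.fulName);        // 'Boris Gorbachev' – a brand new key on the object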
Seal to the rescue
This problem could be partially solved by the Object.seal API. Whenever an object is sealed, no new keys can be added to it, which means that assigning fulName would try to add a new key to the person object and should fail.
For some reason, when I was testing this in node.js v4.0, it didn't work the way I was expecting, so I have my doubts about this solution.
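For reference, the idea would look something like this (a sketch; note that the TypeError is only thrown in strict mode, in sloppy mode the assignment fails silently, which may well explain the result above):
'use strict';

var person = { firstName: 'Maks', lastName: 'Nemisj' };
Object.seal(person);

person.fulName = 'Boris Gorbachev';
// TypeError: Cannot add property fulName, object is not extensible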
What is even more frustrating is that there is no solution for getters at all. As I already mentioned, objects are extensible and fail-safe, which means accessing a non-existing key will not result in any error at all.
I wouldn't have bothered writing this article if this situation only applied to object literals, but after the rise of ECMAScript 2015 (ES6) and the ability to define getters and setters within classes, I've decided to blog about the possible pitfalls.
Classes to the masses
I know that currently classes are not very welcome in some JavaScript communities. People are arguing about the need for them in a functional/prototype-based language like JavaScript. However, the fact is that classes are in the ECMAScript 2015 (ES6) spec and are going to stay there for a while.
For me, classes are a way to specify well-defined APIs between the outside world (the consumers) of the classes and the internals of the application. It is an abstraction which puts rules down in black and white and assumes that these rules are not going to change any time soon.
Time to improve the person object and make a real class of it (as real as a class can be in JavaScript). Person defines the interface for getting and setting fullName.
class Person {
  constructor(firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  }

  getFullName() {
    return this.firstName + ' ' + this.lastName;
  }

  setFullName(value) {
    var names = value.split(' ');
    this.firstName = names[0];
    this.lastName = names[1];
  }
}
Classes define a strict interface description, but getters and setters make it less strict than it should be. We're already used to swallowed errors when typos occur in keys while working with object literals and with JSON. I was at least hoping that classes would be stricter and provide better feedback to developers in that sense.
Though the situation is no different when defining getters and setters on a class: it will not stop others from making typos, and it gives no feedback.
class Person {
  constructor(firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  }

  get fullName() {
    return this.firstName + ' ' + this.lastName;
  }

  set fullName(value) {
    var names = value.split(' ');
    this.firstName = names[0];
    this.lastName = names[1];
  }
}
Executing it with a typo won't give any error:
var person = new Person('Maks', 'Nemisj');
console.log(person.fulName);
The same non-strict, non-verbose, non-traceable behavior leading to possible errors.
After I discovered this, my question was: is there anything that can be done to make classes stricter when using getters and setters? I found out: sure there is, but is it worth it? Adding an extra layer of complexity to the code just to use fewer braces? It is also possible simply not to use getters and setters for API definitions, which would solve the issue. But if you're a hardcore developer and willing to proceed, there is another solution, described below.
Proxy to the rescue?
Besides setters and getters, ECMAScript 2015 (ES6) also comes with the Proxy object. Proxies let you define delegator methods (traps) which can be used to perform various actions before the real access to the key happens. Actually, they look like dynamic getters/setters.
Proxy objects can be used to trap any access to an instance of the class and throw an error if a pre-defined getter or setter was not found in that class.
In order to do this, two actions must be performed:
- Create a list of getters and setters based on the Person prototype.
- Create a Proxy object which will test against these lists.
Let’s implement it.
First, to find out what kind of getters and setters are available on the class Person, it's possible to use getOwnPropertyNames and getOwnPropertyDescriptor:
var names = Object.getOwnPropertyNames(Person.prototype);

var getters = names.filter((name) => {
  var result = Object.getOwnPropertyDescriptor(Person.prototype, name);
  return !!result.get;
});

var setters = names.filter((name) => {
  var result = Object.getOwnPropertyDescriptor(Person.prototype, name);
  return !!result.set;
});
After that, create a Proxy object whose handler tests accesses against these lists:
var handler = {
  get(target, name) {
    if (getters.indexOf(name) != -1) {
      return target[name];
    }
    throw new Error('Getter "' + name + '" not found in "Person"');
  },
  set(target, name, value) {
    if (setters.indexOf(name) != -1) {
      target[name] = value;
      return true;
    }
    throw new Error('Setter "' + name + '" not found in "Person"');
  }
};
person = new Proxy(person, handler);
Now, whenever you try to access person.fulName, the message Error: Getter "fulName" not found in "Person" will be shown.
I hope this article helped you to understand the whole picture about getters and setters, and the danger they can bring into the code.
classes, getters, javascript, setters
10 09 2015
by Maks Nemisj | javascript |
Currently I'm working on a project which uses GitHub. We work with feature branches, which means every feature gets its own branch and every branch has to go through a pull request before being merged back to the main line. Whenever a branch is merged back, it gets deleted on GitHub with the "Delete branch" button. GitHub allows you to restore branches, so we try to keep the branch list as short as possible.
I work on the command line, and after a while of this workflow the output of git branch becomes bigger and bigger while git branch -r remains small. Sure, it's possible to do git branch -d {branch_name}, but after a while I will again have the full list.
To get rid of this manual routine I've created a script which cleans up all the branches that are available locally but have already been removed from the remote. It is a Python script and works only with Python 2.7: https://raw.githubusercontent.com/nemisj/git-removed-branches/master/git-removed-branches.py
By default this script will not remove any local branches, but will only list the branches to be removed. To actually perform the deletion, the --prune flag must be specified.
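For example (assuming the script is saved locally and run with Python 2.7):
$ python2.7 git-removed-branches.py          # only lists the stale local branches
$ python2.7 git-removed-branches.py --prune  # actually deletes them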
One more trick: if you place this script in a directory listed in your $PATH, then you will be able to run it as a git command:
git removed-branches
I found this trick at http://thediscoblog.com/blog/2014/03/29/custom-git-commands-in-3-steps/
NPM
If you prefer node.js and npm, you can install the git-removed-branches version of the script via npm:
$ npm install -g git-removed-branches
JS source is available at github: https://github.com/nemisj/git-removed-branches
bash, git, github, python, scripts
24 07 2015
by Maks Nemisj | javascript |
Have you ever needed to repeat a string or character multiple times? Sometimes I have this need (don't ask why) and it was always annoying to do. For such a simple operation, you have to write a for loop and concatenate strings. I know, there is now a repeat method available in JavaScript, but only starting with ES6 (ES2015), which is not available everywhere.
var repeat = function (times, str) {
  var result = '';
  for (var i = 0; i < times; i++) {
    result += str;
  }
  return result;
};
But today, after thinking for a while, I've found a much cleaner and faster way to do it. Bit-shifting operations come to the rescue. Look at that beauty:
var repeat = function (times, str) {
  return (1 << (times - 1)).toString(2).replace(/./g, str);
};
Doesn't it look cute? Let's see how this works.
There are two ways of repeating a string: it can be built up using a for loop, or it can be created using a replacement method.
The first one we all know, but the second one is the foundation of this trick.
We can take the string 'string' and replace every character of it with any other character or string:
'string'.replace(/./g, 'new string');
This will repeat our 'new string' 6 times, because 'string' has 6 characters. In order to make the repetition adjustable, we have to generate an input string with exactly as many characters as the number of repetitions we want.
To create such an input string, I've decided to use a bit-shifting operation and represent the bits as characters. I won't describe the idea behind bit shifting in detail, but in short it looks like this.
Since any number consists of bits, it is possible to do bit manipulation on it. For example, the number 2 is represented in bits as 00000010. If we apply the bit-shift operator << to this number, all the bits are shifted 'n' positions to the left. So shifting with 2 << 5 will make 01000000. To see these bits, JavaScript provides the toString method with a radix argument: if you pass 2 to it, it will represent the number in binary format.
(2).toString(2) // will be 10
(2 << 5).toString(2) // will be 1000000
Now that we know how bit shifting works and how to represent the result as a string, we can easily create a string with 'n' characters:
(1 << (n - 1)).toString(2);
That said, I have to note that bit shifting in JavaScript only works up to 31 bits, after which the bits overflow. That's the reason why it's only possible to repeat a string up to 31 times using this approach:
(1 << 32).toString(2); // will be 1 again
All that is left is to replace every character of this string with the string we need:
var repeat = function (times, str) {
  return (1 << (times - 1)).toString(2).replace(/./g, str);
};
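A quick sanity check of the one-liner:
repeat(3, 'ab'); // (1 << 2) is 4 -> '100' -> 'ababab'
repeat(5, '-');  // '-----'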
bit shifting, experiment, javascript