IBM Champion!

IBM announced the nominations for IBM Champions 2017 and I’m very happy to say that I was nominated.

It’s wonderful to have this sort of recognition, and suddenly being part of a group who have made huge contributions to the IBM ICS Community over the years is humbling.

I strongly suspect that my nomination is linked to my co-organising the Swiss Notes User Group, so I’d like to mention at this point my co-conspirators, without whose work SNoUG wouldn’t happen:

Diana Leuenberger, Susanne Schugg and Helmut Sproll

And also many thanks to those that have contributed behind the scenes to make SNoUG a success – I think in particular here of Paul Harrison, with his wonderful AgendaWALL and AgendaPAD, as well as the very talented LDC Via Group, who help me whenever I’m stuck.

I’m also proud of strengthening the Swiss IBM Champions group – showing that plucky little Switzerland can pack a good punch!

From 3000 milliseconds to 200 to open a page: Speed up your Notes Client applications by discovering their bottlenecks.

I had inherited a template for a database which was taking 3 seconds to open any sort of document. It soon got to be irritating, and I noticed that the little lightning bolt at the bottom left of the Notes client was flashing wildly, so I fired up the Notes RPC Parser to try and see what all the traffic was about.

The culprit was soon found: it was a subform containing buttons whose label was dynamically computed depending on the user’s desired language. And, unfortunately, it was making an @DbLookup for every single button, and an uncached one at that. If you look at the screenshot, you’ll see that an @DbLookup generates 3 NRPC Calls, for a total of 50ms – and this was done 30 times each time a document was opened.

That, and another issue that I corrected, meant that I could massively improve the performance of the application for only about half a day’s worth of work.


Feel free to use the tool, or if you can’t be bothered, hire me to solve your Notes Client performance issues.

P.S. I’ve just released a new version of the Notes RPC Parser tool, which you can download here:

Why do Software Developers always squawk when inheriting a foreign codebase?

The first thing that Software Engineers do when they inherit a codebase that they haven’t written is to squawk loudly.

This is terrible! That’s not the way to do it! We need to re-write this completely!

I recently had an experience when a project manager suddenly lost his developer and asked me to take over. ‘Here is the list of requirements from the customer, they’re just small changes’.

I said ‘OK, I’ll give this a look, doesn’t look too difficult‘. And then I started looking into the code, and I started grumbling more and more. There were no comments. No source control. No test code. The code I had in front of me was not working correctly, and I had no access to a codebase which was working. And within the code, there were so many times when I was thinking ‘Oh no, why would you do that? I’d really do it differently’.

So I promptly squawked. And the project manager expressed his incredulity. ‘It’s always the same with developers! They always complain of code that they haven’t written themselves!’.

So, here’s a little guide to explain why this happens.

We’re not masons.

That’s the crux. The job of building walls is well defined, and if a wall is not complete, you can hire another mason to finish the job. He’ll come to the site, the tools he sees will be familiar, and he can start right where the previous person stopped. Easy.

That’s not how it works with software. There is a wide range of ways you can solve something – all the wider since we can basically write our own tools. Another person’s building site is going to be completely unfamiliar.

So here are my insights for non-developers, in the hope that you will get over the impression that we are whiners, and understand that software development is not masonry.

Software is easier to write than to read.

After about six months, even the code you have written yourself becomes more difficult to read. A good developer will be writing his code not just well enough for the machine to understand (bad developers stop at that point), but well enough for humans to understand.

This is a pretty accurate picture of what it’s like to read code from somebody else:

Software can be really badly written, and you can’t see it from outside.

This rejoins the points I made in a previous post. Here we’re talking about maintainability, the ease with which a particular codebase can be modified. Here are some best practices which you should insist that your developers follow:

  • One is to separate the responsibility of code into little independent boxes, which each do only one specific thing. If you need to change something, there is only one place to look. This is called modularity and follows the concept of separation of concerns.
  • Another one is having Source Control. This shows all the changes that have been done on the codebase over time, including who did it, and enables us to revert code to what it was in the past (when a bug has been inserted, for instance.)
  • Automated tests make every small bit of the code run through its logic, often by trying to make it break. There is one thing that we software developers are scared of, and which happens a lot: we change a bit of code here and then notice that loads of other bits of code are no longer working. Automated tests enable you to be able to make bold changes because you immediately get feedback on whether you’ve broken something.
  • Finally, documentation really helps. And please note that I’m not talking about some separate Word Document that was once done for the client, I’m talking about comments explaining why things were done in a particular way embedded within the code.

If a software developer inherits a codebase which didn’t follow these guidelines, then that code is going to be a pain to maintain and to change. And as a developer, it’s a large source of frustration because we are in effect far slower at delivering changes than our predecessor, and our customer is not going to be impressed.

So, if you are a customer, request that these basics are followed, or else you’ve got a bad case of developer lock-in, where Bob is the only person who can maintain the code.

In conclusion:

If you are a customer and switch developers or providers for a project, you’ll have to factor in some budget and time just for the new developer to get acquainted with the project. You’ll also see that it will initially take longer to effect changes, depending on how well and how maintainably the software was written in the first place.


Software is an iceberg. You’ll get hurt by the submerged parts.

Software Quality is an iceberg.

The tip of the iceberg is the small part sticking out that everybody can see. A non-IT person judges software with a far smaller subset of criteria than an IT professional: Functionality, performance, and usability.

To a normal user, the interface is the software. I regularly have to choke off exasperated comments when a customer tells me ‘but it’s only one extra button‘.

Lurking beneath the murky waters are those criteria which only IT professionals see: reliability, maintainability, security.

End-users should not be expected to see these ‘underlying’ aspects, but if the decision-makers (i.e. the ones with the purse strings) don’t see under the water, you’re doomed to produce bad software. Tell-tale signs are:

  • There is no budget for IT maintenance
  • The only time fresh money is given is for new functionalities – money is never given for a functionality-free maintenance
  • Projects get stopped as soon as the software is barely out of beta testing. Finally, as a programmer, you’ve produced software that works, but in your mind, this is just a first version. There are lots of bits and bobs of useless code left in your software, debug code that writes to system.out, the object and data structure is a mess and could be optimised, and you’ve yet to apply a profiler to your code to see where the bottlenecks are. And then the project gets stopped: ‘it works well enough’. That is very upsetting, and possibly one of the main reasons that in-house IT jobs are far less attractive than IT jobs within software companies.

If you recognise your company here, then you should give more power to the IT department. Too many companies treat IT as a necessary, needlessly costly evil. IT should be an essential, central part of your company strategy. There is no other area of a company’s activity which gives you more of a competitive edge.

Too many companies (and I am looking at banks and insurance companies in particular) have failed to see that they are, essentially, software companies. You need to have a CIO who understands his trade, and accept that he is going to spend money on stuff which doesn’t make sense to you, or at least doesn’t seem to change anything.

If you are only ready to pay for the IT stuff which is visible – more functionality, for instance, but no money for a refactoring exercise which would add stability but not change the functionality, the accumulated technical debt is going to bite you at some point. It might be that an application gets exponentially difficult to change, or it might be that you have a massive data loss that you can’t recover from because the backup software was not up-to-date and nobody did an actual restore as part of regular maintenance.

If you are a developer, and you find yourself in such an environment, you’ll have to go guerilla. You’ll need to surreptitiously improve your code whenever you have the opportunity. Try to leave any code you touch a little bit better than before. Notice duplications and remove them. If you didn’t understand part of the code, then document it after you have understood it. If variable names are unclear, then rename them to something better. It’s a thankless occupation, because there is the risk that you break things that were working, and nobody will really care that you reduced the codebase by half, but it will help you progress and it will increase the quality of your work.

IT is surprising in that often the perception of quality is completely at odds with real quality, because of this iceberg effect. I once used to work in an environment where colleagues were doing terrible things – changing code directly in the production environment, and only doing quick-and-dirty cosmetic patches, but were actually perceived by the customers as delivering excellent quality, because the response times were so fast.

getting git

Git is wonderful. In the same way that Unix is an ‘operating system done right’, git is ‘source control done right’.

It has a surprising history, being basically the creation of Linus Torvalds, who decided that all the available source control solutions were useless, and so just wrote his own. That is fabulously cool.

This is a small article to introduce you to git.

Learning Curve

There definitely is a learning curve with git. At least that was the case for me.

For starters, I had to throw away some concepts which I thought were set in stone. Git forces you to think about code in a different way. It turns out that that way of thinking has large advantages.

The single biggest mental stumbling block for me was that I always considered the real code, the precious bit, to be the one I’m currently working on, and the code that is in source control system to be a backup. A byproduct, so to speak. You need to completely overturn this with git.

The real stuff, the precious stuff, is what is in your repository. What you are currently working with, your working directory, that’s something like a scrapbook. You’re trying things out, making changes here and there, but it’s all very temporary and it’s not where the real stuff is. It shouldn’t be frightening to overwrite your working directory, for instance.


Let me backtrack a little and introduce the three areas in a local git repository (I am not talking about a remote repository, I am talking about a local git repository, on your local hard drive, and blazingly fast).

The repository is where the history of your project is stored. It’s quite similar to a series of snapshots of your whole project. Each of these snapshots contains your whole project (technically, git deduplicates and compresses what it stores, but conceptually, think of these snapshots as whole copies of your project).

These snapshots are called commits. They are unique, and can be referred to in a number of different ways, either directly (by using their name, which is a SHA-1 hash) or relatively (‘three commits before the last one I did’).

Each commit points to a parent commit. When you have two different commits pointing to the same parent, these two commits are the starting points of different branches. The two codestreams, from this branching point, start having a life of their own, although it’s possible to merge different branches together at a later stage.
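As a sketch of this (all names are invented – a throwaway demo repository with an ‘experiment’ branch), two branches growing out of the same parent commit look like this:

```shell
# Throwaway demo: two branches starting from the same parent commit.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "demo@example.com" && git config user.name "Demo"
git commit -q --allow-empty -m "common parent"
git branch experiment                  # second branch, pointing at the same parent
echo "a" > a.txt && git add a.txt
git commit -q -m "work on the first branch"
git checkout -q experiment             # switch to the other codestream
echo "b" > b.txt && git add b.txt
git commit -q -m "work on experiment"
git log --oneline --graph --all        # both 'work on...' commits share the parent
```

From here, a `git merge` on either branch would bring the two codestreams back together.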

Decentralized weirdness

This is a good time to introduce another weirdness that took me some time to grok. There is no primacy between branches. It’s a delightfully anarchic system, no bosses – a little like the anarcho-syndicalist communes of ‘The Holy Grail’. There is no master control program, no centralized master system. One can define certain branches to be more ‘important’ than others, but it’s just a naming convention, it’s not something that is built in.

The same applies between your local repository and the remote repository. Git really doesn’t assume that one of them is more important than the other. It’s very disquieting, but liberating once you’ve understood it.

Working Directory

Next, your working directory. This is where your whole project is, where the code is, images, what have you. This is what gets ‘snapshotted’ each time one makes a commit. It could be possible to snapshot your whole project every time, i.e. every single file, but it turns out that that is a bad idea, because you end up saving too much and it gets really difficult to understand what exactly happened between two commits. I must admit to having done that at the beginning, because I was blocked in my ‘backup’ concept.

No, what you want to do is make lots of small commits, with each commit changing as few files as possible, as long as they pertain to one common change.

Staging Area

Enter the staging area. Really well named, this is where you determine which files you want to belong to the next commit. You’re not interested in the files that have changed only because something trivial like a timestamp has changed, and you’re not interested in unrelated changes either. You really want to encapsulate all the changes necessary for a single bug fix or a single new feature into a commit.


The following diagram shows the basic logic: you add files to the staging area with the git add command, then you commit those added files with a git commit command. Bringing back a specific commit to the working directory is done with the git checkout command.
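A minimal sketch of that round trip (the file name is invented):

```shell
# Throwaway demo of the add -> commit -> checkout cycle.
set -e
cd "$(mktemp -d)" && git init -q .
git config user.email "demo@example.com" && git config user.name "Demo"
echo "hello" > notes.txt
git add notes.txt               # stage: mark the file for the next commit
git commit -q -m "add notes"    # snapshot the staged files into the repository
echo "scribble" > notes.txt     # the working directory is a scrapbook...
git checkout -- notes.txt       # ...so overwriting it from the repository is fine
cat notes.txt                   # prints "hello" again
```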


So, you’ve got this conceptually? Working directory, repository, staging area? Right. This is what you should work with.

I got very confused by how this is actually built. The repository and the staging area are stored in a folder called .git which is contained within the working directory.

This made my head wobble. Surely, not inside the directory? Surely, that way lies folly? Infinite recursions? Surely it should be external? But that’s how it’s built. Inside. It’s elegant once you start thinking about it but it made my head hurt.

Start with the command line

I would recommend starting with the command-line interface. I know that there are many GUIs out there (SourceTree being a particularly good one), but all they are doing is adding an interface on top of the command-line instructions.

I would recommend moving to a GUI once you’ve learnt the basics of git. Additionally, the command line is very helpful for newcomers, as it notices when you have typed something that doesn’t make sense and makes a suggestion. Stack Overflow and the git documentation are the places to go if you get stuck.

You can download the git command-line bash here.

Recommended Reading

This is a tip from Eric McCormick.


Git in Practice [With eBook]

Useful Commands

Here I have listed those commands which I am using regularly and feel comfortable with:


[code gutter=”false”]
git init

First initialization: creates the magical hidden folder .git, which is where the actual repository is stored (see above). Run this inside the directory which you want to ‘track’.


Staging means ‘saying these are the files I wish to commit’

The command for staging is

[code gutter=”false”]
git add <filename>

alternatively you can use

[code gutter=”false”]
git add .

which will add all the files in the current directory (. is the current directory)


[code gutter=”false”]
git commit --message='Committing message'

The convention is to write a message with multiple lines, a bit structured like an e-mail. The first line should be like the subject of the e-mail, the other lines are the ‘body’ of the e-mail.
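One way to follow that convention from the command line is to pass -m twice – the first becomes the ‘subject’, the second the ‘body’ (the message text here is invented):

```shell
# Throwaway demo: a 'subject plus body' commit message via two -m flags.
set -e
cd "$(mktemp -d)" && git init -q .
git config user.email "demo@example.com" && git config user.name "Demo"
echo "x" > f.txt && git add f.txt
git commit -q -m "Add f.txt" \
              -m "Longer explanation of why this change was needed, wrapped as the body."
git log -1 --format=%B          # subject line, blank line, then the body
```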

Creating a remote repository:

[code gutter=”false”]
git remote

shows the remote repository linked to the current repository. You’ll need to have created this beforehand on the github site.

I use github, so the example here is with github. First log into GitHub and create a repository there. Note the URL of the remote repository (there is a convenient button for this)

[code gutter=”false”]
git remote add origin <your remote repository URL>

Note: origin here is one of these conventions, it’s the default name of the remote repository. Don’t make the mistake of naming one of your remote branches origin. Remember, convention!

[code gutter=”false”]
git remote -v

shows the URLs linked to your remote repository.

[code gutter=”false”]
git remote show origin

This command shows all the branches on the origin repository (the remote one, origin is its name by default)




[code gutter=”false”]
git fetch

fetches the data from the external repository but does not merge the data.


[code gutter=”false”]
git pull

fetches, then merges the data.
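A sketch of the difference, using a second local repository as a stand-in for the remote so it runs offline (all paths invented) – git pull is essentially git fetch followed by git merge:

```shell
# Throwaway demo: fetch downloads, merge integrates; pull does both at once.
set -e
work="$(mktemp -d)"
git init -q "$work/origin-repo" && cd "$work/origin-repo"
git config user.email "demo@example.com" && git config user.name "Demo"
echo "v1" > file.txt && git add file.txt && git commit -q -m "v1"
git clone -q "$work/origin-repo" "$work/clone"
echo "v2" > file.txt && git commit -q -am "v2"   # the 'remote' moves on
cd "$work/clone"
git fetch -q origin                     # downloads v2...
cat file.txt                            # ...but the working directory still says v1
git merge -q "$(git rev-parse --abbrev-ref origin/HEAD)"   # merge the fetched branch
cat file.txt                            # now says v2
```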


[code gutter=”false”]
git push <remote> <local branch name>:<remote branch to push into>

pushes the data from the local repository to the remote one.

Checkout (restore files or change branch)

[code gutter=”false”]
git checkout

This switches branches or restores working tree files. It’s a bit of an uncomfortable command at first because it seems to be doing two completely different things. Actually, it’s doing the exact same thing – it’s overwriting your working directory with information from a particular commit. Your working directory is a scrapbook, remember?

I find this useful to avoid committing fluff (my technical term for files which have, technically, changed – a timestamp for instance – but which contain no actual changes). I add the files that I want to have committed with git add, then I do a git commit, and then a

[code gutter=”false”]
git checkout -- .

to overwrite the fluff in the working directory.

If you notice you’ve just committed nonsense (in this example I replaced the whole About This Database document with nothing), you want to restore the file to what it was before. Here git checkout comes to the rescue again:

[code gutter=”false”]
git checkout HEAD~1 -- odp/Resources/AboutDocument

HEAD~1 is a way to refer to a specific commit, i.e. ‘the commit at HEAD, minus (~) one (1)’.
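A sketch of those relative references (commit messages invented):

```shell
# Throwaway demo: referring to commits relative to HEAD.
set -e
cd "$(mktemp -d)" && git init -q .
git config user.email "demo@example.com" && git config user.name "Demo"
for i in 1 2 3; do
  echo "$i" > file.txt && git add file.txt && git commit -q -m "commit $i"
done
git log -1 --format=%s HEAD      # prints "commit 3"
git log -1 --format=%s HEAD~1    # prints "commit 2" - one commit before HEAD
git log -1 --format=%s HEAD~2    # prints "commit 1"
```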

Conventions for naming files and directories:

dash dash

If you see -- before a filename, it’s a disambiguator saying ‘what follows are necessarily files’.


. is the current directory – so, everything.

Syntax for defining an entire directory with everything in it

odp/Resources/ would define that particular directory, along with anything inside it. Note the trailing slash!

Conventions for naming commits:

Naming conventions:

HEAD: HEAD is a pointer to the commit that was last checked out into the working directory. Think of it as ‘the commit corresponding to the current state of my working directory before I made any changes’.

<commit name>~1: the tilde here means ‘minus’, so this would be the previous commit.

Configuration options that are useful

In Git Bash, I immediately increase the font size to something readable on our big screens, and then

[code gutter=”false”]
git config --global core.autocrlf false

which avoids the annoying ‘lf/crlf’ warnings, since I’m mostly working in a Windows environment (sniff).
Do this before your initial commit!

Setting up SSH keys with Git Bash

Since I’m lazy and don’t want to type in password and username every time I connect to git, here’s how to do it cleanly via ssh.

[code gutter=”false”]
ssh-keygen -t rsa -b 4096 -C ""

Press Enter to accept the default file in which to save the key, then enter a passphrase (a complicated one, alright?).

Add the keys to the ssh-agent (this is a background agent whose job is to automatically supply your passphrase for you. You should be aware that this makes your local physical machine a lot more vulnerable).

[code gutter=”false”]
eval $(ssh-agent -s)
ssh-add ~/.ssh/id_rsa

copy the PUBLIC key to the clipboard (windows)

[code gutter=”false”]
clip < ~/.ssh/

or mac OS

[code gutter=”false”]
pbcopy < ~/.ssh/

Add the SSH key to your GitHub account, then test the keys with

[code gutter=”false”]
ssh -T

Ignoring certain files or directories

There are some occasions when you don’t want certain files to be tracked by git – typically binaries built from your source code, or hidden files generated by your IDE.

You can see which files are not being tracked with this command:

[code gutter=”false”]
git ls-files --others --exclude-standard
or alternatively,

[code gutter=”false”]
git add -A -n

There are unfortunately many different ways to define the excluded files. There is a central configuration file, whose location one can find with

[code gutter=”false”]
git config --get core.excludesfile

Within your project, there are two files that control the exclusions:

[code gutter=”false”]
.gitignore
.git/info/exclude

Confused? Here is an explanation from Junio Hamano (the maintainer of Git)

The .gitignore and .git/info/exclude are the two UIs to invoke the same mechanism. In-tree .gitignore are to be shared among project members (i.e. everybody working on the project should consider the paths that match the ignore pattern in there as cruft). On the other hand, .git/info/exclude is meant for personal ignore patterns (i.e. you, while working on the project, consider them as cruft).
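A small sketch of the two mechanisms side by side (all file names and patterns invented):

```shell
# Throwaway demo: shared .gitignore vs personal .git/info/exclude.
set -e
cd "$(mktemp -d)" && git init -q .
git config user.email "demo@example.com" && git config user.name "Demo"
printf '*.class\nbuild/\n' > .gitignore               # shared with the team - commit this file
printf 'my-scratch-notes.txt\n' >> .git/info/exclude  # personal - never committed
touch Main.class my-scratch-notes.txt keep-me.txt
git status --porcelain     # only .gitignore and keep-me.txt appear as untracked
```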

This command combination is practical (thanks Akira Yamammoto)

[code gutter=”false”]
git rm -r --cached .

(recursively remove all the cached (i.e. tracked) files from the current directory), then

[code gutter=”false”]
git add .
git commit -m "fixed untracked files"

Notes/Domino best practices

Best practice when gitting an odp project: never disconnect the nsf from the Eclipse project, and never change the nsf the odp is connected to.

Thanks to Adrian Osterwalder for the tips.

Back from ICON UK with some goodies.

Another year gone and ICON UK did not disappoint. Tim Clark did a wonderful job of building on the success of last year’s event, bringing it back to two full days. IBM graciously hosted us in its Stanley Kubrick-esque Client Center, with stunning Thames views, lovely food and good infrastructure. Wonderful.

René Winkelmeyer held a wonderful session on gradle and I was very impressed by his mastery of his computer. He was actually using vim to edit text files, and I must admit that I was so impressed that I am actually writing this blog on vim, learning the hard way. Really fast, configurable, powerful text editors made for programmers seem to be more and more the norm. Besides vim, there’s Sublime Text; Matt White mentioned Atom as being his favourite, and I must admit that it is really a refreshing break after Eclipse, which is slow and ponderous in comparison. I’m not even making the comparison to DDE.

Here are the slides:

Speaking of gradle, that’s another trend which I can see happening in parallel in several different systems. I’ve been discovering the joys of Linux and scripting because of a small Raspberry Pi project I am doing on the side, and there is a common theme of self-updating, self-building systems, be it apt-get (for Linux updates), bower (for JavaScript libraries), Homebrew (for macOS libraries), or Maven (for building up the dependencies of a Java project). The skillset needed to build a modern project is more and more about knowing which great big building blocks are needed and mastering the build tool. I’ve been trying to get my head around Maven right now, but since I heard some dark mutterings by Paul Withers about the documentation of Maven, I think I’m going to just jump over Maven and go to gradle directly.

I had the pleasure of attending the session by Bill Malchisky and I’m proud to say that I understood at least half of what he said. He speaks surprisingly fast and surprisingly precisely; it’s an uncommon combination, but one really needs to listen hard. He is also a script master, and again, eerily, a nudge in the direction of ‘invest in your text editor and typing skills’.

Matt White showed the magic of node.js, which is used extensively in their solution LDC Via, and there again I was seduced by the simple structure, and the promise of only a single thread working very very fast.

I spent a thoroughly enjoyable hour with Serdar, and I discovered that we share many opinions as ‘convinced skeptics’. It was a pleasure to bash on pseudoscientific nonsense with him. Next time I’ll bring woowar in and we’ll do a bigger skeptics session.

On the second day Andrew Grill showed the advantages of Connections; his style was entertaining and persuasive. His colleague Harriet explained to us rather condescendingly what it is to be a millennial. I didn’t understand why being impatient and having a short attention span is somehow good, and I took exception to the comment that millennials don’t read instruction manuals but just expect things to work immediately out of the box. Surely that is a result of extraordinarily good product development (I am thinking in particular of Apple products), and not because millennials are this super-brainy generation. Making things simple is extraordinarily difficult. Just try it.

Engage 2015 as engaging as ever

Engage is the most successful LUG in Europe, and as usual I am slightly bewildered by how Theo Heselmans, our gracious host, manages to pull it off. The venue was lovely, the opening session room stunning, and the content was very high quality. I really enjoyed meeting many community members whom I had only seen online, including a couple of my Stackoverflow saviours (Per, nice to have met you!)

The city of Ghent itself was a nice surprise. The inner city is full of history, with many old buildings harking back to a more prosperous past, and a surprising number of churches. I had a little walk to the north of the city, though, and it’s obvious that the city went through an industrial phase which it has exited but never really recovered from.

Here are my technical take homes from the whole two day session:

Both Ulrich Krause and Frank van den Linden independently confirmed that they didn’t like the new ‘Java’ design element and found the oldskool WEB-INF folder more stable.

Theo Heselmans presented some of the Javascript frameworks he’s been using; I knew Bootstrap and Backbone; he recommended Ratchet and Knockout as well. Also, if you want to store local stuff, the way to go nowadays is no longer cookies but the manifest or local storage.

John Daalsgard had a good session explaining the Domino REST API; I learned lots of stuff but was sort of disappointed that authentication was not really talked about. Most of the examples were using anonymous access, and authentication is still not really an easy thing to do. Paul Harrisson, who did the local web application for engage, pointed me to his blog entry about authentication. I’ve been working with Julian Buss’ framework DominoToGo and I was initially under the impression that the REST Services introduced in 9.0.1 would mitigate its usefulness but I’m coming to the conclusion that as soon as you get out of the demo-cases ‘simple text’ and ‘anonymous access’ things start getting complicated using REST, i.e. one has got to start coding things oneself.

One of the most interesting sessions was the one on Git done by Martin Jinoch and Jan Krejcarek. Martin was very stern and he endeavoured to persuade us to abandon the idea that the source code resides within the NSF and that the Git repository is the backup. Rather, the source code is in Git and the NSF is just a throwaway, last-minute build construct. I almost burst into tears. Martin also admonished us to turn off everything that builds automatically, including the nsf to on-disk-project sync.

I was also relieved to hear that other fellow developers were irritated by ‘false positives’, i.e. files that have been touched, and therefore appear in the staging area of git, but whose code has not been practically modified, and therefore are really cluttering. There is a project called DORA which alleviates this, but it only works if one starts the project with it. Implementing it midway is bad, apparently (thanks to Serdar).

The London Developer Co-op was there in force, with a stand even, and showed us a very polished product for data exporting. I can see use cases if customers just want to store their data somewhere else, to finally kill off the remnants of their Domino infrastructure, but the fact that the business logic does not get exported will still represent a large exit barrier.

Mark Leuksink and Frank van den Linden also independently introduced me to bower, a package manager that manages the JavaScript library dependencies automatically for you. The idea here, if you’re doing an XPages project, is to have bower point at your odp structure and do the updates there. You’ll need to press F9 in the package explorer before syncing the project.

In the mindblowing categories, Nathan Freeman showed the Graph construct he has made available within the OpenNTF Domino API. Documents stored in an nsf without views? That’s just weird. Possibly illegal. And whereas I can see obvious advantages in terms of speed when the data structure is already known in advance, especially for traversal-heavy, multi-layered searches like ‘show me the persons who know the persons I know’, I’m not sure how the Graph concept would deal with ad-hoc requests, or with a change in the underlying data structure. I would really like to see what sort of measurements one can make as to the performance of data writing and reading, especially in large numbers. The demonstrations as well were built from scratch, and worked well, and I’d be very interested to see what happens when one takes an existing data landscape and ‘graphs’ it.

The final session I attended was from Paul Withers and Daniele Vistalli. Paul presented the newest possibilities of the next version of OpenNTF Domino API. They are introducing a concept of metaversalID which is a combination of database replicaID and Document Universal ID, and apparently the code has been made Maven-compatible. It looks like we will have, in conjunction with Christian Güdemann’s work on an eclipse builder, soon a system where we can start thinking of continuous builds. We’ll be big boys, then, finally.

Daniele introduced the Websphere Liberty Server. I had dismissed the Websphere server as a huge, lumbering IBM monster but apparently the Liberty Server is small and lightweight. And then, doing some magic, Paul and Daniele made the Liberty Server behave just like a Domino server. The demonstration was still very much in beta stage, and I’m not clear as to the implications of this tour de force. But it might be a game-changer.

my non-technical take homes:

When travelling, bring two phone chargers. With the iPhones losing juice so quickly, losing your charger leaves you strangely vulnerable and incommunicado. Thanks to Ben Poole for letting me load up at the LDC Via stand.

It is unwise to start debating with Nathan Freeman at 2.30 in the morning after everyone else has been kicked out of the hotel bar, and Nathan has a bottle of tequila in an ice bucket.

Loop elegantly through a JavaScript Array

I’ve been reading JavaScript Enlightenment to try and understand the language. There is beauty, and power, hiding behind the covers of JavaScript, but it hasn’t clicked for me yet. I still don’t really get prototypal inheritance and what ‘this’ really means. There is power in understanding closure and scope, too, I am sure. This small book is recommended to see where the beauty is.


I’d like to share one bit of code which I found elegant: looping through an array:

[code language=”javascript”]
var myArray = ['blue', 'green', 'orange', 'red'];

var counter = myArray.length;

while (counter--) {
    console.log(myArray[counter]);
}

It relies on the fact that integer values that are not 0 or -0 are ‘truthy’.