Code Readability Research

In 2018 Daniel started an online experiment at howreadable.com. The aim of the experiment is to uncover rules for how programming patterns affect code readability, based on empirical observation of developer behaviour.

A second iteration of the experiment ran from October to November 2019. 545 developers took part, leading to statistically significant results on the readability of six coding constructs. The full write-up is available on howreadable.com.

The howreadable experiment was developed by Daniel van Berzon (@dvberzon) in collaboration with Jake ‘Sid’ Smith (@JakeSidSmith) and Niall Coleman-Clarke (@mceniallator).

Thank you to Phil Teare, Freyja Nash and Narani van Laarhoven (@fsf_2025) for their help with the experiment design, Cian Campbell (Jazz Hands Presentations) for help with the visual design and Oskar Holm (@ohdatascience) for help with the statistical analysis.

Special thanks go to Ocasta for sponsorship and financial support.


Code readability is vital to productivity in software development, but there is little literature on the subject and almost no academic research. A developer reading up on how to improve the readability of their code will find advice that is almost exclusively based on subjective personal opinion. What academic research does exist relies on developers' own subjective assessments of the readability of code. What is lacking is an objective metric for readability based on empirical observation.

The inspiration for this experiment came from the world of linguistics, where the traditional view of grammar as a set of prescriptive rules was replaced by a search for descriptive grammar rules based on observation.

Read more about the inspiration for the project …

In the experiment, participant developers are presented with code snippets and asked to predict the result of executing the code. We measure their success at predicting the result, and the time they take to read the code. Using these two metrics, the experiment compares the readability of different coding patterns with the aim of determining descriptive rules. An example would be whether code comments help readability.
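
To illustrate the kind of comparison involved, here is a minimal sketch, not an actual snippet from the experiment: two functionally identical functions, one with an explanatory comment and one without. A participant would be asked to predict the printed output, and their accuracy and reading time would be recorded for each variant.

```ts
// Hypothetical example of the kind of snippet a participant might see.

// Variant A: no explanatory comment.
function priceA(items: number[]): number {
  return items.reduce((total, item) => total + item, 0) * 1.2;
}

// Variant B: same logic, with a comment explaining the multiplier.
function priceB(items: number[]): number {
  // Sum the item prices, then add 20% VAT.
  return items.reduce((total, item) => total + item, 0) * 1.2;
}

// The participant predicts what each call prints; both log 72.
console.log(priceA([10, 20, 30]));
console.log(priceB([10, 20, 30]));
```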

Read more about the methodology of the experiment …

The first version of the experiment went live in 2018. Daniel presented his initial findings on Dec 6th 2018 at the Async meetup in Brighton. You can watch the talk here and the slides are available here.

A second version of the experiment, with improved methodology, ran from October to November 2019. Daniel presented the results at the HalfStack conference in London. The slides are available here.