Archive for the ‘English’ Category

An interview with John Koenig about YMIR

August 2, 2013

Hello there! Today I want to introduce my interview with John Koenig. John is a PhD student at the University of Minnesota in the field of distributed, real-time simulation. He is also working at the game studio he founded, which is currently developing The Electric Adventures of Watt, a game with some Erlang in it.

Learning something more about Ymir

Paolo – Hello John and welcome to my blog! Can you please introduce yourself to our readers?

John – Hi Paolo, thanks for having me. My name is John Koenig and I am a PhD student at the University of Minnesota (UMN) studying distributed, real-time simulation. I am going into the third year of my PhD program, preparing for my written and oral defenses. I have been a regular Erlang user for about 6 years.

Prior to, and intermixed with, my time at UMN, I worked at Cray Inc. Most recently I was contracted to Cray’s Chapel team, where I worked on several language improvements in the area of portability. The majority of my time at Cray was spent as part of their Custom Engineering initiative, where we engineered unique supercomputing platforms and software stacks for various customers.

In 2010, I founded a game studio, Called Shot LLC, with my good friends Gabriel Brockman and William Block. We are currently in the first round of funding for our flagship game title: The Electric Adventures of Watt.

Paolo – This is a common question I ask during my interviews: how did you start using Erlang? What are the features of Erlang that made you learn it?

John – I was first introduced to Erlang while pursuing my undergraduate degree at the University of Wisconsin–Eau Claire (UWEC), around 2006 I think. As part of a Programming Languages course, we were tasked with picking a new language and implementing a solution to a sufficiently interesting problem in that language’s domain. At the time I was big into Plan 9 and distributed software in general, so I chose Erlang and implemented a distributed prime number sieve.

Being more of an applied school, UWEC had me spending most of my time programming C and C++ and I remember being really impressed with how Erlang modeled processes and inter-process communication directly in the language. Once I got past Erlang’s syntax learning curve and my newness to functional programming, I found myself able to express distributed solutions very naturally in Erlang. After that, I was hooked. I picked up Joe’s book, Programming Erlang, and started keeping up with the Erlang community online.

I first started using Erlang professionally at Basho in early 2008 when I was brought on as a Reliability Engineer. I had thought, coming out of UWEC, that I knew Erlang fairly well, but I grew considerably during my six months at Basho. Justin and his development team are incredibly talented and being around that level of skill and enthusiasm was highly contagious. I remember that time fondly.

Paolo – Would you like to introduce and describe in a few lines what Ymir is? Where will Ymir be used?

John – Ymir is an open-source (GPL), cross-platform, distributed 3D game engine written in Erlang.

With the number of cores available to gamers on the rise, Ymir’s purpose is to break games out of the traditional, single-core-dominated game loop and, in doing so, achieve faster, larger simulations that grow in proportion to the number of available cores.

Paolo – Why did you decide to use Erlang for Ymir? Was there any other candidate language at the beginning of the project?

John – Ymir grew out of a desire to create a multi-player RPG that got away from the traditional client/server model. A few friends and I enjoyed online RPGs but didn’t enjoy the MMORPG scene. We were interested in the approach of Neverwinter Nights 2, however, which featured smaller worlds developed and hosted by members of the community. Hosting these worlds could get terribly expensive: each world ran on a single server and, as its player base grew, its admins would be required to either co-locate their server or pay for expensive home internet access with sufficient upload speed. I set out to change this, wanting instead to see a game capable of simulating a world in a more peer-to-peer fashion: namely, a game engine capable of utilizing the additional computational power and bandwidth that become available as players log in to enjoy the simulated world.

I didn’t consider anything other than Erlang for this task. Along with OTP, Erlang is still the best language for distributed development as it allows me to focus more on the high-level challenges of distributed real-time simulation and less on the gritty details of implementing my own task-queues, inter-process communication, etc. This choice was further cemented when I proved that communication to port drivers, with minimal trickery, was sufficiently fast to support online rendering.

Paolo – Reading your paper I spotted many words often used among Erlang developers: scalable, soft-thread, message passing and minimal amount of synchronization. Would you like to discuss the meaning of each term with respect to Erlang and Ymir?

John – Game engines are traditionally frame-centric. Their primary goal is to compute and render frames as quickly as possible. Two aspects make this approach difficult when scaling over multiple cores: first, computing a frame is recursively dependent on the frames which came before it and, second, traditional spatial data structures used in collision detection require all game entities to be synchronized in order to function.

Ymir takes an object-centric approach and aggregates frames as quickly as possible. Game objects (entities) are represented as Erlang processes (soft-threads) and each entity is responsible for simulating itself locally. Discrete events (e.g. collisions, user input) are modeled as messages which occur at a specific point in simulation time. As an optimization, we allow entities to exist at various points in simulation time and to resolve events in the recent past by applying timewarp. Entities proceed through their local simulations, streaming updates to their physical state to the relevant renderers. To enforce fairness, a sense of global time is defined as the minimum of all entity simulation times. This introduces a small amount of global synchronization, as entities “vote” on the value of this global time through various shared ETS-based counters.
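The “global time as the minimum of all entity times” idea can be sketched in a few lines of Erlang. This is my own illustrative sketch, not Ymir’s actual code: the module, table, and entity names are hypothetical, and a real engine would update the table concurrently from each entity process rather than in one call.

```erlang
%% Sketch (hypothetical names, not Ymir's API): entities publish their local
%% simulation time into a shared ETS table; global time is the minimum.
-module(gvt_sketch).
-export([demo/0]).

demo() ->
    T = ets:new(entity_times, [public, set]),
    %% In a real engine, each entity process would do
    %% ets:insert(T, {self(), LocalTime}) as it advances.
    ets:insert(T, [{e1, 120}, {e2, 95}, {e3, 200}]),
    %% Fold over all entries to find the minimum simulation time.
    %% (The atom 'infinity' sorts after all numbers in Erlang term order.)
    ets:foldl(fun({_Id, Time}, Min) -> min(Time, Min) end, infinity, T).
```

Calling `gvt_sketch:demo()` returns `95`, the slowest entity’s time; no entity is allowed to render ahead of it, which is the fairness property John describes.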

To break away from traditional spatial data-structures, Ymir applies map/reduce to spatial reasoning in order to achieve scalable collision detection. When simulating forward in time, entities volumetrically hash their physical extents against a fixed cube to various buckets (also soft-threads) and aggregate contacts which result from writing their latest physical states into each selected bucket. The act of mapping to spatial buckets is analogous to selecting nearest-neighbors (broadphase) and the buckets themselves compute points of contact for each pair of entities overlapping within its given volume of simulation space. In short, Ymir serializes only those objects which are sufficiently close together while permitting objects sufficiently separated in space to simulate unimpeded.
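The “map” step of this scheme, hashing an entity’s physical extent onto fixed-size spatial buckets, can be sketched as follows. Again, this is my own hedged sketch under assumptions (axis-aligned extents, cubic buckets of a fixed edge length); the module and function names are not Ymir’s.

```erlang
%% Hedged sketch of a broadphase "map" step: hash an axis-aligned extent
%% onto fixed-size cubic buckets. Entities mapping to the same bucket are
%% candidate collision pairs; entities sharing no bucket never synchronize.
-module(broadphase_sketch).
-export([buckets/2]).

%% Extent = {{MinX,MinY,MinZ}, {MaxX,MaxY,MaxZ}}, Size = bucket edge length.
buckets({{X0, Y0, Z0}, {X1, Y1, Z1}}, Size) ->
    [{X, Y, Z} || X <- cells(X0, X1, Size),
                  Y <- cells(Y0, Y1, Size),
                  Z <- cells(Z0, Z1, Size)].

%% All integer grid cells an interval [Lo, Hi] overlaps.
cells(Lo, Hi, Size) ->
    lists:seq(floor(Lo / Size), floor(Hi / Size)).
```

For example, `broadphase_sketch:buckets({{0.0,0.0,0.0},{1.5,0.5,0.5}}, 1.0)` yields `[{0,0,0},{1,0,0}]`: the extent straddles two buckets, so only entities touching those two cells would ever be serialized against it.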

Using map/reduce in this fashion allows Ymir to scale-out over many cores very well. Currently, we are able to realize ~11x speedup in overall simulation time on 16 cores and sustained frame rates of ~500 fps. Ymir’s performance is dependent on many factors, however, chief among these is the degrees of freedom between entities. As entities are serialized based on spatial proximity, scenes where all entities exist in persistent contact are currently unable to obtain such lofty speedups. I am currently expanding our methods to better model persistent contact which will help Ymir obtain better speedups in these scenarios. Furthermore, using map/reduce to compute contacts works well locally or over low latency networks but as we scale up to many machines connected with higher latency other approaches will be needed. I am currently investigating network overlays between entities which capitalize on spatial assumptions present in game simulations.

Paolo – Do you have any partial results for Ymir that our readers can take a look at? What kind of tests do you run on Ymir?

John – We are actively maintaining performance results on Ymir’s indiedb page, and as time permits I will be documenting Ymir more completely on our development blog.

This video showcases the three testing scenarios we used to gather our preliminary results. All three scenes are rendered offline using Ymir’s built-in support for Mitsuba. Parallel is rather boring to watch, but provides a best-case performance baseline. Cylinders features a stack of spheres falling onto a static array of cylinders and is more representative of the types of rigid-body interactions one might see in an interactive game. Last is Bounce, in which spheres move randomly within fixed scene boundaries; Bounce is currently being used to measure Ymir’s performance as it relates to scene density.

Paolo – Is there any way our readers can contribute to the development of Ymir? Is there any fund-raising? Can other developers join the project?

John – Glad you asked, yes! We are currently on indiegogo seeking funding for The Electric Adventures of Watt which will be powered by Ymir. In supporting The Electric Adventures of Watt, contributors will be directly helping us mature Ymir into its first public release.

We will be advertising the public repositories for Ymir concurrently with its first official release. In the meantime, if developers are interested in working on Ymir, please, don’t hesitate to get in touch with me: john

Paolo – You are a PhD student at the University of Minnesota and your studies are mainly focused on parallel and distributed real time simulation. Do you think Erlang could be widely used in these fields?

John – Without a doubt; that is exactly what Erlang was designed for. There is even precedent for using Erlang server-side for games: MuchDifferent and SMASH. I feel that as CPUs continue to gain cores and affordable CPU accelerators (e.g. Parallella) become available, game developers will turn to solutions like Ymir to grow their games. Scalable, real-time simulation is not an easy undertaking, and savvy developers will be looking for the right tools for the job.

As many Erlang enthusiasts know, there is at times substantial resistance to using languages that are outside of other developers’ comfort zones. This is especially true in academia. I can’t even count the number of times I have had to defend Erlang to my lab mates at UMN. That said, we are not expecting that all developers wishing to use Ymir will embrace Erlang, and our roadmap includes front-ends for languages most game developers will find familiar: C/C++/Lua.

Paolo – Did you find any help in the Erlang community? Did any Erlang developer give you feedback or support online during your development?

John – Several times I got frustrated with the performance of Mnesia and ETS for Ymir’s collision detection, and I turned to the Erlang IRC channel for support and guidance. The Erlang community has been nothing but insightful and supportive every time I have turned to them. Although for the life of me I cannot remember the handles of those who offered help, I owe the Erlang community thanks.

We also owe special thanks to you, Paolo, for featuring Ymir on your blog, Peer Stritzinger for helping us reach out to the Erlang community, and to your readers.

An interview with Kenji Rikitake (@jj1bdx)

June 27, 2013

Hello there! In this post you can read my interview with Kenji Rikitake. Kenji is a famous Erlang developer and security expert. I really loved this interview because Kenji provided some really interesting anecdotes from his personal life and many insights into IT in Japan.

The Erlanger from Japan

Paolo – Hello Kenji! It’s great to have you here! Please, can you describe yourself to our readers?

Kenji – My name is Kenji Rikitake. I am a relatively new user and programmer of Erlang; my experience is only about five years.

I’ve been working on various aspects of the internet and distributed computing for 25 years. I started as a VAX/VMS sysadmin intern in 1987. A couple of years later, in 1990, I became a VAX/VMS Asian screen-management library programmer and product tester at Digital Equipment Corporation Japan.

After leaving Digital in 1992, I decided to start my career as an internet sysadmin (“devops”, in the latest trendy word) and a volunteer evangelist explaining how the internet would change the world. I worked for a systems integration company called TDI, where I co-designed and implemented a corporate firewall with BSD/OS systems and dedicated routers, including simple fault tolerance. The firewall system was running until 2000, when I left the company. I’ve also written two books in Japanese about internet engineering and technologies.

From 2001 to 2005, I was a researcher at KDDI R&D Labs, working on network security, intrusion detection systems, the DNS protocol, and teleworking. During that period, I also conducted joint research with Osaka University as a PhD student; my PhD thesis was about DNS reliability and security.

From 2005 to 2010, I was a researcher at the National Institute of Information and Communications Technology (NICT), a research body of the Telecom Ministry of Japan. I was involved in the preliminary design of a network-intrusion early-warning and analysis system called “nicter”, and later I pursued DNS reliability research, especially on the behavior of DNS packet fragments. I also worked on IPv6 and NGN security.

After I met Erlang/OTP in 2008, my research interests shifted to concurrent programming and various related issues, including security, efficiency, and robustness. Distributed database design is my latest research topic, for the obvious reason that I am currently working on building Riak. I’ve presented four talks at Erlang Factory SF Bay Area from 2010 to 2013, one each year.

From 2010 to 2012, I was a full professor at Kyoto University, though my primary role there was to implement and supervise the campus network security policies and procedures. I worked on two Mersenne Twister random number generator implementations for Erlang, called SFMT and TinyMT, which were published at the ACM Erlang Workshops of 2011 and 2012. I also organized the 2011 Workshop, held in Tokyo, as the Workshop General Chair.

I’m currently working for Basho Japan, a Japanese subsidiary of Basho Technologies.

I’m an electronics geek, and my Twitter handle @jj1bdx is derived from my primary ham radio call sign in Japan, which I’ve held since 1976. Morse code on shortwave is one of my favorite activities on the radio, though from 1986 to 1990 I was also involved in packet radio activities based on TCP/IP. Music is another thing that makes me happy.

Paolo – First real question: how did you meet the functional programming world?

Kenji – I first read a Lisp book in the early 1980s, when I was a teenager. I was not that interested in S-expressions, though, because I didn’t have an execution environment then. It was even before the C language reached personal computers; I was playing around with my Apple II, mostly in assembly language, and with two tiny programming languages called GAME and TL/1. I even wrote a GAME compiler for the 6502 running on the Apple II.

Before starting my real career, I was a member of Professor Eiiti Wada’s lab at the University of Tokyo, from 1988 to 1990. Prof. Wada and his lab members created a Lisp implementation called UtiLisp, and the lab was the most advanced place on campus for networking. I also learned some basic ideas of functional and even logic programming, because of the nationwide buzzword The Fifth Generation Computers. Some of the Wada lab alumni were the key designers and implementers of the language called Guarded Horn Clauses, which has a surprisingly similar design philosophy to Erlang, although it is a logic programming language.

My problem with understanding functional/logic programming, however, was that I couldn’t really grasp the core reasons why those programming paradigms were effective, and even required, for large-scale system design. I failed a Prolog course in 1989, partly because I didn’t find the unification principle meaningful. So I was a very bad student. I wish I could have learned it back then through Erlang-style pattern matching!

And unfortunately, my mind in the late 1980s was too focused on how to run UUCP and email systems inexpensively without UNIX, so any functional or logic programming paradigm seemed redundant to me, because they were so slow. I didn’t like the regular commute from my home to the university, so I wanted to find a way of working from home. At that time my main target of code hacking at home was MS-DOS; I had to wait until 1993, when I could use BSD/OS at home, to experience real UNIX there. I later moved to FreeBSD in 1997, and I’ve been running Erlang/OTP mostly on FreeBSD since 2008.

Paolo – And when did you first hear about Erlang?

Kenji – I first saw a Japanese translation of Joe Armstrong’s “Programming Erlang”, published by Ohmsha in November 2007, at a bookstore in downtown Tokyo in February 2008, on my way home from Tokyo to Osaka. I instinctively knew this was the one I had to learn and go for, so I immediately bought it, and I have been discovering the world of Erlang ever since.

Paolo – You told me that you had some bad times during your experiences as developer and University Professor, but also that Erlang and functional programming helped you to overcome your difficulties. Can you tell our readers something about that?

Kenji – Let me start from my programming middle-age crisis first.

I have concentrated my programming effort on C since 1986. I have never really grasped the strict module-namespace control of Java, nor the template-based extensions of C++, even now as I answer this interview in 2013. Of course, I can manage other scripting languages such as awk, Python (which is quite good), Ruby, or even JavaScript. I know programmers can no longer simply choose their languages, because every system has already chosen the language best suited to running it. But that doesn’t mean you can just improvise all the code; you need a deep knowledge base in at least a few languages.

I was looking for something completely new and innovative to learn as a programming system, after concluding that working only in C was no longer sufficient to keep myself current as a modern programmer. I was sick and tired of understanding and modifying the BIND 9 DNS server code, written mostly in C, for a DNS research paper I was writing at the time. I don’t blame the BIND 9 programmers, because it does really complex, magic things, and I admire the ISC people, especially Paul Vixie, one of my mentors at Digital Equipment and the father of BIND. Nevertheless, having to read hundreds of header macro lines to reach the actual code no longer looked practical to me. And I thought I would lose my competitiveness as a programmer if I stuck to the old way of C programming. So eventually I became a polyglot programmer; I use C, awk, Python, Perl, and Erlang.

I knew multi-core and massively parallel computing hardware was coming, and I wanted to learn something very different from the sequential and inherently procedural programming languages and systems of the past. While Erlang is *not* specifically designed for a massively parallel execution environment, the language does embed a lot of practical constraints suited to modern computing hardware, for example the single-assignment variable principle, and the OTP system itself, for example the gen_server behaviour framework, steers programmers toward doing the least wrong things. This is something other languages cannot emulate or mimic.

Next, about the university professor life crisis.

During my Kyoto University career, most of what I was doing was talking, negotiating, and dealing with people, not computers. The university is a very large organization, and keeping the campus network secure is practically impossible without the help of the university’s members: the administrative, education, and research staff, and of course the students. I am an introverted person, and most university people are not geeks (although many are excellent researchers), so the human-communication tasks were the toughest thing I have done in my life. Also, the long commute from my house to the office, four hours in total every day, literally killed me.

Fortunately, I was allowed to continue CS research activity during my Kyoto University career, and I was eligible to run large batch jobs on a large Linux supercomputer cluster. So I decided to run some Erlang code and do fun things over there. One good thing about Erlang is that it is mostly OS independent, so I did the prototyping on my home FreeBSD machines and let the huge multi-core jobs run on the cluster. I’ve put the research results on GitHub. So I didn’t have to throw away the possibility of a career as a CS researcher 🙂

Paolo – You are widely respected not only for your knowledge on Erlang/OTP but also for your expertise on distributed system security. What is the intersection between these two fields?

Kenji – Erlang/OTP is a very good candidate for building a reliable system. This means it would be a prospective candidate even for a secure system, if properly designed. In other words, an unreliable system can *never* be secure. And no system is 100% reliable.

The word “security” has a lot of implications in many different aspects, and is widely misused in many contexts, even if I exclude the militaristic and socialistic implications, which may be out of scope of this interview, though very serious issues themselves indeed.

I believe that the foundation of a secure system is a reliable and fault-tolerant system. This has frequently been ignored even by many “security” experts; for many of them, security is only about cryptography, or about restricting users’ behavior in a system, or just about analyzing the behavior of pieces of malware. I do not deny those aspects; they are very important, and the outcomes of those research activities are surely essential for building better computer systems. But those aspects are not, by themselves, *the* security. A very broad perspective is needed of a computer security expert.

Also, I have to stress that security is mostly about people and how people behave. People want convenient systems, and in many circumstances security and convenience do not coexist. For example, if you really want a secure system, do not connect it to the internet. But such a special system, one that provides sufficient communication capability within itself while rejecting all attacks and exposing zero attack vectors, is virtually impossible to build from a financial point of view. Remember the Stuxnet case? Consider what might have happened if the power plant had been using Erlang/OTP for its core and end-point controllers.

I wish Erlang/OTP developers would always think about making reliable software. It’s not that difficult; thinking carefully while programming will handle most cases.

Paolo – What is the best way to “secure” a distributed Erlang system?

Kenji – Traditionally, putting the whole system in a protected network is the only solution. And unfortunately, that is still the case.

This is a very good question, because in the current Distributed Erlang (disterl) system on OTP, the security model is very weak, if it exists at all. TLS-based disterl (with the ssl and crypto modules) is a good solution for protecting the communication between BEAMs, but the problem is that the communication between the BEAMs and the port-mapper daemons is plain text, and it is not trivial to incorporate the necessary authentication and cryptographic features.
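For readers who want to try the TLS-based disterl Kenji mentions, OTP ships a TLS distribution carrier that can be enabled from the command line. The certificate paths and node name below are placeholders; consult the OTP ssl application documentation for the full set of options, and note, as Kenji says, that epmd (port-mapper) traffic remains unencrypted.

```shell
# Start a node whose distribution links run over TLS instead of TCP.
# Paths and the node name are placeholders for your own deployment.
erl -proto_dist inet_tls \
    -ssl_dist_opt server_certfile /path/to/cert.pem \
    -ssl_dist_opt server_keyfile /path/to/key.pem \
    -name secure_node@host.example.com
```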

Erlang/OTP has been depending on the assumption that the whole disterl cluster sits in a protected network without any attack vectors. In other words, the disterl cluster itself was considered a system without protection. Opening the communication ports to the internet, however, makes this assumption rather unrealistic; Erlang/OTP devops must think about all the possible attack vectors against the disterl cluster as a whole system.

One possibility for protecting BEAM-to-BEAM communication is to establish cryptographically authenticated links between the BEAMs and let those links be used persistently, with proper periodic re-keying, without using any port-mapper daemon. I believe incorporating such a facility into Erlang is not that difficult, though the rendezvous problem between multiple BEAMs would have to be solved another way.

Paolo – During your experience as Professor at Kyoto University, you did also research activity using Erlang and OTP. You worked in particular on SFMT and TinyMT. Would you like to introduce these two projects to our readers?

Kenji – Mersenne Twister (MT), a BSD-licensed, innovative, long-period (typically 2^19937 − 1) non-cryptographic pseudo-random number generator (PRNG) by Profs. Makoto Matsumoto and Takuji Nishimura, has become the de facto standard in popular programming languages such as Python and R. SIMD-oriented Fast MT (SFMT) and TinyMT are improved algorithms by Profs. Makoto Matsumoto and Mutsuo Saito. The MT algorithms all have a very high order of equidistribution, which fits very well with large-scale simulation, including software testing.

SFMT is an improved version of the original MT, which is even faster and has tunable characteristics for the generation period and sequence generation. TinyMT is another variant of MT, which has a much shorter generation period (2^127 − 1) and a smaller memory footprint, but is still suitable for most simulation uses. The TinyMT algorithm is much more compact than SFMT or MT, and can generate a massive number (~2^56) of independent, orthogonal number sequences, which makes it suitable for massively parallel asynchronous PRNG.

For the further details on SFMT and TinyMT, please take a look at:

I am not a mathematician, so I cannot mathematically prove how MT and its derivatives are better than other algorithms. But I have to emphasize very strongly that Erlang/OTP’s random module still uses an archaic algorithm invented in the 1980s, which has a significantly shorter generation period (~2^43) and has already been an indirect source of a security vulnerability (CVE-2011-0766, discovered by Geoff Cant). SFMT and TinyMT have much better characteristics than the random module, and I strongly suggest you try them out if you really need a better non-cryptographic PRNG.

The sfmt-erlang repository is:
The tinymt-erlang repository is:

Recently I have published 256M (= 2^28) precomputed keys of TinyMT 32-bit and 64-bit generation parameters. This archive is huge (~82GB), but if you would like to use TinyMT for a serious simulation, it is worth taking a look at. The archive is at:

Paolo – Currently you are working at Basho Japan. Can I ask you what it is like to work at one of the most acknowledged Erlang companies? How much Erlang code do you see in your daily working routine?

Kenji – Basho developers are all superb and very energetic about making Riak, and the recently open-sourced Riak CS, even better products. Working with such talented engineers and keeping up with them is very, very tough, but if you are capable of pointing out bugs and proposing contributions that have been proven to work correctly in Basho’s open-sourced projects, you will surely be welcome.

I would also like to emphasize that Basho is not just an Erlang company. You need to know many programming languages and computer science fundamentals, from C, C++, Java, Python, and Ruby to the gory details of distributed databases, including how vector clocks and commutative/conflict-free replicated data types (CRDTs) work. Riak, Riak CS, and rebar include a lot of by-products; look at the deps/ directory under Riak and you will be astonished. On the other hand, that means there are many ways to contribute your skills.

I would also like to emphasize that Basho’s client-service engineers, sales and marketing people (including the documentation experts), and all the other staff members work closely with the developers to maintain a high standard of delivering quality services and products.

I can only answer that the amount of Erlang code I have to see is *enormous*. 🙂

Paolo – I am very interested in Erlang and Japan. Is Erlang a niche programming language there as well, or is it spreading as fast as in the US and northern Europe?

Kenji – I would rather ask the question back: is Erlang a popular language anywhere in the world? I think the answer is probably no, compared to the popularity of Java or C++; looking at the TIOBE index will confirm this. And I’d rather say nobody cares, because whether a language is spreading fast has become irrelevant compared to the jobs or tasks you want to get done with the language.

I do understand Erlang has gained larger momentum in Sweden, where the language is from. And I see many people solving problems with Erlang in Europe, in the USA and Canada (hi Fred!), and in Japan too, especially for server-side programming solutions. So I feel the developers in Japan are slowly but surely showing more interest.

Getting back to the situation in Japan: I think not many people are interested in any new programming paradigm, except for a relatively small number of communities. Fortunately, those communities surely exist, and some visionaries have discovered languages such as Haskell, OCaml, or Erlang to solve *their* problems and to help others solve theirs. But for the majority of programmers, most of the details are “not really something to be carefully taken care of, but something to be blindly delegated to the experts”, also called the *omakase* attitude in Japan. So most programmers just do the omakase to Rails, or to Java libraries, or to pre-built C++ libraries. And that irresponsible attitude toward their profession, though not necessarily their sole responsibility, causes a lot of sometimes lethal or disastrous bugs in production systems. Unfortunately, many programmers in Japan are not well educated as software engineers, and their supervisors are sometimes even worse. Their mindset of dumping the risks (or *doing the marunage*) for every difficult problem makes things even worse.

I think programming is not something for omakase, and the quality of code will not be sufficient so long as the major users of computers in Japan are doing the marunage to the developers. And I believe Erlang/OTP is not for people who are unwilling to take the risk of their own computer systems. On the other hand, for those who want to maintain a system themselves, or at least to eagerly, deliberately, and willingly take responsibility for running it without major outages, Erlang/OTP will be a great tool, because it provides critical and essential functions such as non-stop module replacement.

Paolo – As many other Erlang gurus out there, you are very active not only when it comes to promoting new Erlang applications but also when Erlang newbies ask for support or suggestions. In your opinion, what are the factors that make the Erlang community so nice?

Kenji – I was very much impressed by the friendly environment of the erlang-questions mailing list and the modest attitude of the experienced, community-driving people there when I first asked some questions. I just read and read and read everything in the Erlang-related mailing lists, as much as I could. Erlang Workshop papers were also an excellent source of information. And now GitHub is full of good code, including OTP itself. So we have many, many more things ready to learn, for free!

I’ve heard that one of the old sayings in the Erlang community is “no prima donna allowed”. This is so important for maintaining a community. I understand everybody wants to get grumpy sometimes, and quite often flame wars occur, but many people just endure and keep silent. I respect this rather European, or even Swedish, way of keeping chaos at bay 🙂

Paolo – I think that the Erlang community is growing fast: many applications, conferences and new books. Yet most developers out there don’t know that behind many of the tools they use every day there is a piece of Erlang. How would you explain that?

Kenji – I think this is in fact a very good thing. People want to solve their own problems with whatever tools they have to use, or think suitable to use. Erlang has flexible release tools which spare the users of a package from having to think about installing Erlang/OTP itself. In many popular applications, the Erlang virtual machine and the necessary libraries are silently built in, and most people don’t care whether the software uses Erlang/OTP or not, so long as it works. Erlang/OTP has become a part of the infrastructural ecosystem.

Of course, there is a strong negative side of this trend too: developers are doing the marunage, with the omakase attitude, to the developers of those infrastructural tools, with no knowledge about the tools. I try not to fall into this trap by building all the user-land programs, kernels, and Port programs of my FreeBSD development servers myself, as I have for at least the past ten years. You have to think about the bugs when you build your own tools, and this is a very good way to learn new things. And you need to force yourself to do so frequently.

Paolo – OK, Kenji. Many thanks for the interview!

Kenji – You’re welcome!

My thoughts about: “Erlang by Example with Cesarini and Thompson”

June 14, 2013 1 comment

In my previous post about TDD and Erlang I listed some ways to improve both coding proficiency and Erlang knowledge. This week I would like to share my opinion about the series of videos kindly provided to me by O’Reilly: “Erlang by Example with Cesarini and Thompson”.

The series is composed of 8 chapters (even though I would prefer to call them “lectures”):

  • Introduction
  • Basic Erlang
  • Sequential Programming
  • Concurrent Programming I
  • Concurrent Programming II
  • Process Error Handling
  • Mobile Frequency Server I
  • Mobile Frequency Server II

I guess that many of you (especially the non-Erlangers) are now wondering: “What are the topics in detail? Who is the target audience? Should I buy the videos instead of a normal book? Are these videos really so good?” Well, let me answer that in the rest of this post.

What are the topics in detail? – As you may notice from the list above, the video lectures start with the basics (data types, variables, pattern matching, etc.). After that you will learn things mostly related to sequential programming and concurrency. A good point of these videos is that you end up with something real: a simple client-server application handling mobile frequencies. Notice that in the list above there is no reference to OTP: in fact you won’t learn about OTP here, but I believe many Erlangers are right when they say: “Learn with ‘normal’ Erlang and code your application using OTP”.

Who is the target audience? – Good question. Are you new to Erlang development? If so, buy these videos: you will learn a lot, and quickly. On the other hand, I think many experienced Erlang developers should also take a look at these lessons, not only to review the basic concepts but also to hear the considerations and suggestions of two of the most respected Erlangers out there. Sure, if you know Erlang very well you will skip some parts, but you will still enjoy the lectures as a whole.

Should I buy the videos instead of a normal book? – No, don’t do it! Let me be clear here: you won’t learn Erlang just by watching these videos. I believe they should be considered a wonderful complement to what you read in a real book. As I wrote above, you will benefit from the discussion between the authors, but I must say that nothing beats a good old detailed book (especially a paper one). So my advice is to select one of the many Erlang books out there, read it, and complete the study chapter by chapter using these videos.

Are the videos really so good? – Yes they are, both in content quality and video quality. I must admit that every time I see content authored by Francesco Cesarini and Simon Thompson I feel at ease. My first Erlang book ever was “Erlang Programming”, and since then Francesco and Simon have never let me down. I believe this is mostly due to their great experience in teaching and consulting: they know what to say, when to say it, and how to say it. The quality of the videos is great too! They are 1280 × 720 and last on average ~15 minutes, which is perfect because you never get bored or tired while watching them. I should point out that I read some reviews complaining about codec problems, but before writing this post I tried the videos on Ubuntu, Mac, Windows 7 and iPad and didn’t notice any kind of problem.

That’s all folks! Now it’s up to you: are you going to buy these videos???


Improving your Erlang programming skills doing katas

June 10, 2013 7 comments

There is one sure thing about programming: you should try to improve your set of skills regularly. There are several ways to achieve this: reading books and blogs, working on your own pet project and doing pair programming are all very good examples, but today I want to introduce you to the code kata. What is a kata? Well, since you ask, you won’t mind if I digress for a while first!

What is a kata?

In Japanese, the word kata describes choreographed patterns of movements, practised solo or sometimes with a partner. Kata are especially used in martial arts because they represent a systematic way of teaching and practising, rather than a clumsy, individual one. If the concept of kata is still not clear (shame on me!) you just need to watch the movie Karate Kid again: for the whole movie Miyagi San teaches Daniel LaRusso the importance of kata, and we know that Miyagi San is always right!

The basic concept behind kata is fairly simple: by practicing in a repetitive manner we acquire the ability to execute movements without hesitation and to adapt them to different situations without any fear. Pretty cool, huh?

Coming back to the good old world of software developers (and especially Erlang ones), we may ask ourselves: “how can we apply the concept of kata to our daily routine?”. David Thomas (one of the authors of “The Pragmatic Programmer”) introduced the concept of the Code Kata: a programming exercise useful for improving our knowledge and skills through practice and repetition. The interesting point of code kata is that the proposed exercises are usually easy and can be implemented in a step-by-step fashion.

Let’s do kata together, Daniel san!

The kata I will show you today is FizzBuzz. In this post we will focus only on the initial parts of stage 1: the rules can be found at Coding Dojo, but I will rewrite them here for the lazy ones 🙂

  • write a program that prints the numbers from 1 to 100
  • for the multiples of three print “Fizz” instead of the number
  • for the multiples of five print “Buzz” instead of the number
  • for numbers which are multiples of both three and five print “FizzBuzz”

I will solve the FizzBuzz kata using TDD, which means I will follow this pattern while coding:

  1. write a test using EUnit
  2. run the test -> it fails
  3. write the code to make the test pass (dumb solution)
  4. run the test -> it passes
  5. refactor
  6. go to step 1

Let’s start coding then!

In this post I will write the tests and the logic in the same file, even though I know this is not good practice. Remember: you should never mix logic and tests in the same file. Usually in Erlang we create a test directory at the same level as ebin and src and save all our tests there. However, this is somewhat out of the scope of this article, which is about kata, and I want to use only one Erlang file, so forgive me for this horrible sin and let me start with our kata!
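For reference, the separate-file convention I just mentioned typically looks like this (the module and directory names here are only illustrative): the logic lives in src/fizzbuzz.erl, while the tests go in a companion module under test/, which EUnit associates automatically thanks to the "_tests" suffix:

```erlang
%% test/fizzbuzz_tests.erl -- kept outside src/.
%% Because of the "_tests" suffix, running eunit:test(fizzbuzz)
%% also picks up the tests defined in this module.
-module(fizzbuzz_tests).
-include_lib("eunit/include/eunit.hrl").

%% The tests call the logic module through its public API.
normal_number_test() ->
    ?assertEqual("2", fizzbuzz:evaluate(2)).
```

With this layout the production module stays free of test code and of the eunit header.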

Let’s do kata together, Daniel san! (This time for real)

Let’s start by writing a first EUnit test which ensures that given a number we return that number as a string:


-module(fizzbuzz).
-include_lib("eunit/include/eunit.hrl").

%% Test that given a number we return that number as a string
normal_number_test() ->
    ?assertEqual("2", evaluate(2)).

Let’s watch our test fail with pleasure now:

1> c(fizzbuzz).
2> eunit:test(fizzbuzz).
fizzbuzz: normal_number_test (module 'fizzbuzz')...*failed*

  Failed: 1.  Skipped: 0.  Passed: 0.

Remember: a failing test is good news here because, as Kent Beck says, “failure is progress”. Our first test is failing for an obvious reason: we don’t have a function evaluate/1 in our module. So we can start by coding a dumb implementation of evaluate/1 that makes our test pass.



-module(fizzbuzz).
-include_lib("eunit/include/eunit.hrl").

%% Dumb implementation: always return "2"
evaluate(_Num) ->
    "2".

%% Test that given a number we return that number as a string
normal_number_test() ->
    ?assertEqual("2", evaluate(2)).

Let’s test it:

3> c(fizzbuzz).
4> eunit:test(fizzbuzz).
  Test passed.

Success! Our test is passing, but the implementation is pretty naive. What if we focus on refactoring then? We can change the function evaluate/1 as follows:

evaluate(Num) ->
    integer_to_list(Num).

Let’s see if our test is still passing after the refactoring:

5> c(fizzbuzz).
6> eunit:test(fizzbuzz).
  Test passed.

Good job! We have our first function and it passes the test. What about adding new functionality to our code? No, wait! The real question is: what about writing a new test? Here we are:

%% Test that given a number divisible by 3 we return the string "Fizz"
divisible_by_3_test() ->
    ?assertEqual("Fizz", evaluate(3)).

Let’s recompile and test:

7> c(fizzbuzz).
8> eunit:test(fizzbuzz).
fizzbuzz: divisible_by_3_test...*failed*
                     {expression,"evaluate ( 3 )"},

  Failed: 1.  Skipped: 0.  Passed: 1.

As expected the test is failing; in fact, we didn’t add any new functionality to our code, so a failure is still good news. Let’s make the test pass then!

This is the code you may end up with after implementing the aforesaid “Fizz” functionality. I believe this time we can skip the dumb solution and provide the real one directly:



-module(fizzbuzz).
-include_lib("eunit/include/eunit.hrl").

evaluate(Num) when Num rem 3 =:= 0 ->
    "Fizz";
evaluate(Num) ->
    integer_to_list(Num).

%% Test that given a number we return that number as a string
normal_number_test() ->
    ?assertEqual("2", evaluate(2)).

%% Test that given a number divisible by 3 we return the string "Fizz"
divisible_by_3_test() ->
    ?assertEqual("Fizz", evaluate(3)).

Let’s now test it:

9> c(fizzbuzz).
10> eunit:test(fizzbuzz).
  All 2 tests passed.


Our code behaves pretty well, right? At this point we should do some refactoring (of both code and tests) and then add tests and implementations for the Buzz and FizzBuzz cases. Moreover, we should add a function that prints all the numbers from 1 to 100 on screen, but I will leave all this to you as homework, for a couple of reasons:

  1. I don’t like writing blog posts too long, they tend to make my readers take a nap
  2. I don’t like writing blog posts with wall of code either, they tend to make my blog uglier 
  3. I guess you may want to try solving this kata by yourself 🙂 

This is all I have to say about the FizzBuzz kata…or not?  Well, I can add some useful information here:

An interview with Stavros Aronis about #erlang and Dialyzer

May 31, 2013 1 comment

What’s up, Erlang addicts? Here is another interview you may find interesting.

In the post I prepared for you today you will learn something about Dialyzer from Stavros Aronis. Stavros is a PhD student at Uppsala University and will be one of the speakers at the upcoming EUC 2013. His talk will be “Parallel Erlang – Speed beyond Concurrency”. Hope you will enjoy it!

Dialyzer is your friend!


Paolo – Hi Stavros. Thanks for accepting my interview. Would you like to introduce yourself to our readers?

Stavros – Hi Paolo! Thank you for the invitation. I am Stavros Aronis, I come from Greece and I am currently a PhD student in the IT department of the Uppsala University.

Paolo – Your experience with Erlang started in Greece. Can you tell us something about your first projects with Erlang?

Stavros – I didn’t know anything about the language until the final years of my studies at the National Technical University of Athens (NTUA). I was looking for an interesting topic for my diploma thesis and I turned for suggestions to Kostis Sagonas, who was then the head teacher of the Programming Languages courses there. I don’t think that I need to introduce Kostis here, as he is evidently popular in the Erlang community! At that time, Kostis had quite a few projects for diploma thesis students (I was working together with the students who developed the 0.1 versions of PropEr and Concuerror) and I decided to work on Dialyzer. My very first task was to implement support for the callback attributes and add them to the behaviors in the OTP distribution. After that I worked for a while on extending Dialyzer’s detection of race conditions to work on code that uses behaviors, but then I changed my focus and worked on enhancing Dialyzer’s type inference algorithm so that it could detect errors that were not possible to catch before.

Paolo – Currently you are a PhD student at Uppsala University, a place widely known by Erlangers. Can you give us some insight about the researches you do there?

Stavros – My current research is on Concuerror, a tool for exploring all the possible ways that the processes of an Erlang program can be interleaved during scheduling. In the simplest terms, you give a test to Concuerror and it returns to you a scheduling scenario that makes one of your processes throw an uncaught exception or leads your processes to a state where they are all waiting for some message and no progress can be made. If Concuerror is unable to do either, then there is no possible scheduling of your test that can lead to these generally undesired states. Kostis’ presentation in the Erlang User Conference 2013 will be on Concuerror.

There are two more PhD students working on Erlang in our group at Uppsala University, David Klaftenegger and Kjell Winblad. They will also be at EUC’13, presenting their research on improving the concurrent performance of ETS tables.

Paolo – At Erlang User Conference 2013 you will give a talk about the parallel use of Erlang and the tool Dialyzer. Can you provide a brief description of your talk? Why should we parallelize Dialyzer?

Stavros – My talk will be about my experience of parallelizing Dialyzer, work that was included in OTP R15B02. It was a task I had already wanted to work on in Greece, since for the evaluation of my diploma thesis I wanted to run Dialyzer on the entire Erlang/OTP codebase to see whether I would catch any new errors. The extension I was developing made Dialyzer quite a bit slower (this is, by the way, the reason it has not yet been included in OTP), so having to run it twice to compare results was already time-consuming. I remember my frustration back then, watching only one of the processors of my dual-core laptop do all the work while the other idled! With Erlang having such wonderful support for concurrency, it was obvious that parallelizing Dialyzer should not be a very challenging task. The real story, of course, was a little different, with some interesting twists which I want to share with the other participants of the conference.

Paolo – Who should follow your talk and why? Is the talk only for experienced Erlang developers?

Stavros – Not at all! The talk has no real requirements, other than elementary understanding of Erlang’s concurrency primitives (spawn, send, receive and inserts/lookups on public ets tables). I want to show how easy it is using these primitives to parallelize an algorithm in a very natural way and what unique caveats you may run into.

Paolo – When should we use Dialyzer?

Stavros – At the very least, every time you want to commit changes on any Erlang project! Dialyzer is (famously) a totally automatic, “push button” tool, that you configure easily, just once, with the code that you depend on and trust to be correct, and then you are good to go. It is not a coincidence that celebrated members of the Erlang community, like Loic Hoguin, require that any contributions to their projects produce no Dialyzer errors. Dialyzer will catch many of the errors that you would otherwise have to write tests for (e.g. an erroneous call to a function in a rarely reached path in your code) and will also catch many that you could not detect otherwise (e.g. dead code, wrong specs).
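As a small illustration (a made-up module, not something from the interview), this is the kind of spec/code mismatch that Dialyzer flags automatically, with no test needing to be written:

```erlang
-module(bad_spec).
-export([double/1]).

%% The spec promises a binary, but the body returns an integer.
%% Running Dialyzer on the compiled module reports the invalid
%% type specification: the success typing is (integer()) -> integer().
-spec double(integer()) -> binary().
double(X) ->
    X * 2.
```

Any caller that trusts the spec and treats the result as a binary would also be flagged, which is exactly the class of error that rarely-reached code paths hide from tests.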

Paolo – How much time/effort can we now expect when running Dialyzer?

Stavros – As I said, Dialyzer is fully automatic so it shouldn’t really require any effort to use. You just give your modules as input and you get warnings that are ‘always right’. Dialyzer will never produce a warning if there is no real issue. Fixing the problem is the only part where you really spend effort.

Regarding the time it takes, Dialyzer’s analysis is very much dependent on the structure of your code. On our most difficult real test, analyzing the entire OTP distribution, using the parallel version we managed to bring execution time from 1 hour 20 minutes down to 6 minutes (13x faster) on a 32 core machine. Scalability is very good on desktop machines as well: the aforementioned test scales linearly up to 4 cores.

Paolo – Last question: what do you see in the future of Erlang and Dialyzer? Is it possible to improve what we have now and if so how?

Stavros – The recent research in our group has shown that there is still plenty of room for improvements in the performance of the Erlang VM on big multicore machines. Operations on ETS tables are already being optimized and we plan to target the schedulers next. Uppsala University is a partner of the RELEASE project, which aims to improve the scalability of the Erlang VM on thousands of cores. Ericsson is also a partner of RELEASE, so the OTP team is also contributing significantly with improvements in other parts of the VM.

Dialyzer has currently no major updates scheduled. However, I will at some point find some time to update and include my diploma thesis contributions in the OTP, so that Dialyzer will be able to catch even more errors, before you have to really think about them!

An interview with Steve Vinoski (@stevevinoski)

May 14, 2013 4 comments

Today you can read my interview with Steve Vinoski, a famous Erlang developer/speaker and distributed systems expert. Steve will give the talk “Addressing Network Congestion in Riak Clusters” at Erlang User Conference 2013.

Some questions, some answers

Paolo – Hi Steve! It’s really good to have one of the most famous Erlangers here on my blog. Would you mind introducing yourself to our readers in a few words?

Steve – I’m Steve Vinoski, a member of the architecture group at Basho Technologies, the makers of Riak and RiakCS. I have a background in middleware and distributed systems, and have been an Erlang user since 2006.

Paolo – I know you are an expert in several programming languages. How did you end up using Erlang? Did you have any previous experience with functional languages?

Steve – As far as functional languages go, I’ve played with them on and off for decades, but never used one in production until I found Erlang.

I worked in middleware from 1991 to 2007, and in 2004 at IONA Technologies I started looking into innovative ways of expanding our product line and reducing the cost of product development. IONA’s products were written in C++, which I’ve used since 1988 and so I am well aware of its complexity, and Java, which frankly I’ve never liked (I like the JVM but don’t like the Java language). Neither language lends itself to rapid development or easy maintenance. I built a prototype that layered Ruby over one of our C++ products that allowed for an order of magnitude decrease in the number of lines of code required to write client applications, and built another prototype that provided a JavaScript layer for writing server applications, but customers didn’t seem interested, and both approaches only increased development and maintenance costs.

Then I found Erlang/OTP. I grew more and more intrigued as I discovered that it already provided numerous features that we had spent years developing and maintaining in our middleware systems, things like internode messaging, node monitoring, naming and discovery, portability across multiple network stacks, logging, tracing, etc. Not only did it provide all the features we needed, but its features were much more powerful and elegant. I put together a proposal for the IONA executive team that suggested we rebuild all of our product servers in Erlang so we could reduce maintenance costs, but the proposal was rejected because, as I later learned, they were trying to sell the company so it didn’t make sense to make such large changes to the code. I left IONA and joined Verivue, where we built video delivery hardware, and there I trained seven or eight other developers in Erlang and we used it to great advantage. After Verivue, I wanted to continue working with Erlang, which is part of the reason I joined Basho.

Paolo – In your blog you state that Erlang is your favourite programming language. Why?

Steve – To me Erlang/OTP is the type of system my middleware colleagues and I spent years trying to create. It’s got so many things a distributed systems developer wants: easy access to networking, libraries and utilities that make interacting with distributed nodes straightforward, wonderful concurrency support, all the upgrading and reliability capabilities, and the Erlang language itself is sort of a “distributed systems DSL” whose elegance and small size make it easy to learn and use, so you can quickly become productive building distributed applications. And as if that’s not enough, the Erlang community is great, pleasantly supporting each other and newcomers while avoiding the pointless arguments and rivalries you find in other communities. My use of other programming languages has actually decreased in recent years due primarily to my continued satisfaction with Erlang/OTP — it’s not great for every problem, but it’s fantastic for the types of problems I tend to work on.

Paolo – I know that in a previous working experience you had to deal with multimedia systems, a field where Erlang still has a minor impact compared to languages like C++. Do you think Erlang will be able to find its place in this field as well? Can you give reasons for your answer?

Steve – Erlang/OTP is excellent for server systems in general, including multimedia servers. The Verivue system I worked on a few years ago had special TCP offload hardware for video delivery, so we didn’t need Erlang for that. Rather, we used Erlang for the control plane, which for example handled incoming client requests, looked up subscriber details in databases, and interacted with the hardware to set up multimedia data flows. Multimedia systems also have to integrate with billing systems, monitoring systems, and hardware from other vendors, and Erlang shines there as well, especially when it comes to finding bugs in the other systems and hot-loading code to compensate for those bugs. Customers tend to love you when you can quickly turn around fixes like that.

Another Erlang developer, Max Lapshin, built and supports erlyvideo, which seems to work well. I’ve never met Max but I know he faced some challenges along the way, as we did at Verivue, but I think he’s generally happy with how erlyvideo has turned out.

Paolo – Currently you are working at Basho, a very important company in the Erlang world. Do you mind telling our readers something more about your job?

Steve – At Basho I work in CTO Justin Sheehy’s architecture group. It’s a broad role with a lot of freedom to speak at and attend conferences and meetups, and I also work on research projects and pick up development tasks and projects from our Engineering team and Professional Services team when they need my help.

Paolo – At Erlang User Conference 2013 you will give a talk about Riak, its behaviour under extreme loads and the issues we may face when we want to scale it. Can you tell us something more about the topic?

Steve – At Basho we’re fortunate to have customers who continually push the boundaries of Riak’s comfort zone. Network traffic in Riak all goes over TCP — client requests, intracluster messages, and distributed Erlang communication. When clusters are extremely busy with client requests and transfer of data and messages between nodes, under certain conditions network throughput can drop significantly and messages can be lost, including messages intended for client applications. I am currently investigating the use of alternative network protocols to see if they can help prioritize different kinds of network traffic. This work is not yet finished, so my talk will give an overview of the problems along with the current status of the solution I’m investigating.

Paolo – I heard that during the talk you will also introduce a new Erlang network driver that should tackle some of these issues. Is this correct? Can you give us an insight?

Steve – Yes, I have been working on a new network driver. It implements an alternative UDP-based protocol for data transfer that can utilize full bandwidth when available but can also watch for congestion on network switches and quickly back off when detected. It also yields to TCP traffic under congestion conditions, preventing background data transfer tasks from shutting out more important messages like client requests and responses.

Paolo – Who should be interested in this talk? What are the minimum requisites needed in order to fully understand the topics of the talk?

Steve – Attendees should have a high-level understanding of Erlang’s architecture, what drivers are, and how they fit into the system. Other than that, my talk will explain in detail the problems I’m trying to address as well as the solution I’ve been investigating, so neither deep networking expertise nor deep understanding of Erlang internals is required.

Paolo – I can say without doubt that you are an expert in middleware and distributed computing systems. Can you suggest some books or internet resources to our readers interested in those topics?

Steve – The nice thing about distributed systems is that they never seem to get any easier, so there has been interesting research and development in this area for decades. The downside of that is that there are an enormous number of papers I could point to. In no particular order, here are some interesting papers and articles, most of which are currently sitting open in my browser tabs:

“Eventual Consistency Today: Limitations, Extensions, and Beyond”, Peter Bailis, Ali Ghodsi. This article provides an excellent description of eventual consistency and recent work on eventually consistent systems.

“A comprehensive study of Convergent and Commutative Replicated Data Types”, M. Shapiro, N. Preguiça, C. Baquero, M. Zawirski. This paper explores and details data types that work well for applications built on eventually consistent systems.

“Notes on Distributed Systems for Young Bloods”, J. Hodges. This excellent blog post succinctly summarizes the past few decades of distributed systems research and discoveries, and also explains some implementation concerns we’ve learned along the way to keep in mind when building distributed applications.

“Impossibility of Distributed Consensus with One Faulty Process”, M. Fischer, N. Lynch, M. Paterson. This paper is nearly 30 years old but is critical to understanding fundamental properties of distributed systems.

“Dynamo: Amazon’s Highly Available Key-value Store”, G. DeCandia, et al. A classic paper detailing trade-offs for high availability distributed systems.

Paolo – Day by day, Erlang becomes more popular. In your opinion, what can we expect from Erlang in the future? What are the next goals the Erlang community should try to reach?

Steve – Under the guidance of Ericsson’s OTP team and with valuable input from the open source community, Erlang/OTP continues to evolve gracefully to address production systems. I expect Erlang will continue to improve as a language and platform for building large-scale systems that perform well and are relatively easy to understand, reason about, and maintain without requiring an army of developers. In particular I’m looking forward to the OTP team’s continued work on optimizing multicore Erlang process scheduling. The Erlang community is very good at proving how good Erlang/OTP is through the results of the systems they build, so they need to keep doing that to broaden Erlang’s appeal. If you’re a developer building practical open source or commercial software, the presentations given by community members at events like the Erlang User Conference and the Erlang Factory conferences are amazing sources of knowledge and wisdom for what works well for Erlang/OTP applications and what can be problematic.

Erlang Camp 2013 is coming!

May 6, 2013 Leave a comment

Amsterdam: beautiful city of bicycles, canals and….. Erlang!

Nothing to do on Aug 30-31, 2013? What about travelling to the lovely city of Amsterdam and attending Erlang Camp 2013?

If you have been following my blog for a while you should already know what Erlang Camp is: an intensive two-day learning experience focused on getting you up to speed on creating large-scale, fault-tolerant distributed applications in Erlang.

In particular, during Erlang Camp 2013, which is generously sponsored by the amazing company SpilGames, you will get in touch with several Erlang topics, such as:

  • Erlang basics
  • Erlang OTP
  • How to ship your Erlang code using applications and releases
  • Erlang Distribution

More information on the Erlang Camp schedule may be found on this web page.

Erlang Camp is a pretty good way to learn the Erlang language and to get in touch with some of the best Erlang teachers and developers out there. Knowing that only 100 seats are available and that they will go quickly, I suggest you hurry and register for the event!