Squeezing the most out of the server: Erlang Profiling
NextRoll’s real-time bidding (RTB) platform has been featured several times in this tech blog: we run a fleet of Erlang applications (the bidders) that typically ranges between one and two thousand nodes. As described in past articles, an ongoing goal of the RTB team —as well as a source of interesting technical problems— is to minimize operational costs as much as possible.
An obvious way to reduce costs is to make the system more efficient, and this means entering the hazardous land of software optimization. Even for experienced programmers, identifying bottlenecks is hard enough when using the right tools; trying to guess what could make the code run faster will not only waste time but is also likely to introduce unnecessary complexity that can cause problems down the line. The cousin of premature optimization is necessary optimization done without profiling first.
While Erlang is famously known for its concurrency model and fault-tolerant design, one of its biggest strengths is the level of live inspection and tuning it offers, often with little or no setup and runtime cost. In this article, we outline how we leverage those features to profile our system, driving the optimizations that can lead to cost reductions.
Infrastructure
An interesting aspect of real-time bidding is that it is fairly low-risk to test in production. Even if the new code is slow or contains errors, the bidders are architected to just send a no-bid response whenever a request can’t be fulfilled.
Taking advantage of this, we incorporate canary deploys into our day-to-day development workflow. In the context of optimizations, this means we can quickly test a performance hypothesis by updating the code and running it against live traffic. Dashboards give us feedback on metrics like timeouts, errors and number of requests processed, making it obvious when a change is beneficial.
Request-level timers
Bid request processing is the fundamental operation of the bidder application. Any improvement in the amount of time it takes us to send a response to an ad exchange means we can process more requests per server, requiring fewer servers to handle our traffic and ultimately saving us money.
The work involved in a bid request can be broadly divided into a series of tasks such as payload parsing, selection of matching ads, pricing a particular ad, and response encoding. A common practice is to periodically measure the time invested in each of those phases to make sure they don’t degrade over time and to help provide a frame of reference to use when we look for areas of the code that are worthy of our optimization efforts.
A sample of bid request timings per phase.
The most basic way of profiling consists of timing the calls to a given piece of the code (perhaps one that was deemed suspicious by one of the methods described in the next sections):
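A minimal sketch of such a helper is shown below; the module name is made up and the logger call stands in for whatever metrics sink the application uses:

-module(bid_timing).
-export([timed/2]).

%% Run Fun, record how long it took under Name and return Fun's result
%% unchanged. In production the elapsed time would feed a metrics system
%% that computes averages and percentiles; here we simply log it.
timed(Name, Fun) ->
    {ElapsedUs, Result} = timer:tc(Fun),
    logger:info("~p took ~p us", [Name, ElapsedUs]),
    Result.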
The above helper is used to wrap calls to the function we want to measure. A canary deploy of the timed code to production will generate average, median and percentile metrics that we can then compare to the overall request time to identify bottlenecks.
recon
Timing request operations can be a very useful technique for understanding the specifics of a request flow, but it gives us a limited perspective of the entire system. Most of the bid request phases are handled in a single process, and some of them involve idle time waiting for external systems. There are many periodic tasks and long-lived support processes in the bidders, and we can benefit from system-wide profiling that looks beyond bid request processing. This is where the Erlang toolbox comes into play.
The first valuable resource is not a piece of software but a little book by Fred Hebert: Erlang in Anger. This guide is the perfect reference because it describes methods for gaining insight and optimizing production systems, backed by real-world experience. The book’s companion library, recon, provides safer, friendlier and more productive interfaces to Erlang’s powerful inspection tools. What follows are some simple examples mostly derived from the book.
As suggested by the book, a good method is to run recon:proc_window repeatedly and try to identify patterns, e.g. a process that frequently ranks among the top CPU consumers. The process id can then be passed to recon:info to get useful information (such as the stacktrace) in order to understand what the process is doing. Using this method we quickly found that a commonly accessed data structure contained ~100kb of debug data which was being copied thousands of times per second.
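A typical session looks something like this (the pid is made up):

%% top 5 processes by reductions (a rough proxy for CPU) over a 1 s window
1> recon:proc_window(reductions, 5, 1000).
%% inspect a process that keeps showing up near the top
2> recon:info(pid(0, 1234, 0)).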
This approach, though, tends to highlight long-running busy processes over short-lived ones (which may be spawned very frequently and account for more overhead overall). This can be partially overcome by running proc_window repeatedly and aggregating the results by location rather than process id. However, there are better tools to look at aggregated process times.
redbug
Strictly speaking, redbug is not a profiling tool, but it’s so useful for debugging live systems that it deserves a mention in this article. It allows you to safely trace functions from the shell in a very intuitive yet sophisticated way (as opposed to the rougher Erlang built-in tracing modules). It can be very handy to get a quick notion of how functions are being called, how frequently, and what production data looks like:
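For instance, the call below (the traced module and function are hypothetical) prints each call to a pricing function together with its arguments and return value, stopping after 10 seconds or 10 traced messages, whichever comes first:

1> redbug:start("bid_pricing:price -> return", [{time, 10000}, {msgs, 10}]).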
Including recon and redbug in an Erlang application release has no cost and can be a real life-saver when diagnosing production issues. These tools promote a flow that’s more powerful than adding prints to the code and feels more natural than step debuggers which wouldn’t be that useful in a highly concurrent world anyway.
Erlang Easy Profiling (eep)
eep allows for a more “traditional” approach to profiling by using Erlang tracing to take a snapshot of the system operation with function call counts, execution times and inter-dependencies.
It requires a bit more effort to use and is not as safe as the rest of the tools described in this article: it will slow down the system, potentially even killing it if used carelessly, and its output file can eat up a lot of disk space (a 100 ms snapshot takes about 300 MB for our system). Depending on the nature of the application, it may not make sense to run it directly in production.
Here’s an example tracing session using eep:
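Something along these lines, assuming eep is bundled in the release (the trace window is kept very short, 100 ms here):

1> eep:start_file_tracing("file_name"), timer:sleep(100), eep:stop_tracing().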
Note that we start, sleep and stop tracing in the same line. Don’t rely on the shell being responsive during tracing! You could also send a message or call a function in between to force a certain part of the code to be executed while taking the snapshot. The instructions above will output a file_name.trace file in the release directory. The file then needs to be moved off the production server and processed on a local Erlang shell:
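With eep available locally, the conversion is a single call:

1> eep:convert_tracing("file_name").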
This will, in turn, produce a callgrind.out.file_name file that can be opened with KCachegrind (QCachegrind on macOS). Note that by default the tracing will discriminate entries per process id, which would yield a similar situation to what we saw with recon:proc_window. A more interesting view is to merge the function calls of all processes, which can be accomplished by stripping the pid:
$ grep -v "^ob=" callgrind.out.file_name > callgrind.out.merged_file_name
$ qcachegrind callgrind.out.merged_file_name
QCachegrind presents the snapshot in a very sophisticated UI that can be used to spot the most frequently called functions, where most time is spent, etc.
eep output to QCachegrind.
Since eep is based on Erlang tracing, it adds overhead to all traced code and may comparatively misrepresent the work done by built-in functions (BIFs) and natively implemented functions (NIFs), so the timings shown in the snapshot need to be taken with a grain of salt. Nevertheless, it is still a great exploratory tool for understanding how the different components of the system interact, how dependencies are used, and for learning about obscure or suspicious areas that can be hard to spot by just looking at the code.
Note that there are other Erlang profiling libraries (which we haven’t tried yet) that produce callgrind output: erlgrind and looking_glass.
erlang:system_monitor
Yet another way of looking at the application is the erlang:system_monitor/2 BIF. It allows you to set up a process to receive a message every time a certain condition is met. It was particularly helpful for us when examining long garbage collections and schedules of long duration, the latter of which can surface issues with NIFs that would go unnoticed with other methods.
Here’s an example of its use in the shell (based off a snippet from Erlang in Anger):
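The following is a rough reconstruction of that idea rather than the book’s exact code: spawn a process that prints every monitor message it receives, install it as the system monitor, and tear everything down when done:

%% a process that loops forever, printing every message it receives
1> Printer = spawn(fun F() -> receive Msg -> io:format("~p~n", [Msg]), F() end end).
%% report garbage collections and schedules longer than 30 ms to Printer
2> erlang:system_monitor(Printer, [{long_gc, 30}, {long_schedule, 30}]).
%% watch for 10 seconds, then disable monitoring and stop the printer
3> timer:sleep(10000), erlang:system_monitor(undefined), exit(Printer, kill).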
This will monitor the system for 10 seconds and output process information to the shell every time there’s a garbage collection or schedule that takes more than 30ms:
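Each report arrives as a monitor tuple; for a long garbage collection it has roughly the following shape (the pid and values are illustrative, and some fields are omitted):

{monitor, <0.431.0>, long_gc, [{timeout, 42}, {heap_size, 123456}, ...]}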
Conclusion
This article is by no means an exhaustive list of Erlang diagnostic tools; there are also observer, eprof, fprof, eflame, eministat, and the list goes on. The Erlang documentation itself has a nice efficiency guide with an overview of the built-in profiling modules.
Since we started this effort, we have consistently reduced request times (and operational costs) month over month. To a large extent these gains came thanks to the advanced tools Erlang and its ecosystem have to offer. What’s most interesting is that we achieved this by getting to know our systems better, fixing bugs, and often removing rather than adding specialized code.