Forking and Threading in Ruby

By Marek Gierlach, 5 Oct 2015

As you probably know, Ruby has several implementations, such as MRI, JRuby, Rubinius, Opal, RubyMotion, etc., and each of them may use a different pattern of code execution. This article focuses on the first three, comparing MRI (currently the most popular implementation) with JRuby and Rubinius by running a few sample scripts that assess the suitability of forking and threading in various situations, such as processing CPU-intensive algorithms, copying files, etc.


Before you start “learning by doing”, let’s review a few basic terms.

Fork

  • is a new child process (a copy of the parent one)
  • has a new process identifier (PID)
  • has separate memory*
  • communicates with others via inter-process communication (IPC) channels like message queues, files, sockets etc.
  • continues to exist even after the parent process ends
  • is a POSIX call – works mainly on Unix platforms

Thread

  • is “only” an execution context, working within a process
  • shares all of the memory with others (by default it uses less memory than a fork)
  • communicates with others by shared memory objects
  • dies with its process
  • introduces typical multi-threading problems such as starvation, deadlocks etc.
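
The contrast between the two models can be sketched in a few lines of Ruby. In this hypothetical example, the forked child has its own copy of memory and must report back over an IPC channel (a pipe), while the thread simply writes into an object shared with its parent (note that fork is unavailable on JRuby and on Windows):

```ruby
# Fork: separate memory, so the result comes back over IPC (here, a pipe).
reader, writer = IO.pipe
pid = fork do
  reader.close
  writer.write("from child #{Process.pid}")
  writer.close
end
writer.close
message = reader.read
Process.wait(pid)

# Thread: shared memory, so the result can be written directly
# into an array owned by the parent.
results = []
Thread.new { results << "from thread" }.join

puts message       # e.g. "from child 12345"
puts results.first # "from thread"
```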

There are plenty of tools based on forks and threads that are used on a daily basis, e.g. Unicorn (forks) and Puma (threads) at the application server level, Resque (forks) and Sidekiq (threads) at the background job level, etc.

The following table presents the support for forking and threading in the major Ruby implementations.

Ruby Implementation   Forking   Threading
MRI                   Yes       Yes (limited by GIL**)
JRuby                 -         Yes
Rubinius              Yes       Yes

Two more magic words keep coming back like a boomerang in this topic – parallelism and concurrency – so we need to explain them a bit. First of all, these terms cannot be used interchangeably. In a nutshell: parallelism means two or more tasks are being processed at exactly the same time, while concurrency means two or more tasks are being processed in overlapping time periods (not necessarily simultaneously). Yes, it’s a broad explanation, but it’s good enough to help you notice the difference and understand the rest of this article.
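
A quick sketch of concurrency without parallelism: on MRI, two sleeping threads overlap their waiting periods even though the GIL prevents them from executing Ruby code at the same time, so the total elapsed time is roughly one second rather than two (the exact timing will vary, of course):

```ruby
require "benchmark"

# Two tasks processed in overlapping time periods (concurrency):
# the threads spend most of their time waiting, and those waits overlap.
elapsed = Benchmark.realtime do
  threads = 2.times.map { Thread.new { sleep 1 } }
  threads.each(&:join)
end

puts format("%.1f", elapsed) # roughly 1.0, not 2.0
```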

The following table presents the support for parallelism and concurrency.

Ruby Implementation   Parallelism (via forks)   Parallelism (via threads)   Concurrency
MRI                   Yes                       No                          Yes
JRuby                 -                         Yes                         Yes
Rubinius              Yes                       Yes (since version 2.x)     Yes

That’s the end of the theory – let’s see it in practice!

* Having separate memory doesn’t necessarily mean consuming the same amount of it as the parent process – there are memory optimization techniques. One of them is Copy on Write (CoW), which allows a parent process to share its allocated memory with a child without copying it. With CoW, additional memory is needed only when a child process modifies the shared memory. In the Ruby context, not every implementation is CoW-friendly; e.g., MRI supports it fully only since version 2.x. Before that, each fork consumed as much memory as its parent process.

** One of the biggest advantages/disadvantages of MRI (strike out the inappropriate alternative) is the usage of the GIL (Global Interpreter Lock). In a nutshell, this mechanism is responsible for synchronizing the execution of threads, which means that only one thread can execute at a time. But wait… does that mean there is no point in using threads in MRI at all? The answer comes with an understanding of GIL internals – or at least from looking at the code samples in this article.

Test Case

In order to present how forking and threading work in Ruby’s implementations, I created a simple class called Test and a few others that inherit from it. Each class has a different task to process. By default, every task runs four times in a loop, and every task is run with three types of code execution: sequential, with forks, and with threads. In addition, Benchmark.bmbm runs the block of code twice – the first time in order to get the runtime environment up and running, the second time in order to measure. All of the results presented in this article were obtained from the second run. Of course, even the bmbm method does not guarantee perfect isolation, but the differences between multiple code runs are insignificant.

require "benchmark"

class Test
  AMOUNT = 4

  def run
    Benchmark.bmbm do |b|
      b.report("sequential") { sequential }
      b.report("forking") { forking }
      b.report("threading") { threading }
    end
  end

  private

  def sequential
    AMOUNT.times { perform }
  end

  def forking
    AMOUNT.times do
      fork do
        perform
      end
    end

    Process.waitall
  rescue NotImplementedError => e
    # fork method is not available in JRuby
    puts e
  end

  def threading
    threads = []

    AMOUNT.times do
      threads << Thread.new do
        perform
      end
    end

    threads.map(&:join)
  end

  def perform
    raise "not implemented"
  end
end

Load Test

Runs calculations in a loop to generate big CPU load.

class LoadTest < Test
  def perform
    1000.times { 1000.times { 2**3**4 } }
  end
end

Let’s run it…

LoadTest.new.run

…and check the results

             MRI        JRuby      Rubinius
sequential   1.862928   2.089000   1.918873
forking      0.945018   -          1.178322
threading    1.913982   1.107000   1.213315

As you can see, the results of the sequential runs are similar. Of course, there are small differences between them, caused by the underlying implementation of the methods used in each interpreter.

Forking, in this example, gives a significant performance gain (the code runs almost two times faster).

Threading gives similar results to forking, but only on JRuby and Rubinius. Running the sample with threads on MRI consumes a bit more time than the sequential method. There are at least two reasons. Firstly, the GIL forces sequential thread execution, so in a perfect world the execution time should be the same as for the sequential run; in practice there is also time lost on GIL operations (switching between threads, etc.). Secondly, some overhead is needed to create the threads in the first place.

This example doesn’t answer the question of whether it makes sense to use threads in MRI at all. Let’s look at another one.

Snooze Test

Runs a sleep method.

class SnoozeTest < Test
  def perform
    sleep 1
  end
end

Here are the results

             MRI        JRuby      Rubinius
sequential   4.004620   4.006000   4.003186
forking      1.022066   -          1.028381
threading    1.001548   1.004000   1.003642

As you can see, each implementation gives similar results not only in the sequential and forking runs, but also in the threading ones. So, why does MRI get the same performance gain as JRuby and Rubinius? The answer lies in the implementation of sleep.

MRI’s sleep method is implemented with the rb_thread_wait_for C function, which in turn uses another one called native_sleep. Let’s take a quick look at its implementation (the code has been simplified; the original implementation can be found here):

static void
native_sleep(rb_thread_t *th, struct timeval *timeout_tv)
{
  ...

  GVL_UNLOCK_BEGIN();
  {
    // do some stuff here
  }
  GVL_UNLOCK_END();

  thread_debug("native_sleep done\n");
}

The reason this function is important is that, apart from working in the strict Ruby context, it also switches to the system context in order to perform some operations there. In situations like this, the Ruby process has nothing to do… A great example of wasted time? Not really, because the GIL says: “Nothing to do in this thread? Let’s switch to another one and come back here after a while.” This is done by unlocking and locking the GIL with the GVL_UNLOCK_BEGIN() and GVL_UNLOCK_END() functions.

The situation becomes clearer, but the sleep method is rarely useful. We need a more real-life example.

File Downloading Test

Runs a process which downloads and saves a file.

require "net/http"

class DownloadFileTest < Test
  def perform
    Net::HTTP.get("upload.wikimedia.org", "/wikipedia/commons/thumb/7/73/Ruby_logo.svg/2000px-Ruby_logo.svg.png")
  end
end

There is no need to comment on the following results – they are pretty similar to those from the example above.

             MRI        JRuby      Rubinius
sequential   0.327980   0.334000   0.329353
forking      0.104766   -          0.121054
threading    0.085789   0.094000   0.088490

Another good example could be the file copying process or any other I/O operation.
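
Such a file-copying task could be sketched as follows. This CopyFileTest class is hypothetical – it is not part of the article’s test suite – but it could plug into the same harness by inheriting from Test. Since FileUtils.cp spends its time in system I/O calls, where MRI releases the GIL, threading this task should show gains similar to the download test:

```ruby
require "fileutils"
require "tempfile"

class CopyFileTest # could inherit from Test to reuse the harness
  def perform
    # Create a 1 MB temporary source file...
    source = Tempfile.new("source")
    source.write("x" * (1024 * 1024))
    source.close

    # ...copy it, which is almost entirely system I/O...
    destination = "#{source.path}.copy"
    FileUtils.cp(source.path, destination)

    # ...and clean up, returning the number of bytes copied.
    copied_bytes = File.size(destination)
    File.delete(destination)
    source.unlink
    copied_bytes
  end
end

puts CopyFileTest.new.perform # 1048576
```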

Conclusions

  • Rubinius fully supports both forking and threading (since version 2.x, when the GIL was removed). Your code can be concurrent and run in parallel.
  • JRuby does a good job with threads, but doesn’t support forking at all. Parallelism and concurrency can be achieved with threads.
  • MRI supports forking, but threading is limited by the presence of the GIL. Concurrency can be achieved with threads, but only when the running code leaves the Ruby interpreter context (e.g. I/O operations, kernel functions). There is no way to achieve parallelism with threads.

About the author

Marek Gierlach
