
Ruby+OMR JIT Compiler: What's next?

At the beginning of February 2017, I was fortunate enough to attend FOSDEM to speak at the Ruby Devroom. I gave a talk there called Highly Surmountable Challenges in Ruby+OMR JIT Compilation (view the slides). This blog post is a modified version of that talk.

The Eclipse OMR team originally announced the Ruby+OMR project at RubyKaigi 2015 — it feels unbelievable that that was only 13 months ago! So, what's next for the Ruby+OMR JIT compiler? Well, first things first, let’s look at where we are now.

Status update

Our team has been incredibly busy over the past 13 months. Here’s a quick rundown of what we’ve been doing:

Our JIT integration in 2.4 has been heavily focused on functional correctness. Our aim has been to pass all the built-in Ruby tests at all optimization levels, as well as ruby/spec. We’re getting there!

Today, I would describe what we have as an excellent foundation to build on.

Goal: Help deliver Ruby 3×3

The Eclipse OMR team’s goal is to be part of delivering Ruby 3×3. There are a couple of reasons for this. First and foremost, we think that Ruby will really benefit from having a JIT compiler, and suspect that achieving 3×3 (outside of parallelism) may require a JIT compiler. The OMR team thinks that our compiler technology is well suited to speeding up Ruby.

Of course, having Ruby 3×3 be achieved using OMR isn’t a selfless act. Designating our compiler as the JIT compiler for CRuby would be a validation of the OMR approach, and would help feed improvements back into the OMR community, which includes IBM products like the IBM SDK for Java.

Competition + collaboration = “coopetition”

Recently, Chris Seaton, Ruby expert at Oracle Labs, posted this tweet:

I know of two people working on secret Ruby JITs for MRI

— Chris Seaton (@ChrisGSeaton) December 20, 2016

This is super interesting! From the Ruby+OMR perspective, we think having multiple JIT compilers for CRuby (MRI) could be a really good thing. Competition helps drive improvement. And perhaps even more interesting is the opportunity for collaboration.

Someone asked Chris a follow-up question after his first tweet, and we completely agree with his response. When I think of where we are with Ruby+OMR, the picture is this:

The Ruby+OMR team has built a JIT foundation, but now we need to start adding some functionality to the VM in order to better exploit the technology.

Information interfaces

JIT compilers thrive on information. There’s a symbiotic relationship between the JIT compiler and its host VM. The JIT compiler will provide performance but depends on the VM to keep it informed. The more informed the JIT is, the higher the performance the JIT will be able to deliver. This means that one of the key elements of JIT performance is the interface between the JIT and the VM.

Infrequent event notification

There are some events that occur only infrequently in the Ruby VM. For example, redefinition of basic operations like Fixnum#+, modification of constants, or changes to the class hierarchy happen rarely, or are concentrated in a particular phase of the application’s life, such as startup.

If the JIT is made aware of when these kinds of changes occur, it can optimize based on that knowledge, specializing on its understanding of the state of the world and compensating should the world change. This lets us avoid generating code for circumstances that haven’t happened yet, improving code density and giving the optimizer more room to work. Similarly, it is possible for code to inspect the Ruby VM stack. Today, the JIT compiler compensates by making sure the VM stack is always up to date, even when it may not need to be. If the JIT were notified before something wanted to peek into the VM stack, we could restore the VM stack for those consumers while saving the work in the common case where no one is looking.
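To make this concrete, here is a small runnable illustration (my own example, not OMR code) of the kind of rare event we mean. Once something like this runs, any JIT-compiled code specialized on the built-in integer addition must be discarded or patched:

```ruby
# Illustrative only: redefining a basic operation such as Integer#+
# is exactly the kind of rare event a JIT wants to be told about.
class Integer
  alias_method :original_plus, :+

  # After this redefinition, compiled code that assumed the built-in
  # integer addition is no longer valid.
  def +(other)
    original_plus(other) # same result, but now via Ruby dispatch
  end
end

puts 1 + 2 # still 3, but reached through the redefined method
```

With a notification interface, the VM would tell the JIT at the moment this redefinition happens, rather than the JIT having to guard every compiled addition against the possibility.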

We’re tracking this work in GitHub issue #80.

Frequencies and types

When compiling, the JIT wants to know how often a particular block of bytecodes has been executed. This unlocks better optimization by letting the JIT compiler focus on code that is actually executed in practice. Similarly, if the JIT compiler knows which types are passed to a function, it can specialize the compiled body to the types seen in practice, perhaps falling back to the interpreter when another type is infrequently passed in.
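As a sketch of the kind of profile data we mean (plain Ruby of my own devising, not the actual VM interface), imagine the VM keeping an execution count and a per-class histogram for each method:

```ruby
# Hypothetical sketch of per-method profiling data a VM could hand
# to the JIT: an execution count plus the argument classes observed.
Profile = Struct.new(:count, :types) do
  def record(args)
    self.count += 1
    args.each { |a| types[a.class] = types.fetch(a.class, 0) + 1 }
  end

  # A JIT might specialize once a method is hot and has only ever
  # seen a single argument class.
  def monomorphic?
    types.size == 1
  end
end

profile = Profile.new(0, {})
1000.times { profile.record([1, 2]) } # Integer arguments only
profile.monomorphic? # => true: safe to specialize on Integer
```

With data like this, the JIT can compile the Integer fast path and leave a fallback to the interpreter for any class it has never seen.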

Improving the optimization horizon

In order to optimize code, a compiler needs to be able to see a broad view of what’s happening, so it can recognize places where optimizations can occur. In a language like Ruby, where methods are often small, an important optimization is inlining, which copies the body of a called method into the calling method. Ruby+OMR has an inliner that works for simple Ruby calls (although it’s not turned on by default). However, as pointed out by Evan Phoenix in his keynote at RubyKaigi 2015, CRuby faces a problem for JIT optimization: since so much of CRuby’s core functionality is written in C, a Ruby-level JIT compiler invariably finds itself unable to see, and therefore unable to optimize, the majority of the code actually executed.
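Inlining is easy to picture when written out by hand (this is an illustrative source-level rendition; the JIT performs the transformation on its intermediate representation, not on source):

```ruby
# What the programmer writes: a small helper called from a loop.
def double(x)
  x * 2
end

def sum_doubled(arr)
  arr.reduce(0) { |acc, n| acc + double(n) }
end

# What an inlining JIT effectively produces: the call to `double`
# replaced by its body, removing per-call dispatch overhead and
# exposing `n * 2` to further optimization.
def sum_doubled_inlined(arr)
  arr.reduce(0) { |acc, n| acc + n * 2 }
end

sum_doubled([1, 2, 3])         # => 12
sum_doubled_inlined([1, 2, 3]) # => 12
```

A call into a C-implemented core method, though, gives the inliner nothing to copy, which is exactly the horizon problem described above.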

There have been some ideas aimed at addressing this issue. For example, Rubinius was started with the philosophy of implementing as much of Ruby in Ruby as possible, to provide as broad an optimization range as possible. In Evan’s keynote, he proposed a really interesting and ambitious solution to the problem, which he called “lifting the core”: shipping Ruby with the LLVM intermediate representation of the CRuby functions, allowing LLVM JIT technology to look inside those functions and dramatically increase the optimization horizon. As far as I know, this hasn’t been attempted yet — although, if it has, I really want to see it!

The Ruby+OMR team’s current thinking on how to address this problem in CRuby is to start supporting incremental “Rubification.” What we mean by this is that CRuby should support overriding C implementations of functions in the Ruby VM with Ruby versions in a progressive fashion. This might actually be quite easy to do by taking advantage of the prelude.rb system, which is a set of Ruby code that gets compiled right into the VM and is run before any user code. We can selectively override implementations in the prelude, allowing a slow transition from C implementations to Ruby.
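As a hypothetical sketch of the idea (the method name and the choice of Array#max here are mine, not actual prelude.rb contents), a pure-Ruby stand-in for a C-implemented core method might look like this:

```ruby
# Hypothetical prelude-style override: a pure-Ruby version of a
# C-implemented core method, which a Ruby-level JIT can see into
# and inline. A real prelude override would replace #max itself;
# the separate name here is just to keep the example side-by-side.
class Array
  def max_in_ruby
    return nil if empty?
    result = first
    each { |e| result = e if e > result }
    result
  end
end

[3, 1, 4, 1, 5].max_in_ruby # => 5, same as the C-implemented #max
```

Once the method body is Ruby, it falls inside the JIT’s optimization horizon like any other Ruby code.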

There are a number of possible benefits here if we do it with care. First, the change might be done optionally; perhaps the Ruby implementations are only used when the JIT compiler is enabled. This is important while the JIT compiler is being developed, as it allows the interpreter to maintain its current speed while setting the JIT compiler up for future benefit.

Second, Ruby versions of core methods can be shared among implementations. JRuby expert Charles Nutter recently posted this:

Managed to speed up [a,b].max 30% by reimplementing it *in Ruby* and running it on my modified Graal. Things are getting interesting.

— Charles Nutter (@headius) February 13, 2017

This is the kind of benefit we expect Ruby implementations of core methods to deliver. By sharing the Ruby code, all implementations have an easier road to maintaining compatibility with each other, and the effort of writing the Ruby implementations can be shared too.

Creating a community

Eclipse OMR is still a young project. We’ve only been fully open source for a bit more than four months. We’re still working on improving our interfaces and the story we tell other projects about integration. The most important work we have to do over the next little while is to build a community. Ruby+OMR needs a community interest to succeed.

In order to get there, we as OMR and Ruby+OMR developers need to provide mentorship. We are committed to helping anyone who wants to contribute to Ruby+OMR get up and running.

This is the only way we can build a community, and we are dedicated to doing it.

Our to-do checklist

As we continue down this road, the Ruby+OMR team has a few to-dos:

  1. We need to make our mentorship commitment clear.
  2. We need to start collecting feedback from ruby-core on what we would need to do to get community members interested.
  3. We need to start prototyping more of the VM changes a JIT will need. (As an early start, I prototyped this TracePoint for basic operation redefinition.)
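The TracePoint prototype mentioned above adds a new event to the VM. As a rough, runnable approximation using only today’s stock TracePoint API, you can already watch core classes being reopened, which is one coarse signal that basic operations may be about to change:

```ruby
# Stock TracePoint can already observe class bodies being (re)opened,
# a coarse approximation of the redefinition notifications a JIT needs.
reopened = []
trace = TracePoint.new(:class) { |tp| reopened << tp.self }

trace.enable
class Integer # reopening a core class fires the :class event
end
trace.disable

reopened # => [Integer]
```

A real VM-to-JIT interface would be finer-grained than this, firing per redefined operation rather than per reopened class body, which is what the prototype explores.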

If you’re interested in Ruby+OMR, I’d like to give you a to-do list as well:

  1. Give Ruby+OMR a try!
  2. Take a look at the “Help Wanted” or “Beginner Friendly” items.
  3. Open Issues if you have them!
  4. Ask about helping! Even little things are appreciated. If anything piques your interest, feel free to ask questions.

Let’s talk

We think that the road to Ruby 3×3 is going to involve a JIT. However, we think that if OMR is adopted, the split between JIT and VM work is probably 1:2 in favour of VM work, to enable the OMR JIT technology to provide high performance. I hope that the VM work can be shared among competing JITs for CRuby, as collaboration and competition will help us all build something great for CRuby.

I’d love to hear your feedback. Leave me a comment on this page or contact me in the Eclipse OMR Slack channel.
