Wednesday, October 05, 2016

RubyConf 2015 - Everything You Know About the GIL is Wrong by Jerry D'Antonio

gonna go ahead and get started.
First off, thank you everybody for being here.
My joke a minute ago aside,
it's always really nice to have people actually come by
and want to listen to what you have to say,
especially when you've not actually given this talk before.
This is my first time giving this talk.
I think it's got some very interesting stuff in it.
Hopefully people learn a few things from it.
The obviously inflammatory title,
which was clearly intended to (chuckles) make a statement
is "Everything You Know About the GIL is Wrong."
I'm sure there's at least one person in the room
sitting over there who knows way more
about the GIL than I do, but for the most part
a lot of people don't understand this particular thing,
so we're gonna talk about it.
My name is Jerry D'Antonio.
I'm from Akron, Ohio.
How many people here have ever heard of Akron?
Okay.
Now how many people have ever heard of Lebron James?
Right? (chuckles)
Local kid, went to the high school
just down the street from me, pretty good at basketball.
I work for Test Double.
Many of you have probably heard of Test Double.
Some of you probably went and saw
Justin's talk yesterday morning.
"How to Stop Hating Your," excuse me,
"How to Stop Hating Your Test Week."
Justin's one of the founders of Test Double.
Of course, we're a consulting company out of Columbus, Ohio,
and I work there.
Beyond that, the probably most relevant thing about me
with respect to this particular conversation
is I created this thing here.
It is a gem called Concurrent Ruby.
Who's heard of this?
Anybody? Just out of curiosity.
Okay, cool. Cool.
So Concurrent Ruby is a Ruby gem
that is intended to provide a
suite of concurrency tools for Ruby
to extend our options in building concurrent applications.
Concurrent Ruby is being used by a number of projects,
some of these you may have heard of.
Oh, Rails.
Sidekiq, Logstash, Dynflow, Volt, Hamster, Pakyow,
Microsoft Azure uses it in their cloud.
I know Sucker Punch is considering using it.
It's really humbling to see these projects on the list
saying that they're using our work.
But, there's a sad, unfortunate truth to all this
and that is that I've actually been wasting my time,
that this whole thing about trying
to build a concurrency gem for Ruby is just a fool's errand.
It's a complete and total absolute waste of time, why?
Because everybody knows, let's say it,
that Ruby can't do concurrency, right?
Raise your hand if you've ever heard somebody
say Ruby can't do concurrency.
All right, so let's just get that out of the way.
For those of you who have not heard it,
clearly you don't have Twitter accounts
because, you know, if you follow Twitter
you will find that apparently Ruby cannot do concurrency,
and if I heard it on the internet it clearly must be true.
So being that I know how to Google
and I know how to use the internet thing,
I thought before I give this presentation about the GIL,
how about I actually look up a few factoids about the GIL?
That'd be fun, right?
So let's talk about what this thing, the GIL, is.
I did some Googling and I came up with a couple of factoids.
According to the internet, here are a few things.
First, Ruby has this thing called a Global Interpreter Lock,
also called a Global Virtual Machine Lock or GVL.
Right? Has everybody heard that before?
Here's a couple other factoids I picked up
on the internet about the GIL.
The GIL is soulless, heartless, and pure evil.
The GIL hates you and it wants you, personally,
to be miserable.
The GIL eats babies for breakfast,
no, seriously, I read this.
It eats babies for breakfast, kittens for dessert,
and puppies for midnight snack.
I'm pretty sure that the GIL is the sole cause
of climate change and I also think somewhere
I saw that if there was no GIL there would be no war.
(audience laughing)
So pretty much, you guys have all heard those?
Let's just, for fun, I mean you're here,
you're already sitting down.
I've got, like, you know, 40 more minutes
so let's take a look at some code.
All right, so we're gonna look at some code.
This is a quick sample program.
Hopefully, everybody can see this all right.
This is sort of one of my go-to examples
for showing concurrency stuff.
What we're gonna do is basically
I'm going to go out and hit an API
and I'm going the...
(audience member talking indistinctly)
Um, I can't because this is actually PowerPoint,
and I apologize for that.
Normally, when I do this color scheme it shows very well
and so I'm sorry it's not.
So I will try and explain it the best I can
and I'll put this up on the web later on.
So I apologize for that.
What I'm gonna do here is I'm gonna go out
and hit Yahoo!'s Finance API.
And I'm going to, I picked 20 stock ticker symbols.
I got those from Bloomberg.
I had to pick 20 and they had this list of
here's 20 that really did well.
So I'm gonna go and I'm gonna hit this Yahoo! Finance API.
I'm gonna bring back, for all 20 of these ticker symbols,
I'm gonna bring back the data and I'm gonna pull out of that
what the stock price was at the end of 2014.
It's just an arbitrary thing.
It doesn't have any real meaning.
So my top function is called 'Get Year-End Closing'
and it just does that, it's just that.
Then I've got a function called
'Serial Get Year-End Closing.'
What I'm gonna do with that is,
in there I'm gonna take that list of stock ticker symbols,
I'm gonna iterate over that using the collect method,
and I'm going to retrieve those one at a time
and put those in an array.
So at the end of this I'll have an array
with 20 prices that I pulled from that API.
The next method is called 'Concurrent Get Year-End Closing.'
I'm gonna do the same thing but in this case,
rather than doing it serially, I'm gonna do it concurrently.
That thing I'm using is called a 'Future.'
It's from the Concurrent Ruby library.
This is not meant to be a sales pitch for Concurrent Ruby,
but I can do that in one line of code
and it's very easy to read.
What's happening is I'm gonna do this thing
called 'Concurrent Future.'
I'm gonna fire this thing off and say, "Go get this thing."
I'm gonna fire 20 of these things off.
They're gonna go onto background threads
that run on a thread pool.
What's gonna happen is I'm gonna
collect up those future objects.
They're stateful objects.
They will have their state updated when the task is done.
Then I'm gonna do another 'collect' statement
to go and actually retrieve those and get the array.
So at the end of that I will have the same array
that I'll have for the serial one, right?
One extra line of code in order to do this,
but I'm gonna do it concurrently.
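(The slide code isn't captured in this transcript, so here's a minimal sketch of what's being described. It uses the real Concurrent::Future API from the concurrent-ruby gem, but the endpoint URL, the JSON parsing, and the method names are placeholders, not the actual slide code.)

```ruby
require 'concurrent'
require 'net/http'
require 'json'

# Hypothetical endpoint standing in for the Yahoo! Finance API call.
def get_year_end_closing(symbol, year)
  uri  = URI("https://finance.example.com/quote/#{symbol}?year=#{year}")
  data = JSON.parse(Net::HTTP.get(uri))
  data['close'].to_f
end

# Serial: fetch the 20 prices one at a time.
def serial_get_year_end_closings(symbols, year)
  symbols.collect { |symbol| get_year_end_closing(symbol, year) }
end

# Concurrent: fire off 20 futures onto a thread pool, then collect
# each future's value (value blocks until that future is fulfilled).
def concurrent_get_year_end_closings(symbols, year)
  futures = symbols.collect do |symbol|
    Concurrent::Future.execute { get_year_end_closing(symbol, year) }
  end
  futures.collect(&:value)
end
```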
The second part of the script
is going to be some benchmarking.
Has anybody here used the benchmark before?
It's really cool, right?
For those of you that haven't,
what it's gonna do is this particular one,
Benchmark.bmbm, will actually do a rehearsal phase.
It'll run a bunch of the things I give it.
It'll then determine how many times it has to do that
in order to get adequate data.
It'll then run it again and it'll give me the output
and I can compare these things.
So I'm gonna compare the execution of the serial method
to the execution of the concurrent method.
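(A sketch of that comparison, assuming the methods above; Benchmark.bmbm is the standard-library call that does the rehearsal pass he's describing, and the symbols here are placeholders for the 20 from Bloomberg.)

```ruby
require 'benchmark'

# 20 ticker symbols were used in the talk; these are placeholders.
symbols = %w[AAPL MSFT GOOG AMZN] * 5

Benchmark.bmbm do |x|
  x.report('serial:')     { serial_get_year_end_closings(symbols, 2014) }
  x.report('concurrent:') { concurrent_get_year_end_closings(symbols, 2014) }
end
```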
Now don't say anything, but think to yourself right now
what you should expect to see.
Because as we know, Ruby can't do concurrency.
Ruby has a GIL. It's a lock.
It prevents us from doing
anything really fun and interesting
and having nice lives.
So what we should expect here is that
despite the fact that in the second case
I'm gonna fire off all these things
asynchronously on a thread pool,
that the amount of time it takes
for each of these should be the same.
Is that reasonable?
I do it serially and it takes a certain amount of time,
then I do it concurrently because,
since we can't do concurrency in Ruby,
it's gonna take the same amount of time, right?
That's to be expected.
So just because we're here, we've got the time,
let's run that and see what happens.
So when you look at the output of this,
clearly you can see right there
that it took about four seconds to do that serially
and it took about four seconds to do that concurrently
because Ruby can't do concurrency, right?
Does everybody see that?
Does everybody see that on there?
Raise your hand if that's not what you see in that output.
All right, I should see every hand in the room raised.
Okay.
So what happened here is
it took roughly four seconds to do that serially
and then concurrently it took less than .3 seconds.
It took about 1/10 of the time.
Clearly something is wrong with my test, right?
So just for fun, let's compare this same thing to
I don't know, runtimes that can actually do concurrency
like, I don't know, JRuby.
Let's run that same thing on JRuby.
This is JRuby 9.0.1.0.
That took about four seconds, which, serially,
that makes sense, right?
It's still Ruby code. It's still I/O.
But it actually took over a second to do it concurrently.
Hmm. Interesting.
Well, what about Rubinius?
Rubinius, that runs in LLVM.
It doesn't have a GIL.
It has, like, you know,
it should be able to do similar, right?
Well, it took that about four seconds to do it serially.
Okay, that makes sense.
That seems pretty consistent.
But it took that about .4 seconds to do that concurrently.
So MRI Ruby took
as little time or less time
to do that concurrently than the two runtimes
that are actually supposed to be good at that.
And they are good at that, don't get me wrong,
but from what I saw on the internet through all those tweets
was that MRI Ruby was not actually any good at that
and yet in this particular case it seemed to do okay.
Apparently, the internet lied to me
and I did not see that coming.
Actually, I did see it coming. It's all right.
I know what's going on.
But let me ask you this question.
Be honest with me.
Was there anybody here that was surprised to see
that MRI Ruby was able to perform that concurrently
that fast?
Anybody? Anybody?
So thank you very much for being honest.
But let's actually talk about why that is
because how many people,
let me ask you this question.
How many people would like to see
a 10 times performance improvement in their applications?
I think everybody in the room should say that.
Let's explain why that is because that really goes
against the storyline that we're hearing all the time.
Let's get into that, explain why that is,
and why we are able to do that.
I've got a lot of stuff in here.
I'm gonna try and go through it
as reasonable a pace as possible,
but I may have to do a Justin Searls impression
and fly really quickly.
We're gonna start by talking about the obligatory
'Concurrency vs. Parallelism' talk.
How many people, raise your hands,
have ever been subjected to
a 'Concurrency vs. Parallelism' talk or blog post?
Okay, so.
You can see what's coming.
Too long, didn't read, concurrency is not parallelism.
For the next few slides I'm gonna
channel a gentleman named Rob Pike.
Rob Pike is, among other things,
one of the creators of the Go programming language.
Go, as many of you might know,
has built in some very, very fast
and efficient concurrency mechanisms
and so he's actually been going around
over the past few years and giving a lot of talks
of this thing 'concurrency vs. parallelism'
and they're actually very, very good.
He doesn't get into a lot of Go in them.
He talks about these things conceptually.
So I'm gonna reference him a lot over the next few slides
because he's done a really good job of that.
So I'm quoting him here.
In fact, this presentation is called
'Concurrency is not Parallelism.'
Concurrency.
Programming as the composition
of independently executing processes.
Think about that code we saw.
We fired off each of those futures
as an independently executing process
and then we composed them into an application
that did something useful.
Now parallelism is the simultaneous execution
of possibly related computations.
Simultaneous execution.
Concurrency is not necessarily about
simultaneous execution.
Parallelism is.
Concurrency is about dealing with lots of things at once.
We did that. We sent off 20 futures.
That is lots of things at once.
Parallelism is about doing lots of things at once.
Or, to put it in my terms,
parallelism requires two processor cores.
It requires it.
If I only have one processor core, I cannot do parallelism.
Because a processor can only handle
one instruction at a time.
So if I have only one processor, I do not have parallelism.
Maybe I have concurrency, but no parallelism.
However, concurrency can be done on one
or more processor cores, right?
So concurrency is really about design.
Concurrency is this idea that I'm going to design my program
around these independently executing things,
these things that don't have to be serialized.
If I get improved performance as a side effect,
it's a good side effect, it's a desirable side effect,
but it is in fact a side effect.
Really, concurrency is about that design.
So here's the thing.
Non-concurrent programs gain no benefit
from running on multiple processors.
No benefit at all.
If I do not write my code concurrently,
I can run on as many processors as I want.
It's not gonna get faster.
But if I write my programs concurrently,
then when I do have parallelism available,
I will get a benefit.
This is sort of the point about concurrency vs. parallelism.
If I program my things concurrently,
at worst I get no benefit.
If I program them concurrently,
at best I get a huge benefit.
If I don't write them concurrently,
it doesn't matter how many processors you're running on.
So let's talk about the GIL.
That's sort of some background, let's talk about the GIL.
The 'L' in GIL stands for lock.
Lock, in computer science terms,
is I have a resource and I want to protect it
from multiple threads accessing it at once,
so I create a lock.
It's a very, very common thing.
Basically, what happens is a thread
wants to gain access to a resource,
and that resource is locked.
So the thread asks, "Can I get the lock?"
If the lock is available, it gets it.
The lock is now acquired and it accesses that resource.
If it's not available, then what happens is that thread
will normally block and wait for it to become available.
Yes, you can make non-blocking calls
and say, "Please, can I get the lock?
and it may say no and you can move on,
but generally speaking you block
waiting for the lock to become available.
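(In Ruby terms, both styles look like this; a minimal sketch using the standard-library Mutex.)

```ruby
lock = Mutex.new

# Blocking: wait until the lock becomes available.
lock.synchronize do
  # ... access the shared resource ...
end

# Non-blocking: try_lock returns false instead of waiting.
if lock.try_lock
  begin
    # ... access the shared resource ...
  ensure
    lock.unlock
  end
else
  # Couldn't get the lock; move on and do something else.
end
```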
So let's talk about what threads are.
Thread, according to Wikipedia, a thread of execution
is the smallest sequence of programmed instructions
that can be managed independently by a scheduler.
What does that mean?
It means I can have multiple sequences of instructions
that are running potentially simultaneously,
potentially not, and I have a scheduler that manages those.
Normally, that's gonna happen inside the operating system.
All modern operating systems provide threads
and they have a scheduler
and the operating system manages those threads
across all of your processes.
Every application has at least one thread.
It may spawn more,
and the scheduler in the operating system will manage that.
The number of threads you have running at any given time
may vastly exceed the number of processors you have.
For example, earlier I pulled up this
on my particular MacBook Pro.
If you look down at the righthand corner.
Has anybody ever looked at the thread count
running on their system at any given time?
There were over 1,000 threads active on this system
when I actually took this screen cap.
Dropbox, in the background, is running 60 threads.
What the hell does Dropbox need 60 threads for?
I don't know. It's a cool application.
We'll let it have it,
but there is over 1,000 threads running right now.
I'm pretty sure you already know
that my MacBook Pro does not have 1,000 processors.
So clearly we have more threads than there are processor cores.
More about threads.
Many programming languages like Ruby and Java,
they actually map language constructs
to these operating system threads.
Ruby does that.
If I tell you to do thread.new in Ruby,
it creates an operating system thread.
Java, same thing. C Sharp, same thing.
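(To make that concrete, this is the real Thread API; the body is just an illustration.)

```ruby
# In MRI 1.9+, each Ruby Thread is backed by a native OS thread.
thread = Thread.new do
  puts "hello from #{Thread.current.inspect}"
end
thread.join # wait for the thread to finish
```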
Some languages actually don't do that.
Languages like Erlang and Go, they actually
create their own internal concurrency mechanism
and they manage the multiplexing
across operating system threads internally.
In fact, if you read about a Go routine
in the official GoDocs it says,
"The runtime multiplexes these things
"across multiple operating system threads."
But you always have to have a scheduler.
The scheduler is going to be there.
So regardless of the language, however,
we still have threads in the OS.
Even Erlang and Go and languages like that
still have to manage to deal with operating system threads.
They're gonna be there.
Within the operating system itself
what's gonna happen is the OS and the scheduler
is going to schedule different threads
within the operating system across the different processors
that are available.
Remember, each core can only run one thread at a time.
So whenever the scheduler takes a thread
away from the processor and gives it another one
we have a context switch.
Basically, you can take the execution context
off of the processor, we put another execution context on,
and that happens.
When you have 1,000 threads running
in your system at one time
you have context switches happening all the time,
probably more than 1,000 per second.
Most likely several thousand per second.
So no programming language could ever really
prevent the context switches from the operating system.
The operating system does
what the operating system wants to do.
We can give it hints.
We can ask it to treat us nicely and give us favors,
but ultimately the operating system decides
when it's going to context switch our stuff out.
We have no control over that at all.
Let's get back to the GIL.
Because we are going to have context switches
within the operating system and because
that means our code is gonna get pulled off the processor,
every language must, within its runtime,
whether it be an interpreted runtime like Ruby,
or it's a compiled runtime in a language like Go,
everything must have, within the language itself,
the ability to protect itself from those context switches
and make sure that when those context switches occur
that the runtime itself maintains
a consistent internal state.
Some languages try and aim for
having one thread per process
and they do the switching
internally themselves, others don't.
But ultimately this is going to happen either
in your runtime, or in the operating system, or both.
We're gonna have this happen.
Ruby uses the GIL to protect its internal state
across those OS context switches.
This is important.
Ruby uses the GIL to protect its internal state.
Let's go into very, very simplified description
of what the GIL does.
After I pitched this talk and gave it this title,
somebody a couple weeks ago pointed out
some really, really great blog posts by Jesse Storimer
called 'Nobody Understands the GIL.'
Those are fantastic blog posts
and I highly recommend you read those
because he's got some really great stuff.
Read the comments.
There's really great discussion.
I'm gonna do a simplified version of what he talks about
because he goes into it much more deeply
and I'm gonna cover a bunch of different topics.
Basically, what happens is Thread A is doing some work.
Ruby locks the GIL when Thread A is doing that work
'cause it needs to protect its internal state.
At some point, the operating system pulls Thread A away,
throws in Thread B, which is another Ruby thread.
At that point, Thread B says, "Oh, wait.
"I can't access the lock," because it's been locked.
So it says to the OS, "Thank you very much,
"but I'm sort of not able to do something
"so why don't you let somebody else work."
Eventually, Thread A gets switched back in
and Thread A does its thing and it releases the lock
and eventually Thread B gets switched in
and now the GIL has not been locked
and Thread A can rock, or it should be Thread B can rock.
So the idea is
those context switches are happening all the time.
Whenever a thread needs to do something,
it locks the GIL and unlocks it when it's done.
So we still have those context switches,
but a lot of those context switches end up
in these sort of no-op operations
where I just can't get the GIL.
So what does that mean?
This is highly simplified, but it has some implications.
So what does that really mean?
The implications of this.
Only one unit of Ruby code can run at a given time.
You've heard this before, right?
I'm gonna say 'unit' here
and I'm gonna put it in quotes because we get into
where the context switch boundaries can be
and methods vs. various other things.
It gets really complicated,
but we're just gonna take a unit of Ruby code.
One unit of pure Ruby code can execute at any given time.
We may have multiple threads running.
We may have these context switches.
But one has the lock, the other ones don't.
If a context switch occurs,
what's gonna happen is you end up
with this no-op operation where one says,
"I can't do anything so I can't get the lock,"
which in effect means we don't get parallelism.
Because only one thread can have a lock.
But, because we've done this, Ruby can guarantee
that its internal state always is consistent
and is not corrupted and is not broken
and that protects our programs
because Ruby, as a runtime, needs to do that.
But what Ruby does not do and what the GIL does not do
is provide us guarantees about our code.
Here's your obligatory word definition.
What is a guarantee?
A guarantee is a formal promise or assurance,
typically in writing, that certain conditions
will be fulfilled.
It's a promise, it's usually in writing,
that says these things will happen.
The verb of that is to actually provide that promise.
Guarantees, what is guaranteed and what is not guaranteed?
Ruby is what we call a 'shared memory' language.
What that means is,
I'm just gonna do a little bit of Ruby 101 here,
every variable is a reference.
It is basically a pointer to an area of memory
where an object lives.
If I have a variable called 'A'
and I assign to it a string
that string is somewhere in memory
and A, as a reference, points to that.
If I say Variable B equals Variable A,
I'm saying Variable B points to the same area of memory.
They're two different variables,
but they both point to that shared memory.
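(As a short illustration, not from the slides:)

```ruby
a = "jerry"  # a references a mutable String somewhere in memory
b = a        # b now references the *same* String, not a copy
b.upcase!    # mutate it through one reference...
puts a       # => "JERRY" -- the change is visible through the other
```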
So that means that two variables
may reference the same point of memory
and two threads that have access to variables
that store that reference can access
the same shared memory simultaneously.
Right?
I'm gonna show us some code examples
that are contrived, that will demonstrate this
for people who may not know how this works
because this is gonna become very important later on,
but I just wanna give some examples.
Basically, here I've got just a thread, two threads.
I'm gonna create this string called 'Jerry.'
It's a mutable string.
'Str' points to it, that is a string in memory somewhere.
I'm gonna run these two threads
and when these two threads work
they're gonna randomly sleep for periods of time.
They're both going to mutate that string in place.
I'm gonna call 'upcase bang' in one
and 'downcase bang' in the other
and that's gonna mutate those strings in place.
Then at the end I'm gonna use a join
so that I wait for those two threads to finish
and we're gonna see what happens.
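(A sketch of that contrived example as described; the sleep ranges and iteration counts are guesses, not the slide's exact numbers.)

```ruby
str = 'Jerry'

t1 = Thread.new do
  3.times do
    sleep(rand(0.1..0.5)) # sleep a random amount of time...
    str.upcase!           # ...then mutate the shared string in place
    puts "t1: #{str}"
  end
end

t2 = Thread.new do
  3.times do
    sleep(rand(0.1..0.5))
    str.downcase!
    puts "t2: #{str}"
  end
end

[t1, t2].each(&:join) # wait for both threads to finish
puts "final: #{str}"
```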
So I ran this a couple times and luckily
I was able to get very, very different results both times.
The first time, 'cause it's a shared memory system,
so the first time it ended up coming out,
first of all the order of operations is different.
In both cases, I got something different at the end.
Here's the two questions we wanna explore.
Was this code thread safe
and was this code logically correct?
Think about that for a second.
Is this code thread safe and is it logically correct?
I'll give you a hint. It's not logically correct.
Let's talk about correctness vs. safeness.
In a fully parallel, shared memory system
it is possible for two threads to access
the same memory simultaneously.
That is an unsafe operation.
We can't have that happen.
That causes corruption.
In a concurrent, shared memory system,
meaning one where we only have one processor,
it's still possible for a context switch to occur
while a thread is in the process
of performing complex memory altering operations.
Not necessarily a single write,
but in this case where I did a read
and then I did a write later,
we're talking about a complex series of operations.
There's no way I can prevent any context switching
from happening in there.
So the ordering of these operations
and their timing becomes very, very important.
Another contrived example.
Basically the same thing, but in this case what I'm doing
is I'm actually duplicating the string.
In the first case, I'm calling 'dup'
so I now have a copy of the string.
I'm changing the copy of the string
and then I'm writing back to the original string
the data that I've changed.
So in the first case,
I'm replacing the lowercase R's with uppercase R's
and then I'm using a gsub bang to write it back in.
In the second case,
I'm changing the uppercase J to lowercase J
and I'm writing it back in.
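(A sketch of that second contrived example; the exact slide code isn't captured, so the gsub! write-back shown here is one plausible reading of the description.)

```ruby
str = 'Jerry'

t1 = Thread.new do
  copy = str.dup                 # read: take a private copy
  upcased = copy.gsub('r', 'R')  # update the copy
  sleep(rand(0.1..0.3))          # a context switch can happen in here
  str.gsub!(copy, upcased)       # write back based on the (possibly stale) read
end

t2 = Thread.new do
  copy = str.dup
  downcased = copy.gsub('J', 'j')
  sleep(rand(0.1..0.3))
  str.gsub!(copy, downcased)
end

[t1, t2].each(&:join)
puts str # if one thread's read went stale, its write-back is lost or wrong
```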
Again, is this thread safe?
Is it logically correct?
We run this and what we see is we
get some very, very interesting results.
This code was not logically correct.
This is what we call a 'read update' error.
What it means is I read something
and then performed some operations based upon that,
but while I was doing that,
the thing that I read was updated,
which means that my data is stale
and my computations are calculating a poor result.
So then I try and write it back.
I've now written back an incorrect result
because it didn't take into account the update.
It's a very common concurrency problem.
It's called a 'read update' error.
It is not logically correct.
But is it thread safe?
Keep in mind those are not the same things.
The answer is yes, in this particular case,
it was thread safe, but only by accident.
Ruby is an interpreted language.
Did anybody sit through Aaron Patterson's
presentation yesterday where he talked
about everything that was going on inside?
Right, okay.
Good, 'cause we're gonna reference that
'cause he did a really fantastic job
of bringing home the point that our Ruby code,
which looks like stuff that's actually happening,
really is not actually happening.
What happens is Ruby is a compiled, interpreted language
so internally Ruby takes our code
and it turns that into bytecode within the interpreter.
Ruby is free to optimize and reorder that code.
Aaron talked about some of the optimization that'll happen.
The bytecode instructions are not necessarily
directly mapped to our code.
They produce the intended result,
but they may be reordered or they may be
optimized.
Keep in mind, Ruby itself is a program.
It is written in C. It runs on our system.
It is compiled by a C compiler.
So everything that happens in Ruby is written in C code.
That C code then gets compiled as well,
and those compilers are free to optimize
and reorder those things when they
create the machine code that runs.
I'm running a Mac. You may be running Linux.
There's Windows.
The actual binary that gets created from those
is gonna be different in those cases
so there's no guarantee that the actual order
of machine code operations end up being the same.
All of this stuff happens underneath our code.
So we see a single line of code that says,
"Variable STR equals Jerry."
It's like, that's just a simple atomic operation
creating this thing, right?
Really it's not.
Because there's all this stuff going on
and there's all potential reordering
and there's a lot of potential optimization
and we can't look at that and say
that is an atomic operation
because there could be all kinds of context switches
within our code that's happening, right?
So what's going on is the GIL exists
to protect Ruby's internal state
so when all that stuff is going on
throughout that entire stack underneath our code,
Ruby maintains its internal consistency.
That's very important for us.
So, what that means is Ruby itself is thread safe,
but there are no guarantees that our code is thread safe.
The GIL does not exist to make our code thread safe.
It exists to make Ruby thread safe.
So what happens is the GIL prevents any kind
of interleaved access to memory that's used by the runtime
in the internal consistent state.
So Ruby itself will never become corrupt,
but Ruby does not provide any guarantees
for our code being thread safe.
How many people have heard of 'memory model' before?
Memory model? Okay, good.
Memory model. What is a memory model?
A memory model describes the interactions
of threads with the shared use of the data.
A memory model is a written description that says
when certain things happen to data variables,
things in memory, the runtime is gonna make certain promises
and guarantees about how that works.
It defines things like visibility, volatility, atomicity,
synchronization barriers, all this deep, gory stuff.
I'm not gonna go into these details.
Petr Chalupa, who is visiting us
all the way from the Czech Republic,
who is a major contributor to Concurrent Ruby,
he's speaking this afternoon and he's gonna talk about
memory models and a lot of other things related to that
and getting the internal details of the synchronization
so if this part of this really is interesting to you,
I highly recommend going to his presentation.
It's going to be fascinating.
But the point of this is some languages
have a documented memory model.
Java's initial memory model was not considered sufficient
for concurrent programs, so a new one had to be created.
The current Java memory model
was not put in place until 2004.
2004, we had multiprocessor systems in 2004.
It took Java until then to come up with a memory model
that's considered sufficient for concurrency
and parallelism.
C and C++ did not get a formal
memory model until 2011.
Think about that.
Those languages have been around for a long time
and they did not get a formal memory model until 2011.
So this next slide is not meant to be a slam
or anything negative.
It's a statement of fact that Ruby, currently,
does not have a documented memory model
because when Ruby was created,
concurrency was not a top concern
and so a memory model was not documented.
To this day, we do not have a documented memory model.
However, because the GIL does what it does,
it presents an implied memory model.
Because when we look at what the GIL does,
we can make some predictions about how that's going to work
with respect to all of these things in memory,
which is why that code we were looking at before
was accidentally thread safe.
But remember this is not a documented memory model.
There are no guarantees.
At any given push to master, that behavior could change.
Moreover, the Ruby specification does not cover this.
That's why JRuby and Rubinius can pass
all of the Ruby spec tests and not have a GIL.
Because that doesn't cover a memory model.
Okay. Are we with me so far?
Okay, so let's get back to this
'cause this was where we started out
and this is where I said, "Gee, look at that.
"Apparently there's something wrong with that statement
"that Ruby can't do concurrency
"because we got a 10x performance increase
"by writing that code concurrently."
So how did that happen?
There's this thing where sometimes your program can do
a lot of stuff without actually doing anything at all.
Do we have any node programmers in here?
Anybody familiar with node?
(chuckles) Knew that hand was gonna go up.
Let's talk about I/O, input/output.
Modern computers support both blocking
and asynchronous I/O.
Blocking I/O means I'm gonna make an I/O call,
some sort of read/write out to the network,
out to the file, whatever,
and it's gonna block my current thread.
We also have this idea of asynchronous I/O,
which is I'm gonna do that but it's not gonna
block the current thread,
it's gonna go off and do it asynchronously
and the current thread can keep doing stuff
and come back to it.
Node has made this idea of asynchronous I/O
a very, very popular thing.
This is what node does, right?
Here's an important thing.
This is what it comes down to.
This is how we're gonna explain that first example,
is that I/O in Ruby programs is blocking,
but I/O within Ruby is asynchronous.
I'll say that again.
I/O in Ruby programs is blocking.
If I make a I/O call from a Ruby thread,
it will block my thread.
However, when Ruby internally makes the I/O call,
Ruby unlocks the GIL.
Ruby unlocks the GIL, as it does also
for backticks and system calls, right?
What that means is that if I have a Ruby thread
that is waiting on I/O,
it does not block other Ruby threads
from doing important stuff.
This is what this all comes down to
and this is what happened in that original example.
In Node.js, we have one main thread
and all of the I/O is asynchronous,
so I basically can just fire off all these callbacks
and I get deep into callback hell
and that works because the runtime itself
was designed around this event loop.
Ruby doesn't have an event loop.
We have a main thread,
so when we do I/O on the main thread, it's going to block.
But if we spawn multiple threads to do our I/O,
those threads will not lock the GIL,
they can all be waiting simultaneously on that I/O,
and we can actually have other threads doing something useful.
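(A minimal sketch of that pattern, using only the standard library; the URL is a placeholder.)

```ruby
require 'net/http'

uris = Array.new(5) { URI('https://example.com/') }

threads = uris.map do |uri|
  Thread.new do
    # While this thread blocks on the HTTP read, MRI releases the GIL,
    # so the other threads are free to issue their requests too.
    Net::HTTP.get(uri)
  end
end

threads.each(&:join) # wait for all of the requests to complete
```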
Remember, the GIL exists to maintain
the internal consistency of the Ruby runtime.
Right? Just keeping back to it.
I/O operations are slow.
This is why we have asynchronous I/O.
We don't wanna have to block everything from happening.
When a Ruby thread is blocking waiting for I/O,
it doesn't have to change the internal state of Ruby.
It's not doing anything. It's asleep.
Ruby knows it's doing nothing.
The operating system knows it's doing nothing.
It's just sitting there.
So it can't make any changes to Ruby internally.
It's blocked.
So there's really no value in us, at that point,
locking the GIL and not letting another thread
do something useful while we're blocked.
So what this means is that Ruby programs,
like the one I started this presentation with,
that do a significant amount of I/O
generally benefit from concurrency.
I showed you an example where we got
a 10 times performance increase from that.
So what do we mean by I/O?
We're talking about whenever we read or write files
such as log files.
We're talking about when we interact with databases.
We're talking about when we listen
for inbound network connections like, I don't know,
let's say if we happen to want
to build a web application framework out of Ruby
or we want to connect external HTTP APIs
'cause, I don't know, maybe we want
to do some API stuff within our web application.
Or maybe we just want to send emails to,
I don't know, when people sign up for applications.
Raise your hand if these are the kinds of things
you do in your code all the time.
I should see every hand in this room go up, right?
What we're seeing here is that
the way Ruby works internally with respect to I/O
actually works very well for the kinds of things
we do all the time.
So if you're doing all of these things,
which everybody is always doing,
then your program may benefit from concurrency
like we saw in the first example where we got
a 10x performance increase by doing that concurrently.
So here's my question.
Where's the love? Where's the love?
Why all the hate? Right?
Why is it I started out this presentation
with all those tweets beating up on Ruby
for sucking at concurrency
when we just saw that actually Ruby
can do fairly well at concurrency for the stuff that we do.
How many people here were, yesterday,
at the game show that happened right after lunch?
Okay, perfect.
People who were there were very, very smart,
very talented, very accomplished people.
I have a lot of respect for them.
All four people on the panel,
I have a lot of respect for all of them
and they did something yesterday
that I thought was very fantastic.
One of the words that came up was concurrency.
You remember that, right? Concurrency came up.
Two of the three people who responded to that question
started out by saying the same thing.
I see some people smiling.
You know what their answer was.
What was their answer?
"I don't know much about concurrency."
It took a lot of courage and a lot of humility
for those people who were sitting up there,
people who are very, very accomplished in our community,
who were streaming live all over the world,
to say, "You know what?
"I just don't know much about this topic."
I love that. I thought that was great.
Because, really, being humble
and saying I don't know something
is way better than running off to Twitter,
running off at the mouth about things
you don't know anything about,
which clearly is what Twitter was invented for.
(audience clapping)
Part of the reason why we don't have,
for the lack of love for Ruby with concurrency,
we just have a lack of knowledge, right?
But it's not a problem.
It's not a bad thing, right?
'Cause if you think about it, most of us
don't write code that requires concurrency, right?
We don't.
If we're using frameworks like Rails,
it handles the concurrency for us.
Rails is thread safe.
It can run on single-threaded web servers.
it can run on multi-threaded web servers.
If you're a twelve-factor kind of person,
you can run that on multiple processors
and that handles that stuff for us.
Also, a lot of the domains where concurrency is necessary,
where high performance parallelism is necessary,
are domains that also need highly performant languages.
Look, we all love Ruby, but we love Ruby
because it makes us productive
not because it's the fastest language, right?
There's a lot of stuff going on
and all the stuff that makes us productive
makes Ruby not so fast, and that's okay.
There's nothing wrong with that.
It does good by us.
But the people who really need
that high performance parallelism
are probably writing in languages
that are much faster anyway,
'cause that's what they need.
So they're not coming into Ruby for those things.
Basically, learning about concurrency
is something that most of us, as Rubyists,
just don't have to do, right?
We just don't have to do it, so we don't know about it.
That's part of the reason why there's so little love
for Ruby with respect to concurrency
'cause we just don't know and we don't have to know.
But there's a couple other reasons, two other reasons.
Another one is that, when it comes to concurrency,
Ruby's not perfect.
So I made a case for how Ruby is actually not as bad
at concurrency as we might have otherwise thought.
That's not the same thing as being perfect.
If we're gonna be perfectly honest,
Ruby is great at concurrent I/O, as we saw,
but not so much for processor intensive operations.
Remember, the GIL prevents full parallelism.
So some programs, if written concurrently,
will simply not gain a benefit.
This is very similar to the first example we saw,
but instead of going out and hitting that web API
and doing all of that I/O,
what I'm doing here is I've basically written
this very slow and very naive summation operation.
I'm gonna say we're gonna do, we're gonna be,
we're gonna have a count,
which is gonna be the number of times we're gonna do this.
I'm creating an array of one million
randomly generated numbers.
What I'm gonna do is then the sum operation
is literally just going to walk over the entire thing
and it is going to add up all those numbers.
It's a very naive brute force summation operation.
It's gonna work.
It's not very efficient, but for the purposes of this,
it'll demonstrate what I need to show.
The first time, I'm gonna do it serially.
I'm just gonna do it 10 times, one at a time.
The other one I'm gonna do the same thing I did before.
I'm gonna fire off 10 futures.
I'm gonna have each one of those happen on a future,
and it's gonna happen asynchronously.
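(Again, a sketch of what's being described, assuming concurrent-ruby; the counts come from the talk, the rest is illustrative.)

```ruby
require 'concurrent'

count   = 10
numbers = Array.new(1_000_000) { rand(1_000) }

# Deliberately naive, processor-bound summation.
slow_sum = lambda do |nums|
  total = 0
  nums.each { |n| total += n }
  total
end

# Serial: run the summation ten times, one at a time.
count.times { slow_sum.call(numbers) }

# Concurrent: fire off ten futures, then block on each value.
futures = Array.new(count) do
  Concurrent::Future.execute { slow_sum.call(numbers) }
end
futures.each(&:value)
```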
So what do we expect to see when we run this in MRI?
Well, this time we're gonna see that both operations
take about the same amount of time.
The GIL prevents full parallelism
because only one unit of Ruby code can run at a time.
So in this case, we have no benefit,
but it didn't hurt us either.
Didn't go any slower, did not hurt us either.
Remember, concurrency is about design
and about breaking down my application
into those independent parts.
Now let's say I run this on JRuby.
What happens here?
I run it on JRuby, and it took about half the time to run
on JRuby by running concurrently than it did serially.
If I run it on Rubinius, guess what?
It takes about half the time to run
on Rubinius concurrently than it did serially.
What does this imply to you?
I have two cores in my machine
and those two runtimes got full parallelism,
which meant it took about half the time.
But on MRI, which has the GIL, we didn't get that advantage.
Again, I'm gonna be honest.
Ruby is very, very good at concurrency
when we're doing a lot of I/O,
not so much when it's processor intensive.
So the third reason, the last reason,
why I think Ruby gets a lot of hate about concurrency
is sort of, really, the lack of tools.
If you look in the Ruby standard library,
we've got some basic concurrency tools.
Thread, fiber, mutex, condition variable.
It's not a lot.
If you look at Java, we've got java.util.concurrent.
It's got those things, but it's also got futures
and executors and it's got exchangers
and it's got schedule tasks and timer tasks,
all these really, really cool things
that I can do with my applications.
Go, we've got goroutines, channels, tickers,
timers, mutexes, atomic variables.
Clojure, futures, promises, delays, refs,
atoms, agents, core-async library.
Erlang, we have gen_servers, gen_events, gen_fsms,
spawned processes, we have messages and all that stuff.
In Scala we've got executors, futures, promises.
We have Akka library for actors.
All of these languages provide tools that allow us
to build the concurrent systems very easily.
Ruby doesn't have that.
But think about Ruby as a language.
Why do we love Ruby? It makes us productive.
If we look in the standard library,
there are so many great things that we can do
in the standard library with very little code.
How many people saw Aja Hammerly's keynote yesterday?
Where she used tuple space and, in a couple lines of code,
built this hugely powerful distributed system
across Google's cloud platform
by taking advantage of stuff that's in the standard library.
Ruby's got this really awesome standard library
for everything but concurrency.
So really what it comes down to is
Ruby concurrency needs two things.
It needs better tools and a better publicist.
Like I said before, this presentation
is not intended to be a sales pitch for Concurrent Ruby,
but there was actually a sort of subversive reason
why I used future in those.
'Cause I wanna make this point about having better tools.
Notice what I did there.
I used that future construct in all of those programs,
ran it against all three Ruby runtimes,
and I compared the benchmark results.
Isn't that the way it's supposed to work?
Should I not be able to build concurrent systems
without having to know all
of this gory inner working of the GIL,
knowing when I can do this and when I can't,
knowing all the runtimes.
Look, when I've given presentations before,
I've asked this trick question.
I've said, "How many people can tell me,
"in a Ruby array, how many elements
"get pre-allocated when I create an array?"
The answer is two parts.
The first part is it depends on the runtime.
The second part is I don't care.
I don't care how many elements.
What I expect is I want a really awesome thing called array
and it does all the cool stuff that Ruby does.
I want it to have the exact same behavior on all runtimes
and I wanna know that each core team
is optimizing the bajeezus out of that thing
for that runtime.
When we deal with concurrency in Ruby,
we don't really have that 'cause we have
to have these conversations about the GIL and what it does.
But if you have good tools like the one I showed you before,
and there's others.
I'm kind of partial to Concurrent Ruby,
but there are plenty of other gems and libraries out there
that provide these tools.
Then I can stop worrying about this stuff
and do what Ruby is good at,
which is allow me to build really cool applications.
And that's the thing.
So we need better tools and a better publicist.
Myself and others are working,
and I know Matz is working on building better tools,
but now that you know, now that you've seen this,
this is the call to action.
I would like each and every one of you
to help become Ruby's better publicist.
Please, get this message out that Ruby
is not as bad at this as many people think,
and in fact Ruby is pretty good at the things
that we do all the time.
Let's get that message out.
I'm just about out of time, so let me summarize quickly.
Conclusion.
Concurrency is not parallelism.
Concurrency is not parallelism.
Concurrency is not parallelism.
If I had a fricking nickel for every time
somebody on Twitter got this wrong (chuckles)
All right, but that's an error.
It's not the same thing.
Secondly, the GIL is intended
to protect Ruby's internal state
when the operating system context switches.
It does not provide thread safe guarantees to our code,
however it does provide an implicit memory model.
Do not write your concurrent code
with the assumption that that memory model is valid
because it could change at any moment.
The GIL does actually prevent true parallelism in MRI Ruby
because, you know, the GIL.
But Ruby is actually pretty good at multiplexing our threads
when we're doing a lot of concurrent I/O.
Since that's what we do a lot of, and that's really
our bread and butter in Ruby,
that actually is a pretty good thing.
It's not really bad.
It's not perfect.
But at the same time it helps us
do the things that we do better
and we should really understand that better
and respect that better and take advantage of that more.
Finally, my parting message is basically this.
Keep calm and don't sweat the GIL.
So with that, again, I just want you to remember
I work for Test Double.
And I, like I said, created this thing
called Concurrent Ruby so I do have stickers for both
which I'm happy to give away.
Also, you can find me on the Twitters at JerryDantonio,
Github, JDantonio.
Of course concurrent-ruby.com takes you to that gem.
I do work at Test Double.
As Justin said the other day, we are available for hire
and we love working with cool people building cool stuff.
JavaScript, Ruby,
Erlang, Clojure, Go, testing, concurrency, whatever.
If you like what I've talked about,
if you've liked what Justin talked about
and you wanna talk to us more, we'd love to talk to you.
Reach out to me. Reach out to Justin.
Reach out to us at Test Double, our Twitter,
and we would love to do work with you.
That being said, again, thank you very much for being here.
My name is Jerry.
Take care.