Wednesday, October 05, 2016

RubyConf 2015 - Using Ruby In Security Critical Applications by Tom Macklin

Welcome everybody to my talk.
Thanks for coming.
Hope everyone's having a good conference.
I know I am.
Is everybody learning a lot?
- [Voiceover] Yeah, real good. - Excellent.
I try to leave a few minutes when I do talks because
I learn so much in conferences I wanna talk about
the stuff I'm learning in other people's talks
more than what I came to talk about.
So if I get on a side note about something that I heard,
the last talk I was at was phenomenal.
But anyway, I hope you guys get a lot out of my talk today.
Before I say anything else, let me get this disclaimer
out of the way.
I work for the Naval Research Laboratory,
but my talk today is my opinions based on
my professional experience and my personal experience.
My opinions don't represent those of the U.S. Navy,
the U.S. government, anything like that.
As a matter of fact, if you do any research you'll
probably find that there's a lot of people in the government
who disagree with me on a lot of things.
Also another disclaimer.
I say 'we' a lot when I talk because I have
a really close knit team, and it's an awesome team.
And we argue about stuff, we don't always agree,
but when I say 'we', I'm not talking about big brother
or all the developers I work with.
I'm just kind of subconsciously referring to the fact that
we try to make as many decisions as we can as a team.
So I apologize in advance when I say 'we'.
So enough about that.
Little about me.
I consider myself a good programmer.
Not a great programmer, but a good programmer,
and I like to keep things simple.
I study a martial art called aikido, and in aikido we have
a lot of sayings, and one of the sayings we have is that
an advanced technique is just a simple technique done better.
And I like to apply that not just in martial arts,
but in all aspects of my life,
and programming is no exception.
So everything I do, everything I talk about,
the underlying theme is keep things as simple
as you possibly can.
So just a little bit about this Naval Research Lab thing.
It was started in 1923 by Congress at the recommendation
of this guy, Thomas Edison, who said we needed
a naval research lab, and so we have one.
And the group I work in, the Systems Group,
has come up with some pretty cool technology over the years.
Most notably, the onion router, Tor, came out of NRL.
And a lot of the foundational technologies in
virtual private networking were developed by
Cathy Meadows and Ran Atkinson, two doctors at NRL.
The Vanguard Space Satellite Program came out of NRL,
which was America's first satellite program.
Of course, Sputnik was first out of the Soviet Union.
And there was a great paper from 1985 called
Reasoning About Security Models.
It was written by Dr. John McLean,
who's my boss's boss's boss's boss's boss's boss.
But anyway, it's a great paper.
It talks about system Z, and if you're into academics
it's a really cool theory about security.
So all that said, my talk is not about anything
military related.
It's not academia.
It's not buzz word bingo.
I had a really cool buzz word bingo slide,
but I took it out because CeCe's was way better.
So anyway, what am I going to be talking about?
Well, I wanna spend some time unpacking
what I mean by security critical.
Like we just heard in the last talk,
people throw phrases around, and it means
different things to different people.
So I want to unpack what I mean by it.
Sorry about that.
I also wanna work through a use case.
Now this use case isn't an actual use case,
but it's kind of a composite of experiences I've had.
So it borrows from systems I've worked on and developed in,
but it's not actually representative of any system
we've ever built.
But the main reason I'm here is this last point, next steps.
We've got a lot of initiatives we're interested in pursuing
to improve our ability to use Ruby in
security critical applications.
And some of them we know how to do well.
Others we have an idea how we'd do it,
but we probably wouldn't do it well.
And others we know we can't do.
And so if anything you see on my next step slides
rings a bell with you, please come talk to me after the talk
because we're interested in getting help from people
who wanna do cool stuff with security in Ruby.
So anyway, there was a great talk that I saw that
influenced my thinking about this subject with Ruby.
Back in 2012, I was at a conference called
Software Craftsmanship North America.
I really recommend you go sometime, if you haven't.
It's a great conference.
But Uncle Bob gave this talk called
Reasonable Expectations of the CTO.
You probably haven't seen it, it's on Vimeo.
If you haven't seen it, look it up.
I'm not gonna summarize it for you, but watch it.
And as you watch it, just add security to the list
of problems that systems have.
It's very applicable to the security problem as well,
and it rings even more true today
than when he gave the talk in 2012.
So when we talk about computer security
one of the things we talk about a lot is assurance.
And assurance is usually used like a verb.
It's something that I do to assure you that everything
is gonna be OK, that there's no problem.
Well, when I talk about assurance, I'm not talking about
telling you everything is gonna be OK because
what's the first thing you think when I tell you
everything's gonna be OK?
Something's wrong.
So I don't want to assure you of anything.
What I wanna do is talk about giving you assurances
that allow you to make a decision of your own.
And even if you don't like the assurances that you get
when you do a security analysis on something,
at least you know where you stand,
and that's really useful.
So when I talk about assurances, I'm not trying to tell you
everything's gonna be OK.
I'm talking about evidence.
We've all seen this chart before, and whether you're
trying to make money or make the world a better place
or solve a security problem, this chart is not avoidable
to my knowledge.
And when we go about solving a security problem,
we bump into it, too.
And we look at it and go, well, we got a few choices.
We can do something really clever that's gonna
outsmart the attackers.
We could go buy a really cool library that's gonna
provide us all this super awesome security
and solve all of our problems.
Or we could hire some outside consultant who's gonna
assure us that everything's gonna be OK.
Well, don't do any of that 'cause attackers
are really, really clever.
They're more clever than me, they're more clever than you,
and what's more is there is lots of them,
and they have lots of time.
You build a feature, it's on to the next feature.
They are out there hammering on your stuff
day after day, sometimes teams of them,
if you're unlucky enough to be a target,
and most of you aren't.
But we're going to make mistakes in our code.
It's just a fact of life.
There are going to be bugs.
There are going to be security bugs.
So I'm gonna talk about what we can do
to defend ourselves.
A key point I wanna make today is that a security critical
system should have the right security controls,
in the right places, and with the right assurances.
Say that again.
A security-critical system should have the right
security controls, in the right places,
and with the right assurances.
Now I like to do that with architecture.
We construct architecture, and a lot of times when we're
building code, the principles that make code awesome
are the same principles that make code secure.
We wanna reduce complexity.
We wanna localize functionality.
We wanna improve test coverage, things like that.
But also we wanna make sure we have the
right controls in the right places.
A firewall at the front door isn't gonna keep bad guys out,
just like the guy with a gun in your server room
isn't gonna keep hackers out of your server.
So you've gotta not only consider architecture of
your code and design and test coverage,
but you also need to think about what controls
you're using where.
And how, more specifically, we layer those controls
in our system.
So some of these acronyms you may not recognize.
I'll explain them later, but these are really the
security control layers that you consider
at a minimum in your application.
You have your operating system.
You have your security framework,
and then you have your application framework.
And these are where we're gonna layer in our assurances.
But what are these assurances?
Are they something squishy that we can't measure?
Well, kind of, but we do have the ability to
talk about them in a semi-structured way.
And I like to talk about them in terms of
this NEAT principle.
And NEAT stands for non-bypassable, evaluatable,
always invoked and tamper evident.
And so with your security controls, the more you can measure
and answer these questions nodding your head
instead of shaking your head,
the more security you're gonna get out of your controls.
So I just wanna go through these real quick.
Non-bypassable, it's pretty easy to describe.
If you've got a circuit breaker, it keeps your electronics
from getting fried: if too much electricity
is going over the wire, it's going to trip the breaker
and keep the electricity from flowing.
But if there's a wire going around the circuit breaker,
going directly from the power grid to your laptop,
it's not going to do you any good, even if it does trip
because you're gonna get fried.
So for a good security control to work,
it has to be the only way from point A to point B.
Evaluatable is a little harder to talk about.
There's a lot of things like symbolic execution engines
and static analysis tools that we can use to measure
and evaluate code security.
But for most of you here, I think a great thing to do
if you haven't done it is just follow the instructions
on the screen.
And you can get a good idea of how readable,
how evaluatable your code is from your flog score.
The lower the better, but if it's code that needs to be
really secure, you should definitely be below, say, 20.
So keeping things small, reducing branches,
not using things like eval or call are good things to do
when you're in a piece of code you consider
security enforcing in your application.
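To make that concrete, here's a rough sketch of the kind of tiny, security-enforcing method I mean; the names are made up, but something this small has few branches, no eval, and will score very low on flog.

    # A minimal sketch (hypothetical names): one tiny, security-enforcing check.
    # Small, no eval, no metaprogramming, almost no branches -- easy to read and evaluate.
    module Policy
      FORBIDDEN = /;/

      # Returns true only for plain strings with no semi-colons in them.
      def self.clean?(text)
        text.is_a?(String) && text !~ FORBIDDEN
      end
    end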
Always invoked.
I think the HTML sanitizer in ActionView
is a great example.
When it first came out it was something you could call
if you wanted, but you could also forget
to call really easily.
At some point they brought it into ActionView, I think,
and made it the default, so you'd have to go out of your way not to call it.
I haven't used Rails in awhile.
I'm one of those weird Ruby people that doesn't
use Rails very much.
But this is a C example, actually, and I like
having things like this littered in the headers 'cause
it makes the compiler insult people who do dumb things.
Not to be judgmental.
It's a learning experience for all of us,
and we all get a good laugh out of it.
But type this in, and see what your compiler says to you.
And then tamper evident.
This is another one that's a little tricky to describe.
This guy here, he's a coal miner,
and he's got a little canary with him.
And back in the day coal miners used to bring
these canaries into the coal mine with them because
when toxic gas leaked out of the rocks,
it would kill the canary well before it would kill them.
And while it's kind of gruesome to think about,
it was a good way for this guy to get home
to his family with the technology he had.
We do something similar in binaries every day.
We put these little cookies on the stack,
and so if there's a buffer overrun in your application,
the A's, the NOP slide, whatever it is, is going to crush
that cookie if the attacker is not careful,
and we can exit the program safely.
It's a way over-simplification, and it's not bulletproof,
but if you're interested in more, I've got a little link
on my slide here.
And if you just type stack canaries or stack smashing
protection, you can learn a lot more about
ways we protect binaries.
So I've got my checklist.
I've got some controls I wanna talk about today,
and I've got some assurances I wanna apply
to those controls and see how we're doing.
So we're gonna use this checklist
as we go through the rest of this brief.
Like I said, the use cases I'm going through,
it's one example in three parts,
and it's not a system I've actually built.
It's little pieces here and there from different projects
I've worked on that I think represent good explanations
of these security principles.
So at the base of your system is your
operating system controls.
No matter how secure your code is,
if your operating system is not configured properly,
you're screwed.
And the main security feature in your operating system
is access control.
If you've got the right security pedigree, you
talk about mandatory access controls,
and it can get complex.
But they're actually pretty simple.
It just means something that the administrator
sets up at boot, and it can't be changed.
So the neat thing about mandatory access controls
is they're nice and reliable.
They don't change.
They also have a really pretty static code base
supporting them because they're not changing.
They get set at boot, and they don't change,
so it's easy to make sure that code works well.
So at the base of your application of your system
it's good to use your operating system's access control
mechanisms, preferably in a mandatory way,
as opposed to a discretionary way.
It keeps your system simple.
So a use case might be you've got multiple databases,
and you wanna be really sure that people on
different networks can only read from the database
that they're authorized for.
And you wanna be really, really careful about
what gets into those databases.
Maybe, I don't know, all sorts of examples of
why we'd wanna do that.
But rather than trusting our code that does the very best
it can to make sure there's no SQL in our POSTs,
we basically can give our applications read only access
to some of these databases.
And that way no matter how bad our network application is,
there's no way it's going to be able to read from
the databases it's not allowed to see,
and it's not gonna be able to write to any of the databases.
And then we simply implement a piece of glue,
a little router, and we can do that very securely.
And all it does is make sure that the right requests
go to the right places.
And with this way we can ensure that our information flows
are set up in a very secure manner.
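As a rough sketch of what that glue might look like (hypothetical names, not a real system): the network-facing applications run with read-only database grants, and this little router is the only process whose credentials allow writes.

    # Minimal sketch of the write router -- the single path from the network apps
    # to the databases they're allowed to change. Names are hypothetical.
    class WriteRouter
      def initialize(owners)
        @owners = owners            # e.g. { "alpha" => alpha_db, "bravo" => bravo_db }
      end

      def route(request)
        owner = @owners.fetch(request.network)   # unknown networks raise KeyError
        owner.write(request.payload)
      end
    end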
Let's validate that.
And so since these databases are read-only to the application,
an attacker is basically gonna have to own your box
to get around that.
So you have reasonable assurance that to write
it's gonna have to go through the dataflow
pipeline I've created.
Evaluatable.
Well, the security-critical piece of code here is just
this simple little router that takes write requests
and sends them to the database owners.
So I can keep that pretty small, pretty evaluatable,
and I can use a type safe language like Rust
or something like that.
Always invoked.
Every single file system call you make
has to go through the kernel.
That's pretty reliably always invoked.
And then I tried to come up with a good example
for tamper evident.
Making sure your operating system isn't being tampered with
is kind of outside the scope of this talk.
So if you're interested, let me know, but I skipped it
because I didn't wanna bore you all to death with it.
So some take aways.
Use the access control mechanisms in the operating system
if you can, and then wrap them directly into
your application, maybe say with FFI.
And then don't stop there, though, because it's a pain
in the butt to develop your application with these things
in place, and more to the point, if you screw up,
you can crash your development box.
You don't wanna do that, so do your
day-to-day development with a stub.
We ended up in a situation where we needed a third party
to help us write our application.
They didn't have our MAC infrastructure,
so we just gave them the stub.
They wrote a really cool app that we couldn't write,
and then they sent us back the code.
And it was relatively easy for us to take the stub out
and integrate the code in with our application.
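A sketch of how that can look, assuming a hypothetical native MAC library wrapped with FFI, and a stub with the same interface for development boxes:

    # Hypothetical sketch: wrap a native MAC check with FFI, and swap in a stub
    # for day-to-day development where the MAC policy isn't installed.
    require "ffi"

    module MacNative
      extend FFI::Library
      ffi_lib "mac_policy"                              # hypothetical native library
      attach_function :mac_check_write, [:string, :string], :int
    end

    module MacStub
      def self.mac_check_write(_subject, _object)
        0                                               # always allow on a dev box
      end
    end

    Mac = ENV["MAC_STUB"] ? MacStub : MacNative
    # Elsewhere: raise unless Mac.mac_check_write(subject, object).zero?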
And then finally, it's only mandatory access control
if the application doesn't have the ability
to change the policy.
So if you can, avoid giving your application
system privileges.
If you look at Stagefright in Android,
it's a really cool library.
It does lots of awesome stuff.
I could have never written it.
And I don't blame them the least for making a small
little mistake, it's inevitable.
But if they wouldn't have had system privileges on it,
and maybe they had to, I don't know,
I haven't researched it that well.
But it wouldn't have been as catastrophic as it was
had it only had user privileges.
So I'm not saying never give your software
system privileges, but think really hard about it
'cause if you make a mistake, it's no longer your box,
it's their box.
So some of the things we wanna do is make it easier
to make test doubles for our file system objects,
because we use the operating system so much in security.
I have talked to some of the guys from Test Double,
and I don't know if it's such a good idea,
but it's something we've been playing with,
and if you wanna convince me it's a bad idea
or you wanna help, let me know.
The other is, like I said, we are looking at using Rust more
for our type-safe security-critical code
or our performance-critical code.
And everything I've learned about Rust-Ruby integration
I learned from this blog entry I have here.
I'm not very good at it, so if there's more resources
that any of you know of, please let me know.
Huh?
- [Voiceover] (mumbling) (laughs)
- Good to know.
So moving on through the layers of the onion
of our use case is what I am calling our
services security framework.
And I didn't have a really good name for this,
but basically if we're gonna separate our application
into a bunch of processes,
we're gonna have to integrate them together in some way.
And those integration points are great places
for attackers to break your system.
Things like inter-process communication or database access.
These are great places where attackers are able to get in
and do things like CSRF, internationalization attacks,
SQL injection.
And a lot of the time it's hard for us to get our paying
customers to understand the sorts of things that can happen
if we don't do a really good job with this.
And I don't know, have any of you guys
ever read Ender's Shadow?
I know a lot of people have read Ender's Game,
but Ender's Shadow is a little less well read.
There's a great scene in there where Bean is
talking to his boss, and he basically points out that
as your attack surface grows, defense becomes impossible.
And with these sorts of systems that we're building
our attack surface is growing.
Fortunately not as big as the scope of the aliens
in Ender's Game and Ender's Shadow,
so it's not hopeless, but it's bad.
And I don't have enough confidence in my ability
and my team's ability, even though we've been doing this
for a long time, to cover every nook and cranny such that
our code can't be changed in 10 years
to allow this stuff through.
So we stick with this principle of separate,
isolate and integrate.
And essentially what we're trying to do is every time
a process component that's been separated ingests data,
it uses some sort of domain specific language to
enforce its security policy, such that it's protected.
And then when it sends data out, it also tries to
protect the data and protect the next process.
So that doesn't make much sense.
I tried to come up with a better way to explain it,
but I think I'm just gonna have to use an example.
So let's take a really, really over-simplified example
and say that we wanna make sure that no semi-colons
make it into storage.
Now there's a lot of web attacks that require semi-colons,
and so I'm not saying whether you wanna do this or not,
but it might be a useful policy in trying to protect
against a lot of web attacks.
And the example I'm about to give does not take into account
internationalization considerations, so don't just use it.
Internationalization is important for apps,
but it's also important for security.
So just wanted to throw that in there.
Keep internationalization in mind when you're
building your app, especially with regard to
how it impacts security.
So let's look at some of this application-layer
pre- and post-processing we do.
What's this code doing?
Well, it's not entirely clear.
It looks like it's doing some sort of escaping
to turn semi-colons into something else because
the semi-colons might be natural.
And then it's doing some sort of resolution to make sure
that the semi-colon escape sequence doesn't show up in there,
and then it sends it off.
And then when it goes to render the data,
it goes to resolve it back to what it was.
And do I trust this code?
Umm, kind of.
But it's also kind of ugly.
And this is just one policy.
Imagine an application with five or six hundred policies
that you have to apply.
This is gonna get kind of ugly.
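Roughly, the application-side half looks something like this; it's a simplified reconstruction rather than the code from the slide, it ignores internationalization, and the escape token is made up.

    # Simplified sketch of the application-layer pre/post processing (not the slide code).
    ESCAPE = "__SEMI__".freeze     # made-up stand-in for the escape sequence

    def prepare_for_storage(text)
      raise "escape sequence already present" if text.include?(ESCAPE)
      text.gsub(";", ESCAPE)       # after this, no literal semi-colons remain
    end

    def prepare_for_render(text)
      text.gsub(ESCAPE, ";")       # resolve back to the original on the way out
    end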
Let's look at the other side, the storage side.
I trust this code a lot more.
Its job is very simple.
It looks for semi-colons.
There's not supposed to be any semi-colons
in the data that's there.
And what's more is if you look at line nine,
it doesn't trust the caller to check the return code
before it moves on.
If code is security critical, you don't wanna trust that
the caller is gonna check the return code because
maybe you're checking it now, but maybe someone's gonna
introduce a problem in two years that's gonna
block the check.
So if it's actually security-critical,
don't rely on the caller to check your return code.
Handle it right there.
Die might seem a little extreme, but like I said,
the application was supposed to have
gotten rid of all the semi-colons.
So if there's a semi-colon there, either there's a horrible
problem with our application, or somebody's taken it over.
So this is an example of how you can do tamper evidence
without using fancy things like stack canaries
or anything like that.
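A sketch of that storage-side check, with hypothetical names: the application layer was supposed to have removed every semi-colon, so finding one here means a serious bug or tampering, and we don't hand a return code back to the caller and hope they check it.

    # Sketch of the storage-side check (hypothetical names). Handle the failure
    # right here instead of trusting the caller to check a return code.
    def store!(db, record)
      abort "tamper evidence: semi-colon reached storage" if record.include?(";")
      db.write(record)
    end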
Ruby is really awesome at monkey patching
code into classes, so there's all sorts of ways you can
trigger these hooks so that they automagically get called.
We learned about how refinements could even be used
to make sure that these things got called at the last talk,
which was really cool.
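For example, one way to make a check like that always invoked is Module#prepend, so every write runs the check whether or not the caller remembers to; a sketch:

    # Sketch: prepend a module so the check runs on every call to write.
    module ChecksBeforeWrite
      def write(record)
        abort "tamper evidence: semi-colon reached storage" if record.include?(";")
        super
      end
    end

    class DataStore
      prepend ChecksBeforeWrite

      def write(record)
        # ...the actual persistence...
      end
    end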
And again, the point of the check was very small.
The normalization of the data in preparation for storage
may have been complex, but it allowed for
a very simple check at the time of storage.
And that brings up a point that I wanna get to
in just a minute, but I wanted to talk about some other
cool technologies that are related.
I don't know if any of you guys have used ANTLR
or Parselet or anything like that.
Parsers are cool, but it's always hard to figure out
what to do with the parser once you've got that AST.
Tools like ANTLR and Treetop and Parselet and others
make it really easy to hook in behaviors in your code
when the parser hits certain things.
So if you need to do content validation or content parsing,
you should take a look at those projects.
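For instance, a tiny Parslet grammar can carry a content policy like the semi-colon rule; this is just a sketch, where validation is simply "did it parse?"

    # Sketch: a Parslet grammar that only accepts input containing no semi-colons.
    require "parslet"

    class NoSemicolons < Parslet::Parser
      rule(:safe_char) { match["^;"] }
      rule(:body)      { safe_char.repeat(1) }
      root(:body)
    end

    NoSemicolons.new.parse("hello world")    # => parse tree
    NoSemicolons.new.parse("drop table;")    # raises Parslet::ParseFailed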
Another really cool tool is checksec.sh.
It's literally just a bash script, and it does analysis
of your binaries to look for what sorts of exploit
mitigations have been compiled into it.
We use this all the time, and not only that.
If you go to their website there's all sorts of links about
all the security wonk stuff, and if you're interested,
you can just learn a tremendous amount about
binary exploitation just following the links on that site.
And then PoC or GTFO.
Has anyone read PoC or GTFO?
Think of it as _why the lucky stiff, but for security geeks.
It's really funny.
Sometimes it's hard to follow 'cause it gets
pretty technical, but it's really funny.
And then finally, The Spanner, thespanner.co.uk.
It's a really neat blog where he talks about ways
he breaks web applications.
I haven't found any better than that one.
So anyway, couple things.
I don't know if any of you guys have ever worked with
SELinux or XACML, but those are really complex policy languages
that can do everything.
They're very powerful, and they're very good,
and people do great things with them,
but I have trouble keeping all the state in my head
when I'm trying to write policies.
So I try to keep things simple and use DSLs that are
kind of custom oriented towards the problem
I'm trying to solve.
I think that's a very Ruby way to look at it,
and it's a good way to look at it.
The other thing is to keep those checks
as simple as you can at enforcement time.
Not just for the evaluatable thing, but there's this
other class of bugs called time-of-check, time-of-use (TOCTOU) bugs.
They're kind of obscure, but basically it means
you do a check, you do some other stuff,
and then you write to the database,
and it can change in the meantime.
They're really, really hard to detect.
Really good hackers are great at finding them
and causing them to occur.
Your unit test will almost never run into them,
and if they do, you'll just assume it was a glitch
and skip them.
So if you keep your checks simple, you'll avoid this whole
really, really ugly class of security bug.
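The shape of the bug, in a sketch with made-up method names: the data can change between the line where you check it and the line where you use it, which is why the check belongs right next to the write.

    # Time-of-check / time-of-use, illustrated (hypothetical names).
    if valid?(record)                 # time of check
      do_other_expensive_work         # ...record can be changed in here...
      database.write(record)          # time of use
    end

    # Safer shape: validate the exact thing you are about to write, as you write it.
    database.write(validate!(record))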
And then a really good example, I have a link here.
There's a guy named Mat Honan who works for
Wired magazine.
In 2012 there was this terrible hack, or a great hack
depending on your perspective, where the attackers did a bunch of
little things and then chained all of those little hacks together.
So if you ever hear the argument,
well they'd have to break this, and then they'd have to
break this, and then they'd have to break this.
Well, that happened to this guy,
and so these things do happen,
and it's a really interesting story.
So next steps.
We...
If I can get it to...
So there was a talk that Tom Stuart gave in Barcelona in 2014
called Refactoring Ruby with Monads.
And I like the idea of monads right now
more than I like the practice because
I'm not particularly good at them.
But I do believe that we can use monads to
wrap our content that we ingest from untrusted sources
and ensure that they're properly validated
before we store it.
There was a good talk on Hamster.
We've been looking at that.
Immutability also provides security properties,
not just performance and code quality benefits.
So there's a lot of things we can do to improve
the mechanization in our code to enforce that
it's properly validated.
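Just to give the flavor of the idea, and this is a sketch of my own rather than Tom Stuart's code: wrap untrusted input in something that only hands the value back once it has passed a validation step.

    # Sketch: untrusted content is wrapped, and the only way to get at the raw
    # value is through a validation block.
    class Untrusted
      def initialize(raw)
        @raw = raw
      end

      def validated(&check)
        @raw if check.call(@raw)   # nil unless the check passes
      end
    end

    body = Untrusted.new("some ; input").validated { |s| !s.include?(";") }   # => nil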
The other thing is we spend a lot of time writing
security rule sets, and it gets rather mundane.
If you take something like the SMTP specification,
it takes a tremendous amount of time,
and it's very boring to go through and write those rules.
So we're looking to build out our tool set to automatically
generate our rule sets from those things.
And yeah, anyway, enough on that.
So now we're to the part that affects most of you
most of the time, which is writing applications.
And unfortunately, there's a lot of security decisions
that have to be made in the app.
They can't happen at the services and integration layer.
They can't happen in the OS; they have to happen in the app.
And a great use case is XML.
I try to avoid XML when I can, but sometimes
it's unavoidable, and XML processing is very complicated.
So how do we build a high assurance, secure XML processor?
Well, we don't.
It's really complex.
If you've looked at all of the different XML
libraries out there, some of them are really great,
but they are complex.
There's no way we're going to be able to get them
to meet that evaluatable criterion, at the very least.
So how do we do it?
Well, we use the same strategy I've been talking about
the whole time.
We break our goal into smaller pieces,
and we separate them, and then we integrate them
with well understood mechanisms that the OS can enforce.
Another thing we're introducing here is
what I call binary diversity.
There's a lot of different forms of binary diversity.
It's a great new research area, but the simple act of
using different libraries for different functions
makes an attacker's job much harder.
So if you can do the separation and use different libraries,
it gives you some level of protection.
Again, it's not bulletproof, but it's very good.
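As a sketch of what that separation can look like in Ruby, using two different XML code bases for two different jobs: the standard library's REXML just to reject malformed input, and Nokogiri for the heavier processing afterwards.

    # Sketch: two separate steps backed by two different XML implementations.
    require "rexml/document"
    require "nokogiri"

    def well_formed!(xml)
      REXML::Document.new(xml)    # raises REXML::ParseException if malformed
      xml
    end

    def process(xml)
      doc = Nokogiri::XML(well_formed!(xml)) { |cfg| cfg.strict.nonet }
      # ...deeper content handling on doc...
    end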
And like Justin Stroll was talking about, by breaking down
your functionality into smaller units,
it's easier to test them.
And this brings up another good point,
which is you can have really secure code,
but you might be using some library like Psych that is
a good library, but it has some obscure vulnerability
in the underlying C native code,
and you're screwed when it breaks.
So these things are going to happen.
We can do all the code analysis we want,
but your application is going to break.
So make sure that you've got fault isolation
built into your system.
So how are we doing with, let's see...
And it's not...
OK, there we go.
How did we do?
Well, we've got that great non-bypassable pipeline.
The only way to the data storage, or to the next step
in the application, is through my pipeline.
So we've got non-bypassability.
Evaluatable.
We've got a big code base.
There's really nothing we can do except for do our best
to keep our flog scores low, make sure our unit test
coverage is good.
We've got good pen testers, whatever we wanna do.
There's only so much we can do to evaluate our applications.
You can do a lot with Ruby to make sure that
your code is always invoked.
For example, if you're using Rails, you could instrument
your checks into asXML so that they're
automagically called.
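A sketch of that idea, assuming the serialization hook is to_xml and with a made-up checker: the check runs every time the record is rendered, whether or not the caller remembers to invoke it.

    # Sketch (assuming the hook is to_xml; ContentPolicy is hypothetical).
    class Report < ActiveRecord::Base
      def to_xml(options = {})
        ContentPolicy.check!(body)   # raises or aborts on a policy violation
        super
      end
    end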
And those little brick walls I had in the last slide,
they weren't just for decoration.
There's a really cool tool called SECCOMP that we use a lot.
And so SECCOMP, think of it as like a firewall
for your operating system.
Every time you go to read a file, your process calls down
into the operating system, which turns that into
a bunch of op codes and things that very few of us
understand very well.
And there's a few of those we really all need,
but there's a bunch of them you should never be using
in production, like high performance instrumentation
to see how you're doing at the microsecond level.
Or ptrace, which is used for GDB.
These are the system calls that you probably
don't even know are there, but the attackers sure do,
and that's where most of the current vulnerabilities lie.
So you can use tools like SECCOMP to protect
your applications so that if there is a security failure,
it can't be used to attack the operating system as a whole.
Even more controversial is this grsecurity tool.
It's a patch, and you have to apply the patch to recompile
the operating system, but it provides security controls
that protect against classes of bugs.
It's very controversial in the Linux community.
But there was a really good article in the Washington Post
on November 5th that gave a reasonable explanation
of Spender's perspective with grsecurity versus
Linus's perspective.
So if you look it up in the Washington Post
from November 5th, it's great.
Also, if you're interested in the internet of things,
there's a lot of tools, OpenEmbedded, Yocto,
but we really like Buildroot in my shop.
Buildroot's a great tool for building your own
Linux distributions, and makes it real easy to select
the things you want.
Ruby is provided in Buildroot.
So some of the things we wanna do moving forward is
we wanna make it easier for other people to use SECCOMP.
I wanna build a gem that makes it easy
for people to block the system calls that
they're not going to need in production.
This will greatly reduce the attack surface of
any application that uses this gem on Linux.
Obviously it's not gonna work on Windows,
but it does provide real protection.
But it needs to be easier to use than it is,
certainly in the instance that I use in my day job.
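Roughly, the kind of interface we have in mind looks like this; to be clear, this gem doesn't exist yet, and the names are made up.

    # Hypothetical interface for the gem described above -- not a real library.
    # Declare the system calls production actually needs and kill on everything else.
    SyscallFilter.enforce! do
      default :kill
      allow   :read, :write, :open, :close, :mmap, :futex, :exit_group
      deny    :ptrace, :perf_event_open   # debugging/instrumentation calls
    end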
Again, it comes back to the importance, even within your application,
of separating things into separate processes,
isolating, and then integrating with assured
security controls, and just being relentless
in making sure every little piece of code that you have
is well tested and well designed.
So like I said, I wanna do a better job of making SECCOMP
available to users.
And another really cool technology that's
come out recently is this Robusta.
I don't know too much about it.
I've read the paper, but I haven't actually downloaded
and tried to use it.
But basically Robusta is a container that lives inside
the Java virtual machine.
And if there's a security failure in a native extension,
a Java native extension, Robusta actually isolates it
so it can't break out and take over your whole JVM,
which is kinda cool.
And if you look at most of the vulnerabilities that happen
in Ruby, it's usually not in Ruby itself.
It's in some sort of native extension
that we all use and love.
It's some gem, it's buried.
We don't even know we're using it.
So this could be a real winner for a lot of people
in the Ruby community.
Another is mruby: we like it, and we're trying to
learn more about it.
Unfortunately, there's a birds-of-a-feather session going on
about mruby right now that I couldn't go to,
which is kind of a bummer.
But mruby allows us to put better weaponized...
Sorry, I shouldn't use words like that.
I work for the Navy.
...more robust security controls into your binaries
to make it much, much harder for attackers to break them.
And like I said, when you learn about GCC and Clang,
there's all these little compiler flags you can use
that make your binaries stronger,
and they're really, really awesome.
So I could put a picture of my cat up,
but I always like it when I see Zach and Corey's briefs.
I don't know if you've ever sat through
one of Corey's talks, but he's a great presenter.
And so it was just kind of homage to him.
My picture of Zach.
And I'm a little ahead of schedule, so I want to take
a brief non sequitur into security penetration testing.
Like I said, I do that sometimes when called upon.
It's not my primary duty, but it is a duty that I do.
And there's a lot of mystery around penetration testing
for people who don't work in this.
I don't know if anyone recognizes this picture,
but this is the little grate at Helm's Deep from
Lord of the Rings, where the bad guys brought the bomb in
and blew the whole thing up.
And the obvious lesson is that you don't wanna have
this little hole in your outer wall.
But there's another lesson that a lot of people
don't know about.
The way castles are designed, that outer wall
is just designed to make it kind of a pain in the butt
for people to get through.
It's not really a defensive mechanism.
It's just a way to make attacking much harder.
So when they put all their eggs into guarding
that outer perimeter, and the bad guys blew it up
'cause it had a water drain, that was really
the mistake they made.
They should have been guarding the keep,
which had a two-by-two entrance.
No matter how strong the Uruks were, they would have been
coming in two by two, and the defenders could have fought 'em all off.
But anyway, enough geeking about security, and...
Sorry about that.
So I don't wanna talk to you about whether you should buy
penetration testing or not.
It often is money well spent, sometimes it's not.
But if you're gonna buy penetration testing services,
give information to them.
If you make them find the information, they will find it,
and it's just going to take them longer
and maybe annoy them.
And so if you give them information,
you're gonna get more value for your money.
And along those lines, build relationships with
your pen testers because you're gonna write these things
called rules of engagement,
like what they can and can't do.
Well, there's always ambiguity in that,
and the better relationship you guys have,
the more you're gonna be able to work with them to have
a more granular understanding of what those
rules of engagement are.
And don't just test from the outside.
We know in the Ruby community intuitively that we write
unit tests for all of our classes, no matter how deeply
embedded in the application they are, not just the ones exposed to the outside.
In fact, a lot of times those core libraries that we rely on
the most, we put a lot of work into testing those.
If we make our pen testers come in from outside
the firewall, really they're testing your firewall
much more than your app.
So that's not a bad place to start, but maybe give them
script console access and see if they can get around
your access control mechanisms
in your application.
So like I had those controls at the different layers,
have your pen testers do testing from different layers.
Obviously not on your production network,
but you know, in the lab or something like that.
So with that, I wanna thank you for coming to my talk.
I hope it was modestly entertaining.
Little link I have here.
Kim Cameron, Seven Laws of Identity.
Who's read that?
Wow.
This was written in 2005, and it was a treatise on
what the identity management software community should do
to protect the rights of consumers.
I really recommend it.
It's very relevant today.
So it just seems to be coming up a lot in the talks,
so it's a good read, it's timeless.
So with that, thank you for coming to my talk.
(applause)
I have ten minutes for questions,
if anybody has any questions.
Oh, so the question is, what's my opinion on getting
third party penetration testers coming in versus
just doing your own automated vulnerability scanning?
Well, it depends on what your goals are,
and that's a really lame answer, and I'm sorry
for giving it, but I have to.
But I would say, I would recommend using an automated...
Just like you have Travis or Team City, we use Team City
in my shop, just like you do continuous integration,
you should have pretty regularly someone point an automated
vulnerability scanner against your application,
both in production and in the lab.
It just makes sense.
The cool thing about pen testers is they're humans,
they're not automated tools.
And if you get ones that know what they're doing,
they'll know what to look for that's not in that tool suite.
But a lot of times the usual suspects are the problem,
and you can get a lot of mileage just out of using
automated tools yourself.
That answer the question?
Well, that's a really good point.
So the question, if I got it right, and jump up and down
if I didn't, is that now I'm using two libraries
instead of one, and the attack surface
just got a lot bigger.
And in a way of thinking, absolutely it's true.
But if you look at the example that I gave,
all REXML was doing was checking for well-formedness.
So it was serving a very specific purpose, and we evaluated
that REXML was gonna do it well.
Whereas Nokogiri could assume up front that the input was
already well-formed, so mass assignment bugs would be
a lot less likely to be applicable to it.
But it's gonna do a lot more deep dive on the content.
And the math term for this is Floyd-Hoare precondition-postcondition
analysis, but basically what you're doing is,
when data comes into REXML,
your precondition is nothing, and your postcondition
is that it's well-formed.
And then with Nokogiri you've got a precondition that
it's well-formed data, and maybe that the comments
have been sanitized or something like that.
But using Floyd-Hoare it's easy to compose a secure system
using preconditions and postconditions.
That probably wasn't a great answer,
but that's kind of our take on it.
Any other questions?
Alright, cool.
Well, thank you all for coming.