Tuesday, May 15, 2007

Rails Performance Screencast notes

Here are the notes I took from topfunky's httperf screencast.

Production Log Analyzer is a gem that lets you find out which pages on
your site are dragging you down.
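The gem installs a command-line analyzer, pl_analyze (if memory serves, it expects logs written via SyslogLogger), which you point at the production log:

pl_analyze /var/log/production.log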

Change one thing at a time and benchmark after each change. Export
actual data from the production database and use it for benchmarking,
and create a separate environment for benchmarking.
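A minimal sketch of what such an environment might look like (the file
name and settings are my assumptions, in Rails 1.2-era style), with a
matching benchmarking entry in config/database.yml pointing at the copy
of the production data:

# config/environments/benchmarking.rb -- hypothetical; mirror
# production settings so the numbers are comparable
config.cache_classes = true
config.action_controller.consider_all_requests_local = false
config.action_controller.perform_caching = true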

Install httperf. You need two machines: one to run the httperf client
and the other to run the Rails app.

Shut down all unnecessary programs on both machines to reduce
background CPU usage; this makes the results less unpredictable.

Process for Benchmarking a Web Application Using httperf

1) rake sweep_cache
2) mongrel_rails start -e production (no logging)
3) Hit a particular page once
4) run httperf (with consistent args)

Steps 1 and 2 are run on the Rails app machine. From a browser on the
client machine, load the page once, then run:

httperf --server machine_name --port 3000 --uri /page_name --num-conns 1000

The number of connections can be varied. In the httperf output, make
sure the number of replies matches the total number of connections and
that the reply status shows no errors. If httperf reports any errors,
the results must be tossed out.
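For reference, the lines to check look something like this (the
numbers here are made up):

Total: connections 1000 requests 1000 replies 1000 test-duration 9.876 s
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0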

Copy the "Reply rate" line of the httperf output for different number
of connections and use topfunky's script to generate graphs.

./script/parse_httperf_log log/httperf-sample-log.txt
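The line being copied looks roughly like this (illustrative numbers):

Reply rate [replies/s]: min 198.4 avg 201.3 max 204.0 stddev 2.1 (5 samples)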

Tuning Mongrel for Performance

Here are the steps for finding how many requests per second a single
Mongrel process can handle.

httperf --server machine_name --uri /page_name --port 3000 --num-conns 7000 --rate 650 --hog

The --rate value (650 here) is the attempted requests per second.

If this process hangs for more than 20 seconds, we have overloaded
Mongrel. httperf itself should report no errors; errors mean httperf
was overloaded, and in that case the data must be tossed out.

Try different values for the number of connections and rate
(6600/660, 6700/670, etc.) until the reply rate reaches a maximum and
begins to drop off.
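A throwaway Ruby sketch of that sweep (the server name, page, and rate
list are assumptions; it just shells out to httperf and pulls the
reply-rate line from each run):

rates = [600, 625, 650, 675, 700]
rates.each do |rate|
  # roughly 10 seconds worth of connections at each attempted rate
  output = `httperf --server machine_name --port 3000 --uri /page_name --num-conns #{rate * 10} --rate #{rate} --hog`
  puts "rate=#{rate} -> #{output[/^Reply rate.*/]}"
end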

Scaling Rails - Notes from Silicon Valley Ruby on Rails Meetup

Some websites launch by getting featured on TechCrunch, Digg, Reddit,
etc. In such cases there is no time to grow organically.

Resources:

1. The Adventures of Scaling Rails - http://poocs.net/2006/3/13/the-adventures-of-scaling-stage-1
2. Stefan Kaes' "Performance Rails" - http://railsexpress.de/blog/files/slides/rubyenrails2006.pdf
3. RobotCoop blog and gems - http://www.robotcoop.com/articles/2006/10/10/the-software-and-hardware-that-runs-our-sites
4. O'Reilly's book "High Performance MySQL"

This presentation focuses on what's different from previous writings;
for a comprehensive overview, refer to the resources above.

Scribd.com launched in March 2007. It is the "YouTube" for documents
and handles around 1 million requests per day.

Current Scribd Architecture

1 Web Server
3 Database Servers
3 Document Conversion Servers
Test and Backup machines
Amazon S3

Server Hardware

Dual dual-core Woodcrests at 3 GHz
16 GB of memory
4 15K SCSI hard drives in RAID 10
Disk speed is important. Don't skimp; you're not Google, and it's
easier to scale up than out.
Hosted by Softlayer.

Software

CentOS
Apache/Mongrel
Memcached, RobotCoop's memcache-client
Stefan Kaes' SQLSessionStore - This is the best way to store persistent sessions (setup sketch after this list).
Monit, Capistrano
Postfix
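From memory of the SqlSessionStore README of that era (treat the exact
names as assumptions), the wiring in config/environment.rb looked
roughly like:

# inside the Rails::Initializer block
config.action_controller.session_store = :sql_session_store

# below the initializer block; MysqlSession ships with the plugin
SqlSessionStore.session_class = MysqlSession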

They ran tests and found out that fragment caching improved
performance for their web app.

How to Use Fragment Caching

Consider only the most frequently accessed pages.
Look for pieces of the page that don't change on every page view and
are expensive to compute.

Just wrap them in a cache block:
<% cache('keyname') do %>
...
<% end %>
Do timing tests before and afterwards (httperf, as above, works);
backtrack unless there are significant performance gains.

Expiring fragments - 1. Time based

Use memcached for storing fragments
It gives better performance
It is easier to scale to multiple servers
Most importantly: It allows time-based expiration

Use plugin http://agilewebdevelopment.com/plugins/memcache_fragments_with_time_e...
Dead easy:
<% cache 'keyname', :expire => 10.minutes do %>
...
<% end %>

Expiring fragments - 2. Manually

No need to serve stale data
Just use: Cache.delete("fragment:/partials/whatever")
Clear fragments whenever data changes (see the sketch after this list)
Again, easier with memcached
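A minimal sketch of clearing the fragment when data changes (the
Document model and the fragment key are hypothetical; Cache is the
memcache-client connection from the slides):

class Document < ActiveRecord::Base
  after_save :expire_cached_fragment

  private

  # Drop the stale fragment; the next request regenerates it
  def expire_cached_fragment
    Cache.delete("fragment:/partials/whatever")
  end
end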

They also discussed how to use two database servers with a Rails app.
For more information, see the slides at
http://www.scribd.com/doc/49575/Scaling-Rails-Presentation
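The slides have the details, but the usual trick of that era was an
abstract model bound to a second connection, roughly like this (the
*_slave entry in config/database.yml is an assumption):

# Hypothetical read-only connection to the second database server
class SlaveDatabase < ActiveRecord::Base
  self.abstract_class = true
  establish_connection "#{RAILS_ENV}_slave"
end

class DocumentForRead < SlaveDatabase
  set_table_name 'documents'  # same table, read over the slave connection
end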

Q & A

They use an open source SWF plugin for uploading documents (it allows
multiple docs to be uploaded simultaneously).

They pay only $100 a month to Amazon S3 for 5 terabytes of bandwidth.
The downside of using Amazon S3 is that they cannot generate analytics
for that part of the app.

Uploaded files go into a queue and are processed in the background, so
documents don't appear on the site immediately when the load is high.

Tip from the first presentation on CacheBoard: it uses Ezra's
BackgrounDRb plugin for exporting documents in XML format.
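I haven't verified this against the plugin, but an Ezra-era
BackgrounDRb worker looked roughly like the following (class, method,
and argument names are mine):

# lib/workers/xml_export_worker.rb -- hypothetical
class XmlExportWorker < BackgrounDRb::Worker::RailsBase
  def do_work(args)
    # Runs in the drb server, outside the request cycle
    doc = Document.find(args[:document_id])
    # ... write doc out as XML ...
  end
end

A controller would enqueue it with something like
MiddleMan.new_worker(:class => :xml_export_worker, :args => { :document_id => doc.id }).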

Tuesday, May 01, 2007

Configuring Selenium to Run Automated Tests

Had to dig out this buried info on Selenium to get the browser to run the automated tests.

Also, the Selenium tests are not independent of each other: the tests within a particular file depend on which page the previous test ended on. Strange, but if you don't know this you will spend a lot of time banging your head against the wall.

Colorizing ZenTest output

To get the green and red color for autotest:

1. sudo gem install --remote redgreen
2. Put the statement require 'redgreen' in environment.rb

Restart autotest if it is already running. Now you will have green for passing tests and red for failing ones. Enjoy!