- Contents
- Preface to the Second Edition
- Introduction
- Rails Is Agile
- Finding Your Way Around
- Acknowledgments
- Getting Started
- The Architecture of Rails Applications
- Models, Views, and Controllers
- Active Record: Rails Model Support
- Action Pack: The View and Controller
- Installing Rails
- Your Shopping List
- Installing on Windows
- Installing on Mac OS X
- Installing on Linux
- Development Environments
- Rails and Databases
- Rails and ISPs
- Creating a New Application
- Hello, Rails!
- Linking Pages Together
- What We Just Did
- Building an Application
- The Depot Application
- Incremental Development
- What Depot Does
- Task A: Product Maintenance
- Iteration A1: Get Something Running
- Iteration A2: Add a Missing Column
- Iteration A3: Validate!
- Iteration A4: Prettier Listings
- Task B: Catalog Display
- Iteration B1: Create the Catalog Listing
- Iteration B4: Linking to the Cart
- Task C: Cart Creation
- Sessions
- Iteration C1: Creating a Cart
- Iteration C2: A Smarter Cart
- Iteration C3: Handling Errors
- Iteration C4: Finishing the Cart
- Task D: Add a Dash of AJAX
- Iteration D1: Moving the Cart
- Iteration D3: Highlighting Changes
- Iteration D4: Hide an Empty Cart
- Iteration D5: Degrading If Javascript Is Disabled
- What We Just Did
- Task E: Check Out!
- Iteration E1: Capturing an Order
- Task F: Administration
- Iteration F1: Adding Users
- Iteration F2: Logging In
- Iteration F3: Limiting Access
- Iteration F4: A Sidebar, More Administration
- Task G: One Last Wafer-Thin Change
- Generating the XML Feed
- Finishing Up
- Task T: Testing
- Tests Baked Right In
- Unit Testing of Models
- Functional Testing of Controllers
- Integration Testing of Applications
- Performance Testing
- Using Mock Objects
- The Rails Framework
- Rails in Depth
- Directory Structure
- Naming Conventions
- Logging in Rails
- Debugging Hints
- Active Support
- Generally Available Extensions
- Enumerations and Arrays
- String Extensions
- Extensions to Numbers
- Time and Date Extensions
- An Extension to Ruby Symbols
- with_options
- Unicode Support
- Migrations
- Creating and Running Migrations
- Anatomy of a Migration
- Managing Tables
- Data Migrations
- Advanced Migrations
- When Migrations Go Bad
- Schema Manipulation Outside Migrations
- Managing Migrations
- Tables and Classes
- Columns and Attributes
- Primary Keys and IDs
- Connecting to the Database
- Aggregation and Structured Data
- Miscellany
- Creating Foreign Keys
- Specifying Relationships in Models
- belongs_to and has_xxx Declarations
- Joining to Multiple Tables
- Acts As
- When Things Get Saved
- Preloading Child Rows
- Counters
- Validation
- Callbacks
- Advanced Attributes
- Transactions
- Action Controller: Routing and URLs
- The Basics
- Routing Requests
- Action Controller and Rails
- Action Methods
- Cookies and Sessions
- Caching, Part One
- The Problem with GET Requests
- Action View
- Templates
- Using Helpers
- How Forms Work
- Forms That Wrap Model Objects
- Custom Form Builders
- Working with Nonmodel Fields
- Uploading Files to Rails Applications
- Layouts and Components
- Caching, Part Two
- Adding New Templating Systems
- Prototype
- Script.aculo.us
- RJS Templates
- Conclusion
- Action Mailer
- Web Services on Rails
- Dispatching Modes
- Using Alternate Dispatching
- Method Invocation Interception
- Testing Web Services
- Protocol Clients
- Secure and Deploy Your Application
- Securing Your Rails Application
- SQL Injection
- Creating Records Directly from Form Parameters
- Avoid Session Fixation Attacks
- File Uploads
- Use SSL to Transmit Sensitive Information
- Knowing That It Works
- Deployment and Production
- Starting Early
- How a Production Server Works
- Repeatable Deployments with Capistrano
- Setting Up a Deployment Environment
- Checking Up on a Deployed Application
- Production Application Chores
- Moving On to Launch and Beyond
- Appendices
- Introduction to Ruby
- Classes
- Source Code
- Resources
- Index
- Symbols
PERFORMANCE TESTING 220
Path should be a string containing the URI to be invoked. It need not have a protocol or host component. If it does and if the protocol is HTTPS, an HTTPS request will be simulated. If the params parameter is given, it should be a hash of key/value pairs or a string containing encoded form data.5
get "/store/index"
assert_response :success

get "/store/product_info", :id => 123, :format => "long"
get_via_redirect(path, args={})
post_via_redirect(path, args={})
Performs a get or post request. If the response is a redirect, follows it (and any subsequent redirects) until a response that isn't a redirect is returned.
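A minimal sketch of that follow-the-redirects loop, using a hypothetical FakeSession in place of Rails' real integration-test session (the routes here are made up purely for illustration):

```ruby
# FakeSession is a hypothetical stand-in for the integration-test
# session, just to make the redirect-following behavior concrete.
class FakeSession
  attr_reader :path, :status

  # Pretend /checkout redirects to /receipt, which renders successfully.
  ROUTES = { "/checkout" => [302, "/receipt"], "/receipt" => [200, nil] }

  def get(path)
    @path = path
    @status, @redirect_target = ROUTES.fetch(path)
  end

  def redirect?
    (300..399).include?(@status)
  end

  # The same shape as get_via_redirect: issue the request, then keep
  # following the redirect target until the response is not a redirect.
  def get_via_redirect(path)
    get path
    get @redirect_target while redirect?
    status
  end
end

session = FakeSession.new
session.get_via_redirect "/checkout"  # lands on /receipt with status 200
```

The real helpers behave analogously, except that they drive your actual application through its routes.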
host!(name)
Set the host name to use in the next request. Same as setting the host attribute.
https!(use_https=true)
If passed true (or with no parameter), the subsequent requests will simulate using the HTTPS protocol.
https?
Return true if the HTTPS flag is set.
open_session { |sess| ... }
Creates a new session object. If a block is given, passes the session to the block; otherwise returns it.
redirect?()
Returns true if the last response was a redirect.
reset!()
Resets the session, allowing a single test to reuse a session.
url_for(options)
Constructs a URL given a set of options. This can be used to generate the parameter to get and post.
get url_for(:controller => "store", :action => "index")
13.5 Performance Testing
Testing isn’t just about whether something does what it should. We might also want to know whether it does it fast enough.
5. application/x-www-form-urlencoded or multipart/form-data
Before we get too deep into this, here’s a warning. Most applications perform just fine most of the time, and when they do start to get slow, it’s often in ways we would never have anticipated. For this reason, it’s normally a bad idea to focus on performance early in development. Instead, we recommend using performance testing in two scenarios, both late in the development process.
•When you’re doing capacity planning, you’ll need data such as the number of boxes needed to handle your anticipated load. Performance testing can help produce (and tune) these figures.
•When you’ve deployed and you notice things going slowly, performance testing can help isolate the issue. And, once isolated, leaving the test in place helps prevent the issue from arising again.
A common example of this kind of problem is database-related performance issues. An application might be running fine for months, and then someone adds an index to the database. Although the index helps with a particular problem, it has the unintended side effect of dramatically slowing down some other part of the application.
In the old days (yes, that was last year), we used to recommend creating unit tests to monitor performance issues. The idea was that these tests would give you an early warning when performance started to exceed some preset limit: you learn about this during testing, not after you deploy. And, indeed, we still recommend doing that, as we’ll see next. However, this kind of isolated performance testing isn’t the whole picture, and at the end of this section we’ll have suggestions for other kinds of performance tests.
Let’s start out with a slightly artificial scenario. We need to know whether our store controller can handle creating 100 orders within three seconds. We want to do this against a database containing 1,000 products (as we suspect that the number of products might be significant). How can we write a test for this?
To create all these products, let’s use a dynamic fixture.
Download depot_r/test/fixtures/performance/products.yml
<% 1.upto(1000) do |i| %>
product_<%= i %>:
  id:          <%= i %>
  title:       Product Number <%= i %>
  description: My description
  image_url:   product.gif
  price:       1234
<% end %>
Notice that we’ve put this fixture file over in the performance subdirectory of the fixtures directory. The name of a fixture file must match a database table name, so we can’t have multiple fixtures for the products table in the same
directory. We’d like to reserve the regular fixtures directory for test data to be used by conventional unit tests, so we’ll simply put another products.yml file in a subdirectory.
Note that in the fixture we loop from 1 to 1,000. It’s initially tempting to use 1000.times do |i|..., but this doesn’t work. The times method generates numbers from 0 to 999, and if we pass 0 as the id value to MySQL, it’ll ignore it and use an autogenerated key value instead. This could result in a key collision.
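The difference between the two iterators is easy to check in plain Ruby:

```ruby
# upto is inclusive of both endpoints, so ids start at 1...
upto_ids = []
1.upto(5) { |i| upto_ids << i }
upto_ids    # => [1, 2, 3, 4, 5]

# ...while times yields 0 up to (but not including) the argument,
# which is what would hand MySQL the troublesome id of 0.
times_ids = []
5.times { |i| times_ids << i }
times_ids   # => [0, 1, 2, 3, 4]
```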
Now we need to write a performance test. Again, we want to keep them separate from the nonperformance tests, so we create a file called order_speed_test.rb in the directory test/performance. As we’re testing a controller, we’ll base the test on a standard functional test (and we’ll cheat by copying in the boilerplate from store_controller_test.rb). After a superficial edit, it looks like this.
require File.dirname(__FILE__) + '/../test_helper'
require 'store_controller'

# Re-raise errors caught by the controller.
class StoreController; def rescue_action(e) raise e end; end

class OrderSpeedTest < Test::Unit::TestCase
  def setup
    @controller = StoreController.new
    @request    = ActionController::TestRequest.new
    @response   = ActionController::TestResponse.new
  end
end
Let’s start by loading the product data. Because we’re using a fixture that isn’t in the regular fixtures directory, we have to override the default Rails path.
Download depot_r/test/performance/order_speed_test.rb
self.fixture_path = File.join(File.dirname(__FILE__), "../fixtures/performance")
fixtures :products
We’ll need some data for the order form; we’ll use the same hash of values we used in the integration test. Finally we have the test method itself.
Download depot_r/test/performance/order_speed_test.rb
def test_100_orders
  Order.delete_all
  LineItem.delete_all
  @controller.logger.silence do
    elapsed_time = Benchmark.realtime do
      100.downto(1) do |prd_id|
        cart = Cart.new
        cart.add_product(Product.find(prd_id))
        post :save_order,
             { :order => DAVES_DETAILS },
             { :cart  => cart }
        assert_redirected_to :action => :index
      end
    end
    assert_equal 100, Order.count
    assert elapsed_time < 3.00
  end
end
This code uses the Benchmark.realtime method, which is part of the standard Ruby library. It runs a block of code and returns the elapsed time (as a floating-point number of seconds). In our case, the block creates 100 orders using 100 products from the 1,000 we created (in reverse order, just to add some spice).
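Benchmark.realtime needs nothing from Rails; in isolation it looks like this (the block being timed here is an arbitrary stand-in for real work):

```ruby
require 'benchmark'

# realtime runs the block once and returns the wall-clock
# elapsed time as a floating-point number of seconds.
elapsed = Benchmark.realtime do
  100_000.times { |i| i * i }
end

puts format("squaring 100,000 integers took %.4f seconds", elapsed)
```

The Benchmark module also offers bm and bmbm for tabulated comparisons of several blocks, but realtime is all this test needs.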
You’ll notice the code has one other tricky feature.
Download depot_r/test/performance/order_speed_test.rb
@controller.logger.silence do
  ...
end
By default, Rails traces all the work it does processing our 100 orders out to the log file (test.log). This logging turns out to add significant overhead, so we turn it off by running the code inside a logger.silence block. On my G5, this reduces the time taken to execute the block by about 30%. As we’ll see in a minute, there are better ways to silence logging in real production code.
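silence isn’t part of Ruby’s standard Logger class; Active Support adds it. A simplified, runnable sketch of the idea (not the actual Rails implementation) is to raise the log level for the duration of a block, so lower-severity messages are dropped:

```ruby
require 'logger'
require 'stringio'

class Logger
  # Simplified sketch of Active Support's silence: temporarily raise
  # the level so messages below temporary_level are dropped, then
  # restore the old level even if the block raises.
  def silence(temporary_level = Logger::ERROR)
    old_level, self.level = level, temporary_level
    yield self
  ensure
    self.level = old_level
  end
end

buffer = StringIO.new
log = Logger.new(buffer)

log.silence do
  log.info  "suppressed"    # below ERROR, so dropped
  log.error "still logged"  # ERROR passes through
end
log.info "logged again"     # level restored after the block
```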
Let’s run the performance test.
depot> ruby test/performance/order_speed_test.rb
...
Finished in 3.840708 seconds.
1 tests, 102 assertions, 0 failures, 0 errors
It runs fine in the test environment. However, performance issues normally rear their heads in production, and that’s where we’d like to be able to monitor our application. Fortunately we have some options in that environment, too.
Profiling and Benchmarking
If you simply want to measure how a particular method (or statement) is performing, you can use the script/profiler and script/benchmarker scripts that Rails provides with each project. The benchmarker script tells you how long a method takes, while the profiler tells you where each method spends its time. The benchmarker gives relatively accurate elapsed times, while the profiler adds a significant overhead—its absolute times aren’t that important, but the relative times are.