
Path should be a string containing the URI to be invoked. It need not have a protocol or host component; if it does, and the protocol is HTTPS, an HTTPS request will be simulated. If the params parameter is given, it should be a hash of key/value pairs or a string containing encoded form data.5

get "/store/index" assert_response :success

get "/store/product_info" , :id => 123, :format = "long"

get_via_redirect(path, args={})
post_via_redirect(path, args={})

Performs a get or post request. If the response is a redirect, follows it (and any subsequent redirects) until a response that isn’t a redirect is returned.
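For example, a sketch exercising a hypothetical login action that redirects on success (the route, parameters, and final path here are assumptions, not part of the Depot application):

post_via_redirect "/login", :name => "dave", :password => "secret"
assert_response :success
assert_equal "/home", path    # path holds the URI of the final request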

host!(name)

Sets the host name to use in the next request. Same as setting the host attribute.

https!(use_https=true)

If passed true (or with no parameter), the subsequent requests will simulate using the HTTPS protocol.

https?

Returns true if the HTTPS flag is set.
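A sketch of how the two fit together in a test (the checkout path is an assumption):

https!                    # requests from here on simulate HTTPS
get "/store/checkout"
assert https?
https!(false)             # back to plain HTTP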

open_session { |sess| ... }

Creates a new session object. If a block is given, passes the session to the block; otherwise returns it.
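This is what lets a single test drive several users at once. A minimal sketch, using the store’s index action:

# Each session carries its own cookies, host, and HTTPS settings.
dave = open_session
mary = open_session

dave.get "/store/index"
mary.get "/store/index"

dave.assert_response :success
mary.assert_response :success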

redirect?()

Returns true if the last response was a redirect.

reset!()

Resets the session, allowing a single test to reuse a session.
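For example (a sketch):

get "/store/index"    # first session
reset!
get "/store/index"    # a fresh session; no cookies carried over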

url_for(options)

Constructs a URL given a set of options. This can be used to generate the parameter to get and post.

get url_for(:controller => "store", :action => "index")

13.5 Performance Testing

Testing isn’t just about whether something does what it should. We might also want to know whether it does it fast enough.

5. application/x-www-form-urlencoded or multipart/form-data


Before we get too deep into this, here’s a warning. Most applications perform just fine most of the time, and when they do start to get slow, it’s often in ways we would never have anticipated. For this reason, it’s normally a bad idea to focus on performance early in development. Instead, we recommend using performance testing in two scenarios, both late in the development process.

When you’re doing capacity planning, you’ll need data such as the number of boxes needed to handle your anticipated load. Performance testing can help produce (and tune) these figures.

When you’ve deployed and you notice things going slowly, performance testing can help isolate the issue. And once the issue is isolated, leaving the test in place helps prevent it from arising again.

A common example of this kind of problem is database-related performance issues. An application might be running fine for months, and then someone adds an index to the database. Although the index helps with a particular problem, it has the unintended side effect of dramatically slowing down some other part of the application.

In the old days (yes, that was last year), we used to recommend creating unit tests to monitor performance issues. The idea was that these tests would give you an early warning when some operation’s runtime started to exceed a preset limit: you’d learn about the problem during testing, not after you deploy. And, indeed, we still recommend doing that, as we’ll see next. However, this kind of isolated performance testing isn’t the whole picture, and at the end of this section we’ll have suggestions for other kinds of performance tests.

Let’s start out with a slightly artificial scenario. We need to know whether our store controller can handle creating 100 orders within three seconds. We want to do this against a database containing 1,000 products (as we suspect that the number of products might be significant). How can we write a test for this?

To create all these products, let’s use a dynamic fixture.

Download depot_r/test/fixtures/performance/products.yml

<% 1.upto(1000) do |i| %>
product_<%= i %>:
  id:          <%= i %>
  title:       Product Number <%= i %>
  description: My description
  image_url:   product.gif
  price:       1234
<% end %>

Notice that we’ve put this fixture file in the performance subdirectory of the fixtures directory. The name of a fixture file must match a database table name, so we can’t have multiple fixtures for the products table in the same directory. We’d like to reserve the regular fixtures directory for test data used by conventional unit tests, so we’ll simply put another products.yml file in a subdirectory.

Note that in the fixture we loop from 1 to 1,000. It’s initially tempting to use 1000.times do |i|..., but this doesn’t work: the times method generates numbers from 0 to 999, and if we pass 0 as the id value to MySQL, it’ll ignore it and use an autogenerated key value instead, possibly resulting in a key collision.
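A quick illustration of the difference:

1000.times   { |i| puts i }   # prints 0, 1, ..., 999
1.upto(1000) { |i| puts i }   # prints 1, 2, ..., 1000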

Now we need to write a performance test. Again, we want to keep performance tests separate from the nonperformance tests, so we create a file called order_speed_test.rb in the directory test/performance. As we’re testing a controller, we’ll base the test on a standard functional test (and we’ll cheat by copying in the boilerplate from store_controller_test.rb). After a superficial edit, it looks like this.

require File.dirname(__FILE__) + '/../test_helper'
require 'store_controller'

# Reraise errors caught by the controller.
class StoreController; def rescue_action(e) raise e end; end

class OrderSpeedTest < Test::Unit::TestCase
  def setup
    @controller = StoreController.new
    @request    = ActionController::TestRequest.new
    @response   = ActionController::TestResponse.new
  end
end

Let’s start by loading the product data. Because we’re using a fixture that isn’t in the regular fixtures directory, we have to override the default Rails path.

Download depot_r/test/performance/order_speed_test.rb

self.fixture_path = File.join(File.dirname(__FILE__), "../fixtures/performance")
fixtures :products

We’ll need some data for the order form; we’ll reuse the DAVES_DETAILS hash of values from the integration test.
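If that test isn’t to hand, the constant looks something like this (the exact values here are assumptions; any data that passes the Order model’s validations will do):

# Assumed shape of the order data from the integration test.
DAVES_DETAILS = {
  :name     => "Dave Thomas",
  :address  => "123 The Street",
  :email    => "dave@example.com",
  :pay_type => "check"
}

Finally, we have the test method itself.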

Download depot_r/test/performance/order_speed_test.rb

def test_100_orders
  Order.delete_all
  LineItem.delete_all

  @controller.logger.silence do
    elapsed_time = Benchmark.realtime do
      100.downto(1) do |prd_id|
        cart = Cart.new
        cart.add_product(Product.find(prd_id))
        post :save_order,
             { :order => DAVES_DETAILS },
             { :cart  => cart }
        assert_redirected_to :action => :index
      end
    end

    assert_equal 100, Order.count
    assert elapsed_time < 3.00
  end
end

This code uses the Benchmark.realtime method, which is part of the standard Ruby library. It runs a block of code and returns the elapsed time (as a floating-point number of seconds). In our case, the block creates 100 orders using 100 products from the 1,000 we created (in reverse order, just to add some spice).
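Benchmark.realtime is handy outside Rails, too; a minimal standalone sketch:

require 'benchmark'

elapsed = Benchmark.realtime do
  10_000.times { Math.sqrt(2.0) }     # any work you want to time
end
puts "block took #{elapsed} seconds"  # elapsed is a Float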

You’ll notice the code has one other tricky feature.

Download depot_r/test/performance/order_speed_test.rb

@controller.logger.silence do
  ...
end

By default, Rails traces to the log file (test.log) all the work it does processing our 100 orders. It turns out that this is quite an overhead, so we turn off logging by wrapping the work in this logger.silence block. On my G5, this reduces the time taken to execute the block by about 30%. As we’ll see in a minute, there are better ways to silence logging in real production code.
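One such approach, sketched here under the assumption that you control the environment files, is to raise the logger’s threshold so routine tracing is skipped altogether:

# config/environments/production.rb (a sketch)
config.log_level = :warn    # drop :debug and :info messages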

Let’s run the performance test.

depot> ruby test/performance/order_speed_test.rb

...

Finished in 3.840708 seconds.

1 tests, 102 assertions, 0 failures, 0 errors

It runs fine in the test environment. However, performance issues normally rear their heads in production, and that’s where we’d like to be able to monitor our application. Fortunately we have some options in that environment, too.

Profiling and Benchmarking

If you simply want to measure how a particular method (or statement) is performing, you can use the script/profiler and script/benchmarker scripts that Rails provides with each project. The benchmarker script tells you how long a method takes to run, while the profiler tells you where that time is spent. The benchmarker gives relatively accurate elapsed times; the profiler adds a significant overhead, so its absolute times aren’t that important, but the relative times are.
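Their command lines look like this (a sketch assuming the Rails 1.x conventions, where benchmarker takes an optional leading iteration count and profiler a trailing one):

depot> script/benchmarker 100 'Product.find(:first)'
depot> script/profiler 'Product.find(:first)' 100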
