Image by Juan-Calderon.
When you practice test-driven development, you usually only need to run a small number of tests to validate your recent code changes. Unfortunately, things change once you start refactoring: refactoring your models affects your entire application, so to keep things from going down the drain you'll need to run all (or most of) your tests constantly. And that's where things get tedious.
At Railsonfire we develop our Ruby on Rails web app test-driven with RSpec. Most of our specs are high-level request specs (essentially integration tests), which are slow compared to controller or model specs. One reason for this slowness is that request specs exercise the whole application stack. Another is that we changed the default Capybara driver from Rack::Test to Poltergeist.
“Poltergeist? But that’s so much slower!”
Right! But we chose Poltergeist because it tests a web application more like a user experiences it. While Rack::Test only inspects the HTML (or whatever) response from your application, Poltergeist drives a headless WebKit browser, provided by PhantomJS.
Because of this additional safety we decided to make Poltergeist the default Capybara driver.
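A minimal sketch of such a setup could look like the following (the file path and driver options are illustrative assumptions, not taken from our actual configuration):

```ruby
# spec/spec_helper.rb -- sketch: make Poltergeist the default Capybara driver.
# Requires the `poltergeist` gem and a PhantomJS binary on the PATH.
require "capybara/rspec"
require "capybara/poltergeist"

Capybara.register_driver :poltergeist do |app|
  # js_errors: true makes specs fail on JavaScript errors in the page
  Capybara::Poltergeist::Driver.new(app, js_errors: true)
end

Capybara.default_driver    = :poltergeist
Capybara.javascript_driver = :poltergeist
```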
Development vs. Railsonfire CI
Test-driven development will eventually take you to the point of misery where running all your specs slows down your productivity unbearably. Poltergeist got us to that point at record speed. So what to do? On the one hand we wanted to retain the quality of our web app; on the other hand we wanted to keep developing without feeling the need to poke our eyes out while waiting for the specs to pass.
Running our spec suite on my computer with this setup took 8:45 minutes.
The solution was Railsonfire itself: we would speed up the specs in development as much as possible and let the continuous integration server do all the cumbersome work.
Step one: Use Rack::Test by default
In development we switched the default Capybara driver back to Rack::Test, keeping Poltergeist only for the specs that need a real browser. This little change reduced the execution time to 5:00 minutes.
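A sketch of that driver split, assuming Capybara's standard convention of tagging browser-dependent specs with `js: true` (the exact metadata we used isn't stated in the post):

```ruby
# spec/spec_helper.rb -- sketch: fast Rack::Test by default in development,
# Poltergeist only for specs that need a real browser
require "capybara/rspec"
require "capybara/poltergeist"

Capybara.default_driver    = :rack_test   # fast, no browser process
Capybara.javascript_driver = :poltergeist # picked up by specs tagged `js: true`
```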
Step two: Skip slowest specs
Some specs took up to 40 seconds to run. They covered quite complicated procedures that rarely changed, so we decided to skip every spec that took longer than 10 seconds during development.
RSpec provides tagging to achieve this: we tagged our slow specs with `speed: "slow"`. Running our specs with `rspec --tag ~speed:slow` reduced the execution time by half, to 2:29 minutes. Yay!
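The tag simply goes into the example group's metadata; a sketch with a hypothetical spec name:

```ruby
# spec/requests/deployment_spec.rb -- hypothetical example: a slow spec
# tagged so it can be excluded via `rspec --tag ~speed:slow`
require "spec_helper"

describe "Deployment pipeline", speed: "slow" do
  it "builds, tests and deploys a project" do
    # ... long-running end-to-end steps ...
  end
end
```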
Step three: Skip remote specs
Railsonfire integrates with a couple of external services like GitHub, and of course we also needed to verify that the communication with these services worked. But a developer's life is hard: sometimes you are on a train, on a plane, or there's simply no network reception. And all of a sudden many of your specs fail.
Therefore we removed the dependency on external services from as many specs as possible. We tagged all specs that still required internet access with `remote: true` and skipped them in development. `rspec --tag ~@remote --tag ~speed:slow` now finished in 1:28 minutes.
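Instead of passing the tag flags on every run, the exclusions can also live in the RSpec configuration; a sketch, where the `CI` environment-variable check is our own illustration of how to keep the full suite running on the build server:

```ruby
# spec/spec_helper.rb -- sketch: skip slow and remote specs locally,
# run everything on the CI server (the ENV check is illustrative)
RSpec.configure do |config|
  unless ENV["CI"]
    config.filter_run_excluding speed: "slow"
    config.filter_run_excluding remote: true
  end
end
```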
This stunning result was good enough for now, so we stopped optimizing here. But we will surely cover further optimizations in future blog posts.
By changing our test setup and skipping time-consuming tests in development we were able to cut test execution time by more than 83%. This way we could stay productive during development and still run extensive checks on our web application with the Railsonfire continuous integration server.
Here’s a final overview of the results after each optimization step:
| | number of specs | execution time (minutes) |
| --- | --- | --- |
| All tests with Poltergeist | 128 | 8:45 |
| All tests with Rack::Test/Poltergeist | 128 | 5:00 |
| Without slow specs (> 10 s) | 107 | 2:29 |
| Without slow and remote specs | 99 | 1:27 |