Rails file cache -- by eLobato
Concurrent access to cache files by different processes can easily make your tests fail. Give each process its own file cache store to avoid this problem.
Add to config/environments/test.rb:
config.cache_store = :file_store, Rails.root.join("tmp", "cache", "paralleltests#{ENV['TEST_ENV_NUMBER']}")
Rails Sprockets file cache -- by jordan-brough
Sprockets uses a default cache path of <rails-root>/tmp/cache/assets/<environment>. Concurrent usage of the same cache directory can cause sporadic problems like "ActionView::Template::Error: end of file reached".
Add to config/environments/test.rb:
if ENV['TEST_ENV_NUMBER']
assets_cache_path = Rails.root.join("tmp/cache/assets/paralleltests#{ENV['TEST_ENV_NUMBER']}")
Rails.application.config.assets.cache = Sprockets::Cache::FileStore.new(assets_cache_path)
end
With sphinx -- by ujh
In config/sphinx.yml, in the test section:
test:
mysql41: <%= 9313 + ENV['TEST_ENV_NUMBER'].to_i %>
indices_location: <%= File.join(Rails.root, "db", "sphinx", "test#{ENV['TEST_ENV_NUMBER']}") %>
configuration_file: <%= File.join(Rails.root, "config", "test#{ENV['TEST_ENV_NUMBER']}.sphinx.conf")%>
log: <%= File.join(Rails.root, "log", "test#{ENV['TEST_ENV_NUMBER']}.searchd.log") %>
query_log: <%= File.join(Rails.root, "log", "test#{ENV['TEST_ENV_NUMBER']}.searchd.query.log") %>
binlog_path: <%= File.join(Rails.root, "tmp", "binlog", "test#{ENV['TEST_ENV_NUMBER']}") %>
pid_file: <%= File.join(Rails.root, "tmp", "pids", "test#{ENV['TEST_ENV_NUMBER']}.sphinx.pid") %>
If the solution above doesn't work, try this alternative approach, recommended by pat:
test:
mysql41: <%= ENV['TEST_ENV_NUMBER'].to_i + 9307 %>
pid_file: <%= File.join(Rails.root, "tmp", "searchd.#{ENV['TEST_ENV_NUMBER']}.pid") %>
indices_location: <%= File.join(Rails.root, "db", "sphinx", "#{ENV['TEST_ENV_NUMBER']}") %>
configuration_file: <%= File.join(Rails.root, "config", "test.#{ENV['TEST_ENV_NUMBER']}.sphinx.conf") %>
binlog_path: <%= File.join(Rails.root, "db", "sphinx", "#{ENV['TEST_ENV_NUMBER']}", "binlog") %>
With capybara(~>0.4.0)+selenium -- by rgo
Capybara.server_port = 9887 + ENV['TEST_ENV_NUMBER'].to_i
With capybara(=0.3.9)/Rails 2.3 -- by xunker
Add to features/support/env.rb:
if ENV['TEST_ENV_NUMBER']
class Capybara::Server
def find_available_port
@port = 9887 + ENV['TEST_ENV_NUMBER'].to_i
@port += 1 while is_port_open?(@port) and not is_running_on_port?(@port)
end
end
end
With Selenium in Docker -- by ZimbiX
In order to avoid browser crashes due to /dev/shm (shared memory) being too small, specify a larger shm_size.
If you're running Docker-in-Docker, the alternative of mounting /dev/shm won't work unless you increase the shm_size of the outer container.
e.g., with Docker Compose:
services:
selenium:
image: selenium/standalone-chrome:3.141.59@sha256:d0ed6e04a4b87850beb023e3693c453b825b938af48733c1c56fc671cd41fe51
shm_size: 1G
With rspec_junit_formatter -- by jgarber
I've had better results with rspec_junit_formatter than with ci_reporter. Parallelizing it is easy!
Add this to .rspec_parallel:
--format RspecJunitFormatter
--out tmp/rspec<%= ENV['TEST_ENV_NUMBER'] %>.xml
Then you can configure your CI to publish the JUnit test result report files.
- Jenkins: use tmp/rspec*.xml for "Test report XMLs".
- Semaphore 2.0: can also handle multiple files, e.g. test-results publish tmp/rspec*.xml (see the docs for more detail on setup).
- GitLab-CI: see the gradle example.
@danielheath: I've had corrupted xml files with larger numbers of concurrent processes - not sure this is reliable.
dacook: to make all specs appear in the same suite, here's a quick transform to give them all the same name:
sed -i 's/name="rspec[0-9]"/name="rspec"/' tmp/junit*.xml # rename so they get combined
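If your CI tool wants a single report instead of one file per process, the per-process files can be combined. The following is a sketch of such a merge step; merge_junit_reports is a hypothetical helper, not part of parallel_tests, and it assumes each input file has a single <testsuite> root as emitted by RspecJunitFormatter.

```ruby
# Hypothetical helper: merge the per-process JUnit files (tmp/rspec*.xml)
# into a single <testsuite> for CI tools that expect one report.
require "rexml/document"

def merge_junit_reports(paths, out_path)
  merged = REXML::Document.new('<testsuite name="rspec"/>')
  tests = failures = 0
  paths.each do |path|
    suite = REXML::Document.new(File.read(path)).root
    tests += suite.attributes["tests"].to_i
    failures += suite.attributes["failures"].to_i
    # copy every <testcase> into the combined suite
    suite.each_element("testcase") { |tc| merged.root.add_element(tc.deep_clone) }
  end
  merged.root.add_attributes("tests" => tests.to_s, "failures" => failures.to_s)
  File.write(out_path, merged.to_s)
end
```

You could call it as merge_junit_reports(Dir["tmp/rspec*.xml"], "tmp/junit.xml") after all processes finish.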
With ci_reporter for rspec -- by morganchristiansson
export CI_REPORTS=results
Add spec/parallel_specs.opts with the contents:
--format progress
--require ci/reporter/rake/rspec_loader
--format CI::Reporter::RSpec:/dev/null
Our project has the following in test/test_helper.rb
if ENV["CI_REPORTS"] == "results"
require "ci/reporter/rake/test_unit_loader"
end
Run the tasks like this:
rake "parallel:features[,,--format progress --format junit --out ${CI_REPORTS} --no-profile -r features]"
Or without rake like this:
bundle exec $(bundle show parallel_tests)/bin/parallel_test --type features -o '--format progress --format junit --out ${CI_REPORTS} --no-profile -r features'
Add the following to RAILS_ROOT/.rspec_parallel:
--format progress
--require ci/reporter/rake/rspec_loader
--format CI::Reporter::RSpec
Run rake parallel:spec; rspec then generates reports in spec/reports.
For more information on how to configure ci_reporter, see the advanced usage section at http://caldersphere.rubyforge.org/ci_reporter/
With ci_reporter for test_unit -- by phoet
See issue 29 for more information:
# add the ci_reporter to create reports for test-runs, since parallel_tests is not invoked through rake
require 'socket'
puts "running on #{Socket.gethostname}"
if /buildserver/ =~ Socket.gethostname
require 'ci/reporter/test_unit'
module Test
module Unit
module UI
module Console
class TestRunner
def create_mediator(suite)
# swap in ci_reporter custom mediator
return CI::Reporter::TestUnit.new(suite)
end
end
end
end
end
end
end
With DatabaseCleaner for RSpec -- by sciprog
See issue 66 for more information.
Do not use the truncation strategy in DatabaseCleaner, in your RSpec config.
This strategy seems to cause a bottleneck which will negate any gain made
through parallelization of your tests. If possible, use the transaction strategy
over the truncation strategy.
Note: This issue only seems to affect specs, not features.
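A minimal sketch of the transaction-strategy setup suggested above, using standard DatabaseCleaner RSpec wiring (the one-time truncation before the suite is an assumption, not required):

```ruby
# A minimal sketch: prefer the transaction strategy; a single
# truncation before the suite is cheap and keeps the DB clean.
RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation) # one-time cleanup
    DatabaseCleaner.strategy = :transaction
  end

  config.around(:each) do |example|
    DatabaseCleaner.cleaning { example.run }
  end
end
```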
If you have put your features into subdirectories you may have problems running them in parallel as it will not find your step_definitions. To work around this I put all my features into the features directory.
You have to require the features/ folder for cucumber in order to load the step definitions.
rake "parallel:features[4, '', '-rfeatures/']"
If you want it to work with rake parallel:features, add -rfeatures/ to the end of std_opts in config/cucumber.yml.
rake parallel:features[,, --retry 1]
The 1 is the number of times you want to retry; a rerun file is no longer required.
If you do not have enough scenarios to warrant a separate instance of external service as described in sphinx section above, create a "mutex" in your env.rb file
Before("@solr") do
#we do not want solr tests to run in parallel, so let's simulate a mutex
while File.exist?("tmp/cucumber_solr")
sleep(0.2)
end
File.open("tmp/cucumber_solr", "w") {}
Sunspot.session = $original_sunspot_session
Sunspot.remove_all!
# or do other things
end
After("@solr") do
File.delete("tmp/cucumber_solr")
end
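An alternative sketch using an OS-level advisory lock (File#flock), which avoids the check-then-create race in the polling loop above. The helper names are hypothetical; you would call them from the Before("@solr") and After("@solr") hooks:

```ruby
# Hypothetical helpers: call acquire_solr_lock in Before("@solr") and
# release_solr_lock in After("@solr") instead of polling for a marker file.
def acquire_solr_lock(path = "tmp/cucumber_solr.lock")
  lock = File.open(path, File::RDWR | File::CREAT)
  lock.flock(File::LOCK_EX) # blocks until no other process holds the lock
  lock
end

def release_solr_lock(lock)
  lock.flock(File::LOCK_UN)
  lock.close
end
```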
When running with cucumber + capybara + selenium-webdriver (with Firefox), errors may occur because too many Firefox instances fire up at once. To fix this, add the following to features/support/env.rb:
unless (env_no = ENV['TEST_ENV_NUMBER'].to_i).zero?
# As described in the readme
Capybara.server_port = 8888 + env_no
# Enforces a sleep time; I needed to multiply by 10 to achieve consistent results on
# my 8-core VM, but a smaller factor may work.
sleep env_no * 10
end
With action_mailer_cache_delivery (~> 0.3.2) -- by p0deje
You may get unexpected errors like EOFError. If so, make sure the cache files differ between processes. Change your config/environments/test.rb:
config.action_mailer.cache_settings = { :location => "#{Rails.root}/tmp/cache/action_mailer_cache_delivery#{ENV['TEST_ENV_NUMBER']}.cache" }
With sunspot
It's best used with sunspot-rails-tester; then you don't need to run solr with a rake task. You only have to update config/sunspot.yml, like this:
test:
solr:
hostname: localhost
port: <%= 8981 + ENV['TEST_ENV_NUMBER'].to_i %>
log_level: WARNING
data_path: <%= File.join(::Rails.root, 'solr', 'data', ::Rails.env, ENV['TEST_ENV_NUMBER'].to_i.to_s) %>
An alternative configuration:
test:
solr:
port: <%= 8981 + ENV['TEST_ENV_NUMBER'].to_i %>
solr_home: <%= File.join(::Rails.root, 'solr', 'data', ::Rails.env, ENV['TEST_ENV_NUMBER'].to_i.to_s) %>
With TeamCity -- by aaronjensen
TeamCity has its own logger, so you'll need to use the --serialize-stdout flag when you run anything in parallel. Use a custom rake task or script and call parallel_rspec/parallel_test/parallel_cucumber directly.
You will need this rake task so that TeamCity calculates the number of specs correctly. Note that it does not pick up duplicated test names within a spec: TeamCity treats these as duplicates and does not add them to the total spec count.
namespace :teamcity do
task parallel_rspec: :environment do
sh('parallel_rspec spec --serialize-stdout')
end
end
With Spork
Spork minimizes the Rails load time by performing it only once per core. On an 8-virtual-core rMBP this saved 20 seconds: 1000 specs that used to take 65 seconds ran in just under 15 seconds.
The problem is that Spork does not seem to use the separate database instances configured by rake parallel:create, so we get a slew of DB lock issues. However, if we configure RSpec to use in-memory databases, this problem goes away. This post shows how; basically, just add
setup_sqlite_db = lambda do
ActiveRecord::Base.establish_connection(adapter: 'sqlite3', database: ':memory:')
load "#{Rails.root.to_s}/db/schema.rb" # use db agnostic schema by default
end
silence_stream(STDOUT, &setup_sqlite_db)
to the bottom of your spec_helper.rb, and voila!
parallel_tests, simplecov, and json do not seem to play well together, causing a number of errors of the form:
/.rvm/gems/ruby-1.9.3-p484@mercury/gems/json-1.8.1/lib/json/common.rb:155:in `parse': 795: unexpected token at 'null, (MultiJson::LoadError)
Until a better workaround is found, remove simplecov from your project.
Note: PARALLEL_TEST_GROUPS is an environment variable which has the number of test groups in use. It is unset if running without parallel_test.
#!/bin/bash
# custom_parallel_script.sh
# Usage: parallel_test --exec ./custom_parallel_script.sh
FILES=$(find . -type f -name \*.rb)
# workaround for the fact that TEST_ENV_NUMBER is '' for the 1st group - default to 1 if unset
LOCAL_TEST_ENV_NUMBER=${TEST_ENV_NUMBER:-1}
i=0
for f in $FILES; do
# groups are numbered 1..N while i % N is 0..N-1, hence the +1
if [[ -z "$PARALLEL_TEST_GROUPS" || $(( i % PARALLEL_TEST_GROUPS + 1 )) -eq $LOCAL_TEST_ENV_NUMBER ]]; then
echo $f # real action here
fi
((i++))
done
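The same modulo partitioning, sketched in Ruby (files_for_group is a hypothetical name); note that group numbers are 1-based while the file index is 0-based:

```ruby
# Hypothetical helper mirroring the shell script: pick the files belonging
# to this group. Groups are numbered 1..total_groups; indices are 0-based.
def files_for_group(files, group_number, total_groups)
  return files if total_groups.nil? || total_groups < 2
  files.each_with_index
       .select { |_file, i| i % total_groups + 1 == group_number }
       .map(&:first)
end
```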
Tear down after all processes are done (RSpec) -- by demental
Say your specs rely on building a JS app before the suite, and the build files must be deleted after the suite is finished. after(:suite) will run as soon as the first process finishes, so all the other tests still running are likely to fail.
As a workaround, you can create a task and redefine rake parallel:spec:
require_relative '../../spec/support/test_tools.rb'
namespace :parallel do
desc "Run parallel specs and clean artifacts after parallel testing"
task :clean_spec => [:spec, :clean]
desc "Remove artifacts after parallel tests"
task :clean do
TestTools.remove_build_files
end
end
# In spec_helper.rb
config.after(:suite) do
TestTools.remove_build_files unless ENV.key? 'TEST_ENV_NUMBER'
end
# spec/support/test_tools.rb
module TestTools
def self.remove_build_files
# Doing the cleanup here...
FileUtils.rm_rf(Rails.root.join('tmp','test_build'))
end
end
If running the regular rspec command, cleanup is done in after(:suite) as usual. Otherwise, it runs after the rake task parallel:spec finishes. Credit goes to seuros for this workaround.
With searchkick -- by emaxi
Set the index name based on TEST_ENV_NUMBER for each model using searchkick, e.g. app/models/product.rb:
class Product < ActiveRecord::Base
searchkick index_name: "products#{ENV['TEST_ENV_NUMBER']}"
end
You can also override Searchkick.env via an initializer:
if Rails.env.test?
Searchkick.env = "test#{ENV['TEST_ENV_NUMBER']}"
end
In searchkick 2.2.1 or later you can use Searchkick.index_suffix:
Searchkick.index_suffix = ENV["TEST_ENV_NUMBER"]
https://github.com/ankane/searchkick#parallel-tests
With poltergeist -- by emaxi
Use a different port for each process in spec/spec_helper.rb:
Capybara.register_driver :poltergeist do |app|
options = {
port: 51674 + ENV['TEST_ENV_NUMBER'].to_i
}
Capybara::Poltergeist::Driver.new(app, options)
end
With mongoid
In config/mongoid.yml, in the test section:
Mongoid <= 4.x
test:
sessions:
default:
database: app_name_test<%= ENV['TEST_ENV_NUMBER'] %>
hosts:
- localhost:27017
Mongoid >= 5.x
test:
clients:
default:
database: app_name_test<%= ENV['TEST_ENV_NUMBER'] %>
hosts:
- localhost:27017
With headless -- by sauliusgrigaitis
Headless.new(display: 100, reuse: true, destroy_at_exit: false).start
With simplecov -- by a grateful user
To print the simplecov report once, after ALL processes have finished (instead of one report per process), do:
spec_helper.rb
if ENV['COVERAGE'] == 'true'
require 'simplecov'
require 'simplecov-console'
require 'parallel_tests' # for ParallelTests.number_of_running_processes
SimpleCov.formatter = SimpleCov::Formatter::Console
SimpleCov.start 'rails'
if ENV['TEST_ENV_NUMBER'] # parallel specs
SimpleCov.at_exit do
result = SimpleCov.result
result.format! if ParallelTests.number_of_running_processes <= 1
end
end
end
RSpec.configure do |config|
...
Gemfile:
group :test do
gem 'simplecov'
gem 'simplecov-console'
end
Extras: Reduce noise in the output:
spec_helper.rb or rails_helper.rb
RSpec.configure do |config|
...
# mute noise for parallel tests
config.silence_filter_announcements = true if ENV['TEST_ENV_NUMBER']
With elasticsearch-extensions -- by thatandromeda
If you expect to be starting test clusters in parallel, the options with which you initialize them (Elasticsearch::Extensions::Test::Cluster.start(**es_options)) must include the following:
es_options = {
port: 9250 + ENV['TEST_ENV_NUMBER'].to_i,
cluster_name: "cluster#{ENV['TEST_ENV_NUMBER']}",
path_data: "/tmp/elasticsearch_test#{ENV['TEST_ENV_NUMBER']}"
}
Isolating the ports isn't enough: the nodes you create with each start will find each other, causing the health checks on cluster startup to fail because the number of nodes in the cluster won't match the number specified in its arguments. You have to separate the clusters so they can't find one another.
With redis
Redis supports up to 16 databases (0 to 15). If you are clearing the cache before or after every test, make sure that each process is using its own redis database.
db = ENV['TEST_ENV_NUMBER'].nil? ? 1 : (ENV['TEST_ENV_NUMBER'].presence || '1').to_i - 1
Redis.new(db: db)
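The one-liner above is equivalent to the following, spelled out (redis_test_db is a hypothetical name): without parallel_tests the db stays at 1; under parallel_tests the first process (TEST_ENV_NUMBER is '' or '1') gets db 0, the second db 1, and so on.

```ruby
# Spelled-out equivalent of the one-liner above (hypothetical helper name).
# nil  -> 1 (not running under parallel_tests, keep the default db)
# ''   -> 0 (first parallel process), '2' -> 1, '3' -> 2, ...
def redis_test_db(test_env_number)
  return 1 if test_env_number.nil?
  number = test_env_number.empty? ? 1 : test_env_number.to_i
  number - 1
end
```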
Also, make sure to use redis.flushdb and not redis.flushall.
With carrierwave -- by yoav
Removing all files in the uploads directory before or after each test will break other processes. Instead, remove all files once, before the suite is run.
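One way to make that cleanup safe under parallel runs is to give each process its own uploads root and clear only that directory before its suite. A sketch (the CarrierWave root setting is standard; the paths are assumptions):

```ruby
# A sketch: per-process uploads root, cleared once before this
# process's suite, so it never touches another process's files.
require "fileutils"

CarrierWave.configure do |config|
  config.root = Rails.root.join("tmp/uploads#{ENV['TEST_ENV_NUMBER']}")
end

RSpec.configure do |config|
  config.before(:suite) do
    FileUtils.rm_rf(CarrierWave.root) # only this process's directory
  end
end
```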
In config/storage.yml, modify root to include the test env number:
test:
service: Disk
root: <%= Rails.root.join("tmp/storage#{ENV['TEST_ENV_NUMBER']}") %>
!! Add your own experience / gotchas !!