How we sped up the testing and build process of our Ember app at Brandnew: ~2.5x faster!

Tomek Niezurawski
7 min read · Jul 11, 2017

--

We are the leading creator-marketing platform, with an application for brands and agencies that helps them manage campaigns across different social media. This means a lot of reports, easy-to-fill forms, a messaging system for contacting creators, charts, statistics, advanced search, and much more.

The scale

With that introduction you can imagine the scale of the application we work on. Some more technical insights:

  • the project started 2 years ago
  • we have pushed about 3,000 commits
  • let’s say 1.5 front-end developers were involved at any given time ;)
  • ~39k lines of application code (~46k with comments and empty lines) across ~1.3k files
  • 1,364 test cases (including ESLint)
  • 270+ components
  • 94 dependencies (that’s a bit scary)

You might think that a 2-year-old project with a huge codebase and a lot of dependencies is probably outdated, but fear not: we write ES2017, use ESLint, and our core dependencies are up to date at the time of writing.
I mean, we use the latest stable releases 🚀.

Shipping fast is important

The ability to ship your application fast is in fact crucial. It should make you feel more comfortable making changes (a new feature, a bug fix, a refactor, you name it!).

This is also important for the company: being able to release hotfixes as soon as possible. Of course, the path to super convenient updates leads through Continuous Integration, a proper Code Review process, manual testing, and managing different stages of the product (e.g. staging). Here we will focus only on making things happen faster.

We use CircleCI. The “build” timings (in minutes) that we want to speed up are presented below. Keep in mind that all dependencies are cached by the CI, as this is the most common scenario:

  • Running Tests — 10:26
  • Making whole build — 12:04
  • With deployment — 18:20

So even if you fix a “one-char” bug that needs no investigation, and your teammate is waiting to approve your Pull Request… there is little chance the change can be delivered in under 25 minutes.

That would be the scenario where you open a Pull Request against master (production), but in practice you probably open it against the development branch (staging) and first keep the change on a feature branch.

So every minute we save in the build process can save us about 2.5 minutes in the whole delivery process.
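As a quick sanity check of that multiplier, here is one way the ~2.5 figure could come about. The split below (and the 0.5 weighting for the master build) is my own illustration, not a measured breakdown:

```javascript
// Illustrative sketch: one change typically triggers a build on the
// feature branch, on development (staging), and on master (production).
// Assumption: only about half of the master build benefits, since the
// rest of that run is dominated by deployment work.
const featureBuild = 1;
const developmentBuild = 1;
const masterBuildShare = 0.5; // assumed weighting, not measured
const minutesSavedPerBuild = 1;

const totalMinutesSaved =
  (featureBuild + developmentBuild + masterBuildShare) * minutesSavedPerBuild;
console.log(totalMinutesSaved); // 2.5
```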

An assumption worth testing.

Please keep in mind that this article is a journey. You may have better or worse results.

Run tests in parallel

In our current CircleCI plan we can use 3 containers at the same time, but this project uses only 1. Thanks to the awesome ember-exam addon we can split the tests.

The usage is extremely simple. Add it to your project and run:

$ ember exam --split=NUMBER_OF_PARTITIONS --parallel

At this point we know how to run test partitions in parallel. But we would like different machines to run different partitions. To run just the first of three partitions we will use this command:

$ ember exam --split=3 --partition=1 --parallel

Easy, right?

Things might be more complicated when you want to automate this on your CI. But with CircleCI and its environment variables it’s easy. Edit your circle.yml file and add:

# circle.yml
test:
  override:
    - npm run-script test -- --split=$CIRCLE_NODE_TOTAL --partition=`expr $CIRCLE_NODE_INDEX + 1`:
        parallel: true

So now we use CircleCI’s environment variables to divide the work between virtual machines. Please notice that we use npm run-script test here. Remember to edit your package.json file and change the test script to ember exam. If you need more details, check the fantastic post written by Michael Klein.
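The package.json change mentioned above looks like this (only the relevant part shown):

```json
{
  "scripts": {
    "test": "ember exam"
  }
}
```

With that in place, `npm run-script test -- --split=…` forwards the extra flags to ember-exam.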

The results are great!

  • Running Tests — 3:19 (~3.1x faster)
  • Making whole build — 5:30 (~2.2x faster)
  • With deployment — 11:10 (~1.6x faster)

When you look at the detailed timings you will notice that the containers do not run tests for exactly the same time. Look closely at the bars from ~2 min to ~5 min in the timeline, especially at container #1.

Timings per container

The partitions are unbalanced: tests are simply divided evenly across the 3 containers. If there are 300 tests to run, every container gets a partition with 100 tests.

There are two ways I can think of that could make the partitions more balanced:

  1. Decide at runtime which test should run next and hand it to an available container. As far as I understand, this is not possible with ember-exam.
  2. Keep historical execution times for all tests and use them to decide how to split them into more balanced partitions.
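The second idea can be sketched as a greedy split on historical timings: sort tests slowest-first, then always assign the next one to the currently lightest container. This is illustrative code with made-up test names and durations, not what ember-exam or CircleCI actually run internally:

```javascript
// Greedy "longest job first" split using historical durations (a sketch).
function balancePartitions(tests, containers) {
  // tests: [{ name, seconds }] — sort slowest-first so big tests place first
  const sorted = [...tests].sort((a, b) => b.seconds - a.seconds);
  const bins = Array.from({ length: containers }, () => ({ seconds: 0, tests: [] }));
  for (const test of sorted) {
    // assign each test to the container with the least total time so far
    const lightest = bins.reduce((min, b) => (b.seconds < min.seconds ? b : min));
    lightest.tests.push(test.name);
    lightest.seconds += test.seconds;
  }
  return bins;
}

// hypothetical timings for illustration only
const bins = balancePartitions(
  [
    { name: 'acceptance/campaigns', seconds: 90 },
    { name: 'acceptance/reports', seconds: 60 },
    { name: 'integration/chart', seconds: 30 },
    { name: 'unit/utils', seconds: 10 },
  ],
  2
);
console.log(bins.map((b) => b.seconds)); // → [ 100, 90 ]
```

A naive count-based split could have put both 90 s and 60 s tests on one container (150 s vs 40 s); the timing-aware split lands at 100 s vs 90 s.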

Bingo! The CircleCI folks had the same idea and have already built something like this. You can read more about it in their blog post.

Store tests execution time

CircleCI does all the magic for us; we just need to feed it the right data. That seems easy. We will change our circle.yml file a bit: set the reporter to xunit and silence the console output:

# circle.yml
test:
  override:
    - npm run-script test -- --split=$CIRCLE_NODE_TOTAL --partition=`expr $CIRCLE_NODE_INDEX + 1` --silent -r "xunit":
        parallel: true

The next step is reporting to a file, which we can configure in testem.js. I still prefer the default console output in development mode, so we will look for CircleCI’s environment variable and set the file path only when needed:

// testem.js
/* eslint-env node */
let options = {
  test_page: 'tests/index.html?hidepassed',
  disable_watching: true,
  launch_in_ci: [
    'Chrome'
  ],
  launch_in_dev: [
    'Chrome',
    'Firefox',
    'Safari'
  ],
  framework: 'qunit'
};

if (process.env.CIRCLE_TEST_REPORTS) {
  options.report_file = process.env.CIRCLE_TEST_REPORTS + '/junit/test-results.xml';
}

module.exports = options;

Argh! To be honest, I don’t see any difference in the timings… but this short config is not a waste of time for sure! Maybe future changes in ember-exam will change the situation.

For now, thanks to the collected data, we can investigate the slowest tests in our repository:

Test Summary in CircleCI

Gotcha! I wrote that test! I have been thinking about rewriting it for 2 months! I should probably do that instead of writing this post ;)

Update your machine

Currently we run our tests on Ubuntu 12.04. CircleCI gives us the opportunity to use Ubuntu 14.04.

Let’s check it on the newer Ubuntu!

  • Running Tests — 2:50 (~3.7x faster)
  • Making whole build — 5:00 (~2.4x faster)
  • With deployment — 10:40 (~1.7x faster)

I wondered: what is the reason for this speedup? Is it really the operating system? It’s hard to say, as the newer image also ships with a newer Chrome.

On Ubuntu 12.04 we were using Chrome 43; now we are using Chrome 54.

Update browser used for running tests

I know many people use PhantomJS and might ask why we use Chrome, but that’s a topic for another time. Let’s check what we can get by upgrading Chrome to the newest version in our builds (version 59 at the time of writing).

We will add these lines to the dependencies step in circle.yml:

# circle.yml
dependencies:
  cache_directories:
    - '~/downloads'
  pre:
    # download the latest Google Chrome if enabled by the environment variable $USE_LATEST_CHROME
    - if [[ $USE_LATEST_CHROME == true ]]; then if test -f "$HOME/downloads/use_chrome_stable_version.sh"; then sh $HOME/downloads/use_chrome_stable_version.sh; else curl -o $HOME/downloads/use_chrome_stable_version.sh --create-dirs https://raw.githubusercontent.com/azachar/circleci-google-chrome/master/use_chrome_stable_version.sh && bash $HOME/downloads/use_chrome_stable_version.sh; fi; fi;

and set $USE_LATEST_CHROME=true in the project’s settings in CircleCI. Definitely check out the repo with the .sh script that is used to download the newest Google Chrome.

Okay… it takes about 10 seconds to install Google Chrome from CircleCI’s cache (we cache the .sh script thanks to azachar’s fork)… so is it worth it?

  • Running Tests — 2:40 (~3.9x faster)
  • Making whole build — 5:00 (~2.4x faster) — the same
  • With deployment — 10:40 (~1.7x faster) — the same

Indeed, the Chrome team made it faster between versions 54 and 59. All tests finish ~10 seconds sooner… but it also takes 10 seconds to install the new Chrome.

I’m not sure what to do about that, but I will give it a try and keep testing the code against the newest Chrome. We will see how it behaves in the future.

Let’s stop for a second

The results are pretty cool. The config has already been in use for some time, so let’s leave it for another sprint or two and end this post.

What’s next?

Things we would like to try:

  • Incremental builds — like the ones you use on your dev machine (cached per branch) — could be beneficial for feature branches.
  • Yarn instead of Bower and NPM — all dependencies are cached, so it takes only 40 seconds to set everything up; I do not expect much progress here even if Yarn is truly faster.
  • Replacing some Acceptance Tests with faster Integration Tests where possible — this is kind of a work in progress 🛠. Maybe a separate article with a comparison would be helpful?

See you in the next post folks!

PS. If you have any other ideas please share with others in comments. Cheers! 🍻

--

Tomek Niezurawski

I connect humans and machines. Usually write about interfaces, digital products, and UX on tomekdev.com. Founder of checknlearn.com. A bit nerdy 🤓