Slow test suite and getting to ruby 2.0/2.1.2


#1

I was going to post this information here but I think it makes more sense as a discussion.

The idea is that we want a faster test suite and also to get to newer Rubies.
One concern I have: we should go after the low-hanging fruit that doesn’t decrease the readability or increase the brittleness of our tests.

  1. Find a “good enough” RUBY_GC_MALLOC_LIMIT value and use it on the CruiseControl servers, and on developers’ laptops if desired. We can measure how much CPU time we’re spending in GC; I’d imagine it’s in the neighborhood of 33% or more. Tweaking a single environment variable might get us under 15% CPU time in GC at the expense of more memory. Pros: the existing tests can remain as-is, and we might gain some knowledge we can share for tweaking production appliances at the same time.

  2. Get our test suite running on Ruby 2.0/2.1, as they have much better object allocation probes/tracing and will just run faster. Using this information we can backport allocation fixes before we upgrade to those Rubies. Pros: we can fix stuff now and also get ready to move to Ruby 2.0/2.1, and we don’t have to mock everything to make tests faster. Again, the work on Discourse can help us.

  3. Re-organize the spec directories: spec/migrations is not runnable with the other specs, as it leaves the DB in an inconsistent state, and should be moved to avoid accidentally running it alongside the other specs.

  4. Use RSpec tags to label slow or integration specs, and run only the true unit tests locally and in a new CruiseControl project.

  5. Lastly, try to cut down on database queries in tests by mocking/stubbing, and by using FactoryGirl.build and SomeModel.new instead of .create where appropriate.
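The GC share guessed at in point 1 can be measured directly. Here is a minimal sketch using MRI’s built-in `GC::Profiler`; the allocation-heavy loop is just a stand-in for a real test run, and note that `RUBY_GC_MALLOC_LIMIT` is read at interpreter startup, so it has to be set in the environment before launching the suite (e.g. `RUBY_GC_MALLOC_LIMIT=90000000 bundle exec rspec`):

```ruby
require "benchmark"

# Returns the fraction of wall-clock time spent in GC while running a block.
def gc_share_of(&work)
  GC::Profiler.enable
  GC::Profiler.clear
  elapsed = Benchmark.realtime(&work)
  gc_time = GC::Profiler.total_time # seconds spent in GC during the block
  GC::Profiler.disable
  elapsed.zero? ? 0.0 : gc_time / elapsed
end

# Allocation-heavy stand-in for a test run:
share = gc_share_of { 200_000.times { "x" * 100 } }
puts format("GC share of runtime: %.1f%%", share * 100)
```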
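The allocation tracing mentioned in point 2 is exposed through the `objspace` extension in Ruby 2.1. A quick sketch of how it can point at hot allocation sites worth backporting fixes for:

```ruby
# Ruby 2.1+ only: record where each object is allocated, so allocation-heavy
# code paths in the suite can be found and fixed before the upgrade.
require "objspace"

ObjectSpace.trace_object_allocations_start
leaky = "abc" * 1_000 # stand-in for an allocation-heavy code path
ObjectSpace.trace_object_allocations_stop

puts ObjectSpace.allocation_sourcefile(leaky) # file of the allocation site
puts ObjectSpace.allocation_sourceline(leaky) # line of the allocation site
```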
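Points 4 and 5 together might look something like this sketch of a spec_helper and a spec file. This is a config fragment rather than a runnable script, and the `:slow` tag, `RUN_SLOW_SPECS` variable, and `VmScan`/`:vm` names are all hypothetical:

```ruby
# spec/spec_helper.rb (sketch): skip :slow-tagged examples by default;
# the CI project that runs integration specs sets RUN_SLOW_SPECS=1.
RSpec.configure do |config|
  config.filter_run_excluding :slow unless ENV["RUN_SLOW_SPECS"]
end

# In a spec file: tag the integration examples, and prefer in-memory
# objects over persisted ones when the DB isn't really needed.
describe "VmScan", :slow do
  it "builds without touching the database" do
    vm = FactoryGirl.build(:vm) # no INSERT, unlike FactoryGirl.create
    expect(vm).to be_valid
  end
end
```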


#2

Oh noes, that was my failed PR. :smile:

Regarding the use of RSpec tags (point 4)

I feel this may be a smell.
Is it possible to move components out of vmdb, like smart state analysis, automation, and vendor integration?

If those tests are separate, and even tracked separately, there would be fewer tests to run for each change. If a test fails in an external “system”, then there is an implicit contract not being tested, and we can add a test around that.

Rails did a nice job with their rails repo and the active_* subdirectories, each of which is packaged as a gem.

Reducing the number of queries (point 5)

We may want to document what gets tested in unit tests vs. integration tests.
I feel we are double-testing some components, sometimes building huge trees.

Maybe move some of the edge-case testing into the unit tests. I’m not sure whether we want to follow Sandi Metz’s rule about not testing “private” methods.


It would also be nice to mark slow tests and have people aim to improve the speed of those components. I question whether we need such large integration tests. It may also suggest where we need to split an area into a separate “component”.


#3

Agreed @kbrock, I think the first step is gaining insight into the various areas via profiling/stats. What to do to fix them is secondary to figuring out what the problem areas really are, with stats to back them up. Then we can decide to tag or move things around.


#4

Agreed @kbrock. Some of the TODO items on the way there are:

  1. Extract out each directory in manageiq/lib into its own gem.
  2. Once that is complete, see if it is possible to make the manageiq directory structure a standard Rails app layout.
  3. Extract manageiq/vmdb/lib/miq_automation_engine into its own gem or Rails Engine, as appropriate.
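A first gemspec for one of those extractions could be as small as the following sketch; the gem name, summary, and authors here are placeholders, not real decisions:

```ruby
# miq_extracted_lib.gemspec (sketch; name/summary/authors are placeholders)
spec = Gem::Specification.new do |s|
  s.name          = "miq_extracted_lib"
  s.version       = "0.1.0"
  s.summary       = "A directory extracted from manageiq/lib"
  s.authors       = ["ManageIQ Authors"]
  s.files         = Dir["lib/**/*.rb"]
  s.require_paths = ["lib"]
end

puts spec.full_name # => "miq_extracted_lib-0.1.0"
```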

#5

I think tagging is the way to go.

In the longer term moving stuff around makes the most sense to me, but in the short term, it seems easier to just sprinkle a few tags around.