Quality Assurance at Openbravo

July 16, 2010

Using Selenium with DBUnit

Filed under: openbravo — obqateam @ 9:09 am

We have been working with Selenium for several years now, and our experience has been great. Selenium is flexible, powerful, and easy to learn.

We have the Smoke Test running as a required step for code promotion in our Continuous Integration release cycle.

However, we are looking to move the Selenium tests earlier in the ERP’s development cycle. Here we face one of the biggest drawbacks of our tests: they were designed for a full execution starting from scratch.

That is, each test case depends on the successful run of previous test cases in order to execute. A first glimpse of these dependencies appeared when we worked on parallel execution.

Now consider the following scenario: I am a developer and I have made a fix that could cause some instability in the Production module. I should not have to commit and wait for CI’s email about the result, let alone wait for QA’s standard testing cycle, or for a customer report after the fix is included in some release.

The Release Management team has built a very cool feature named “Try”, and we would like to add something else: instead of executing an “all-purpose” test run, we want to execute specific suites.

So, in the example, even if we know that only the Production module could be affected by the change, the currently available tests would only allow a full run. Could the Production Smoke suite be executed separately? Well, yes. We added some extra capabilities to our suites by combining standard Selenium scripts with DBUnit scripts. The concept is quite simple, as explained on DBUnit’s web page:

DBUnit is a JUnit extension targeted at database-driven projects that, among other things, puts your database into a known state between test runs.

A DBUnit script is executed before launching the Selenium script, and it makes the required changes to the database in order to fulfill the Selenium script’s preconditions. Then, the test case is executed as usual.
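To make this concrete, DBUnit typically reads the rows to insert from a flat XML dataset: each element is a row, the element name is the table, and attributes are columns. A minimal sketch of such a file (the table and column names below are illustrative, not a faithful copy of Openbravo’s schema):

```xml
<!-- Illustrative flat XML dataset: each element is a row;
     the element name is the table, attributes are columns. -->
<dataset>
  <M_PRODUCT M_PRODUCT_ID="1000001" NAME="Raw Material A" ISACTIVE="Y"/>
  <C_BPARTNER C_BPARTNER_ID="1000002" NAME="Vendor A" ISVENDOR="Y"/>
</dataset>
```

DBUnit loads a file like this before the Selenium script runs, so the test finds its preconditions already in place.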

DBUnit data is created using XML files containing the rows that will be used by the next script. This has a complicated side: the DBUnit part must be created in a safe way, meaning no interference with existing data is allowed. So static XML files were not enough. We added tags wherever dynamic data was required, and then exposed that dynamic data as parameters so we could use it in the following tests.
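The dynamic-tag idea can be sketched as simple placeholder substitution (the tag syntax and class name here are our own illustration, not the actual pi-dbunit implementation): placeholders in the static XML are replaced with generated values, and the same values are kept around so later test steps can reference them.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: replaces ${name} placeholders in a dataset
// template and keeps the generated values so later tests can reuse them.
public class DatasetTemplater {
    public static String render(String template, Map<String, String> params) {
        String out = template;
        for (Map.Entry<String, String> e : params.entrySet()) {
            out = out.replace("${" + e.getKey() + "}", e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        // A unique suffix avoids clashing with data already in the database.
        params.put("suffix", String.valueOf(System.currentTimeMillis()));
        String dataset = render(
            "<dataset><M_PRODUCT NAME=\"Raw Material ${suffix}\"/></dataset>",
            params);
        System.out.println(dataset);
    }
}
```

Generating a fresh suffix per run is one way to guarantee the inserted rows never collide with current data, which is exactly the “safe way” constraint described above.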

The result of this work can be seen in the pi-dbunit branch. Currently, only a small set of tests is available, because generating the XML files is a hard task, requiring deep knowledge of Openbravo’s DB structure: table names, triggers and constraints.

Our goal is to have a full set of tests for every major module in the ERP, making it easier to run specific tests very quickly.

Just for reference: a full DBUnit+Selenium execution of the Production suite could take ten to fifteen minutes. Currently, it takes ten minutes… but you have to execute one hour of tests before getting there.


July 14, 2010

Hudson’s integrated framework for Automation

Filed under: openbravo — obqateam @ 1:49 pm

For several weeks we have been working on infrastructure improvements. In order to make our department’s processes more reliable, and now that parallel execution is ready to run, we have also automated the testing of our own code.

One of the most critical processes in our automation cycle is the testing of the code we deliver. We work in several branches, and we have to ensure proper integration in order to execute Hudson‘s ERP-CI jobs reliably.

Our goal was to use the same approach the ERP uses. After all, Automation and the ERP are both development projects, so what has proved to work great for the ERP should also be good for Automation. So we set up our test contexts in Hudson. The infrastructure is a simplified version of the current ERP structure. We develop in several branches, as many as our projects require. Smoke Test is the main project, but there are other projects as well.

Name assignment for the branches was not easy. We have three levels of branches: Development, Product Integration and Stable. And the stable branch was required to run against both the ERP’s PI and Main branches.

Tagging revisions to select the version that will run with PI and with Main would be the most logical choice. However, it would be inefficient. Since the ERP’s PI and Main branches are different, the Main branch is updated either in bulk (when the Continuous Integration tests pass) or through individual transplants. That means that, potentially, a transplant could change the behavior of the ERP, rendering the tag for the Automation code useless.

So, we chose to use two branches to match the ERP layout. For easy understanding, the stable branch that is executed against Main is named Main, and the one for PI, PI.
That led us to the next problem. Since we took the PI name for one stable branch, another name had to be chosen for the Product Integration branch. We decided to use “int” (for Integration).

Finally, development branches follow a simple naming convention: pi-* (e.g. pi-smoke, pi-regression, pi-localization, and so on).

All these branches are periodically synchronized using the Integration branch (int) as a hub. When the Integration branch is considered stable, code is promoted to the PI branch.

Once there, control of the PI branch is taken by the Release Management team, in order to ensure that the proper Automation version is executed with any given ERP version. The Automation PI branch is considered stable, and it is used to test an ERP PI branch. If the tests pass, the ERP code is promoted to the Main branch, and Automation Main is updated to that version of Automation PI. That means that a specific version of the stable automation code is “frozen” so it can be executed successfully as many times as required.

If a new version of the ERP (in the PI branch) requires testing, Automation PI will be used. And if the ERP behavior changed because of an expected fix, a change in the automation code can be developed and promoted to PI without changing the code in Main.

A more complex scenario can also happen. When the QA team is testing a Maintenance Pack candidate (that is, the last Main branch revision), a change may be required (e.g. a defect was not properly fixed), triggering a transplant: the developer pushes a new changeset to ERP PI fixing the issue, and the Release Management team transplants it to the Main branch.
In that case, the automation code remains the same, since the ERP behavior is expected to remain unchanged. However, there is a small chance that the fix changed the behavior on purpose. In that case, a fix in the automation branch should be pushed and then transplanted to the Main branch as well, allowing a successful execution of the changed automated test.

At this point, we achieved a huge improvement by automating part of the deploy cycle. In the next weeks we will add another cool feature (also inspired by the current ERP process): automatic code promotion. The plan is to have a daemon monitoring the Integration branch. Whenever it detects a commit coming from any of the development branches, it will run a series of tests. If all of them succeed, the code will be considered stable and automatically promoted to the stable PI branch.
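The core of that planned promotion rule is simple enough to sketch (the class and method names here are hypothetical; the real daemon would wrap Mercurial and Hudson calls around this decision):

```java
import java.util.List;

// Illustrative sketch of the planned promotion rule: code moves from the
// Integration branch to the stable PI branch only if every test passed.
public class PromotionRule {
    public static boolean shouldPromote(List<Boolean> testResults) {
        // An empty run proves nothing, so it never triggers a promotion.
        return !testResults.isEmpty()
            && testResults.stream().allMatch(Boolean::booleanValue);
    }

    public static void main(String[] args) {
        // All green: the changeset would be promoted.
        System.out.println(shouldPromote(List.of(true, true)));
        // A single failure blocks promotion.
        System.out.println(shouldPromote(List.of(true, false)));
    }
}
```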

Do you want to try our automation code? Get the Selenium code here or go through our documentation at Openbravo’s wiki.

July 9, 2010

Welcome to The Grid

Filed under: openbravo — obqateam @ 2:21 pm
Continuing with the automation road map, we are glad to announce that we have stepped into The Grid.
Selenium Grid is a wonderful tool for Selenium test environments. It provides two main benefits: multi-browser support and parallel execution. You can find out how it works here.
In Openbravo ERP, we have a fully functional Smoke Test implemented in Selenium. By adding Selenium Grid to our infrastructure, we aim to speed up the Smoke execution.
At the very beginning, standard runs took more than four hours. After some hard work fine-tuning the wait times, we reached the current duration of about 160 minutes.
This was a very good achievement for our version 2, but we keep looking for improvements. Version 3, using the Grid, can execute a full Smoke run in just 90 minutes.
How does version 3 work?
Starting with the version 2 code, we managed to split the different functional flows along with their dependencies.
Sequential Execution Timeline

Version 2 timeline: all suites are executed sequentially.

For example, the Create Purchase Order test case in the Procurement Management suite requires a specific product (Raw Material A) to be purchased from a specific vendor (Vendor A). So it depends on the Import Products and Import Business Partners test cases.
As a result of this analysis, we ended up with the current configuration:
<target name="test.integration.smoke">
  <sequential>
    <antcall target="test.integration.erp.testsuites.smoke.masterdata"/>
    <antcall target="test.integration.erp.testsuites.smoke.accountingdata"/>
    <parallel>
      <sequential>
        <antcall target="test.integration.erp.testsuites.smoke.financialdata"/>
        <parallel>
          <sequential>
            <antcall target="test.integration.erp.testsuites.smoke.procurement"/>
            <antcall target="test.integration.erp.testsuites.smoke.sales"/>
            <antcall target="test.integration.erp.testsuites.smoke.projectandservice"/>
            <parallel>
              <antcall target="test.integration.erp.testsuites.smoke.production"/>
              <antcall target="test.integration.erp.testsuites.smoke.accountingprocess"/>
            </parallel>
          </sequential>
        </parallel>
      </sequential>
      <antcall target="test.integration.erp.testsuites.smoke.assets"/>
    </parallel>
  </sequential>
</target>
A graphical view of this change is shown in the picture below.

Parallel Execution Timeline

The next step in this direction is to go deeper into each suite and find the critical path for every individual test case.
