Posted by George Pirocanac, Test Engineering Manager

For the past nine months it has been my pleasure to work with a group of undergraduate students from UC-Irvine as part of their senior class project. The course was run by Professor Hadar Ziv and teaching assistant Sameer Patil. It focused on giving students industry experience by working with customers (in this case, us) to formulate product requirements and deliver working software. Jason Robbins from the Google Irvine office was the lead for another project, and several other local companies also participated. Our team members included Michelle Alvarez, Jason Dramby, Peter Lee, and Gabriela Marcu. Ours was the only project dealing directly with test engineering, and one of our goals was specifically to create a plan and framework for testing the Google Mashup Editor (GME) tag language.

For those unfamiliar with the GME, it is a framework for developing simple web applications and mashups using a custom set of XML tags, JavaScript, CSS, and HTML. More information about the GME can be found here.

The first three months of the class were spent learning about the GME and performing exploratory testing. The team became very familiar with the editor and created several mashups (you can try one of them here). They also created a traditional test plan focused on testing the tag language, and later executed it by compiling and running their sample mashups on a variety of browsers. After a couple of iterations of this test plan, they quickly ran into some of the typical challenges of the traditional approach, namely oversubscription of human resources during test execution and insufficient coverage.

We addressed the first issue through automation: the team learned to automate their manual tests of the mashups with Selenium. They first used Selenium IDE to learn the basic Selenium commands and concepts such as locators. They then used the IDE's "Export Test As..." feature to create Python tests that would run under a local server with Selenium-RC. This got them to the point where they could execute the existing test plan automatically on three different platforms (Windows, Linux, and MacOS).

Expanding coverage was less straightforward. The traditional approach would be to use the existing resources to write more tests. We, however, decided to create a framework that would itself generate more tests. This dovetailed nicely with the classroom material, which was product-centric and focused on gathering customer requirements, creating a design document, and delivering the software. In our case, the group's product was to be a GME Test Suite Creator.

As a starting point we looked at the following simple Python script, which computes the cross product of lists of strings:

    #!/usr/bin/python

    def cross(args):
        # Build the cross product: one element from each input list
        # per combination.
        ans = [[]]
        for arg in args:
            ans = [x + [y] for x in ans for y in arg]
        return ans

    def pprint(lists):
        # Print each combination as a single concatenated string.
        for list in lists:
            a = ''
            for s in list:
                a = a + s
            print a

    tags = [
        ['<'],
        ['gm:page '],
        ['', 'authenticate=true', 'authenticate=false', 'authenticate=invalid'],
        ['/>'],
    ]

    lists = cross(tags)
    pprint(lists)

Running the script yields the following combinations of tags:

    <gm:page />
    <gm:page authenticate=true/>
    <gm:page authenticate=false/>
    <gm:page authenticate=invalid/>

Each one could be used in a mashup that uses the page tag. Likewise, the other tags in the GME tag language could be expanded with various combinations of valid and invalid attributes.
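The same helpers extend naturally to tags that take more than one attribute: each attribute simply becomes another list in the cross product, with an empty string standing in for an omitted attribute. Here is a quick sketch reusing cross() and pprint() from the script above; the gm:list attribute names and values are illustrative placeholders, not necessarily the real GME attribute set.

    # Hypothetical attribute value lists; '' means the attribute is omitted.
    data_values = ['', 'data="${app.feed}" ', 'data="" ']
    template_values = ['', 'template="itemTemplate" ', 'template="" ']

    list_tags = [
        ['<'],
        ['gm:list '],
        data_values,
        template_values,
        ['/>'],
    ]

    # 3 data values x 3 template values = 9 generated tag variants.
    pprint(cross(list_tags))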
These tag combinations could then be individually inserted into skeleton mashups, producing a large number of both positive and negative tests to be executed under Selenium-RC. This was the basic idea behind the GME Test Suite Creator, and the team implemented a GUI to facilitate the three steps in creating and running a test suite:

- Code Generation - the selection of tags and the creation of tests.
- Code Preview - the examination and execution of the created tests.
- Test Reporting - the examination of test results.

The figure below shows the Code Generation tab of the GME Test Suite Creator. It displays a hierarchical view of the tags and lets the user select which tags to include in the sample tests. Each sample test is generated from a skeleton test modeled after the documentation example scraped from the code.google.com website, a nice idea that added testing of the documentation itself to the process.

An interesting problem that these types of automatic test generation frameworks can encounter is the combinatorial explosion of generated tests. For example, if each tag attribute can have 8 possible values and a sample mashup contains 10 tags, enumerating every combination would take roughly 1 billion (8^10 = 2^30) tests! To address this, the team created an Options dialog box that lets the user specify different test suite sizes in addition to the test suite name and type (a sketch of one way such a size cap could work appears at the end of this post). A further refinement, letting the user select which specific values to use for tag attributes, would have been implemented had the team had more time.

The next figure shows the Code Preview tab of the GME Test Suite Creator. It lists the tests created under a given test suite and lets the user manage and execute the suite. Finally, the Test Report tab shows the results of the tests executed under Selenium-RC.

The GME Test Suite Creator was itself written in Python and hosted on Windows, Linux, and MacOS. The team presented and demonstrated the GME Test Suite Creator to faculty and other student/industry teams as part of the UC-Irvine ICS Student Showcase. Over the next few weeks I will be kicking the tires and evaluating the battle-worthiness of the GME Test Suite Creator delivery, which included source code and a complete set of documentation. I certainly had a wonderful time interacting with the team and participating in this program!

The GME Test Suite Creator Team (from left to right: Gabriela Marcu, Peter Lee, Michelle Alvarez, Jason Dramby and George Pirocanac)
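Returning to the combinatorial explosion mentioned above, here is a minimal sketch of how a size cap like the one in the Options dialog could work: sample the cross product rather than enumerate it. The function name and the random-sampling strategy are my own illustration, not necessarily what the team implemented.

    import random

    def sample_suite(tag_options, suite_size, seed=0):
        # Cap the request at the number of distinct combinations so the
        # loop below always terminates.
        total = 1
        for options in tag_options:
            total *= len(options)
        suite_size = min(suite_size, total)

        rng = random.Random(seed)  # a fixed seed keeps the suite reproducible
        suite = set()
        while len(suite) < suite_size:
            suite.add(''.join(rng.choice(options) for options in tag_options))
        return sorted(suite)

    # Example: draw 2 of the 4 possible gm:page tags from the earlier script.
    for tag in sample_suite(tags, 2):
        print tag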
Could you please provide some high-quality screenshots?
Were any bugs found using this test tool?
from George...
A couple of documentation issues were found during the test plan creation in phase I. The prototype of the tool was completed just in time for the end of the class, so we really have not used it much yet. The goal was for students to partner with industry and learn through the framework of solving a real problem. On those dimensions, it was a big success.