Online Machine Learning Testing == Extreme Testing
Monday, November 24, 2008
Posted by Alek Icev
As you may know, our core vision is to build "the perfect search engine that would understand exactly what you mean and give back exactly what you want." In order to do that we learn from our data, we learn from the past, and we love Machine Learning. Every day we are trying to answer the following questions:
- Is this email spam?
- Is this search result relevant?
- What product category does that query belong to?
- What is the ad that users are most likely to click on for the query “flowers”?
- Is this click fraudulent?
- Is this ad likely to result in a purchase (not merely a click)?
- Is this image pornographic?
- Does this page contain malware?
- Should this query bring up a maps onebox?
Solving many of these problems requires Machine Learning techniques. For all of them we can build prediction models that learn from the past and try to give the most precise answers to our users. We use a variety of Machine Learning algorithms at Google, and we experiment with numerous old and new advancements in this field in order to find the most accurate, fast, and reliable solution for the different problems we are attacking. Of course, one of the biggest challenges we face in the Test Engineering community is how we are going to test these algorithms. The amount of data that Google generates goes far beyond the boundaries of the environments in which current Machine Learning solutions were crafted and tested. We want to open a discussion around ideas for testing different online machine learning algorithms. From time to time we will present an algorithm along with some ideas for testing it, and solicit feedback from the wider audience, i.e. try to build wisdom of the crowds around the testing ideas.
So let's look at the Stochastic Gradient Descent algorithm. The weight update at every step is:

wi = wi + η(t - f(z))xi

where X is the set of input values Xi and W is the set of importance factors (weights) Wi, one for every value Xi. A positive weight means that the risk factor increases the probability of the outcome, while a negative weight means that the risk factor decreases that probability. t is the target output value, η is the learning rate (its role is to control how much the weights are modified at every iteration), and f(z) is the output of a function that maps a large input domain to a small range of output values. The function f(z) in this case is the logistic function:

f(z) = 1 / (1 + e^-z), where z = x0w0 + x1w1 + x2w2 + ... + xkwk
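To make the update rule concrete, here is a minimal Python sketch of a single step; the function names and data layout are ours for illustration and are not the production code:

import math

def logistic(z):
    # Squashes any real-valued z into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(weights, x, t, eta):
    # One stochastic gradient descent update for a single example x
    # with target t in {0, 1} and learning rate eta.
    z = sum(w_i * x_i for w_i, x_i in zip(weights, x))
    error = t - logistic(z)
    return [w_i + eta * error * x_i for w_i, x_i in zip(weights, x)]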
The logistic function has nice characteristics, since it can take any input and squash it into the range between 0 and 1, which makes it ideal for predicting probabilities of events that depend on multiple factors (Xi), each with a different importance weight (Wi). Stochastic Gradient Descent converges quickly toward a minimum of the error (E) that the model makes on its predictions, and its stochastic updates help it avoid getting trapped in poor local minima when several exist. So let's go back to the real online world, where we want to give answers (predictions) to our users in milliseconds, and ask how we are going to design automated tests for the Stochastic Gradient Descent algorithm embedded in a live online prediction system. The environment is agile and dynamic: the code changes every hour, you want your tests to run 24/7, and you want to detect errors upstream in the development process, but you don't want to block development with tests that run for days. On the other hand, you want to release new features fast, yet the release process has to be error free (imagine the world with Google being down for 5 minutes; that is a global catastrophe, isn't it?!).
So let’s look at some of the test strategies:
Should we try to train the model (the set of importance factors) and test it with a subset of the training data? What if that takes far more than hours, maybe days? Should we try to reduce the set of importance factors (Xi) and get convergence (E -> 0) on the reduced model?
Should we try to reduce the training data set (the variety of values of X fed to the algorithm), keep the original model, and get convergence at any price? Should we be happy with reducing both the model size and the training set? Do we need to worry about over-fitting in the test environment? Given that the original data is online data and evolves fast, are we going to be satisfied with a fixed test data set, or should we change the input test data frequently? What are the triggers that would make us do so? What else should we do?
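As one possible starting point for that discussion, here is a rough Python sketch of a fast, deterministic convergence check against a tiny synthetic data set; the data, learning rate, pass count, and threshold below are purely illustrative:

import math
import random

def train(examples, eta=0.5, passes=200, seed=42):
    # examples: list of (feature_vector, target) pairs with targets in {0, 1}.
    rng = random.Random(seed)  # pinned seed keeps the test repeatable
    weights = [0.0] * len(examples[0][0])
    for _ in range(passes):
        rng.shuffle(examples)
        for x, t in examples:
            z = sum(w * xi for w, xi in zip(weights, x))
            error = t - 1.0 / (1.0 + math.exp(-z))
            weights = [w + eta * error * xi for w, xi in zip(weights, x)]
    return weights

def mean_squared_error(weights, examples):
    total = 0.0
    for x, t in examples:
        z = sum(w * xi for w, xi in zip(weights, x))
        total += (t - 1.0 / (1.0 + math.exp(-z))) ** 2
    return total / len(examples)

def test_sgd_converges_on_toy_data():
    # Linearly separable toy set: the label follows the second feature.
    toy = [([1.0, 0.0], 0), ([1.0, 0.1], 0), ([1.0, 0.9], 1), ([1.0, 1.0], 1)]
    weights = train(toy)
    assert mean_squared_error(weights, toy) < 0.05, "no convergence on toy data"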
Drop us a note, all ideas are more than welcome.
Wouldn't it help to normalize the data if you remove the Xi with excessively high/low (maybe just the low) values of Wi?
As for changing the input data, I would say update it as often as possible.
I think it helps to think about two different testing questions:
1. Does the algorithm implement the math correctly?
2. Will the math perform correctly against the real world?
The first question is a traditional software QA question, which can be answered with "toy" test data, using fewer variables. It has a "yes or no" answer.
The second question is more of a statistical performance evaluation, where you're asking how well the algorithm performs against some relatively realistic data set. You could test against live data every time, but then when you get a borderline result, and want to tweak the algorithm and re-run the test, it's hard to tell if any changes are due to your tweaking, or due to changes in the live data.
On the other hand, you don't want to test against obsolete data based on live data from several years ago, either. I suspect what might work best is to maintain several corpora of data to test against, and stagger the replacement schedule so that one new corpus can be validated against test results from several older ones. If the new corpus checks out, you retire the oldest.
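A rough sketch of that staggered replacement in Python, where evaluate() stands in for whatever quality metric you track and the tolerance is invented for illustration:

def rotate_corpora(corpora, candidate, evaluate, tolerance=0.05):
    # corpora: existing test corpora, oldest first.
    # evaluate(corpus) -> a quality metric for the current algorithm (e.g. AUC).
    baseline = [evaluate(c) for c in corpora]
    reference = sum(baseline) / len(baseline)
    if abs(evaluate(candidate) - reference) <= tolerance:
        # Candidate checks out against the older corpora: retire the oldest.
        return corpora[1:] + [candidate]
    # Otherwise keep the current rotation and flag the candidate for review.
    return corpora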
I agree with "Jeremy Leader" - it helps to first define what it is that is being tested. From my understanding of the post, it appears the main testing question being asked is, "does this algorithm learn correctly?" , with the sticking points being that, to test if something actually learns, you need to 1) define the parameters that indicate the system has demonstrated it has learned, and 2) define them in such a way that obtaining said parameters fits the time/resource constraints of the testing/development/release processes.
I think point 2) is the kicker - tests need to run as quickly as possible. Therefore, I would say firstly it is important to NOT run the full gamut of learning-related tests unless the algorithm itself has been changed. Sounds like a no-brainer, but the post mentions wanting to "release new features fast", and the first thing I thought was, "yeah, but are those features related to the algorithm?" With a bit of planning, changes to the algorithm itself could be tested in isolation of other features, and therefore be tested by the time they are integrated.
The questions around what size corpus to use, whether to cull importance factors, and how often to update the corpus depend on how you define 1). Again this may be obvious, but the reason to have a huge corpus is that it allows you to identify tiny but significant changes in the behavior of the algorithm. Take a few snapshots of significant changes to the algorithm, and run them over the entire corpus. Are the variations meaningful? You might be able to [automatically] prune the corpus by analyzing these results.
I see value in having a baseline data set for trending purposes. Live data evolves, but if your corpus is already very large I wouldn't worry too much about it not capturing some magical new aspect out there. Perhaps there is some way to identify and obtain "new" live data that is quantifiably different than the data that already exists in the corpus, and valuable for that reason?
Overall, I tend to think that running a full suite of tests using all importance factors and the entire corpus is necessary for critical algorithms. Maybe the development and release processes could be tweaked so that, as far as possible, these tests are not the bottleneck.
There's an interesting related question about writing unit tests for non-deterministic code in general.
If you have a stochastic algorithm that produces a different result each time -- because it's sampling from a distribution, for example -- how do you write tests that strongly test the code when you don't know what output to expect?
In this example you can only see if the sampling code is correct by doing it lots of times and seeing if the results fit the expected distribution, which goes against the principle of making unit tests small and fast.
I work in bioinformatics, and I'm doing my best to champion test-driven development, but people do come up with examples like these where the normal model doesn't fit so well.
I'd be interested to hear people's thoughts on this.
Andrew.
@Andrew
I work on agent-based modeling software that uses stochastic algorithms, so I'm familiar with the situation you're in. For unit testing, the trick is to encapsulate the sampling from the distribution functions inside your own functions/classes and inject these into the classes that consume them. That way you can test code that consumes non-deterministic functionality by injecting a mock/stub object that returns predefined values, and you can verify the results because you know a priori the values returned by the distribution function. I'd suggest you check out Misko Hevery's posts on this blog (as well as his own) for some great articles on Dependency Injection.
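A minimal Python sketch of that pattern, with invented class names and assuming a Gaussian sampler as the source of randomness:

import random

class GaussianSampler:
    # Production sampler: wraps the real random source.
    def __init__(self, mu, sigma, rng=None):
        self._rng = rng if rng is not None else random.Random()
        self._mu, self._sigma = mu, sigma

    def sample(self):
        return self._rng.gauss(self._mu, self._sigma)

class StubSampler:
    # Test double: replays predefined values, so consumers become deterministic.
    def __init__(self, values):
        self._values = iter(values)

    def sample(self):
        return next(self._values)

class Agent:
    # Consumer: the sampler is injected rather than constructed internally.
    def __init__(self, sampler):
        self._sampler = sampler

    def step(self, position):
        return position + self._sampler.sample()

def test_agent_step_with_stubbed_sampler():
    agent = Agent(StubSampler([0.5]))
    assert agent.step(1.0) == 1.5  # known a priori because the stub is fixed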
The approach you listed of doing lots of iterations and checking that the results fit the specific distribution function would be a functional test and not a unit test. I suggest you use this approach as well, but it's probably something that should be handled by QA. Though I'm gonna take a stab in the dark and guess that you're working in a research department like me and don't have QA either.
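For the functional-test side, one sketch is to draw a large sample and check its moments against the distribution's known mean and variance; the sample size and tolerances here are arbitrary:

import random

def test_gaussian_sampler_moments():
    rng = random.Random(12345)  # fixed seed keeps reruns comparable
    mu, sigma, n = 3.0, 2.0, 100000
    samples = [rng.gauss(mu, sigma) for _ in range(n)]
    mean = sum(samples) / n
    variance = sum((s - mean) ** 2 for s in samples) / n
    # Loose tolerances: the goal is to catch gross implementation errors,
    # not to certify the distribution statistically.
    assert abs(mean - mu) < 0.05
    assert abs(variance - sigma ** 2) < 0.15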
-Mark
Thanks for the suggestions, Mark. I'm a big fan of DI having started playing with Guice recently, but it's very easy to slip out of the OO paradigm and into procedural (=less modular) thinking when you're doing algorithmic stuff. Old habits etc.
You're right about the QA dept. The closest thing we have is the peer reviewers on the resulting journal articles, and (I suspect) they rarely even try the software the article is about...!
Andrew.
I work on a library of Computational Intelligence algorithms and testing is one of the things that has been plaguing me for a long time.
There are effectively two different kinds of tests that are needed (similar to what Jeremy pointed out earlier in the comment list).
I eventually decided on making the algorithm completely deterministic by specifying the seed for the RNGs in order to test the validity of the algorithm as a whole. Many tests, however, exist to test the smaller components in isolation.
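In Python terms, the idea looks roughly like this, where run_algorithm is just a stand-in for the real thing:

import random

def run_algorithm(data, seed):
    # Stand-in for the full algorithm; all randomness flows through one RNG.
    rng = random.Random(seed)
    return rng.sample(data, len(data))  # e.g. some randomized processing step

def test_whole_algorithm_is_reproducible():
    data = list(range(10))
    assert run_algorithm(data, seed=7) == run_algorithm(data, seed=7)
    # With the seed pinned, a golden-result assertion can lock down the
    # exact expected output as well.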
We use setter-based injection, with a great deal of success so far. This ensures that the objects are complete, and we mock out what we need to in order to verify that the behavior is as expected.