How Should Pest Managers Evaluate New Products?

In the second part of this series on product testing, we advise pest managers on how to accurately test new products against their usual go-to products. 

In the first part of this series on product testing, we looked at how manufacturers evaluate their products in order to get a product registered and make a claim on the product label. In this second and final part, we consider how pest managers can perform their own product comparisons in the field.

The key challenge for a product supplier is to convince a pest manager to try their product. If the supplier can get you to try it, they will expect it to deliver on its performance claims (assuming it is applied correctly). If it does deliver, the question for the pest manager then becomes whether the new product earns a place in their toolbox, perhaps even replacing an existing product.

However, before making such a decision, pest managers will often do some comparison tests of their own, to see how the new product compares to the products currently used.

Pest managers, for the most part, are not trained scientists. So how should you test products in the field to get a fair assessment of their performance?

Should you test the products side by side?

When testing products side by side you need to be extremely careful, as there is the very real danger that you will get a three-way interaction between the pest population and the two products, making it incredibly difficult to draw firm conclusions.

This is particularly true when pest managers evaluate bait products. Evaluating two different bait products side by side is, at best, only assessing attractiveness and palatability – it cannot provide data on efficacy. Even if the objective is to assess the attractiveness or palatability of the bait, this is a very specialised type of trial and needs special attention to the trial design to avoid bias. The only way to truly determine the level of efficacy of two products is for them to be tested on completely separate pest populations.

Remember the Pepsi Challenge? In the 1970s, Pepsi’s marketeers came up with a clever marketing campaign – the Pepsi Challenge. Members of the public were offered Pepsi and Coca-Cola in unlabelled cups, asked to take a sip of each and state which they preferred. More people, the campaign claimed, preferred Pepsi. Whilst a great marketing campaign, it was flawed from a methodology point of view, but that still didn’t stop Coca-Cola having a knee-jerk business response.

The test was ‘blind’ – the cups were unlabelled – which was good. However, participants were only asked to take a sip of each product before stating which one they liked. Pepsi, with its sweeter taste, was preferred. As a result, Coca-Cola decided to sweeten its recipe to counter the results of the test. But the Pepsi Challenge is not a real-life situation. If participants had been given a whole glass to drink, would they have had the same opinion? The answer was no… a full glass of extra sweetness was too much for many. The new recipe provoked a massive outcry from Coca-Cola’s loyal customers, who disliked it, and the company ended up relaunching the original formula as Coca-Cola Classic.

In summary, the best way to evaluate products is usually to apply them in typical use situations and, if comparing more than one product, to test each against a discrete population in locations that do not overlap. In most cases it is best to test only one product at any one site.

Choosing a test site

Before starting the trial, decide on the pest(s) and site conditions against which you want to test the new product. Maybe you want to challenge the product under a range of conditions, for example low, medium and high pest pressure. This will make the trial longer but you will get a more complete picture of the product’s performance. The important point is that you also need to test your current product at the same time at similar sites for it to be a fair comparison.

When we are talking about a trial being ‘fair’, we mean the need to eliminate bias. Even with the best will in the world, everyone has their own biases. Maybe you have a preferred product or supplier, for example. However, in both setting up the trial and analysing the results, it is important not to show favouritism. It is one of the key reasons that many suppliers have their products evaluated by external testing facilities. Often, the samples provided to the testing facilities are also coded, to further reduce the chance of bias by the testing facility (which might otherwise be influenced by product or brand names).

The test sites included in your trial should be similar (in terms of building type and use, environmental factors and level of pest infestation), or a range of properties should be chosen. In the latter case the different properties should be classified to ensure each product is challenged in the same types of situations. Furthermore, the properties should be allocated to each product at random (pulling the names out of a hat is certainly an option, or you can use a simple script such as the one below).
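If you prefer something more reproducible than a hat, the allocation can be done in a few lines of code. The sketch below is only an illustration: the site names and pest-pressure classes are hypothetical, and it simply shuffles the sites within each class and splits them between the two products so that both are challenged across the same range of conditions.

```python
import random

# Hypothetical test sites, already classified by pest pressure (illustration only).
sites = {
    "low":    ["Site A", "Site B", "Site C", "Site D"],
    "medium": ["Site E", "Site F", "Site G", "Site H"],
    "high":   ["Site I", "Site J"],
}

random.seed(42)  # fixing the seed makes the allocation reproducible and auditable

allocation = {"Product A": [], "Product B": []}
for pressure, group in sites.items():
    shuffled = group[:]                 # copy so the original list is untouched
    random.shuffle(shuffled)            # random order within each pressure class
    half = len(shuffled) // 2
    allocation["Product A"] += shuffled[:half]   # first half to product A
    allocation["Product B"] += shuffled[half:]   # remainder to product B

for product, assigned in allocation.items():
    print(product, "->", sorted(assigned))
```

Because the shuffle happens within each pest-pressure class, neither product ends up with all of the easy (or all of the difficult) sites by accident.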

Accounting for variability

Field trials by their nature are less controlled than laboratory trials – there are many parameters that may vary between test sites, including pest pressure and environmental factors. Understanding this, and minimising the variability through careful site assessment and selection, is one option. The other is to increase the number of replicates, i.e. to perform your test multiple times. Increasing the number of replicates doesn’t reduce the variability, but when there is variability (or ‘noise’) in the experiment, you need a larger number of replicates to give you confidence that the results are genuine.
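To see why more replicates help when results are noisy, consider the rough simulation below. It assumes, purely for illustration, that a product’s ‘true’ level of control is 80% but that individual sites vary around that figure; averaging over more sites gives a trial result that bounces around far less from one imaginary trial to the next.

```python
import random
import statistics

random.seed(1)

TRUE_EFFICACY = 0.80   # assumed underlying level of control (illustrative only)
SITE_NOISE = 0.15      # assumed site-to-site variability

def observed_efficacy():
    """One site's result: the true value plus site-to-site noise, kept within 0-1."""
    return min(1.0, max(0.0, random.gauss(TRUE_EFFICACY, SITE_NOISE)))

for n_replicates in (2, 5, 10, 20):
    # Run 1,000 imaginary trials of n_replicates sites each and see how much
    # the trial averages spread out.
    trial_means = [
        statistics.mean(observed_efficacy() for _ in range(n_replicates))
        for _ in range(1000)
    ]
    spread = statistics.stdev(trial_means)
    print(f"{n_replicates:>2} replicates: trial average varies by about +/- {spread:.3f}")
```

The site-to-site noise is the same in every case; only the number of replicates changes, yet the trial average becomes noticeably more stable as replicates are added.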

Are the results real?

Replication – the number of times you repeat the assessment – is key to having confidence in the results. You need to make sure you have a real result, rather than one which could have occurred by chance. It is important to understand that, generally speaking, scientists describe a result as ‘statistically significant’ when the analysis shows there is less than a 1 in 20 chance that the observed result arose by chance alone. Think of tossing a coin 20 times and getting 19 heads (or 19 tails): the chance of that happening with a fair coin is very small indeed, so we would start to suspect there was a problem with the coin.

Are you likely to carry out 20 replicates to trial a new product? Probably not. But carrying out ten replicates and finding that product A is better than product B in eight or nine of them may be good enough evidence. The size of the difference in performance will also influence the number of replicates you need to carry out. If you are seeing a big difference in performance, as few as five replicates may be enough, but fewer than five is probably not enough to give you the confidence to draw a firm conclusion.
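As a rough sanity check on the ‘eight or nine out of ten’ rule of thumb, you can ask how likely such a split would be if the two products were actually identical, so that each replicate were effectively a coin toss. The sketch below uses the binomial distribution to put numbers on that; the figures are illustrative and no substitute for proper statistical advice.

```python
from math import comb

def prob_at_least(wins: int, replicates: int) -> float:
    """Chance of 'wins' or more successes in 'replicates' 50:50 coin tosses."""
    return sum(comb(replicates, k) for k in range(wins, replicates + 1)) / 2 ** replicates

# The coin analogy from the text: 19 or more heads in 20 tosses of a fair coin.
print(f"19+ heads in 20 tosses: {prob_at_least(19, 20):.5f}")   # roughly 0.00002

# Product A beating product B in 8, 9 or 10 of 10 replicates, if they were really equal.
for wins in (8, 9, 10):
    print(f"A wins {wins}/10 by chance alone: {prob_at_least(wins, 10):.3f}")
```

On this simple view, nine or ten wins out of ten comfortably passes the ‘1 in 20’ threshold, while eight out of ten (a chance of roughly 0.055) sits right on the borderline – which is why the rule of thumb is only a starting point.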
