In my last post, we looked at improving the quality of ideas from two different angles. In this post, we'll take a closer look at how to decide among all the input you receive from your well-managed and successful campaigns and channels.
In your daily work, you have probably encountered many scenarios where you need to choose which ideas to pursue. For example, when you ask your community to help solve a problem in your production process, it may be best to ask your production experts to judge the quality of the submitted ideas by filling out a detailed evaluation form. In other scenarios, you may have asked a broader audience of senior colleagues about strategic options for future development, or for their input on the competition.
For the latter case, an easy approach is pairwise evaluation. In this methodology, the manager chooses one or several criteria to evaluate the ideas against each other. For each criterion, the evaluators judge the relative quality of two ideas in comparison, i.e., which idea is better regarding that criterion. This opinion is usually expressed via a slider across a range of values.
When an evaluator starts a pairwise evaluation session, an algorithm chooses a series of idea pairs to be compared. Several aspects have to be considered when selecting those pairs; we'll touch on some of them below. As more evaluations are completed, another algorithm can continually calculate a ranking of all ideas, both from a global perspective and for each evaluation criterion. The more completed pairwise evaluations, the more accurate the resulting ranking becomes.
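The actual ranking algorithm behind the product isn't described here, but to make the idea concrete, here is a minimal sketch of one common way to turn pairwise results into a ranking: an Elo-style rating update, where each comparison nudges the winner's score up and the loser's score down. The function name and parameters are illustrative assumptions, not the product's API.

```python
from collections import defaultdict

def rank_ideas(comparisons, k=32.0, base=1000.0):
    """Illustrative Elo-style ranking from pairwise results.

    comparisons: list of (winner, loser) idea identifiers.
    Returns the idea identifiers sorted from strongest to weakest.
    """
    scores = defaultdict(lambda: base)
    for winner, loser in comparisons:
        # Expected win probability for the idea that actually won,
        # given the current score difference.
        expected = 1.0 / (1.0 + 10 ** ((scores[loser] - scores[winner]) / 400.0))
        delta = k * (1.0 - expected)
        scores[winner] += delta
        scores[loser] -= delta
    return sorted(scores, key=scores.get, reverse=True)

ranking = rank_ideas([("A", "B"), ("A", "C"), ("B", "C")])
# "A" won every comparison it appeared in, so it ends up on top.
```

Note how the ranking sharpens as more comparisons arrive: a single upset barely moves an established score, which matches the observation that more completed evaluations yield a more exact result.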
In general, you should apply this approach in situations where you want to ask for a personal opinion comparing two different options. On closer inspection, however, two fundamentally different situations can arise, and you should apply this tool quite differently in each.
In the first situation, you expect that your evaluation team holds roughly similar opinions on the ideas in question. For example, if you ask people how sweet or sour a certain food is, they will probably all give you very similar answers, differing only in small details. In this scenario, you only need to ask your evaluation team for as many comparisons as it takes to cover all your ideas to a sufficient degree. Usually, a few comparisons per evaluator are enough to get a good picture of the overall ranking.
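A simple way to think about "covering all your ideas" is a coverage heuristic: keep picking pairs, always preferring the least-compared ideas, until every idea has appeared a minimum number of times. This is only a sketch under that assumption; the pair-selection logic in the product itself is certainly more sophisticated.

```python
import random

def choose_pairs(ideas, min_comparisons=2, seed=None):
    """Greedily pick idea pairs so that each idea appears in at least
    `min_comparisons` comparisons (a simple coverage heuristic)."""
    rng = random.Random(seed)
    counts = {idea: 0 for idea in ideas}
    pairs = []
    while min(counts.values()) < min_comparisons:
        # Put the least-compared idea first to spread coverage evenly,
        # then pair it with a random partner.
        ordered = sorted(ideas, key=counts.get)
        first = ordered[0]
        partner = rng.choice(ordered[1:])
        pairs.append((first, partner))
        counts[first] += 1
        counts[partner] += 1
    return pairs
```

With a handful of ideas, this terminates after only a few pairs per evaluator, which is why a small per-evaluator workload can still yield a usable overall ranking.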
In contrast, take the second scenario from above: you are asking senior colleagues for different strategic options and, in a second step, they should rate all these options to establish a broader basis for discussion in a strategy workshop. Opinions on how the overall strategy should look will differ from colleague to colleague, not just in details but in their entire perspective. And for something like a strategy, it is essential that all components play together smoothly.
In this situation, it does not really help to ask for only a few comparisons per idea, because those different building blocks will most likely not fit together. Instead, you should ideally ask each colleague for a complete picture of their opinion about all the options. The software can then compare those pictures and compile them into one graph representing all of your colleagues' views.
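How those complete pictures get compiled into one result is not specified in this post; as one illustrative possibility, a Borda count merges complete per-evaluator rankings into a single consensus ordering. Each idea earns points for every idea an evaluator ranked below it, and the totals decide the combined order. The function name is a hypothetical stand-in, not the product's actual aggregation method.

```python
from collections import defaultdict

def merge_rankings(rankings):
    """Combine complete per-evaluator rankings into one consensus
    ordering via Borda counts: an idea earns one point per idea
    ranked below it by each evaluator."""
    points = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, idea in enumerate(ranking):
            points[idea] += n - 1 - position
    return sorted(points, key=points.get, reverse=True)

consensus = merge_rankings([
    ["A", "B", "C"],   # one complete picture per evaluator
    ["A", "C", "B"],
    ["B", "A", "C"],
])
# "A" collects the most points across all evaluators.
```

The key difference from the first scenario is the input: here every evaluator contributes a complete ordering, so disagreements in overall perspective are preserved and then reconciled, rather than averaged away comparison by comparison.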
We improved our pairwise algorithms by adding more flexibility: you can now adjust the number of comparisons you ask each evaluator for, to mention just one of the updates. But we did not stop there. By asking for a few more algorithm parameters up front, we also improved the guidance during the setup of pairwise evaluation sessions, so your campaign managers are now much better equipped to choose the right way to approach your evaluators for the necessary decisions.