Once you’ve completed your Job Desirability Map (which we discuss in detail here) you’re left with the kind of problem that most product teams only dream of: You’ve got too many possible product solutions, all of which have the potential to add value and increase motivation among your users.
But how do you choose, say, the top five best and most promising solutions to implement among the dozens you’ve come up with? Given that the job desirability mapping process is so methodical and scientific, it would be a shame to put your faith in luck or pseudoscience at this late stage.
It’s tempting to just show all of your solutions to your customers and ask them which ones they like best, but that kind of feedback is far too subjective if you hope to gather reliable and accurate results. In fact, most feedback gathering methods aren’t nearly as accurate and objective as they claim to be. Although on the surface they sound reassuringly “science-y,” the most popular ranking systems — forced choice analysis, paired comparisons, focus groups, and conjoint analysis — all share some flaws, and shouldn't be used as your sole means of gathering data.
So what are you supposed to do? Trust to fate and pick your top five solutions out of a hat? Of course, we would never advocate such a random approach to important decision making. Instead, we recommend a technique called “Maximum Difference Scaling,” or “Max Diff” for short. First pioneered in the 1990s, this method is a more objective and quantitative way to gain insight into which solutions would be most valuable to your customers.
Keep in mind that even a perfect method won’t yield useful or accurate results if you start from a bad premise, but thanks to job mapping, you don’t have that problem. You’re already starting from a better place than most teams, because you’ve focused on the job that your users are hiring your product to do. Thanks to the job desirability map, you’ve gained insights into your users’ motivations, identified friction points and workarounds, and theorized about the best solutions to accelerate your users’ progress. It’s a fair bet that none of your potential solutions are clunkers, but you do need to narrow the field and implement the very best ones.
And that’s exactly what Max Diff can help you accomplish.
At first glance, Max Diff seems very similar to the better-known method of conjoint analysis, but there are some important differences. In conjoint analysis, users are given a set of products, each with different configurations of features — price, size, add-ons, etc. They are then asked to choose which one is most appealing. While this method can be somewhat useful in cases where there are variable feature configurations (assuming you haven't simply brainstormed your ideas in a vacuum), it isn't the best way to narrow down your solution statements.
Max Diff, on the other hand, offers users a small set of features at a time (say, three or four) and asks them to identify the most and the least important feature in that set. The most important feature is then carried over into a new set of features to be ranked. Think of it like a vision test: you’re given a choice between two lenses, and your preferred lens is compared against a new lens, until the best choice emerges.
How would this work with your solution statements? Let’s say your product is a monthly milkshake subscription. (We break down what that would look like in our Milkshakes & Metrics series.) After completing your Job Desirability Map, you’ve come up with 50 potential solutions to friction points or workarounds your customers have encountered when trying to take advantage of their subscription. You want to narrow it down to the five best. You first offer them a set of three solutions to compare.
Then you simply ask them to choose the solution that is most likely to accelerate their progress, and the one that is least likely to do so. If, for example, they choose “Recurring delivery of milkshakes” as the most likely, roll that choice into the next group. The more often a solution is chosen as most likely, the higher its rank, and when you’ve cycled through every choice, you’ll be able to pick the top five.
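The tallying step above can be sketched in a few lines of code. This is a minimal illustration, not a full Max Diff analysis (real tools fit statistical models to the choices); it uses the common best-minus-worst counting approach, and the solution names and responses below are hypothetical:

```python
from collections import Counter

def maxdiff_scores(responses):
    """Tally best-minus-worst counts across all choice sets.

    Each response is a (choice_set, best, worst) tuple: the solutions
    shown, the one picked as most likely to accelerate progress,
    and the one picked as least likely.
    """
    best, worst, shown = Counter(), Counter(), Counter()
    for choice_set, b, w in responses:
        shown.update(choice_set)  # track how often each solution appeared
        best[b] += 1
        worst[w] += 1
    # Score each solution by wins minus losses, normalized by
    # how many times it was shown to a respondent.
    return {s: (best[s] - worst[s]) / shown[s] for s in shown}

# Hypothetical responses from three survey takers.
responses = [
    (("Recurring delivery", "Flavor sampler", "Gift cards"),
     "Recurring delivery", "Gift cards"),
    (("Recurring delivery", "Mobile ordering", "Loyalty points"),
     "Mobile ordering", "Loyalty points"),
    (("Recurring delivery", "Flavor sampler", "Mobile ordering"),
     "Recurring delivery", "Flavor sampler"),
]

scores = maxdiff_scores(responses)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[:2])  # the two highest-scoring solutions
```

With 50 solutions you would generate many small choice sets so that each solution appears several times, then keep the five highest scores from the final ranking.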
Of course, nothing is 100 percent accurate when human beings are involved and you’re asking for feedback rather than observing their behavior. That said, the Max Diff method is inherently more objective than other popular ranking methods. As marketing professor Tim Daly wrote in the Qualtrics blog, a huge advantage of the Max Diff method is that “We obtain not only greater discrimination between all of these appealing attributes, but the relative degrees to which they are seen as important decision factors.”