Measuring how much an infringed patent affects consumer demand: why and how
Expert witnesses Betsy and Gabriel Gelb review the methods that litigators use to calculate patent infringement damages
We start with why to conduct a survey given a patent infringement suit.
Clearly, a survey is worthwhile for defendants: it can show the proportion of revenue from their product that is attributable to the feature based on a contested patent. Any number less than 100 percent is good news. Juries continue to award what many observers consider exorbitant damage amounts to companies whose patents they find infringed, despite the admonition by the US Court of Appeals for the Federal Circuit that a measurement of consumer demand for a patented feature should influence the calculation of damages.
This modification of the entire market value rule followed the digital revolution, as products with hundreds of patented processes came on the market. Scott Breedlove, partner at the international law firm Vinson & Elkins LLP, put it this way at the 10th Annual Texas State Bar’s Advanced Patent Litigation Course: “Courts … appear to be requiring more real-world evidence of a nexus in addition to experts’ opinions, and expecting to see consumer surveys to show consumer demand for the patented features.”
However, a survey is also worthwhile for the plaintiff, for three reasons. Obviously, the patent owner may want to offer a jury a proportion that differs from the one that results from a survey by the defendants.
The defendant may not undertake a survey, but may instead offer an expert’s estimate of the influence of the feature on consumer demand, and a survey by the plaintiff may find a higher proportion.
But a third benefit concerns the perception by the plaintiff of the value of its patent in motivating purchase, particularly when a product has literally thousands of features (smartphones come to mind here).
A patent owner’s perceived value for one patent can far exceed reality, making a settlement more difficult to obtain than should be the case. The patent owner’s attorneys may be faced with exaggerated expectations by their client unless a survey can present a dose of reality.
How surveys work well, even with complex products
We turn now to the ‘how’ of survey research to find out what proportion of the choice of Product X the buyers or potential buyers of that product attribute to Feature Y.
First of all, simply asking for rankings or ratings produces numbers that are easily attacked by the other side. Consequently, survey research experts initially approached the task of determining the proportionate value of a patent-dependent feature of a product using trade-off techniques based on all features of the product.
The two survey methods we discussed in IPProTheInternet in 2013 are Conjoint Analysis and MaxDiff, the name used by survey researchers for Maximum Difference Scaling. Both of these methods have often been accepted in federal courts as ways to mimic the consumer’s real-world behaviour in making trade-offs when selecting products or making any other purchases.
The initial ‘gold standard’ has been Conjoint, asking consumers to choose among screens (assuming an internet survey) that ‘package’ sets of features, but vary systematically which features are grouped with others, so that the relative value of an individual feature emerges after consumers react to multiple screens. In this technique, all features are ‘considered jointly’ (thus the ‘Conjoint’ name), recognising that consumers make trade-offs among multiple features when selecting a product.
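To make the trade-off logic concrete, here is a minimal, hypothetical sketch of a ratings-based Conjoint analysis in Python. The three feature names, the full-factorial profiles and the respondent ratings are all invented for illustration; real litigation studies use choice tasks and far more sophisticated estimation.

```python
from itertools import product
from statistics import mean

# Hypothetical ratings-based conjoint with three binary features.
# The profiles form a full-factorial (orthogonal) design.
features = ["camera", "battery", "waterproof"]
profiles = [dict(zip(features, combo)) for combo in product([0, 1], repeat=3)]

# Invented ratings (0-10), one per profile, in profile order.
ratings = [2, 4, 5, 6, 3, 5, 7, 9]

def part_worth(feature):
    """Average rating with the feature present minus average without it.
    A valid main-effects estimate here because the design is orthogonal."""
    with_f = [r for p, r in zip(profiles, ratings) if p[feature] == 1]
    without_f = [r for p, r in zip(profiles, ratings) if p[feature] == 0]
    return mean(with_f) - mean(without_f)

worths = {f: part_worth(f) for f in features}

# Relative importance: each part-worth as a share of the total, summing to 100.
total = sum(abs(w) for w in worths.values())
importance = {f: 100 * abs(w) / total for f, w in worths.items()}
```

Because every feature appears with every combination of the others, the simple difference of means isolates each feature’s contribution, which is the trade-off principle described above.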
While at first Conjoint analysis seemed to fit all applications, it became unwieldy as product complexity increased. What we mean is that as product features/attributes multiplied, the number of screens that respondents had to view became burdensome.
As a result, Maximum Difference Scaling, an advanced form of Conjoint, was developed in 1986 and began to make its way onto the litigation battlefield. MaxDiff is called informally the best-worst method, in that respondents do not have to view ‘packages’ but rather see a shorter list on each screen and select which item on that screen is most important and which is least important in their choice of product. MaxDiff scores are percentages of the total value of a product excluding generic items, such as tyres on a car. They add up to 100 percent. In one such study we conducted of 21 features, the highest percentage was 17 and the lowest 1, an example of the ability of the technique to discriminate among the items.
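As an illustration of how best-worst tallies become percentages that sum to 100, here is a hypothetical counts-based sketch in Python. The feature names and the best/worst counts are invented, and the simple logit-style rescaling stands in for the hierarchical Bayes estimation a production MaxDiff study would typically use.

```python
from math import exp

# Invented MaxDiff tallies: how often each feature was picked as
# "most important" (best) or "least important" (worst) across all
# screens, and how many screens it appeared on.
tallies = {
    "gps tracking":  {"best": 40, "worst": 2,  "shown": 60},
    "fuel reports":  {"best": 22, "worst": 10, "shown": 60},
    "driver alerts": {"best": 10, "worst": 25, "shown": 60},
    "colour themes": {"best": 1,  "worst": 36, "shown": 60},
}

# Counts-based score: (best - worst) normalised by exposure.
raw = {f: (t["best"] - t["worst"]) / t["shown"] for f, t in tallies.items()}

# Logit-style transform so every share is positive and the shares sum to 100.
denom = sum(exp(s) for s in raw.values())
shares = {f: 100 * exp(s) / denom for f, s in raw.items()}
```

The resulting shares preserve the best-worst ordering while summing to 100 percent, matching the way MaxDiff scores are reported to a jury.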
Respondents can breeze through a screen much faster in MaxDiff than in Conjoint analysis, and even children have no trouble using the method. The numbers of items to be measured can range up to 40, and the resulting scores are easy to interpret and easy to communicate to the jury.
But what about more than 40 features?
Now we turn to a method of measuring the proportional contribution of a patented feature when a product’s complexity puts the number of features well into the hundreds. This new method, which we pioneered, is easy for the customers who provide the data and, importantly, does not require us as researchers, nor the customers we survey, to consider all the features of a product.
We call this survey research method ‘Proportional Valuation’ (PV). It offers a straightforward way to measure the relative contribution of a patent to the product’s attractiveness and/or usefulness. This method has been employed in infringement cases in which we served as experts. Two recent examples involved mobile devices and global social media platforms.
To see how PV works in practice, consider an example adapted, to disguise the actual case, from work we undertook recently. Companies with truck or van fleets purchase software that tracks the locations of these delivery vehicles, but a new feature allows the drivers of those vehicles to notify headquarters concerning road repairs or other traffic hazards. What is the value of this feature to the many companies that now use it? Damages experts will express that value in dollar amounts, but those dollars will be a proportion of revenue: the new feature helped to sell the tracking software. What proportion of the value of the software is due to the added feature?
Via the internet, a survey goes out to the relevant population, that is, those who were involved in the purchase of the tracking software after the date the new feature was added. Basically, the purpose of the survey is to elicit what percentage of the total value of the software is attributable to the added feature.
We ask these respondents to imagine the software of interest “complete with all its features” as valued at 100 points.
Then we ask how many of those 100 points they would take away, reducing the value of the software to them, if the (patent-dependent) feature were not available. For each respondent, we also ask a parallel question about the proportional value of a “control” feature, something of little consequence but useful for refuting the criticism that, when asked about a feature, people exaggerate its importance.
Presumably, that tendency to exaggerate will apply to the control feature as well, and subtracting its proportional value from the proportional value of the feature that depends on the patent at issue gives a justifiable number.
These difference scores are then arrayed and the mean calculated, to be passed on to the damages expert. In our view, the resulting statistic, the mean of these difference scores, measures the value of the patented feature as a proportion of the overall value of the tracking software.
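The arithmetic of PV is simple enough to sketch directly. The script below uses invented responses adapted to the fleet-tracking example: each respondent values the full software at 100 points, each respondent’s control-feature points are subtracted from their patented-feature points, and the mean of those difference scores is the proportion passed to the damages expert.

```python
from statistics import mean

# Invented Proportional Valuation responses. Each entry is the number of
# points (out of 100) a respondent would remove without (a) the patented
# hazard-alert feature and (b) a low-consequence control feature.
responses = [
    {"patented": 18, "control": 4},
    {"patented": 25, "control": 6},
    {"patented": 10, "control": 2},
    {"patented": 15, "control": 5},
]

# Difference score per respondent: points attributed to the patented
# feature, net of the exaggeration measured by the control feature.
diffs = [r["patented"] - r["control"] for r in responses]

# The mean difference score is the patented feature's value as a
# proportion of the overall value of the software.
proportion = mean(diffs)
```

With these invented numbers the difference scores are 14, 19, 8 and 10, for a mean of 12.75, i.e. the patented feature would be credited with 12.75 percent of the software’s value.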
The bottom line
Technology products and processes make up the majority of patent lawsuits. In January 2015, Bloomberg Technology reported that in 2014, “63 percent [of all 5,002 patent cases] involved hardware, software and networking companies”.
As technology products increased in complexity, the survey research methods used to measure consumer impact, which had to incorporate many more features, of necessity became simpler in their execution: from Conjoint to MaxDiff to PV. Few respondents want to view hundreds of screens, and few researchers can claim to have included all of the, for example, 52 features of a technical product.
Each of the three methods we have described briefly has appropriate uses. For attorneys who see the need for survey research in patent infringement cases, it should be useful to realise that the methods to conduct those surveys have evolved along with the complexity of the products employing contested patents.