By Joryn Jenkins, Esq. and Michael J. Mard, CPA/ABV.
State Farm v. Electrolux
In State Farm v. Electrolux(1), for example, the Western District of Washington held that the expert exceeded his scope of experience. This was a subrogation action concerning a fire that occurred in the home of a couple who held a State Farm homeowner’s insurance policy. State Farm retained an engineer and a fire investigator from a local forensic engineering laboratory, who determined that the fire’s origin was the Electrolux clothes dryer.
State Farm subsequently sent the dryer to a second fire examiner, Sanderson, who examined the dryer along with Electrolux representatives. Sanderson issued an expert report concluding that:
- The fire originated in the dryer heater pan and was caused by burning particles of lint;
- Electrolux defectively manufactured the dryer, which caused lint to accumulate internally;
- Electrolux defectively designed the dryer because it allowed lint to accumulate where it could be ignited;
- Electrolux failed to warn because it did not place adequate warnings on the dryer itself, but only placed a warning in the instruction manual; and
- Electrolux’s expectation that customers would hire qualified service personnel to clean the dryer at eighteen-month intervals, as set forth in the instruction manual, was not reasonable.
In arriving at this last conclusion, Sanderson and his staff contacted the authorized service providers in question, “posing as a customer wanting to have our dryer cleaned.” Sanderson claimed “few, if any, of them had ever been asked to perform such a service. Some even said they only repaired dryers.”
The defendants moved in limine to preclude Sanderson from testifying regarding some of his conclusions, contending that he was not qualified to opine upon the defendants’ alleged failure to warn or issues relating to consumer expectations. The court granted the motion to the limited extent that Sanderson was prohibited from testifying regarding his “survey” of Electrolux personnel. The State Farm court clarified:
Expert testimony that does not relate to any issue in the case is not relevant and, ergo, not helpful … [I]t is only “reasonable” for an expert to rely on the statements of others if the statements or declarations were collected through methods calculated to elicit reliable information. Here, the court is not convinced that [the expert] collected the statements at issue through methods calculated to elicit reliable information.
Apple v. Samsung
In the more recent case of Apple v. Samsung(2), plaintiff Apple moved to exclude the testimony of eight defense experts. Samsung, in turn, moved to exclude the testimony of eight of Apple’s experts. The Northern District of California found:
An expert witness may provide opinion testimony if: 1) the testimony is based upon sufficient facts or data; 2) the testimony is the product of reliable principles and methods; and 3) the expert has reliably applied the principles and methods to the facts of the case. Under Daubert, “a court should consider 1) whether a theory or technique ‘can be (and has been) tested’; 2) ‘whether the theory or technique has been subjected to peer review and publication’; 3) ‘the known or potential rate of error’; and 4) whether it is generally accepted in the scientific community . . .”
The Apple court found that the underlying consumer survey data on which the defendant’s proffering expert had relied for his calculations was the kind of data on which “experts in the particular field would reasonably rely.” However, the court went on to find that neither the expert nor the defendant had cited any evidence supporting their assertion that the expert’s calculations were based on a generally accepted, peer-reviewed method. And the plaintiff had raised serious doubts about the reliability of the expert’s arithmetic, “the flaws of which are apparent even on the face of some of his calculations.”
The court granted in part and denied in part both Apple’s and Samsung’s motions to exclude, thus disallowing some of the testimony from some of the experts on both sides.
Recursion Software, Inc. v. Double-Take Software, Inc.
The plaintiff in Recursion Software, Inc. v. Double-Take Software, Inc.(3) sued for breach of contract and copyright infringement of its product, C++ Toolkits. The license agreements were at issue in the case, and contained identical language regarding the distribution of the C++ Toolkits. The parties disputed the scope of the license agreements, specifically, whether the agreements allowed the C++ Toolkits to be distributed by static linking.
The parties each moved to exclude the expert testimony of two of the other party’s experts. The Eastern District of Texas ruled:
In deciding whether to admit or exclude expert testimony, the court should consider … 4) whether the theory or technique is generally accepted in the relevant scientific community.
The court denied the plaintiff’s motions, finding the experts’ opinions admissible. The court did not see a “profound analytical gap” in one of the experts’ opinions, as asserted by the plaintiff. The court found that both experts’ opinions were sufficiently reliable and admissible; the plaintiff could still challenge the validity of the opinions through cross-examination or by presenting contrary evidence at trial.
However, the court granted one of the defendant’s motions, finding that the expert’s testimony was not relevant in helping the jury to determine a fact in issue. Further, the court found that the report was not the product of reliable principles or methods because the expert did not use appropriate comparable license agreements to determine a hypothetical license fee (i.e., the “comps” were not comparable). Because the reliability analysis applied to all aspects of an expert’s testimony, the court granted the motion to exclude.
The court granted in part and denied in part the other defense motion. That expert’s conclusions and opinions provided useful information regarding the context of the parties’ contract negotiations and agreements, but were based in part on interpretations of the contractual agreements and thus were considered legal conclusions. The court excluded the parts of his opinions considered improper legal conclusions. Instead, the court ruled that the plaintiff’s counsel should make such arguments in briefs or argument before the court.
Legendary Art, LLC v. Godard
In Legendary Art, LLC v. Godard(4), the Eastern District of Pennsylvania considered the plaintiff’s proffer of an expert report and testimony to establish a range of fair-market values in order to prove its damages. The defendants argued that the data upon which the expert had based his damages calculations was unreliable, rendering his conclusions speculative and inadmissible.
The report relied upon a business plan developed by the plaintiff that contained profit and loss projections for the business. The expert relied upon the plan to form his valuation conclusions and assumed the validity of the plan projections. He reviewed only their reasonableness in light of his experience as a finance professional; in fact, he had no experience in the manufacture, sale, or distribution of artwork or related products. He did not conduct any independent research into the plaintiff’s industry, and the report referenced no market surveys or studies. There was no evidence that the expert engaged in any independent verification of the projections supplied by the plaintiff.
The Legendary Art court opined that Rule 702 “embodies a trilogy of restrictions on expert testimony: qualification, reliability, and fit.” The court found that the expert’s reliance on projections supplied by the plaintiff, without independent verification, rendered his analysis “unreliable.” The court pointed out that, in ID Sec. Sys. Can., Inc. v. Checkpoint Sys., Inc., 249 F. Supp. 2d 622, 695 (E.D. Pa. 2003), the expert was found to be even more unreliable because he based his opinion on projections supplied by the plaintiff company’s president, who had more incentive than an independent consultant to inflate the predictions of the company’s potential sales and profits. The Legendary Art court therefore excluded the expert’s opinion, holding that:
Unverified profit and loss projections cannot be the type of evidence “reasonably relied upon by experts” as required by Daubert … Here, the court is faced with the opinion of an expert who has no knowledge of the relevant industry, relied exclusively on the self-serving data provided by Legendary Art, and conducted no independent verification of that data.
Uniloc USA, Inc. v. Microsoft Corporation
In Uniloc USA, Inc. v. Microsoft Corporation(5), the patent at issue involved a system to deter the copying of software. The accused product was a feature offered by the software company to protect several of its software programs. After a full trial, a jury found that the patent was valid, the software company engaged in willful infringement, and the patentee was due $388 million in damages. The district court denied judgment as a matter of law on invalidity, but granted a judgment as a matter of law of noninfringement and of no willfulness, and granted, in the alternative, a new trial on infringement and willfulness.
On review, the court reversed the judgment as a matter of law of noninfringement because it concluded that the jury’s verdict on infringement was supported by substantial evidence. However, willfulness was not supported. The court also reversed the district court’s alternative grant of a new trial on infringement. The court found that the jury’s damages were fundamentally tainted by the use of the twenty-five percent rule of thumb, which it held, as a matter of Federal Circuit law, to be a legally inadequate methodology. Thus, a new trial on damages was warranted.
The court agreed that the jury verdict of no invalidity was supported by substantial evidence.
The Federal Circuit homed in on the Kumho Tire case and applied the specificity required of an expert to the facts and circumstances immediately before the court:
The specific issue was not whether the visual and tactile inspection methodology was “reasonable in general,” but whether “it was [reasonable to] us[e] such an approach … to draw a conclusion regarding the particular matter to which the expert testimony was directly relevant … The relevant issue was whether the expert could reliably determine the cause of this tire’s separation.”
Baisden v. I’m Ready Productions, Inc.
In Baisden v. I’m Ready Productions, Inc.(6), Baisden alleged copyright infringement and several state common law claims regarding two books he had authored, The Maintenance Man and Men Cry in the Dark. Baisden had entered into agreements with IRP, pursuant to which IRP had adapted the two novels into live performance plays. The agreements provided for Baisden to be paid a portion of the ticket and merchandise sales related to the plays. IRP produced and toured the plays and claimed derivative work copyrights on the plays. IRP coproduced video recordings of live performances of the plays and entered into an agreement to distribute them. IRP neither paid Baisden royalties for the sale of video recordings nor provided him with an accounting of those sales.
To assist in calculating damages for infringement of the copyrights, Baisden’s expert consulted “Intellectual Property Damages in the Entertainment Industry” (Litigation Services Handbook, 4th ed.) and Internet Movie Database (Pro Edition), among other entertainment industry data sources. The Southern District of Texas excluded the expert’s testimony concerning the plaintiff’s actual damages to the extent of his opinions on profits from movies, addressing the reliability issue squarely:
Reliability hinges on the sufficiency of the facts or data upon which the opinion is based, the dependability of the principles and methods employed, and the proper application of the principles and methods to the facts of the case. Among the factors to be considered in determining reliability of scientific testimony are: 1) the extent to which the theory can be tested or has been tested; 2) whether the theory has been subject to peer review and publication; 3) potential rate of error for the technique used and the existence of standards and controls; and 4) whether the underlying theory or technique is generally accepted as valid by the relevant scientific community.
Because the bulk of the expert’s opinions were based largely on speculation, the court disallowed them. However, the expert’s testimony on lost profits from the failure to produce The Maintenance Man movies did survive the reliability challenge. While the defendant’s argument that the expert lacked important evidence and that he employed an improper method for calculating actual damages under the Copyright Act did raise concern, it did not warrant exclusion of his testimony. Instead, these facts, if proven at trial, were fodder for cross-examination and went to the weight of the expert’s testimony.
The twenty-two-year history of federal cases reveals simplicity and precision for the financial expert and the client’s lawyers: The reliability of the financial expert’s opinion must be a function of specific relevance particular to the facts and circumstances forming the testimony. Square pegs go into square holes. Evidence of the reliability of the expert’s methods can originate from testability, being peer-reviewed (and presumably accepted), establishment of an error rate (from some testing mechanism, including standards and controls), and/or establishing that the methodology is generally accepted (e.g., Generally Accepted Accounting Principles). The measurement of error rates has a threshold function of materiality, because immaterial information does not affect a user’s decision.
The nature of finance creates a two-pronged level of testability and potential error rates that must be minimized to comply with the Daubert test. The first is at the quantitative level of judgment, in which quantitative error rates should be tested (and corrected, if necessary) and evidenced by the appropriate completion of checklists specific to the purpose and utilized in both non-litigation and litigation settings. For instance, quantitative error rates can be tested in a valuation engagement at several levels. Typically, in such an engagement, the expert will first input five years of financial statements to perform a financial analysis. After the initial input, another analyst should trace the results back to the source data to verify that the input was performed correctly. Errors are, of course, corrected. Thus, the final result of this phase of the report should be error-free.
A next level of input might be the peer ratios (RMA or comps) and then the analyst’s interpretation of the subject’s performance against the peer ratios. Finally, the value driver methodology (market guideline data, income projections for a DCF, market value data for an asset-based approach, etc.) is input, reviewed, changed, and, upon final acceptance, traced from the underlying inputs (10-Ks, strategic plans, real estate appraisals) to the subject’s report narrative. Again, errors are normal and are routinely corrected. There is an error rate along the way, but the final product in the report should be error-free and include documentation established at each level of work by appropriate sign-off on the firm’s checklist. That is, the integrity of the source data from input to manipulation to report has now been double-checked, and the valuator’s quantitative input to the subject’s report is error-free.
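The trace-back check described above lends itself to a simple illustration. The sketch below is hypothetical and not a tool the authors describe: a second analyst’s re-keyed source figures are compared to the figures entered for analysis, and any discrepancy is flagged for correction. Field names and amounts are invented for the example.

```python
# Hypothetical sketch of the trace-back verification step: compare the
# figures entered for analysis against the source financial statements
# and flag any field that does not match within a stated tolerance.

def trace_back(entered: dict, source: dict, tolerance: float = 0.0) -> list:
    """Return (field, entered_value, source_value) for each discrepancy."""
    discrepancies = []
    for field, source_value in source.items():
        entered_value = entered.get(field)
        if entered_value is None or abs(entered_value - source_value) > tolerance:
            discrepancies.append((field, entered_value, source_value))
    return discrepancies

# Example: the entered data contains one transposition error in COGS.
source = {"revenue_2014": 1_250_000, "cogs_2014": 830_000, "sga_2014": 210_000}
entered = {"revenue_2014": 1_250_000, "cogs_2014": 380_000, "sga_2014": 210_000}

for field, got, expected in trace_back(entered, source):
    print(f"{field}: entered {got}, source shows {expected}")
```

Once every flagged discrepancy is corrected and the check re-run to an empty result, this phase of the engagement is documented as error-free, consistent with the checklist sign-off discussed above.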
The qualitative level of judgment is the second level of testability, and potential error rates in quality of judgments must also be minimized. The monitoring at the qualitative level is an entity-specific consideration and thus an aspect of relevance. Materiality is a pervasive constraint to the expert’s testimony because it is pertinent to all of the qualitative characteristics underlying the expert’s opinion. Such extensiveness of judgment should be well-documented.
There are hundreds of judgments throughout the valuation process, but typically only a few that have a material impact on any one conclusion. Certainly, the qualitative interpretation of the subject’s performance compared to its selected peers requires elaboration. In what manner are the peers comparable (size, industry, geographic area)? Does the subject perform better or worse than the selected peers? Why (market share, more efficiencies)? Qualitative aspects extend to the professional judgment necessary for interpreting the economic behavior of the subject and matching such behavior to the performance of the guidelines, directly via public markets or indirectly via rates of return.
It is axiomatic that value is a function of economics and always based on the return on assets. The asset-based approach represents the subject’s things, owned or borrowed. The income approach quantifies the return these assets can be expected to produce.
The market approach, while critical if available, merely reflects the market’s perceptions of the subject’s usage of its things, owned or borrowed, and their expected returns.
Though every engagement is different, every engagement always has some economic linchpin. The due diligence and the paper trail should make clear the analysis, support, rationale, and ultimate qualitative derivations. It is this trail laid by the analyst and aligned within the generally accepted methodologies that is then reviewed and adjusted by the in-charge principal and cold-reviewed by another principal.
Pursuant to the Daubert test, these generally accepted methodologies are your (CPA) peer-reviewed ethics, principles, and standards and also include peer-reviewed protocols such as practice aids, courses, books, and articles. Evidence for Daubert compliance of the qualitative aspects is documented by the appropriate sign-offs on the firm’s checklists. The measurable error rate is established over time by the number of such assignments that have failed to be accepted by the users of the firm’s opinions and reports. Financial experts are quirky beasts who use peculiar methods not commonly known to non-financial courts. Understanding and observing the methods at both quantitative and qualitative levels of judgment above the threshold of materiality evidenced by broadly applied checklists can go a long way toward bridging the gap. Leaving a clear trail of documentation in the work papers and checklists can demonstrate compliance with the Daubert test:
- Peer-reviewed methodology
- Error rate established
- Tested empirically
- Accepted generally in the expert’s industry
- Litigation and non-litigation use of methodology is established
1. State Farm Fire and Casualty Company v. Electrolux North America, 2011 U.S. Dist. LEXIS 148106, 2011 WL 6753140 (W.D. Wash. December 23, 2011)
2. Apple, Inc. v. Samsung Electronics Co., Ltd., 2012 U.S. Dist. LEXIS 90877 (N.D. Cal. June 29, 2012)
3. Recursion Software, Inc. v. Double-Take Software, Inc., 2012 U.S. Dist. LEXIS 63299, 2012 WL 1576252 (E.D. Tex. May 4, 2012)
4. Legendary Art, LLC v. Godard, 2012 U.S. Dist. LEXIS 116270, 89 Fed. R. Evid. Serv. (Callaghan) 210, 2012 WL 3549990 (E.D. Pa. August 17, 2012)
5. Uniloc USA, Inc. v. Microsoft Corporation, 632 F.3d 1292, 1312 (Fed. Cir. 2011)
6. Baisden v. I’m Ready Productions, Inc., 2010 U.S. Dist. LEXIS 45057, 2010 WL 1855963 (S.D. Tex. May 7, 2010)
Note: This article was published by permission of the authors in The Value Examiner, July/August 2015 issue.