5 Key Questions Attorneys Should Ask About Statistical Analyses, Law360

April 15, 2026

Vetting the reliability of an expert opinion was at the forefront of a U.S. Court of Appeals for the Ninth Circuit oral argument on March 9, where the panel questioned whether criticisms of the plaintiffs' damages experts went to the admissibility or merely the weight of their testimony.[1]

The appeal stems from an August 2024 decision in the high-profile In re: NFL "Sunday Ticket" Antitrust Litigation, in which the U.S. District Court for the Central District of California set aside a $4.7 billion jury verdict for DirecTV Sunday Ticket subscribers.[2]

The trial court, conducting a post-trial Daubert analysis, excluded the plaintiffs' damages experts, finding that their models — one based on a hypothetical college football "but-for" world, and another relying on assumptions about alternative distribution and pricing — rested on unreliable economic methodology and speculative assumptions, and thus failed to provide a rational basis for classwide injury or damages.

As a result, judgment was entered for the defendants. On appeal, the Ninth Circuit panel pressed both sides on whether the trial court's exclusion of the experts was proper.

In another recent example, in Flanks v. City of New Orleans, the U.S. District Court for the Eastern District of Louisiana issued a ruling in December 2025 that limited the scope of statistical expert testimony. The court allowed the expert to describe extrapolated case numbers but precluded broader opinions regarding overall trends, noting that such overarching conclusions would not be helpful to a jury.[3]

These cases highlight the importance of understanding the strengths and limitations of statistical evidence, and underscore the role attorneys can play in vetting such analyses, because the elements at issue can often be understood without particular expertise in statistics.

In light of these two cases, this article offers attorneys targeted questions to help vet a statistical analysis. A statistical analysis involves several steps: forming a hypothesis, choosing a methodology and dataset, conducting calculations, and interpreting the results.

At each step, experts make decisions that can significantly influence the outcome of the analysis. For instance, one expert might test whether average wages differ between Group A and Group B, while another might compare the percentage of people earning above a certain threshold.

Although both approaches address compensation across two groups, how the hypothesis is framed can yield different results, even if the experts are relying on the exact same datasets. These kinds of decisions permeate all steps of a statistical analysis, and an expert's conclusions can rest on the decision points along the way.

We highlight these decision points below, and provide some specific questions for attorneys to ask themselves about each step when vetting a statistical analysis and preparing for deposition. By carefully vetting the core elements of a statistical analysis, attorneys can ensure that expert testimony is both reliable and relevant.

Breaking Down a Statistical Analysis

A key point is that, regardless of how many formulas, datasets, calculations or tables are included in a statistical analysis, the results must make sense and be relevant to the case. Additionally, while statistical analyses may appear complex, the underlying concepts are generally straightforward and can be understood by an educated layperson.

Below, we summarize how statistical analyses are typically structured, and provide some starting-point questions for attorneys to consider.

Step 1: Framing the Hypothesis

Starting with step one, hypothesis testing begins with the researcher making an assertion known as the null hypothesis. The researcher then statistically tests this null hypothesis to determine if the available evidence is sufficient to reject it.

Take, for example, the following null hypothesis: The probability of a layoff is the same between Group A and Group B. Rejecting this null hypothesis means that the probability of a layoff is statistically significantly different between Group A and Group B.

This result could be powerful in a litigation context if, for example, Group A comprises members of a protected class and Group B does not. Importantly, the power of a statistical analysis comes from rejecting a null hypothesis, because failing to reject a null hypothesis is not the same as accepting it.

Given the hypothesis testing framework, the findings of any statistical analysis depend crucially on the question that is being asked. Think of the scene from the 2011 movie Moneyball about Billy Beane, who was one of the first general managers in Major League Baseball to use "big data" to assemble his team, the Oakland Athletics.

In the scene, Beane, played by Brad Pitt, is sitting in a conference room with his staff trying to decide which players to recruit for their team. Lots of names are being proposed, and, after some commotion, Beane stops the conversation and asks: "What's the problem?"[4] In other words: What question are we trying to answer?

As Beane suspected, the group had no consensus. As with the people around that conference room table, when a statistician presents an answer — e.g., that a finding is statistically significant — it is critical to understand what question is being asked.

Seemingly benign word choices, in particular, can alter the conclusion of a statistician's analysis. Omissions are also important: What questions could have been asked but were not?

When examining the hypotheses within a statistician's report, attorneys should ask the following questions:

  • What hypothesis is being tested by the statistician (e.g., the probability of a layoff is the same between Group A and Group B)? Is this research question in line with the specifics of the case?
  • Is there anything about the way the question is being asked that tips the scale in favor of a particular outcome (e.g., the probability of a layoff is the same between Group A and Groups B and C)?
  • Is the question overly narrow, such that the outcome is essentially predetermined (e.g., the probability of a layoff is the same between Group A and Group B in the second week of July 2023)?
  • Is the question overly broad, such that any outcome will be of questionable relevance (e.g., the probability of a layoff is the same between Group A and Group B between 2006 and 2026)?
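The null hypothesis above — that the probability of a layoff is the same between Group A and Group B — can be tested with a standard two-proportion z-test. The sketch below uses only the Python standard library; the layoff counts are hypothetical illustrations, not data from any case.

```python
import math

def two_proportion_z_test(laid_off_a, total_a, laid_off_b, total_b):
    """Test the null hypothesis that the layoff probability is the
    same in Group A and Group B (two-sided two-proportion z-test)."""
    p_a = laid_off_a / total_a
    p_b = laid_off_b / total_b
    # Pooled layoff rate under the null hypothesis of equal probabilities
    pooled = (laid_off_a + laid_off_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical counts: 30 of 200 in Group A laid off vs. 15 of 200 in Group B
z, p = two_proportion_z_test(30, 200, 15, 200)
print(f"z = {z:.2f}, p-value = {p:.4f}")  # a p-value below 0.05 would reject the null
```

Note that rephrasing the question — for instance, pooling Group B with a Group C, or restricting the time window — changes the counts fed into this same calculation, and therefore can change whether the null hypothesis is rejected.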

Step 2: Evaluating Data and Methodology

The second step of a statistical analysis is the data and methods section. What attorneys can focus on here is external validity: the degree to which the statistician's data is consistent with other reliable data sources.

If, for example, a statistician's analysis of a county-level dataset shows a median home value of $500,000, but the U.S. Census Bureau's QuickFacts website shows a median value of $250,000, then an error might exist within the statistician's underlying data or calculations.[5]

Another check is whether the sample counts and minimum and maximum values make sense. For example, it does not take a statistician to know that a minimum home value of $100 on a summary table is a red flag. Given the potential for errors and outliers, a lawyer should study the underlying figures and data cited in support of the expert's data and methods section.

When examining the data and methods section, an attorney should ask:

  • Are the reported values that describe the dataset (e.g., averages or means, medians, minimums, and maximums) consistent with other data sources?[6]
  • Are the dates in the dataset reasonable and consistent with the facts of the case?
  • Are the reported values in the dataset consistent with the facts of the case and common sense?
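As a rough illustration of these checks, the sketch below compares a hypothetical dataset's median against an outside reference figure and flags implausible minimums and maximums. All values, including the $250,000 reference median and the 25% tolerance, are assumptions for illustration.

```python
from statistics import median

# Hypothetical county home values; note the $100 outlier
home_values = [210_000, 245_000, 260_000, 100, 275_000, 230_000]

ref_median = 250_000  # stand-in for an outside source such as Census QuickFacts
sample_median = median(home_values)

# External-validity check: flag a large gap between the dataset's
# median and the reference source
if abs(sample_median - ref_median) / ref_median > 0.25:
    print(f"Median {sample_median:,} differs sharply from reference {ref_median:,}")

# Sanity checks: flag implausible minimum and maximum values
if min(home_values) < 10_000:
    print(f"Suspicious minimum home value: ${min(home_values):,}")
if max(home_values) > 50_000_000:
    print(f"Suspicious maximum home value: ${max(home_values):,}")
```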

Step 3: Testing Calculations for Internal Consistency

An attorney's ability to vet a statistical analysis might be limited when it comes to the actual calculations, but they can still conduct checks for internal consistency. Are the results in one table consistent with the results in another table?

Like the construction of hypotheses and checks for external validity, an assessment of internal consistency does not require any specialized knowledge of statistics — just attention to how the final pieces, as presented in tables, fit together.

When examining the analysis section, an attorney should ask:

  • How does each table or figure fit together? For example, if a table shows home values over time for two different regions, and a figure plots the difference over time, are the differences in the figure consistent with the values in the table?
  • Do the results in any figure appear inconsistent with other figures or with the tables? For example, if one figure shows home price inflation in percentage terms increasing over time, and another figure shows home values in dollar terms decreasing over time, does that make sense?
  • Does the text of the statistician's report match the results of the tables and figures? For example, if a statistician reports that 25% of homes are priced higher than $400,000, and references a table, does a check of the table show the interpretation to be accurate?
  • Do the values in the tables and figures make sense, and are they consistent with the facts of the case? For example, if an average home value is reported for a particular region over a specific time period, does any aspect of the location or time period fall outside of what is relevant in the case?
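A consistency check of this kind can even be mechanized when the underlying numbers are available. The sketch below compares the region-to-region differences implied by a hypothetical table against the differences reported in a hypothetical figure; all values are invented for illustration, and the 2023 entry is deliberately inconsistent to show the check firing.

```python
# "Table": home values by region and year (hypothetical)
table = {
    "Region A": {2021: 400_000, 2022: 420_000, 2023: 450_000},
    "Region B": {2021: 380_000, 2022: 395_000, 2023: 410_000},
}
# "Figure": region-to-region differences as reported elsewhere (hypothetical)
figure_differences = {2021: 20_000, 2022: 25_000, 2023: 35_000}

# Internal-consistency check: do the differences implied by the table
# match the differences shown in the figure?
for year, reported_diff in figure_differences.items():
    implied_diff = table["Region A"][year] - table["Region B"][year]
    if implied_diff != reported_diff:
        print(f"{year}: table implies {implied_diff:,} but figure shows {reported_diff:,}")
```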

Step 4: Making Sense of the Results

For the final step, attorneys must ensure that the results of any statistical analysis, no matter how complicated, simply make sense. Statisticians often refer to this check as the "smell test." Attorneys might actually have an advantage over a statistician in this regard.

The statistician, mired in the minutiae of their analysis, might find it challenging to focus on both the forest and the trees simultaneously. The statistician is also unlikely to know as much as the attorneys about the facts of the case and how their analysis fits within them.

Attorneys can leverage this advantage by focusing exclusively on the forest when it comes to the statistical analysis, and saving the tree-level details for the specifics of the case.

The process of questioning whether a statistical analysis makes sense can lead to specific action steps. For example, if a statistician's results seem inconsistent with reality or common sense, then attorneys can take steps to counter arguments that the analysis is unreliable.

If a statistician's results do not align directly with the specifics of the case, then attorneys can take steps to counter arguments that the analysis is irrelevant. The point here is that it does not matter how the expert got to their opinion — what matters is whether the opinion is reliable and relevant.

When examining outcomes, an attorney should ask:

  • Do the statistician's findings make sense, either with respect to a yes/no interpretation of something (e.g., Group A is more likely to be laid off than Group B) or with respect to the magnitude of a finding (e.g., Group A is 80% more likely to be laid off than Group B)?
  • Are the statistician's main findings relevant to and consistent with the facts of the case (e.g., are layoffs measured across facilities, some of which are not at issue in the case)?
  • How are the statistician's conclusions tied to the research question (e.g., if the research question was to assess whether individuals in Group A experienced layoffs at a higher rate than individuals in Group B, does the expert conclude, with no additional analysis, that Policy Q caused the discrepancy)?

  • Could a different phrasing of the question lead to a different conclusion (e.g., were individuals in Group A more likely to be laid off than individuals in Group B, or were individuals in Group A more likely to be laid off than individuals in Group B, taking tenure into account)?
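To make the magnitude question above concrete, the sketch below shows what a finding such as "Group A is 80% more likely to be laid off than Group B" means as a relative-risk calculation. The layoff rates are hypothetical.

```python
# Hypothetical layoff rates for each group
rate_a = 0.18   # 18% of Group A laid off
rate_b = 0.10   # 10% of Group B laid off

# Relative risk: the ratio of the two rates
relative_risk = rate_a / rate_b

# "X% more likely" is the relative risk expressed as an excess percentage
percent_more_likely = (relative_risk - 1) * 100
print(f"Group A is {percent_more_likely:.0f}% more likely to be laid off")
```

A sanity check on magnitude works in reverse: if an expert reports that one group is 80% more likely to experience an outcome, the two underlying rates should stand in roughly a 1.8-to-1 ratio, which attorneys can verify against the tables.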

Key Takeaways

Statistical analyses can be complicated. The general concepts behind any statistical analysis, however, can be understood by almost anyone: what research question is being asked, whether the data is consistent with other known sources, whether the calculations are consistent within the analysis, whether the conclusion makes sense, and whether it is relevant to the specifics of the case.

In short, attorneys do not need a background in statistics to effectively vet large portions of a statistical analysis. By taking an active role in this process, attorneys can strengthen their case by reinforcing the credibility and relevance of expert testimony — or exposing its weaknesses.


The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

[1] See Dorothy Atkins, 9th Circ. Doubts Trial Judge Properly Nixed $4.7B NFL Verdict, Law360 (March 9, 2026), https://www.law360.com/articles/2450485.

[2] In re: NFL "Sunday Ticket" Antitrust Litig., No. ML 15-02668 PSG (SKX), 2024 WL 3628118 (C.D. Cal. Aug. 1, 2024).

[3] Flanks v. City of New Orleans, No. CV 23-6897, 2025 WL 3761621 (E.D. La. Dec. 30, 2025).

[4] Moneyball (Columbia Pictures 2011).

[5] U.S. Census Bureau, "QuickFacts," Washington, DC: U.S. Department of Commerce (2026), https://www.census.gov/quickfacts/.

[6] AI could potentially assist with such an assessment.
