Chapter 31

Individual Defensive Metrics

Survey of metrics that attempt to measure individual defensive contribution, from box scores to tracking data.

The Individual Defense Puzzle

Measuring individual defense requires solving the attribution problem: when a team defense succeeds, which players deserve credit? The team's Defensive Rating reflects collective performance, but roster decisions, player evaluation, and contract negotiations need individual assessments. Various approaches attempt to extract individual signal from team outcomes.

Defensive Box Score Metrics

Traditional approaches combine available defensive statistics into composite measures. Defensive Win Shares allocates a share of the team's defensive performance to individuals based on minutes played and individual defensive statistics. Defensive Box Plus-Minus (DBPM) regresses historical plus-minus data on defensive box score components to predict individual impact.

These metrics share common limitations: they rely heavily on steals, blocks, and defensive rebounds—the only defensive statistics the box score records—and must infer unmeasured contributions from limited information. Players who excel at unmeasured aspects of defense (positioning, communication, contesting shots without blocking them) receive inadequate credit.

def calculate_basic_defensive_metrics(player_stats, team_stats):
    """Calculate basic defensive rate metrics from box score data"""

    # Share of team floor time: team minutes / 5 = minutes per floor slot
    floor_share = player_stats['MIN'] / (team_stats['MIN'] / 5)

    # Steal percentage: steals per 100 team possessions while on court
    stl_pct = (player_stats['STL'] * 100) / (floor_share * team_stats['POSS'])

    # Block percentage: blocks per 100 opponent two-point attempts while on court
    blk_pct = (player_stats['BLK'] * 100) / (floor_share * team_stats['OPP_2PA'])

    # Defensive rebound percentage: share of available defensive rebounds grabbed
    drb_pct = (player_stats['DRB'] * 100) / (floor_share * team_stats['DRB_OPPS'])

    return {
        'STL_PCT': round(stl_pct, 1),
        'BLK_PCT': round(blk_pct, 1),
        'DRB_PCT': round(drb_pct, 1)
    }
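Downstream of these rates, a DBPM-style estimate is at heart a weighted combination of per-100-possession box score rates. The weights below are hypothetical illustrations of the calculation's shape, not the published DBPM coefficients, which come from the regression described above:

```python
def dbpm_style_estimate(rates, weights=None):
    """Combine defensive box score rates into a single impact estimate.

    `rates` holds per-100-possession rates (STL, BLK, DRB, PF).
    The default weights are hypothetical, chosen only to illustrate
    the calculation -- real DBPM weights come from regressing
    historical plus-minus on these components.
    """
    if weights is None:
        weights = {'STL': 1.4, 'BLK': 0.9, 'DRB': 0.2, 'PF': -0.3}
    return round(sum(weights[k] * rates[k] for k in weights), 2)

# Example: a high-steal, high-block big man
print(dbpm_style_estimate({'STL': 2.1, 'BLK': 3.0, 'DRB': 22.0, 'PF': 4.5}))
# → 8.69
```

Note how thoroughly the output is driven by the three measured statistics: any two players with identical steal, block, and rebound rates get identical estimates regardless of their actual defense.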

Individual Defensive Rating

Individual Defensive Rating attempts to estimate points allowed per 100 possessions when a player is on court, with adjustments for teammate and opponent quality. The calculation involves complex estimation of individual defensive contributions and credit allocation for team defense.

This metric provides some signal but carries substantial uncertainty. The adjustments require assumptions about credit allocation that may not hold. A weak defender hidden on poor offensive opponents might appear better than their actual contribution; a strong defender assigned difficult matchups might appear worse.
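The core of the calculation can be sketched in simplified form. This follows the general shape of Dean Oliver's individual Defensive Rating, which moves an individual component 20% of the way away from the team's rating; the stop percentage is taken as an input here, whereas estimating it from box score data is the genuinely complex part of the real metric:

```python
def individual_drtg(team_drtg, pts_per_scoring_poss, stop_pct):
    """Simplified individual Defensive Rating (Oliver-style sketch).

    `stop_pct` is the share of defended possessions ending without
    opponent points; `pts_per_scoring_poss` is opponent points per
    scoring possession. The individual component implied by the
    player's stop rate is blended 20/80 with the team's rating.
    """
    individual_component = 100 * pts_per_scoring_poss * (1 - stop_pct)
    return round(team_drtg + 0.2 * (individual_component - team_drtg), 1)

# A player whose stop rate implies better-than-team-average defense
print(individual_drtg(team_drtg=110.0, pts_per_scoring_poss=1.05, stop_pct=0.55))
```

The 80% anchor to the team rating is exactly the credit-allocation assumption discussed above: even a dominant individual defender's rating is pulled most of the way toward the team's collective result.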

Defensive Real Plus-Minus and RAPTOR Defense

Modern metrics like DRPM (Defensive Real Plus-Minus) and RAPTOR Defense combine plus-minus data with box score information to estimate individual defensive impact. These approaches use sophisticated statistical methods to isolate individual contributions from team contexts.

By observing how team defense changes when specific players enter or exit, these metrics can detect defensive value that box scores miss. A player with modest steals and blocks but consistently excellent team defense when on court will receive credit through the plus-minus component. However, sample size requirements are substantial, and estimates for low-minute players remain highly uncertain.
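The statistical core of this family of metrics is regularized adjusted plus-minus: regress stint-level opponent scoring on who was on the floor, with a ridge penalty shrinking noisy estimates toward zero. A minimal sketch with synthetic stint data (real implementations add possession weights, box-score-informed priors, and vastly more observations):

```python
import numpy as np

def defensive_rapm(on_court, pts_allowed_per_100, lam=1.0):
    """Ridge-regression defensive impact estimates.

    on_court: (stints x players) 0/1 matrix of who was defending.
    pts_allowed_per_100: opponent scoring in each stint.
    Returns per-player coefficients; negative = fewer points allowed
    while on court (better defense). `lam` is the ridge penalty.
    """
    X = np.asarray(on_court, dtype=float)
    y = np.asarray(pts_allowed_per_100, dtype=float)
    y = y - y.mean()  # center so coefficients read as above/below average
    n_players = X.shape[1]
    # Solve (X'X + lam*I) beta = X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(n_players), X.T @ y)

# Three players, three stints: player 0 appears only in low-scoring stints
X = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 0]]
y = [95, 115, 100]
beta = defensive_rapm(X, y)
print(beta)  # player 0's coefficient is the most negative (best defender)
```

With three stints the estimates are meaningless in practice, which is the sample-size caveat above in miniature: the ridge penalty keeps the solution stable, but low-minute players' coefficients are shrunk estimates of very noisy quantities.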

Tracking-Based Defensive Metrics

NBA tracking data enables direct measurement of individual defensive events. Defensive matchup data shows who guards whom and how those matchups produce points. Contest data measures how often players challenge shots and how those contests affect conversion rates.

These metrics provide granular insight previously impossible. We can now evaluate rim protectors on shots challenged rather than just blocks recorded. Perimeter defenders can be assessed on opponent shooting percentages when they're the primary defender. This represents meaningful progress toward individual defensive measurement.

However, tracking metrics still miss important contributions. Help defense deterrence, communication, and scheme execution remain difficult to capture. And the tracking data itself contains measurement error—"primary defender" assignments don't always match film review, and contest classifications may be imprecise.
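A common tracking-derived measure illustrates the approach: compare opponents' field goal percentage when a player is tagged as the primary defender against what those same shooters make overall, keeping the primary-defender caveat above in mind. A minimal sketch over hypothetical matchup rows (the field layout here is an assumption, not a real tracking feed format):

```python
from collections import defaultdict

def defended_fg_vs_expected(matchup_rows):
    """Per-defender defended FG% minus shooters' expected FG%.

    matchup_rows: iterable of (defender, made, expected_fg_pct), where
    `expected_fg_pct` is the shooter's typical conversion rate on
    similar shots. Negative diff = opponents shoot worse than expected
    against this defender.
    """
    totals = defaultdict(lambda: [0, 0, 0.0])  # shots, makes, expected makes
    for defender, made, expected in matchup_rows:
        t = totals[defender]
        t[0] += 1
        t[1] += made
        t[2] += expected
    return {
        d: round((makes / shots) - (exp / shots), 3)
        for d, (shots, makes, exp) in totals.items()
    }

rows = [
    ("A", 0, 0.36), ("A", 0, 0.40), ("A", 1, 0.38), ("A", 0, 0.36),
    ("B", 1, 0.34), ("B", 1, 0.36),
]
print(defended_fg_vs_expected(rows))
```

Controlling for expected conversion matters: a rim protector who faces only layups will allow a high raw defended FG% even while suppressing it far below expectation.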

Practical Application

Given these limitations, responsible individual defensive evaluation uses multiple approaches. Metrics provide starting points for investigation rather than definitive assessments. Film review remains essential for understanding what players actually do defensively. And appropriate uncertainty acknowledges that our best defensive measures are substantially less reliable than offensive ones.

Implementation in R

# Perimeter defense analysis
library(tidyverse)

analyze_perimeter_defense <- function(perimeter_data) {
  perimeter_data %>%
    group_by(defender_id, defender_name) %>%
    summarise(
      # Three-point defense
      threes_defended = sum(shot_type == "three"),
      three_dfg_pct = mean(shot_made[shot_type == "three"]),
      three_contest_pct = mean(contested[shot_type == "three"]),

      # Mid-range defense
      mid_defended = sum(shot_type == "mid_range"),
      mid_dfg_pct = mean(shot_made[shot_type == "mid_range"]),

      # Overall perimeter
      perimeter_shots_defended = n(),
      perimeter_dfg_pct = mean(shot_made),

      # Closeout quality
      avg_closeout_speed = mean(closeout_speed, na.rm = TRUE),
      avg_closeout_distance = mean(closeout_distance, na.rm = TRUE),

      .groups = "drop"
    )
}

perimeter_defense <- read_csv("perimeter_defense_tracking.csv")
perimeter_analysis <- analyze_perimeter_defense(perimeter_defense)

# Best perimeter defenders
best_perimeter <- perimeter_analysis %>%
  filter(perimeter_shots_defended >= 100) %>%
  arrange(perimeter_dfg_pct) %>%
  select(defender_name, perimeter_shots_defended, perimeter_dfg_pct,
         three_contest_pct) %>%
  head(15)

print(best_perimeter)

# Isolation defense analysis (tidyverse already loaded above)

analyze_iso_defense <- function(iso_data) {
  iso_data %>%
    group_by(defender_id, defender_name) %>%
    summarise(
      iso_possessions = n(),
      pts_allowed = sum(points),
      ppp_allowed = round(pts_allowed / iso_possessions, 2),

      # Outcomes
      turnover_forced_rate = mean(turnover_forced) * 100,
      shot_allowed_rate = mean(shot_attempted) * 100,
      foul_rate = mean(foul_committed) * 100,

      # When shots allowed
      fg_pct_allowed = mean(shot_made[shot_attempted == 1]),

      .groups = "drop"
    )
}

iso_defense <- read_csv("isolation_defense.csv")
iso_analysis <- analyze_iso_defense(iso_defense)

# Best isolation defenders
best_iso <- iso_analysis %>%
  filter(iso_possessions >= 50) %>%
  arrange(ppp_allowed) %>%
  select(defender_name, iso_possessions, ppp_allowed,
         turnover_forced_rate, fg_pct_allowed) %>%
  head(15)

print(best_iso)

Chapter Summary

You've completed Chapter 31: Individual Defensive Metrics.