Chapter 30 (Intermediate, ~50 min read)

Defensive Rating and Team Defense

Measuring team defensive performance using Defensive Rating and understanding the components of team defense.

Defensive Rating Fundamentals

Defensive Rating (DRtg) measures points allowed per 100 possessions, the fundamental gauge of team defensive effectiveness. By expressing defense in per-possession terms, DRtg enables fair comparison across teams playing at different paces: a team allowing 105 points per game at a slow pace can be worse defensively than one allowing 108 at a fast pace, if the faster team gives up fewer points per possession.
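A quick worked example, using made-up per-game numbers, shows how the pace adjustment can flip the comparison:

```python
def drtg(pts_allowed_pg, poss_per_game):
    """Points allowed per 100 possessions."""
    return round(pts_allowed_pg / poss_per_game * 100, 1)

# Hypothetical teams: fewer points allowed per game does not mean better defense.
slow_team = drtg(105, 92)    # slow pace: fewer possessions to defend
fast_team = drtg(108, 100)   # fast pace: more possessions

print(slow_team)  # 114.1
print(fast_team)  # 108.0 -- the "worse" per-game defense is better per possession
```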

The league-average Defensive Rating typically falls between 108 and 112 in the modern NBA, varying by season based on rule changes, playing styles, and talent distribution. Elite defenses achieve ratings in the low 100s, while poor defenses may exceed 115. The best historical defenses have recorded ratings below 100, an exceptional achievement.

Calculating Defensive Rating

Team Defensive Rating calculation is straightforward: Points Allowed divided by Possessions, multiplied by 100. The complexity lies in accurate possession counting and appropriate contextual adjustments. Because possessions are not an official statistic, they are estimated from box-score totals; the 0.44 coefficient below approximates the fraction of free-throw attempts that end a possession.

def calculate_team_defensive_rating(team_stats):
    """
    Calculate team Defensive Rating
    """
    # Estimate possessions
    possessions = (team_stats['OPP_FGA'] +
                   0.44 * team_stats['OPP_FTA'] -
                   team_stats['OPP_OREB'] +
                   team_stats['OPP_TOV'])

    # Defensive Rating
    drtg = (team_stats['OPP_PTS'] / possessions) * 100

    return round(drtg, 1)

def analyze_team_defense(team_stats, league_stats):
    """Compare team defense to league average"""
    team_drtg = calculate_team_defensive_rating(team_stats)
    league_drtg = calculate_team_defensive_rating(league_stats)

    return {
        'team_drtg': team_drtg,
        'league_drtg': league_drtg,
        'relative_drtg': team_drtg - league_drtg,
        'defensive_rank': 'Elite' if team_drtg < league_drtg - 4 else
                          'Good' if team_drtg < league_drtg - 1 else
                          'Average' if team_drtg < league_drtg + 1 else
                          'Below Average'
    }

Components of Team Defense

Oliver's Four Factors framework applies to defense as well as offense. Defensive performance depends on: preventing efficient opponent shooting (opponent eFG%), forcing turnovers (opponent TOV%), limiting offensive rebounds (opponent ORB%), and preventing free throws (opponent FTR). These four factors explain the vast majority of variation in Defensive Rating across teams.

Opponent effective field goal percentage typically matters most. Teams that force difficult, contested shots with low conversion rates tend to be the best defenses. This encompasses both shot selection (limiting rim attempts) and shot contest (making shooters uncomfortable).
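The defensive Four Factors can be computed directly from opponent box-score totals. A minimal sketch, with hypothetical key names; note this uses Oliver's FT/FGA convention for free-throw rate, and that opponent ORB% needs the team's own defensive rebounds:

```python
def defensive_four_factors(stats):
    """Defensive Four Factors from season totals (hypothetical key names)."""
    fga, fta = stats['OPP_FGA'], stats['OPP_FTA']
    # Opponent shooting efficiency (3s weighted 1.5x)
    efg = (stats['OPP_FGM'] + 0.5 * stats['OPP_FG3M']) / fga
    # Turnovers per opponent possession
    tov_pct = stats['OPP_TOV'] / (fga + 0.44 * fta + stats['OPP_TOV'])
    # Share of available rebounds the opponent grabs on offense
    orb_pct = stats['OPP_OREB'] / (stats['OPP_OREB'] + stats['TEAM_DREB'])
    # Free throws made per field goal attempt (Oliver's FT/FGA convention)
    ft_rate = stats['OPP_FTM'] / fga
    return {'opp_efg_pct': round(efg, 3), 'opp_tov_pct': round(tov_pct, 3),
            'opp_orb_pct': round(orb_pct, 3), 'opp_ft_rate': round(ft_rate, 3)}

season = {'OPP_FGA': 7000, 'OPP_FGM': 3200, 'OPP_FG3M': 1000, 'OPP_FTA': 1800,
          'OPP_FTM': 1400, 'OPP_OREB': 850, 'TEAM_DREB': 2700, 'OPP_TOV': 1100}
print(defensive_four_factors(season))
```

Lower values are better for the defense on every factor except opponent TOV%, where higher means more turnovers forced.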

Shot Profile Defense

Modern defensive analysis emphasizes the shots defenses allow rather than just conversion rates. A defense that allows many three-pointers might still be effective if those threes are contested and poorly selected. A defense that limits rim attempts is preventing the highest-value shots regardless of whether the remaining shots happen to fall.

Shot profile defense examines the distribution of opponent shot attempts by location and context. Elite defenses tend to force mid-range shots (the lowest expected value location), limit rim attempts, and contest three-pointers effectively. This approach requires tracking data to fully implement but provides insight beyond simple Defensive Rating.
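A sketch of a shot-profile summary from a simple opponent shot log (hypothetical data and schema); the point is to measure what the defense allows, not just what goes in:

```python
from collections import Counter

# Hypothetical opponent shot log: (zone, made) pairs
shots = [('rim', 1), ('rim', 0), ('three', 0), ('mid_range', 0),
         ('three', 1), ('mid_range', 0), ('rim', 1), ('three', 0)]

def shot_profile(shots):
    """Share of opponent attempts by zone -- the distribution the defense allows."""
    zones = Counter(zone for zone, _ in shots)
    total = len(shots)
    return {zone: round(n / total, 3) for zone, n in zones.items()}

print(shot_profile(shots))  # {'rim': 0.375, 'three': 0.375, 'mid_range': 0.25}
```

A high mid-range share and a low rim share is the profile elite defenses aim for, regardless of how the shots happen to fall on a given night.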

Transition vs. Half-Court Defense

Teams face fundamentally different defensive challenges in transition versus half-court settings. Transition defense requires getting back, matching up, and preventing easy baskets before the offense can organize. Half-court defense involves executing schemes, communicating switches, and grinding through possessions.

Some teams excel at one but struggle with the other. A team might have elite half-court defense through strong rim protection and perimeter rotation but give up transition points due to offensive rebounding emphasis. Separating transition and half-court Defensive Rating reveals these patterns and suggests targeted improvement areas.
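Given possession-level data tagged by type (a hypothetical schema), splitting Defensive Rating is a simple grouped average:

```python
def split_drtg(possessions):
    """DRtg by possession type from per-possession points allowed."""
    out = {}
    for ptype in ('transition', 'half_court'):
        pts = [p['pts_allowed'] for p in possessions if p['type'] == ptype]
        out[ptype] = round(sum(pts) / len(pts) * 100, 1) if pts else None
    return out

# Hypothetical possession log
possessions = [
    {'type': 'transition', 'pts_allowed': 2},
    {'type': 'transition', 'pts_allowed': 2},
    {'type': 'transition', 'pts_allowed': 0},
    {'type': 'half_court', 'pts_allowed': 2},
    {'type': 'half_court', 'pts_allowed': 0},
    {'type': 'half_court', 'pts_allowed': 0},
    {'type': 'half_court', 'pts_allowed': 3},
]
print(split_drtg(possessions))  # {'transition': 133.3, 'half_court': 125.0}
```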

Defensive Consistency

Defensive Rating measures average performance, but variance matters too. A team that is average on most possessions but occasionally allows catastrophic breakdowns might have the same overall DRtg as one that is boringly consistent. Playoff basketball often punishes inconsistency more than the regular season, making defensive reliability valuable beyond what averages capture.

Tracking possession-by-possession outcomes enables analysis of defensive consistency. Teams with high variance in points allowed per possession might be more vulnerable to opponent hot streaks and less able to grind out low-scoring games when needed.
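A minimal illustration of why averages hide variance, using hypothetical per-possession data: two teams with identical DRtg can differ sharply in consistency.

```python
import statistics

def defensive_consistency(pts_per_poss):
    """Mean (expressed as DRtg) and spread of points allowed per possession."""
    return {
        'drtg': round(statistics.mean(pts_per_poss) * 100, 1),
        'poss_sd': round(statistics.pstdev(pts_per_poss), 3),
    }

# Hypothetical teams with the same average outcome but different variance
steady  = [1, 1, 1, 1, 1, 1]
erratic = [0, 0, 3, 0, 0, 3]
print(defensive_consistency(steady))   # {'drtg': 100.0, 'poss_sd': 0.0}
print(defensive_consistency(erratic))  # {'drtg': 100.0, 'poss_sd': 1.414}
```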

Implementation in R

# Rim protection analysis
library(tidyverse)

analyze_rim_protection <- function(rim_defense_data) {
  rim_defense_data %>%
    group_by(defender_id, defender_name) %>%
    summarise(
      # Volume
      rim_shots_defended = n(),
      contested_rim_shots = sum(contested),

      # Effectiveness
      rim_dfg_pct = mean(shot_made),
      expected_rim_fg = mean(expected_fg_pct),
      rim_fg_diff = rim_dfg_pct - expected_rim_fg,

      # Contest quality
      avg_contest_distance = mean(contest_distance),
      alter_rate = mean(shot_altered),
      block_rate = mean(blocked),

      .groups = "drop"
    ) %>%
    mutate(
      # Rim protection score
      rim_protection_score = -rim_fg_diff * 100 + block_rate * 10
    )
}

rim_defense <- read_csv("rim_defense_tracking.csv")
rim_protection <- analyze_rim_protection(rim_defense)

# Best rim protectors
best_rim_protectors <- rim_protection %>%
  filter(rim_shots_defended >= 100) %>%
  arrange(desc(rim_protection_score)) %>%
  select(defender_name, rim_shots_defended, rim_dfg_pct,
         rim_fg_diff, block_rate) %>%
  head(15)

print(best_rim_protectors)

# Rim deterrence analysis
library(tidyverse)

analyze_rim_deterrence <- function(possession_data) {
  possession_data %>%
    group_by(primary_defender_id, primary_defender_name) %>%
    summarise(
      defensive_poss = n(),

      # Shot distribution allowed
      rim_attempt_rate_allowed = mean(shot_zone == "rim"),
      three_attempt_rate_allowed = mean(shot_zone == "three"),
      mid_range_rate_allowed = mean(shot_zone == "mid_range"),

      # Deterrence (forcing tough shots)
      avg_shot_difficulty = mean(shot_difficulty),
      contested_shot_rate = mean(shot_contested),

      .groups = "drop"
    )
}

possessions <- read_csv("defensive_possessions.csv")
deterrence <- analyze_rim_deterrence(possessions)

# Most effective deterrents
best_deterrents <- deterrence %>%
  filter(defensive_poss >= 500) %>%
  arrange(rim_attempt_rate_allowed) %>%
  select(primary_defender_name, rim_attempt_rate_allowed,
         avg_shot_difficulty, contested_shot_rate) %>%
  head(15)

print(best_deterrents)

Chapter Summary

You've completed Chapter 30: Defensive Rating and Team Defense.