SOCON Basketball Evaluation Jamey McDowell, Ian McConnell, Eli Stutzman


Page 1: Presentation to the Team

SOCON Basketball Evaluation
Jamey McDowell, Ian McConnell, Eli Stutzman

Page 2: Presentation to the Team

Objective

Using NCAA game data from the 2007-2012 tournament seasons, create a metric that predicts how teams will perform against one another

Identify key aspects of the game that have a significant impact on the outcome

Predict the outcomes of games with high accuracy using our metrics

Page 3: Presentation to the Team

Preliminary Steps

Look only at games between teams identified to be in comparable conferences: America East, Atlantic Sun, Big Sky, Big South, Big West, Northeast, Ohio Valley, Patriot, Southern, Southland, Western Athletic

Run the analysis on games up to and including January 31

This produces ratings on 26 metrics. Each rating is given on a per-100-possession scale, and the actual point margin of the game is adjusted accordingly (see the sketch below)
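
A minimal sketch of the per-100-possession scaling mentioned above, assuming a raw stat total and a possession count are already available; the possession-estimation formula the authors used is not shown in the slides.

    def per_100_possessions(stat_total, possessions):
        """Scale a raw total to a per-100-possession rate."""
        return 100.0 * stat_total / possessions

    # Example: 68 points scored over an estimated 62 possessions is an
    # offensive rating of roughly 109.7 points per 100 possessions.
    print(per_100_possessions(68, 62))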

Page 4: Presentation to the Team

Identifying Key Metrics

Using the difference in ratings between the teams involved, run a regression analysis to identify the metrics that most strongly affected the actual point margin of the game, and assign coefficient values to them

Identified 9 metrics: tm_efg_pct, op_efg_pct, op_tov_pct, tm_orb_pct, op_orb_pct, tm_ftr_pct, op_blk_pct, op_thpttnd_pct, totalopp_ft_pct

Multiplying each metric's coefficient by the difference in rating between the two teams involved in a game, and adding the products together, gives a “Ketchup” value (sketched below)
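
A minimal sketch of the “Ketchup” calculation described above. The coefficient values and the dictionary layout are assumptions for illustration; the slides do not give the fitted regression coefficients.

    # Placeholder coefficients for the 9 identified metrics (not the fitted values).
    COEFFICIENTS = {
        "tm_efg_pct": 1.0, "op_efg_pct": -1.0, "op_tov_pct": 0.5,
        "tm_orb_pct": 0.4, "op_orb_pct": -0.4, "tm_ftr_pct": 0.3,
        "op_blk_pct": -0.2, "op_thpttnd_pct": -0.2, "totalopp_ft_pct": 0.1,
    }

    def ketchup(team1_ratings, team2_ratings):
        """Sum of coefficient * (Team 1 rating - Team 2 rating) over the 9 metrics."""
        return sum(
            coef * (team1_ratings[m] - team2_ratings[m])
            for m, coef in COEFFICIENTS.items()
        )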

Page 5: Presentation to the Team

“Ketchup” and “Cocktail”

From the initial analysis, each team is given an SRSRating. When comparing teams, the difference between Team 1's and Team 2's SRSRating is the predicted margin of the game

Taking the “Ketchup” value and adding it to the difference in SRSRating gives our “Cocktail” value for the game, which is our adjusted predicted margin (see the sketch below)

“Cocktail” tailors the predicted margin to the individual teams playing, rather than looking at the first half of the season alone
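
A sketch of the “Cocktail” calculation under the same assumptions as the “Ketchup” sketch above; coefficient values and data layout are illustrative only.

    def cocktail(srs_1, srs_2, team1_ratings, team2_ratings, coefficients):
        """Adjusted predicted margin: SRSRating difference plus the 'Ketchup' adjustment."""
        ketchup = sum(
            coef * (team1_ratings[m] - team2_ratings[m])
            for m, coef in coefficients.items()
        )
        return (srs_1 - srs_2) + ketchup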

Page 6: Presentation to the Team

Logistic Regression
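
The slide itself appears to carry only a figure. As a hedged sketch, the following shows one way a logistic regression could map a game's “Cocktail” value to a win probability; the use of scikit-learn and the single-feature setup are assumptions, not details given in the slides.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # X: the Cocktail value for each historical game, from Team 1's perspective.
    # y: 1 if Team 1 won, 0 otherwise.  These values are illustrative only.
    X = np.array([[12.3], [-4.1], [7.8], [-15.0], [2.2], [20.5]])
    y = np.array([1, 0, 1, 0, 1, 1])

    model = LogisticRegression()
    model.fit(X, y)

    # Estimated probability that Team 1 wins a game with a Cocktail value of +6.
    print(model.predict_proba([[6.0]])[0, 1])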

Page 7: Presentation to the Team

Results

Over the 2013-2014 tournament seasons, we predicted the winner of games taking place after February 1st with 66.7% accuracy (1202 games); the spread buckets are sketched below

“Close” games (0-5 point predicted spread): 55.1% (314 games)
“Contested” games (5-10 point predicted spread): 56.9% (276 games)
“Normal” games (10-15 point predicted spread): 67.7% (189 games)
“Blowout” games (15+ point predicted spread): 81.6% (423 games)
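
A sketch of how games could be grouped into the spread buckets above and scored for accuracy; the data layout (predicted spread from Team 1's perspective plus a win flag) is an assumption.

    def bucket(predicted_spread):
        """Map an absolute predicted spread onto the bucket names used above."""
        s = abs(predicted_spread)
        if s < 5:
            return "Close"
        if s < 10:
            return "Contested"
        if s < 15:
            return "Normal"
        return "Blowout"

    def accuracy_by_bucket(games):
        """games: iterable of (predicted_spread, team1_won) pairs."""
        totals, correct = {}, {}
        for spread, team1_won in games:
            b = bucket(spread)
            totals[b] = totals.get(b, 0) + 1
            if (spread > 0) == team1_won:  # predicted winner is Team 1 when spread > 0
                correct[b] = correct.get(b, 0) + 1
        return {b: correct.get(b, 0) / totals[b] for b in totals}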

Page 8: Presentation to the Team

Furman Ratings

Metric            2013 (116 Teams)   2014 (117 Teams)
Tm_efg_pct        54                 15
Op_efg_pct        71                 47
Op_tov_pct        68                 110
Tm_orb_pct        63                 73
Op_orb_pct        42                 97
Tm_ftr_pct        67                 43
Op_blk_pct        25                 93
Op_thpttnd_pct    14                 62
Totalopp_ft_pct   22                 49

Page 9: Presentation to the Team

Predicting Furman Games

2013: 41.7% overall (records shown as correct-incorrect)
“Close”: 2-2
“Contested”: 1-5
“Normal”: 0-0
“Blowout”: 2-0

2014: 60% overall
“Close”: 1-4
“Contested”: 3-0
“Normal”: 1-0
“Blowout”: 1-0