ncaa_eval.model.logistic_regression module

Minimal logistic regression model — test fixture for the Model contract.

This is NOT a production model. It exists solely to demonstrate and test the stateless Model interface in ~30 lines of logic.

class ncaa_eval.model.logistic_regression.LogisticRegressionConfig(*, model_name: Literal['logistic_regression'] = 'logistic_regression', calibration_method: Literal['isotonic', 'sigmoid'] | None = None, C: float = 1.0, max_iter: int = 200)[source]

Bases: ModelConfig

Hyperparameters for the logistic regression test fixture.

C: float
max_iter: int
model_config = {}

Configuration for the model; should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_name: Literal['logistic_regression']
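
The real class is a Pydantic model (it subclasses ModelConfig), but its fields and defaults can be sketched with a stdlib dataclass. `LogisticRegressionConfigSketch` below is a hypothetical stand-in for illustration only, not part of the package:

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Stand-in mirroring the signature above; the real class gets
# validation from Pydantic rather than dataclasses.
@dataclass(frozen=True)
class LogisticRegressionConfigSketch:
    model_name: Literal["logistic_regression"] = "logistic_regression"
    calibration_method: Optional[Literal["isotonic", "sigmoid"]] = None
    C: float = 1.0
    max_iter: int = 200

# Override one hyperparameter; the rest keep their defaults.
cfg = LogisticRegressionConfigSketch(C=0.5)
```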
class ncaa_eval.model.logistic_regression.LogisticRegressionModel(config: LogisticRegressionConfig | None = None, *, batch_rating_types: tuple[Literal['srs', 'ridge', 'colley'], ...] = ('srs',), graph_features_enabled: bool = False, ordinal_composite: Literal['simple_average', 'weighted', 'pca'] | None = None)[source]

Bases: Model

Thin wrapper around scikit-learn's LogisticRegression.

feature_names_: list[str]
fit(X: DataFrame, y: Series) None[source]

Train the model on feature matrix X and labels y.
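
Internally this delegates to scikit-learn, but the mechanics of logistic-regression training can be sketched in plain Python as batch gradient descent on a single feature. All names and data below are illustrative, not the package's actual implementation:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=500):
    """Single-feature logistic regression via batch gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # gradient of log-loss wrt the score
            grad_w += err * x
            grad_b += err
        w -= lr * grad_w / len(xs)
        b -= lr * grad_b / len(xs)
    return w, b

# Toy data: label 1 whenever the (single) differential feature is positive.
xs = [-2.0, -1.0, 1.0, 2.0]
ys = [0, 0, 1, 1]
w, b = fit_logistic(xs, ys)
```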

get_config() LogisticRegressionConfig[source]

Return the Pydantic-validated configuration.

get_feature_importances() list[tuple[str, float]] | None[source]

Return (feature name, absolute coefficient) pairs as feature importances.
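
The computation amounts to pairing each feature name with the absolute value of its learned coefficient. The names and coefficients below are made up for illustration:

```python
# Hypothetical fitted state: feature names and their coefficients.
feature_names_ = ["seed_diff", "srs_diff", "win_pct_diff"]
coefficients = [0.8, -1.2, 0.3]

importances = [(name, abs(c)) for name, c in zip(feature_names_, coefficients)]
# Sorting largest-first is just for readability here; the docstring
# does not specify an ordering.
importances.sort(key=lambda pair: pair[1], reverse=True)
```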

classmethod load(path: Path) Self[source]

Load a previously saved model from path.

predict_proba(X: DataFrame) Series[source]

Return P(team_a wins) in [0, 1] for each row of X.
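
Per row, the prediction is a linear score pushed through a sigmoid, which is what keeps every probability strictly inside (0, 1). The coefficients and feature row below are hypothetical; the real method delegates to the fitted scikit-learn classifier and returns a Series aligned with X:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fitted parameters and one feature row
# (e.g. team_a-minus-team_b differentials).
coef = [0.8, -0.3]
intercept = 0.1
row = [1.5, 2.0]

score = intercept + sum(w * x for w, x in zip(coef, row))
p_team_a_wins = sigmoid(score)  # always strictly inside (0, 1)
```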

save(path: Path) None[source]

Persist the trained classifier, config, and feature config to path.
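
save() and load() form a round trip. The on-disk format is not documented here, so the sketch below uses pickle and a made-up state dict purely to illustrate the persist-then-restore contract, not the actual serialization:

```python
import os
import pickle
import tempfile

# Hypothetical persisted state: the docstring says save() stores the
# trained classifier, config, and feature config; plain values stand
# in for those objects here.
state = {
    "config": {"model_name": "logistic_regression", "C": 1.0, "max_iter": 200},
    "coef_": [0.8, -0.3],
    "feature_names_": ["seed_diff", "srs_diff"],
}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "model.pkl")
    with open(path, "wb") as f:
        pickle.dump(state, f)         # analogue of save(path)
    with open(path, "rb") as f:
        restored = pickle.load(f)     # analogue of load(path)
```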