Consider a linear equation:
\(Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon\)
the linear predictor (often called \(\eta\), but in the following slides simply called \(X\)) is the “\(\beta_0 + \beta_1X_1 + \beta_2X_2\)” part.
the job of the link function in GLMs is to transform (re-map) the linear predictor \(X\), which can take any value in \((-\infty, +\infty)\), onto the appropriate range of the response variable \(Y\) (e.g., times in \((0, +\infty)\), probabilities in \((0, 1)\)); strictly speaking, it is the inverse of the link that maps \(X\) onto the scale of \(Y\), while the link itself maps the mean of \(Y\) back to \(X\)
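For intuition, a minimal sketch (not from the slides; `eta` here plays the role of \(X\)) showing how the inverse of each link sends arbitrary values of the linear predictor into the range of \(Y\), using only base R functions:

```r
## The inverse of each link maps any value of the linear predictor
## (here called eta) into the range of Y.
eta <- c(-5, -1, 0, 1, 5)   # values anywhere in (-Inf, +Inf)

exp(eta)      # inverse of the log link:    results in (0, +Inf), e.g. times
plogis(eta)   # inverse of the logit link:  results in (0, 1),    e.g. probabilities
pnorm(eta)    # inverse of the probit link: results in (0, 1)
```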
link="identity"
link="log"
link="inverse"
link="logit"
link="probit"
link=mafc.probit(3)
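A rough illustration of how these link arguments are passed to `glm()` via the family (a minimal sketch with simulated data and made-up variable names; the last lines assume the psyphy package, which provides `mafc.probit`):

```r
## Sketch: specifying the link through the family argument of glm().
set.seed(1)
x  <- rnorm(100)
y  <- rbinom(100, size = 1, prob = plogis(-0.5 + 1.2 * x))    # binary outcome
rt <- rgamma(100, shape = 2, rate = 2 / exp(0.2 + 0.3 * x))   # positive "times"

glm(y  ~ x, family = binomial(link = "logit"))    # or link = "probit"
glm(rt ~ x, family = Gamma(link = "log"))         # or link = "inverse" / "identity"

## 3-alternative forced choice, assuming the psyphy package is installed:
# library(psyphy)
# glm(y ~ x, family = binomial(link = mafc.probit(3)))
```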
Logit: “thinking in terms of odds ratios makes sense”; each unit increase in a predictor multiplies the odds of the event by a constant factor, \(e^{\beta}\) (see the sketch below these bullets); e.g., chance of diagnosis (truly categorical); ???
Probit: “thinking about an underlying Gaussian distribution makes sense”; when a linear increase in a predictor reflects a linear shift in an underlying, normally distributed trait; e.g., chance of: detecting a signal (normally distributed noise in sensory processing), a diagnosis (when the condition is underlyingly dimensional), correctly answering a question / solving a math problem / passing an exam
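A minimal sketch (simulated data, made-up variable names) contrasting the two interpretations on the same binary outcome:

```r
## Logit vs. probit on the same simulated binary data.
set.seed(2)
x <- rnorm(200)
y <- rbinom(200, size = 1, prob = plogis(-0.3 + 0.8 * x))

fit_logit  <- glm(y ~ x, family = binomial(link = "logit"))
fit_probit <- glm(y ~ x, family = binomial(link = "probit"))

exp(coef(fit_logit))   # logit:  odds are multiplied by this factor per unit of x
coef(fit_probit)       # probit: shift of the latent normal variable (in SD units) per unit of x
```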