Actuarial Mathematics Unit 1 – Probability Theory & Distributions

Probability theory and distributions form the foundation of actuarial mathematics. These concepts quantify uncertainty and model random events, which is crucial for pricing insurance policies and assessing financial risks. Understanding probability basics, random variables, and common distributions is essential for actuaries.
Key applications in actuarial science include modeling claim frequencies and severities, calculating risk measures, estimating reserves, and assessing solvency. Actuaries use various probability distributions to analyze data, make predictions, and develop pricing models for insurance products and financial instruments.
Key Concepts and Terminology
Probability measures the likelihood of an event occurring and ranges from 0 (impossible) to 1 (certain)
Random variables assign numerical values to outcomes of a random experiment
Discrete random variables have countable outcomes (number of heads in 10 coin flips)
Continuous random variables have uncountable outcomes (time until next customer arrives)
Probability distributions describe the probabilities of different outcomes for a random variable
Expectation (mean) represents the average value of a random variable over many trials
Variance and standard deviation measure the dispersion or spread of a probability distribution
Independence implies that the occurrence of one event does not affect the probability of another event
Conditional probability measures the probability of an event given that another event has occurred
Bayes' theorem relates conditional probabilities and is used for updating probabilities based on new information
Probability Basics
Probability axioms define the properties that probability measures must satisfy
Non-negativity: $P(A) \geq 0$ for any event $A$
Normalization: $P(\Omega) = 1$, where $\Omega$ is the sample space (set of all possible outcomes)
Countable additivity: For mutually exclusive events $A_1, A_2, \ldots$, $P(\bigcup_{i=1}^{\infty} A_i) = \sum_{i=1}^{\infty} P(A_i)$
Probability of the complement: $P(A^c) = 1 - P(A)$, where $A^c$ is the complement of event $A$
Probability of the union of two events: $P(A \cup B) = P(A) + P(B) - P(A \cap B)$
Conditional probability: $P(A|B) = \frac{P(A \cap B)}{P(B)}$, where $P(B) > 0$
Independence: Events $A$ and $B$ are independent if $P(A \cap B) = P(A)P(B)$
Multiplication rule: $P(A \cap B) = P(A)P(B|A)$
Law of total probability: For a partition $\{B_1, B_2, \ldots, B_n\}$ of the sample space, $P(A) = \sum_{i=1}^{n} P(A|B_i)P(B_i)$
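The last two rules can be combined numerically. A minimal Python sketch, with made-up numbers for a hypothetical two-group risk classification, applies the law of total probability and then Bayes' theorem:

```python
# Law of total probability and Bayes' theorem with made-up numbers for a
# hypothetical two-group risk classification.

p_high = 0.2                 # P(B1): prior probability a policyholder is high-risk
p_low = 1 - p_high           # P(B2): low-risk

p_claim_given_high = 0.30    # P(A | B1), assumed for illustration
p_claim_given_low = 0.05     # P(A | B2), assumed for illustration

# Law of total probability: P(A) = sum_i P(A | B_i) P(B_i)
p_claim = p_claim_given_high * p_high + p_claim_given_low * p_low

# Bayes' theorem: P(B1 | A) = P(A | B1) P(B1) / P(A)
p_high_given_claim = p_claim_given_high * p_high / p_claim

print(round(p_claim, 6))             # overall claim probability: 0.1
print(round(p_high_given_claim, 6))  # posterior high-risk probability: 0.6
```

Observing a claim triples the probability that the policyholder is high-risk (from 0.2 to 0.6), which is exactly the kind of updating Bayes' theorem formalizes.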
Random Variables
Random variables map outcomes of a random experiment to real numbers
Probability mass function (PMF) for a discrete random variable $X$: $p_X(x) = P(X = x)$
Properties: $p_X(x) \geq 0$ and $\sum_x p_X(x) = 1$
Cumulative distribution function (CDF) for a random variable $X$: $F_X(x) = P(X \leq x)$
Properties: $F_X(x)$ is non-decreasing, right-continuous, $\lim_{x \to -\infty} F_X(x) = 0$, and $\lim_{x \to \infty} F_X(x) = 1$
Probability density function (PDF) for a continuous random variable $X$: $f_X(x) = F_X'(x)$
Properties: $f_X(x) \geq 0$ and $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$
Relationship between CDF and PDF: $F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt$
Quantile function (inverse CDF): $Q_X(p) = \inf\{x : F_X(x) \geq p\}$, where $0 < p < 1$
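These CDF/PDF/quantile relationships can be checked numerically. A short sketch using the exponential distribution, chosen because all three functions have closed forms (the rate value is arbitrary):

```python
import math

# Numerical check of the CDF/PDF/quantile relationships, using the exponential
# distribution with an arbitrary rate lam (all three have closed forms).
lam = 2.0

def pdf(x):
    return lam * math.exp(-lam * x) if x > 0 else 0.0

def cdf(x):
    return 1.0 - math.exp(-lam * x) if x > 0 else 0.0

def quantile(p):
    # Inverse CDF: solve F(x) = p for x, with 0 < p < 1.
    return -math.log(1.0 - p) / lam

# F_X(x) should equal the integral of f_X from -inf to x (crude Riemann sum).
x, dx = 1.3, 1e-4
integral = sum(pdf(i * dx) * dx for i in range(int(x / dx)))
print(abs(integral - cdf(x)) < 1e-3)     # True: the integral matches the CDF

# The quantile function inverts the CDF.
p = 0.75
print(abs(cdf(quantile(p)) - p) < 1e-9)  # True
```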
Probability Distributions
Bernoulli distribution: Models a single trial with two possible outcomes (success or failure)
PMF: $p_X(x) = p^x(1-p)^{1-x}$, where $x \in \{0, 1\}$ and $0 < p < 1$
Binomial distribution: Models the number of successes in a fixed number of independent Bernoulli trials
PMF: $p_X(x) = \binom{n}{x}p^x(1-p)^{n-x}$, where $x \in \{0, 1, \ldots, n\}$, $0 < p < 1$, and $\binom{n}{x} = \frac{n!}{x!(n-x)!}$
Poisson distribution: Models the number of events occurring in a fixed interval of time or space
PMF: $p_X(x) = \frac{e^{-\lambda}\lambda^x}{x!}$, where $x \in \{0, 1, 2, \ldots\}$ and $\lambda > 0$
Exponential distribution: Models the time between events in a Poisson process
PDF: $f_X(x) = \lambda e^{-\lambda x}$, where $x > 0$ and $\lambda > 0$
Normal (Gaussian) distribution: Models many natural phenomena and is characterized by its bell-shaped curve
PDF: $f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$, where $x \in \mathbb{R}$, $\mu \in \mathbb{R}$, and $\sigma > 0$
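As a worked illustration of two of these PMFs, the sketch below compares the binomial PMF with its Poisson approximation (a standard limit when $n$ is large and $p$ is small, with $\lambda = np$). The portfolio numbers are invented:

```python
import math

# Binomial PMF vs. its Poisson approximation (lambda = n*p), with invented
# portfolio numbers: 1000 independent policies, each with a 0.2% claim chance.
def binom_pmf(x, n, p):
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, lam):
    return math.exp(-lam) * lam**x / math.factorial(x)

n, p = 1000, 0.002
lam = n * p   # 2 expected claims

for x in range(5):
    print(x, round(binom_pmf(x, n, p), 5), round(poisson_pmf(x, lam), 5))

# Each PMF sums to 1 over its support (the binomial support is finite).
print(abs(sum(binom_pmf(x, n, p) for x in range(n + 1)) - 1) < 1e-9)  # True
```

The two columns of probabilities agree to three or four decimal places, which is why the Poisson distribution is the workhorse model for claim counts over large portfolios.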
Common Probability Distributions
Uniform distribution: Models a random variable with equally likely outcomes over a specified interval
Discrete uniform: $p_X(x) = \frac{1}{n}$, where $x \in \{1, 2, \ldots, n\}$
Continuous uniform: $f_X(x) = \frac{1}{b-a}$, where $x \in [a, b]$
Geometric distribution: Models the number of trials until the first success in a sequence of independent Bernoulli trials
PMF: $p_X(x) = (1-p)^{x-1}p$, where $x \in \{1, 2, \ldots\}$ and $0 < p < 1$
Negative binomial distribution: Models the number of trials until a specified number of successes occur in a sequence of independent Bernoulli trials
PMF: $p_X(x) = \binom{x-1}{r-1}p^r(1-p)^{x-r}$, where $x \in \{r, r+1, \ldots\}$, $0 < p < 1$, and $r \in \{1, 2, \ldots\}$
Gamma distribution: Generalizes the exponential distribution and models waiting times and lifetimes
PDF: $f_X(x) = \frac{1}{\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-x/\beta}$, where $x > 0$, $\alpha > 0$, and $\beta > 0$
Beta distribution: Models probabilities, proportions, and percentages
PDF: $f_X(x) = \frac{1}{B(\alpha, \beta)}x^{\alpha-1}(1-x)^{\beta-1}$, where $x \in (0, 1)$, $\alpha > 0$, and $\beta > 0$
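A quick numerical sanity check of the geometric PMF: summing the formula over a truncated support should give total probability 1 and mean $1/p$, both standard facts about the distribution. The value of $p$ is arbitrary:

```python
# Sanity check of the geometric PMF: over a truncated support the probabilities
# sum to ~1 and the mean comes out to 1/p. The value of p is arbitrary.
p = 0.3
support = range(1, 200)                 # the tail beyond x = 200 is negligible
pmf = [(1 - p) ** (x - 1) * p for x in support]

print(abs(sum(pmf) - 1) < 1e-9)         # True: total probability ~ 1
mean = sum(x * q for x, q in zip(support, pmf))
print(abs(mean - 1 / p) < 1e-6)         # True: E[X] = 1/p
```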
Expectation and Variance
Expectation (mean) of a discrete random variable $X$: $E[X] = \sum_x x\,p_X(x)$
Expectation (mean) of a continuous random variable $X$: $E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx$
Linearity of expectation: $E[aX + bY] = aE[X] + bE[Y]$ for constants $a$ and $b$ and random variables $X$ and $Y$
Variance of a random variable $X$: $\mathrm{Var}(X) = E[(X - E[X])^2] = E[X^2] - (E[X])^2$
Standard deviation: $\sigma_X = \sqrt{\mathrm{Var}(X)}$
Covariance between random variables $X$ and $Y$: $\mathrm{Cov}(X, Y) = E[(X - E[X])(Y - E[Y])]$
Properties: $\mathrm{Cov}(X, X) = \mathrm{Var}(X)$ and $\mathrm{Cov}(aX + b, cY + d) = ac\,\mathrm{Cov}(X, Y)$ for constants $a$, $b$, $c$, and $d$
Correlation coefficient between random variables $X$ and $Y$: $\rho_{X,Y} = \frac{\mathrm{Cov}(X, Y)}{\sigma_X \sigma_Y}$
Properties: $-1 \leq \rho_{X,Y} \leq 1$, with $\rho_{X,Y} = 1$ for a perfect positive linear relationship and $\rho_{X,Y} = -1$ for a perfect negative linear relationship
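These definitions can be evaluated directly on a small joint distribution. The sketch below computes mean, variance, covariance, and correlation from a hypothetical joint PMF over two binary variables:

```python
import math

# Mean, variance, covariance, and correlation computed straight from the
# definitions, for a hypothetical joint PMF over two binary variables.
joint = {  # p_{X,Y}(x, y); the probabilities are invented and sum to 1
    (0, 0): 0.4, (0, 1): 0.1,
    (1, 0): 0.2, (1, 1): 0.3,
}

EX = sum(x * pr for (x, y), pr in joint.items())
EY = sum(y * pr for (x, y), pr in joint.items())
VarX = sum((x - EX) ** 2 * pr for (x, y), pr in joint.items())
VarY = sum((y - EY) ** 2 * pr for (x, y), pr in joint.items())
Cov = sum((x - EX) * (y - EY) * pr for (x, y), pr in joint.items())
rho = Cov / (math.sqrt(VarX) * math.sqrt(VarY))

print(round(EX, 6), round(EY, 6))  # marginal means: 0.5 0.4
print(round(Cov, 6))               # E[XY] - EX*EY = 0.3 - 0.2 = 0.1
print(round(rho, 3))               # ~0.408
```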
Multivariate Distributions
Joint probability mass function (PMF) for discrete random variables $X$ and $Y$: $p_{X,Y}(x, y) = P(X = x, Y = y)$
Joint probability density function (PDF) for continuous random variables $X$ and $Y$: $f_{X,Y}(x, y)$
Properties: $f_{X,Y}(x, y) \geq 0$ and $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx\,dy = 1$
Marginal PMF: $p_X(x) = \sum_y p_{X,Y}(x, y)$ and $p_Y(y) = \sum_x p_{X,Y}(x, y)$
Marginal PDF: $f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy$ and $f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx$
Conditional PMF: $p_{Y|X}(y|x) = \frac{p_{X,Y}(x, y)}{p_X(x)}$, where $p_X(x) > 0$
Conditional PDF: $f_{Y|X}(y|x) = \frac{f_{X,Y}(x, y)}{f_X(x)}$, where $f_X(x) > 0$
Independence for discrete random variables: $p_{X,Y}(x, y) = p_X(x)p_Y(y)$ for all $x$ and $y$
Independence for continuous random variables: $f_{X,Y}(x, y) = f_X(x)f_Y(y)$ for all $x$ and $y$
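The marginal, conditional, and independence formulas can all be applied mechanically to a small joint PMF. Everything below follows directly from the definitions; the joint probabilities are invented:

```python
# Marginal and conditional PMFs plus an independence check, computed from a
# hypothetical joint PMF by the formulas above.
joint = {
    (0, 0): 0.4, (0, 1): 0.1,
    (1, 0): 0.2, (1, 1): 0.3,
}

xs = sorted({x for (x, y) in joint})
ys = sorted({y for (x, y) in joint})

p_X = {x: sum(joint[(x, y)] for y in ys) for x in xs}   # marginal of X
p_Y = {y: sum(joint[(x, y)] for x in xs) for y in ys}   # marginal of Y

# Conditional PMF: p_{Y|X}(y | x) = p_{X,Y}(x, y) / p_X(x)
p_Y_given_X1 = {y: joint[(1, y)] / p_X[1] for y in ys}
print(p_Y_given_X1)   # {0: 0.4, 1: 0.6}

# Independence would require p_{X,Y}(x, y) = p_X(x) p_Y(y) for every pair.
independent = all(
    abs(joint[(x, y)] - p_X[x] * p_Y[y]) < 1e-12 for x in xs for y in ys
)
print(independent)    # False: conditioning on X shifts the distribution of Y
```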
Applications in Actuarial Science
Pricing insurance policies using probability distributions to model claim frequencies and severities
Poisson distribution for modeling claim counts
Exponential, gamma, and Pareto distributions for modeling claim sizes
Calculating risk measures such as Value-at-Risk (VaR) and Tail Value-at-Risk (TVaR) using quantiles of loss distributions
Estimating reserves for outstanding claims using stochastic models and simulation techniques
Assessing the solvency and capital requirements of insurance companies using probabilistic approaches
Solvency II and Swiss Solvency Test frameworks
Pricing and hedging financial derivatives using stochastic calculus and martingale methods
Black-Scholes model for pricing European options
Binomial and trinomial tree models for pricing American options
Modeling dependence between risks using copulas and multivariate distributions
Gaussian copula for modeling linear dependence
t-copula and Archimedean copulas (Clayton, Gumbel, Frank) for modeling non-linear dependence
Applying credibility theory to blend individual and collective risk information for experience rating
Bühlmann-Straub model and empirical Bayes estimators
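Several of these applications come together in a basic aggregate-loss simulation: Poisson claim counts, gamma claim severities, and empirical VaR/TVaR read off the simulated loss distribution. All parameter values below are invented, and the Poisson sampler uses Knuth's classic multiplication method:

```python
import math
import random
import statistics

# Monte Carlo aggregate-loss sketch: Poisson claim counts, gamma claim sizes,
# then empirical 95% VaR and TVaR. All parameter values are invented.
random.seed(42)

lam = 3.0                  # expected claims per period
alpha, beta = 2.0, 500.0   # gamma shape and scale for claim severity

def poisson_draw(lam):
    # Knuth's multiplication method for sampling a Poisson count.
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= L:
            return k
        k += 1

losses = sorted(
    sum(random.gammavariate(alpha, beta) for _ in range(poisson_draw(lam)))
    for _ in range(100_000)
)

cut = int(0.95 * len(losses))
var_95 = losses[cut]                       # empirical 95% quantile of losses
tvar_95 = statistics.fmean(losses[cut:])   # average loss beyond the VaR

print(round(statistics.fmean(losses)))     # near lam * alpha * beta = 3000
print(var_95 < tvar_95)                    # True: TVaR exceeds VaR
```

The sample mean lands near the theoretical compound-Poisson mean $\lambda \alpha \beta = 3000$, and TVaR, being an average over the tail beyond the quantile, always sits above VaR.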
© 2025 Fiveable Inc. All rights reserved. AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.