🎲Mathematical Probability Theory Unit 11 – Stochastic Processes
Stochastic processes are mathematical models that describe random systems evolving over time or space. They're essential in fields like finance, physics, and biology, helping us understand and predict complex phenomena with inherent uncertainty.
This unit covers key concepts like state spaces, sample paths, and stationarity. We'll explore various types of stochastic processes, including Markov chains, Poisson processes, and Brownian motion, and their applications in real-world scenarios.
Stochastic process is a collection of random variables indexed by time or space representing the evolution of a random system
State space is the set of all possible values that a stochastic process can take at any given time or position
Sample path (or realization) refers to a single possible outcome or trajectory of a stochastic process over time or space
Stationarity implies that the statistical properties of a stochastic process do not change over time (time-invariant)
Strict stationarity requires the joint probability distribution to be invariant under time shifts
Weak stationarity (or covariance stationarity) only requires the mean and covariance to be time-invariant
Ergodicity is a property where the statistical properties of a stochastic process can be inferred from a single, sufficiently long realization
Martingale is a stochastic process whose expected value at any future time, given the current state, is equal to its current value (a random-walk example is sketched after this list)
Submartingales have expected future values greater than or equal to the current value
Supermartingales have expected future values less than or equal to the current value
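As a concrete illustration of the martingale property defined above, the sketch below simulates a symmetric random walk (a standard example of a martingale) and checks empirically that the expected next value, given the current value, equals the current value; the step distribution, sample size, and checkpoints are illustrative choices, not part of the definition.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Symmetric random walk: S_n = X_1 + ... + X_n with P(X_i = +1) = P(X_i = -1) = 1/2.
# Since E[X_{n+1}] = 0, E[S_{n+1} | S_n] = S_n, so the walk is a martingale.
n_paths, n_steps = 100_000, 40
steps = rng.choice([-1, 1], size=(n_paths, n_steps))
paths = np.cumsum(steps, axis=1)

# Empirical check at one time point: average S_{n+1} over paths sharing the same S_n.
n = 26                                   # paths[:, n - 1] holds S_n
current, nxt = paths[:, n - 1], paths[:, n]
for s in (-4, 0, 4):                     # S_26 is even, so these values occur
    mask = current == s
    print(f"E[S_{n+1} | S_{n} = {s:+d}] ≈ {nxt[mask].mean():+.3f}  (martingale predicts {s:+d})")
```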
Types of Stochastic Processes
Discrete-time processes have random variables indexed by discrete time steps (integers), while continuous-time processes are indexed by a continuous time parameter (real numbers)
Markov processes are memoryless stochastic processes where the future state depends only on the current state, not on the past states
Markov chains are discrete-time Markov processes with a countable state space
Continuous-time Markov chains have a countable state space but continuous time parameter
Gaussian processes are stochastic processes where any finite collection of random variables has a multivariate normal distribution
Brownian motion (or Wiener process) is a continuous-time Gaussian process with independent increments
Poisson processes model the occurrence of rare events in continuous time with a constant average rate
Renewal processes generalize Poisson processes by allowing the inter-arrival times between events to have any distribution (not necessarily exponential)
Birth-death processes are continuous-time Markov chains used to model population dynamics with birth and death rates
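To make one of these process types concrete, here is a minimal simulation sketch of a birth-death process built from exponential holding times (the standard construction of a continuous-time Markov chain); the linear, per-individual birth and death rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_birth_death(n0, birth_rate, death_rate, t_max):
    """Linear birth-death process: in state n, births occur at rate birth_rate * n
    and deaths at rate death_rate * n (illustrative choice of rates)."""
    t, n = 0.0, n0
    times, states = [t], [n]
    while t < t_max and n > 0:
        total_rate = (birth_rate + death_rate) * n
        t += rng.exponential(1.0 / total_rate)              # exponential holding time
        if rng.random() < birth_rate / (birth_rate + death_rate):
            n += 1                                          # birth event
        else:
            n -= 1                                          # death event
        times.append(t)
        states.append(n)
    return np.array(times), np.array(states)

times, states = simulate_birth_death(n0=10, birth_rate=1.0, death_rate=0.9, t_max=5.0)
print(f"population after {times[-1]:.2f} time units: {states[-1]}")
```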
Probability Spaces and Random Variables
Probability space (Ω,F,P) consists of a sample space Ω (set of all possible outcomes), a σ-algebra F of events (subsets of Ω), and a probability measure P assigning probabilities to events
Random variable X is a measurable function from the sample space Ω to the real numbers R, assigning a numerical value to each outcome
Discrete random variables take countably many values (such as integers), while continuous random variables take values in an uncountable set (such as an interval of real numbers)
Probability distribution of a random variable X is a function that assigns probabilities to the possible values or ranges of values that X can take
Probability mass function (PMF) for discrete random variables: P(X=x)
Probability density function (PDF) for continuous random variables: f_X(x) such that P(a ≤ X ≤ b) = ∫_a^b f_X(x) dx
Expected value (or mean) of a random variable X is the average value it takes, denoted by E[X]
For discrete X: E[X] = ∑_x x P(X = x)
For continuous X: E[X] = ∫_{−∞}^{∞} x f_X(x) dx
Variance of a random variable X measures its spread around the mean, denoted by Var(X) = E[(X − E[X])^2]
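A quick numerical check of these definitions, using a fair six-sided die as the discrete random variable (the die and the Monte Carlo sample size are just illustrative choices):

```python
import numpy as np

# Fair six-sided die: X takes values 1..6, each with probability 1/6.
values = np.arange(1, 7)
pmf = np.full(6, 1 / 6)

mean = np.sum(values * pmf)                     # E[X] = sum_x x * P(X = x)
variance = np.sum((values - mean) ** 2 * pmf)   # Var(X) = E[(X - E[X])^2]
print(mean, variance)                           # 3.5, 2.916...

# Monte Carlo cross-check: sample mean and variance approach the exact values.
rng = np.random.default_rng(seed=2)
samples = rng.choice(values, size=100_000, p=pmf)
print(samples.mean(), samples.var())
```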
Markov Chains
Markov chain is a discrete-time stochastic process with the Markov property: the future state depends only on the current state, not on the past states
State space S is the set of possible values the Markov chain can take at each time step (countable)
Transition probability p_{ij} is the probability of moving from state i to state j in one time step: p_{ij} = P(X_{n+1} = j | X_n = i)
Transition probability matrix P = (p_{ij}) contains all the transition probabilities, with rows summing to 1
n-step transition probability p_{ij}^{(n)} is the probability of moving from state i to state j in n time steps, obtained by taking the n-th power of the transition probability matrix: P^n = (p_{ij}^{(n)})
Stationary distribution π = (π_1, π_2, …) is a probability distribution over the states that remains unchanged under the transition probabilities: πP = π
An irreducible Markov chain (all states communicate) with a finite state space has a unique stationary distribution; if it is also aperiodic, the chain converges to that distribution from any initial state
Absorbing Markov chains have one or more absorbing states that, once entered, cannot be left
Fundamental matrix N = (I − Q)^{−1} gives the expected number of visits to each transient state before absorption, where Q is the submatrix of transient-to-transient transition probabilities
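The sketch below puts several of these quantities together for a small illustrative chain: n-step probabilities via matrix powers, the stationary distribution as the left eigenvector of P with eigenvalue 1, and the fundamental matrix of a simple absorbing (gambler's ruin) chain. The specific transition matrices are assumptions chosen for the example.

```python
import numpy as np

# Illustrative 2-state chain (e.g. sunny/rainy); rows of P sum to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# n-step transition probabilities: P^n.
P10 = np.linalg.matrix_power(P, 10)

# Stationary distribution: solve pi P = pi, i.e. the left eigenvector for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi /= pi.sum()
print("stationary distribution:", pi)      # ~[0.833, 0.167]
print("rows of P^10 approach pi:\n", P10)

# Absorbing chain: gambler's ruin on {0, 1, 2, 3} with absorbing states 0 and 3.
# Q holds transitions among the transient states {1, 2}.
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])
N = np.linalg.inv(np.eye(2) - Q)            # fundamental matrix (I - Q)^{-1}
print("expected visits to transient states before absorption:\n", N)
```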
Poisson Processes
Poisson process is a continuous-time stochastic process that models the occurrence of rare events with a constant average rate λ>0
Inter-arrival times between events are independent and exponentially distributed with mean 1/λ
Number of events in any interval of length t follows a Poisson distribution with mean λt: P(N(t) = k) = (λt)^k e^{−λt} / k!
Superposition of independent Poisson processes with rates λ_1, λ_2, …, λ_n is also a Poisson process with rate λ = ∑_{i=1}^{n} λ_i
Thinning (or splitting) a Poisson process with rate λ into two independent Poisson processes with rates pλ and (1 − p)λ, where 0 < p < 1, is done by assigning each event to the first process with probability p and to the second with probability 1 − p (see the simulation sketch after this list)
Non-homogeneous Poisson process has a time-varying rate function λ(t), with the expected number of events in an interval [a, b] given by ∫_a^b λ(t) dt
Compound Poisson process associates a random variable (mark) with each event in a Poisson process, representing the event's magnitude or cost
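A minimal simulation sketch tying several of these facts together: event times built from exponential inter-arrival times, a check that counts per unit interval behave like Poisson(λ), and a thinning of the stream into two independent streams. The rate λ = 2, the horizon, and the thinning probability p = 0.3 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
lam, t_max, p = 2.0, 1000.0, 0.3            # rate, horizon, thinning prob (illustrative)

# Event times: cumulative sums of exponential (mean 1/lam) inter-arrival times.
inter_arrivals = rng.exponential(1.0 / lam, size=int(3 * lam * t_max))
event_times = np.cumsum(inter_arrivals)
event_times = event_times[event_times <= t_max]

# Counts in unit-length intervals should be approximately Poisson(lam).
counts, _ = np.histogram(event_times, bins=np.arange(0, t_max + 1))
print("mean count per unit interval:", counts.mean(), "(theory:", lam, ")")
print("variance of counts:", counts.var(), "(Poisson => variance = mean)")

# Thinning: each event joins stream A with probability p, otherwise stream B.
assign = rng.random(event_times.size) < p
stream_a, stream_b = event_times[assign], event_times[~assign]
print("thinned rates:", stream_a.size / t_max, stream_b.size / t_max,
      "(theory:", p * lam, (1 - p) * lam, ")")
```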
Brownian Motion
Brownian motion (or Wiener process) is a continuous-time stochastic process {B(t),t≥0} with the following properties:
B(0)=0 (starts at the origin)
Independent increments: for any t_1 < t_2 ≤ t_3 < t_4, B(t_4) − B(t_3) is independent of B(t_2) − B(t_1)
Stationary increments: for any s<t, B(t)−B(s) has a normal distribution with mean 0 and variance t−s
Sample paths are continuous almost surely (with probability 1)
Standard Brownian motion has unit variance per unit time, while a general Brownian motion can have any constant variance σ^2 per unit time
Brownian bridge is a Brownian motion conditioned to start and end at specified values, often used to model random processes with fixed endpoints
Geometric Brownian motion is a stochastic process {S(t), t ≥ 0} where the logarithm of S(t) follows a Brownian motion with drift: d log S(t) = μ dt + σ dB(t) (see the simulation sketch after this list)
Used to model stock prices in the Black-Scholes option pricing model
Fractional Brownian motion is a generalization of Brownian motion with correlated increments, characterized by the Hurst parameter H∈(0,1)
For H=1/2, it reduces to standard Brownian motion
For H>1/2, increments are positively correlated (persistent)
For H<1/2, increments are negatively correlated (anti-persistent)
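The following sketch simulates a standard Brownian motion path by summing independent normal increments with variance Δt, then builds a geometric Brownian motion path using the drift convention given above (d log S(t) = μ dt + σ dB(t)); the time horizon, drift, volatility, and initial value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=4)

T, n_steps = 1.0, 1000
dt = T / n_steps
t = np.linspace(dt, T, n_steps)

# Standard Brownian motion: independent N(0, dt) increments, with B(0) = 0.
increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n_steps)
B = np.cumsum(increments)

# Geometric Brownian motion as defined above: d log S(t) = mu dt + sigma dB(t),
# so S(t) = S0 * exp(mu * t + sigma * B(t)).
S0, mu, sigma = 100.0, 0.05, 0.2            # illustrative parameters
S = S0 * np.exp(mu * t + sigma * B)

print("B(T) =", B[-1], "(distributed as N(0, T))")
print("S(T) =", S[-1])
```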