diff --git a/lectures/_config.yml b/lectures/_config.yml
index 1d5a36e6..59ada928 100644
--- a/lectures/_config.yml
+++ b/lectures/_config.yml
@@ -75,6 +75,9 @@ sphinx:
     colab_url        : https://colab.research.google.com
     thebe            : false  # Add a thebe button to pages (requires the repository to run on Binder)
     intersphinx_mapping:
+      intermediate:
+        - https://python.quantecon.org/
+        - null
       pyprog:
         - https://python-programming.quantecon.org/
         - null
diff --git a/lectures/lln_clt.md b/lectures/lln_clt.md
index 7e7676ce..291690e0 100644
--- a/lectures/lln_clt.md
+++ b/lectures/lln_clt.md
@@ -167,6 +167,7 @@ $$
 
 The next theorem is called Kolmogorov's strong law of large numbers.
 
+(iid-theorem)=
 ````{prf:theorem}
 If $X_1, \ldots, X_n$ are IID and $\mathbb E |X|$ is finite, then
 
diff --git a/lectures/time_series_with_matrices.md b/lectures/time_series_with_matrices.md
index c809de9c..1e88336d 100644
--- a/lectures/time_series_with_matrices.md
+++ b/lectures/time_series_with_matrices.md
@@ -3,14 +3,16 @@ jupytext:
   text_representation:
     extension: .md
     format_name: myst
+    format_version: 0.13
+    jupytext_version: 1.16.2
 kernelspec:
-  display_name: Python 3
+  display_name: Python 3 (ipykernel)
   language: python
   name: python3
 ---
 
 (time_series_with_matrices)=
-```{raw} html
+```{raw} jupyter
 <div id="qe-notebook-header" align="right" style="text-align:right;">
         <a href="https://quantecon.org/" title="quantecon.org">
                 <img style="width:250px;display:inline;" width="250px" src="https://assets.quantecon.org/img/qe-menubar-logo.svg" alt="QuantEcon">
         </a>
 </div>
 ```
@@ -26,13 +28,12 @@ This lecture uses matrices to solve some linear difference equations.
 
 As a running example, we’ll study a **second-order linear difference
 equation** that was the key technical tool in Paul Samuelson’s 1939
-article {cite}`Samuelson1939` that introduced the **multiplier-accelerator** model.
+article {cite}`Samuelson1939` that introduced the *multiplier-accelerator model*.
 
 This model became the workhorse that powered early econometric versions of
 Keynesian macroeconomic models in the United States.
 
-You can read about the details of that model in [this](https://python.quantecon.org/samuelson.html)
-QuantEcon lecture.
+You can read about the details of that model in {doc}`intermediate:samuelson`.
 
 (That lecture also describes some technicalities about second-order linear difference equations.)
@@ -44,7 +45,7 @@ a "forward-looking" linear difference equation.
 
 We will use the following imports:
 
-```{code-cell} ipython
+```{code-cell} ipython3
 import numpy as np
 import matplotlib.pyplot as plt
 from matplotlib import cm
@@ -64,20 +65,20 @@ y_{t} = \alpha_{0} + \alpha_{1} y_{t-1} + \alpha_{2} y_{t-2}
 ```
 
 where we assume that $y_0$ and $y_{-1}$ are given numbers
-that we take as **initial conditions**.
+that we take as *initial conditions*.
 
 In Samuelson's model, $y_t$ stood for **national income** or perhaps a different
 measure of aggregate activity called **gross domestic product** (GDP) at time $t$.
 
-Equation {eq}`tswm_1` is called a **second-order linear difference equation**.
+Equation {eq}`tswm_1` is called a *second-order linear difference equation*.
 
 It is called second order because it depends on two lags.
 
 But actually, it is a collection of $T$ simultaneous linear equations
 in the $T$ variables $y_1, y_2, \ldots, y_T$.
 
 ```{note}
 To be able to solve a second-order linear difference
-equation, we require two **boundary conditions** that can take the form
-either of two **initial conditions** or two **terminal conditions** or
+equation, we require two *boundary conditions*, which can take the form
+of two *initial conditions*, two *terminal conditions*, or
 possibly one of each.
 ```
@@ -131,42 +132,42 @@ The vector $y$ is a complete time path $\{y_t\}_{t=1}^T$.
 
 Let’s put Python to work on an example that captures the flavor of Samuelson’s
 multiplier-accelerator model.
 
-We'll set parameters equal to the same values we used in [this QuantEcon lecture](https://python.quantecon.org/samuelson.html).
+We'll set parameters equal to the same values we used in {doc}`intermediate:samuelson`.
 
-```{code-cell} python3
+```{code-cell} ipython3
 T = 80
 
 # parameters
-𝛼0 = 10.0
-𝛼1 = 1.53
-𝛼2 = -.9
+α_0 = 10.0
+α_1 = 1.53
+α_2 = -.9
 
-y_1 = 28.  # y_{-1}
-y0 = 24.
+y_neg1 = 28.  # y_{-1}
+y_0 = 24.
 ```
 
 Now we construct $A$ and $b$.
 
-```{code-cell} python3
+```{code-cell} ipython3
 A = np.identity(T)  # The T x T identity matrix
 
 for i in range(T):
 
     if i-1 >= 0:
-        A[i, i-1] = -𝛼1
+        A[i, i-1] = -α_1
 
     if i-2 >= 0:
-        A[i, i-2] = -𝛼2
+        A[i, i-2] = -α_2
 
-b = np.full(T, 𝛼0)
-b[0] = 𝛼0 + 𝛼1 * y0 + 𝛼2 * y_1
-b[1] = 𝛼0 + 𝛼2 * y0
+b = np.full(T, α_0)
+b[0] = α_0 + α_1 * y_0 + α_2 * y_neg1
+b[1] = α_0 + α_2 * y_0
 ```
 
 Let’s look at the matrix $A$ and the vector $b$ for our example.
 
-```{code-cell} python3
+```{code-cell} ipython3
 A, b
 ```
@@ -175,25 +176,24 @@ Now let’s solve for the path of $y$.
 
 If $y_t$ is GNP at time $t$, then we have a version of
 Samuelson’s model of the dynamics for GNP.
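+
+Before solving the system, let's sanity-check the loop-built $A$ against a more
+compact construction that assembles its three nonzero diagonal bands directly
+(a quick sketch; it relies only on the `k` offset argument of `np.eye`):
+
+```{code-cell} ipython3
+# 1 on the main diagonal, -α_1 and -α_2 on the first two subdiagonals
+A_banded = np.eye(T) - α_1 * np.eye(T, k=-1) - α_2 * np.eye(T, k=-2)
+
+np.allclose(A, A_banded)  # the two constructions should agree
+```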
-To solve $y = A^{-1} b$ we can either invert $A$ directly, as in
+To compute the solution $y = A^{-1} b$ we can either invert $A$ directly, as in
 
-```{code-cell} python3
+```{code-cell} ipython3
 A_inv = np.linalg.inv(A)
 
 y = A_inv @ b
 ```
 
-or we can use `np.linalg.solve`:
+or we can use `np.linalg.solve`:
 
-
-```{code-cell} python3
+```{code-cell} ipython3
 y_second_method = np.linalg.solve(A, b)
 ```
 
-Here make sure the two methods give the same result, at least up to floating point precision:
+Here we make sure the two methods give the same result, at least up to floating point precision:
 
-```{code-cell} python3
+```{code-cell} ipython3
 np.allclose(y, y_second_method)
 ```
@@ -207,7 +207,7 @@ it directly.
 
 Now we can plot.
 
-```{code-cell} python3
+```{code-cell} ipython3
 plt.plot(np.arange(T)+1, y)
 plt.xlabel('t')
 plt.ylabel('y')
@@ -215,7 +215,7 @@ plt.ylabel('y')
 
 plt.show()
 ```
 
-The **steady state** value $y^*$ of $y_t$ is obtained by setting $y_t = y_{t-1} =
+The *steady state* value $y^*$ of $y_t$ is obtained by setting $y_t = y_{t-1} =
 y_{t-2} = y^*$ in {eq}`tswm_1`, which yields
@@ -225,21 +225,21 @@ $$
 y^* = \frac{\alpha_{0}}{1 - \alpha_{1} - \alpha_{2}}
 $$
 
 If we set the initial values to $y_{0} = y_{-1} = y^*$, then $y_{t}$ will be
 constant:
 
-```{code-cell} python3
-y_star = 𝛼0 / (1 - 𝛼1 - 𝛼2)
-y_1_steady = y_star  # y_{-1}
-y0_steady = y_star
+```{code-cell} ipython3
+y_star = α_0 / (1 - α_1 - α_2)
+y_neg1_steady = y_star  # y_{-1}
+y_0_steady = y_star
 
-b_steady = np.full(T, 𝛼0)
-b_steady[0] = 𝛼0 + 𝛼1 * y0_steady + 𝛼2 * y_1_steady
-b_steady[1] = 𝛼0 + 𝛼2 * y0_steady
+b_steady = np.full(T, α_0)
+b_steady[0] = α_0 + α_1 * y_0_steady + α_2 * y_neg1_steady
+b_steady[1] = α_0 + α_2 * y_0_steady
 ```
 
-```{code-cell} python3
+```{code-cell} ipython3
 y_steady = A_inv @ b_steady
 ```
 
-```{code-cell} python3
+```{code-cell} ipython3
 plt.plot(np.arange(T)+1, y_steady)
 plt.xlabel('t')
 plt.ylabel('y')
 
 plt.show()
 ```
@@ -250,7 +250,7 @@ plt.show()
 
 ## Adding a random term
 
 To generate some excitement, we'll follow in the spirit of the great economists
-Eugen Slutsky and Ragnar Frisch and replace our original second-order difference
+[Eugen Slutsky](https://en.wikipedia.org/wiki/Eugen_Slutsky) and [Ragnar Frisch](https://en.wikipedia.org/wiki/Ragnar_Frisch) and replace our original second-order difference
 equation with the following **second-order stochastic linear difference
 equation**:
@@ -260,8 +260,8 @@ equation**:
 y_{t} = \alpha_{0} + \alpha_{1} y_{t-1} + \alpha_{2} y_{t-2} + u_t
 ```
 
-where $u_{t} \sim N\left(0, \sigma_{u}^{2}\right)$ and is IID,
-meaning **independent** and **identically** distributed.
+where $u_{t} \sim N\left(0, \sigma_{u}^{2}\right)$ and is {ref}`IID <iid-theorem>`,
+meaning independent and identically distributed.
 
 We’ll stack these $T$ equations into a system cast in terms of matrix algebra.
@@ -292,16 +292,13 @@ $$ (eq:eqma)
 
 Let’s try it out in Python.
 
-```{code-cell} python3
-𝜎u = 2.
-```
-
-```{code-cell} python3
-u = np.random.normal(0, 𝜎u, size=T)
+```{code-cell} ipython3
+σ_u = 2.
+u = np.random.normal(0, σ_u, size=T)
 y = A_inv @ (b + u)
 ```
 
-```{code-cell} python3
+```{code-cell} ipython3
 plt.plot(np.arange(T)+1, y)
 plt.xlabel('t')
 plt.ylabel('y')
@@ -314,12 +311,12 @@ number of advanced countries in recent decades.
 
 We can simulate $N$ paths.
 
-```{code-cell} python3
+```{code-cell} ipython3
 N = 100
 
 for i in range(N):
     col = cm.viridis(np.random.rand())  # Choose a random color from viridis
-    u = np.random.normal(0, 𝜎u, size=T)
+    u = np.random.normal(0, σ_u, size=T)
     y = A_inv @ (b + u)
     plt.plot(np.arange(T)+1, y, lw=0.5, color=col)
 
@@ -332,12 +329,12 @@ plt.show()
 ```
 
 Also consider the case when $y_{0}$ and $y_{-1}$ are at steady state.
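+
+(Recall that with the shocks switched off, these initial conditions keep the
+path pinned at $y^*$; a one-line check using the arrays built above confirms
+this before we add randomness back in.)
+
+```{code-cell} ipython3
+# With u = 0, steady-state initial conditions imply y_t = y* for every t
+np.allclose(A_inv @ b_steady, np.full(T, y_star))
+```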
-```{code-cell} python3
+```{code-cell} ipython3
 N = 100
 
 for i in range(N):
     col = cm.viridis(np.random.rand())  # Choose a random color from viridis
-    u = np.random.normal(0, 𝜎u, size=T)
+    u = np.random.normal(0, σ_u, size=T)
     y_steady = A_inv @ (b_steady + u)
     plt.plot(np.arange(T)+1, y_steady, lw=0.5, color=col)
 
@@ -347,8 +344,6 @@ plt.xlabel('t')
 plt.ylabel('y')
 
 plt.show()
 ```
 
-
-
 ## Computing population moments
@@ -389,44 +384,42 @@ $$
 
 Let's write a Python class that computes the mean vector $\mu_y$ and covariance matrix $\Sigma_y$.
 
-
-
 ```{code-cell} ipython3
 class population_moments:
     """
-    Compute population moments mu_y, Sigma_y.
+    Compute population moments μ_y, Σ_y.
     ---------
     Parameters:
-    alpha0, alpha1, alpha2, T, y_1, y0
+    α_0, α_1, α_2, T, y_neg1, y_0
     """
-    def __init__(self, alpha0, alpha1, alpha2, T, y_1, y0, sigma_u):
+    def __init__(self, α_0, α_1, α_2, T, y_neg1, y_0, σ_u):
 
         # compute A
         A = np.identity(T)
 
         for i in range(T):
             if i-1 >= 0:
-                A[i, i-1] = -alpha1
+                A[i, i-1] = -α_1
 
             if i-2 >= 0:
-                A[i, i-2] = -alpha2
+                A[i, i-2] = -α_2
 
         # compute b
-        b = np.full(T, alpha0)
-        b[0] = alpha0 + alpha1 * y0 + alpha2 * y_1
-        b[1] = alpha0 + alpha2 * y0
+        b = np.full(T, α_0)
+        b[0] = α_0 + α_1 * y_0 + α_2 * y_neg1
+        b[1] = α_0 + α_2 * y_0
 
         # compute A inverse
         A_inv = np.linalg.inv(A)
 
-        self.A, self.b, self.A_inv, self.sigma_u, self.T = A, b, A_inv, sigma_u, T
+        self.A, self.b, self.A_inv, self.σ_u, self.T = A, b, A_inv, σ_u, T
 
     def sample_y(self, n):
         """
         Give a sample of size n of y.
         """
-        A_inv, sigma_u, b, T = self.A_inv, self.sigma_u, self.b, self.T
-        us = np.random.normal(0, sigma_u, size=[n, T])
+        A_inv, σ_u, b, T = self.A_inv, self.σ_u, self.b, self.T
+        us = np.random.normal(0, σ_u, size=[n, T])
         ys = np.vstack([A_inv @ (b + u) for u in us])
 
         return ys
 
     def get_moments(self):
         """
         Compute the population moments of y.
         """
-        A_inv, sigma_u, b = self.A_inv, self.sigma_u, self.b
+        A_inv, σ_u, b = self.A_inv, self.σ_u, self.b
 
-        # compute mu_y
-        self.mu_y = A_inv @ b
-        self.Sigma_y = sigma_u**2 * (A_inv @ A_inv.T)
+        # compute μ_y
+        self.μ_y = A_inv @ b
+        self.Σ_y = σ_u**2 * (A_inv @ A_inv.T)
+
+        return self.μ_y, self.Σ_y
 
-        return self.mu_y, self.Sigma_y
-
 
-my_process = population_moments(
-    alpha0=10.0, alpha1=1.53, alpha2=-.9, T=80, y_1=28., y0=24., sigma_u=1)
+series_process = population_moments(
+    α_0=10.0, α_1=1.53, α_2=-.9, T=80, y_neg1=28., y_0=24., σ_u=1)
 
-mu_y, Sigma_y = my_process.get_moments()
-A_inv = my_process.A_inv
+μ_y, Σ_y = series_process.get_moments()
+A_inv = series_process.A_inv
 ```
 
 It is enlightening to study the $\mu_y, \Sigma_y$'s implied by various parameter values.
@@ -458,14 +451,14 @@ Among other things, we can use the class to exhibit how **statistical stationar
 
-Let's begin by generating $N$ time realizations of $y$ plotting them together with population mean $\mu_y$ .
+Let's begin by generating $N$ time realizations of $y$ and plotting them together with the population mean $\mu_y$.
 
 ```{code-cell} ipython3
-# plot mean
+# Plot mean
 N = 100
 
 for i in range(N):
     col = cm.viridis(np.random.rand())  # Choose a random color from viridis
-    ys = my_process.sample_y(N)
+    ys = series_process.sample_y(N)
     plt.plot(ys[i,:], lw=0.5, color=col)
-    plt.plot(mu_y, color='red')
+    plt.plot(μ_y, color='red')
 
 plt.xlabel('t')
 plt.ylabel('y')
 plt.show()
 ```
@@ -478,41 +471,44 @@ Visually, notice how the variance across realizations of $y_t$ decreases as $t$
 
 Let's plot the population variance of $y_t$ against $t$.
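+
+Before doing so, we can cross-check the population calculation against a Monte
+Carlo estimate (a rough sketch; with 5,000 simulated paths a few percent of
+sampling error remains):
+
+```{code-cell} ipython3
+# Compare sample variances across many simulated paths with Σ_y's diagonal
+ys = series_process.sample_y(5000)
+np.max(np.abs(ys.var(axis=0) / Σ_y.diagonal() - 1))  # max relative error
+```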
 ```{code-cell} ipython3
-# plot variance
-plt.plot(Sigma_y.diagonal())
+# Plot variance
+plt.plot(Σ_y.diagonal())
 plt.show()
 ```
 
-Notice how the population variance increases and asymptotes
+Notice how the population variance increases and then levels off, approaching an asymptote.
 
 +++
 
-Let's print out the covariance matrix $\Sigma_y$ for a time series $y$
+Let's print out the covariance matrix $\Sigma_y$ for a time series $y$.
 
 ```{code-cell} ipython3
-my_process = population_moments(alpha0=0, alpha1=.8, alpha2=0, T=6, y_1=0., y0=0., sigma_u=1)
-
-mu_y, Sigma_y = my_process.get_moments()
-print("mu_y = ",mu_y)
-print("Sigma_y = ", Sigma_y)
+series_process = population_moments(
+    α_0=0, α_1=.8, α_2=0, T=6, y_neg1=0., y_0=0., σ_u=1)
+
+μ_y, Σ_y = series_process.get_moments()
+print("μ_y = ", μ_y)
+print("Σ_y = ", Σ_y)
 ```
 
-Notice that the covariance between $y_t$ and $y_{t-1}$ -- the elements on the superdiagonal -- are **not** identical.
+Notice that the covariances between $y_t$ and $y_{t-1}$ -- the elements on the superdiagonal -- are *not* identical.
 
-This is is an indication that the time series represented by our $y$ vector is not **stationary**.
+This is an indication that the time series represented by our $y$ vector is not **stationary**.
 
-To make it stationary, we'd have to alter our system so that our **initial conditions** $(y_1, y_0)$ are not fixed numbers but instead a jointly normally distributed random vector with a particular mean and covariance matrix.
+To make it stationary, we'd have to alter our system so that our *initial conditions* $(y_0, y_{-1})$ are not fixed numbers but instead a jointly normally distributed random vector with a particular mean and covariance matrix.
 
-We describe how to do that in another lecture in this lecture [Linear State Space Models](https://python.quantecon.org/linear_models.html).
+We describe how to do that in {doc}`intermediate:linear_models`.
 
 But just to set the stage for that analysis, let's print out the bottom right corner of $\Sigma_y$.
 
 ```{code-cell} ipython3
-mu_y, Sigma_y = my_process.get_moments()
-print("bottom right corner of Sigma_y = \n", Sigma_y[72:,72:])
+series_process = population_moments(
+    α_0=10.0, α_1=1.53, α_2=-.9, T=80, y_neg1=28., y_0=24., σ_u=1)
+μ_y, Σ_y = series_process.get_moments()
+print("bottom right corner of Σ_y = \n", Σ_y[72:,72:])
 ```
 
-Please notice how the sub diagonal and super diagonal elements seem to have converged.
+Please notice how the subdiagonal and superdiagonal elements seem to have converged.
 
 This is an indication that our process is asymptotically stationary.
 
@@ -522,7 +518,6 @@ There is a lot to be learned about the process by staring at the off diagonal el
 
 +++
 
-
 ## Moving average representation
 
-Let's print out $A^{-1}$ and stare at its structure
+Let's print out $A^{-1}$ and stare at its structure.
@@ -531,16 +526,13 @@ To study the structure of $A^{-1}$, we shall print just up to $3$ decimals.
 
-Let's begin by printing out just the upper left hand corner of $A^{-1}$
+Let's begin by printing out just the upper left hand corner of $A^{-1}$.
 
 ```{code-cell} ipython3
 with np.printoptions(precision=3, suppress=True):
     print(A_inv[0:7,0:7])
 ```
 
-
-
-
 Evidently, $A^{-1}$ is a lower triangular matrix.
 
@@ -551,7 +543,6 @@ Let's print out the lower left hand corner of $A^{-1}$ and stare at its structu
 ```{code-cell} ipython3
 with np.printoptions(precision=3, suppress=True):
     print(A_inv[72:,72:])
 ```
 
-
 Notice how every row ends with the previous row's pre-diagonal entries.
 
@@ -560,7 +551,7 @@ Notice how every row ends with the previous row's pre-diagonal entries.
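+
+The printouts suggest a lower triangular structure, and a one-line check
+confirms it (a quick sketch: `np.tril` zeroes out everything above the main
+diagonal):
+
+```{code-cell} ipython3
+# A_inv should equal its own lower-triangular part
+np.allclose(A_inv, np.tril(A_inv))
+```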
 Since $A^{-1}$ is lower triangular, each row represents $y_t$ for a particular $t$ as the sum of
 - a time-dependent function $A^{-1} b$ of the initial conditions incorporated in $b$, and
-- a weighted sum of current and past values of the IID shocks $\{u_t\}$
+- a weighted sum of current and past values of the IID shocks $\{u_t\}$.
 
 Thus, let $\tilde{A}=A^{-1}$.
@@ -580,13 +571,13 @@ Just as system {eq}`eq:eqma` constitutes a
 
 ## A forward looking model
 
-Samuelson’s model is **backwards looking** in the sense that we give it **initial conditions** and let it
+Samuelson’s model is *backward looking* in the sense that we give it *initial conditions* and let it
 run.
 
-Let’s now turn to model that is **forward looking**.
+Let’s now turn to a model that is *forward looking*.
 
-We apply similar linear algebra machinery to study a **perfect
-foresight** model widely used as a benchmark in macroeconomics and
+We apply similar linear algebra machinery to study a *perfect
+foresight* model widely used as a benchmark in macroeconomics and
 finance.
 
 As an example, we suppose that $p_t$ is the price of a stock and
@@ -599,7 +590,7 @@ $$
 y = A^{-1} \left(b + u\right)
 $$
 
-Our **perfect foresight** model of stock prices is
+Our *perfect foresight* model of stock prices is
 
 $$
 p_{t} = \sum_{j=0}^{T-t} \beta^{j} y_{t+j}, \quad \beta \in (0,1)
 $$
@@ -634,34 +625,34 @@ y_{T}
 \end{array}\right]
 $$
 
-```{code-cell} python3
-𝛽 = .96
+```{code-cell} ipython3
+β = .96
 ```
 
-```{code-cell} python3
+```{code-cell} ipython3
 # construct B
 B = np.zeros((T, T))
 
 for i in range(T):
-    B[i, i:] = 𝛽 ** np.arange(0, T-i)
+    B[i, i:] = β ** np.arange(0, T-i)
 ```
 
-```{code-cell} python3
+```{code-cell} ipython3
 B
 ```
 
-```{code-cell} python3
-𝜎u = 0.
-u = np.random.normal(0, 𝜎u, size=T)
+```{code-cell} ipython3
+σ_u = 0.
+u = np.random.normal(0, σ_u, size=T)
 y = A_inv @ (b + u)
 y_steady = A_inv @ (b_steady + u)
 ```
 
-```{code-cell} python3
+```{code-cell} ipython3
 p = B @ y
 ```
 
-```{code-cell} python3
+```{code-cell} ipython3
 plt.plot(np.arange(0, T)+1, y, label='y')
 plt.plot(np.arange(0, T)+1, p, label='p')
 plt.xlabel('t')
 plt.legend()
 plt.show()
 ```
@@ -676,7 +667,7 @@ Can you explain why the trend of the price is downward over time?
 
 Also consider the case when $y_{0}$ and $y_{-1}$ are at the steady state.
 
-```{code-cell} python3
+```{code-cell} ipython3
 p_steady = B @ y_steady
 
 plt.plot(np.arange(0, T)+1, y_steady, label='y')
 plt.plot(np.arange(0, T)+1, p_steady, label='p')
 plt.xlabel('t')
 plt.legend()
 
 plt.show()
 ```
-
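+
+As a final check, the present-value formula implies the recursion
+$p_t = y_t + \beta p_{t+1}$, so the computed price path should satisfy it to
+floating point precision (a small sketch using the arrays computed above):
+
+```{code-cell} ipython3
+# p_t = y_t + β p_{t+1} for t = 1, ..., T-1
+np.allclose(p[:-1], y[:-1] + β * p[1:])
+```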