
Refining the Basic Mathematics of Economics and Accounting

Intro

There's something wrong with economics and accounting math. Not "wrong-answers" wrong, but bad pedagogy, and icky approximations.

It's easy to go after approximations — "No! I insist that you write this other formula that is actually correct", where the correct formula is also far larger. And indeed, if there were just a trade-off between precision and bloat, I would not bother writing this. Rather, what really gets me excited is that there is no such trade-off — we can be both more correct and just as (or more!) terse. We just need to use the right abstractions.

A first example

Here's a starting point. Consider a phrase like "2% growth"; you've seen it in newspapers, in high school math doing loan interest, and maybe in econ or accounting classes too. The meaning of the phrase is "went from 100% to 102%" (over some period), adding another 2% on top.

But adding percents is, at best, a very dubious endeavor. For example, what is 2% growth twice? Is it 4% growth? Nope! That neglects compounding, which is to say that each percentage is in terms of the total up to the previous period. You can only add "synchronous" percentages, and that is not a very common thing to do either.

A quick note on method: the above paragraph is not formal, and is also beyond the realm of regular dimensional analysis. But we shouldn't dismiss it for lack of formality; the burden of proof should be on justifying correct formulae, not disputing incorrect ones ("guilty until proven innocent"). In type theory terms, we should be conservative and ban $(+) \colon \mathrm{Percent} \to \mathrm{Percent} \to \mathrm{Percent}$, and instead replace it with some more complex operation which takes a "proof of synchronicity", whatever that looks like. I don't know what sort of proposition we'd need to prove, but the good thing is that, based on what follows, I think we can side-step needing to figure this operation out entirely.

There is, however, an alternative to addition which avoids this problem and handles compounding correctly, for free: multiplication. Instead of doing (wrong!):

$$a + 2\% \cdot a + 2\% \cdot a$$

we can do (right!):

$$a \cdot (1 + 2\%) \cdot (1 + 2\%)$$

or more simply,

$$a \cdot 102\% \cdot 102\%$$

And there we have a bit of a shibboleth: call it "102% growth", not "2% growth". It might be too late to change English, but one can dream...
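To make the point concrete, here is a quick Python sketch (the amount and rates are arbitrary, and the variable names are mine):

```python
a = 1000.0  # starting amount, in arbitrary units

# Wrong: adding percents, as in "2% growth twice is 4% growth"
wrong = a * (1 + 0.02 + 0.02)  # 1040.0

# Right: multiplying by the 102% growth factor twice
right = a * 1.02 * 1.02        # 1040.4

print(wrong, right)  # the 0.4 difference is the neglected compounding
```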

Differing growth amounts

Suppose we had $g_0$ growth for one period, $g_1$ for another, and so on. What does that look like?

In the conventional regime where the steady state is $\bar{g} = 0\%$, not $g = 100\%$, that's

$$a_n = a_0 \cdot (1 + \bar{g}_0) \cdot (1 + \bar{g}_1) \cdot \ldots \cdot (1 + \bar{g}_{n-1})$$

But with the "corrected" variables, that's simply

$$a_n = a_0 \cdot g_0 \cdot g_1 \cdot \ldots \cdot g_{n-1}$$

Clearly this is terser. But this is just the same point as in the previous section, now made using variables rather than concrete percentages.
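A minimal sketch of the "corrected" formula in Python (the growth factors are made up):

```python
import math

a0 = 1000.0
g = [1.02, 1.03, 0.99]  # per-period growth factors: grow, grow, shrink

# a_n = a_0 * g_0 * g_1 * ... * g_{n-1}
a_n = a0 * math.prod(g)
print(a_n)  # ~1040.09
```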

Differing period lengths

Now, say we want to incorporate time. Our periods will be different lengths: $\Delta t_0$, $\Delta t_1$, etc. The growth of each period will be given not as it literally occurred over that period, but normalized to what it would be over a unit length of time. That is to say, we will not specify the growth with

$$g'_n := \frac{a_{n +1}}{a_n}$$

but with

$$g_n := \left( \frac{a_{n +1}}{a_n} \right)^{\frac 1 {\Delta t_n}}$$

the final amount (post growth) will thus be

$$a_n = a_0 \cdot g_0^{\Delta t_0} \cdot g_1^{\Delta t_1} \cdot \ldots \cdot g_{n-1}^{\Delta t_{n-1}}$$

This is correct, and terse.

But I must note there is (again, if you read the previous aside) a problem with dimensional analysis. $\frac 1 {\Delta t}$ is not dimensionless, but rather has dimension $\frac 1 {\mathrm{T}}$, i.e. units like $\frac 1 {\mathrm{seconds}}$ or $\frac 1 {\mathrm{years}}$. Both numbers given to exponentiation must be dimensionless, so we shouldn't be doing this. There is a solution, however, which is to use the identity $g = \exp (\ln g)$. Rewriting with that, we have

$$a_n = a_0 \cdot \exp (\Delta t_0 \cdot \ln g_0) \cdot \exp ( \Delta t_1 \cdot \ln g_1) \cdot \ldots \cdot \exp (\Delta t_{n-1} \cdot \ln g_{n-1})$$

$\ln g_n$ we hereby declare to have dimension $\frac 1 T$, and now the final exponent $\Delta t_n \cdot \ln g_n$ properly has dimension $1$ (dimensionless). The logarithm is just as dimensionally invalid as the exponent before it, but for that we just need to inline further. Recall our definition for $g_n$:

$$g_n = \left( \frac{a_{n +1}}{a_n} \right)^{\frac 1 {\Delta t_n}}$$

thus

$$\begin{aligned} \ln g_n & = \ln \left( \left( \frac{a_{n +1}}{a_n} \right)^{\frac 1 {\Delta t_n}} \right) \\\ &= \frac 1 {\Delta t_n} \ln \left( \frac{a_{n +1}}{a_n} \right) \\\ &= \frac 1 {\Delta t_n} \left( \ln a_{n +1} - \ln {a_n} \right) \\\ \end{aligned}$$

The final right-hand side properly has the $\frac 1 {T}$ dimension; we've additionally found and fixed another dimensional analysis violation, in the definition of $g_n$. Let's call this corrected value $\bar{\bar{g}}$:

$$\bar{\bar{g}}_n := \frac 1 {\Delta t_n} \left( \ln a_{n +1} - \ln {a_n} \right)$$

so that our formula for $a_n$ is:

$$a_n = a_0 \cdot \exp(\Delta t_0 \cdot \bar{\bar{g}}_0) \cdot \exp(\Delta t_1 \cdot \bar{\bar{g}}_1) \cdot \ldots \cdot \exp(\Delta t_{n-1} \cdot \bar{\bar{g}}_{n-1})$$

or equivalently

$$a_n = a_0 \cdot \exp(\Delta t_0 \cdot \bar{\bar{g}}_0 + \Delta t_1 \cdot \bar{\bar{g}}_1 + \ldots + \Delta t_{n-1} \cdot \bar{\bar{g}}_{n-1})$$

Now everything is dimensionally correct, and still terse. Conversely, rewriting any of these equations with $\bar{g}$ would have cluttered things up again, only introducing more $\bar{g} + 1$ terms inside these formulae. That would have been a real mess.

The terse approximation for $a_n$, only valid for small $\bar{g} \cdot \Delta t$ and small $n$, is:

$$a_n \approx a_0 \cdot (1 + \Delta t_0 \cdot \bar{g}_0 + \Delta t_1 \cdot \bar{g}_1 + \ldots + \Delta t_{n-1} \cdot \bar{g}_{n-1})$$

but that is hardly more terse!
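Here is a sketch of both the exact formula and the approximation, with made-up rates and period lengths, to show how they drift apart:

```python
import math

a0  = 1000.0
dt  = [0.5, 2.0, 1.0]            # period lengths (years, say)
gbb = [0.0198, 0.0296, -0.0101]  # log growth rates, dimension 1/year

# Exact: a_n = a_0 * exp(dt_0 * g_0 + dt_1 * g_1 + ... + dt_{n-1} * g_{n-1})
exact = a0 * math.exp(sum(t * g for t, g in zip(dt, gbb)))

# Approximation: a_n ~ a_0 * (1 + dt_0 * g_0 + ...), small rates and few periods only
approx = a0 * (1 + sum(t * g for t, g in zip(dt, gbb)))

print(exact, approx)  # ~1060.78 vs ~1059.0
```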

Sorting out $g$, $\bar{g}$, and $\bar{\bar{g}}$

$\bar{\bar{g}}$, the final sort of growth rate variable we settled on, does admittedly have some similarities to $\bar{g}$, the one we rejected. $\ln g$ and $g - 1$ are similar when $g$ is close to 1, as is the case for most macro growth rates, and identical at 1: $\ln 1 = 1 - 1 = 0$. Perhaps the correctness and usefulness of $\bar{\bar{g}}$ boosted $\bar{g}$'s reputation, by means of the latter being a convenient approximation for the former.
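A quick numeric look at just how close (and then not close) the two are:

```python
import math

# g - 1 vs ln g: nearly identical close to 1, diverging further out
for g in [1.001, 1.02, 1.10, 1.50, 2.00]:
    print(f"g = {g}: g - 1 = {g - 1:.4f}, ln g = {math.log(g):.4f}")
# At g = 1.02 the two differ only in the fourth decimal place;
# at g = 2 ("100% growth"), g - 1 = 1.0000 but ln g = 0.6931.
```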

In any event, we will see expressions that look like $g$ and $\bar{\bar{g}}$ in the further developments below.

Multiplicative sequence pre-calculus

So far, we've used standard concepts and notations, even if the emphasis on absolute syntactic rigor with no approximations is a bit unidiomatic. Now, however, we will introduce concepts that are not harder, but are more obscure.

Products of sequences

Suppose we have a loan with a variable (compound) interest rate $r$, the outstanding balance is calculated every unit interval, and no payments are made. Because of the discrete points at which the interest is calculated, $r$ can be a (real-valued) sequence ($\mathbb{N} \to \mathbb{R}$).

The formula for the balance is, informally,

$$B_t = A \cdot r_0 \cdot r_1 \cdot r_2 \cdot \ldots \cdot r_{t -1 }$$

Similar to lumping together the 1 and the 2% above as 102%, note that in the formula above the balance seems more "fundamental" than the total interest: $B_t - A$ is the total interest written in terms of the balance (and principal), and there isn't an obvious way to rewrite that expression such that we "skip" calculating the balance.

We can do better, formalizing it with the notion of a product of a sequence:

$$B_t = A \prod_{u = 0}^{t - 1} r_u$$

The subscripts in the above formula don't add too much value in this case. We can define cumulative product and sum operators on sequences, that is, $\sum, \prod : (\mathbb{N} \to \mathbb{R}) \to (\mathbb{N} \to \mathbb{R})$, as follows:

$$\sum s := 0 :: \left( n \mapsto \sum_{i = 0}^n s_i \right)$$ $$\prod s := 1 :: \left( n \mapsto \prod_{i = 0}^n s_i \right)$$

where $v :: s$ "delays" $s$ by one, using $v$ as the new initial value.

And then (with arithmetic on sequences defined point-wise), the balance formula above can be rewritten:

$$B = A \cdot \left(\prod r \right)$$

Just as we did above, this can be rewritten in terms of the more familiar summation, via $\exp$:

$$B = A \cdot \exp\left(\sum \ln r \right)$$
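Here's a sketch of these operators in Python, modeling (finite prefixes of) sequences as lists; the $v :: s$ delay falls out of the `initial=` argument to `itertools.accumulate`:

```python
import math
from itertools import accumulate
from operator import add, mul

def cumsum(s):
    """sum s := 0 :: (n -> s_0 + ... + s_n)"""
    return list(accumulate(s, add, initial=0))

def cumprod(s):
    """prod s := 1 :: (n -> s_0 * ... * s_n)"""
    return list(accumulate(s, mul, initial=1))

A = 1000.0
r = [1.02, 1.03, 0.99]  # per-period interest factors

# B = A * (prod r); note B_0 = A, since the product is delayed
B = [A * p for p in cumprod(r)]
print(B)

# Equivalently, B = A * exp(sum (ln r))
B2 = [A * math.exp(x) for x in cumsum([math.log(f) for f in r])]
print(B2)
```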

Ratios of sequences

If we have some sequence $s$, the (forward) quotient operator is[^1]

$${\Large Ϙ} s := n \mapsto \frac{s_{n+1}}{s_n}$$

We have a nice "fundamental theorem", where

$${\Large Ϙ} \left( \prod s \right) = s$$

and

$$\prod \left( {\Large Ϙ} s \right) = s / s_0$$
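Both identities are easy to spot-check numerically; a small sketch (repeating the `cumprod` definition from above so it runs standalone):

```python
from itertools import accumulate
from operator import mul

def cumprod(s):
    return list(accumulate(s, mul, initial=1))

def quot(s):
    """Forward quotient operator: (Q s)_n = s_{n+1} / s_n"""
    return [b / a for a, b in zip(s, s[1:])]

s = [2.0, 6.0, 3.0, 12.0]

print(quot(cumprod(s)))       # Q(prod s) = s       -> [2.0, 6.0, 3.0, 12.0]
print(cumprod(quot(s)))       # prod(Q s) = s / s_0 -> [1.0, 3.0, 1.5, 6.0]
print([x / s[0] for x in s])  # s / s_0             -> [1.0, 3.0, 1.5, 6.0]
```

(Note the shapes: $\prod$ extends a sequence by one element and ${\Large Ϙ}$ shortens it by one, which is why the two compositions round-trip cleanly.)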

Differing period lengths, again

We can now use the above machinery to rewrite our formulae for growth with varying rates and varying period lengths much more succinctly. The informal $\ldots$ can be replaced with the use of our cumulative sequence operators.

$$\begin{aligned} a & := a_0 \cdot \prod \exp(\Delta t \cdot \bar{\bar{g}}) \\\ & ~ = a_0 \cdot \exp\left(\sum \Delta t \cdot \bar{\bar{g}}\right) \end{aligned}$$

The commuting of exponentiation past the cumulative operator (which turns the product into a sum) is on display, with little else to distract from it.

Loan with payments

Now let's add one more complication to our modeling goal. We'll have a variable interest rate and variable period lengths, like before, but also variable loan payments (imagine the debtor gets behind and tries to catch up).

The recurrence relation is this:

$$B_{n +1} = (B_n - P_n) \cdot \exp(\bar{\bar{r}}_n \cdot \Delta t_n)$$

which is to say, the next balance before interest is the old balance less the payment. The final next balance then also includes the interest, calculated from that intermediate total.

We can do "discrete differential equations", called as "difference equations", and their multiplicative counterpart, for this:

$$\Delta B = (B - P) \cdot \exp(\bar{\bar{r}} \cdot \Delta t) - B$$ $${\Large Ϙ} B = \left(1 - \frac P B \right) \cdot \exp(\bar{\bar{r}} \cdot \Delta t)$$

Admittedly, neither of these look very pretty.

We can rewrite the first, additive one as:

$$\Delta B = (B - P) \cdot (\exp(\bar{\bar{r}} \cdot \Delta t) - 1) - P$$

where $\exp(\bar{\bar{r}} \cdot \Delta t) - 1$ is just the "pure interest" rate for that period, something like the $2\%$-not-$102\%$ style we've been arguing against. However, this does not actually simplify the formula, for two reasons. Firstly, the interest rate itself can't reasonably be written with $\bar{r}$, because the $\Delta t$ has to appear in an exponent (as we've discussed before). Secondly, we're stuck repeating the $-P$, because we need to calculate the interest after including the payment.

There is no closed-form equation for this in the style of what we've done so far — only the recurrence relation, which gets around the issue with subscripts. The fundamental problem is that the loan payments are inherently additive, while the interest calculation is inherently multiplicative, so we cannot express the sequence as a single cumulative sum or product for a closed-form solution.
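With no closed form available, the honest move is to just run the recurrence. A minimal simulation sketch (the payments, rates, and period lengths are made up for illustration):

```python
import math

def balances(B0, payments, rates, dts):
    """Iterate B_{n+1} = (B_n - P_n) * exp(r_n * dt_n)."""
    B = [B0]
    for P, r, dt in zip(payments, rates, dts):
        B.append((B[-1] - P) * math.exp(r * dt))
    return B

P  = [100.0, 0.0, 250.0]  # debtor misses a payment, then catches up
r  = [0.05, 0.06, 0.05]   # log interest rates, dimension 1/year
dt = [1.0, 1.0, 0.5]      # period lengths in years

print(balances(1000.0, P, r, dt))
```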

Multiplicative Calculus

The world may (or may not) be discrete, but we use continuous math to explore intuitions and ideals for a reason. In physics there is, explicitly, continuum mechanics to make this argument. Given economics's infamous "physics envy", I'm surprised the phrase "continuum economics" isn't out there; maybe that's because most/all theoretical neoclassical econ already is "continuum economics"?

For this topic, the continuous counterpart we might call continuously compounding growth, after the standard term "continuously compounding interest". Again, this is nothing obscure: a lot of people will learn it in high school or early college math, whether they go on to study economics and accounting or not. But, in a typical bad-pedagogy mistake, too much emphasis is put on how to "solve" the problem/equation/whatever, and not enough on what the problem to be solved is.

For the classic loan problem, the interest rate is constant; the interesting things stem just from irregular (or arbitrary) payments. But in general, we also want to consider non-constant, time-varying growth. Regular integration is for "continuous sums"; per the previous sections, if the right way to deal with growth is not iterated addition but multiplication, then what we are looking for is "continuous products".

The math we want for this is the "Multiplicative calculus", which I wrote a bit about separately, based chiefly on this paper doi:10.1016/j.jmaa.2007.03.081.
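For reference (this definition is from that literature, not something established above), the central object is the multiplicative derivative, here restated in the pointwise limit style used in this document, assuming $f$ positive and differentiable:

$$f^{*}(a) := \lim_{x \to a} \left( \frac{f(x)}{f(a)} \right)^{\frac 1 {x - a}} = \exp\left( \frac{f'(a)}{f(a)} \right)$$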

Logarithmic derivative, not quite what we want

The multiplicative derivative is very close to the logarithmic derivative $\frac{f'}{f}$, except that the logarithmic derivative skips the final $\exp$ step, losing the symmetry.

Other topics

Elasticity

The Wikipedia article for elasticity, like most econ texts I could find from a quick glance, just has an informal definition made from infinitesimals:

The $x$-elasticity of $y$ is:

$$\epsilon := \frac{\partial y / y}{\partial x / x}$$

I won't lie, that is pretty. But it does more suspicious addition — despite looking like all division — in the form of the infinitesimals. This is because infinitesimals, as "funny zeros" — funny additive identities — are an additive concept. Or, if that is a bit too much woo-woo, more prosaically it is because they stem from subtraction in limits.

This other Wikipedia article has a formal limit definition:

$$\epsilon(f) := \frac{x}{f(x)} \cdot f'(x)$$

but underneath, the definition of the derivative has the suspicious additions/subtractions on values with output dimension that we'd like to avoid.

However, that article also transforms the original definition into

$$\epsilon(f) := a \mapsto \lim_{x \to a} \frac{\frac {f(x)}{f(a)} - 1} {\frac x a -1}$$

With this definition, we divide first, and then only subtract dimensionless values. This successfully avoids any criticism of suspicious subtractions. Also, the $-1$ is very natural in this case: it would seem to be the perfect repudiation of my original claim that using values like $2\%$ rather than $102\%$ is unnatural and misguided!

However, there is another problem with this, and a solution more in the vein I am thinking of. Recall that the curves of constant elasticity are power-law functions of the form:

$$x \mapsto a \cdot x^k$$

(In particular, $\epsilon(x \mapsto a \cdot x^k) = k$.)

Recall the limit definition of a derivative:

$$f' = a \mapsto \lim_{x \to a} \frac {f(x) - f(a)} {x - a}$$

The standard geometric interpretation of the derivative is that we have a family of secants, with the two points of each secant growing ever closer together, and their limit is the one-point tangent.[^2] The expression inside the limit, called the difference quotient, gives the slopes of the family of secants (the choice of $x$ determines the secant in question). The overall expression, with the limit, is the slope of the tangent.

Less well-known is the idea that we can do a similar geometric construction for elasticities. Two points determine a power-law function just as they determine a line; we can thus speak of a "power-law secant", and in the limit as the two points approach each other, we have a "power-law tangent". We'd want the inner expression to be the elasticities of the family of "power-law secants", and the overall expression to be the elasticity of the tangent.

The curves of constant slope are just lines, graphs of functions of the form $x \mapsto c \cdot x + b$. In this case, we note two lemmas, one geometric and the other algebraic:

  • geometric: the original line, every line in the family of secants, and the tangent line are all the same line.
  • algebraic: the limit is trivial and we can just as well use the underlying expression for any value of $x$ and $a$ to calculate the constant slope.

For good practice, let's prove the second lemma:

$$\frac {(c \cdot x + b) - (c\cdot a +b)} {x - a} = \frac {c\cdot (x - a)} {x - a} = c$$

Likewise, we would expect the same thing about curves of constant elasticity:

  • geometric: the original power-law curve[^3], every curve in the family of "power-law secants", and the "power-law tangent" are all the same curve.
  • algebraic: the limit is trivial and we can just as well use the underlying expression for any value of $x$ and $a$ to calculate the constant elasticity.

The geometric lemma is true for power-law curves, but with the formulae given above, the algebraic lemma is false! Let's try substituting an arbitrary power-law function and simplifying:

$$\frac {\frac {b \cdot x^c} {b\cdot a^c} - 1} {\frac x a - 1} = \frac {\frac {x^c} {a^c} - 1} {\frac x a - 1}$$

We can't readily simplify it further, and if we plug in different values for $x$ and $a$, we in fact get different results! We do not get $c$ in all cases, even though the (constant) elasticity everywhere on the curve is in fact $c$.
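A numeric sketch of the failure (the function and points are chosen arbitrarily):

```python
def secant_elasticity(f, a, x):
    """The (f(x)/f(a) - 1) / (x/a - 1) expression, with no limit taken."""
    return (f(x) / f(a) - 1) / (x / a - 1)

f = lambda x: 3.0 * x ** 2  # power law with constant elasticity c = 2

print(secant_elasticity(f, 1.0, 2.0))  # 3.0,  not 2
print(secant_elasticity(f, 1.0, 1.5))  # 2.5,  not 2
print(secant_elasticity(f, 2.0, 2.1))  # 2.05, approaching 2 only in the limit
```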

Is all hope lost? Is elasticity just a more broken concept than slope? Not so! The key is that we just need a different formula.

Try this:

$$\epsilon(f) \stackrel{?}{=} a \mapsto \lim_{x \to a} \log_{x/a}{\frac {f(x)} {f(a)}}$$

The intuition here is that we are comparing a small multiplicative perturbation in the input to the corresponding perturbation in the output, and instead of taking the quotient of these quantities (inverse binary multiplication), we are taking the logarithm (inverse binary exponentiation). We are asking: what power of the input multiplicative perturbation yields the output multiplicative perturbation?

It is very interesting to compare this definition, the multiplicative derivative from before, and the regular additive derivative. The multiplicative derivative "upgraded" the output-dimension operation (subtraction to division), but left the input-dimension one the same, additive as before (that is, $x - a$ is still the same). The multiplicative derivative's mixed-dimension $\frac {\mathrm{Output}} {\mathrm{Input}}$ operation (division) also got upgraded, to a root. (Note that after this upgrading, the mixed-dimension operation becomes a dimensionless one.)

In this definition, both the input- and output-dimension operations are upgraded: $f(x) - f(a)$ becomes $\frac {f(x)} {f(a)}$, and $x - a$ becomes $x / a$. The mixed-dimension division gets upgraded to a different sort of next-level operation, the logarithm. It might be tempting to think of this function as "more upgraded", since the input operations are upgraded too and logarithms are more "exotic" than roots, but keep in mind that there is also a tension between inputs and outputs, and in some sense upgrading both "cancels out" some of the upgrading.

Finally, let's note that we can rewrite the non-standard-base logarithm the usual way.

$$\epsilon(f) \stackrel{?}{=} a \mapsto \lim_{x \to a} \frac {\ln \frac {f(x)} {f(a)}} {\ln \frac x a} = a \mapsto \lim_{x \to a} \frac {\ln f(x) - \ln f(a)} {\ln x - \ln a}$$

I didn't do this before because it obscures the analogies I wanted to make, and also because the third expression above is not dimensionally compliant. But I include them now, since these are more "conventional" formulae, at what I deem an acceptable cost.

Is this formula valid for elasticity? Well, it does work for power-law functions:

$$a \mapsto \lim_{x \to a} \frac {\ln \frac {b \cdot x^c} {b \cdot a^c}} {\ln \frac x a}$$ $$a \mapsto \lim_{x \to a} \frac {c \cdot \ln \frac {x} {a}} {\ln \frac x a}$$ $$a \mapsto \lim_{x \to a} c$$

And even better, look how we were able to derive a constant without first solving the limit! That rescues our second property after all, if this formula is in fact a correct definition: the new limit is trivial for power-law functions, and thus we do not need to evaluate a limit to compute the elasticity of functions where it is everywhere constant.
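The same sketch as before, with the log-based formula for contrast:

```python
import math

def power_secant_elasticity(f, a, x):
    """log base (x/a) of (f(x)/f(a)); exact for power laws, no limit needed."""
    return math.log(f(x) / f(a)) / math.log(x / a)

f = lambda x: 3.0 * x ** 2  # power law with constant elasticity c = 2

print(power_secant_elasticity(f, 1.0, 2.0))  # 2.0
print(power_secant_elasticity(f, 1.0, 1.5))  # 2.0
print(power_secant_elasticity(f, 2.0, 2.1))  # 2.0 (up to floating point)
```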

Finally, we sketch a proof that it is. The proof is little more than our previous observation comparing $\bar{g}$ and $\bar{\bar{g}}$: $\ln$ and $x \mapsto x - 1$ are similar functions close to 1, and by L'Hôpital's rule, the original and new formulae (with the limits) are in fact equal.

For the record, the new formula is not entirely made up by me. The Wikipedia pages after all have the informal

$$\epsilon = \frac {d \ln y} {d \ln x}$$

It is not exactly clear what this means just looking at it alone, but as far as I can tell, what this corresponds to is the final rewrite we did above:

$$\epsilon(f) = a \mapsto \lim_{x \to a} \frac {\ln f(x) - \ln f(a)} {\ln x - \ln a}$$

One thing that is nice about this version is that it exactly corresponds to how log-log plots are interpreted. With both axes so scaled, power-law functions become lines, and elasticities become slopes. The peculiar "limit of power-law secants to power-law tangent" geometric interpretation we described before is likewise transformed into the regular "limit of secants to tangent". I like to think these definitions make it easier to understand how those plots work.

Footnotes

[^1]: https://math.stackexchange.com/q/3691073 made the cheeky suggestion to use the archaic Greek letter "qoppa" for this. I like it!

[^2]: For anyone not familiar, the Wikipedia page on tangents speaks of this limit somewhat.

[^3]: The properties we care about of these curves are not translation-invariant; on the contrary, the location of the origin is crucial for comparing ratios of inputs to ratios of outputs. It is therefore fair to point out that this is not "geometric" in the usual Euclidean sense.