There's something wrong with economics and accounting math. Not "wrong-answers" wrong, but bad pedagogy, and icky approximations.
It's easy to go after approximations: "No! I insist that you write this other formula, the one that is actually correct" ...and far larger. And indeed, if there were just a trade-off between precision and bloat, I would not bother writing this. Rather, and this is what really gets me excited, there is no such trade-off: we can be both more correct and just as (or more!) terse. We just need to use the right abstractions.
Here's a starting point. Consider a phrase like "2% growth"; you've seen it in newspapers, in high school math doing loan interest, and maybe in econ or accounting classes too. The meaning of the phrase is "went from 100% to 102%" (over some period), adding another 2% on top.
But adding percents is, at best, a very dubious endeavor. For example, what is 2% growth twice? Is it 4% growth? Nope! (It is actually 4.04% growth.) That neglects compounding, which is to say each percentage is in terms of the total up to the previous period. You can only add "synchronous" percentages, and that is not a very common thing to do either.
A quick note on method: the above paragraph is not formal, and is also beyond the realm of regular dimensional analysis. But we shouldn't dismiss it for lack of formality; the burden of proof should be on justifying correct formulae, not disputing incorrect ones ("guilty until proven innocent"). In type theory terms, we should be conservative and ban
$(+) \colon \mathrm{Percent} \to \mathrm{Percent} \to \mathrm{Percent}$, and instead replace it with some more complex operation which takes a "proof of synchronicity", whatever that looks like. I don't know what sort of proposition we'd need to prove, but the good thing is that, based on what follows, I think we can side-step needing to figure this operation out entirely.
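To make the aside a bit more concrete, here is a minimal sketch in code (my own illustration, nothing the post itself prescribes): represent growth as a factor whose only combining operation is multiplication, and simply don't define addition on it.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class GrowthFactor:
    """A growth factor such as 1.02 ("102% growth"), rather than a bare "2%"."""
    value: float

    def __mul__(self, other: "GrowthFactor") -> "GrowthFactor":
        # Composing growth over consecutive periods is multiplication,
        # which handles compounding automatically.
        return GrowthFactor(self.value * other.value)

    # Deliberately no __add__: adding two growth factors (or two percentages)
    # is exactly the operation we want the types to rule out.


two_percent_growth = GrowthFactor(1.02)
print(two_percent_growth * two_percent_growth)  # about 1.0404, i.e. 4.04% growth, not 4%
```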
There is an alternative to addition which avoids this problem, and handles compounding correctly, for free: multiplication. Instead of doing (wrong!):

$$100\% + 2\% + 2\% \;=\; 104\%$$

we can do (right!):

$$100\% \times 102\% \times 102\% \;=\; 104.04\%$$

or more simply,

$$1.02 \times 1.02 \;=\; 1.0404$$
And there we have a bit of a shibboleth: call it "102% growth", not "2% growth". It might be too late to change English, but one can dream...
Suppose we had an amount $A$ growing at the same rate over $n$ periods. In the conventional regime, where the steady-state is a rate of $r = 0$, the final amount is

$$A \, (1 + r)^n$$

But with the "corrected" variable, the growth factor $g = 1 + r$ (steady-state $g = 1$), that's simply

$$A \, g^n$$
Clearly this is terser. But this is just the same point as the previous section, now made using variables rather than concrete percentages.
Now, say we want to incorporate time.
Our periods will be different lengths, $\Delta t_1, \Delta t_2, \ldots, \Delta t_n$, but with per-unit-time growth factors $g_1, g_2, \ldots, g_n$ for those periods, the final amount (post growth) will thus be

$$A \; g_1^{\Delta t_1} \, g_2^{\Delta t_2} \cdots g_n^{\Delta t_n}$$
This is correct, and terse.
But I must note there is (again, if you read the previous aside) a problem with dimensional analysis.
thus
The final right-hand side "properly" has the proper
so that our formula for
or equivalently
Now everything is dimensionally correct, and still terse.
Conversely, rewriting any of these equations with
The terse approximation for
but that is hardly more terse!
In any event, we will see expressions that look like
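As a small numerical illustration of expressions of this shape (the numbers and names are made up by me):

```python
# Per-unit-time growth factors, and the (possibly uneven) lengths of the
# periods over which each one applies.
growth_factors = [1.02, 1.03, 1.01]   # e.g. 102%, 103%, 101% per year
period_lengths = [0.5, 2.0, 1.25]     # in years

total = 1.0
for g, dt in zip(growth_factors, period_lengths):
    total *= g ** dt  # growth factor g_i applied for a period of length Δt_i

print(total)  # overall growth factor across all the periods
```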
So far, we've used standard concepts and notations, even if the emphasis on absolute syntactic rigor with no approximations is a bit unidiomatic. Now, however, we will introduce concepts that are not harder, but more obscure.
Suppose we have a loan with a variable (compound) interest rate, say $r_i$ in period $i$. The formula for the balance is, informally,

$$B \;=\; B_0 \, (1 + r_1)(1 + r_2) \cdots (1 + r_n)$$
Similar to lumping together the 1 and 2% above as 102%, note that in the formula above the balance seems more "fundamental" than the total interest:
We can do better, formalizing it, with the notion of a product of a sequence:
The subscripts in the above formula don't add too much value in this case.
We can define cumulative product and sum operators on sequences, that is,
where
And then (with arithmetic on sequences defined point-wise), the balance formula above can be rewritten:
Just as we did above, this can be rewritten with more familiar summation with
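A quick sketch of the cumulative operators in code, assuming the balance takes the product-of-growth-factors shape discussed above (the names are mine); the second computation is the rewrite via the more familiar cumulative sum, done in log space:

```python
import math
from itertools import accumulate
from operator import mul

rates = [0.02, 0.03, 0.01]        # per-period interest rates r_i
growth = [1 + r for r in rates]   # "corrected" growth factors 1 + r_i

# Cumulative product: the running balance as a multiple of the starting balance.
cumprod = list(accumulate(growth, mul))

# The same thing via a cumulative sum of logarithms, exponentiated back.
via_logs = [math.exp(s) for s in accumulate(math.log(g) for g in growth)]

print(cumprod)   # approximately [1.02, 1.0506, 1.0611]
print(via_logs)  # matches, up to floating-point rounding
```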
If we have some sequence
We have a nice "fundamental theorem" where
and
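For concreteness, here is the standard finite-calculus statement I would expect this to be; the operator names are my own choice, not necessarily the author's. With $(\Sigma a)_n = \sum_{i=1}^{n} a_i$ and $(\Pi a)_n = \prod_{i=1}^{n} a_i$, define the forward difference and forward ratio

$$(\Delta a)_n = a_{n+1} - a_n, \qquad (\mathrm{Q}\,a)_n = \frac{a_{n+1}}{a_n}.$$

Then

$$\big(\Delta(\Sigma a)\big)_n = a_{n+1}, \qquad \big(\mathrm{Q}(\Pi a)\big)_n = a_{n+1},$$

and in the other direction $\big(\Sigma(\Delta a)\big)_n = a_{n+1} - a_1$ and $\big(\Pi(\mathrm{Q}\,a)\big)_n = a_{n+1} / a_1$, i.e. the sequence is recovered up to its initial term.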
We can now use the above to rewrite our formulae for growth with varying rates and varying period lengths much more succinctly.
The informal
The commuting of the cumulative product/sum with exponentiation (the product turns into a sum as it moves past the exponent) is on display with little else to distract from it.
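Concretely, with the per-unit-time growth factors $g_i$ and period lengths $\Delta t_i$ used earlier (my notation), the identity on display is presumably of this shape:

$$\prod_{i=1}^{n} g_i^{\Delta t_i} \;=\; \exp\!\left(\sum_{i=1}^{n} \Delta t_i \,\ln g_i\right)$$

Passing the exponential/logarithm through turns the cumulative product into a cumulative sum, and exponentiation by $\Delta t_i$ into multiplication by $\Delta t_i$.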
Now let's add one more complication to our modeling goal. We'll have a variable interest rate and variable time lengths, like before, but also variable loan payments (imagine the debtor gets behind and tries to catch up).
The recurrence relation is this:
which is to say the next balance, before interest, is the old balance less the payment. The final next balance then also includes the interest, which is calculated on that intermediate total.
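Here is a minimal sketch of that recurrence in code (the function and variable names are mine; following the prose above, the payment is applied first and interest then accrues on the intermediate total):

```python
def balances(initial, payments, rates):
    """Run the recurrence: each period, subtract the payment, then apply interest."""
    b = initial
    history = [b]
    for p, r in zip(payments, rates):
        b = (b - p) * (1 + r)  # pay down first, then compound at this period's rate
        history.append(b)
    return history


# A debtor who falls behind for a period and then tries to catch up.
print(balances(1000.0, payments=[100.0, 0.0, 250.0], rates=[0.02, 0.03, 0.01]))
```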
We can do "discrete differential equations", called as "difference equations", and their multiplicative counterpart, for this:
Admittedly, neither of these look very pretty.
We can write the first additive one:
where
There is not a closed-form equation for this in the style of what we've done so far, only the recurrence relation, which gets around the issue with subscripts. The fundamental problem is that the loan payments are inherently additive, while the interest calculation is inherently multiplicative, so we cannot express the sequence as a single cumulative sum or product for a closed-form solution.
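For what it's worth, unrolling the recurrence (under my indexing, where in period $k$ the payment $p_k$ is subtracted and then the rate $r_k$ is applied) gives a mixed expression that makes the obstruction visible: the payments sit under a sum while the interest sits under products.

$$B_n \;=\; B_0 \prod_{k=1}^{n} (1 + r_k) \;-\; \sum_{k=1}^{n} p_k \prod_{j=k}^{n} (1 + r_j)$$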
The world may (or may not) be discrete, but we use continuous math to explore intuitions and ideals for a reason. In physics there is explicitly continuum mechanics to make this argument. I'm surprised, given economics' infamous "physics envy", that the phrase "continuum economics" isn't out there; maybe that's because most or all theoretical neoclassical econ already is "continuum economics"?
For this topic, the continuous counterpart we might call continuously compounding growth, after the standard term "continuously compounding interest". Again, this is nothing obscure; a lot of people will learn it in high school or early college math whether or not they go on to study economics and accounting. But, in a typical bad-pedagogy mistake, too much emphasis is placed on how to "solve" the problem/equation/whatever, and not enough on what the problem to be solved is.
For loans, the interest rate is constant. Interesting things just stem from irregular (or arbitrary) payments. But in general, we also want to consider non-constant, time varying growth. Regular integration is for "continuous sums"; per the previous section, if the right way to deal with growth is not iterated addition but multiplication, then what we are looking for is "continuous products".
The math we want for this is the "Multiplicative calculus", which I wrote a bit about separately, based chiefly on this paper doi:10.1016/j.jmaa.2007.03.081.
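For reference, the multiplicative derivative as defined in that paper is

$$f^{*}(x) \;=\; \lim_{h \to 0} \left(\frac{f(x+h)}{f(x)}\right)^{1/h}$$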
This is very close to the logarithmic derivative, $(\ln f)'(x) = \frac{f'(x)}{f(x)}$, except that the logarithmic derivative skips the final exponentiation: for positive $f$, the multiplicative derivative is $f^{*}(x) = \exp\!\left(\frac{f'(x)}{f(x)}\right)$.
The Wikipedia article for elasticity, like most econ texts I could find from a quick glance, just has an informal definition made from infinitesimals:

$$\varepsilon \;=\; \frac{\mathrm{d}f / f}{\mathrm{d}x / x}$$

The $\mathrm{d}f$ and $\mathrm{d}x$ here are infinitesimals.
I won't lie, that is pretty. But it does more suspicious addition — despite looking like all division — in the form of the infinitesimals. This is because infinitesimals, as "funny zeros" — funny additive identities — are an additive concept. Or, if that is a bit too much woo-woo, more prosaically it is because they stem from subtraction in limits.
This other Wikipedia article has a formal limit definition:

$$\operatorname{E}f(x) \;=\; \frac{x}{f(x)} \, f'(x) \;=\; \frac{x}{f(x)} \lim_{a \to x} \frac{f(a) - f(x)}{a - x}$$
but underneath, the definition of the derivative contains the suspicious additions/subtractions of output-dimension values that we'd like to avoid.
However, that article also transforms the original definition into

$$\operatorname{E}f(x) \;=\; \lim_{a \to x} \frac{\dfrac{f(a)}{f(x)} - 1}{\dfrac{a}{x} - 1}$$
With this definition, we divide first, and then only subtract dimensionless values.
This successfully avoids any criticism of suspicious subtractions.
Also, the
However, there is another problem with this, and a solution more in the vein I am thinking of. Recall that the curves of constant elasticity are power-law functions of the form

$$f(x) \;=\; c \, x^{k}$$

(In particular, the constant elasticity is the exponent $k$.)
Recall the limit definition of a derivative:

$$f'(x) \;=\; \lim_{a \to x} \frac{f(a) - f(x)}{a - x}$$
The standard geometric interpretation of the derivative is we have a family of secants, with the two points of the secant growing ever closer together, and their limit is the one-point tangent.2
The expression inside the limit, called the difference quotient, gives the slopes of the family of secants (the choice of $a$ picks out which secant).
Less well-known is the idea that we can do a similar geometric construction for elasticities. Two points determine a power-law function just as they determine a line; we can thus speak of a "power-law secant", and in the limit as the two points approach, we have a "power law tangent". We'd want the inner expression to be the elasticities of the family of "power-law secants", and the overall expression should be the elasticity of the tangent.
The curves of constant slope are just lines, graphs of functions of the form $f(x) = m\,x + b$. For these, two things hold:

- geometric: the original line, every line in the family of secants, and the tangent line are all the same line.
- algebraic: the limit is trivial and we can just as well use the underlying expression for any value of $x$ and $a$ to calculate the constant slope.
For good practice, let's prove the second lemma:

$$\frac{f(a) - f(x)}{a - x} \;=\; \frac{(m\,a + b) - (m\,x + b)}{a - x} \;=\; \frac{m\,(a - x)}{a - x} \;=\; m$$
Likewise, we would expect the same thing about curves of constant elasticity:
- geometric: the original power law curve3, every curve in the family of "power law secants", and the "power law tangent" are all the same curve.
- algebraic: the limit is trivial and we can just as well use the underlying expression for any value of $x$ and $a$ to calculate the constant elasticity.
The geometric lemma is true for power law curves, but with the formulae given above, the algebraic lemma is false! Let's try substituting an arbitrary power-law function and simplifying:

$$\frac{\dfrac{f(a)}{f(x)} - 1}{\dfrac{a}{x} - 1} \;=\; \frac{\dfrac{c\,a^{k}}{c\,x^{k}} - 1}{\dfrac{a}{x} - 1} \;=\; \frac{\left(\dfrac{a}{x}\right)^{k} - 1}{\dfrac{a}{x} - 1}$$

We can't readily simplify it further, and if we plug in different values for $x$ and $a$ we get different results. (For $k = 2$, for instance, the expression reduces to $\frac{a}{x} + 1$, which plainly depends on the choice of points.) The limit is not trivial, so the algebraic lemma fails.
Is all hope lost? Is elasticity just a more broken concept than slope? Not so! The key is that we just need a different formula.
Try this:

$$\operatorname{E}f(x) \;=\; \lim_{a \to x} \log_{a/x}\!\left(\frac{f(a)}{f(x)}\right)$$
The intuition here is that we are comparing a small multiplicative perturbation in the input to the corresponding perturbation in the output, and instead of taking the quotient of these quantities (inverse binary multiplication), we are taking the logarithm (inverse binary exponentiation). We are asking: what power of the input's multiplicative perturbation yields the output's multiplicative perturbation?
It is very interesting to compare this definition, the multiplicative derivative from before, and the regular additive derivative.
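Side by side, writing all three in the same two-point style used above (the arrangement is mine):

$$f'(x) = \lim_{a \to x} \frac{f(a) - f(x)}{a - x}, \qquad f^{*}(x) = \lim_{a \to x} \left(\frac{f(a)}{f(x)}\right)^{\frac{1}{a - x}}, \qquad \operatorname{E}f(x) = \lim_{a \to x} \log_{a/x}\!\left(\frac{f(a)}{f(x)}\right)$$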
The multiplicative derivative "upgraded" the output-dimension operation (subtraction to division), but left the input-dimension one the same, additive as before (that is, the perturbation is still $x + h$, and the exponent is still a division by the additive difference $h$). In this definition, both the input- and output-dimension operations are upgraded: the perturbations are the multiplicative $a / x$ and $f(a) / f(x)$, and the comparison between them is a logarithm rather than a quotient.
Finally, let's note that we can rewrite the non-standard-base logarithm in the usual way:

$$\log_{a/x}\!\left(\frac{f(a)}{f(x)}\right) \;=\; \frac{\ln \dfrac{f(a)}{f(x)}}{\ln \dfrac{a}{x}} \;=\; \frac{\ln f(a) - \ln f(x)}{\ln a - \ln x}$$
I didn't do this before because it obscures the analogies I wanted to make, and also because the third expression above is not dimensionally compliant. But I include them now since these are more "conventional" formulae, and at this point I deem the cost smaller.
Is this formula valid for elasticity? Well, it does work for power-law functions:

$$\log_{a/x}\!\left(\frac{c\,a^{k}}{c\,x^{k}}\right) \;=\; \log_{a/x}\!\left(\left(\frac{a}{x}\right)^{k}\right) \;=\; k$$

And even better, look how we were able to derive a constant without first solving the limit! That bails out our second property after all, if this formula is in fact a correct definition: the new limit is trivial for power-law functions, and thus we do not need to use a limit to compute the elasticity of functions where the elasticity is everywhere constant.
Finally, we sketch a proof that it is.
The proof is little more than our previous observation comparing the rewritten formula with the conventional limit definitions.
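A sketch of how that argument can go, using the change-of-base rewrite from above (assuming $f$ is differentiable and both $f(x)$ and $x$ are positive):

$$\lim_{a \to x} \log_{a/x}\!\left(\frac{f(a)}{f(x)}\right) \;=\; \lim_{a \to x} \frac{\ln f(a) - \ln f(x)}{\ln a - \ln x} \;=\; \lim_{a \to x} \frac{\dfrac{\ln f(a) - \ln f(x)}{a - x}}{\dfrac{\ln a - \ln x}{a - x}} \;=\; \frac{(\ln \circ f)'(x)}{(\ln)'(x)} \;=\; \frac{f'(x)/f(x)}{1/x} \;=\; \frac{x\,f'(x)}{f(x)}$$

which is the conventional elasticity from before.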
For the record, the new formula is not entirely made up by me. The Wikipedia pages, after all, have the informal

$$\varepsilon \;=\; \frac{\mathrm{d} \ln f}{\mathrm{d} \ln x}$$

It is not exactly clear what this means just looking at it alone, but as far as I can tell what this corresponds to is the final rewrite we did above:

$$\operatorname{E}f(x) \;=\; \lim_{a \to x} \frac{\ln f(a) - \ln f(x)}{\ln a - \ln x}$$
One thing that is nice about this version is that it exactly corresponds to how log-log plots are interpreted. With both axes so scaled, power-law functions become lines, and elasticities become slopes. The peculiar "limit of power-law secants to power-law tangent" geometric interpretation we described before is likewise transformed into the regular "limit of secants to tangent". I like to think these definitions make it easier to understand how those plots work.
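As a tiny numerical check of that reading (my own example, not from the post): for a power-law function, finite-difference slopes in log-log coordinates come out as the exponent, i.e. the constant elasticity.

```python
import math

c, k = 5.0, 2.5


def f(x):
    return c * x ** k  # a power-law function; its elasticity should be k everywhere


xs = [1.0, 2.0, 4.0, 8.0]
for x0, x1 in zip(xs, xs[1:]):
    # Slope of the log-log secant between two sample points.
    slope = (math.log(f(x1)) - math.log(f(x0))) / (math.log(x1) - math.log(x0))
    print(slope)  # 2.5 each time (up to rounding)
```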
Footnotes
1. https://math.stackexchange.com/q/3691073 made the cheeky suggestion to use the archaic Greek letter "qoppa" for this. I like it!
2. For anyone not familiar, the Wikipedia page on tangents speaks of this limit somewhat.
3. The properties we care about of these curves are not translation-invariant; on the contrary, the location of the origin is crucial for comparing ratios of inputs to ratios of outputs. It is therefore fair to point out that this is not "geometric" in the usual Euclidean sense.