An intuition for logarithms
For a while now, I've been trying to improve my understanding of working with logarithms. Technically, I knew their purpose, but I wasn't comfortable past reciting their definition. This bugged me, especially since we use them so much in CS. This post sort of collects what I've learned, both for my future self and for anyone else who feels the same way. Hope you like it :D
Logs are linearized exponents, so all the laws of exponents become the log laws: log(xy) = log x + log y. Logs are also a way to rescale, which is what they were originally used for: doing large multiplications without electronic assistance. We log-scale graph axes/plots all the time today. They also describe the shape of some processes, hence they come up in compsci, e.g. in tree algorithms. If a process is logarithmic and the input grows by a factor of a million, the work only goes up by about 6 more units (in base 10), or about 20 (in base 2). In that sense the log is also roughly the number of zeros in a number.
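A quick Python check of those two claims (my own sketch, not from the post): the log of a product is the sum of the logs, and growing the input a million-fold only adds ~6 to a base-10 log.

```python
import math

x, y = 123.0, 456.0
# Product rule: log(xy) = log(x) + log(y)
print(math.log10(x * y), math.log10(x) + math.log10(y))  # both ~4.749

# Growing the input a million-fold only adds ~6 to the base-10 log
n = 1_000
print(math.log10(n))              # 3.0
print(math.log10(n * 1_000_000))  # 9.0, i.e. only 6 more "units of work"

# Base-10 log ~ number of zeros for round numbers
print(math.log10(1_000_000))      # 6.0
```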
There's a book called Calculus Reordered by Bressoud that describes Napier's logs and the natural log ln(x) too.
This article is great for getting a practitioner to understand how to calculate a logarithm. For non-math friends, I still reach for this[0] as a great way to get an intuition for the utility of logarithms.
[0] - https://betterexplained.com/articles/demystifying-the-natura...
Reminder that in TeX you want to escape functions so they aren't rendered in italics as a product of separate variables, i.e. "\log" not "log"
While I read this, I kind of assumed that this was written by a math-enthusiastic 9th grader and it felt wonderful! Write down your new-found math skills, that will make you remember them even better for the future and also trains writing skills!
Then I learned that he'd been out of school for quite a few years and is a programmer now. That makes me pretty sad. Is it really such a novel insight to learn about the relationship between exponentiation, roots, and logarithms that you need to write a blog post? Isn't this still basic school math that everybody doing anything even remotely related to math should know by heart? (Programming definitely counts..) How do you even do things like basic finance, interest rates, or inflation without grokking this?
It makes me worried not just about the future of society in general, but about our industry in particular. I'd feel uncomfortable working at a place where any of my coworkers would feel compelled to write a blog post about the relationship between exponentiation and logarithm that explains that exponentiation is not commutative.
Slide rules build on logarithms as well.
https://www.sliderulemuseum.com/SR_Class/OS-ISRM_SlideRuleSe...
I think it's fun to know how to compute log2(x) or log10(x) using a 4 function calculator.
Log 2 is a matter of dividing by two (and adding 1 to the log2 value) until X is in the range [1,2)... (or multiplying by 2 and subtracting 1 if X starts below 1). Then, to handle the fraction: square the number; if it's at least 2, divide it by two and append a 1 to the binary fraction, otherwise append a 0... and continue.
Log 10 is similar... divide by 10 (adding 1 to the log each time) until X is in the [1,10) range... then take X to the 10th power (square, square, times X, then square)... count the number of times you have to divide by 10 to get it back into the [1,10) range, append that count as the next digit after the decimal point, and repeat.
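Here's a minimal Python sketch of the base-2 procedure described above (my own illustration; it only uses multiply, divide, and comparisons, like a 4-function calculator would):

```python
import math

def log2_four_function(x, frac_bits=20):
    """Approximate log2(x) using only *, / and comparisons,
    following the halve-then-square procedure described above."""
    assert x > 0
    result = 0.0
    # Integer part: halve (or double) until x lands in [1, 2)
    while x >= 2:
        x /= 2
        result += 1
    while x < 1:
        x *= 2
        result -= 1
    # Fractional part: squaring doubles the remaining log,
    # so each squaring reveals the next binary digit
    place = 0.5
    for _ in range(frac_bits):
        x = x * x
        if x >= 2:
            x /= 2
            result += place
        place /= 2
    return result

print(log2_four_function(10))  # ~3.3219
print(math.log2(10))           # reference value
```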
Why not start with a simple graphical explanation on the 2D plane? E.g. feed this to the chatbot: "Logarithmic and exponential functions have distinct characteristics, and their behavior around the slope of a line defined as y=mx+b can be analyzed as follows..."
https://people.richland.edu/james/lecture/m116/logs/log2.gif
Graphical views of functions are more intuitive, especially if you're thinking continuously.
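If it helps, here's a small matplotlib sketch (my own, not from the linked page) that puts 2^x, log2(x), and y = x on the same axes; the log and the exponential are mirror images across y = x:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.1, 8, 400)

plt.plot(x, 2**x, label="y = 2^x")
plt.plot(x, np.log2(x), label="y = log2(x)")
plt.plot(x, x, "--", label="y = x")  # the mirror line

plt.ylim(-4, 8)
plt.legend()
plt.gca().set_aspect("equal")
plt.show()
```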
The easiest explanation I can conjure for logarithms is: the number of digits needed to write a number in a given base.
In computer science, I read log as levels. n log n is n times levels(n, 2), where levels is the number of times n can be repeatedly halved until the result is less than 2. levels is really the recursion depth when a problem is split into 2 smaller subproblems.
levels(n, 10) is approximately num_digits(n).
Knowing that levels(10, 2) is approximately 3.32, I know that representing 1000 (3 digits) needs 3 * 3.32, approximately 10 bits. 1 million (6 digits) needs 20 bits. 1 billion needs 30 bits. Bits are just binary digits.
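A tiny Python sketch of that levels idea (my own naming, just to make the intuition concrete):

```python
def levels(n, base=2):
    """How many times n can be divided by `base` before dropping below it.
    This is essentially floor(log_base(n))."""
    count = 0
    while n >= base:
        n /= base
        count += 1
    return count

print(levels(1_000_000, 10))  # 6  -> roughly the number of digits (minus one)
print(levels(1_000_000, 2))   # 19 -> needs 20 bits, since log2(1e6) ~ 19.93
print(levels(1_000, 2))       # 9  -> 1000 fits in 10 bits
```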
One of the most common uses of logarithms that I encounter is digits. In that case I usually just write dig instead of log.
Another important case where the logarithm shows up: multiplication on (0,inf) and addition on (-inf,inf) have the same structure, and the logarithm creates an isomorphism between them. Usually that makes multiplication easier/more familiar. Vi Hart has an entertaining video about it [0] where she explains how she likes to smell numbers, and visualizes the exponential function on a line with the values written over it, instead of as a 2D graph.
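A minimal sketch of that isomorphism (my own example): to multiply, hop over to the additive side with log, add, then hop back with exp — which is exactly how log tables and slide rules did large multiplications.

```python
import math

def multiply_via_logs(a, b):
    """Multiply two positive numbers by adding their logs,
    i.e. use log as the isomorphism ((0, inf), *) -> ((-inf, inf), +)."""
    return math.exp(math.log(a) + math.log(b))

print(multiply_via_logs(123.0, 456.0))  # ~56088.0
print(123.0 * 456.0)                    # 56088.0
```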
There are a lot of other cases where logarithm may show up. I usually treat them separately. For example if a logarithm shows up as the integral of 1/x, then chances are that I better think of it as the integral of 1/x, and I don't gain anything if I think of it as the number of digits. That doesn't happen often.
Linear values are "displacement metrics", exponents are "scale metrics".
It thus follows that power functions become "linear displacements" when log-log-transformed.
Logarithm should really be called "scale-metric displacement-ifier" or "linearizer-for-quantities-that-have-an-absolute-zero" or something along those lines
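A quick numerical illustration of that (my own sketch, with made-up coefficients): a power law y = a*x^k turns into the straight line log y = log a + k*log x on log-log axes, so a linear fit in log space recovers the exponent.

```python
import numpy as np

a, k = 3.0, 2.5                  # hypothetical power-law coefficients
x = np.linspace(1, 100, 50)
y = a * x**k                     # y = a * x^k

# Fit a straight line in log-log space: log y = k*log x + log a
slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
print(slope)              # ~2.5, the exponent k
print(np.exp(intercept))  # ~3.0, the prefactor a
```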
That square root estimation is new to me and freaky accurate. The error bounces and is generally decreasing. It's also always an underestimate. Here are the first six local maxima of errors.
2: 0.0809
6: 0.0495
12: 0.0355
20: 0.0277
30: 0.0227
42: 0.0192
And there's an obvious pattern there. Interesting stuff.
A potential heuristic for the optimal number of management layers in any company is the base-10 log of the total number of employees.
Assuming 1 manager oversees 10 people (then subtract 1 for the lowest level, the individual engineers).
Google 190,000 => ~4 levels of managers (5 - 1 = 4)
Apple 164,000 => ~4
etc.
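A one-liner version of that heuristic (my own sketch of the arithmetic above; management_layers is a made-up name):

```python
import math

def management_layers(employees, span=10):
    """Heuristic from above: layers ~ log_span(employees) - 1,
    assuming each manager oversees `span` people and the bottom layer are ICs."""
    return round(math.log(employees, span)) - 1

print(management_layers(190_000))  # Google: ~4
print(management_layers(164_000))  # Apple: ~4
```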
The most underrated insight about logarithms is that for almost all practical applications, log2(n) is smaller than 32.
The difference between an O(1) and an O(log n) algorithm is usually not that big!
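For scale (my own quick check): even for inputs in the billions, log2 stays around 30.

```python
import math

print(math.log2(1_000_000))      # ~19.9  (a million items)
print(math.log2(1_000_000_000))  # ~29.9  (a billion items)
print(math.log2(2**32))          # 32.0   (4 billion, the 32-bit limit)
```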
The major problem with logarithms is that there's no verb for "to take the log of" analogous to exponentiate
Logarize?