I am pretty sure you have heard of it before: it is one of the most famous numbers in the world. π is a ratio we come across even in elementary mathematics. You probably already know that it has an unending decimal expansion (explored in a previous post on how to calculate pi). In this post we will talk about the beauty of pi in detail (see last year's pi day post about surprising places pi appears), discuss what normal numbers are, and show (not prove) that π most probably is a normal number.

I have used Python to analyse more than 100 million digits of pi, to show the randomness of its expansion and to probe how "normal" the number is.

We will also talk about why I believe you, me, and everything in the universe can be found in the unending digits of π.

Defined as the ratio of the circumference of a circle to its diameter, pi, or in symbol form, *π*, seems a simple enough concept, although the number and its value were already being estimated around 2000 BCE by early Babylonians and Indian philosophers. ^{1}

The extent of the decimal expansion of this irrational number has grown exponentially as machines have advanced, and many computer scientists have taken part in a race to calculate the maximum number of digits of pi. The record is currently held by Timothy Mullican (USA), who calculated 50,000,000,000,000 digits using old server equipment and a piece of software called y-cruncher. ^{2}

Here are the first 100 decimal digits of pi in base 10:

3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679

In the expansion above, count the number of times the digits 1, 2, 3 and 4 appear: 1 occurs 8 times, 2 appears 12 times, 3 gets repeated 11 times, and 4 appears 10 times. On average each digit appears close to 10 times in 100 digits, giving a relative frequency of 0.1. In fact, the table below provides the count for all 10 digits, and you can see the distribution is more or less flat. That means if I were to pick a digit at random from the 100 digits written above, the chance of it being a 0 is about the same as the chance of it being any other digit. Such digits are said to be uniformly distributed.

```
+--------+----------------------------+------------+
| Number | Number of times it appears | Percentage |
+--------+----------------------------+------------+
| ones | 8 | 8.0 |
| twos | 12 | 12.0 |
| threes | 11 | 11.0 |
| fours | 10 | 10.0 |
| fives | 8 | 8.0 |
| sixes | 9 | 9.0 |
| sevens | 8 | 8.0 |
| eights | 12 | 12.0 |
| nines | 14 | 14.0 |
| zeroes | 8 | 8.0 |
+--------+----------------------------+------------+
Number of Decimal Places Considered: 100
```

Most of you can already see where this is leading. Let us now introduce the concept of normal numbers and see how pi fits in.

For a non-mathematician, the normality (or abnormality) of a number may sound absurd. It did to me when I first read about it five years ago, when I started writing this post and began my spiraling obsession with this number. Since then, the world record for the expansion of pi has been broken multiple times, and my thirst for curiosity has not yet been quenched. Let us take a step back from the decimal expansion of pi and discuss what a normal number actually is.

Wolfram MathWorld defines a normal number as:

A number is said to be simply normal to base $b$ if its base-$b$ expansion has each digit appearing with average frequency tending to $b^{-1}$.

^{3}

In simple words, if the number is in base 10 (digits 0 to 9), each digit will appear with frequency $\frac{1}{10}$, or 0.1.

In fact, for a number normal to base $b$, the frequency of any particular string that is $k$ digits long is $b^{-k}$. So strings like '12', '01' or '99' each occur with frequency $\frac{1}{10^2} = 0.01$ in the decimal (base 10) system, and strings like '123', '078' or '569' with frequency $\frac{1}{10^3} = 0.001$.

It is an unsolved problem in mathematics whether irrational numbers like $\pi$, $e$, $\sqrt{2}$, or $\sqrt{s}$ for non-square $s$ are normal. But we can still check, with whatever data we have, whether the expansion looks normal.

A few years ago I started looking at ways to calculate pi to a large number of digits. Unfortunately, I soon ran into an issue, and the issue (as always) turned out to be money, or the lack thereof. Perhaps this shouldn't be a surprise: calculating pi to, say, 100 million or a billion digits requires heavy, expensive hardware, although a few tens of thousands of digits can be calculated on any usual machine. ^{4}

The internet, as usual, came to the rescue. You can download the expansion of pi from multiple online resources. If you really want to go crazy, there is a 22-trillion-digit dataset available online ^{5}. I used MIT's database ^{6} for a billion digits of pi.

I wrote a very simple piece of code to count the digits in the data file. All the code is in the GitHub repository in the references below ^{7}. (The repository has similar analysis code for $\sqrt{2}$ and $e$ to check the normality of those numbers.)
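The counting itself can be sketched in a few lines. This is a minimal version run on the first 100 digits inline, standing in for the downloaded data file:

```python
from collections import Counter

# First 100 decimal digits of pi, used here as a stand-in for the
# billion-digit data file.
PI_DIGITS = (
    "1415926535897932384626433832795028841971693993751058209749"
    "445923078164062862089986280348253421170679"
)

def digit_frequencies(digits: str) -> dict:
    """Relative frequency of each digit '0'..'9' in the string."""
    counts = Counter(digits)
    total = len(digits)
    return {d: counts.get(d, 0) / total for d in "0123456789"}

freqs = digit_frequencies(PI_DIGITS)
for d in "0123456789":
    print(d, freqs[d])
```

Running the same counter over the full file is only a matter of reading the digits in and passing them to `digit_frequencies`.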

The following table shows the result of the code processing a billion digits of pi. It took my not-so-decent laptop around 727 seconds (more than 12 minutes) to process all billion digits.

```
+-------+------------+
| Digit | # occurred |
+-------+------------+
|   1   |   99997334 |
|   2   |  100002410 |
|   3   |   99986911 |
|   4   |  100011958 |
|   5   |   99998885 |
|   6   |  100010387 |
|   7   |   99996061 |
|   8   |  100001839 |
|   9   |  100000273 |
|   0   |   99993942 |
+-------+------------+
```

The following chart provides results for 10 million, 100 million and 1 billion digits of pi. As we increase the number of digits, the percentages flatten out. We can do the same analysis for two-digit strings (like '01', '12', etc.) and we find the same uniform distribution, now with the frequency staying around 0.01 (1%).

Let us take it a step further. We know that if we pick a digit from a uniformly distributed set, it is equally likely to be any digit. Can we then say that the sequence is completely random? Yes; in fact, people have shown that the digits of pi (with scrambling) can be used as random number generators that compare well with those we already use in our machines. ^{8}

As we have seen, in a normal number any string of $k$ digits occurs with frequency $1/b^{k}$. So, by definition, we can find any arbitrary string of digits in the decimal expansion of pi (assuming we compute enough digits). To be clear, if the frequency of occurrence is $1/x$, then we need more than $x$ digits to expect the string at least once in the expansion. (The precise mathematics of finding strings in the expansion goes beyond this and involves the binomial distribution.) Take some time and let that sink in; it really blew my mind when I first realized it. For example, we can find the first few digits of pi in the expansion of pi itself.

I did exactly this in another program, which searches for a string of digits (14159265 to be exact, from $\pi \approx 3.14159265$) ^{9}. As expected, we find the digit '1' ($\frac{1}{10^1} \times 10^7$) almost a million times. A string like '1415' is found ~1000 times, as expected from $\frac{1}{10^4} \times 10^7 = 10^3 = 1000$. An 8-digit string should be expected less than once in an expansion of $10^7$ digits, yet here one such string exists. Can you think of the reason why? It is quite an obvious one. (Hint: the same reason explains why a 7-digit string occurs twice rather than once.)

```
(base) admin@laptop:~/Desktop/pi/Python$ python pi_normal1.py
No. of Digits Considered: 10000000
+----------+--------------------------+
| String | No. of times it appears |
+----------+--------------------------+
| 1 | 999333 |
| 14 | 100232 |
| 141 | 10158 |
| 1415 | 1001 |
| 14159 | 98 |
| 141592 | 9 |
| 1415926 | 2 |
| 14159265 | 1 |
+----------+--------------------------+
1415926 found at position : 1457054
--- 18.550782918930054 seconds ---
```

In 1941, Jorge Luis Borges wrote a short story called The Library of Babel ^{10}; the following quote is extracted from the story itself.

Everything: the minutely detailed history of the future, the archangels’ autobiographies, the faithful catalogues of the Library, thousands and thousands of false catalogues, the demonstration of the fallacy of those catalogues, the demonstration of the fallacy of the true catalogue, the Gnostic gospel of Basilides, the commentary on that gospel, the commentary on the commentary on that gospel, the true story of your death, the translation of every book in all languages, the interpolations of every book in all books.

The Library of Babel, by Jorge Luis Borges (1941)

In The Library of Babel, Borges imagines a library made up of hexagonal rooms, each with walls covered in shelves, each shelf holding volumes and volumes of books, and each book containing 410 pages completely filled with text. The Library of Babel contains all possible combinations of the 25 characters of the language the story is told in, including the comma, the full stop and the space. In the library, any string of letters, any collection of words that was ever written or will ever be written, hides somewhere in the seemingly infinite books.

The philosophical themes underlying the library are immense. What is the purpose of the writer as a creator, if what he writes has already been written? Is any thought original? From questions of free will to the nature of reality, the Library of Babel is a gold mine for human thought. You can explore an online Library of Babel using the link in the references and have your mind blown by the absurdity of it all. ^{11}

But you might be wondering why we are talking about it in a post about pi.

I would like to call pi a kind of Library of Babel itself; only the set of characters has changed. Assuming we could transform text into base-10 numbers, and ignoring commas, spaces and full stops, we could potentially find every possible string ever written inside the digits of pi.

The Library of Babel has also been compared with the human genome and protein sequences. Much like the books of the library, the protein (peptide) sequences in living things form seemingly arbitrary chains.

Let us take some time to discuss some biology. Living beings (be they humans, bacteria or plants) are made up of cells. Each cell has a nucleus, and inside the nucleus lies the genetic material of the cell: heavily condensed, long-chain polymers of nucleotides. Two such polymer strands are bound together in the famous double-helical structure known as DNA.

There are four kinds of nucleotide bases (which make up the DNA polymer): Adenine (A), Thymine (T), Guanine (G) and Cytosine (C). The other strand of the DNA carries the complementary bases: A pairs with T, and G pairs with C.

This string of bases A, T, G and C forms the genetic information of a living being; it provides all the information the cell needs to function. Let us consider protein formation as an example. Each protein is coded by a sequence of bases, read in groups of three known as codons. Since a codon consists of three bases, there are (4 × 4 × 4) 64 possible codons, which code for the 20 known amino acids (the building blocks of proteins).

Say the body is in need of insulin. The cell signals the DNA and a complicated yet enthralling process begins: the DNA is unwound and transcribed into single-stranded mRNA (messenger RNA), the mRNA leaves the nucleus, and a ribosome (a molecular machine that moves along the strand) reads it and produces the chain of amino acids corresponding to the code, a step known as translation. The actual process is a bit more complicated, but that is beyond the scope of this article.

Now let's get back to our favourite number $\pi$ and its expansion, taking a particular gene in our body as an example. Human beings require insulin to function, and insulin production is controlled by a gene (gene_id 3630 in the NCBI database). The insulin gene consists of 465 bases and produces a peptide of length 110. Let us see how we can convert the genetic code into digits of pi.

We can let each base be a digit in the base-4 numeric system, say A=0, T=1, G=2 and C=3. Each genetic sequence can then be read as a string of base-4 digits and converted to base 10 in the usual way. A convenient approach is to convert each codon directly into a decimal number using base-4 to base-10 conversion. The following code block lists each possible codon (even those that do not code for amino acids) and its corresponding value in the decimal system.

```
change={
'genome':['aaa','aat','aag','aac',
'ata','att','atg','atc',
'aga','agt','agg','agc',
'aca','act','acg','acc',
'taa','tat','tag','tac',
'tta','ttt','ttg','ttc',
'tga','tgt','tgg','tgc',
'tca','tct','tcg','tcc',
'gaa','gat','gag','gac',
'gta','gtt','gtg','gtc',
'gga','ggt','ggg','ggc',
'gca','gct','gcg','gcc',
'caa','cat','cag','cac',
'cta','ctt','ctg','ctc',
'cga','cgt','cgg','cgc',
'cca','cct','ccg','ccc'],
'converted':
[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63],
}
```
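Instead of a lookup table, the same conversion is just base-4 positional arithmetic. A sketch (the function names are mine):

```python
BASE4 = {"a": 0, "t": 1, "g": 2, "c": 3}  # the mapping A=0, T=1, G=2, C=3 from above

def codon_to_decimal(codon: str) -> int:
    """Read a three-letter codon as a three-digit base-4 number."""
    d0, d1, d2 = (BASE4[ch] for ch in codon.lower())
    return d0 * 16 + d1 * 4 + d2

def gene_to_digits(bases: str) -> str:
    """Convert a base sequence (length a multiple of 3) codon by codon."""
    codons = (bases[i:i + 3] for i in range(0, len(bases), 3))
    return "".join(str(codon_to_decimal(c)) for c in codons)

print(gene_to_digits("atggaataa"))  # start codon + Glu + stop codon -> "63216"
```

Note that atg = 0·16 + 1·4 + 2 = 6, gaa = 2·16 = 32, and taa = 1·16 = 16, so the concatenation gives 63216.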

Let us now consider the insulin gene. Starting at base #441, the code begins with the start codon and ends with the stop codon.

Bases: atggaataa (start codon, Glutamic Acid, stop codon)

Base-10 conversion: 63216 (this string occurs 1982 times in the first 200M digits of pi) ^{12}

If we append two more successive codons we get 632161161; this string occurs once in the first 200M digits of pi.

We know, with analytic proof, that the decimal expansion of pi is non-terminating and non-recurring (pi is irrational). Assuming pi is normal, we can keep increasing the size of the gene and keep calculating digits of pi, and we will eventually find the genetic code inside the expansion. The complete insulin gene would translate to a 296-digit-long decimal number, and to find an instance of it we would need to calculate on the order of $10^{296}$ digits of pi.

I know, I know, it sounds absurd. But it is hypothetically correct.

Many might consider it a stretch, but people have so far explored trillions of digits of pi and the normality is here to stay ^{13}. If some brilliant mathematician one day proves the normality of pi analytically, we will be able to say that any string of digits lies somewhere in its expansion. So imagine that instead of filling the volumes with words and letters, we built a library whose shelves hold the decimal expansion of $\pi$. While the Library of Babel is a finite library (a thought that leaves me, as a writer, breathless: there is a limit to human thought), the pi library would be infinite. And somewhere in it, a wandering librarian might find a bookshelf that consists of you: your essence of being, every base pair of your DNA, written out on that shelf. And so would mine be. An amazing thought to end with.

Notes:

- (Indian sources by about 150 BC treat $\pi$ as $\sqrt{10} \approx 3.1622$; wiki) ↩
- (Most accurate value of $\pi$, Guinness World Records (link)) ↩
- (Wolfram MathWorld, Normal Number (link)) ↩
- (More advanced laptops can even calculate a million digits of pi using clever techniques) ↩
- (He also provides smaller datasets on his blog) ↩
- (MIT's database, https://stuff.mit.edu/afs/sipb/contrib/pi/, with a billion digits of pi) ↩
- (GitHub repository) ↩
- (Using π digits to generate random numbers: a visual and statistical analysis (arXiv link)) ↩
- (pi_normal1.py in the GitHub repository) ↩
- (Original text translated into English) ↩
- (Libraryofbabel.info, created by Jonathan Basile, Emory Comp Lit Ph.D. candidate) ↩
- (Search your own string in pi) ↩
- (Digit statistics of the first $\pi^e$ trillion decimal digits of π (arXiv link)) ↩

This post is meant to be a supplement to what's coming in a few days. I made this blog to share my explorations and curiosities, and here's me doing exactly that. So let's see what I learned today.

With the ongoing COVID-19 pandemic, the coronavirus disease outbreak has spread over the whole world, with a daily rise in the number of reported cases. An important part of dealing with the disease is understanding how the world interacts as a social network, and network scientists are working day and night to understand this epidemic and make predictions. (There is a really wonderful New York Times article titled "Mapping the Social Network of Coronavirus" that I would recommend reading; link at the end.)

So I thought it would be cool to take some time out to understand the dynamics of how diseases spread and the mathematics involved.

To study the somewhat complicated equations governing the spread of a disease, it is important to understand population dynamics first.

Population dynamics deals with the evolution of the population of a species under certain parameters (e.g. growth rate, resource limits). The simplest model is known as the Malthusian model of growth.

We will arrive at the Malthusian model by simple arguments. To talk about population growth, we must treat the number of individuals of a species, $N$, as a quantity changing with time, i.e. we must talk about $N(t)$ as a function of time $t$.

The change of a quantity with time is given by its derivative with respect to time, so we must study

$$\frac{dN(t)}{dt}$$

Now consider two colonies A and B, where A has 50 individuals and B has 500. Assume that on average individuals reproduce once every few days and that both colonies have the same rate of reproduction. If we count again after a few months, which colony do you expect to have grown more?

You expect colony B to do much better, right? Intuition tells us that the change in population must be proportional to the population itself, i.e. the growth between times $t$ and $t+\delta t$ is proportional to the population at time $t$. Thus we can say:

$$\begin{aligned} \frac{dN(t)}{dt} &\propto N(t) \\ \frac{dN(t)}{dt} &= rN \end{aligned}$$

Here $r$ is the proportionality constant and tells us how strong the rate of increase (or decrease) of the population is. It is known as the *intrinsic rate of increase* because it depends intrinsically on the system. So what known rates can you think of that intrinsically define a population? The most obvious answers are the birth and death rates. In fact, we can define our intrinsic rate to be the difference between the birth and death rates:

$$r = b - d, \quad \text{where } b = \text{birth rate and } d = \text{death rate}$$

This differential equation is not really difficult to solve; the solution is found as follows:

$$\begin{aligned} \frac{dN(t)}{dt} &= r\,N(t) \\ \frac{dN(t)}{N(t)} &= r\,dt \\ \ln N(t) &= r t + c \implies N(t) = N_0\, e^{r t} \end{aligned}$$

In the solution above, $N_0$ is the population at time $t=0$, and the population changes exponentially. Depending on the sign of $r$, the population either increases exponentially or decays to zero. The graphical solution is really interesting:

The red curve shows an increasing population with $r=0.5$, while the blue curve shows the same solution with $r=-1$; both start from the same initial population.
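The two curves can be reproduced numerically; a minimal sketch, using the rates from the figure and a hypothetical initial population of 50:

```python
import math

# Analytic solution N(t) = N0 * exp(r t) of dN/dt = r N, evaluated for
# r = 0.5 (growth) and r = -1 (decay); N0 = 50 is a hypothetical value.
def malthus(N0: float, r: float, t: float) -> float:
    return N0 * math.exp(r * t)

for r in (0.5, -1.0):
    print(r, [round(malthus(50.0, r, t), 3) for t in (0, 1, 2, 5)])
```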

Calculations of this kind were the basis of Thomas Malthus's *An Essay on the Principle of Population* (1798), in which he predicted the collapse of the human species into ever-increasing misery and chaos if we do not abstain from procreation. This model is therefore known as the Malthusian model.

Now for a logical leap that Verhulst took when he read Malthus's essay: he realized there must be an obvious limit to the explosion of a population with positive growth rate. Malthus himself argued that food can only be produced at a finite rate, and if the population increases dramatically there will be a catastrophe, which we now call a Malthusian catastrophe (link). Verhulst introduced a term known as the carrying capacity, a limit to the population.

We need the right-hand side of the differential equation to vanish when $N$ equals the carrying capacity $C$. Multiplying by a bare factor $(N-C)$ would make the growth quadratic in $N$, whereas for small populations our model must mimic the primitive model. Instead we multiply by $\left(1 - \frac{N}{C}\right)$, which vanishes as $N$ approaches $C$ and tends to 1 for small $N$; thus the differential equation is given by:

$$\frac{dN}{dt} =rN \cdot \left( 1- {\frac{N}{C}}\right) $$

Here $C$ is the carrying capacity of the population. This equation is famously known as the logistic equation and appears in various fields: physics, medicine, computing, networks, and ecology.

The solution can again be found as in the previous case, albeit with a bit more calculation; any standard ecology book will have it and you can look it up.

The solution is given by:

$$N(t)={\frac {C}{1+\left({\frac {C-N_{0}}{N_{0}}}\right)e^{-rt}}}$$
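We can evaluate this closed form numerically for two very different starting populations and watch both approach $C$; the parameter values here are hypothetical choices for illustration:

```python
import math

# Closed-form logistic solution N(t) = C / (1 + ((C - N0)/N0) e^{-rt});
# C, r and the two starting populations are hypothetical values.
def logistic(t: float, N0: float, r: float, C: float) -> float:
    return C / (1 + ((C - N0) / N0) * math.exp(-r * t))

C, r = 1000.0, 0.5
for N0 in (10.0, 900.0):
    print(N0, [round(logistic(t, N0, r, C), 1) for t in (0, 10, 40)])
```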

The solution is a sigmoid curve:

$C$ is the highest value the population can reach given infinite time (or come close to reaching in finite time). It is important to stress that the carrying capacity is reached asymptotically, independently of the initial value. So, does this mean the world population will always reach saturation?

No. We must understand that although these models approximate real-world populations of bacteria and other primitive organisms really well, human ecology has many more variables and many more terms that must be taken into account.

For example, as the population approaches the carrying capacity, the limited resources on Earth come into play; there may come a time when we run out of the resources needed to survive, and the sigmoid curve will dwindle.

We can add terms for inter- and intra-specific competition, or dynamically study different species hunting and boosting each other.

In the next post we will discuss a model for disease spreading and hopefully explore some real-life data involving COVID-19. Until then, stay curious!

NYT ARTICLE: https://www.nytimes.com/2020/03/13/science/coronavirus-social-networks-data.html

Now, pi has been a fascination for me for a long, long time, ever since I started studying geometry in school, and the fascination has only grown exponentially. You can check out my last post on the blog, my first love letter to pi, in which I talk about various ways to calculate it, including the Monte Carlo method.

I have been working with nonlinear dynamical equations, solving a set of equations numerically with an RK4 solver. (For those of you who don't know, RK4 is a Runge-Kutta algorithm for solving differential equations.) I had a complicated enough system of multiple equations, and as the program started giving me output I saw numbers flashing by on my screen: 3.25, 3.43, 2.91, … and there I was, sitting in the library, giggling like a three-year-old as I witnessed $\pi$ coming out as the time it takes to reach the attractor (the final state).

But obviously, once I changed the step size of my ODE solver, the final time changed. (Halving the step size took twice as much time, thus tending to $2\pi$.) This post, though, is not about that simulation; it is about exploring other examples where pi creeps in, the most astonishing being the Mandelbrot set.

The Gaussian integral is the integral of the Gaussian function $e^{-x^2}$:

$$\int _{-\infty }^{\infty }e^{-x^{2}}\,dx={\sqrt {\pi }}$$

We could write a whole post about the applications and importance of the Gaussian function and the normal distribution; they range from statistics to the natural sciences. It is interesting how the ratio of the circumference of a circle to its diameter appears in the integral of a distribution function that at first glance has no connection to a circle. It is not difficult, though, to show how spherical symmetry and the circle enter this integral; here is the derivation:


$$ \left(\int _{-\infty }^{\infty }e^{-x^{2}}\,dx\right)^{2}=\int _{-\infty }^{\infty }e^{-x^{2}}\,dx\int _{-\infty }^{\infty }e^{-y^{2}}\,dy=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{-(x^{2}+y^{2})}\,dx\,dy$$

In polar coordinates we have $r^2 = x^2 + y^2$ and $dx\,dy = r\,dr\,d\theta$, so the integral can be solved as:

$$\begin{aligned} \left(\int _{-\infty }^{\infty }e^{-x^{2}}\,dx\right)^{2}&=\int _{0}^{2\pi }\int _{0}^{\infty }e^{-r^{2}}r\,dr\,d\theta \\&=2\pi \int _{0}^{\infty }re^{-r^{2}}\,dr\\&=2\pi \int _{-\infty }^{0}{\tfrac {1}{2}}e^{s}\,ds&&s=-r^{2}\\&=\pi \int _{-\infty }^{0}e^{s}\,ds\\&=\pi (e^{0}-e^{-\infty })\\&=\pi ,\end{aligned}$$

Thus, taking the square root of both sides, we get $\sqrt{\pi}$. There's always a circle around.

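We can also sanity-check the integral numerically. A sketch using the trapezoid rule; the limits ±10 are an assumption, justified because $e^{-x^2}$ is negligible (about $10^{-44}$) beyond them:

```python
import math

# Trapezoid-rule approximation of the Gaussian integral over [-10, 10].
def gaussian_integral(a: float = -10.0, b: float = 10.0, n: int = 100_000) -> float:
    h = (b - a) / n
    total = 0.5 * (math.exp(-a * a) + math.exp(-b * b))  # endpoint terms
    for i in range(1, n):
        x = a + i * h
        total += math.exp(-x * x)
    return total * h

print(gaussian_integral(), "vs", math.sqrt(math.pi))
```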

You get π in *any* oscillation. When a mass bobs on a spring, or a pendulum swings back and forth, the position behaves just like one coordinate of a particle going around a circle in phase space.

The catch starts with a physicist's favorite assumption, as shown in the Wired article titled "Everything—Yes, Everything—Is a Harmonic Oscillator".

If we boldly go there, we can assume any dynamical system to be locally harmonic around a minimum, and thus to have periodic, circular motion in phase space, and therefore to have pi appearing in the expression for its time period.
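As a sketch of that claim, we can integrate the simple harmonic oscillator $x'' = -x$ numerically and watch π fall out as the time between successive zero crossings (half the period $2\pi$). The velocity-Verlet integrator and step size here are my own choices:

```python
import math

# Integrate x'' = -x (unit angular frequency) with velocity-Verlet steps
# and measure the time between two successive sign changes of x.
# Zero crossings of cos(t) are pi apart.
def time_between_crossings(dt: float = 1e-4) -> float:
    x, v, t = 1.0, 0.0, 0.0
    crossings = []
    while len(crossings) < 2:
        v -= 0.5 * dt * x                   # half kick
        x_old = x
        x += dt * v                         # drift
        v -= 0.5 * dt * x                   # half kick
        t += dt
        if x_old * x <= 0 and x_old != x:   # sign change of x
            crossings.append(t)
    return crossings[1] - crossings[0]

print(time_between_crossings())  # close to pi = 3.14159...
```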

This is a wild one: it can be shown that for a tridiagonal matrix of the form

$$M=m^2\left( \begin{array}{cccccc} -2 & 1 & 0 & \cdots & \cdots & 0 \\ 1 & -2 & 1 & \cdots & \cdots & 0 \\ 0 & 1 & -2 & 1 & \cdots & 0 \\ \vdots & \cdots & \vdots & \vdots & \cdots & \vdots \\ 0 & \cdots & \cdots & 1 & -2 & 1 \\ 0 & \cdots & \cdots & 0 & 1 & -2 \\ \end{array} \right).$$

the square root of the absolute value of the largest eigenvalue of the $n \times n$ matrix, with $n = m$, tends to the value of pi as $m$ tends to infinity (source at the end). The eigenvalues are given by:

$$\lambda_k = -2m^2 + 2m^2\cos\left(\frac{k\pi}{n+1}\right), \qquad k = 1, \ldots, n$$

If you take the large-$m$ approximation of the largest ($k=1$) eigenvalue, $|\lambda_1| = 2m^2\left(1-\cos\frac{\pi}{m+1}\right) \approx \frac{\pi^2 m^2}{(m+1)^2}$, you can show analytically that its square root tends to $\pi$.

For example, a quick numerical calculation in R for $m=100$ gives a largest eigenvalue of $-9.674354$, yielding an approximate value of pi of $3.110362$.

```
+------+----------------------------+
|  m   | sqrt(|largest eigenvalue|) |
+------+----------------------------+
|   10 | 2.846297                   |
|  100 | 3.110362                   |
| 1000 | 3.1384                     |
| 5000 | 3.140964                   |
+------+----------------------------+
```

```
n <- 100
m <- diag(-2 * n * n, n)
m[abs(row(m) - col(m)) == 1] <- n * n
ev <- eigen(m)$values
p2 <- max(ev)
p <- sqrt(abs(p2))
```

The Riemann zeta function $\zeta(s)$ is a function of a complex variable $s$:

$$\zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}$$

With applications from statistical physics to number theory, the Riemann zeta function holds some incredibly astonishing properties (think of the $s=-1$ case). If we consider $s=2$, the series converges to:

$$\zeta (2)=1+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+\cdots ={\frac {\pi ^{2}}{6}}\approx 1.64493406684822643647; $$

This is the famous Basel problem, solved by Euler himself in 1734. There is a very beautiful geometric proof explaining where and how the circle comes in and donates the pi in the expression.
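We can watch the convergence numerically with a few partial sums (a quick sketch):

```python
import math

# Partial sums of sum 1/n^2; the tail after n terms is of order 1/n,
# so convergence to pi^2/6 is slow but visible.
def basel_partial_sum(n: int) -> float:
    return sum(1.0 / (k * k) for k in range(1, n + 1))

for n in (10, 1000, 100_000):
    print(n, basel_partial_sum(n))
print("pi^2/6 =", math.pi ** 2 / 6)
```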

This is the most interesting bit I could find. It's astonishingly brilliant!

For those who don't know, the Mandelbrot set arises from the complex quadratic map $z_{n+1} = z_{n}^{2} + c$ (a close relative of the logistic equation). We start from $z_0 = 0$, iterate the expression, and map the behaviour for various values of the complex number $c$.

Thus, a complex number *c* is a member of the Mandelbrot set if, when starting with *z*_{0} = 0 and applying the iteration repeatedly, the absolute value of *z*_{n} remains bounded for all *n*>0.

For example, for *c*=1, the sequence is 0, 1, 2, 5, 26, …, which tends to infinity, so 1 is not an element of the Mandelbrot set. On the other hand, for *c*=−1, the sequence is 0, −1, 0, −1, 0, …, which is bounded, so −1 does belong to the set. (Source:http://math.bu.edu/DYSYS/explorer/def.html)
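The membership test just described can be sketched in a few lines (the iteration cap of 100 is an arbitrary choice of mine):

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if |z_n| stays within 2 for max_iter iterations of z -> z^2 + c."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(1))   # False: 0, 1, 2, 5, 26, ... escapes
print(in_mandelbrot(-1))  # True: 0, -1, 0, -1, ... stays bounded
```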

Okay, I hope you are still here with me. The beautiful self-repeating pattern we have just obtained has immensely interesting qualities, which we will go after in another post later on.

What I am going to verify here blew my mind, and I hope you find it interesting too. If we look at a boundary point of the Mandelbrot set, say $c = 0.25 = 1/4$, we can show that the iteration converges to $1/2$:

$$\begin{aligned} z_1 &= 0 \cdot 0 + 0.25 = 1/4 \\ z_2 &= (1/4)(1/4) + 0.25 = 5/16 \\ &\ldots \text{ and so on} \end{aligned}$$

Since we are at the boundary of the set, if I add a small number to $c$ (i.e. $c \to c + \delta$, where $\delta$ is small), the iteration should diverge to infinity.

Now let's calculate the rate of escape, something like an escape velocity in complex space: the rate of escape $N(\delta)$ is the number of iterations the equation takes until $z_n > 2.0$.

For example, if $\delta = 0.1$ the iteration looks something like this:

$z_1, z_2, \ldots, z_8$ = 0.35, 0.4725, 0.573256, 0.678623, 0.810529, 1.006957, 1.363962, 2.210393

So here it took 8 steps. If we multiply this number by the square root of the original deviation we get $8\sqrt{0.1} = 2.5298\ldots$ Hmm, not really interesting yet.

Here is a simple C program I wrote to compute the same quantity, $P = N(\delta)\sqrt{\delta}$, for successively smaller $\delta$:

```
#include <stdio.h>
#include <math.h>

int main() {
    double z0, z1, c, d = 0.1;
    do {
        z0 = 0;
        c = 0.25 + d;
        int p = 0;
        do {
            z1 = z0 * z0 + c;
            z0 = z1;
            p++;
        } while (z1 < 2.0);
        printf("%d\t%1.12lf\t%1.14lf\n", p, d, p * sqrt(d));
        d = d * 0.1;
    } while (d > 1e-11);
    return 0;
}
```

Compiled below are the results for various values of $\delta$: as $\delta \to 0$, the product $N(\delta)\sqrt{\delta}$ tends to $\pi$.

This astonishing result was first shown in 1991, and it led to further research on Julia and Mandelbrot sets.

We could go on about the beauty of this number, so I'll save some more for my next love letter. Until then, stay curious, folks :).

SOURCE: the main source of inspiration for this blog post: https://math.stackexchange.com/questions/689315/interesting-and-unexpected-applications-of-pi

This will be the first of a series of love letters to the irrational number $\pi$, the ratio of the circumference of a circle to its diameter.

There are many ways to calculate pi, simply because of the weird fact that the number appears in a multitude of surprising places. Some methods are pretty simple, others highly complex.

Here is the simplest exercise you can do right now: walk into your kitchen and find any circular object. I used a circular plate. Using a thread, measure the circumference (run it around the edge), then measure the diameter with a tape measure.

When you have both lengths, just divide them. You'll always find the ratio to be around 3, whatever circular object you chose. (Now, that's an approximation which physicists in general won't like.)

We will ignore the geometrical methods of calculating $\pi$, because in this post I want to show how to calculate $\pi$ numerically. I will be using Python code which you can play around with even without much knowledge of the language itself.

The first written description of an infinite series that could be used to compute π was laid out in Sanskrit verse by the Indian astronomer Nilakantha Somayaji in his *Tantrasamgraha*, around 1500 AD. As with many other old Indian mathematical works, the series was stated without proof.

Nilakantha attributes the series to an earlier Indian mathematician, Madhava of Sangamagrama, who lived c. 1350 – c. 1425. The series was rediscovered in the 17th century by the Scottish mathematician James Gregory in 1671 and by Leibniz in 1674:

$$

\frac{\pi}{4}=1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\frac{1}{9}-\ldots

$$

This series is thus famously known as the Gregory–Leibniz series; it is also referred to as the *Madhava–Newton series*, the *Madhava–Leibniz series*, the Leibniz formula for pi, or the Leibniz–Gregory–Madhava series.

If we keep summing this infinite series we can compute $\frac{\pi}{4}$, and by simply multiplying by 4 we obtain the value of $\pi$.

The code is easy to follow: we compute the $k$-th term of the infinite expansion, which has the form $\frac{(-1)^{k}}{2 k+1}$. In Python the for loop runs from zero up to (but not including) the given range, so the terms are added from $k=0$ to $k=\text{range}-1$.

You can explore the program and observe it working below.
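The interactive program itself isn't reproduced in this text, so here is a minimal Python sketch of the loop described above (the function and variable names are my own):

```python
def leibniz_pi(n_terms):
    """Approximate pi with the Gregory-Leibniz series:
    pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    The k-th term has the form (-1)**k / (2k + 1)."""
    total = 0.0
    for k in range(n_terms):  # k runs from 0 to n_terms - 1
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

print(leibniz_pi(1_000_000))  # close to 3.141592..., but converging slowly
```

Note how slowly it converges: even a million terms pin down only about six decimal places.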

**Monte Carlo methods** are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches.

Let's say I have to solve a problem whose solution I know resembles a random process. I can perform that random process N times and read off my result. Monte Carlo methods are used to solve many complicated systems with many degrees of freedom.

We are going to look at a shooting problem. Say we have a square target with a circular region for bonus points. Each bullet striking inside the circle is counted, while bullets striking outside the circle are not. *Pretty easy game, right?*

The frequency of counted bullets, i.e. the fraction of bullets landing inside the circle, equals the probability of a bullet falling in the circular region. For an unbiased (completely random) shooter, that probability equals the ratio of the area of the circle to the area of the square.

Let f be the frequency of counted shots, N the total number of shots, and m the number of bullets counted. The area of the circle, as we all know, is $A_c= \pi r^2 = \pi$ (r = 1 unit), while the area of the square is $A_s=\text{side}^2=2^2=4$ (side = 2 units).

$$\therefore f=\frac{m}{N}=\frac{\pi}{4} \hspace{1cm} \text{or} \hspace{1cm} \pi=\frac{4m}{N}$$

The program below simulates this in Python. I have written it in the simplest form, and it runs 5 times to show how random the result actually is.

The random() function generates a random number between 0 and 1, and with a little algebra we map it to the interval between -1 and 1. Imagine the figure I made above translated so that the centre of the circle lies at the origin; we then count the random points that land inside the circle. Feel free to ask any doubts in the comments.
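Since the embedded program may not render everywhere, here is a minimal, self-contained sketch of the simulation just described (the names are my own):

```python
import random

def monte_carlo_pi(n_shots):
    """Estimate pi by 'shooting' at a 2x2 square centred at the origin
    and counting hits inside the unit circle: pi is roughly 4*m/N."""
    hits = 0
    for _ in range(n_shots):
        # random() lies in [0, 1); stretch and shift it into [-1, 1)
        x = 2 * random.random() - 1
        y = 2 * random.random() - 1
        if x * x + y * y <= 1:  # point landed inside (or on) the circle
            hits += 1
    return 4 * hits / n_shots

# run 5 times to see how random the result actually is
for _ in range(5):
    print(monte_carlo_pi(100_000))
```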

BBP (named after Bailey, Borwein, and Plouffe) is a formula for calculating pi discovered by Simon Plouffe in 1995. It converges faster than the Gregory–Leibniz formula, as we will show in a moment. The BBP formula is:

$$\pi=\sum_{k=0}^{\infty}\frac{1}{16^{k}}\left(\frac{4}{8k+1}-\frac{2}{8k+4}-\frac{1}{8k+5}-\frac{1}{8k+6}\right)$$

Below I have implemented both the BBP formula and Gregory–Leibniz to calculate the value of pi. BBP converges to a value accurate to 4 digits in fewer than 5 iterations; it is a fast formula used to calculate long expansions of pi.
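To compare the two convergence rates, here is a small sketch with both sums side by side (the implementation details are mine):

```python
import math

def bbp_pi(n_terms):
    """Bailey-Borwein-Plouffe series for pi (terms shrink by a factor of 16)."""
    total = 0.0
    for k in range(n_terms):
        total += (1 / 16 ** k) * (4 / (8 * k + 1) - 2 / (8 * k + 4)
                                  - 1 / (8 * k + 5) - 1 / (8 * k + 6))
    return total

def leibniz_pi(n_terms):
    """Gregory-Leibniz series for pi, for comparison."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

for n in (1, 3, 5):
    print(n, bbp_pi(n), leibniz_pi(n))
# after 5 terms BBP already agrees with pi to about 7 decimal places,
# while Gregory-Leibniz is still off in the first decimal place
```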

Chaos is merely order waiting to be deciphered

~ José Saramago

Humans have had a tendency to breed chaos ever since the first nomads fought over food. But a few among us found order in all of that chaos. Mendeleev was one of them, and so was Dr. Murray Gell-Mann.

It was in 1869 that the Russian chemist Dmitri Mendeleev organized the known chemical elements into a table, finding order among them, metals and non-metals alike, and creating the periodic table.

In doing so, Mendeleev not only obtained an easy way to classify the elements but also predicted several elements that were not yet known, along with their properties.

Murray Gell-Mann did a similar thing for particles (bosons and fermions), proving the importance of innovative, out-of-the-box thinking and how it can lead you to wonders.

Dr. Murray Gell-Mann had an inherent curiosity growing up in Manhattan. As a kid he went bird-watching with his elder brother, nine years his senior, and there he encountered many of his interests, including nature, archaeology, etymology, and even literature. He read James Joyce's *Finnegans Wake*, an Irish novel that would, unknowingly, play an important role in his future.

Gell-Mann found elementary physics difficult and wanted to major in biology or linguistics when applying to university, but his father told him to reconsider and opt for engineering (it would be easier to earn a living, and he would not starve). Gell-Mann replied: "I'd rather starve. Besides, if I design anything, it will fall down, fall apart." (You can find this in Caltech's Oral History Project here.)

Since he was vocal about his dislike of the subject, his father then advised him to find a middle ground in physics. As he narrates in his interview:

So then I said, “Well, what do you suggest?” My father said, “What about a compromise? What about physics?” And I said that that course in high school was a disaster. It was the only course in which I did badly in high school, and I hated it. He said, “Oh, that doesn’t make any difference. At the university, you’ll study quantum mechanics and relativity, and you’ll love it. It’s marvelous.” So I took physics, and after a while I got to like it. And I found that my father was right, in fact—uncharacteristically, he was quite right. Quantum mechanics and relativity were marvelous.

There's a life lesson to be learnt from Gell-Mann here: a subject he really disliked grew on him and went on to become his passion. He never gave up on it, and of course his father helped him along. He fondly remembers a few of the teachers and professors who helped him on the way.

He went on to earn his PhD from MIT and started teaching at the University of Chicago. In the 1950s he began his work on particle physics and quantum electrodynamics, the up-and-coming branch of physics that was producing incredible results in the hands of physicists like Feynman and Schwinger.

In the 1940s and 50s a slew of particles was discovered in cosmic rays (radiation coming from outer space), in the UK as well as at Caltech by Carl David Anderson.

Then in 1952 the first particle accelerator that could accelerate particles to energies of a few GeV started operating, and many other baryons joined the array of particles. Some were dubbed strange particles because they behaved in ways other particles never did: they were always produced in pairs, were produced by the strong interaction (time scale ~ $10^{-23}$ s), and decayed by the weak interaction (time scale ~ $10^{-10}$ s).

In 1953 Gell-Mann and Nishijima proposed a new property that these particles must carry. They called it *strangeness*. It was a quantum number much like charge, lepton number, or baryon number, which already existed back then. The number is conserved in strong interactions but not in weak interactions, thus explaining the odd behavior of strange particles. Working with Richard Feynman, Gell-Mann also proposed a vector minus axial-vector (V−A) Lagrangian for weak interactions. Without going into more detail, let's get back to the particles.

By the 1960s more than 100 different particles had been discovered over the previous two decades, confusing physicists across the world; some started calling it the "particle zoo". Each was supposed to be an elementary particle, and there simply could not be that many elementary particles.

Gell-Mann :

"Anyway, in those years I was thinking a lot about approximate symmetries—going beyond isotopic spin and forming very approximate families, which we can call supermultiplets. Wigner invented that term, for something slightly different, in 1936 and ’37. And I use that term sometimes myself for this slightly different physical concept. The idea was to put these particles, which were already in isospin families, into bigger families. It’s much like classification in biology. But here it has dynamical consequences and dynamical origins. "

In the academic year 1959–60, Gell-Mann took on the problem troubling the physics world, working on the symmetries of the weak interaction. The SU(2) × U(1) symmetry was giving results that were not coherent with the eight particles. A year later he discarded the global symmetry he had been working on and turned to the Lie algebra of the SU(3) group.

He identified the symmetry present in the particles and in January 1961 proposed the so-called *Eightfold Way* to arrange them. While the SU(3) group representation involves complicated Lie algebra, it is very simple to see how the Eightfold Way arranges particles according to their properties.

Gell-Mann and Ne'eman independently discovered the Eightfold Way of arranging baryons and mesons in geometrical patterns according to their properties.

In this arrangement a property changes as you go from left to right (strangeness), top to bottom (isospin), or diagonally (charge), much as we had groups in Mendeleev's periodic table.

Eight particles are arranged in hexagons called octets, and ten particles are arranged in a triangle known as a decuplet.

Each octet and decuplet has constant spin and parity. As we go down an octet or a decuplet the mass of the particles increases, while horizontally the mass stays almost constant.

The principles of the Eightfold Way also applied to the spin-3/2 baryons, which form the baryon decuplet.

Gell-Mann found that at the bottom of the figure there should exist a particle with charge −1 and strangeness −3 that had not yet been discovered. He proposed that this particle, the $\Omega^-$, would have a mass of 1650 MeV. In 1964, particle researchers detected a particle corresponding almost exactly to Gell-Mann's description.

*Note: It says a lot about Gell-Mann's breadth across subjects that he named it the Eightfold Way after the Noble Eightfold Path to Nirvana in Buddhist philosophy.*

Yes, the Eightfold Way led to pleasing geometrical arrangements of the known particles, arrived at from the symmetries those particles displayed. But why did they follow the symmetry in the first place?

In 1964 Gell-Mann and George Zweig independently proposed that all hadrons (Mesons and Baryons) are made up of more elementary particles which Gell-Mann called Quarks while Zweig called them Aces.

Each baryon is composed of three quarks, while each meson is composed of a quark and an antiquark.

The quarks were proposed with fractional charge, which many physicists thought absurd; they rejected Gell-Mann's quark hypothesis.

Gell-Mann took the name quark from a line in the James Joyce book we talked about at the start. In his book *The Quark and the Jaguar*, Gell-Mann writes:

In 1963, when I assigned the name "quark" to the fundamental constituents of the nucleon, I had the sound first, without the spelling, which could have been "kwork". Then, in one of my occasional perusals of Finnegans Wake, by James Joyce, I came across the word "quark" in the phrase "Three quarks for Muster Mark". Since "quark" (meaning, for one thing, the cry of the gull) was clearly intended to rhyme with "Mark", as well as "bark" and other such words, I had to find an excuse to pronounce it as "kwork". But the book represents the dream of a publican named Humphrey Chimpden Earwicker. Words in the text are typically drawn from several sources at once, like the "portmanteau" words in Through the Looking-Glass. From time to time, phrases occur in the book that are partially determined by calls for drinks at the bar. I argued, therefore, that perhaps one of the multiple sources of the cry "Three quarks for Muster Mark" might be "Three quarts for Mister Mark", in which case the pronunciation "kwork" would not be totally unjustified. In any case, the number three fitted perfectly the way quarks occur in nature.

In 1967, deep inelastic scattering experiments of electrons off protons at the Stanford Linear Accelerator Center (SLAC) provided evidence for the existence of quarks, and Gell-Mann's hypothetical particles were accepted by the scientific community.

*In 1969 Gell-Mann received the Nobel Prize in Physics for “his contributions and discoveries concerning the classification of elementary particles and their interactions.”*

Since quarks are fermions, no two identical quarks should be able to occupy the same quantum state, yet some baryons, such as the $\Omega^-$, contain three quarks of the same flavor (three strange quarks). To solve this conundrum, Gell-Mann and other physicists introduced the *color* quantum number, which allows three otherwise identical quarks to exist together in a small space. This gave rise to a new branch of quantum theory: quantum chromodynamics.

Gell-Mann was not only a famous and successful scientist; he found that success starting from difficult circumstances, as he recounts in his interview series (you can watch it on YouTube here).

His is the prime example of how, if you think outside the box, keep your mind open, keep your curiosity alive, and never give up, you can achieve incredible feats.

Here is a TED Talk by the man himself to inspire you:

Sources:

- https://www.achievement.org/achiever/murray-gell-mann-ph-d/
- https://www.nytimes.com/2019/05/24/obituaries/murray-gell-mann-died-.html
- https://en.wikipedia.org/wiki/Murray_Gell-Man
- https://www.caltech.edu/about/news/caltech-mourns-passing-murray-gell-mann
- http://oralhistories.library.caltech.edu/228/1/Gell-Mann_OHO.pdf
- https://www.youtube.com/playlist?list=PLVV0r6CmEsFxKFx-0lsQDs6oLP3SZ9BlA

Today we are gonna talk about entropy: what in the world it actually means, the misunderstanding of the concept of disorder, and how entropy defines the world around us. It not only defines the present state but also holds importance for the future of our universe itself.

"[Thermodynamics is] the only physical theory of universal content concerning which I am convinced that, within the framework of the applicability of its basic concepts, it will never be overthrown."

~ Albert Einstein

Let’s do some time travel to understand the concept of Entropy. Trust me I will try to make the journey interesting.

Alright, looks like we have traveled back to the right time. It's the early 1800s; Napoleon Bonaparte is rising as a great leader in France after the infamous French Revolution. And one person who recently resigned as Napoleon's Minister of War is working on a paper he called *Principes Fondamentaux de l'Équilibre et du Mouvement* (*Fundamental Principles of Equilibrium and Movement*). An enigmatic philosopher and successful military leader, Lazare Carnot was a key reason for Napoleon's success, known as the *Organizer of Victory*. ^{1}

Published in 1803, the paper contains one of the first (incomplete) statements of the law of conservation of energy. He says that in *any natural process there exists an inherent tendency towards the dissipation of useful energy*.

Around two decades later his son, *Sadi Carnot*, used the same concept to describe an ideal reversible engine, giving his famous Carnot engine argument. He proposed an ideal reversible steam engine that undergoes a cycle (which we now call the Carnot cycle; see below) such that no other engine is more efficient than it. ^{2}

**Carnot’s theorem** is a formal statement of this fact: *No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between the same reservoirs.*

$${ \eta ={\frac {W}{Q_{H}}}=1-{\frac {T_{C}}{T_{H}}} \text{ ; (1)} } $$

Any basic undergraduate thermodynamics textbook has the proof of Carnot's theorem. Another statement of the theorem is: *All reversible engines operating between the same heat reservoirs are equally efficient.* Another interesting point to note is that the efficiency of the heat engine depends only on the temperatures of the source and the sink, as can be seen in equation 1.

But we still don't have the concept of entropy, or the name.

Die Energie der Welt ist constant. Die Entropie der Welt strebt einem Maximum zu.

The energy of the world is constant. The entropy of the world tends towards a maximum.

~ Rudolf Clausius, 'Ueber verschiedene für die Anwendung bequeme Formen der Hauptgleichungen der mechanischen Wärmetheorie', Poggendorff's Annalen der Physik (1865), 125, 400. The "world" here refers to the universe as a whole; this is how Clausius summarized the first and second laws of thermodynamics.

Alright, let's take our time machine to 1850 Berlin, Germany, where Rudolf Clausius is teaching as a professor at the Royal Artillery and Engineering School. It was here that he published his most famous paper, *Ueber die bewegende Kraft der Wärme* ("On the Moving Force of Heat and the Laws of Heat Which May Be Deduced Therefrom"). ^{3}

In the paper Clausius provided a correlation between heat transfer and work. He writes:

In all cases where work is produced by heat, a quantity of heat proportional to the work done is expended; and inversely, by the expenditure of a like quantity of work, the same amount of heat may be produced

He later concluded this work in another paper written in 1854 (translated into English in 1856 ^{4}), titled *"On a Modified Form of the Second Fundamental Theorem in the Mechanical Theory of Heat"*.

The second fundamental theorem, now famously called the Second Law of Thermodynamics, was stated in this paper as: *"Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time."*

In the paper Clausius derives that no matter what path we take in a reversible cyclic process (see the P–V diagram above), the integral of the ratio of heat exchanged to temperature is always zero, i.e. $\oint \frac{\delta Q}{T} = 0$, while for an irreversible process it is always less than zero:

$$ \oint \frac{\delta Q}{T} \leq 0 $$

This is famously known as Clausius Inequality

Note: If you are feeling adventurous, you can read the 1856 paper and see that the T in the formula above was originally just a function f(T); he later shows it to be the absolute temperature.

Let us travel 11 years further, to Zurich, Switzerland. It was here, employed as a professor of physics at ETH Zurich, that Clausius introduced the concept and coined the term entropy:

$$\Delta S \ge \int \frac{\delta Q}{T} $$

This is the law of entropy, the final form of the Second Law of Thermodynamics. Clausius closes his memoir with the statement: **Die Entropie der Welt strebt einem Maximum zu. (The entropy of the world tends towards a maximum.)**

And that’s the story of thermodynamic definition of Entropy.

For understanding Boltzmann’s definition of Entropy we need a small refresher on Statistical Mechanics. Before traveling further in time let’s stop in a classroom and discuss it. I will try to make things as simple as possible.

Systems with one degree of freedom (DoF), like a bead moving on a straight thread, are easy to solve with classical mechanics. Even a two-body system resolves into a few DoFs and can be solved analytically. But for realistic systems, like a gas confined in a volume, the DoFs grow monumentally: for N gas particles confined in a volume (like a glass of water) there are 3N DoFs, and in any realistic system N is around the Avogadro number ($10^{23}$), which even the most sophisticated supercomputers can't deal with.

In the 19th and early 20th centuries, physics dealt with such large systems using thermodynamics, a "phenomenological" study (based on observing patterns in phenomena without going into underlying causes) of astronomically large "macroscopic" systems in *equilibrium*.

It was in the mid-19th century that Maxwell gave his distribution function in the kinetic theory of gases, providing one of the first statistical laws in physics.

While the macroscopic variables like energy and volume do not change for a system in equilibrium, there can be small microscopic fluctuations inside the system that change the configuration of the particles while the total energy stays the same. These configurations are called the "microstates" of the system.

To understand micro- and macrostates, consider the simple example illustrated above: a system of 6 indistinguishable particles with 5 allowed energy levels (0, E, 2E, 3E, 4E). Basically, there are 6 identical balls and 5 shelves on which I can place them.

The total energy of the system is fixed at 8E. The macrostate is defined by this total energy, so all copies of the system are in the same macrostate.

How do they differ? By their internal distribution. For example, we could have a distribution in which two particles sit in the 4E energy level.

Or we could put four of the six particles in the 2E level and the remaining two in the ground level (0).

It's easy to see there are nine such possible distributions. Each such distribution is called a *microstate* of the system, and all of them belong to one macrostate.

Two systems with the same values of macroscopic parameters are thermodynamically indistinguishable. A macrostate tells us nothing about a state of an individual particle. For a given set of constraints (conservation laws), a system can be in many macrostates.

Statistical Mechanics mostly deals with finding out the probability distribution of such systems.

Let's hop back into our time machine and travel to 1864 Vienna, where a 20-year-old student has just come across Maxwell's paper on the kinetic theory of gases. He would soon start a series of publications that led to the birth of a new branch of physics, which we now call statistical mechanics.

Beginning in 1866, his first paper was titled 'On the mechanical meaning of the second law of thermodynamics', though he reached this objective only in later publications. In 1868 Boltzmann extended Maxwell's kinetic theory of gases and took the important step of saying that the total energy of the system should be distributed among the individual molecules in such a manner that all possible combinations are equally probable (now one of the postulates of statistical mechanics).

He later examined the approach to equilibrium through two ideas, the dissipation of energy and the increase in entropy, and this led in 1877 to one of the most famous equations of physics: ^{5}

$$S=k_{\mathrm {B} }\ln \Omega$$

Here $k_{\mathrm {B}}$ is known as the Boltzmann constant, commemorating his name, and $\Omega$ is the number of microstates for a corresponding macrostate. In our example above, the entropy of the macrostate with energy 8E, which has nine microstates, is $S_{8E}= k_{\mathrm {B} }\ln 9$.

*Note: The form of this equation is consistent with the fact that entropies are additive while microstate counts are multiplicative. Note also that since $\Omega$ cannot be less than one, S is never negative.*

That’s it for our History lesson. Let’s apply some of what we have learnt from the greats. Let’s travel back to the present.

The Gibbs entropy is the generalization of the Boltzmann entropy, holding for *all* systems, while the Boltzmann entropy is the entropy only when the system is in global thermodynamic equilibrium. Both are measures of the microstates available to a system, but the Gibbs entropy does not require the system to be in a single, well-defined macrostate.

This is not hard to see. For a system that is in microstate $i$ with probability $p_i$, the Gibbs entropy is:

$$S_G = -k_B \sum_i p_i \ln(p_i)$$

and, in equilibrium, all microstates belonging to the equilibrium macrostate are equally likely, so, for *N* states, we obtain:

\begin{align} S_G &= -k_B \sum_i \frac{1}{N} \ln\left(\frac{1}{N}\right) \\&= -k_B N \frac{1}{N} \ln\left(\frac{1}{N}\right) \\ &= k_B \ln(N)\end{align}

by the properties of the logarithm, where the latter term is the Boltzmann entropy for a system with *N* microstates.
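A quick numerical sanity check of this reduction, in units where $k_B = 1$ (the helper function is my own):

```python
import math

def gibbs_entropy(probs, k_B=1.0):
    """S_G = -k_B * sum_i p_i ln(p_i); a p_i of 0 contributes nothing."""
    return -k_B * sum(p * math.log(p) for p in probs if p > 0)

N = 8
uniform = [1 / N] * N          # equilibrium: all N microstates equally likely
print(gibbs_entropy(uniform))  # matches the Boltzmann entropy ln(N)
print(math.log(N))
```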

How would we express in terms of the statistical theory the marvellous faculty of a living organism, by which it delays the decay into thermodynamical equilibrium (death)? … It feeds upon negative entropy … Thus the device by which an organism maintains itself stationary at a fairly high level of orderliness (= fairly low level of entropy) really consists in continually sucking orderliness from its environment.

Erwin Schrödinger

In 'Organization Maintained by Extracting "Order" from the Environment', What is Life?: The Physical Aspect of the Living Cell (1944), 74.

An increase in entropy has often been referred to as an increase in disorder; popular fiction writers often use the word entropy as a synonym for chaos. In the quote above you can see how one of the greatest quantum physicists of all time writes of entropy as death itself.

It's quite a spectacle to witness: we humans get seduced by the poetic idea of death, and when a law of physics itself proclaims the death of everything around you, it's easy to see how the mind gets carried away. This has led to a common misconception, though one that is slowly fading, as many textbooks have removed the section equating entropy with disorder. ^{6}

Let us consider another example to discuss order, disorder and Entropy.

Let us imagine an empty room with the floor tiled as shown in the graphic on the left.

Let the floor be divided into 30 equal-sized tiles. We can number them from 1 to 30 to make them distinguishable and give each an index.

Let us say there are 30 balls scattered across the floor. We will consider two different cases.

Case 1:

Equal Probability Distribution (Smeared Over)

Let all the balls be spread, "smeared over", uniformly across the floor, so that each tile has one ball. The distribution function for the balls is uniform over the whole surface.

That is, if you pick up one ball from the floor, the probability of it being from the first tile is the same as from the 15th tile, and so on.

In this case you can say that your room is not really chaotic; it's completely ordered. In the traditional definition of disorder, **the disorder of the room is least in this case.**

Now let us calculate the entropy of the system using the Gibbs entropy discussed above.

Since the probability is equal for all tiles, the normalization $\sum_{i=1}^{30} p_i = 1$ lets me write each $p_i$ as $\frac{1}{30}$:

\begin{align} S_G &= -k_B \sum_{i=1}^{30} \frac{1}{30} \ln\left(\frac{1}{30}\right) \\&= -k_B 30 \frac{1}{30} \ln\left(\frac{1}{30}\right) \\ &= k_B \ln(30)\end{align}

One can prove that, in simple cases, the uniform distribution maximizes the entropy; for large N and complex probability distributions this is harder to show. But one thing is sure: the configuration with the least disorder here has the maximum entropy.

Case 2:

Peaked Distribution

Let's now say that a toddler or a cat visits and starts playing with the balls in the room. After some time the balls will occupy some random distribution, and corresponding to that probability distribution one could find the entropy.

Now let's say the toddler somehow gathers all the balls, trying to make a fort, and puts them all along the wall on tile number 12.

Another physical example: a room with 30 toddlers (imagine the noise) where you put an infinite supply of candy on tile 12.

Now if I pick a kid from the room, the probability that the kid is from tile number 12 is unity (maximum), while the probability that they came from any other tile is zero.

This is what we call a peaked distribution. Where the distribution is constrained to be over only at one point in space.

Now let us calculate the entropy of this system, again using the Gibbs entropy discussed above.

The probability is zero for all tiles except tile number 12, and normalization gives $p_{12} = 1$, so I can write: $$p_i = \delta_{i,12} = \begin{cases}0 & \text{if } i \neq 12,\\ 1 & \text{if } i = 12.\end{cases}$$

\begin{align} S_G &= -k_B \sum_{i=1}^{30} p_i \ln p_i \\&= 0+0+\cdots+ \left[-k_B \times 1 \times \ln\left(1 \right) \right] \\ &= 0 \quad \because \ln(1)=0 \end{align}

So we observe that the entropy is zero when the distribution is peaked at one point and zero everywhere else. When our system was disordered and not uniform, the entropy turned out to be *zero*.
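The two cases can be checked numerically with the same Gibbs formula (in units of $k_B = 1$; the code is a sketch of my own):

```python
import math

def gibbs_entropy(probs):
    """Gibbs entropy with k_B = 1; 0 * ln(0) is taken as 0."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Case 1: balls smeared uniformly over the 30 tiles
uniform = [1 / 30] * 30
# Case 2: every ball on tile number 12 (peaked distribution)
peaked = [1.0 if tile == 12 else 0.0 for tile in range(1, 31)]

print(gibbs_entropy(uniform))  # ln(30), roughly 3.401 -- the maximum
print(gibbs_entropy(peaked))   # exactly 0
```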

So the concept of entropy as a measure of disorder is wrong. But there's a catch: there is an ambiguity in the very definitions of order and disorder. What I find ordered in a system might look chaotic to another observer. This relative ambiguity is another reason the disorder definition of entropy does not hold in many cases. Also, our system needs to be a closed one for even the second law to hold.

For non-isolated systems, statistical mechanics defines various partition functions to deal with them, which is beyond the scope of this post. And the connection of entropy with disorder is made for statistical entropy, not thermodynamic entropy.

Another way to define entropy, which steers us away from the problem mentioned above, is to leave behind the confusing concepts of order and disorder and instead look at *entropy as a measure of lack of information*.

There is another branch of science involved here that I won't go into in detail: information theory. In 1948, while working at Bell Telephone Laboratories, electrical engineer Claude Shannon set out to mathematically quantify the statistical nature of "lost information" in phone-line signals. He found a quantity that behaved very much like the Gibbs entropy (in fact it takes the same form if we set its constant to the Boltzmann constant). This quantity is called the Shannon entropy, and it is a measure of lack of information.

In case 2, with the peaked distribution, we know for sure that a ball picked from the room will come from tile 12. We thus have all the information we need about the system, and hence zero entropy.

In case 1, on the other hand, the entropy is maximal because we have the least information about the system.

The physical entropy represents a *lack* of information about a system's microscopic state. It is equal to the amount of information you would gain if you suddenly became aware of the precise position and velocity of every particle in the system. Entropy is thus a measure of the smoothness of the distribution.

Whoa, that was a lot of typing and physics. If you guys are still with me, we deserve a cup of coffee or tea. Let's make one together, shall we? Once I mix the milk into my tea, I can't separate them again. This right here is an example of a system going to a state of higher entropy. The Second Law of Thermodynamics states that every closed system will ultimately reach the state of maximum entropy (as we discussed before).

The law not only explains the mixing of tea; it also explains why glass shatters, why walls collapse, and, some believe, it also foretells the death of the universe itself.

*Death* is a morbid topic that has been part of the human imagination for a long time, be it in literature, where Dante visits the underworld with Virgil, or in cinema. It has been part of our culture, our theologies, and our thoughts. This morbid curiosity has had writers writing, philosophers philosophizing, and physicists 'physicizing'.

“No structure, even an artificial one, enjoys the process of entropy. It is the ultimate fate of everything, and everything resists it.”

― Philip K. Dick, Galactic Pot-Healer

Philip K. Dick speaks of entropy as the harbinger of death and destruction in his novel. Another example can be seen in the passage by John Green (famous author of novels like The Fault in Our Stars and Paper Towns) cited below.

“Everything that comes together falls apart. Everything. The chair I’m sitting on. It was built, and so it will fall apart. I’m gonna fall apart, probably before this chair. And you’re gonna fall apart. The cells and organs and systems that make you you—they came together, grew together, and so must fall apart. The Buddha knew one thing science didn’t prove for millennia after his death: Entropy increases. Things fall apart.”

― John Green, Looking for Alaska

Entropy increases; things fall apart. The astronomical leap from the second law of thermodynamics to the destruction of everything might sound poetic, but is it true?

In another great Philip K. Dick novel, *Do Androids Dream of Electric Sheep?*, the protagonist talks about entropy as the destroyer of even Mozart’s music. Dick writes: *“In a way, he realized, I’m part of the form-destroying process of entropy.”*

Saying that life and death are consequences of entropy could be quite a stretch. First of all, many scientists have pointed out that living organisms show remarkable local structural complexity: they form cells, tissues, and organs. So you could say that entropy is decreasing in living systems.

But the second law does not allow that. The resolution to this apparent paradox is simple: you can’t consider a living system to be closed, since it interacts with the outside world. So even if an open system decreases its entropy locally, the entropy of the universe as a whole still increases with **time**.

In this clip from the sci-fi movie *Mr. Nobody*, the actor talks about entropy and the arrow of time. It shows how the notions of entropy and disorder have reached a general audience. (Note: this movie is called science fiction for a reason, but it deals with the same ideas we have been discussing.)

Let’s take our time machine on one last ride.

Teaching at Cambridge, Eddington became famous in 1919 for confirming Einstein’s theory of general relativity by photographing a total solar eclipse and showing starlight bending around the Sun.

Arthur Eddington (quoted on the left) was very sure of the second law, so sure that he famously wrote that it holds the “supreme” position among the laws of nature.

In 1928, in his book *The Nature of the Physical World*, Eddington introduced the concept of the arrow of time.

The concept basically says that since the entropy of the universe must increase, we can imagine an arrow of time pointing in the direction of increasing entropy, and that is the only permissible direction in which we can move through time. We call this direction *“the future.”*

He used the word “random” instead of saying entropy directly.

And since the second law does not allow us to move to a lower-entropy state, time flows in only one direction, and we can’t actually travel back in time (if the concept is correct).

So, in fact, the time travel we have been using to explore the second law of thermodynamics and entropy is prohibited by the very concepts we have been exploring.

The arrow of time is a concept on which research is still ongoing, with some of the finest minds, including the late Stephen Hawking, having worked on it.

Here is a video of Brian Cox talking about a concept we discussed today.

Entropy is a difficult concept to wrap your head around. Couple that with some common misunderstandings and the use of the word across different fields, and it can get a bit complicated. We tried to introduce readers with negligible background knowledge to the concept of entropy by looking at the evolution of the laws, and then used it to explain some of the famous ideas involving entropy.

A comic strip to end.

Notes:

- C. C. Gillispie, “Carnot, Lazare Nicolas-Marguerite,” *Dictionary of Scientific Biography*, Vol. III, p. 70. ↩
- *Reflections on the Motive Power of Fire* by Carnot, translated by R. H. Thurston (link). ↩
- “On the Moving Force of Heat and the Laws of Heat which may be Deduced Therefrom,” London, Edinburgh and Dublin Philosophical Magazine and Journal of Science, 1851 (link). ↩
- Clausius, R. (August 1856). “On a Modified Form of the Second Fundamental Theorem in the Mechanical Theory of Heat.” ↩
- A Very Brief History of Thermodynamics, John Murrell (internet archive). ↩
- http://entropysite.oxy.edu/: The 36 Science Textbooks That Have Deleted “disorder” From Their Description of the Nature of Entropy. ↩