The Monte Carlo method is one heck of a Swiss Army Knife. It’s used in nearly any field where quantitative predictions are needed, from engineering and finance to statistics and physics. As a matter of fact, I was even making a living with large-scale Markov Chain Monte Carlo simulations for a while. But that’s not what I want to talk about today. Instead, we’ll be talking about stop and go traffic. Let me explain.

Monte Carlo?

Warning, math content. Feel free to skip to the next section if you should experience discomfort or dizziness.

Let’s assume you want to approximate an expectation value $E[f(X)]$ of a function $f$ of some random variable $X$ with a complicated distribution $p(X)$. This could really be anything, from the expected capital gains of a stock portfolio to the decay probability of a subatomic particle. What about $X$ could be complicated? Well, its exact form might be unknown, but there might still be a way of approximately sampling from it: even when one cannot sample directly from the target distribution, one can often come up with a process that spits out samples from it one by one.

So, to give a rough sketch, instead of computing the expectation value using the probability density $p(x)$,

$$E[f(X)] = \int f(x)\, p(x)\, \mathrm{d}x,$$

you get your hands on some (let’s say $N$) samples $x_i$ that are distributed correctly according to $p$, and calculate

$$E[f(X)] \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i).$$

If we repeat our experiment a bunch of times, we can even see what the distribution of this value will look like and say something sensible about how far off we likely are with our approximation.
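This recipe fits in a few lines of Python. Here is a minimal sketch of the idea, using a toy case where the true answer is known (for $X \sim \mathcal{N}(0, 1)$ and $f(x) = x^2$, the expectation is the variance, $1$); the function and variable names are my own, not from any particular library:

```python
import random
import statistics

def mc_estimate(f, sampler, n):
    """Average f over n draws from sampler: a Monte Carlo estimate of E[f(X)]."""
    return sum(f(sampler()) for _ in range(n)) / n

# Toy check: for X ~ Normal(0, 1) and f(x) = x**2 we know E[f(X)] = 1.
rng = random.Random(0)
standard_normal = lambda: rng.gauss(0.0, 1.0)

# Repeat the whole experiment many times to see how the estimates scatter
# around the true value -- exactly the "how far off are we" question above.
estimates = [mc_estimate(lambda x: x * x, standard_normal, 1000) for _ in range(200)]
print(statistics.mean(estimates), statistics.stdev(estimates))
```

The spread of the repeated estimates shrinks like $1/\sqrt{N}$ as you increase the number of samples per estimate.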

Simulating $\pi$

We don’t even have to simulate things that are statistical in nature. Let’s for example approximate $\pi$. If we draw a circle with radius one, it’ll have an area of $\pi$. If we now throw balls randomly into a square drawn around that circle so that the circle touches its sides (the square then has side length two, and an area of four), the probability of a ball falling inside the circle is equal to

$$P(\text{ball in circle}) = \frac{\text{area of circle}}{\text{area of square}} = \frac{\pi}{4}.$$

Doing this experiment, say, 100 times, we can approximate $\pi$ by simply counting how many balls fall into the circle (by checking that the $x$ and $y$ coordinates where the ball lands satisfy $x^2 + y^2 < 1$). I performed this experiment for you in Python and produced this plot.
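If you want to reproduce the experiment, here is one way to write it (a sketch, not the exact script behind the plot; the function name and seed are my own choices, and your numbers will vary with the seed):

```python
import random

def estimate_pi(n_balls, seed=42):
    """Throw n_balls uniformly into the square [-1, 1] x [-1, 1] and count
    how many land inside the unit circle; 4 * hits / throws estimates pi."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_balls):
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y < 1:
            hits += 1
    return 4 * hits / n_balls

print(estimate_pi(100))      # rough, like the estimate below
print(estimate_pi(100_000))  # many more balls, much closer to pi
```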

With 100 balls, the approximation is $\pi \approx 3.44$, not bad. But what does this have to do with traffic?

Traffic flow.

If you’ve heard about Monte Carlo methods, chances are that one of the introductory examples you’ve seen was the Nagel-Schreckenberg model. It’s a very simple model used to simulate cars on a road, but it gives surprisingly rich results. So having recently started a project concerning road traffic, I thought it would be worthwhile to hack up my own Nagel-Schreckenberg model in Python.

The model is as simple as they come. You have a road divided into, say, $N = 1000$ cells, $n_{\text{cars}}$ of which are occupied by cars. Each car follows three rules.

  1. If going slower than the speed limit, accelerate by 1 unit.
  2. If getting too close to the car in front, decelerate by enough units to avoid a crash.
  3. With a given probability $p$, reduce the speed by 1 unit.

After these rules are applied, each car moves forward by its current speed. If a car leaves the road on one side, it re-enters from the other; these are called periodic boundary conditions. This seems overly naive, but one still gets quite interesting results. I really encourage you to try coding this up for yourself; it’s a fun exercise. In the meantime, I’ll share some plots I’ve created with just this simulation. What we’ll do is cram more and more cars onto a road and see what happens.
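The whole update rule fits in a handful of lines. Here is a minimal sketch of one time step (my own take for illustration, not the exact code behind the plots): the road is a list where `road[i]` holds the speed of the car in cell `i`, or `None` for an empty cell.

```python
import random

def nasch_step(road, v_max=5, p_slow=0.3, rng=random):
    """One time step of the Nagel-Schreckenberg model on a periodic road."""
    n = len(road)
    cars = [i for i, v in enumerate(road) if v is not None]
    new_road = [None] * n
    for k, i in enumerate(cars):
        v = road[i]
        # Distance to the next car ahead, wrapping around the road's end.
        ahead = cars[(k + 1) % len(cars)]
        gap = (ahead - i) % n if len(cars) > 1 else n
        v = min(v + 1, v_max)                # rule 1: accelerate
        v = min(v, gap - 1)                  # rule 2: don't crash
        if v > 0 and rng.random() < p_slow:  # rule 3: random slowdown
            v -= 1
        new_road[(i + v) % n] = v            # finally, move forward
    return new_road

# Sprinkle 20 standing cars onto a 100-cell road and run a few steps.
rng = random.Random(1)
road = [None] * 100
for p in rng.sample(range(100), 20):
    road[p] = 0
for _ in range(50):
    road = nasch_step(road, rng=rng)
```

Recording the occupied cells after every step gives exactly the space-time plots shown below.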

In the first panel, with 50 cars on the road, things go smoothly. Increasing to 90 doesn’t do much either. On the x-axis of the plots, you see the distance on the road, on the y-axis the simulation time steps. Each point in the plot is one single car moving along the road.

When we cram more than 90 cars onto our virtual road, we see patterns form: stop-and-go traffic. As we squeeze more and more cars into the simulation, the stop-and-go waves traveling backwards along the road get progressively worse, until total chaos breaks out. In the next episodes of Data Adventures, we’ll do our best to analyze this chaos a little.