I have wanted to write about neural networks for a while, mainly because I saw it as an opportunity to summarize some of the things I learned working with Keras, a high-level neural networks API that runs on top of TensorFlow (among other backends). I have used and come to like the latter, but have always thought it falls a bit short on usability. This will be the first in a series of posts on Keras and neural networks. We won’t do anything fancy, just introduce the dataset we’ll be working with and showcase the basic usage of Keras.
You probably know I like to work with bike sharing trip data, but this time I wanted to look at the open data provided by the Oslo City Bike program. I downloaded some of their datasets, covering 2.5 million trips taken in Oslo between April 1st and September 20th, 2017 (Norwegians love to cycle). The number of daily trips varies quite a bit, as the plot below shows (on May 17th, for example, the system was shut down because it is a national holiday).
I aggregated the trips into a dataset with a resolution of one hour, which looks like this:
The target is the number of trips taken; the lagged values are our covariates. We’ll use the last week of the data as a test set and train our models on the rest.
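A minimal sketch of how such a lagged dataset and train/test split could be built with pandas; the trip counts and lag choices here are illustrative stand-ins, not the actual Oslo data:

```python
import numpy as np
import pandas as pd

# Hypothetical hourly trip counts standing in for the real Oslo series
rng = pd.date_range("2017-04-01", periods=24 * 30, freq="H")
df = pd.DataFrame(
    {"trips": np.random.default_rng(0).poisson(150, len(rng))}, index=rng
)

# Lagged covariates: the counts 1, 2, 24 and 168 hours earlier
for lag in (1, 2, 24, 168):
    df[f"lag_{lag}"] = df["trips"].shift(lag)
df = df.dropna()

# Hold out the last week (168 hours) as the test set
train, test = df.iloc[:-168], df.iloc[-168:]
```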
To get started with Keras, all you need to do is install it with pip.
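One way to do this (Keras needs a backend, so installing TensorFlow alongside it is the usual choice):

```shell
pip install keras tensorflow
```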
Building our first model is pretty straightforward: after reading the excellent user guide and documentation, you’ll soon enough be writing your own. Quite a contrast to raw TensorFlow!
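As an illustration of how little code a model takes (this is a generic sketch, not the exact architecture from this post; layer sizes are arbitrary):

```python
from tensorflow import keras

# A minimal feed-forward net: 4 lagged covariates in, one trip count out.
# The hidden layer size is an illustrative choice, not a tuned value.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```

Training is then a single `model.fit(X_train, y_train, epochs=...)` call.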
So how well does our basic model do? For comparison, I’ve also fitted a gradient boosted tree and a fancier neural network employing long short-term memory (LSTM) units (building one is super easy in Keras). The good news is that the problem is a rather harmless one, and the predictions are close to the actual values.
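For the curious, an LSTM model really is only a few lines in Keras. This sketch assumes the hourly counts are reshaped into windows of the previous 168 hours, one feature per timestep; the window length and layer size are illustrative:

```python
from tensorflow import keras

# Sketch of an LSTM model: each sample is a window of the previous
# 168 hourly counts, each timestep carrying a single feature.
lstm_model = keras.Sequential([
    keras.Input(shape=(168, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
lstm_model.compile(optimizer="adam", loss="mse")
```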
The errors are decently distributed, and plotting the predictions against the actuals doesn’t show any significant non-linearities.
Staring at the above plot for a little while, one could argue that the neural networks show a little less variance than the tree in this example. The residual errors on the test set are virtually identical across all three methods, though. This is not surprising: of course a neural network does as well as any other method on a harmless dataset. The point is that, with Keras, it is also as easy to train as any other model, e.g. one from scikit-learn.
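To make that "as easy as scikit-learn" claim concrete, here is the fit/predict rhythm side by side; the toy arrays and hyperparameters are illustrative stand-ins for the lagged dataset:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy stand-ins for the lagged features and hourly trip counts
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 4)), rng.poisson(150, 500)

# scikit-learn: construct, fit, predict
gbt = GradientBoostingRegressor(n_estimators=100)
gbt.fit(X_train, y_train)
preds = gbt.predict(X_train)

# A compiled Keras model follows the same rhythm:
#   model.fit(X_train, y_train, epochs=10)
#   model.predict(X_test)
```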
I hope you’ve enjoyed this post and feel inspired to give Keras a try. As always, stay tuned for the next data adventure, where we’ll have even more fun with neural networks!