Small baby, no sleep make Graham a dull blogger
Apologies, readers, for my long absence. My family grew this last year, so I’ve had less time for writing.
Recently I came across a popular, succinct post on implementing a neural network in Python. I realized that although I’ve used them before and had a cursory understanding of how they worked under the hood, I didn’t really grok them; I had always turned to a library. Admittedly, turning to a well-developed library is exactly what one should do in production, but for my own edification I decided to put together a Python implementation of my own. This implementation is rather simplistic, but it has three features I like:
- It’s simple enough that someone as sleep deprived as myself can understand it.
- It feels Pythonic; it relies only upon NumPy and has an API similar to scikit-learn's.
- It can handle an arbitrary number of hidden layers of arbitrary size.
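To make the list above concrete, here is a minimal sketch of what such a network can look like: NumPy only, a scikit-learn-style `fit`/`predict` API, and a constructor that accepts any list of layer sizes. The class name `TinyNet` and all parameter names are hypothetical, not the post's actual implementation.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyNet:
    """Illustrative NumPy-only net with arbitrary hidden layers."""

    def __init__(self, layer_sizes, lr=0.5, epochs=5000, seed=0):
        rng = np.random.default_rng(seed)
        self.lr, self.epochs = lr, epochs
        # One weight matrix and bias vector per consecutive pair of layers.
        self.weights = [rng.standard_normal((m, n)) * 0.5
                        for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
        self.biases = [np.zeros(n) for n in layer_sizes[1:]]

    def _forward(self, X):
        # Keep every layer's activations; backprop needs them.
        activations = [X]
        for W, b in zip(self.weights, self.biases):
            activations.append(_sigmoid(activations[-1] @ W + b))
        return activations

    def fit(self, X, y):
        for _ in range(self.epochs):
            acts = self._forward(X)
            # Output-layer delta for squared error with a sigmoid activation.
            delta = (acts[-1] - y) * acts[-1] * (1 - acts[-1])
            for i in reversed(range(len(self.weights))):
                grad_W = acts[i].T @ delta
                grad_b = delta.sum(axis=0)
                if i > 0:
                    # Propagate the delta back using the pre-update weights.
                    delta = (delta @ self.weights[i].T) * acts[i] * (1 - acts[i])
                self.weights[i] -= self.lr * grad_W
                self.biases[i] -= self.lr * grad_b
        return self

    def predict(self, X):
        return self._forward(X)[-1]

# XOR is the classic test: it isn't linearly separable,
# so it genuinely exercises the hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
net = TinyNet([2, 8, 1]).fit(X, y)
pred = net.predict(X)
```

Because the layer sizes are just a list, `TinyNet([2, 8, 4, 1])` gives two hidden layers with no code changes.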
As awesome as neural nets are, as many varieties and variations as there are, and as amazing as some of the applications may be, I discovered that at their base, they're rather simple.[^1] Neural networks boil down to iterated linear algebra—which totally makes sense, since they're being implemented on GPUs all over the place. Though I wouldn't recommend going back to basics all the time (especially if you're on a deadline), I'm definitely glad I took the time to work this through.
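"Iterated linear algebra" is almost literal: a forward pass is just a matrix multiply, a bias add, and a nonlinearity, repeated once per layer. A minimal sketch (all names illustrative, not from the post's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Four layers: 4 inputs -> 5 hidden -> 3 hidden -> 2 outputs.
sizes = [4, 5, 3, 2]
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.standard_normal((10, 4))  # a batch of 10 examples
a = x
for W, b in zip(weights, biases):
    a = sigmoid(a @ W + b)        # one matrix multiply per layer

print(a.shape)  # (10, 2): one 2-vector of outputs per example
```

Because each step is a matrix product over a whole batch, the same code maps directly onto GPU hardware, which is exactly why it's implemented there.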
[^1]: The vanilla version is, anyway; other variations are admittedly more complex.