“Don’t worry about the microphone hanging out the window,” I told a recent house guest. “It’s just another of my Raspberry Pi projects.”
Living in one of the greener inner suburbs of London, I find the planes heading into Heathrow aren’t the only flying things. Through the worst of the pandemic lockdowns I started a new hobby: documenting the wildlife in the gardens and on the railway cutting behind our home. It’s been quite surprising to see how many different birds and small mammals frequent the large tree and the elderberry thicket at the bottom of the garden, from tiny wrens to the looming presence of the local sparrowhawk.
It’s a lot easier photographing birds in the winter, when the trees and bushes lose their leaves. But there are many more visitors in the summer, with blackbirds and dunnocks nesting in the shrubs and occasional woodpeckers flitting down the railway lines from the wilder commons and the expanses of the Royal Parks to the west. With the office window open as the days warm up, I can hear them tweeting and chirping, squawking and cooing.
Could there be another way to spot them? A friend had pointed me at an iOS app from the Cornell Lab of Ornithology. Merlin is a free bird identification tool that uses both computer vision and computer audio to identify birds, with models that have been trained by bird watchers and ornithologists all over the world. It’s a powerful tool for seeing what might be around when all you can hear is a song somewhere in the trees.
Merlin’s audio model is its own, a neural net that identifies birds from their sounds by analyzing the spectrogram of their calls. It’s a standalone model that can be used even when your phone is offline, although that does limit what it can identify to about 700 mainly North American species. If you want to identify more species, in more of the world, then you need to take a look at another Cornell project, BirdNET, from its K. Lisa Yang Center for Conservation Bioacoustics.
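To get a feel for what “analyzing the spectrogram” involves, here’s a minimal sketch that assumes nothing about Merlin’s or BirdNET’s actual models: it slices audio into overlapping frames and computes a magnitude spectrum for each using a naive discrete Fourier transform. Real systems use FFTs, window functions, and mel-scaled bins feeding a neural net; this just shows the shape of the data the model sees.

```python
import cmath
import math

def spectrogram(samples, frame_size=64, hop=32):
    """Slice a mono signal into overlapping frames and return the
    magnitude spectrum of each frame via a naive DFT.
    (A sketch only; real systems use an FFT and a window function.)"""
    frames = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        frame = samples[start:start + frame_size]
        # Only the first half of the spectrum is needed for real input
        spectrum = []
        for k in range(frame_size // 2):
            acc = sum(s * cmath.exp(-2j * math.pi * k * n / frame_size)
                      for n, s in enumerate(frame))
            spectrum.append(abs(acc))
        frames.append(spectrum)
    return frames

# A 1 kHz test tone sampled at 8 kHz should peak in DFT bin 8
# (bin = freq * frame_size / sample_rate = 1000 * 64 / 8000).
sr = 8000
tone = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(sr // 10)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin)  # → 8
```

A bird call is just a more complicated version of that test tone, and the model learns which patterns of peaks over time belong to which species.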
BirdNET is available for mobile devices too, with options to download regional models for different parts of the world. The algorithm and models are public, and various open-source projects have been working to implement them on different systems, often with BirdNET-Lite, a version designed for lower-powered systems that’s built on the TensorFlow Lite (TFLite) packages.
TFLite supports many different environments, allowing you to run machine-learning models on surprisingly small devices, including the Raspberry Pi. That’s allowed enthusiasts to build an open-source set of tools that turn a Pi into a bird-identifying device that can sit there 24 hours a day, listening for birds day and night.
I had a spare 4GB Pi 4 Model B that had been running my ADS-B rig before I upgraded to a CM4, so it was already in an Argon ONE SSD case with a 240GB SSD. I’d recommend using an SSD with tools like BirdNET, as they write a lot of data to disk, which can shorten the life of an SD card.
If you want to build a BirdNET system on a Raspberry Pi, there’s an easy enough way to get started, with the BirdNET-Pi project. All you need is a recent Raspberry Pi running a 64-bit version of the Bullseye release of Raspberry Pi OS. I set up my system with the latest Raspberry Pi OS Lite release, which is designed for headless systems and removes the UI components.
Once my system was set up, I used the instructions on the BirdNET-Pi GitHub to download and run the installer. It’s a simple script that loads the required packages and configures a Python environment for the BirdNET-Lite machine-learning models. The system is designed to take a 15-second sound sample every minute or so, analyzing it for bird sounds. The only additional hardware needed is a USB sound card and a microphone, as the Pi’s built-in audio jack is output-only.
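That sample-and-analyze cadence is easy to picture as a loop. Here’s a hedged sketch of the idea in Python; the function names and stubs are mine, not BirdNET-Pi’s code, which wires the real recorder and model together for you.

```python
import time

RECORD_SECONDS = 15    # length of each clip (the default described above)
INTERVAL_SECONDS = 60  # a new capture starts roughly once a minute

def capture_loop(record_clip, analyze_clip, cycles,
                 interval=INTERVAL_SECONDS, duration=RECORD_SECONDS,
                 sleep=time.sleep):
    """Record a short clip, analyze it, then idle until the next cycle.
    record_clip and analyze_clip here are stand-ins for the real
    recorder and the BirdNET-Lite model."""
    for _ in range(cycles):
        clip = record_clip(duration)
        analyze_clip(clip)
        sleep(max(0, interval - duration))

# Demo with stubs: no microphone or model needed.
log = []
capture_loop(record_clip=lambda d: f"clip-{d}s.wav",
             analyze_clip=log.append,
             cycles=3,
             sleep=lambda s: None)
print(log)  # → ['clip-15s.wav', 'clip-15s.wav', 'clip-15s.wav']
```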
Access to BirdNET-Pi is through a web browser, with a built-in live view of its log files and a web-based terminal for system administration. Everything is controlled from a simple console, which can even run system updates. The web console uses the Caddy web server, an ideal tool for delivering basic web applications from a Pi or similar without demanding significant resources. I did have one issue here, as the system is configured to use a .local domain out of the box. My network uses a full .co.uk domain, so I had to edit the Caddyfile configuration to use my domain before I could get access to the web UI. This was quick enough, using ssh to log in to my Pi and vi to edit the configuration. A quick reboot later and I could see the BirdNET-Pi UI.
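The change itself is small. A hypothetical sketch of the edit, with my real domain swapped for an example one (the installer-generated Caddyfile contains the actual directives, which can be left alone):

```
# Change only the site address on the first line; keep the
# directives the BirdNET-Pi installer generated below it.
http://birdnet.example.co.uk {
    # ...directives as generated by the installer...
}
```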
Getting the right microphone for your recording setup is important. My first was a simple USB device that worked well enough to show that the system would work, but unfortunately delivered the quality you’d expect for under £5. My second attempt used a USB sound card that promised a microphone input, paired with a basic lapel microphone. This didn’t work, and even after time spent with the Raspberry Pi’s ALSA tooling, it was clear that something was wrong with my hardware.
A search on Amazon showed a USB-based lapel microphone rig that looked promising, and I ordered it. This proved to be a success, using a USB extension cable to keep the mic’s built-in sound card away from interference from the Pi. The microphone lead was thin enough to pass through the closed window frame and long enough for the microphone to dangle in free space.
Over a couple of weeks of operation, and thanks to a tip from a fellow BirdNET user, I found that increasing the sample time to 30 seconds significantly improved accuracy. I was also able to raise the notification confidence level to reduce the risk of false positives (a passing diesel locomotive sounds rather like a Great Bittern to the BirdNET model!). My current setup also uses Raspberry Pi OS’s ALSA tools to increase the microphone gain by 25dB.
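The confidence level is just a threshold on the model’s score for each detection. A minimal sketch of how that gate works, using my own names and numbers rather than BirdNET-Pi’s internals:

```python
# Hypothetical sketch of the notification gate, not BirdNET-Pi's code.
# Each detection carries a model confidence; only those at or above
# the configured threshold trigger a notification.

NOTIFY_THRESHOLD = 0.7  # raise this to cut down on false positives

def should_notify(species, confidence, threshold=NOTIFY_THRESHOLD):
    """Return True if a detection is confident enough to report."""
    return confidence >= threshold

detections = [
    ("Eurasian Wren", 0.91),
    ("Great Bittern", 0.55),   # quite possibly a passing diesel locomotive
    ("Dunnock", 0.78),
]
reported = [s for s, c in detections if should_notify(s, c)]
print(reported)  # → ['Eurasian Wren', 'Dunnock']
```

Raising the threshold trades missed quiet calls for fewer phantom bitterns; the right value depends on how noisy your garden is.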
There’s a lot of information in BirdNET-Pi, from graphs showing what birds are most common at what time of day, to lists of everything detected. Other tools give you a live spectrogram, so you can learn to identify what local sounds look like on the screen. Hopefully tools like this will allow us to mark regular false positives so they can be ignored in future, allowing the base model to be updated and shared with other users.
BirdNET-Pi also includes the open-source Apprise notification tool as part of its package. This is an easy-to-configure command-line tool that uses a URL-like structure to construct messages that can be delivered to any of more than 70 services. I experimented with using it to “tweet the tweets”, but it sends notifications for every detection, meaning a noisy flock of feral parrots in the garden can quickly flood a Twitter account.
I suspect it will be easier to write my own Twitter client for the package in Python to get the more complex and nuanced notifications I want, sending a tweet or similar for the first identification of each bird every day. That way there’ll be a public record of the birds in my garden that won’t be distracting or annoying. The advantage of building my own tool is that I should also be able to attach screenshots of the recorded spectrograms to a notification. I’m also planning on adding new identifications to the Birda social network.
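The core of that notifier is simple deduplication: remember which species have already been reported today, and start afresh each day. A sketch with hypothetical names, leaving out the actual Twitter or Birda API calls:

```python
from datetime import date

class DailyFirstFilter:
    """Pass a detection through only the first time a species is seen
    on a given calendar day. (A sketch; a real notifier would go on
    to post the tweet and attach the spectrogram.)"""

    def __init__(self):
        self._seen = set()  # species already reported today
        self._day = None    # the calendar day the set refers to

    def is_first_today(self, species, when):
        if when != self._day:   # new day: reset the seen set
            self._day = when
            self._seen.clear()
        if species in self._seen:
            return False
        self._seen.add(species)
        return True

f = DailyFirstFilter()
today = date(2023, 5, 28)
print(f.is_first_today("Tawny Owl", today))              # → True
print(f.is_first_today("Tawny Owl", today))              # → False
print(f.is_first_today("Tawny Owl", date(2023, 5, 29)))  # → True
```

With that in front of the notification call, a parakeet flock generates one tweet a day instead of dozens an hour.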
I’ve already had interesting results, capturing the early morning chirps of a local wren, the call of a dunnock deep in the hedge, even the shrieks of the summer swifts high overhead and the sound of a passing woodpecker. However, the most interesting identification was one I almost ignored as a false positive. I kept getting late-night notifications of a tawny owl, which was surprising given I live in an inner suburb of London. I put it down to the siren of a passing police car or ambulance.
But then late one night I was up on our small roof terrace, shutting the doors and windows and bringing the cats in after a warm late May day. And in the distance, there it was, the hoot of an owl, hunting somewhere down the railway lines. There’s still plenty out there to surprise me. It’ll be fascinating to see what else my growing collection of Pi-based sensors finds for me.