It was hard to find a good title for this blog post, so let me explain it a bit further.
You may have APIs or applications that are secured with IP whitelisting, or that are only reachable via DNAT when a trusted IP connects. Services like Pingdom, UptimeRobot and others therefore publish the IP addresses they monitor from, so you can whitelist them.
In the case of Pingdom, this was always a pain to maintain: the probes change over time, and Pingdom only publishes simple lists of IP addresses (IPv4 and IPv6) on a webpage, so I had to grab them manually and add them as new hosts to our firewall in order to whitelist them.
Sometimes we even had false positives, when Pingdom checked a service from a new, not yet whitelisted IP and therefore concluded our app was down.
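Grabbing those lists by hand is exactly the step worth automating. Here is a minimal sketch of fetching and parsing them; note that the list URLs below are my assumption about where Pingdom publishes its plain-text probe lists, not something taken from Probecollector itself:

```python
from urllib.request import urlopen

# Assumed locations of Pingdom's plain-text probe lists; adjust
# these if Pingdom changes where it publishes them.
PROBE_LISTS = (
    "https://my.pingdom.com/probes/ipv4",
    "https://my.pingdom.com/probes/ipv6",
)

def parse_probe_list(text):
    """The lists are plain text with one IP address per line."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def fetch_probes(urls=PROBE_LISTS):
    """Download and combine all probe lists into one flat list of IPs."""
    probes = []
    for url in urls:
        with urlopen(url) as resp:
            probes.extend(parse_probe_list(resp.read().decode()))
    return probes
```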
Because I recently started to seriously improve my Python, and because I wanted to end the pain of maintaining those probes, I took my old Bash script named "Pingdom Probes as a Service", rewrote it in Python, and made it much smarter.
Automating maintenance with Probecollector
So, how did I solve this maintenance hell? With Probecollector, of course.
I work a lot with Sophos UTM firewall clusters, and they have a feature called DNS Groups. A DNS group is defined by a single DNS hostname, for this example pingdomprobes.sysorchestra.com. The UTM then resolves not just one but all available A/AAAA records for that hostname. For this example there are many of them, and you can also resolve them yourself with dig A pingdomprobes.sysorchestra.com.
The UTM then saves all of them into this DNS group definition, and you can use this single definition to whitelist all probes at once.
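The same "resolve everything behind one hostname" behavior can be sketched with Python's standard library; this is an illustration of what a DNS group does, not code from the UTM or from Probecollector:

```python
import socket

def resolve_all(hostname):
    """Resolve every available A (IPv4) and AAAA (IPv6) record for a
    hostname, similar to what a UTM DNS group definition does."""
    addresses = set()
    for family in (socket.AF_INET, socket.AF_INET6):
        try:
            for info in socket.getaddrinfo(hostname, None, family, socket.SOCK_STREAM):
                addresses.add(info[4][0])  # the IP address in the sockaddr tuple
        except socket.gaierror:
            pass  # no records of this address family
    return sorted(addresses)
```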
Here is how you can do it yourself.
Run Probecollector with cron
I integrated Cloudflare as the first DNS provider into Probecollector, as well as Pingdom as the first monitoring service.
That being said, you need a free or paid Cloudflare account to use the currently released version of Probecollector.
Probecollector is built to run as a recurring cronjob, once a day or once a week; that's up to you. If the subdomain does not exist yet, it will create it, load in all A/AAAA records, and save an additional TXT record with the current UNIX timestamp. You can then also use Probecollector as a Nagios-compatible check to see when the last Probecollector run occurred and whether it was too long ago (e.g. via WARN/CRIT parameters).
If the domain and some probes already exist, it fetches them, compares them to the latest list of available probes, and updates/deletes records on Cloudflare only where necessary. Older versions of Probecollector, named PPaaS, always deleted all records and recreated them afterwards, which was not very efficient.
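The diff-based approach boils down to two set differences. Here is a minimal sketch of that idea, with function and variable names of my own choosing rather than Probecollector's actual internals:

```python
def plan_sync(existing_records, latest_probes):
    """Compare the IPs already stored in DNS with the latest probe list
    and return only the changes, instead of recreating everything
    (the old wipe-and-recreate behavior of PPaaS)."""
    existing, latest = set(existing_records), set(latest_probes)
    to_add = sorted(latest - existing)      # new probes to create in DNS
    to_delete = sorted(existing - latest)   # stale records to remove
    return to_add, to_delete
```

Unchanged records never touch the Cloudflare API at all, which keeps the daily run cheap.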
I won't go into the configuration details here, as they're described in the GitHub repo of Probecollector. Please read them first to install and configure Probecollector.
You can manually run Probecollector with
python probecollector.py -u <domain>. To automate it, just insert a cronjob into the crontab of a user of your choice, like this:
0 5 * * * python /home/user/probecollector/probecollector.py -u pingdomprobes.sysorchestra.com
That's it. Probecollector will now run daily at 5:00 am and update the subdomain with the current probes.
Check the probes domain with Nagios
As I said, Probecollector also has a built-in method to check whether a domain has been updated recently, according to user-defined WARNING and CRITICAL parameters (with defaults). The output is Nagios/Icinga-compatible, so you can easily use Probecollector as a Nagios check in your existing monitoring. Simply use it like this:
python probecollector.py -q pingdomprobes.sysorchestra.com -w 86400 -c 172800
If the domain has been updated recently, it will show something like this:
OK - Last update of pingdomprobes.sysorchestra.com was less than 86400s ago.
If the last update is too long ago, the output looks like this instead:
CRITICAL - Last update of pingdomprobes.sysorchestra.com was more than 200s ago!
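The threshold logic behind such a check is simple: compare the age of the stored TXT timestamp against the WARNING and CRITICAL values and emit the matching Nagios exit code. A minimal sketch, with names and defaults of my own rather than Probecollector's actual code:

```python
import time

# Standard Nagios plugin exit codes
OK, WARNING, CRITICAL = 0, 1, 2

def check_last_update(domain, last_update_ts, warn_s=86400, crit_s=172800, now=None):
    """Return an (exit_code, message) pair in Nagios plugin style, based
    on how long ago the domain's TXT timestamp was written."""
    now = time.time() if now is None else now
    age = now - last_update_ts
    if age >= crit_s:
        return CRITICAL, "CRITICAL - Last update of %s was more than %ds ago!" % (domain, crit_s)
    if age >= warn_s:
        return WARNING, "WARNING - Last update of %s was more than %ds ago!" % (domain, warn_s)
    return OK, "OK - Last update of %s was less than %ds ago." % (domain, warn_s)
```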
Get rid of Probecollector?
If you don't want to use Probecollector anymore, you can simply erase its DNS traces, i.e. purge all resource records and the subdomain, using the purge command:
python probecollector.py -p pingdomprobes.sysorchestra.com
This purges everything. Afterwards you simply remove the Probecollector folder and you're done.
If you want to reuse it later, simply run it once with the
-u parameter and the domain will be recreated from scratch.
I also use Probecollector daily, so there is already a subdomain available that you can use:
pingdomprobes.sysorchestra.com. It stores all current A/AAAA records from Pingdom and is updated daily at 5:00 am, as shown above. Feel free to use it and check the domain in your own monitoring.
As this is my first Python project, feel free to contribute to it. It's released under the MIT license, and I appreciate issues and pull requests alike.
I will also actively maintain Probecollector from now on and will add more DNS and monitoring services in the future. Let me know which ones you'd like to see most.
This project took only a good day to build, but it will save me hours of maintenance in the months and years to come, and I hope it will do the same for you.