My side project is a JS canvas library. I don't monitor it: no logs, no tracking, no service uptime stuff. I do check the GitHub repo daily, just in case someone has wandered in and asked a question (3 questions so far in 2022). In a moment of vanity I did set up a Discord channel; I check it weekly or so just to make sure nobody's messed up its pristine state with comments and stuff.
What I do monitor is the competition's GitHub repos. Some of them are a lot busier than my library's repo. Sometimes people ask questions like "how do I do X and/or Y with your library", which gets me thinking about how X and/or Y could be achieved in my own library, and sometimes that ends with me committing code to solve the ask - just for the heck of it. It helps keep me engaged with the project.
Sentry and CloudWatch logs mainly, with paging set up if anything bad happens.
Though I run a status page + uptime monitoring service and also dogfood it to monitor my own services (https://onlineornot.com)
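For context, the Sentry side of a setup like that is only a few lines in a Python service; everything below (DSN, sample rate, the failing function) is a placeholder, not this commenter's code:

```python
# Minimal Sentry hookup for a Python service (DSN is a placeholder).
# Unhandled exceptions are reported automatically once init() runs.
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=0.1,  # sample 10% of transactions for performance data
)

def risky_operation():
    raise RuntimeError("something bad happened")

try:
    risky_operation()
except Exception as exc:
    # Handled exceptions can still be reported explicitly
    sentry_sdk.capture_exception(exc)
```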
Created a "health check" web page that does a whole series of checks on page load (free diskspace, db connectivity, email queue etc etc). If no errors then a success keyword is displayed on the page.
UptimeRobot checks the page every 5 minutes, and if the success keyword is missing I get an email. I can then load the same page and see which of the status checks failed.
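Roughly, the idea looks like this (a minimal sketch assuming Flask; the checks and the success keyword are placeholders, and UptimeRobot is set to do a keyword check for the success string):

```python
# Minimal "health check" page: run each check, list the results,
# and print a success keyword only if every check passed.
# UptimeRobot then does a keyword check for "ALL_CHECKS_PASSED".
import shutil
from flask import Flask

app = Flask(__name__)

def check_disk_space():
    # Fail if less than 1 GB free on the root filesystem
    return shutil.disk_usage("/").free > 1_000_000_000

def check_database():
    # Placeholder: replace with a real "SELECT 1" against your DB
    return True

CHECKS = {"disk space": check_disk_space, "database": check_database}

@app.route("/health")
def health():
    results = {}
    for name, check in CHECKS.items():
        try:
            results[name] = check()
        except Exception:
            results[name] = False
    lines = [f"{name}: {'ok' if ok else 'FAILED'}" for name, ok in results.items()]
    if all(results.values()):
        lines.append("ALL_CHECKS_PASSED")  # the keyword UptimeRobot looks for
    return "<br>".join(lines)

if __name__ == "__main__":
    app.run(port=8080)
```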
I use:
- Google Search Console: to monitor any dramatic SEO changes. I don't monitor ranks, just click-throughs from organic results.
- honeybadger.io for errors and uptime. Free tier.
- plausible.io for analytics
Some of my projects are repo / dev related. So I monitor their GitHub repos, and those of competitors. I built a tool to monitor repo growth/stats. Kind of like plausible but for a repo: https://RepoRanks.com
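The raw numbers behind that kind of repo dashboard can come straight from the GitHub REST API; a rough sketch (the repo is just an example, and unauthenticated requests hit rate limits sooner):

```python
# Rough sketch of polling a repo's headline stats from the GitHub REST API.
# A token raises the rate limit but isn't required for public repos.
import requests

def repo_stats(owner: str, repo: str) -> dict:
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "open_issues": data["open_issues_count"],
        "watchers": data["subscribers_count"],
    }

if __name__ == "__main__":
    print(repo_stats("torvalds", "linux"))  # example repo
```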
Errors -> rollbar (free tier)
Website being reachable -> uptime robot
App metrics -> influxdb + grafana on home server, because it's already there and ready (rough sketch after this list)
Alerts -> grafana sends messages to telegram so they're received on my phone
Web metrics -> matomo on home server
Logs -> mostly just local, but some remote + home server stuff is logged to loki + grafana
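For the app-metrics piece, writing a point into InfluxDB 2.x from Python is a handful of lines; the URL, token, org, bucket and metric names below are placeholders, and Grafana then queries the same bucket:

```python
# Sketch of pushing an app metric to InfluxDB 2.x, which Grafana then reads.
# URL, token, org and bucket are placeholders for a home-server instance.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://homeserver:8086", token="my-token", org="home")
write_api = client.write_api(write_options=SYNCHRONOUS)

point = (
    Point("app_requests")
    .tag("app", "side-project")
    .field("duration_ms", 42.0)
)
write_api.write(bucket="side-projects", record=point)
client.close()
```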
I set up a MongoDB database for the logs of all my apps, and built an admin dashboard to search through it.
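A bare-bones sketch of that pattern with pymongo (connection string, database, collection and field names are all made up):

```python
# Bare-bones app logging into MongoDB; an admin dashboard can then
# query the same collection. Connection string and names are placeholders.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["side_projects"]["logs"]

def log_event(app: str, level: str, message: str, **extra):
    logs.insert_one({
        "app": app,
        "level": level,
        "message": message,
        "extra": extra,
        "ts": datetime.now(timezone.utc),
    })

# Example: record an error, then pull the latest errors for the dashboard
log_event("my-app", "error", "payment webhook failed", order_id=123)
recent_errors = logs.find({"level": "error"}).sort("ts", -1).limit(20)
for doc in recent_errors:
    print(doc["ts"], doc["app"], doc["message"])
```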
I just googled for a free uptime monitor and used that. Occasionally get a downtime email.
I have logs on my server but I choose not to check them because I'm afraid I'll have to debug something and I don't feel like doing that
For small projects I just have all the errors emailed to my inbox, along with the user ID.
I then use Lucky Orange so I can review what the user was doing in the UI when they triggered the error.
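The email-on-error part can be as small as this (a sketch; the SMTP host, addresses and user ID are placeholders):

```python
# Minimal "email me every error, with the user ID" handler.
# SMTP server and addresses are placeholders.
import smtplib
import traceback
from email.message import EmailMessage

def email_error(exc: Exception, user_id: str):
    msg = EmailMessage()
    msg["Subject"] = f"[my-app] error for user {user_id}: {exc}"
    msg["From"] = "errors@example.com"
    msg["To"] = "me@example.com"
    msg.set_content(traceback.format_exc())
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

try:
    1 / 0  # stand-in for the failing code path
except Exception as exc:
    email_error(exc, user_id="user-42")
```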
By using them!
I'm not worried about availability beyond being able to personally use them.
I do have a couple of Statping instances, but they're not that useful.
Matomo for analytics - it has every Google Analytics feature I need. I don’t want to blight my users with a stupid cookie warning (thanks EU GDPR :-/ ) so I self-host it with cookies disabled. https://matomo.org
Prometheus, Grafana and Slack for metrics and alerting.
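The Prometheus side is a few lines with the official Python client (the metric names here are invented); Grafana dashboards and Slack alerts then hang off whatever Prometheus scrapes:

```python
# Sketch of exposing app metrics for Prometheus to scrape at :8000/metrics.
# Metric names are invented; Grafana dashboards/alerts read from Prometheus.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

def handle_request():
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)  # serves /metrics
    while True:  # simulate steady traffic for the sketch
        handle_request()
```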
BetterUptime to get notified if it goes down.
I deliberately avoid measuring traffic.
That's all
Uptime Kuma runs on dokku. It's great.
StatusCake and Uptime Robot, on the ones I kinda care about at least.
That's the neat part, I don't. Cause my side projects are never completed
Ain't got 2 GB of RAM to spare for Grafana, so I use cron jobs and gnuplot to build the shittiest-looking replacement (rough sketch below).
https://www.marginalia.nu/stats/
https://www.marginalia.nu/status/
(I also run the sites on the motto "89.9999% also has five nines"; so there's that)
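A rough sketch of that kind of cron + gnuplot setup, under assumptions of my own: a cron-driven script appends samples to a CSV and then has gnuplot redraw a PNG; the file names and the load-average metric are just examples.

```python
# Cron runs this every few minutes: append a sample to a CSV,
# then have gnuplot redraw a PNG from the whole file.
# File names and the metric (1-minute load average) are examples only.
import os
import subprocess
import time

CSV = "/var/log/side-project/load.csv"

with open(CSV, "a") as f:
    f.write(f"{int(time.time())},{os.getloadavg()[0]:.2f}\n")

GNUPLOT_SCRIPT = f"""
set terminal png size 800,300
set output '/var/www/stats/load.png'
set datafile separator ','
set xdata time
set timefmt '%s'
set format x '%H:%M'
plot '{CSV}' using 1:2 with lines title 'load avg (1m)'
"""

# gnuplot reads the plotting commands from stdin
subprocess.run(["gnuplot"], input=GNUPLOT_SCRIPT, text=True, check=True)
```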