Hi, I have a bunch of Raspberry Pis hosting all kinds of stuff and I want a monitoring solution for all of it. What would be your recommendations?

My goal is to have an overview of CPU load, network load, and CPU temperature, and to see what’s going on inside Docker containers, as I have everything dockerized. I’d like the solution to be open source, accessible from a web browser, and to have nice load graphs with history. I don’t want to spend too much time setting it up.

All my Pis are running Raspberry Pi OS, which is Debian-based.

  • TOR-anon1 · 1 point · 1 year ago

    I don’t use Docker, so this may not help you, but I find bpytop and ssh work just fine. :)

  • @snekerpimp@lemmy.world · 10 points · 1 year ago

    Seconding Netdata for the temps and load info, and Portainer for Docker monitoring. Netdata gives you more real-time info than even Glances. Portainer is an easy way to look at logs and such; I don’t use it to manage, I prefer the command line for that. Netdata will give you some Docker info, but not logs.
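    For anyone wanting to try this, a minimal sketch of running Netdata under Docker Compose; the mounts, capabilities, and port follow Netdata’s published container instructions, trimmed to the basics, so treat the details as assumptions to check against current docs:

    ```yaml
    services:
      netdata:
        image: netdata/netdata
        restart: unless-stopped
        ports:
          - "19999:19999"                 # web dashboard
        cap_add:
          - SYS_PTRACE                    # needed for per-process metrics
        security_opt:
          - apparmor:unconfined
        volumes:
          - /proc:/host/proc:ro           # host CPU/mem/net metrics
          - /sys:/host/sys:ro             # host sensors, incl. CPU temp
          - /var/run/docker.sock:/var/run/docker.sock:ro   # container metrics
    ```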

    • @Aux@lemmy.worldOP · 1 point · edited · 1 year ago

      I use Ansible for management; I just want to see nice graphs and maybe get alerts when things go south. Thanks for the recommendation.

  • Mellow · 6 points · 1 year ago

    Grafana, InfluxDB, and Telegraf agents. Easy to set up, with barely any configuration required: everything you asked for is covered by the default Telegraf agent config. There are dashboards with plenty of examples on Grafana’s website.
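    As a sketch of how little configuration this takes: a hypothetical minimal telegraf.conf wiring the stock input plugins for the metrics the OP asked about into InfluxDB 2.x. The host, org, bucket, and token are placeholders, not anything from this thread:

    ```toml
    [agent]
      interval = "10s"                       # collection interval

    # Output: where metrics are pushed (placeholder host and credentials)
    [[outputs.influxdb_v2]]
      urls = ["http://monitor-host:8086"]
      token = "$INFLUX_TOKEN"
      organization = "home"
      bucket = "pis"

    # Inputs: CPU, memory, network, temperature, and Docker containers
    [[inputs.cpu]]
    [[inputs.mem]]
    [[inputs.net]]
    [[inputs.temp]]
    [[inputs.docker]]
      endpoint = "unix:///var/run/docker.sock"
    ```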

    • @Aux@lemmy.worldOP · 2 points · 1 year ago

      What’s the difference between Prometheus and Telegraf? Why do you prefer Telegraf?

      • Mellow · 2 points · 1 year ago

        InfluxDB is a “time series” database for storing metrics: temperatures, RAM usage, CPU usage, each with a timestamp. Telegraf is the client-side agent that sends those metrics to the database (speaking InfluxDB’s line protocol on the wire). Prometheus does pretty much the same thing but is a bit too bloated for my liking, so I went back to Influx.
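        To make the push model concrete, here is a hand-rolled version of what the agent sends: one metric in InfluxDB line protocol, pushed to InfluxDB 2.x’s write API. The host, org, bucket, and token are placeholders, not anything from this thread:

        ```shell
        # One data point: measurement, tag(s), field(s); InfluxDB timestamps it
        point="cpu_temp,host=pi1 celsius=52.6"
        echo "$point"

        # What Telegraf automates, done by hand (commented out; needs a live server):
        # curl -XPOST "http://monitor-host:8086/api/v2/write?org=home&bucket=pis" \
        #      -H "Authorization: Token $INFLUX_TOKEN" \
        #      --data-binary "$point"
        ```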

      • @keyez@lemmy.world · 2 points · edited · 1 year ago

        My work environments use Prometheus, node-exporter, and Grafana. At home I use Telegraf, InfluxDB, and Grafana (plus Prometheus for other app-specific metrics).

        The biggest reason I went with Telegraf and InfluxDB at home is the data-flow direction: Prometheus scrapes data from the configured clients (pull), while Telegraf sends the data to InfluxDB on a configured interval (push). Starting my homelab adventure I had 2 VMs in the cloud and 2 Pis at home, and having Telegraf push the data in to my Pis, rather than going out and scraping, made that remote setup a lot easier. I had InfluxDB set up behind a reverse proxy with auth, so Telegraf was sending data over TLS and only needed to authenticate to that single endpoint.

        That’s the major difference to me, but there are also subsets of other exporters and plugins to tailor the data for each one, depending on what you want.
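        The push-over-TLS arrangement described above boils down to a single output section on each remote machine; the hostname and token here are placeholders for whatever your reverse proxy exposes:

        ```toml
        [[outputs.influxdb_v2]]
          urls = ["https://influx.example.home"]  # reverse proxy terminates TLS
          token = "$INFLUX_TOKEN"                 # auth for the single write endpoint
          organization = "home"
          bucket = "homelab"
        ```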

  • MaggiWuerze · 24 points · 1 year ago

    The standard solution would be Grafana + Prometheus on one server and a node exporter running on each Pi. You register the node exporters in Prometheus, then use Prometheus as a data source for Grafana and build a dashboard there showing whatever metrics you want. It can also show some information from the Docker socket, like the number of running/stopped containers.
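    Registering the node exporters amounts to listing them as scrape targets; a hypothetical prometheus.yml fragment with placeholder hostnames (node_exporter’s default port is 9100):

    ```yaml
    scrape_configs:
      - job_name: "pis"
        scrape_interval: 15s
        static_configs:
          - targets:
              - "pi1.local:9100"
              - "pi2.local:9100"
    ```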

  • @johntash@eviltoast.org · 3 points · 1 year ago

    I didn’t see it recommended yet: Uptime Kuma is really simple if you just want to monitor the basics, like whether a URL works, or ping, TCP, etc., without an agent.

    It doesn’t do CPU/memory style metrics, but I find myself checking it more often because of how simple it is.
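    A minimal sketch of running it, based on the project’s README defaults (the image tag, port, and volume path may have changed since):

    ```yaml
    services:
      uptime-kuma:
        image: louislam/uptime-kuma:1
        restart: unless-stopped
        ports:
          - "3001:3001"                  # web UI
        volumes:
          - ./uptime-kuma-data:/app/data # monitor definitions and history
    ```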

    • @Aux@lemmy.worldOP · 0 points · 1 year ago

      I need CPU and other metrics because recently one of my Docker containers got infected with DDoS software, and the CPU spike was a telltale.

      • TheMurphy · 1 point · 1 year ago

        Omg I have CPU spikes on my Raspberry Pi. Maybe it’s infected too, and how would I ever find out?

        Is there some software I can run to check?

        • @Aux@lemmy.worldOP · 0 points · 1 year ago

          Are they small spikes spread across time, or large chunks of heavy load, like 80%+ for hours? If it’s the former, it’s probably just normal operation. Otherwise, check your running processes and start tracking what’s going on during the high loads.
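          A few standard starting points for that kind of tracking, all stock tools (the last two lines are commented out since they assume Docker and Raspberry Pi OS respectively):

          ```shell
          ps aux --sort=-%cpu | head -n 5   # biggest CPU consumers right now
          uptime                            # 1/5/15-minute load averages
          # docker stats --no-stream        # per-container CPU/memory usage
          # vcgencmd measure_temp           # SoC temperature on Raspberry Pi OS
          ```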

          • TheMurphy · 1 point · 1 year ago

            I would say it’s 100% load for maybe 3 minutes, so maybe it’s normal.

            It makes my system overload so my PiHole stops processing.

            But it sounds like it might be normal, just a background service using too much sometimes?

  • @Decronym@lemmy.decronym.xyz [bot] · 1 point · edited · 1 year ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    DNS            Domain Name Service/System
    PiHole         Network-wide ad-blocker (DNS sinkhole)
    SAN            Storage Area Network
    SSL            Secure Sockets Layer, for transparent encryption
    TLS            Transport Layer Security, supersedes SSL

    3 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.

    [Thread #353 for this sub, first seen 14th Dec 2023, 15:25]