Category: tech_stuff

  • Remote Server Monitoring with Uptime Kuma

    When you have as much random crap as I do running on your home network, it is nice to have something in place that will let you know when things aren’t working properly. There are a few free uptime monitoring solutions out there, but Uptime Kuma may be the easiest to set up and use.

    The Basics

    Uptime Kuma is a simple tool that can let you know if your self-hosted services are online. Of course, this won’t work so well if Uptime Kuma itself goes down, so it makes sense to host this off-site. In my case, I bought a cheap VPS from my domain registrar and threw it on there. It can connect to basically any service with a little bit of configuration, and it can also send notifications many different ways.

    Connecting Online Services

    A service that serves a web page (that is accessible from the internet) should work without any extra setup at all. Just point Uptime Kuma to the URL and tell it what status codes it should be expecting.

    I would suggest allowing a few retries for any service that doesn’t really stay 100% operational all the time, or else you may get quite a few false alarm notifications.

    Connecting Local Services

    If you want to remotely monitor the uptime of services that you don’t want to expose to the internet, you need to get a little more creative. You could set up a VPN between your VPS and your home network, but I decided to do something a little different. I set up a Flask webserver (running via Gunicorn) on a Raspberry Pi that handles this for me.

    The Python script is actually very basic, but it may grow more complicated depending on the type of services you intend to connect to.

    from flask import Flask, jsonify
    import requests
    from icmplib import ping
    
    app = Flask(__name__)
    
    # Simple proxy route: fetch the local web service and pass its HTTP status code along.
    @app.route('/local_service', methods=['GET'])
    def return_local_service():
        # Replace the IP and port with those of the local service you want to monitor.
        response = requests.get('http://192.168.1.9:port/')
        data = {
            "local_service": response.status_code,
        }
        return jsonify(data)
    
    # ICMP route: ping the host once and report whether it answered.
    @app.route('/local_ping', methods=['GET'])
    def return_local_ping():
        result = ping('192.168.1.10', count=1)
        status = result.packets_sent == result.packets_received
        data = {
            "local_ping": str(status),
        }
        return jsonify(data)
    
    if __name__ == '__main__':
        app.run(debug=True)

    This script has two types of API routes. One is just a simple proxy that will return the exact status code of the local website. The other will ping the server via ICMP and return ‘True’ if the server is accessible. The local ping option in Uptime Kuma will look a little bit different.

    Instead of just choosing an HTTP monitor, we use a JSON Query monitor to specify what we are expecting to receive. In this case, we are looking for ‘local_ping’ and it should return ‘True’.
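    For illustration, here is a rough standalone Python sketch of the check that the JSON Query monitor ends up performing against the /local_ping route. The proxy address and port below are placeholders for wherever Gunicorn is actually listening; this is just an approximation of the check, not how Uptime Kuma itself is implemented.

    import requests
    
    # Placeholder address for the Raspberry Pi proxy; adjust to wherever Gunicorn is listening.
    PROXY_URL = "http://192.168.1.x:8000/local_ping"
    
    def check_local_ping():
        # The proxy returns JSON like {"local_ping": "True"} when the host answers the ping.
        response = requests.get(PROXY_URL, timeout=10)
        return response.json().get("local_ping") == "True"
    
    if __name__ == "__main__":
        print("UP" if check_local_ping() else "DOWN")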

    Power Outages

    The Raspberry Pi running the proxy script, my router, and my Fios ONT are all powered by a UPS, so even if the power goes out, the Pi should stay online. Unfortunately, the server rack in the garage will go down (no UPS yet), but we can solve that problem when we feel like spending hundreds of dollars on a rack-mounted UPS.

    Notifications

    There are many different ways to get Uptime Kuma to send notifications, but the easiest is probably to use Signal. You can spin up signal-rest-api on your VPS to be able to send Signal messages to yourself or others. I already use Signal to message family members, so this worked out great for me.

    One issue you may run into is the signal-rest-api container randomly changing IPs. I have no idea what causes this, and I haven’t implemented a solution yet, so sometimes I don’t get notifications when things go down, which kind of defeats the whole purpose. All I do to combat this is manually test the notifications every few weeks or so.

    Status Pages

    You can put all of the monitors into groups and add them to status pages. This works great when you have self-hosted services that friends or family rely on. This way instead of getting texts from people whining about things not working, you can just tell them to check the status page.

  • Homelab NAS on a Dell PowerEdge R730xd

    Have you ever wondered if you could turn an ancient 11-year-old server into a relatively cheap NAS? Well, it turns out you can. This isn’t a tutorial, but it does go into some detail on what exactly is sitting in my server rack in my garage.

    The Server

    Sometimes I browse eBay, and unfortunately I have access to a credit card. When I saw this bad boy sitting on the virtual shelf, I couldn’t stop myself.

    Just think of all that juicy hard drive space. Anyway, one small trip down the rabbit hole later, I decided screw buying a NAS chassis, I’ll just build one myself. Why pay $600 for a 4-bay, 2-core, 4GB RAM Synology DiskStation when I can get a 12-bay, 20-core, 96GB RAM decrepit old man of a server for $300? That’s half the price, 10 times the CPU cores, and 24 times the RAM.

    The Drives

    Of course, my genius plan is slightly hindered by the fact that the HDDs themselves are going to absolutely break the bank. Fortunately, you can get recertified Seagate drives on eBay for a pretty reasonable price. I snagged six 14TB Exos drives for $1,017 delivered.

    All of the drives passed their SMART tests, so I would say it was a reasonably acceptable purchase.

    The Networking

    Of course, what NAS is complete without a 10Gb network card for inter-server communication, as well as another 10Gb card that can run in 2.5Gb mode for connecting to the rest of your network devices? The beauty of using 10+ year old enterprise hardware is that used NICs are crazy cheap. I only paid $40 for a pair of (questionably genuine) Intel X540-T2s.

    I also grabbed an Intel X550-T2 for $90. I’m not sure if that was the right decision, but I wanted the 2.5Gb capability. Now, obviously, 10Gb switches are not cheap, so instead of even bothering, I just connected my Proxmox server directly to my new NAS.

    The Setup

    After a solid 35 seconds of serious research, I decided that TrueNAS Scale would be the perfect NAS operating system. I can’t say that I was wrong here, but I also haven’t ever used anything else. I put all the drives in a 6-wide RAIDZ1, which resulted in a usable 60.9 TiB of storage. This more than meets my needs, as I am really only using this for very legally acquired media storage and PC backups. I also don’t particularly care about this data, so a RAIDZ1’s single allowable disk failure doesn’t bother me.
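    As a quick sanity check on that number, here is a back-of-the-envelope sketch of where the usable capacity comes from; the few TiB that go missing relative to the raw figure are ZFS metadata, padding, and reserved space.

    # Rough RAIDZ1 capacity estimate for six 14 TB drives (one drive's worth of parity).
    drives = 6
    drive_tb = 14              # marketing terabytes (10**12 bytes)
    data_drives = drives - 1   # RAIDZ1 gives up one drive to parity
    
    raw_bytes = data_drives * drive_tb * 10**12
    raw_tib = raw_bytes / 2**40
    print(f"{raw_tib:.1f} TiB before ZFS overhead")  # ~63.7 TiB; TrueNAS reports 60.9 TiB usable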

    The Software

    Obviously, with 20 cores to go around and a cool 94.2 GiB of RAM, we have to put this old man to work one way or another. Unfortunately, it really is just super overkill for a NAS. Even so, I went ahead and threw NextCloud on there for good measure. The TrueNAS NextCloud app recommended 2 CPUs and 4GB of RAM, so we were in the clear on that front.

    I don’t know why they include a bunch of random pictures with new NextCloud installs, but the frog is cute.

    Tech Debt

    TrueNAS also has apps for installing lots of the things that I already have running on my Proxmox server. I could be running Jellyfin and all the associated services on my NAS instead of my VM box. But I already set it all up, and I don’t want to do it again. So instead, I just created a ~30TB share for all of my media.

    The 10Gb link between the VM and the NAS means that no matter what is happening on the NAS/Proxmox server, we should be able to shove a 4k video stream across the link without much issue. Maybe one day I will consolidate some of this to the NAS, but with GPU passthrough already set up, it will be some time before I feel like taking the plunge.
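    For a rough sense of the headroom, here is a quick back-of-the-envelope calculation; the stream bitrate below is just an assumed ballpark for a heavy 4K remux, not a measurement.

    # How many assumed ~80 Mbps 4K streams fit through a 10Gb link?
    link_mbps = 10_000
    stream_mbps = 80   # assumed bitrate for a high-bitrate 4K remux
    print(link_mbps // stream_mbps, "streams before the link becomes the bottleneck")  # 125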

  • Raspberry Pi Temperature/Weather Sensors

    Overview

    This is a simple tutorial covering how to turn a Raspberry Pi (Zero W in this case) into a sensor that can log weather data in your home.

    Step One: Get the Stuff

    • A Raspberry Pi Zero W (or Zero 2 W). This can technically be done on a Pi Pico as well (don’t ask me how because I don’t know).
    • A power supply for the Pi (AliExpress works for that; I use a 5V 3A supply)
    • An SD card fast enough for Pi use (I used MicroCenter brand Speed Class 10 cards that were 8 or 16GB)
    • A Bosch BME280 digital sensor (AliExpress is a good place to snag these)
    • Some breadboard jumper cables (Amazon works)
    • (Optional) Any Linux machine to handle a database and query script

    Step Two: Install Raspberry Pi OS

    I’m writing this tutorial so a 12-year-old can follow it, so bear with me. First, we need to set up the Pi that we will use for the sensor. To start, let’s image the SD card and get the Pi up and running. To do this, we will use the Raspberry Pi Imager tool.

    I’m using a Pi Zero W, so the OS of choice is Raspberry Pi OS Lite (32-bit). You can use the built-in settings editor to set up the Wi-Fi prior to the first boot.

    If you set up the Wi-Fi this way, you probably want to enable SSH and password login as well.

    Next, insert your finished SD card and plug your Pi in. After a good 5-10 minutes (for the Zero W), you should be able to see a new device on your wireless network to SSH into. Congrats, you now have a working Pi!

    Step Three: Configure the Pi for the BME280

    Now that we have a working Pi, we’re going to set it up so it is ready to communicate with our fancy digital sensor. To start, we need to enable I2C via the raspi-config command.

    sudo raspi-config

    As we are setting up the I2C interface, we are going to choose Interface Options.

    Then we are going to select the I2C option to enable the automatic loading of the I2C kernel module.

    Next, simply press Yes.

    Now that we have enabled the I2C interface, we need to install some packages to actually facilitate the communication to our sensor. First, update the package lists and then install i2c-tools.

    sudo apt-get update
    sudo apt-get install i2c-tools -y

    Step Four: Physical Sensor Install

    Now that we have the Pi configured and ready to read data from our sensor, we need to actually plug it in. First, we’ll solder the pins that came with the sensor into the BME280 itself. That way we can connect the jumpers to the pins instead of soldering wires directly to the PCB. After soldering, the sensor should look something like this.

    Next, we also have to solder the pins into the Pi Zero itself. For prototyping purposes, you can use a breadboard or any other connection you want. If you choose to solder pins to the Pi, it should look something like this afterwards (no judging of soldering skills allowed).

    Now that the pins are soldered, we need to connect the sensor pins to the Pi pins. For reference, here is the Pi Zero W pin layout.

    We need to connect the following pins:

    Pi Zero W Pin            BME280 Pin
    Pi 3V3 Power (Pin 1)     BME280 VCC
    Pi SDA I2C (Pin 3)       BME280 SDA
    Pi SCL I2C (Pin 5)       BME280 SCL
    Pi Ground (Pin 9)        BME280 GND

    The finished product should look something like this; take note of the wire colors:

    Now that we have everything physically connected, we can test the I2C connection to the sensor. To do this, run the following command:

    i2cdetect -y 1

    You should see something like this:

    Step Five: Create the Sensor Python Script

    Now that our BME280 is all set up and ready to go, we need a way to remotely poll the Pi to find out what the current sensor reading is. To do this, we are going to create a small Python webserver with Flask. First, let’s install the required packages. Since we’re using Raspberry Pi OS Lite (where the system Python is externally managed), we need to install the Python modules via apt instead of pip.

    sudo apt-get install python3-gunicorn python3-flask python3-bme280 python3-requests python3-smbus2 -y

    After installing everything we need, let’s create the following Python script at ~/sensor.py

    from flask import Flask, jsonify
    import smbus2
    import bme280
    
    app = Flask(__name__)
    
    @app.route('/', methods=['GET'])
    def hello_world():
        return "<p>This is not an API route. For sensor info, navigate to /read</p>"
    
    @app.route('/read', methods=['GET'])
    def read_sensor():
        # The BME280 sits on I2C bus 1 at its default address of 0x76.
        port = 1
        address = 0x76
        bus = smbus2.SMBus(port)
    
        # Load the sensor's factory calibration data, then take a single reading.
        calibration_params = bme280.load_calibration_params(bus, address)
        data = bme280.sample(bus, address, calibration_params)
    
        response = {
            'id': data.id,
            'timestamp': data.timestamp,
            'temperature': data.temperature,
            'pressure': data.pressure,
            'humidity': data.humidity,
        }
    
        bus.close()
    
        return jsonify(response)

    This is a simple Flask web server that will create an API route on /read. This will allow a separate Python script to query the sensor and retrieve the reading. Let’s see if the server works by running the following command:

    flask --app sensor run --host=0.0.0.0

    Next, we can navigate to the second address shown in the Flask output (the Pi’s LAN address) and test the web server.

    Now let’s test the API route that we created. Simply add /read to the end of the URL.

    If everything was set up correctly, you should see something like the above. We have the humidity in percent, the sensor id (this changes between readings; it doesn’t really matter), the pressure in hectopascals, the temperature in Celsius, and the timestamp.
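    If you would rather check the sensor from another machine’s terminal, here is a quick sketch using Python; the IP is a placeholder for your Pi’s address, and the Fahrenheit conversion is just for convenience.

    import requests
    
    # Replace the IP with your Pi's address; the Flask dev server listens on port 5000.
    reading = requests.get("http://192.168.1.XXX:5000/read", timeout=5).json()
    
    fahrenheit = reading["temperature"] * 9 / 5 + 32
    print(f"{fahrenheit:.1f} F, {reading['humidity']:.1f}% RH, {reading['pressure']:.1f} hPa")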

    Step Six: Create Systemd Gunicorn Service

    Now that we know our web server works, let’s create a permanently running systemd service so that we can always access our Pi sensor. First, let’s stop the temporary server and create a file at /etc/systemd/system/sensor.service:

    [Unit]
    Description=Gunicorn instance to serve sensor webserver
    After=network.target
    
    [Service]
    User=root
    Group=www-data
    WorkingDirectory=/home/user/
    ExecStart=/usr/bin/python3 -m gunicorn -w 4 -b 0.0.0.0:5000 'sensor:app'
    Restart=always
    RestartSec=3
    
    [Install]
    WantedBy=multi-user.target

    You will need to replace user in /home/user/ with your own username, or change the path to the directory that contains your sensor.py file. After you create the file, we can enable and start our new service.

    sudo systemctl daemon-reload
    sudo systemctl enable sensor.service
    sudo systemctl start sensor.service

    We’re done! The Pi is ready to operate as a standalone temperature sensor. Obviously, nothing is currently polling our sensor though.

    Optional: Polling Script/Database Setup

    To read the sensors, I set up a Python script that polls all three of my sensors and stores the readings in a Postgres database.

    I’m not going to go over installing Postgres or MySQL or anything, but I’ll share the script I use to store my data. Here’s the Python script:

    import requests
    import psycopg2
    from json.decoder import JSONDecodeError
    
    # Connection details for the Postgres instance that stores the readings.
    conn = psycopg2.connect(database="database_name",
                            host="192.168.1.XXX",
                            user="user",
                            password="password",
                            port="5432")
    
    cursor = conn.cursor()
    
    def read_sensor(ip, offset, name):
        try:
            response = requests.get("http://" + ip + ":5000/read", timeout=5)
            reading = response.json()
    
            # Apply the per-sensor calibration offset (in Celsius), then convert to Fahrenheit.
            faren = ((reading.get('temperature') + offset) * 9/5) + 32
            hum = reading.get('humidity')
            pressure = reading.get('pressure')
            sensor_id = reading.get('id')  # available from the sensor, but not stored
    
            cursor.execute('INSERT INTO readings (temperature, pressure, humidity, sensor) VALUES (%s, %s, %s, %s)',
                           (faren, pressure, hum, name))
        except requests.exceptions.HTTPError as errh:
            print("Http Error:", errh)
        except requests.exceptions.ConnectionError as errc:
            print("Error Connecting:", errc)
        except requests.exceptions.Timeout as errt:
            print("Timeout Error:", errt)
        except requests.exceptions.RequestException as err:
            print("Oops, something else:", err)
        except JSONDecodeError as errde:
            print("Decoding JSON has failed:", errde)
    
    
    # Hardcoded sensor IPs, calibration offsets (in Celsius), and names.
    read_sensor("192.168.1.XXX", 0.4, "Downstairs")
    read_sensor("192.168.1.XXX", -.2, "Living Room")
    read_sensor("192.168.1.XXX", -.5, "Bedroom")
    
    conn.commit()
    cursor.close()
    conn.close()
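    The INSERT above assumes a readings table already exists. The schema isn’t shown anywhere in this post, so here is a minimal sketch of one that would satisfy it; the reading_time column and its default are my assumption, added so Grafana has a time axis to graph against.

    import psycopg2
    
    conn = psycopg2.connect(database="database_name", host="192.168.1.XXX",
                            user="user", password="password", port="5432")
    cursor = conn.cursor()
    
    # Hypothetical schema; the column names match the INSERT in the polling script above.
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS readings (
            id           SERIAL PRIMARY KEY,
            reading_time TIMESTAMPTZ NOT NULL DEFAULT now(),  -- assumed, for graphing over time
            temperature  DOUBLE PRECISION,
            pressure     DOUBLE PRECISION,
            humidity     DOUBLE PRECISION,
            sensor       TEXT
        )
    """)
    
    conn.commit()
    cursor.close()
    conn.close()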

    I only have three sensors running, so I just hardcoded the IPs and names in the script. I calibrated my BME280s against a cheap room thermometer and adjust the readings with a saved offset. I run this from cron once every minute.
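    The crontab entry for that would look something like the line below; the script path and filename are placeholders for wherever you saved the polling script.

    * * * * * /usr/bin/python3 /home/user/poll_sensors.py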

    I have a local Grafana instance running as well, which means I can view my temperatures over time with dashboards like this:

    What is the point of all this? I don’t really know either 🙂