AUSTIN'S NERDY THINGS

Nerding out, one post at a time

Categories
Blog Admin


INTRODUCTION / TABLE OF CONTENTS

 * Post author By Austin
 * Post date March 4, 2021
 * No Comments on Introduction / Table of Contents
 * Sticky post

The most recent post is the next post down!

I intend to use this site to document my journey down the path of nerdiness
(past, present, and future). I've been learning over the years from various
sites like what I hope this one becomes, and I want to give back. I have a wide
variety of topics I'd like to cover. At a minimum, posting about my activities
will help me document what I've learned so I can refer back to it in the
future. I'll also post about projects we do ourselves around the house instead
of hiring professionals, saving big $$$$ in the process. Hope you enjoy the
journey with me!

Below are some topics I plan on covering (I've already done something with
every one of these and plan on documenting it):

 1.  RTL-SDRs (receiving signals from your electric meter, ADS-B, general radio
     stuff)
 2.  Virtual machines and my homelab setup
 3.  Home automation / smart home (Home Assistant, Tasmota, Phillips Hue bulbs,
     automating various tasks throughout the house)
 4.  My mini solar setup (2x300W panels) and not-so-mini battery backup (8x272Ah
     LiFePO4 batteries – should yield 7ish kWh of storage)
 5.  Remote control aircraft running Arduplane with video downlink and two-way
     telemetry
 6.  General computer stuff (building them, what I use mine for, Hyper-V)
 7.  Home network (Ubiquiti setup, VLANs, wiring the house with CAT6, IP
     security cameras on Blue Iris)
 8.  Formation of my LLC if anyone wants to hear about that
 9.  The wheel options trading strategy
 10. Cryptocurrency (mining focus)
 11. SCADA (my day job)
 12. 3D printing
 13. Engine tuning (for my old WRX and new F-150)
 14. All the cool things you can do with a Raspberry Pi and other SBCs
 15. Arduino/ESP32/ESP8266 automation devices
 16. My electric bikes
 17. Microsecond accurate Raspberry Pi NTP appliance using GPS pulse per second
     (PPS) timing signals
 18. DIY multi-zone sprinkler system install
 19. Drone survey of property
 20. Securing this WordPress site from hackers (Fail2Ban at both WordPress and
     system service level)
 21. Backing up WordPress sites
 22. General Linux tips/tricks
 23. VPNs (openvpn and wireguard)



--------------------------------------------------------------------------------

Categories
Docker


USING GITHUB ACTIONS TO DEPLOY A FLASK/NGINX/CLOUDFLARED TUNNEL DOCKER-COMPOSE
STACK

 * Post author By Austin
 * Post date November 1, 2023
 * No Comments on Using GitHub Actions to deploy a Flask/NGINX/Cloudflared
   tunnel docker-compose stack

result of docker-compose up showing 3 running containers

Today’s blog post is driven by a desire for simplicity. If you would’ve asked me
even a month ago – “Hey Austin, do you think hooking GitHub actions up to deploy
a docker-compose application stack is a good way to simplify something?” I 1000%
would’ve said no. But I have had to get comfortable with Docker for work over
the last couple months. That, combined with some assistance from my favorite AI
(GPT4), has led to something I would call “simple”. The first attempt at
anything is always a bit rough (one time I did get up on an “air chair”
hydrofoil on my first attempt at Seminoe Reservoir in Wyoming though, and earned
$100 on a bet for it) but this is a super repeatable pattern.

WARNING ABOUT SECURITY (written by me): This solution does not automatically
make your app secure. You still need to have a laser-sharp focus on security.
Any vulnerability in your web app can allow an attacker to gain a foothold in
your network. Your docker host should be firewalled off from the rest of your
network as a first step to prevent traversal into other
computers/services/systems in your network. Any steps after that should be
focused on the usual attack vectors (SQL injection, keeping systems up to date,
etc).

WARNING ABOUT SECURITY (written by ChatGPT): Opening a tunnel directly into your
home network can expose internal systems to external threats if not properly
secured. Reddit commenters (see
https://www.reddit.com/r/homelab/comments/17mc2jg/not_a_fan_of_opening_ports_in_your_firewall_to/)
have pointed out that bypassing traditional port-forwarding and firewall
configurations can lead to unauthorized access if the tunnel is not adequately
protected. The use of Cloudflared or similar tunneling services can alleviate
the need for port-forwarding, but without proper security measures, such as
robust authentication and encryption, the tunnel could become a vector for
malicious activity. It’s crucial to ensure that any tunnel into your (home)
network is securely configured to mitigate potential security risks.

Cloudflare tunnels allow for CF to route traffic to your service without
port-forwarding. That’s the key for how this all works.
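
Put another way, once everything below is wired up, a request flows like this
(the ports are the ones used later in this post):

client browser → Cloudflare edge (DNS + proxy) → Cloudflare Tunnel (established
outbound by the cloudflared container) → nginx:8755 → flask:5602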


TABLE OF CONTENTS

 * Components of the stack
   * Web framework, web server, cloudflare tunnel, GitHub & Actions, docker,
     docker host, etc
 * Example website running this stack – uuid7.com (down as of 8:15am 2023-11-03
   while I take my own security advice and migrate it to its own network on my
   VM host; still getting DNS figured out so the tunnel can be established.
   Back up as of 8:30am – I had a block-all-internal-networks rule and needed
   to add an allow-DNS-to-DMZ rule before it)
 * Docker & Docker compose
 * Self-hosted GitHub runner on a VM
 * GitHub Actions


COMPONENTS OF THE STACK

There are quite a few parts of this stack. Also I am not a fan of the
word/phrase “stack” for describing these kinds of things but here we are. It
really is a stack.

 * Flask – a basic web framework for getting stuff off the ground quickly.
   Substitute in any of your own frameworks here. As long as they can listen on
   a port, you’ll be fine. Django, ASP.NET Core, etc. would all work here.
 * NGINX – not strictly necessary, but it’s the web server I’m most familiar
   with. It logs to a standard format, and works well. Some of the frameworks
   mentioned above do warn against deploying directly to the internet so we
   stick NGINX in front.
 * Cloudflared (Cloudflare Tunnels) – these tunnels are super handy. They
   establish an outbound connection to Cloudflare from whatever they’re running
   on. Once the tunnel is established, anything destined for your service will
   go to Cloudflare first (since they are doing the DNS) and from there, it’ll
   get routed through the tunnel to your service.
 * Docker – runs containers. Hopefully I don’t need to expand on this.
 * Docker compose – runs multiple containers in a group. Allows for easy (well
   “easy” in the sense that each container can talk to each other relatively
   easily) networking, and ability to stand up a “stack” of related containers
   together.
 * GitHub – hosts code. Also not expanding on this.
 * GitHub Actions – triggers actions when various criteria are hit for your
   repository hosted on GitHub (for example, pushing a new docker compose file
   when committed to main)
 * A host running Docker (your desktop/laptop, Linux VM, AWS EC2 instance,
   etc.) – a place for the docker stack to be deployed
 * Self-hosted GitHub runner – the place where the action is run when code is
   committed to main

Okay, now that I've written out that list, it is not exactly simple. But it is
repeatable. Realistically, the only parts you'll change are Flask and the
docker-compose.yml file. The rest are somewhat copy + paste.
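
For reference, since the compose file below builds from ./flask_app and
./nginx, the repo ends up laid out roughly like this (an assumed sketch – see
the linked repo at the end of this post for the real structure):

your-repo/
├── docker-compose.yml
├── .github/workflows/main.yml
├── flask_app/
│   ├── Dockerfile
│   ├── main.py
│   ├── static/            (styles.css, scripts.js)
│   └── templates/index.html
└── nginx/
    ├── Dockerfile
    ├── nginx.conf
    └── default.conf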


EXAMPLE WEBSITE RUNNING THIS STACK – UUID7.COM

Like quite a few of you reading this, I buy domains far before I actually do
anything with them. I have a side project I’m working on (a “Tinder for
Restaurants”), and decided on GUIDs/UUIDs as the IDs for all my tables. UUIDv4
turns out to not work well with database indexes because it is not sequential.
UUIDv7 is and works great (it has a time component as well as some randomness).
I wanted to make a simple site to demonstrate UUIDv7s hence uuid7.com was born
about a month ago. Screenshot below:

Screenshot of uuid7.com, which was styled by GPT4 (I am not a UI/UX person at
all)
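
To make the “time component plus some randomness” idea concrete, here is a
minimal hand-rolled sketch of the UUIDv7 bit layout from the (at the time
draft, since published as RFC 9562) spec. The site itself uses the uuid7 pip
package rather than this code:

import os
import time
import uuid

def uuid7_sketch() -> uuid.UUID:
    # top 48 bits: unix-epoch milliseconds, so IDs generated later sort later
    ts_ms = time.time_ns() // 1_000_000
    value = (ts_ms & 0xFFFFFFFFFFFF) << 80
    value |= 0x7 << 76                                                   # version 7
    value |= (int.from_bytes(os.urandom(2), 'big') & 0x0FFF) << 64       # 12 random bits
    value |= 0b10 << 62                                                  # RFC 4122 variant
    value |= int.from_bytes(os.urandom(8), 'big') & 0x3FFFFFFFFFFFFFFF   # 62 random bits
    return uuid.UUID(int=value)

ids = []
for _ in range(3):
    ids.append(uuid7_sketch())
    time.sleep(0.002)  # force distinct millisecond timestamps
assert ids == sorted(ids)  # time-ordered: B-tree index inserts stay near the tail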

UUID7.COM COMPONENTS

This is a pretty straightforward site. There is a main.py, which is the entry
point for the Docker image, a styles.css, a scripts.js, and an index.html
template.
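
The post doesn't reproduce main.py itself; a minimal sketch of such a Flask
entry point might look like the following (route and helper names here are
assumptions, not the site's actual code):

# hypothetical minimal main.py for a single-page Flask site like this one
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def index():
    # render the single template; styles.css and scripts.js load from /static
    return render_template('index.html')

if __name__ == '__main__':
    # listen on all IPs so the container is reachable within the compose network
    app.run(host='0.0.0.0', port=5602, debug=True)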

Screenshot of VSCode showing the 4 main aspects of uuid7.com

A typical set of NGINX log entries for a real person visiting the site with a
real browser looks like this:

5 lines of NGINX logs for a visit, one for each part of the site and a 404 for
the favicon

As you might expect, the site loads fairly quickly:

firefox dev tools showing the site loads in 0.3-0.5 seconds


DOCKER & COMPOSE

I’m not going to elaborate on how Docker works. The Dockerfile for the flask app
is 7 lines (see below). We are getting the Python 3.11 base image, copying the
code, installing flask and uuid7, exposing a port (which I don’t think is
strictly necessary), defining an entry point and the file to run. Doesn’t get
much easier than this.

FROM python:3.11
WORKDIR /app
COPY . .
RUN pip install flask uuid7
EXPOSE 5602
ENTRYPOINT [ "python" ]
CMD ["main.py"]

Do note that I am running Flask on port 5602:

if __name__ == '__main__':
    # listen on all IPs
    app.run(host='0.0.0.0', port=5602, debug=True)

Ah but you might say “Austin, I thought you also had Cloudflare Tunnels and
NGINX going?”. And you would be right.

There is no NGINX “app” container, just two config files (a “default” NGINX-wide
one called nginx.conf and a site-specific one called default.conf).

For the site-specific config, we are just saying: listen on port 8755 for SSL
requests, use the defined cert and key, and pass everything to the container
named “flask” on port 5602. You are free to use whatever ports you want here;
there are two in play (the NGINX listen port and the Flask listen port). The
IP/forwarded headers are there so NGINX can log the real requester and not the
Cloudflare server that forwarded the request. If you skip this step, it will
look like all of your customers/clients are coming from Cloudflare and you'll
never see their real IPs.

NGINX site-specific default.conf:

# default.conf
server {
    listen 8755 ssl;

    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;

    location / {
        limit_req zone=basic_limit burst=16;
        proxy_pass http://flask:5602;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $realip_remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

For the NGINX-wide config, it is fairly standard stuff. I copied + pasted most
of it from the real default config. I did customize the log format to include
the upstream response time, which I like to see (this is how long the upstream
server, be it Flask or php-fpm or ASP.NET Core, takes to turn around the
request). The IP addresses listed are Cloudflare's servers; those are the only
sources I should believe when they say they're forwarding for someone else.
Note the last CIDR listed – 172.29.0.0/16. This is the Docker network. There is
actually a double forward going on, so this entry is also necessary (and
real_ip_recursive is set to on).

nginx.conf:

# nginx.conf

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    limit_req_zone $binary_remote_addr zone=basic_limit:10m rate=8r/s;

    set_real_ip_from 173.245.48.0/20;
    set_real_ip_from 103.21.244.0/22;
    set_real_ip_from 103.22.200.0/22;
    set_real_ip_from 103.31.4.0/22;
    set_real_ip_from 141.101.64.0/18;
    set_real_ip_from 108.162.192.0/18;
    set_real_ip_from 190.93.240.0/20;
    set_real_ip_from 188.114.96.0/20;
    set_real_ip_from 197.234.240.0/22;
    set_real_ip_from 198.41.128.0/17;
    set_real_ip_from 162.158.0.0/15;
    set_real_ip_from 104.16.0.0/13;
    set_real_ip_from 104.24.0.0/14;
    set_real_ip_from 172.64.0.0/13;
    set_real_ip_from 131.0.72.0/22;
    set_real_ip_from 2400:cb00::/32;
    set_real_ip_from 2606:4700::/32;
    set_real_ip_from 2803:f800::/32;
    set_real_ip_from 2405:b500::/32;
    set_real_ip_from 2405:8100::/32;
    set_real_ip_from 2a06:98c0::/29;
    set_real_ip_from 2c0f:f248::/32;
    set_real_ip_from 172.29.0.0/16;

    real_ip_header CF-Connecting-IP;
    real_ip_recursive on;

    log_format main '$remote_addr - $realip_remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" upstream_time=$upstream_response_time';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;

    keepalive_timeout  65;

    include /etc/nginx/conf.d/*.conf;
}

The NGINX Dockerfile does some magic in that it generates a self-signed SSL
certificate so that things “work” via HTTPS (I couldn't figure out how to serve
plain HTTP inside the docker compose network while keeping HTTPS externally).
There is an option in the Cloudflare Tunnel to ignore SSL errors, which is
enabled.

NGINX Dockerfile:

FROM nginx:alpine

RUN apk add --no-cache openssl

RUN mkdir -p /etc/nginx/ssl/ \
    && openssl req \
    -x509 \
    -subj "/C=US/ST=Denial/L=Nowhere/O=Dis" \
    -nodes \
    -days 365 \
    -newkey rsa:2048 \
    -keyout /etc/nginx/ssl/key.pem \
    -out /etc/nginx/ssl/cert.pem

COPY default.conf /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/nginx.conf

EXPOSE 8755

CMD ["nginx", "-g", "daemon off;"]

The Cloudflare Tunnel container is so simple there isn’t even a Dockerfile for
it, just an entry in the docker-compose.yml.

Which brings us to the docker-compose.yml file. This is the “secret sauce” that
brings it all together. This file defines a “stack” of related containers that
form an application. This is still somewhat magic to me.

# docker-compose.yml

version: '3.8'

services:
  flask:
    build: 
      context: ./flask_app
      dockerfile: Dockerfile
    environment:
      - FLASK_RUN_HOST=0.0.0.0
    restart: always

  nginx:
    build: 
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - "8755:8755"
    depends_on:
      - flask
    restart: always


  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run --token ${CF_TOKEN}
    restart: always
    environment:
      - CF_TOKEN

networks:
  default:
    ipam:
      config:
        - subnet: 172.29.0.0/16

Since it is magic, and I am still learning how to describe these various
concepts, ChatGPT did a decent summary:

This docker-compose.yml file outlines a multi-container Docker application,
specifically designed to run a Flask application behind an NGINX reverse proxy,
with a Cloudflare tunnel for secure and fast network connections. The file is
written in version 3.8 of the Docker Compose file format, ensuring compatibility
with newer features and syntax.

FLASK SERVICE

The flask service is configured to build a Docker image from a Dockerfile
located in the ./flask_app directory. The Dockerfile should contain instructions
for setting up the Flask application environment. The FLASK_RUN_HOST environment
variable is set to 0.0.0.0, allowing the Flask application to be accessible from
outside the Docker container. The restart: always directive ensures that the
Flask service is automatically restarted if it stops for any reason.

NGINX SERVICE

The nginx service also builds its Docker image from a Dockerfile, which is
expected to be in the ./nginx directory. This service is set up as a reverse
proxy, forwarding requests to the Flask application. The ports directive maps
port 8755 on the host machine to port 8755 inside the Docker container, making
the NGINX server accessible from outside. The depends_on field specifies that
the NGINX service depends on the Flask service, ensuring that Docker Compose
starts the Flask service before starting NGINX. Like the Flask service, NGINX is
configured to restart automatically if it stops.

CLOUDFLARED SERVICE

The cloudflared service utilizes the official Cloudflare tunnel image and runs
the Cloudflare tunnel with the specified command. The CF_TOKEN environment
variable, which should contain your Cloudflare tunnel token, is passed to the
container through the command line. This service is also configured to restart
automatically if it stops.

NETWORKING

By default, Docker Compose sets up a single network for your application, and
each container for a service joins the default network. In this case, an IPAM
(IP Address Management) configuration is specified with a subnet of
172.29.0.0/16, ensuring that the containers receive IP addresses from this
specified range.
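
To sanity-check the addressing, you can inspect the network Compose creates
(the network name here is an assumption – Compose prefixes “default” with the
project/folder name):

docker network inspect uuid7-flask_default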


THE CLOUDFLARE TUNNEL

If you don’t have one already, create a cloudflare account, and add your domain
and prove ownership. Then go to Cloudflare zero trust area (usually under the
access link). Create a tunnel. The token is the long alphanumeric string in the
code box for any of the environment types. It always starts with ‘ey’:

Cloudflare tunnel with token

On the public hostname part, you need a minimum of the domain, type, and url.
Here, the domain is obvious. It is going over HTTPS mostly because, again, I
couldn’t figure out how to get it to do plain HTTP within the compose stack. The
port here is where NGINX is listening and the hostname is the name of the NGINX
container (simply ‘nginx’).

uuid7.com cloudflare tunnel settings


DOCKER-COMPOSE UP!

Replace ${CF_TOKEN} in your docker-compose.yml with the actual token for testing
on your local machine. Then from the same folder that contains
docker-compose.yml, run:

docker-compose up

The images will be built if necessary, and after they are built, they will be
started and you’ll see output that looks something like this:

results of docker-compose up, 3 running containers

To kill, just do a Ctrl-C and wait for them to gracefully exit (or not).
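
As an alternative to editing the file, docker-compose substitutes ${CF_TOKEN}
from the shell environment, so you can export it for the session instead (token
value below is a placeholder):

export CF_TOKEN='eyJh...'   # placeholder – never commit a real token
docker-compose up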


A PLACE FOR THE STACK TO LIVE

We need a host where the docker stack can live and exist in a 24/7-uptime
world. This is typically not your dev machine. I have a Dell R630 in a
datacenter in the Denver Tech Center area. That will suffice. Create a VM/VPS
somewhere (on an R630 in a datacenter, Azure, DigitalOcean, AWS EC2, etc.), and
install Docker on it. Make sure you also install docker-compose, and make sure
you don't install Docker on the Proxmox host by accident (oops). Then, install
a self-hosted GitHub runner on the VM/VPS as well.
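
On a Debian/Ubuntu VM, the quick version of the Docker + docker-compose install
might look like this (Docker's convenience script – review anything before
piping it to a shell):

curl -fsSL https://get.docker.com | sh
sudo apt install docker-compose    # v1 CLI; or docker-compose-plugin for 'docker compose'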

Create a secret in your repo with the Cloudflare Tunnel token, and name it the
same thing as in your docker-compose.yml file (be sure to change that back!). I
am using CF_TOKEN, which is represented in the docker-compose.yml file:

  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run --token ${CF_TOKEN}
    restart: always
    environment:
      - CF_TOKEN

What your GitHub secrets will look like with a secret added:


GITHUB ACTIONS

Lastly, we need to define some actions to take place when certain events
happen. In our case, we are interested in when the ‘main’ branch of the repo
changes. This is my full .github/workflows/main.yml file:

name: Docker Deploy

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Set up Docker Compose
        run: echo "CF_TOKEN=${{ secrets.CF_TOKEN }}" >> $GITHUB_ENV

      - name: Navigate to Code Directory
        run: cd ${{ github.workspace }}

      - name: Run Docker Compose Up with Sudo
        run: sudo -E docker-compose up -d --build


Now, when you commit, it will reach out to your self-hosted runner, which will
pull your code, insert the CF_TOKEN secret into the environment variables, and
then run the docker-compose up command to get your stack going. You will
likely have a few misfires when setting up the pipeline if you are doing
something different from what I did. My record is 19 tries (read: failures) in
Azure DevOps to get a build/release pipeline fully functional.


CONCLUSION

This post turned out to be far longer than I anticipated. I hope you find it
helpful! As noted earlier, the vast majority of this is essentially copy +
paste. By that, I mean once you do it once, you can easily do it again. You can
run these docker compose stacks pretty much anywhere.

There is a “feature” of Cloudflare Tunnels in that traffic is said to be
randomly routed if there is more than one active tunnel. I have not tested
this, but it allows for some interesting possibilities. For sites that have
zero persistence, like the uuid7.com example site, it means I can run this
stack in more than one place for redundancy just by doing a git pull /
docker-compose up.

Docker containers take very little memory/CPU:

output of ‘docker stats’ showing this stack taking a total of ~40MB memory and
0.76% of 2 vCPUs of a Xeon e5-2697v4

Run wild!

Repo with all the project files here – https://github.com/RzNmKX/uuid7-flask

 * Tags cloudflared, docker, docker-compose, flask, github actions, nginx

--------------------------------------------------------------------------------

Categories
Uncategorized


UUIDV7 SITE LAUNCHED!

 * Post author By Austin
 * Post date October 3, 2023
 * 2 Comments on UUIDv7 site launched!

For a little side project, I built uuid7.com almost entirely with the help of
AI tools. Check it out!

I also built the stack with Docker Compose. I have resisted Docker for so long
because it was such a pain for homelab-type stuff. But I recently started
needing to use it at work (we are migrating to AWS Kubernetes – yay! not) so I
figured I'd give it a go.

With the assistance of ChatGPT, I put together a full Docker stack using
Cloudflare tunnels (cloudflared), Nginx as the webserver, and Flask as the
backend all in a couple hours. It works great!

That said, it is running on my main desktop at home while I see whether the
site gets popular, so fingers crossed it holds up.



--------------------------------------------------------------------------------

Categories
Python


CONTROLLING ASROCKRACK CPU & CHASSIS FAN SPEEDS VIA IPMITOOL & PID LOOPS

 * Post author By Austin
 * Post date July 26, 2023
 * 2 Comments on Controlling AsrockRack CPU & chassis fan speeds via ipmitool &
   PID loops

Screenshot of TrueNAS Core reporting page for CPU temp showing very constant CPU
temperature due to PID fan control loop

I have a 1U Datto NAS unit that I got for super cheap ($150 for 4x 3.5″ SAS3,
D-1541, 4x32GB, 2400MHz, 2x 10GbaseT) that has worked quite well for me. The
only downside, which is present among basically all 1U devices, is the noise.

During my research into how to control the tiny, high-RPM (like 8000+ RPM)
fans, I stumbled across a thread on the FreeNAS forums –
https://www.truenas.com/community/threads/script-to-control-fan-speed-in-response-to-hard-drive-temperatures.41294/.
At the bottom of the post, there are a few other links to improvements. I ran
the Perl logging scripts that made up the improvements for a bit, but I am no
Perl expert, so I didn't end up implementing them.

I am not 100% sure of the default AsrockRack behavior, but it seemed that if
the CPU temp went above 60C, both case and CPU fans would spike. My Blue Iris
instance sends videos over a couple times an hour, which would spike the fans,
which was annoying during my work-from-home weeks while I was working in the
basement.

The idea of using a PID loop to control fan speeds stuck with me though, and
with the help of GitHub Copilot, I busted out a proof of concept in an hour or
so during a particularly boring set of meetings. There is a very high
probability this will work for Supermicro motherboards as well with only minor
tweaks.

This is how well it works. Note the drop at the end is due to changing CPU
setpoint from 57C to 55C. The temperature is very, very steady.

Screenshot of TrueNAS Core reporting page for CPU temp showing very constant CPU
temperature due to PID fan control loop

Without further ado, below is the main script (you'll also need PID.py, which I
borrowed a few years ago for the Coding a pitch/roll/altitude autopilot in
X-Plane with Python series of posts). It can be run via SSH for debugging
purposes (it is no fun to edit Python via nano over SSH on FreeBSD), or with
native commands if it detects it is running on the target system.
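
If you don't have that PID.py handy, a minimal stand-in exposing the same
interface the script uses (a SetPoint attribute, an update() method, and an
output attribute) could look like this. This is a sketch, not the exact class
from those posts:

import time

class PID:
    """Minimal positional PID controller matching the interface the fan script expects."""
    def __init__(self, P=0.2, I=0.0, D=0.0):
        self.Kp, self.Ki, self.Kd = P, I, D
        self.SetPoint = 0.0
        self.output = 0.0
        self._integral = 0.0
        self._last_error = 0.0
        self._last_time = time.time()

    def update(self, feedback_value):
        # error = setpoint - measurement, so output goes negative when the
        # temperature is above setpoint (hence the -1 multiplier where the
        # main script converts output to a fan percentage)
        now = time.time()
        dt = max(now - self._last_time, 1e-6)
        error = self.SetPoint - feedback_value
        self._integral += error * dt
        derivative = (error - self._last_error) / dt
        self.output = (self.Kp * error
                       + self.Ki * self._integral
                       + self.Kd * derivative)
        self._last_error = error
        self._last_time = now

With that in place, the main script: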

import logging
import time
import PID
import datetime
import socket
import subprocess

logging.basicConfig(
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.INFO)

# don't care about debug/info level logging from either of these packages
loggers_to_set_to_warning = ['paramiko.transport', 'invoke']
for l in loggers_to_set_to_warning:
    logging.getLogger(l).setLevel(logging.WARNING)

user = "root"
password = r"password"
host = None # this is set via hostname detection below
DESIRED_CPU_TEMP = 55.0
DESIRED_MB_TEMP = 35.0
# HDD_TEMP_THRESHOLD = 44.0 # unused
MIN_FAN_PCT = 10.0
drives_to_monitor = ['da0', 'da1', 'da2', 'da3', 'nvme0','nvme1','nvme2']

# command to set fans via ipmitool
# ipmitool raw 0x3a 0x01 0x00 0x04 0x04 0x04 0x04 0x04 0x04 0x04
					     #cpu #fan #fan #fan #fan #fan #fan ????

BASE_RAW_IPMI = 'raw 0x3a 0x01'
INITIAL_STATE = [32,32,32,32,32,32,32,32] # all 32/64 = half speed
FAN_CURRENT_STATE = INITIAL_STATE

hostname = socket.gethostname()
if 'truenas' in hostname or hostname == 'truenas-datto.home.fluffnet.net':
    host = 'localhost'
    c = None
else:
    from fabric import Connection # importing here because freebsd 13 (or whatever truenas core 13 is based on lacks pip to install packages)
    host = "10.98.1.9"
    c = Connection(host, port=22, user=user, connect_kwargs={'password': password})

current_sensor_readings = {}
cpu_temp_sensor = "CPU Temp"
cpu_fan_sensor = "CPU_FAN1"
case_fans = ["FRNT_FAN2","FRNT_FAN3","FRNT_FAN4"]
mb_temp_sensor = "MB Temp"

def limiter(input_value, min_value, max_value):
    if input_value < min_value:
        return min_value
    elif input_value > max_value:
        return max_value
    else:
        return input_value
    
def set_fans_via_ipmi(connection):
    # raw_ipmi_cmd = construct_raw_ipmi_cmd() # not needed unless debug and remote
    # logging.info(raw_ipmi_cmd)
    if host == 'localhost':
        result = subprocess.run(['ipmitool', 'raw', '0x3a', '0x01',
                                 '0x'+FAN_CURRENT_STATE[0], 
                                 '0x'+FAN_CURRENT_STATE[1],
                                 '0x'+FAN_CURRENT_STATE[2],
                                 '0x'+FAN_CURRENT_STATE[3],
                                 '0x'+FAN_CURRENT_STATE[4],
                                 '0x'+FAN_CURRENT_STATE[5],
                                 '0x'+FAN_CURRENT_STATE[6],
                                 '0x'+FAN_CURRENT_STATE[7]], stdout=subprocess.PIPE)
    else:
        raw_ipmi_cmd = construct_raw_ipmi_cmd()
        result = connection.run('ipmitool ' + raw_ipmi_cmd, hide=True)
    #logging.info(result.stdout)

def scale_to_64ths(input_percent):
    result = input_percent / 100.0 * 64.0
    # prepend 0 to make it a hex value
    result_int = int(result)
    result_str = str(result_int)
    if len(result_str) == 1:
        result_str = '0' + result_str # turn a 0x1 into a 0x01
    return result_str

def adjust_cpu_fan_setpoint(hex_value_64ths):
    FAN_CURRENT_STATE[0] = hex_value_64ths

def adjust_case_fan_setpoint(hex_value_64ths):
    for i in range(len(FAN_CURRENT_STATE) - 1):
        FAN_CURRENT_STATE[i + 1] = hex_value_64ths

def construct_raw_ipmi_cmd():
    new_state = BASE_RAW_IPMI
    for i in range(len(FAN_CURRENT_STATE)):
        new_state = new_state + ' 0x' + str(FAN_CURRENT_STATE[i])
    return new_state

def populate_sensor_readings(sensor, value):
    current_sensor_readings[sensor] = value

def query_ipmitool(connection):
    if host == 'localhost':
        result = subprocess.run(['ipmitool', 'sensor'], stdout=subprocess.PIPE)
        result = result.stdout.decode('utf-8')
    else:
        result = connection.run('ipmitool sensor', hide=True).stdout
    for line in result.split('\n'):
        if line == '':
            break

        row_data = line.split('|')
        sensor_name = row_data[0].strip()
        sensor_value = row_data[1].strip()
        populate_sensor_readings(sensor_name, sensor_value)
        logging.debug(sensor_name + " = " + sensor_value)

def wait_until_top_of_second():
    # calculate time until next top of second
    sleep_seconds = 1 - (time.time() % 1)
    time.sleep(sleep_seconds)

def get_drive_temp(connection, drive):
    ###########################################
    # this is copilot generated, and untested #
    # not sure about row_data[0] stuff        #
    ###########################################
    if host == 'localhost':
        result = subprocess.run(['smartctl', '-A', '/dev/' + drive], stdout=subprocess.PIPE)
        result = result.stdout.decode('utf-8')
    else:
        result = connection.run('smartctl -A /dev/' + drive, hide=True).stdout
    for line in result.split('\n'):
        if line == '':
            break

        row_data = line.split()
        if len(row_data) < 10:
            continue
        if row_data[0] == '194':
            drive_temp = row_data[9]
            logging.info(drive + " = " + drive_temp)

def query_drive_temps(connection):
    for drive in drives_to_monitor:
        get_drive_temp(connection, drive)

# tune these values. the first one is the most important and basically is the multiplier for
# how much you want the fans to run in proportion to the actual-setpoint delta.
# example: if setpoint is 55 and actual is 59, the delta is 4, which is multiplied by 4 for
# 16 output, which if converted to 64ths would be 25% fan speed.
# the 2nd parameter is the integral, which is a cumulative error counter of sorts.
# the 3rd parameter is derivative, which should probably be set to 0 (if tuned correctly, it prevents over/undershoot)
cpu_pid = PID.PID(4.0, 2.5, 0.1)
cpu_pid.SetPoint = DESIRED_CPU_TEMP

mb_pid = PID.PID(2.5, 1.5, 0.1)
mb_pid.SetPoint = DESIRED_MB_TEMP

wait_until_top_of_second()

# set last_execution to now minus one minute to force first execution
last_execution = datetime.datetime.now() - datetime.timedelta(minutes=1)

while(True):
    if datetime.datetime.now().minute != last_execution.minute:
        # TODO: get drive temps
        logging.info("getting drive temps")

    query_ipmitool(c)
    cpu_temp = float(current_sensor_readings[cpu_temp_sensor])
    mb_temp = float(current_sensor_readings[mb_temp_sensor])

    cpu_pid.update(cpu_temp)
    mb_pid.update(mb_temp)
    
    logging.info(f'CPU: {cpu_temp:5.2f} MB: {mb_temp:5.2f} CPU PID: {cpu_pid.output:5.2f} MB PID: {mb_pid.output:5.2f}')
    
    # note negative multiplier!!
    cpu_fan_setpoint = scale_to_64ths(limiter(-1*cpu_pid.output,MIN_FAN_PCT,100))
    case_fan_setpoint = scale_to_64ths(limiter(-1*mb_pid.output,MIN_FAN_PCT,100))
    adjust_cpu_fan_setpoint(cpu_fan_setpoint)
    adjust_case_fan_setpoint(case_fan_setpoint)
    set_fans_via_ipmi(c)

    last_execution = datetime.datetime.now()
    wait_until_top_of_second()



As you can see, it is not quite complete. I still need to add the hard drive
temp detection stuff to ramp case fans a bit if the drives get hot. Those NVMe
drives sure get hot (especially the Intel P4800X I have in one of the PCIe slots
– see Intel Optane P1600X & P4800X as ZFS SLOG/ZIL for details).

This is what the output looks like (keep in mind the -1 multiplier in the
setpoint stuff!):

screenshot showing second-by-second output of the PID fan control loop keeping a
very consistent 55C CPU temp

And here is a summary of the script provided by the ever helpful ChatGPT with
some high-level summaries. I fed it the code and said “write a blog post about
this”. I took out the intro paragraph but left the rest.


THE SCRIPT OVERVIEW

This script leverages the PID controller – a control loop mechanism that
calculates an “error” value as the difference between a measured process
variable and a desired setpoint. It attempts to minimize the error by adjusting
the process control inputs.

In this script, we are implementing a fan speed control system that reacts to
temperature changes dynamically. Our desired setpoint is the optimal temperature
we want to maintain for both the CPU (DESIRED_CPU_TEMP) and the motherboard
(DESIRED_MB_TEMP).


EXPLORING THE SCRIPT

The Python script begins by setting up the necessary libraries and logging. The
logging library is used to log useful debug information, such as the current CPU
temperature and fan speed, which can help you understand what’s happening in the
script.

Next, we have a section where we define some constants, such as the desired
temperatures and minimum fan speed percentage. It also defines a connection to
the localhost or to a remote host, depending on the hostname.

It uses ipmitool – a utility for managing and configuring devices that support
the Intelligent Platform Management Interface (IPMI) – to control fan speeds.

The limiter() function ensures the fan speed remains within the predefined
minimum and maximum thresholds. It’s important as it prevents the fan speed from
reaching potentially harmful levels.

The script also includes several functions to set and adjust fan speeds, as well
as to construct the appropriate ipmitool command. One thing to note is that the
fan speeds are set using hexadecimal values, so there are functions to convert
the desired fan speed percentages to hexadecimal.

A very useful function is query_ipmitool(). This function runs the ipmitool
command, gets the current sensor readings, and stores them in the
current_sensor_readings dictionary for further processing.

The script utilizes two PID controllers, cpu_pid for the CPU and mb_pid for the
motherboard, with specific setpoints set to desired temperatures.

The core logic is inside the infinite loop at the end of the script. The loop
constantly reads temperature sensor data and adjusts the fan speeds accordingly.
The loop runs once every second, so it can respond quickly to changes in CPU and
motherboard temperatures.


CONCLUSION

This script demonstrates a neat way of controlling fan speed in response to CPU
and motherboard temperatures. It’s an effective approach to ensure that your
system runs smoothly and without overheating, while minimizing noise.

 * Tags asrock, control loop, control theory, pid, python, truenas

--------------------------------------------------------------------------------

Categories
AI Blog Admin


USING CHATGPT TO FIGHT SPAM ON WORDPRESS

 * Post author By Austin
 * Post date June 2, 2023
 * 2 Comments on Using ChatGPT to fight spam on WordPress



Like all other WordPress blogs, this one attracts a good number of spam
comments. I usually get 5-10 per day, but yesterday there were around 30.
Almost all of them contain Cyrillic characters:

Since I specify that all comments are held until approved, that means I need to
either approve or trash or spam every comment.


ENTER CHATGPT

I use ChatGPT (specifically GPT 4) for a number of minor coding tasks. I find it
helpful. It is not perfect. That doesn’t mean it isn’t useful. I decided to have
it ponder this issue. I work with Python a lot at work and it’s typically my
scripting language of choice. My initial request is as follows:

write a python script to log into a wordpress site as an admin, get the top 5 comments, see if there are any Cyrillic characters in them, and if there are, delete said comment

It was understandably unhappy about potentially being asked to “hack” a
WordPress site, so I had to reassure it that I was the owner of said site:

yes, I have my own blog. I am the owner. I have the admin credentials. please proceed with how to delete those comments

It happily complied and spit out some very usable code:

After a bit more back and forth:

does this get comments in a pending state? I don't let them be published instantly because most of them are spam

I was informed there are 5 different comment states: approved, hold, spam,
trash, unapproved.

perfect. can you please adjust the script to get the pending, unapproved, and hold comments. also make it top 20

It ran perfectly after copy + pasting the Python. Unfortunately, I had created
an application password for my main login on this site and forgot to change the
delete URL, so the script happily sent my username and application password to
the placeholder yourwebsite.com. After revoking that password and realizing
there should be a base URL:

please split out the site url (https://austinsnerdythings.com) from base_url for both retrieving the comments as well as deleting

I was left with a 100% functional script. This took 3-4 minutes of back and
forth with ChatGPT 4.0. I definitely could’ve coded this up myself with the
basic structure in 15 minutes or so, but I would’ve had to work out the JSON
format for comments and all that. It is so much easier to just test out what
ChatGPT provides and alter as necessary:

import requests
import json
import re

def has_cyrillic(text):
    return bool(re.search('[\u0400-\u04FF]', text))

site_url = "https://austinsnerdythings.com"
base_url = f"{site_url}/wp-json/wp/v2/comments?per_page=20&status="
statuses = ['pending', 'hold', 'unapproved']
auth = ('yourusername', 'yourpassword')

for status in statuses:
    url = base_url + status
    response = requests.get(url, auth=auth)
    comments = json.loads(response.text)

    cyrillic_comments = []

    for comment in comments:
        if has_cyrillic(comment['content']['rendered']):
            cyrillic_comments.append(comment)

    # delete comments with Cyrillic characters
    for comment in cyrillic_comments:
        delete_url = f"{site_url}/wp-json/wp/v2/comments/" + str(comment['id'])
        response = requests.delete(delete_url, auth=auth)
        if response.status_code == 200:
            print(f"Successfully deleted comment with id {comment['id']}")
        else:
            print(f"Failed to delete comment with id {comment['id']}. Response code: {response.status_code}")



FINISHING TOUCHES

The other finishing touches I did were as follows:

 * Created a user specific for comment moderation. I used the ‘Members’ plugin
   to create a very limited role (only permissions granted are the necessary
   ones: Moderate Comments, Read, Edit Posts, Edit Others’ Posts, Edit Published
   Posts) and assigned said user to it. This greatly limits the potential for
   abuse if the account password falls into the wrong hands.
 * Copied the script to the web host running the blog
 * Set it to be executed hourly via crontab (example entry below)
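
For reference, an hourly crontab entry would look something like this (the
interpreter path, script location, and log file are assumptions; adjust them
for your host):

0 * * * * /usr/bin/python3 /home/user/delete_spam_comments.py >> /var/log/delete_spam_comments.log 2>&1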

Now I have a fully automated script that deletes any blog comments with any
Cyrillic characters!

You may be asking yourself why I don’t use Akismet or Recaptcha or anything like
that. I found the speed tradeoff to not be worthwhile. They definitely slowed
down my site for minimal benefit. It only took a couple minutes a day to delete
the spam comments. But now it takes no time because it’s automated!

Here’s the link to the full ChatGPT conversation:

https://chat.openai.com/share/aad6a095-9b90-42c5-b1ca-de2a18828ba2


RESULTS

I created a spam comment and ran the script (after adding a print line to show
the comment). Here’s the output:

And the web logs showing the three statuses being retrieved via GET and the
DELETE for the single spam comment:

I am quite satisfied with this basic solution. It took me far longer to type up
this blog post than it did to get the script working.

 * Tags ai, chatgpt, gpt4, python, WordPress, wordpress spam

--------------------------------------------------------------------------------

Categories
LifeProTips


HOW A TRAVEL ROUTER CAN SAVE YOU MONEY AND SHARE WI-FI ON FLIGHTS

 * Post author By Austin
 * Post date April 17, 2023
 * 3 Comments on How a Travel Router can Save you Money and Share Wi-Fi on
   Flights

GL.iNet AR750S-EXT sitting on an airplane tray rebroadcasting the in-flight
Wi-Fi


INTRODUCTION

I was on a flight from Denver to Phoenix last Thursday, and after I got my
travel router set up and shared with the family, I realized that people may not
know how much money a travel router can save them on in-flight Wi-Fi. Despite
being a self-proclaimed nerd (on a blog titled Austin’s Nerdy Things, no less),
I had never purchased in-flight Wi-Fi until January of this year, on a flight
from Denver to Orlando. For that four-hour flight, I brought along my little
GL.iNet device and a small battery pack to power it, and shared the $10 Wi-Fi
with my own phone, my wife’s phone, our daughter’s iPad, and both my mom and
dad’s phones. That’s $50 worth of Wi-Fi (5 devices × $10) on a single $10
purchase. The router paid for itself in a single flight.

Update 2023-04-18: I was also made aware that recent Pixel and Samsung phones
have this same capability! A few capable devices are listed below with the
travel routers.



WHAT IS A TRAVEL ROUTER?

A travel router is a portable, compact Wi-Fi device (see picture above) that
allows you to create your own wireless network. It works by connecting to an
existing Wi-Fi network, such as the one available on a plane, and then sharing
that connection with multiple devices. This means you can connect your laptop,
smartphone, tablet, and other devices to the internet simultaneously without
purchasing an individual Wi-Fi pass for each one. To the main Wi-Fi network,
the travel router appears as a single connected device, and it funnels all of
your devices’ traffic through that one connection.


WHERE ELSE CAN YOU USE A TRAVEL ROUTER?

You can use a travel router anywhere you pay for Wi-Fi, or anywhere that
provides a Wi-Fi signal that must be signed into. I use the same travel router
when we get to hotels also. There are a couple benefits:

 * The travel router has external antennas which provide a bit more gain than
   the internal one in devices. It can also be located where the Wi-Fi signal is
   strongest and repeat it further into the room/unit.
 * All devices know the travel router SSID and don’t need to be signed into the
   hotel Wi-Fi separately
 * Some hotels limit the number of devices per room/name combo, which isn’t an
   issue with a travel router


HOW MUCH CAN YOU SAVE ON IN-FLIGHT WI-FI WITH A TRAVEL ROUTER?

Let’s say you are a family of four. All four of you have a phone, one has an
extra tablet, and one has a work laptop. That’s a total of 6 devices. Using all
six would cost $60 per flight at United’s current rate of $10 per device per
flight. If you use a travel router to rebroadcast the in-flight Wi-Fi, you
spend only $10 per flight for the router’s Wi-Fi access and then share it among
your own devices. That’s a savings of $50 per flight for a fairly typical
family of four. Do that a few times a year and you can upgrade your room for a
couple of nights, or bump up to the next level of rental car.


WHAT ARE SOME GOOD TRAVEL ROUTERS?

I personally have a GL.iNet GL-AR750S-EXT. It appears this model is no longer
manufactured/sold, but GL.iNet has plenty of other devices. They all run an
open-source networking OS called OpenWrt, which is very popular and runs on
hundreds of millions of devices. They’re also named after rocks/minerals, which
my geologist wife enjoys.

A couple considerations for getting a travel router:

 * Buy one with at least two radios (often marked as “dual band”). This ensures
   you can connect to the host Wi-Fi on one band and rebroadcast your own Wi-Fi
   on the other band
 * USB power input – so they play nice with USB battery packs
 * External antenna – external antennas have a bit more gain than internal
   antennas so they have a longer range
 * Do you need to share files? If so, get one with a SD card slot.
 * Processor speed – directly influences how fast any VPN connections will be.
   Slower processors can’t encrypt/decrypt packets as quickly as faster ones.
   Faster processors also consume more power.
 * Some are their own battery pack, which means no need to carry both a travel
   router and battery pack! Example: GL.iNet GL-E750, which has a 7000 mAh
   battery inside.

Here are a few options (I am not being paid by GL.iNet, I just like their
devices):

 * GL.iNet GL-SFT1200 (Opal) – this would be a great introductory travel router
   so you can get your feet wet and play around for not much money. It is dual
   band with external antennas and will be fast enough for casual browsing. Note
   that this model does not use a fully open-source version of OpenWrt.
 * GL.iNet GL-MT1300 (Beryl) – a step up from the Opal device, with a USB 3 port
   instead of USB 2 and a more powerful processor. Both have 3x gigabit ethernet
   ports in case you’re looking for wired connectivity.
 * GL.iNet GL-AXT1800 (Slate AX) – supports the latest Wi-Fi standard (Wi-Fi 6,
   or 802.11ax) and has the fastest processor. It can do up to 550 Mbps over
   WireGuard, or 120 Mbps over OpenVPN. I would expect this travel router to be
   future-proofed for many years, and it would actually do well as an in-home
   router too.
 * Recent Samsung and Pixel phones (running Android 10 or newer) such as the
   Pixel 6, Pixel 7, Galaxy S22, Galaxy S23, and others

You’ll also need a battery pack. The MoKo pack we’ve used for years appears to
no longer be manufactured/sold either. Here are some other battery packs; just
make sure you select the correct USB type (you probably want USB-C for
everything at this point in 2023).


USING A GL.INET DEVICE WITH UNITED WI-FI (AND RELATED NUANCES)

I have found that quite a few host Wi-Fi networks have some nuance to them.
United Wi-Fi specifically does not work if you connect to the aircraft’s access
point over the 2.4 GHz band. It will broadcast the network over 2.4 GHz and let
you connect, but nothing will actually work. So make sure you connect on the
5 GHz band and then rebroadcast your own Wi-Fi on the 2.4 GHz band. Some
networks are the other way around, like the Residence Inn we stayed at in
Phoenix this past weekend.

United Wi-Fi is surprisingly quick. There isn’t much waiting at all for casual
browsing, and all social media apps work as expected.

Below will be a few screenshots of how I do things. TravelCat is the SSID I use
for our devices on the travel router. I have a TravelCat set up on both bands
and enable/disable as necessary to switch bands.

Screenshot of GL.iNet connected to United in-flight Wi-Fi on radio0 (5 GHz band)
and broadcasting TravelCat on radio1 (2.4 GHz band)

Screenshot showing the GL.iNet device connected to the “Unitedwifi.com” BSSID on
radio0 (wlan0) and my iPhone, my wife’s iPhone, and our daughter’s iPad
connected to the TravelCat SSID on radio1/wlan1


HOW TO SET UP A TRAVEL ROUTER ON UNITED WI-FI

This is how I set up the travel router on United Wi-Fi. I’m guessing most other
airlines/hotels are similar. Steps 1 and 2 can be completed prior to your flight
and only need to be done once.

 1. On the travel router, navigate to DNS settings and uncheck “rebind
    protection”. This setting is normally useful, protecting against DNS
    rebinding attacks, but it also breaks the captive portals that many Wi-Fi
    networks use to sign you in. Just disable it; you’ll be fine.
 2. Set up your SSID on both 2.4 GHz and 5 GHz bands. One must be enabled at all
    times or you’ll need to plug in via ethernet or reset the device to access
    it again.
 3. Connect to the host Wi-Fi on the 5 GHz band if possible. There should be a
    “scan” button. Tap it and select the network with the right name that has
    the negative value closest to 0 (for example, -40 dBm is better than
    -60 dBm).
 4. Open the captive portal page name if you know it, for example
    unitedwifi.com. If you don’t, just try to go to google.com or yahoo.com or
    something boring like that and it should redirect you to complete the login
    process.
 5. Pay if necessary.
 6. All done! Start browsing as usual!

Travel router in seat back pocket with battery pack. You could also just leave
it in your suitcase/backpack for the flight.


CONCLUSION

Investing in a travel router can pay for itself in just a single flight
(depending on family size), making it an essential piece of tech for any flyer.
By sharing Wi-Fi connections among multiple devices and splitting the cost with
travel companions, you can save money and stay connected while traveling. So, on
your next flight, consider bringing along a travel router and enjoy the
convenience and cost-saving benefits it offers. Not gonna lie, I wish I had
started using a travel router sooner and coughed up the $8-10 per flight to
keep myself entertained with something more than endless games of 2048, Chess,
or Catan. Besides, what self-respecting nerd doesn’t like playing with new
technology?

Disclosure: Some of the links in this post are affiliate links. This means that,
at zero cost to you, I will earn an affiliate commission if you click through
the link and finalize a purchase.

 * Tags gl.inet, lifeprotips, travel, travel-hacks, wifi

--------------------------------------------------------------------------------


POSTS NAVIGATION

← Newer Posts 1 2 … 11 Older Posts →


ABOUT THIS SITE

austinsnerdythings.com is a way for me to give back to all the people who have
posted something helpful to me over the years. I’ll be documenting nerdy
activities for everyone to learn!


FIND US ELSEWHERE!

 * GitHub
 * LinkedIn
 * Reddit
 * Instagram
 * Facebook
 * YouTube

© 2023 Austin's Nerdy Things

To the top ↑