Theo's Site

Writing about technology, self-hosting, and things I find interesting.

Posts

Caddy in a SmartOS Native Zone

Published on

On my home server, I am currently using Caddy as a reverse proxy. For the public sites, such as this Bookstack app, Caddy also provides SSL and other key security features.

I have Caddy running in a native SmartOS zone. Caddy isn't resource-intensive, so it could run on as little as 512 MB of RAM. I gave my zone 2 GB of RAM because I want to compile Caddy from source. I also gave it access to any free CPU cores. I configured the zone in the SmartOS web UI instead of doing manual configuration.
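For reference, creating an equivalent zone manually with vmadm might look roughly like the following. This is only a sketch: the alias, image UUID, NIC tag, and addresses are placeholders, not my actual values, and the exact JSON keys can vary between SmartOS versions.

```shell
# Hypothetical joyent-brand zone definition; image_uuid, nic_tag, and the
# IP addresses below are placeholders -- substitute your own values.
vmadm create <<'EOF'
{
  "brand": "joyent",
  "alias": "caddy",
  "image_uuid": "00000000-0000-0000-0000-000000000000",
  "max_physical_memory": 2048,
  "quota": 20,
  "nics": [
    {
      "nic_tag": "admin",
      "ips": ["192.168.1.50/24"],
      "gateways": ["192.168.1.1"]
    }
  ]
}
EOF
```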

The following are my steps to get Caddy running in a native SmartOS zone.

First, install the Golang compiler.

pkgin update
pkgin install go122 git-base gcc12 gmake

You might also want to install some creature comforts in your container, such as your preferred text editor. Personally, I like Nano, but you can pick what you want.

pkgin install nano

Next, set the environment variables to run the compiler.

export GOROOT=/opt/local/go122
export GOPATH=/root/go
export PATH=$GOROOT/bin:$GOPATH/bin:$PATH

To make installing updates easier, you might want to make this persistent in your shell profile.

nano /root/.profile
source /root/.profile
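The lines to append to /root/.profile are just the same exports, for example:

```shell
# Append the Go environment variables to root's shell profile so they
# survive new login sessions (paths assume the pkgsrc go122 package).
cat >> /root/.profile <<'EOF'
export GOROOT=/opt/local/go122
export GOPATH=/root/go
export PATH=$GOROOT/bin:$GOPATH/bin:$PATH
EOF
```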

Next, pull and install the Caddy source code.

go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest

Next, build Caddy. You can add any extra plugins here using --with flags.

xcaddy build --output /opt/local/bin/caddy
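If you want plugins compiled in, add them at build time. For example (the Cloudflare DNS module here is just an illustration, not something this setup requires):

```shell
# Build Caddy with an extra module compiled in; any published Caddy
# module can be added with additional --with flags.
xcaddy build \
    --output /opt/local/bin/caddy \
    --with github.com/caddy-dns/cloudflare
```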

Create a user and group for Caddy.

groupadd -g 800 caddy
useradd -u 800 -g caddy -d /var/caddy -s /usr/bin/false caddy

Create a folder for the Caddy server data and give the user and group permissions on it.

mkdir -p /opt/local/etc/caddy
mkdir -p /var/caddy/data
mkdir -p /var/log/caddy

chown -R caddy:caddy /var/caddy /var/log/caddy
chmod 750 /var/caddy /var/log/caddy

Create a Caddyfile with the configuration options needed for your exact use case.

nano /opt/local/etc/caddy/Caddyfile
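A minimal Caddyfile for a single reverse-proxied site might look like this. The domain and backend address are placeholders; the storage and log paths match the directories created above.

```
{
	storage file_system /var/caddy/data
}

example.com {
	reverse_proxy 127.0.0.1:8080
	log {
		output file /var/log/caddy/example.com.log
	}
}
```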

Create the SMF manifest needed to define the background service.

mkdir -p /var/svc/manifest/site
nano /var/svc/manifest/site/caddy.xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="manifest" name="caddy">
  <service name="site/caddy" type="service" version="1">

    <create_default_instance enabled="true"/>
    <single_instance/>

    <dependency name="network" grouping="require_all" restart_on="error" type="service">
      <service_fmri value="svc:/milestone/network:default"/>
    </dependency>

    <dependency name="filesystem" grouping="require_all" restart_on="error" type="service">
      <service_fmri value="svc:/system/filesystem/local:default"/>
    </dependency>

    <exec_method type="method" name="start"
      exec="/opt/local/bin/caddy run --config /opt/local/etc/caddy/Caddyfile --adapter caddyfile"
      timeout_seconds="60">
      <method_context>
        <method_credential user="caddy" group="caddy" privileges="basic,net_privaddr"/>
      </method_context>
    </exec_method>

    <exec_method type="method" name="stop"
      exec=":kill"
      timeout_seconds="30"/>

    <exec_method type="method" name="refresh"
      exec="/opt/local/bin/caddy reload --config /opt/local/etc/caddy/Caddyfile --adapter caddyfile"
      timeout_seconds="30"/>

    <property_group name="startd" type="framework">
      <propval name="duration" type="astring" value="child"/>
      <propval name="ignore_error" type="astring" value="core,signal"/>
    </property_group>

    <stability value="Evolving"/>
    <template>
      <common_name><loctext xml:lang="C">Caddy web server (compiled from source)</loctext></common_name>
    </template>
  </service>
</service_bundle>

Import and enable the service.

svccfg import /var/svc/manifest/site/caddy.xml
svcadm enable svc:/site/caddy:default
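You can then check that the service came up cleanly:

```shell
# Show the service state; if it is in maintenance, svcs -xv and the
# SMF log file explain why.
svcs site/caddy
svcs -xv site/caddy
tail /var/svc/log/site-caddy:default.log
```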

If you change the Caddyfile, you can reload the Caddy configuration using the following command.

svcadm refresh svc:/site/caddy:default

Experience Using Opencode on the Latest Models

Published on

I've been experimenting more with the latest LLMs for coding, and it's pretty impressive how far these tools have come.

I've mostly been using the Kimi 2.5 model with Opencode as the coding agent, and I still find that combination pretty great. The whole vibe coding/AI-assisted programming workflow that Opencode and similar tools encourage might not produce the best-quality code, but seeing that kind of rapid progress is pretty addictive. Until you get to the very highest (and most expensive) tier of Anthropic and OpenAI models, Kimi performs basically on par with or better than what the biggest companies offer.

And these coding agents can take care of a lot of the boring drudgework of programming. They are good enough right now that I don't have to spend too much time manually intervening and fixing what the LLM did — these tools are getting pretty accurate.

I've spent many hours working on a project, tweaking it back and forth; the main thing stopping me from spending even more time is that I have to pay for the credits to run inference. Once you work through all of their quirks, these tools are pretty smooth as far as workflow goes. It's genuinely fun to do this on the recent LLM models that have come out.

Cost is the only real problem right now; these things will burn through tokens by the millions. It's pretty clear that the $20/mo coding-agent tiers from OpenAI and others (even though the limits are being tightened) are being subsidized pretty aggressively. When you compare the amount of agent time you get on such a plan with what open source alternatives cost to run, OpenAI and the rest can't be making money on these offerings. On the other hand, it's probably cheaper to use a self-hosted frontend (Open WebUI with a hosted inference API, for example) than to pay for a paid tier of ChatGPT.

I've also noticed that Opencode and other open source agents/frontends are very sensitive to token output speed. Using a somewhat more expensive inference provider with fast output improves the experience quite a bit. Switching API providers basically fixed some of the freezing issues I was having with the model.

The project I've been working on as part of my testing is this: https://git.selfhosted.onl/theo/marginleaf

It's a personal blogging CMS. It does the typical blogging-engine things, but instead of a frontend editing interface I created an API, and I built some tools that let me fully manage it from Open WebUI, which opens some pretty neat possibilities. These chat tools are getting good enough that they can serve as the main interface to an application, instead of a more traditional web UI.

In particular, the Open WebUI tools can be found here: https://git.selfhosted.onl/theo/marginleaf/src/branch/main/openwebui_tools

But mostly I created it because it's fun to work on that type of thing.

Kimi 2.5 and Self-Hosting Open WebUI

Published on

I've been poking around with the Kimi 2.5 LLM and have also started self-hosting Open WebUI (a self-hosted, ChatGPT-style web frontend for LLM APIs) on my server.

Kimi probably isn't the best model on the market, but Kimi 2.5 is the first truly open source model I've used that feels roughly in the same performance category as ChatGPT, and I don't feel much of a penalty using it instead.

Of course, running it directly is way beyond what any device I have can do reasonably well.

But there are already API providers around offering it with very favorable privacy and data retention policies, so I'm probably going to switch to using it over ChatGPT.

I wouldn't recommend using the chat/API offered by the model's creator; I don't really trust that company.

If I self-host the frontend, all of the actually sensitive data, like chat logs, stays on my server.

Open WebUI is pretty cool. It works almost as well as ChatGPT does. I've run into some issues with the model occasionally freezing during processing, but I've occasionally seen that type of thing with other LLM providers.

It has a search integration that works with the model so it can web search etc. It's pretty customizable.

I quickly created a custom tool that the model can use to query the OpenAlex API for open access academic articles. The code for that can be found here: https://git.selfhosted.onl/theo/openwebui-tools-skills/src/branch/main

Pinning Footer to Bottom of Page in Bootstrap Studio

Published on

Some of the themes that come with Bootstrap Studio don't have the footer pinned to the bottom of the page.

The instructions in this link are helpful here: https://forum.bootstrapstudio.io/t/footer-always-at-the-bottom-of-the-page/7517

Basically, create custom sitewide CSS (i.e., through a .css file under the Styles folder of the design) with the following:

body {
    display: flex;
    flex-direction: column;
    min-height: 100vh; /* min-height lets pages grow taller than the viewport */
}
footer {
    margin-top: auto;
}

Stenomasks and Speech to Text

Published on

For a while I've had a stenomask, a sound-isolating mask that can be talked into for speech recognition. The notable use it's known for is court reporters dictating notes into it for later transcription. My use case, of course, is writing without a keyboard and similar.

When I first started experimenting with it, I found that it was really hard to get any kind of acceptable accuracy with speech recognition software.

I've been trying it again now. Speech recognition software has gotten to the point where I can talk to it normally and it basically just works when transcribing.

Which makes the thing actually useful for me now.

This is what I am using: https://whispertyping.com/

It would be interesting to give Dragon NaturallySpeaking another try; it's used by a lot of formal disability accommodation programs, and it's what psychologists and others have recommended for some of the relevant disabilities I have. I just haven't been able to get good accuracy out of past versions, and Dragon is very expensive (hundreds of dollars), so it doesn't feel worth another attempt right now.

Philadelphia Chinatown (2023 Oct)

Published on

Old post (2 years old) - may be outdated

While taking these photos, I saw a lot of signage about a proposed stadium for the 76ers.

Most of this was in opposition (I am not informed enough to give a direct opinion regarding the issue).

[Seven photos of Philadelphia Chinatown]

ChatGPT Makes Automation Symmetrical with Doing

Published on

Old post (2 years old) - may be outdated

One of the clearest implications of ChatGPT for systems administrators is that it makes automating a task almost symmetrical with doing a task.

On the new file server I use for personal projects (a dedicated server with an NVMe SSD boot drive and four hard drives for secondary file storage), I recently did a reinstall of Debian. I set the server up with the hard drives in a Btrfs RAID 5 array. I installed Docker on it, and I set up the Apache web server to make the files on that server public. Cloudflare Tunnel was used to put that Apache server behind SSL.

I took quick and rough notes on what commands were used and what may vary between servers and had ChatGPT create an automation script in Python.

The notes can be found here https://gist.github.com/theopjones/a7f2b6ba17f3de23826f688f0a87d01d

The prompt I used is

Create a python script to automate the server setup task in the following notes/log of a manual setup. Assume that the python script is running as root. In the case of commands which require manual intervention, wait for the user to conduct the manual intervention, the command should be started as part of the script.

ChatGPT produced the following, which is good enough to make this setup easily reproducible across servers, and to document the setup in code so it can be recreated later.

https://gist.github.com/theopjones/6147770b550356e55d209e67549fb948

I’m looking for work

Published on

Old post (2 years old) - may be outdated

I was recently laid off from my previous company.

I'm a seasoned IT and customer service professional with over five years of experience. My skills extend from software deployment and support to Linux administration and Python scripting for automation.

I've acted as an administrator for major SaaS platforms such as Google Workspace, Docusign, email marketing tools (PersistIQ, ActivePipe), CRMs (CopperCRM, Contactually, Follow Up Boss), and Okta, effectively resolving email infrastructure issues. Also, I've offered on-call and after-hours support for urgent user requests.

My proficiency in open-source platforms includes managing LAMP + Nginx servers, working with cloud compute/VPS hosting platforms, and utilizing Linux for desktop and server projects. I have automated tasks using Python and other scripting languages, focusing on account creation, data migrations, and infrastructure management. Additionally, I've used low-code platforms like Zapier, and have some familiarity with the Dell Boomi Platform.

One noteworthy accomplishment is automating most of the user onboarding process, allowing accelerated growth without increasing IT staff. I've also efficiently transitioned data from one CRM system to another, leveraging APIs to rebuild account environments.

I am adept at defining requirements with software engineers and vendors for new product rollouts. I am well-versed in IT security, including implementing and documenting new security processes and mitigating threats.

My experience with support ticketing and project management systems spans Service Cloud, atSpoke, Jira, and Asana.

Furthermore, I hold degrees in Geography and Ecology and Evolutionary Biology from the University of Arizona, with a focus on geographic information systems. I've tutored STEM and geography subjects and have experience in GIS and scientific data analysis from internships.

My desired salary for a new role is $85,000/yr, though I'm open to $60,000-$85,000 depending on the total compensation package, the nature of the employer, and the status of my other interviews. While I prefer a W2 role, I'm also open to contract-to-hire and independent contractor status, and am available for freelance work that doesn't conflict with full-time employment.

For more information, please reach out to me by email tjones2@fastmail.com or through my LinkedIn profile https://www.linkedin.com/in/theodore-jones-7b89b7269/

Setting up GoBlog on FreeBSD

Published on

Old post (3 years old) - may be outdated

GoBlog is a blogging engine that I have used on my personal blog and various other personal projects. I'm going to walk through how to set it up on a FreeBSD server.

If you want a quick TLDR, here is a shell script that automatically spins up GoBlog. It doesn't set up a jail or other container, but it can be used in one.

https://gist.github.com/theopjones/e09c9713c10f4000d154de50c438d2ba

It's a blogging engine with fairly few users, and I wouldn't recommend it for important business websites, or for people who aren't at least somewhat technically oriented and don't know their way around UNIX-like operating systems.

But for the technically inclined, it makes a good personal blog. It is very performant and supports a lot of interesting social features, including most of the IndieWeb standards. It can also (with some limitations) talk to Mastodon and other ActivityPub services, and allow those social services to subscribe to your blog.

I previously had my personal blog on a Debian home server, using Docker for containerization.

I've discussed an overview of this setup here

https://theopjones.blog/notes/2022/09/2022-09-12-oxjfr

https://theopjones.blog/posts/2022/09/2022-09-17-exlan

Unfortunately, my new apartment doesn't have any internet with the fast upload speeds needed for this type of home server setup, so I'm moving my setup to a dedicated server.

I've decided to go with FreeBSD for this setup because it has a lot of powerful features and, in my opinion, is often a lot more streamlined and elegant than Linux in how it handles things.

I'd recommend spinning up a jail to act as a container to separate this setup from the rest of your system, particularly if you want to run more than one service on your server/VPS.

In the future, I'll write up instructions and a shell script on how to build this in a jail and set up a reverse proxy with SSL for this (either Caddy or Nginx would make a good fit for reverse proxy).

There are multiple helper tools to set this up. I like BastilleBSD for this role.
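As a rough sketch of what that looks like with BastilleBSD (the release name and jail IP below are placeholders; check the Bastille documentation for your version):

```shell
# Bootstrap a FreeBSD release for jail use, then create a jail for GoBlog.
bastille bootstrap 13.2-RELEASE
bastille create goblog 13.2-RELEASE 10.17.89.50
# Get a shell inside the new jail to run the rest of this walkthrough.
bastille console goblog
```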

Likely because it is a small blogging engine without very many users, there isn't a FreeBSD port or package for it, so we will need to compile it from the Git repo.

We will need the following FreeBSD packages to do this: go-devel, git, gcc, sqlite3, and bash.

GoBlog can also use Tor to create a .onion service for site visitors who want additional privacy when viewing your blog.

I have created a Python script (discussed later) to help with generating a config file. If you want to use it, you will also need python3 and py39-yaml.

See the following command for how to install all of these packages

pkg install go-devel git gcc sqlite3 bash tor python3 py39-yaml

To clone the GoBlog source code from Git, run the following command

git clone https://github.com/jlelse/GoBlog.git

Change directory into the newly downloaded source code repo.

cd GoBlog

Build the GoBlog source code

go-devel build -tags=sqlite_fts5 -ldflags '-w -s' -o GoBlog

Copy GoBlog to /usr/local/bin/ (the appropriate folder given the standard FreeBSD directory layout), and give the GoBlog executable the right permissions to be run by all users.

install -m 755 GoBlog /usr/local/bin/GoBlog

The data directory that our RC script (more details later in this post) will use as the working directory is /var/GoBlog/

Additional data used by GoBlog is contained in the following folders of the Git repo: pkgs, testdata, templates, leaflet, hlsjs, dbmigrations, strings, and plugins.

Create a corresponding folder for each of these under /var/GoBlog/ and copy the contents.
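That step can be done with a short loop, assuming you are still in the cloned GoBlog source directory:

```shell
# Copy the static data folders GoBlog needs from the source tree
# into the working directory used by the RC script.
for dir in pkgs testdata templates leaflet hlsjs dbmigrations strings plugins; do
    mkdir -p "/var/GoBlog/$dir"
    cp -R "$dir/." "/var/GoBlog/$dir/"
done
```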

Create empty folders /var/GoBlog/data and /var/GoBlog/config. These hold user-generated data that persists across versions. The data folder will be populated on the first run of GoBlog.
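For example:

```shell
# Create the persistent data and config directories.
mkdir -p /var/GoBlog/data /var/GoBlog/config
```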

The config file will need to be manually created. An example config file is included in the GoBlog Git repo as example-config.yml.

You can also use the following python script I have created to guide you through the process of creating the config file. It will prompt you for the information needed to set up the most common configurations.

https://gist.github.com/theopjones/748c296b3c33881352bb7ac72772ae67

Next up, we will need to create an RC script for GoBlog. I have created one, available here:

https://gist.github.com/theopjones/d62e480a71f5cbcead7e381ffd422fda

(Both of the above scripts are created and used by the whole installation shell script mentioned at the beginning of the post.)

Write it to /usr/local/etc/rc.d/goblog

Then make the RC script executable

chmod +x /usr/local/etc/rc.d/goblog

Enable GoBlog to start when the system boots

echo 'goblog_enable="YES"' >> /etc/rc.conf
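With the RC script in place and enabled, start the service and check that it is running:

```shell
# Start GoBlog via its rc script and confirm its status.
service goblog start
service goblog status
```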