Theo's Site

Writing about technology, self-hosting, and things I find interesting.

Caddy in a SmartOS Native Zone

On my home server, I am currently using Caddy as a reverse proxy. For public sites such as this BookStack app, Caddy also provides TLS and other key security requirements.

I have Caddy running in a native SmartOS zone. Caddy isn't that resource-intensive, so it could run on as little as 512 MB of RAM; I gave my zone 2 GB because I also want to compile Caddy from source inside it. I left the CPU uncapped so the zone can use whatever cores are free. I configured the zone in the SmartOS web UI instead of writing the configuration by hand.
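
For reference, a roughly equivalent vmadm payload for a native (joyent-brand) zone might look like the sketch below. The image UUID and network details are placeholders for your own setup, and omitting cpu_cap is what leaves the CPU uncapped.

vmadm create <<'EOF'
{
  "brand": "joyent",
  "alias": "caddy",
  "image_uuid": "<uuid-of-a-pkgsrc-base-image>",
  "max_physical_memory": 2048,
  "quota": 20,
  "nics": [
    {
      "nic_tag": "admin",
      "ips": ["dhcp"]
    }
  ]
}
EOF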

The following are my steps to get Caddy running in a native SmartOS zone.

First, install the Golang compiler.

pkgin update
pkgin install go122 git-base gcc12 gmake

You might also want to install some creature comforts in your zone, such as your preferred text editor. Personally, I like nano, but pick whatever you want.

pkgin install nano

Next, set the environment variables to run the compiler.

export GOROOT=/opt/local/go122
export GOPATH=/root/go
export PATH=$GOROOT/bin:$GOPATH/bin:$PATH
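
At this point, you can sanity-check that the toolchain is on your PATH; it should report a go1.22 release.

go version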

To make installing future updates easier, you might want to make these exports persistent in your shell profile. Add the three export lines above, then reload the profile.

nano /root/.profile
source /root/.profile

Next, install xcaddy, the tool used to build Caddy from source.

go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest

Next, build Caddy. You can add any extra plugins here using --with flags.

xcaddy build --output /opt/local/bin/caddy
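
For example, to bake in a DNS provider plugin (the Cloudflare module here is just an illustration), the build command would look like this:

xcaddy build --with github.com/caddy-dns/cloudflare --output /opt/local/bin/caddy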

Create a user and group for Caddy.

groupadd -g 800 caddy
useradd -u 800 -g caddy -d /var/caddy -s /usr/bin/false caddy

Create directories for the Caddy configuration, server data, and logs, and give the caddy user and group permissions on the data and log directories.

mkdir -p /opt/local/etc/caddy
mkdir -p /var/caddy/data
mkdir -p /var/log/caddy

chown -R caddy:caddy /var/caddy /var/log/caddy
chmod 750 /var/caddy /var/log/caddy

Create a Caddyfile with the configuration options needed for your exact use case.

nano /opt/local/etc/caddy/Caddyfile
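
As a minimal sketch, assuming a hypothetical hostname and a backend app on local port 8080, a Caddyfile that also points Caddy's state and logs at the directories created above might look like this:

{
    storage file_system /var/caddy/data
}

wiki.example.com {
    reverse_proxy 127.0.0.1:8080

    log {
        output file /var/log/caddy/wiki.example.com.log
    }
}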

Create the SMF manifest needed to define the background service.

mkdir -p /var/svc/manifest/site
nano /var/svc/manifest/site/caddy.xml

<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="manifest" name="caddy">
  <service name="site/caddy" type="service" version="1">

    <create_default_instance enabled="true"/>
    <single_instance/>

    <dependency name="network" grouping="require_all" restart_on="error" type="service">
      <service_fmri value="svc:/milestone/network:default"/>
    </dependency>

    <dependency name="filesystem" grouping="require_all" restart_on="error" type="service">
      <service_fmri value="svc:/system/filesystem/local:default"/>
    </dependency>

    <exec_method type="method" name="start"
      exec="/opt/local/bin/caddy run --config /opt/local/etc/caddy/Caddyfile --adapter caddyfile"
      timeout_seconds="60">
      <method_context>
        <method_credential user="caddy" group="caddy" privileges="basic,net_privaddr"/>
      </method_context>
    </exec_method>

    <exec_method type="method" name="stop"
      exec=":kill"
      timeout_seconds="30"/>

    <exec_method type="method" name="refresh"
      exec="/opt/local/bin/caddy reload --config /opt/local/etc/caddy/Caddyfile --adapter caddyfile"
      timeout_seconds="30"/>

    <property_group name="startd" type="framework">
      <propval name="duration" type="astring" value="child"/>
      <propval name="ignore_error" type="astring" value="core,signal"/>
    </property_group>

    <stability value="Evolving"/>

    <template>
      <common_name>
        <loctext xml:lang="C">Caddy web server (compiled from source)</loctext>
      </common_name>
    </template>
  </service>
</service_bundle>
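
Before importing, you can check the manifest for syntax errors:

svccfg validate /var/svc/manifest/site/caddy.xml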

Import and enable the service.

svccfg import /var/svc/manifest/site/caddy.xml
svcadm enable svc:/site/caddy:default
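
Check that the service actually came online; if it did not, the SMF log for the instance is the first place to look.

svcs site/caddy
tail /var/svc/log/site-caddy:default.log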

If you change the Caddyfile, you can reload the Caddy configuration using the following command.

svcadm refresh svc:/site/caddy:default

Kimi 2.5 and Self-Hosting Open WebUI

Been poking around with the Kimi 2.5 LLM, and I've also started self-hosting Open WebUI (a ChatGPT-style web frontend for LLM APIs) on my server.

Kimi probably isn't the best model on the market, but Kimi 2.5 is the first truly open-source model I've used that feels roughly in the same performance category as ChatGPT and similar systems. I don't feel much of a penalty using it instead of ChatGPT.

Of course, running it locally is well beyond what any device I own can handle.

But there are already API providers offering it with very favorable privacy and data retention policies, so I’m probably going to switch to using it over ChatGPT.

I wouldn’t recommend using the chat/API offered by the model’s creator — I don’t really trust that company.

If I self-host the frontend, all of the actually sensitive data, like chat logs, stays on my server.

Open WebUI is pretty cool. It works almost as well as ChatGPT does.

I've run into occasional issues with the model freezing mid-response, but I've seen that sort of thing with other LLM providers too.

It has a search integration the model can use, so it can do web searches and the like. It's pretty customizable.

I also quickly put together a custom tool the model can call that queries the OpenAlex API to find open-access academic articles.

The code for that can be found here:

https://git.selfhosted.onl/theo/openwebui-tools-skills/src/branch/main
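
The actual implementation lives in that repo; as a rough sketch of the shape an Open WebUI tool takes (a Python class named Tools whose typed methods the model can call), a minimal OpenAlex query could look something like this. The method name and parameters here are illustrative, not the exact code from the repo.

import requests

class Tools:
    def search_open_access(self, query: str) -> str:
        """
        Search OpenAlex for open-access works matching the query and
        return a short plain-text list of titles and links.
        """
        # Illustrative parameters; the real tool in the linked repo may differ.
        resp = requests.get(
            "https://api.openalex.org/works",
            params={"search": query, "filter": "is_oa:true", "per-page": 5},
            timeout=15,
        )
        resp.raise_for_status()
        lines = []
        for work in resp.json().get("results", []):
            # Prefer the open-access URL; fall back to the OpenAlex ID URL.
            url = (work.get("open_access") or {}).get("oa_url") or work.get("id")
            lines.append(f"{work.get('display_name')} - {url}")
        return "\n".join(lines) if lines else "No open-access results found."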