hdrz.cc

My webserver, scm and ssg setup

# Why??

So, I kind of dig all that “own your data” tech stuff, and I thought it’s about time that I set up my own server and host my sites. Some context is in order:

# What I learned in the process

OK, I am not a programmer by trade, or devops, or anything of the sort. Here is what I had to learn (endure..?) in order to get all the pieces in place, working together, without messing up other things in the process (like my email suddenly not working):

Phewww, that’s a lot.. Some of it was gathered from different sources on the web, some by trial and error. I am documenting everything here for my future self (like 2 days from now..). Let’s start.

# Server install and network setup

OK, so these are the steps to get a fully functional (and minimal) GCP micro VM with HTTP and HTTPS networking, and the althttpd server serving static files and dynamic CGI content. It is an amalgamation of a few sources and some trial and error.. For the most part I followed this guide, but with a few important changes. First, watch the next video and set up your micro VM with the HTTP and HTTPS ports open:

[embedded video: creating a GCP micro VM and opening the HTTP/HTTPS ports]

Note that this video is a bit dated, so some things look different, but all the functionality is the same. There is no need to install a web server, since we will do that later on. Now you need to enable the “network API” on your GCP account, then set up the DNS records as explained, and update your domain’s nameservers to point to the Google servers.

If you get your email at the same domain name, you will need to add two more DNS entries: an MX record and a TXT record. I use Google Workspace for email, so the additional entries look like this:

| DNS name   | Type | TTL [s] | Record data                           |
|------------|------|---------|---------------------------------------|
| mydoma.in. | MX   | 60      | 1 smtp.google.com.                    |
| mydoma.in. | TXT  | 300     | "v=spf1 include:_spf.google.com ~all" |

The MX record routes your incoming mail, while the TXT SPF record lets other mail servers verify your identity for your outgoing mail. Notice the dot at the end of the domain names; you might need it as well, depending on your DNS provider. Read more here
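
Once the records have propagated, you can sanity-check them with dig (or any similar DNS lookup tool); the expected answers match the table above:

dig +short MX mydoma.in    # expect: 1 smtp.google.com.
dig +short TXT mydoma.in   # expect: "v=spf1 include:_spf.google.com ~all"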

# Static apps

Webservers use a chroot jail for security: apps/scripts invoked by the webserver inside this ‘jail’ directory cannot access the file system outside the ‘jail’, so they cannot harm the critical parts of your server. Static apps are needed when apps run inside a chroot jail, in order to minimize the dependencies that must also live inside the ‘jail’ directory. By static I mean completely static if possible, statically linking all dependencies, including libc. For that there is a good resource from wanderinghorse, where they explain how to create a Docker image of a recent Alpine Linux and compile inside a container of that image. The trick is that Alpine Linux uses musl libc, which can be statically linked into the executable.

I did all this Docker stuff on my computer, uploading the compiled apps to the server. Technically I only needed to compile fossil this way, as it is the only app that will be invoked directly by the webserver inside the chroot jail, but I decided to compile all my apps the same way; it just keeps everything tidy. It works exactly as described, I just changed the files a bit to compile more stuff:

build-script.sh

#!/bin/sh
set -e
set -x
docker build -t static_builds \
       --build-arg cachebust=$(date +%s) \
       "$@" \
       .
docker create --name builds static_builds

docker cp builds:/fossil-src/fossil fossil
strip fossil

docker cp builds:/althttpd-src/althttpd althttpd
strip althttpd

docker cp builds:/bagatto/build/bag bag
strip bag

ls -lah fossil althttpd bag
set +x

# docker container rm builds
cat <<EOF
Now maybe do:
  docker image rm \$(docker image ls | grep -e static_builds -e alpine | awk '{print \$3}')
or:
  docker system prune --force
EOF

Dockerfile

FROM    alpine:3.21.3

RUN <<EOF
apk update
apk upgrade
apk add --no-cache                         \
  curl gcc make tcl musl-dev git           \
  openssl-dev zlib-dev openssl-libs-static \
  zlib-static janet janet-dev janet-static
EOF

########################################################################
# Builds a static fossil SCM binary from the latest trunk
# source code.
# Optional --build-arg entries:
#    repoUrl=source repo URL (default=canonical tree)
#    version=tag or version to pull (default=trunk)
#    cachebust=an arbitrary value to invalidate docker's cache
########################################################################

ARG repoUrl
ARG version
ARG cachebust

# fall back to the defaults when no --build-arg is given
ENV repoUrl=${repoUrl:-https://fossil-scm.org/home}
ENV version=${version:-trunk}
ENV cachebust=${cachebust:-0}

RUN <<EOF
curl "${repoUrl}/tarball/fossil-src.tar.gz?name=fossil-src&uuid=${version}"  \
  -o fossil-src.tar.gz
tar -xzf fossil-src.tar.gz
cd fossil-src
./configure --static --disable-fusefs --json
make
cd ..
EOF

########################################################################
# Builds a static althttpd binary from the latest trunk
# source code.
########################################################################

ENV repoUrl=https://sqlite.org/althttpd
ENV version=trunk
ENV cachebust=0

RUN <<EOF
curl "${repoUrl}/tarball/althttpd-src.tar.gz?name=althttpd-src&uuid=${version}"  \
  -o althttpd-src.tar.gz
tar -xzf althttpd-src.tar.gz
cd althttpd-src
make VERSION.h
make static-althttpsd
mv static-althttpsd althttpd
EOF

########################################################################
# Builds a static bagatto binary from the latest trunk
# source code.
########################################################################

RUN <<EOF
git clone --depth=1 https://github.com/janet-lang/jpm.git
cd jpm
janet bootstrap.janet
# fix the jpm script (there are two shebang lines, 1st one is wrong for our system)
sed -i '1d' $(which jpm)
cd ..

git clone https://git.sr.ht/~subsetpark/bagatto
cd bagatto
jpm --local load-lockfile
# make sure the linker can find libjanet.a
ln -s /usr/lib/libjanet.a /usr/local/lib/libjanet.a
# make it static
export BAG_STATIC_BUILD=1
jpm --local build
EOF
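
With both files in the same directory, a plain run builds everything from trunk. Thanks to the "$@" passthrough in build-script.sh, you can also hand --build-arg values straight to docker build (the tag name below is just an example):

sh build-script.sh
# or pin fossil to a specific release tag:
sh build-script.sh --build-arg version=version-2.25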

When done compiling, you have three small static binaries which will work on any recent Linux machine. Now upload them to the server and move them to the relevant directories. I copied althttpd and bag to /usr/local/bin/, while fossil went into /var/www/bin/ (more on this in the next section).
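
To double-check that a binary is really static before uploading, something like this should do:

file fossil   # should report "... statically linked ..."
ldd fossil    # should complain "not a dynamic executable"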

# Directories, files

Set up the directories and permissions. The chroot jail directory of the webserver is chosen to be /var/www, the same as the default for the Apache webserver. The www-data user will be the owner and group of the server process.

#!/bin/bash

# make sure the webserver and ssg are owned by root and have +x permission
sudo chown root:root /usr/local/bin/*
sudo chmod 755 /usr/local/bin/*

# setup default.website dir and our domain dir
domain_=<example_com>  # put your domain name here, change . to _

# setup main site dir for althttpd
sudo mkdir -p /var/www/${domain_}.website
# link www.<domain> to be served from the main <domain>
cd /var/www
sudo ln -s ${domain_}.website www_${domain_}.website
# default.website is served when no other directory matches the requested hostname
sudo ln -s ${domain_}.website default.website
sudo mkdir log    # for server logs
sudo mkdir repos  # for fossil repositories
sudo mkdir bin    # for the fossil binary

# adjust ownership
sudo chown -h www-data:www-data /var/www/*
# make sure anyone in www-data group can write here
sudo chmod 775 /var/www/*
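
If everything went well, the layout under /var/www should look roughly like this (using the hypothetical domain example.com):

/var/www/
├── bin/                     # fossil binary, seen as /bin/fossil inside the jail
├── log/                     # althttpd log files
├── repos/                   # fossil repositories
├── example_com.website/     # document root for example.com
├── www_example_com.website -> example_com.website
└── default.website -> example_com.website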

# Services, services, services

Let’s set up the required systemd services for networking. This follows the guide, but with changes for compatibility and security.

#!/bin/bash

# set root password (not strictly needed on gcp vm but can't hurt)
sudo passwd

# add your regular user to the www-data group. logout after this and login again
sudo usermod -a -G www-data $USER

# create http service
sudo tee /etc/systemd/system/http.socket > /dev/null <<EOF
[Unit]
Description=HTTP socket

[Socket]
Accept=yes
ListenStream=80
NoDelay=true

[Install]
WantedBy=sockets.target
EOF

# we are using /var/www as our root dir, and www-data as the web server user.
# It keeps compatibility with apache server if we ever want to switch back.
# Also, the log files will be saved in /var/www/log/
sudo tee /etc/systemd/system/http@.service > /dev/null <<EOF
[Unit]
Description=HTTP socket server
After=network-online.target

[Service]
WorkingDirectory=/var/www
ExecStart=/usr/local/bin/althttpd --root /var/www --user www-data --logfile /log/http.log
StandardInput=socket

[Install]
WantedBy=multi-user.target
EOF

# create https service
sudo tee /etc/systemd/system/https.socket > /dev/null <<EOF
[Unit]
Description=HTTPS socket

[Socket]
Accept=yes
ListenStream=443
NoDelay=true

[Install]
WantedBy=sockets.target
EOF

sudo tee /etc/systemd/system/https@.service > /dev/null <<EOF
[Unit]
Description=HTTPS socket server
After=network-online.target

[Service]
WorkingDirectory=/var/www
ExecStart=/usr/local/bin/althttpd --root /var/www --user www-data --cert /var/certs/fullchain.pem --pkey /var/certs/key.pem --logfile /log/https.log
StandardInput=socket

[Install]
WantedBy=multi-user.target
EOF

# enable the services and start http
sudo systemctl daemon-reload
sudo systemctl enable http.socket
sudo systemctl enable https.socket

sudo systemctl start http.socket
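
Before moving on, check that systemd is actually listening on port 80. Something like:

systemctl status http.socket   # should show "active (listening)"
curl -sI http://localhost/     # althttpd should answer (a 404 is fine, there is no content yet)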

# Certificates, certificates, certificates!

Now that the webserver is working over plain HTTP, it’s time to set up HTTPS. In order to do this, we need to get certificates for your domain using the ACME protocol. I am using acme.sh for this: it is a small shell script that does everything, and there is no need to become root for it. Read the acme.sh site before proceeding.

#!/bin/bash

domain=<example.com>   # put your domain name here
domain_=<example_com>  # put your domain name here, change . to _
email=<my@example.com> # put your email here

# make dir for the certificates, outside the chroot dir of the webserver (not in /var/www)
sudo mkdir /var/certs
# make sure we can write there as regular user in the www-data group
sudo chown root:www-data /var/certs
sudo chmod 775 /var/certs

# install acme.sh, log out and log in again after it finishes
curl https://get.acme.sh | sh -s email=${email}

# change issuing server, since the default zeroSSL hangs (at the time of writing)
acme.sh --set-default-ca --server letsencrypt

# issue certificate for your domain, you can use "*.${domain}" instead of
# www.${domain} if you want, also multiple domains are possible
acme.sh --issue -d ${domain} -d www.${domain} -w /var/www/${domain_}.website

# change owner:group of the generated ACME challenge files
sudo chown -R www-data:www-data /var/www/${domain_}.website/*

# install certificates so the webserver can load them
acme.sh --install-cert -d ${domain} \
--key-file /var/certs/key.pem \
--fullchain-file /var/certs/fullchain.pem \
--reloadcmd "sudo systemctl restart https.socket"

# the https service should have been started by now, but if not, issue:
#sudo systemctl start https.socket
# everything should work now. Let's make a small html file and check that it is
# served by the webserver.
sudo -u www-data tee /var/www/${domain_}.website/index.html > /dev/null <<EOF
<h1>Hello from ${domain} !!</h1>
<p>Yep, althttpd is working smoooothly</p>
EOF

# make sure this file is not executable
sudo chmod 644 /var/www/${domain_}.website/index.html

Now point your browser to your domain, and it should just work. Make sure both HTTP and HTTPS work.
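
You can also check from the command line (assuming ${domain} is still set from the script above):

curl -sI http://${domain}/  | head -n 1   # expect HTTP/1.1 200 OK
curl -sI https://${domain}/ | head -n 1   # expect HTTP/1.1 200 OK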

# Make fossil do its thing

OK, now for the interesting part. althttpd will serve any file without the +x permission as a static file, but files with +x permission are executed as CGI scripts/binaries. There is no need for a special cgi-bin directory like in Apache or most other webservers. For example, such a script in my setup above is just

#!/bin/fossil
repository: /repos/myrepo.fossil

where the paths are relative to the chroot jail, so /var/www/bin/fossil maps to /bin/fossil.
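
As a concrete sketch (the repository name “myrepo” and the script name are placeholders), creating a repo and its CGI entry point looks like this:

# create a repository inside the jail and hand it to www-data
/var/www/bin/fossil init /var/www/repos/myrepo.fossil
sudo chown www-data:www-data /var/www/repos/myrepo.fossil

# write the CGI launcher; the +x bit is what makes althttpd run it as CGI
sudo -u www-data tee /var/www/${domain_}.website/myrepo > /dev/null <<EOF
#!/bin/fossil
repository: /repos/myrepo.fossil
EOF
sudo chmod 755 /var/www/${domain_}.website/myrepo

The repository should then be reachable at https://<your domain>/myrepo.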

#althttpd  #webserver  #google-cloud