Decompiling a game essentially means reverse-engineering it back into source code. A few years ago, the Mario 64 decompilation project reached a milestone: complete, annotated source code. Compiling that source with the original build arguments reproduces a byte-exact copy of the original USA ROM file. What does this mean for enthusiasts and modders?
While exploring the depths of the decompiled code, a rather interesting discovery came to light: Nintendo, in their original build, left out the -O2 optimisation flag.
Unlocking the source code wasn’t just a peek into the past; it was a gateway to the future. With the complete decompiled source in hand, enthusiasts started creating native ports for PCs. No more emulating the Nintendo 64 hardware. Instead, direct ports were created that run seamlessly on DirectX 12, offering enhanced gameplay experiences. Sm64pcbuilder2 is a tool that streamlines the compilation process, integrating mods with ease.
With the transition to PC, the modding community found a new playground. Over the last three years, numerous patches have emerged, introducing enhancements ranging from upscaling and HD textures to intricate details like ray tracing. While altering textures and models is subjective and hinges on individual taste, I’ve shared some videos to show what is possible.
While experiencing Mario 64 on modern hardware with enhancements in HD models & graphics is fun, there’s a unique charm in pushing the boundaries of the original Nintendo 64 hardware. Thanks to the decompiled source, developers have found innovative ways to optimize the game to run more smoothly on its native platform.
Enter @KazeEmanuar, a developer who has been pushing the N64 hardware to its limits. Tackling the challenges of memory bandwidth, Kaze has been introducing a plethora of improvements, from FPS boosts to refining base models and textures. If low-level hacking and optimization enthuse you, Kaze’s explorations will eat up hours of your time.
The legacy of Mario 64 goes beyond its initial release. It has evolved and been reimagined by a dedicated community of enthusiasts, hackers, and developers. The dedication of the community and developers like Kaze showcases what can be done when you apply modern techniques to retro hardware and software.
This led me to put together a quick screening checklist that I could share with account managers, allowing them to do early screening of their accounts. Now, when I am eventually brought in, the scenario is in a much better place for me to work with.
These are not necessarily showstoppers, but do indicate that the technology is not being used optimally.
If a customer is trying to “make it more secure” without involving other parties, then other services may solve it more easily.
Run the Docker container, SSH in, and you’re on an emulated RISC-V core running Debian.
I wanted to see if I could make a simpler version of the RISC-V emulated system on WSL2, allowing anyone to quickly get started without too much setup. I was able to solve this by packing it all into a Docker container.
A pre-configured QEMU & Debian RISC-V Docker image. It lets you get started working in an emulated RISC-V environment, and is a quick way to see which libraries and frameworks work on RISC-V without sourcing hardware.
To get started, you run the Docker image and SSH in. You’ll then be in a Debian environment running on an emulated RISC-V core, ready to compile and run applications.
# 1 Get the image
docker pull davidburela/riscv-emulator
# 2. Run the container with the QEMU defaults of 2 CPUs & 2G RAM.
# Expose port 2222 which is routed through into the QEMU RISC-V guest
docker run -d --publish 127.0.0.1:2222:2222/tcp davidburela/riscv-emulator
# 3. SSH directly into the QEMU RISC-V guest, the default password is "root". (Might take a few minutes for guest to start)
ssh root@localhost -p 2222
The Dockerfile is available on GitHub if you want to see how it is put together. It also allows you to manually upgrade the versions of the Docker base image, QEMU, or the RISC-V Debian image.
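At a high level, the image boils down to installing QEMU in a standard Debian container, baking in the pre-built Debian RISC-V guest, and launching QEMU on start-up. Below is a minimal sketch of how such a Dockerfile might be structured; the package names, paths, and the start.sh script are illustrative assumptions, not the actual published file.

```dockerfile
# Hypothetical sketch of the image layout; see the real Dockerfile on GitHub
FROM debian:bullseye

# QEMU system emulator for RISC-V, plus tools to fetch the guest image
RUN apt-get update && \
    apt-get install -y qemu-system-misc u-boot-qemu wget unzip && \
    rm -rf /var/lib/apt/lists/*

# Pre-built Debian RISC-V guest (dqib artifacts) baked into the container
WORKDIR /riscv
RUN wget -O artifacts.zip \
      "https://gitlab.com/api/v4/projects/giomasce%2Fdqib/jobs/artifacts/master/download?job=convert_riscv64-virt" && \
    unzip artifacts.zip

# Port 2222 is forwarded through QEMU into the guest's SSH daemon
EXPOSE 2222

# start.sh (hypothetical) would launch qemu-system-riscv64 with hostfwd=tcp::2222-:22
CMD ["/riscv/start.sh"]
```

Structuring it this way means rebuilding the image is all that is needed to pick up a newer QEMU or guest image.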
I have been experimenting with getting different projects running on RISC-V. This Docker image allowed me to quickly spin up new experiments and validate them.
Today: I was able to get Ethereum running on RISC-V, after finding upstreamed fixes that are not yet in the latest releases of Golang.
I have submitted a GitHub issue to track the progress of RISC-V support in go-ethereum.
First, I tried compiling the Geth source with the latest version of Go (1.15.8), but I ran into an issue with cgo:
/usr/bin/ld: $WORK/b009/_x008.o: in function `x_cgo_thread_start':
/usr/lib/go-1.15/src/runtime/cgo/gcc_util.c:21: undefined reference to `_cgo_sys_thread_start'
collect2: error: ld returned 1 exit status
Digging into the Epic to port Golang to RISC-V, there are references to cgo not working. Following the threads, you eventually get to the issue & PR tracking and resolving the cgo problems: https://github.com/golang/go/issues/36641
The PR to fix this has been approved and merged, and is being upstreamed, but as of Go 1.15.8 it has not yet been included in a release. This means that for the moment we’ll need to compile Go ourselves from source to get this working.
This walk-through is assuming you are running on a RISC-V system, or have set up your system for cross compiling.
Easiest: RISC-V emulator docker image
Use docker to load a pre-built image that has a running RISC-V QEMU guest.
https://github.com/DavidBurela/riscv-emulator-docker-image
Emulated environment with QEMU:
I have written a guide on creating your own emulated RISC-V environment with QEMU: Emulating RISC-V Debian
Advanced: Cross compiling for RISC-V:
If you are not running an emulated environment and want to build geth on your current machine, you can cross compile and target RISC-V to test out the toolchain. You can view my previous post on Cross compiling Golang for RISC-V.
Once you have riscv64-linux-gnu-gcc and a Go environment set up, the rest of this walk-through will work.
NOTE: If you are cross compiling, you will also need to install libc6, see the bottom of my cross compiling link above, and see the troubleshooting section at the end.
First, compile Go from source to get the cgo fix. The easiest way to do this is by bootstrapping the toolchain from a binary release: https://golang.org/doc/install/source#bootstrapFromBinaryRelease
## Compile Go from source https://golang.org/doc/install/source
# install toolchain
sudo apt update
sudo apt install git golang -y
# clone source
cd ~
git clone https://go.googlesource.com/go goroot
cd goroot
cd src
# compile golang
./make.bash # compile only (MUCH faster)
./all.bash # run this instead if you want all of the validation tests to run
Pay attention to where it installs the commands. In this case they are in /root/goroot/bin, but this will be different on your system.
https://github.com/ethereum/go-ethereum
https://github.com/ethereum/go-ethereum/wiki/Installation-Instructions-for-Ubuntu
# install more toolchains
sudo apt install git build-essential -y
cd ~
git clone https://github.com/ethereum/go-ethereum.git
cd go-ethereum
## Build geth using our compiled version of golang, targeting RISC-V.
# Pay attention to where your go binary was placed in previous step.
# on my system the built version was placed in /root/goroot/bin/go
## `make geth` does not work; we need to run ci.go to specify the architecture
/root/goroot/bin/go run build/ci.go install -arch=riscv64 ./cmd/geth
# run your built geth! (if you are on RISC-V)
build/bin/geth
I have pinned a version of Geth I built for you to try ONLY if you want to see it quickly working on your RISC-V system.
DO NOT use it for anything serious, as you should never trust a version of Geth compiled by a random person on the internet ;-)
wget http://dweb.link/ipfs/QmWgJ68JocF15aBnsup6tcLnD6DbsMiXMjn4CsufmFuyLA -O geth
Important: the riscv64 architecture was not added until Go version 1.15.x, so you will need to make sure you are running a recent release.
Here is the Github issue that tracked the implementation https://github.com/golang/go/issues/36641
# Install standard GCC for your system
sudo apt install build-essential
# Add the RISC-V GCC package
sudo apt install g++-riscv64-linux-gnu gcc-riscv64-linux-gnu
# Tell golang what the compilation target is
# Importantly, set the CC (Cross Compiler) to use the riscv64 version of GCC, instead of our system's default
export GOOS=linux
export GOARCH=riscv64
export CGO_ENABLED=1
export CC=riscv64-linux-gnu-gcc
Now cgo files will compile when cross compiling for RISC-V:
# Code sample from issue https://github.com/golang/go/issues/36641#issuecomment-630204733
# Create a sample app
cat > cgo.go << EOF
package main

import "runtime"

/*
#include <stdio.h>
void hello(char *goos, char *goarch) {
    printf("Hi from cgo on %s/%s!\n", goos, goarch);
}
*/
import "C"

func main() {
    goos := C.CString(runtime.GOOS)
    goarch := C.CString(runtime.GOARCH)
    C.hello(goos, goarch)
}
EOF
# build it
go build ./cgo.go
When you run go env you should see similar output:
$ go env
GOOS="linux"
GOARCH="riscv64"
CGO_ENABLED="1"
CC="riscv64-linux-gnu-gcc"
...
If you cannot export, you can preface the go command with the environment settings you want:
env GOOS=linux GOARCH=riscv64 CGO_ENABLED=1 CC=riscv64-linux-gnu-gcc go build
If you see something similar to this, then you have an older version of Golang. Solution: install a version later than 1.15.x.
$ go build ./cgo.go
/usr/bin/ld: $WORK/b009/_x008.o: in function `x_cgo_thread_start':
/usr/lib/go-1.15/src/runtime/cgo/gcc_util.c:23: undefined reference to `_cgo_sys_thread_start'
collect2: error: ld returned 1 exit status
This is what got me stuck for quite a while. It happens when CC is still set to your system default gcc. Solution: set export CC=riscv64-linux-gnu-gcc
$ go build ./cgo.go
gcc_riscv64.S: Assembler messages:
gcc_riscv64.S:15: Error: no such instruction: `sd x1,-200(sp)'
gcc_riscv64.S:16: Error: no such instruction: `addi sp,sp,-200'
gcc_riscv64.S:17: Error: no such instruction: `sd x8,8(sp)'
gcc_riscv64.S:18: Error: no such instruction: `sd x9,16(sp)'
gcc_riscv64.S:19: Error: no such instruction: `sd x18,24(sp)'
...
gcc_riscv64.S:29: Error: no such instruction: `fsd f8,104(sp)'
gcc_riscv64.S:30: Error: no such instruction: `fsd f9,112(sp)'
gcc_riscv64.S:31: Error: no such instruction: `fsd f18,120(sp)'
...
gcc_riscv64.S:43: Error: no such instruction: `mv s1,a0'
gcc_riscv64.S:44: Error: no such instruction: `mv s0,a1'
gcc_riscv64.S:45: Error: no such instruction: `mv a0,a2'
gcc_riscv64.S:46: Error: no such instruction: `jalr ra,s0'
gcc_riscv64.S:47: Error: no such instruction: `jalr ra,s1'
gcc_riscv64.S:49: Error: no such instruction: `ld x1,0(sp)'
gcc_riscv64.S:50: Error: no such instruction: `ld x8,8(sp)'
...
gcc_riscv64.S:62: Error: too many memory references for `fld'
gcc_riscv64.S:63: Error: too many memory references for `fld'
gcc_riscv64.S:64: Error: too many memory references for `fld'
...
gcc_riscv64.S:74: Error: no such instruction: `addi sp,sp,200'
gcc_riscv64.S:76: Error: no such instruction: `jr ra'
/lib/ld-linux-riscv64-lp64d.so.1: No such file or directory
Solution:
Unfortunately I could not figure out a clean way to do this. Apparently you can set export CGO_LDFLAGS="-L/path/to/the/lib", but that did not work for me. So in the short term, copying the libraries into the /lib folder after installing works: http://www.gridengine.eu/index.php/other-stories/232-avoiding-the-ldlibrarypath-with-shared-libs-in-go-cgo-applications-2015-12-21
apt install libc6-dbg-riscv64-cross
cp /usr/riscv64-linux-gnu/lib/*.* /lib
IPFS is written in Go. The Golang team have done a lot of hard work getting the compilers and runtime working on different architectures, with RISC-V being one of them. I encountered one bug in a deep dependency of Go on RISC-V, involving procfs and an undefined: parseCPUInfo compile error.
...
go build "-asmflags=all='-trimpath='" "-gcflags=all='-trimpath='" -ldflags="-X "github.com/ipfs/go-ipfs".CurrentCommit=79a55305e" -o "cmd/ipfs/ipfs" "github.com/ipfs/go-ipfs/cmd/ipfs"
# github.com/prometheus/procfs
../../go/pkg/mod/github.com/prometheus/procfs@v0.1.3/cpuinfo.go:71:9: undefined: parseCPUInfo
make: *** [cmd/ipfs/Rules.mk:22: cmd/ipfs/ipfs] Error 2
It involved some digging to get to the root cause and find a way to resolve it, but luckily there is a PR available to fix it: https://github.com/prometheus/procfs/pull/325
After applying the patch, I was able to successfully build IPFS in my emulated RISC-V environment, as well as pin & host my blog on the IPFS network.
Tracking RISC-V support: I have submitted a new issue on the go-ipfs repo to track the support of RISC-V https://github.com/ipfs/go-ipfs/issues/7781
These steps assume you have a running version of Debian on RISC-V.
See my previous post Emulating RISC-V Debian on WSL2.
Then it is mostly a matter of following the standard IPFS build instructions:
# Install build tools
apt update
apt install golang git make
# Clone IPFS repo
git clone https://github.com/ipfs/go-ipfs.git
cd go-ipfs
# Apply temporary patch to fix a broken dependency https://github.com/prometheus/procfs/pull/325
# Maybe try building first before applying the patch in case this has been resolved.
go mod edit -replace=github.com/prometheus/procfs=github.com/prometheus/procfs@910e685
# Build!
# Will output to ./cmd/ipfs/ipfs
make build
# Can optionally install
make install
From here you’ll be able to use the standard IPFS commands https://docs.ipfs.io/how-to/command-line-quick-start/
Things that helped:
Requirements:
These instructions will also work inside of an Ubuntu Docker container, if you are on another system.
First, we need to prepare our toolchain and configure QEMU for RISC-V emulation.
## Setup build environment
# install base toolchain
sudo apt update
sudo apt install git wget build-essential ninja-build python3-setuptools
# need some additional build libraries for QEMU
sudo apt install libglib2.0-dev libpixman-1-dev
## Build QEMU
# clone it
cd ~
git clone https://github.com/qemu/qemu
# make & install QEMU for RISC-V
cd qemu
mkdir build
cd build
../configure --target-list=riscv64-softmmu
make -j3
sudo make install
Debian needs a special kernel for QEMU
## Download RISC-V compatible kernel.
# Option 1: Install if available on stable
sudo apt install u-boot-qemu
# Option 2: **IF** the package is not found, manually download & install package. As it is on testing, not stable.
# https://packages.debian.org/bullseye/u-boot-qemu
cd ~
wget http://ftp.us.debian.org/debian/pool/main/u/u-boot/u-boot-qemu_2021.01+dfsg-2_all.deb
sudo dpkg -i ./u-boot-qemu_2021.01+dfsg-2_all.deb
Now that everything is set up, download a pre-built Debian distro, extract it, and run it.
# Will need unzip
sudo apt install unzip
# download images from https://people.debian.org/~gio/dqib/
cd ~
wget https://gitlab.com/api/v4/projects/giomasce%2Fdqib/jobs/artifacts/master/download?job=convert_riscv64-virt -O artifacts.zip
unzip artifacts.zip
cd artifacts
# Start the VM
# -smp number of processors
# -m for RAM
# -netdev hostfwd 2222 will port forward for SSH
qemu-system-riscv64 \
-nographic \
-machine virt \
-cpu rv64 \
-smp 4 \
-m 4G \
-kernel /usr/lib/u-boot/qemu-riscv64_smode/uboot.elf \
-device virtio-blk-device,drive=hd \
-drive file=image.qcow2,if=none,id=hd \
-device virtio-net-device,netdev=net \
-netdev user,id=net,hostfwd=tcp::2222-:22 \
-object rng-random,filename=/dev/urandom,id=rng \
-device virtio-rng-device,rng=rng \
-append "root=LABEL=rootfs console=ttyS0"
User/pass: root/root
Recommended:
The QEMU terminal window is not great. From a new terminal, SSH in to get a better experience using the forwarded port.
ssh root@localhost -p 2222 #password root
Now that your Debian image is up and running you can use it.
I was able to successfully apt install a number of packages, such as golang, htop, and screenfetch.
# show CPU info
lscpu
# show distro information
apt install screenfetch
screenfetch
If you want to play further, you can optionally install an additional package that provides a generic and a SiFive BIOS.
# RISC-V firmware
# Additional firmwares will be available under /usr/lib/riscv64-linux-gnu/opensbi/
sudo apt install opensbi
# can now use the bios flag on QEMU
qemu-system-riscv64 \
-bios /usr/lib/riscv64-linux-gnu/opensbi/qemu/virt/fw_jump.elf \
... \
You won’t need to install the additional kernel that was required for Debian. Once you have QEMU configured for RISC-V, you can just download and run the latest RISC-V image from https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/
# Download images from https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/
wget https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/Fedora-Minimal-Rawhide-20200108.n.0-fw_payload-uboot-qemu-virt-smode.elf
wget https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/Fedora-Minimal-Rawhide-20200108.n.0-sda.raw.xz
unxz Fedora-Minimal-Rawhide-20200108.n.0-sda.raw.xz
# Boot the image. Forward port 10000 for SSH
# user: riscv
# pass: fedora_rocks!
qemu-system-riscv64 \
-nographic \
-machine virt \
-smp 2 \
-m 2G \
-kernel Fedora-Minimal-Rawhide-*-fw_payload-uboot-qemu-virt-smode.elf \
-bios none \
-object rng-random,filename=/dev/urandom,id=rng0 \
-device virtio-rng-device,rng=rng0 \
-device virtio-blk-device,drive=hd0 \
-drive file=Fedora-Minimal-Rawhide-20200108.n.0-sda.raw,format=raw,id=hd0 \
-device virtio-net-device,netdev=usernet \
-netdev user,id=usernet,hostfwd=tcp::10000-:22
However, last week SiFive announced a new dev board costing less than 1/4 of the original board + expansion. This has enticed me back into the ecosystem and the opportunities here.
Let’s back up for a moment: why am I so excited about this platform?
Currently, for many projects it isn’t worth spending time and money on custom silicon, as it is just too cost prohibitive. While there are existing chip designs available for licensing (x86, ARM), the licensing fees, NDAs, and lead times make it not worthwhile. It boils down to this: it is currently too expensive to fabricate your own tailored chips.
Because of this, most companies just use a commercial off-the-shelf (COTS) chip, as the economics and time to market make it easier, even if that chip is overkill, draws additional power, or has unneeded features.
RISC-V is an open source ISA that can be used to create open hardware microprocessors. You can take pre-defined designs and modify them to create new custom chips with just the subset of features you require, with a reduced lead time. The advantage of using an open, community-supported ISA is that an entire ecosystem of toolchains, libraries, operating systems, etc. will support your chip.
RISC-V is going to shake up a lot of industries and create new startup opportunities. Industries will be able to create smaller, optimised, feature-reduced chips that target exactly their use case.
For example, Western Digital have announced that all their hard drives will use RISC-V chips, and they talk about its potential: https://www.westerndigital.com/company/innovations/risc-v
Why am I interested in RISC-V?
For me personally, I am excited for the fully open stack opportunities in a decentralised web.
For me professionally, this will have a large impact on the industries I am working with.
We have the opportunity to create a completely free and open source stack all the way from silicon, hardware, OS, and application stack.
Complete beginner overview: If you know nothing at all about RISC-V, then Linus has a very beginner-friendly intro to the opportunities.
[YouTube] [IPFS mirror]
More technical overview: If you have any knowledge of CPUs then this is probably a better intro video.
[YouTube] [IPFS mirror]
Opportunities for customisation: This video dives into the types of extensions you can use to optimise the chips for power draw or acceleration (like ML), showing the flexibility of the design.
[YouTube] [IPFS mirror]
If you want to play around, you can design your own custom RISC-V chip. Tweak the number of cores, cache, capabilities, etc. https://scs.sifive.com/core-designer/
The previous development board cost a total of US$3,000 for the base board + expansion board, which gave you USB, PCI-E slots, etc. This was just too far out of my price range and is why I waited for later revisions.
Which brings us to the latest development board:
It now has 4x SiFive U74 cores (general purpose) and 1x SiFive S7 core (real time), 8GB integrated RAM, 2x M.2 slots (M-Key for NVMe storage, E-Key for Bluetooth/WiFi), an x16 PCI-E slot for a graphics card, and a Mini-ITX form factor so it can be mounted in a standard PC case. And most importantly, there is that reduced price.
More articles on the hardware
It is still expensive for a development board that will be roughly the speed of a Raspberry Pi 4, but it does have all those expansion slots, and buying one supports further development.
I plan on pre-ordering the new SiFive development board. I would like to experiment and write up my experiences using it for projects like:
Previously I shared how I am hosting my blog 100% on the decentralised web, with an HTTP gateway provided by Cloudflare, in this post. This is working well and I am happy with how it has turned out so far.
I have been thinking about how to apply this to larger scale deployments such as docs.microsoft.com, and what hurdles may hinder adoption.
An issue that stood out for me was initial load time. If the site is not well seeded, or peer lookup after a new site upload is slow, then general users may encounter slow initial loads while caches warm up.
I wanted to find a solution that would help enterprise adoption by letting the “general user” load a site as fast as expected, while also enabling “Dweb power users” to experience all the benefits of a decentralised web.
The solution I came to was to combine Azure static web apps, with IPFS and DNSLink.
This allows standard users to access the site with the speed and ease of normal Azure hosting, while enabling access via the decentralised web. The IPFS blog explains some of the current and upcoming browser support.
/-> [Standard] Azure static web apps
Browser - DNS
\-> [Dweb enabled] IPFS network
Normal browser flow
Can access as normal, with speed benefits of Azure.
Browser -> DNS -> Azure static web site
Dweb flow
Will load the site from peers on the decentralised network
Browser -> DNSLink -> IPFS peers
Enabled features
If you have a static website ready, this can be set up in 5 minutes with Azure + GitHub action. There are quick starts to walk you through setting it up.
Cloudflare is currently the recommended way to do this, due to the support in ipfs-deploy used later.
You will need to create a Cloudflare API token for use later.
This DNS entry is additional metadata that Dweb-capable browsers can use to discover that a decentralised version is available, which triggers the browser to load the site from the peer-to-peer network instead.
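Concretely, a DNSLink entry is just a TXT record on a `_dnslink.` subdomain pointing at the latest content hash. A hypothetical example is below; the domain, TTL, and CID are placeholders, and the CID must be updated on every deploy (which is what ipfs-deploy automates later in this post):

```
;; Hypothetical zone-file entry for a DNSLink record
_dnslink.www.example.com.  300  IN  TXT  "dnslink=/ipfs/<CID-of-latest-site-build>"
```

Dweb-aware clients resolve this record and then fetch the referenced /ipfs/ path from the peer-to-peer network; you can check what is currently published with a standard TXT lookup (e.g. dig TXT _dnslink.www.example.com).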
They are a hosted IPFS offering. IPFS-deploy has support for it, and they provide 1GB free storage to help you get started. You will need your API key from the account section for use later.
These secrets are used in the pipeline. Enter your Cloudflare and Pinata secrets here for use in the GitHub action.
# sample secrets
IPFS_DEPLOY_CLOUDFLARE__API_TOKEN
IPFS_DEPLOY_CLOUDFLARE__ZONE=davidburela.com
IPFS_DEPLOY_CLOUDFLARE__RECORD=_dnslink.www.davidburela.com
IPFS_DEPLOY_PINATA__API_KEY
IPFS_DEPLOY_PINATA__SECRET_API_KEY
IPFS-deploy is able to push your site to common IPFS hosts for storage, and it is also able to update Cloudflare.
Modify your GitHub action .yml file to include a job to install Node, and then run ipfs-deploy.
You will need to replace _site with your compiled asset folder.
# deploy to IPFS and update cloudflare
- name: Install Node
uses: actions/setup-node@v1
with:
node-version: '12.x'
- name: Run IPFS-deploy
run: npx ipfs-deploy _site/ -O -C -p infura -p pinata -d cloudflare
env:
IPFS_DEPLOY_PINATA__API_KEY: $
IPFS_DEPLOY_PINATA__SECRET_API_KEY: $
IPFS_DEPLOY_CLOUDFLARE__API_TOKEN: $
IPFS_DEPLOY_CLOUDFLARE__ZONE: $
IPFS_DEPLOY_CLOUDFLARE__RECORD: $
# Explanation of flags:
# _site/ is the folder to deploy. In my example I'm deploying the Jekyll build folder.
# -p can pin it to multiple providers. Infura is free, and we are using our Pinata account
# -d which DNS provider we are using (cloudflare)
# -O don't open the URL afterwards
# -C don't copy to clipboard (can cause issue due to no clipboard being present)
With the move of my blog to a static website hosted on the decentralised web, one thing I lost was rich analytics on the views of each page. WordPress gave me a lot of insight into which blog posts had historical views, which was interesting to dive into.
I could have taken the easy way and added Google Analytics, but the amount of tracking that Google already does on the web made me a bit uneasy, which meant I was looking for a lighter-weight solution. I also needed something completely client side, as the blog is a static website deployed to IPFS.
Perfect timing: a top HackerNews post linked to an article by LWN.net on Lightweight alternatives to Google Analytics. I read through that, as well as the follow-up, More alternatives to Google Analytics.
After reading through the options, I settled on GoatCounter. I liked the author’s goal of making a lightweight analytics platform that is easy to use and does not track personal data. It is also open source on GitHub.
Adding it to my blog was easy: a single <script> tag. I also opted to add a <noscript> pixel to support a wider array of browsers, and put a little HTML comment above it explaining that it is a privacy-preserving counter.
I did this by adding it to _includes\custom-head.html in my Jekyll layout.
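For illustration, the fragment I describe looks something like the following. This is a sketch based on GoatCounter’s integration docs at the time; the MYCODE site code is a placeholder for your own GoatCounter account code:

```html
<!-- Privacy-preserving page view counter (GoatCounter); no personal data is tracked -->
<script data-goatcounter="https://MYCODE.goatcounter.com/count"
        async src="//gc.zgo.at/count.js"></script>
<!-- Fallback pixel for browsers with JavaScript disabled -->
<noscript>
  <img src="https://MYCODE.goatcounter.com/count?p=/noscript">
</noscript>
```

Because everything runs in the visitor’s browser and reports to GoatCounter’s endpoint, it works unchanged on a static site served from IPFS.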
Ironically though, it doesn’t work on my machine, as the uBlock Origin extension I have installed in Edge blocks the domain ^_^;;
That is fine though, I’m not after exact numbers. Just a general idea of which pages are getting views is sufficient, so I’m happy with this implementation.
GoatCounter had an easy onboarding flow, and I got the whole thing set up and deployed in a couple of minutes. It is open source and doesn’t track personal data. So far I can recommend it to others looking for a lightweight solution.
It is great that they have a free tier for personal non-commercial use, with 6 months of retention. I only have 100s of views a month, don’t need advanced analytics, don’t need custom domains, etc. But longer data retention might be a nice supporter feature.
Paying to support projects you use, and “how much is it worth” (especially for business use), is always a hot topic. I’d be happy paying $10/year ($1/month on Patreon?) for unlimited retention on a non-commercial++ tier, but $100/year for page views on a personal blog I update infrequently seems overkill for my use case. Right now I’m comfortable with the features I get on the free tier, have become a patron of the creator, and will recommend it to others looking for a solution.