Over-the-air update system for FxBlox devices and PCs. Manages Docker-based services for IPFS storage, cluster pinning, auto-pinning, and the Fula protocol on ARM64 hardware (RK3588/RK1) and x86_64 desktops (Windows/Linux/macOS).
Each Fula node runs nine Docker containers orchestrated by docker-compose:
+---------------------+ +-------------------+ +---------------------+
| fxsupport | | kubo | | ipfs-cluster |
| (alpine, scripts) | | (ipfs/kubo:release| | (functionland/ |
| | | bridge network) | | ipfs-cluster) |
| Carries all config | | | | host network |
| and scripts in | | Ports: 4001, 5001 | | Ports: 9094-9096 |
| /linux/ directory | | 8080, 8081 | | |
+---------------------+ +---+---------------+ +---------------------+
| | ^
| docker cp to | IPFS | pin/add, pin/ls
| /usr/bin/fula/ | block |
v | storage +-+-----------------+ +-------------------+
+----------------------------+ | | fula-pinning | | fula-gateway |
| /uniondrive (merged) | | | (auto-pin daemon) | | (S3-compatible |
| union-drive.sh (mergerfs) | | | host network | | storage gateway) |
+----------------------------+ | | Port: 3501 | | host network |
| | +---+---+-----------+ | Port: 9000 |
+--------+--------+ +------+------+ | | +--------+-----------+
| go-fula | | kubo-local | Syncs | | registry.cid |
| (functionland/ | | (ipfs/kubo | pins | +----> shared file --+
| go-fula) | | :release) | from | /internal/fula-gateway/
| host network | | bridge | remote |
| Port: 40001 | | Port: 5002 | pinning |
+------------------+ +-------------+ |
+---------------------+ |
| watchtower | |
| auto-update | |
| every 3600s | |
+---------------------+ |
+---------------------+ |
| fula.service | |
| (systemd) | |
| runs fula.sh | |
+---------------------+ |
| Container | Image | Network | Purpose |
|---|---|---|---|
| `fula_fxsupport` | `functionland/fxsupport:release` | default | Carries scripts and config files. Acts as a delivery mechanism: `fula.sh` copies `/linux/` from this container to `/usr/bin/fula/` on the host. |
| `ipfs_host` | `ipfs/kubo:release` | bridge | Main IPFS node. Stores blocks on `/uniondrive`. Config template at `kubo/config`, runtime config at `/home/pi/.internal/ipfs_data/config`. |
| `ipfs_local` | `ipfs/kubo:release` | bridge | Local-only IPFS node for fula-gateway and fula-pinning. Separate IPFS repo at `/internal/ipfs_data_local`. Port 5002 (localhost only). |
| `ipfs_cluster` | `functionland/ipfs-cluster:release` | host | IPFS Cluster follower node. Manages pin replication across the network. Config at `/uniondrive/ipfs-cluster/service.json`. |
| `fula_go` | `functionland/go-fula:release` | host | Fula protocol: blockchain proxy (ports 4020/4021), libp2p blox (port 40001), WAP server (port 3500). |
| `fula_pinning` | `functionland/fula-pinning:release` | host | Auto-pinning daemon. Syncs pins from a remote IPFS Pinning Service to local kubo. Writes `registry.cid` for fula-gateway. See `docker/fula-pinning/README.md`. |
| `fula_gateway` | `functionland/fula-gateway:release` | host | Local S3-compatible gateway. Serves the paired user's files from local kubo on port 9000 (LAN only). Auth via pairing secret from `box_props.json`. |
| `fula_updater` | `containrrr/watchtower:latest` | default | Polls Docker Hub hourly for updated images and auto-restarts containers. |
- kubo and kubo-local run in Docker bridge networking (ports mapped: 4001:4001, 5001:5001, 5002:5002)
- go-fula, ipfs-cluster, fula-pinning, and fula-gateway run with `network_mode: "host"` (direct host networking)
- go-fula cannot reach kubo at `127.0.0.1` from the host network; it uses the `docker0` bridge IP (typically `172.17.0.1`), auto-detected in `go-fula/blox/kubo_proxy.go`
- Docker Compose bridge caveat: Compose creates its own bridge interfaces (`br-<hash>`), not `docker0`. Firewall rules must match `-i br-+` (iptables wildcard) in addition to `-i docker0`.
- fula-gateway reaches kubo at `127.0.0.1:5001` via host network (kubo's port mapping) and serves the S3 API on `0.0.0.0:9000` (LAN-only via firewall)
- PC installer networking: all services run in bridge mode (`fula-net`) since Docker Desktop doesn't support host networking on Windows/macOS. kubo's LAN and public IPs are injected into `Addresses.AppendAnnounce` so it advertises reachable addresses to the DHT.
/uniondrive/ # mergerfs mount (union of all attached drives)
ipfs_datastore/blocks/ # IPFS block storage (flatfs)
ipfs_datastore/datastore/ # IPFS metadata (pebble)
ipfs-cluster/ # Cluster state, service.json, identity.json
ipfs_staging/ # IPFS staging directory
/home/pi/.internal/ # Device-internal state (Armbian)
ipfs_data/config # Deployed kubo config (runtime)
ipfs_data_local/config # Local kubo config (runtime)
ipfs_config # Template copy (used by initipfs)
config.yaml # Fula device config (pool, authorizer, etc.)
box_props.json # Pairing credentials (JWT, secret, endpoint)
fula-gateway/registry.cid # Bucket registry CID (written by fula-pinning)
/usr/bin/fula/ # On-host scripts and configs (copied from fxsupport)
fula.sh # Main orchestrator script
docker-compose.yml # Container definitions
.env # Image tags (GO_FULA, FX_SUPPROT, IPFS_CLUSTER)
.env.cluster # Cluster env var overrides
kubo/config # Kubo config template
kubo/kubo-container-init.d.sh # Kubo init script
kubo-local/config-local # Local kubo config template
kubo-local/kubo-local-container-init.d.sh # Local kubo init script
ipfs-cluster/ipfs-cluster-container-init.d.sh # Cluster init script
update_kubo_config.py # Selective kubo config merger
union-drive.sh # UnionDrive mount management
bluetooth.py # BLE command handler
local_command_server.py # Local TCP command server
control_led.py # LED control
readiness-check.py # Health monitoring and auto-recovery
commands.sh # File-based command handler (reboot, LED, partition)
firewall.sh # iptables firewall rules
plugins/ # Plugin system
...
Each Fula node has two peer IDs derived from a single private key:
- Cluster Peer ID: the original `peer.IDFromPrivateKey(identity)`, used for on-chain pool membership and IPFS Cluster identity
- Kubo Peer ID: HMAC-SHA256 derived with domain `"fula-kubo-identity-v1"`, used for the IPFS kubo identity (prevents collision with the cluster)

go-fula uses the cluster peer ID for `discoverPoolAndChain()` membership checks, while kubo uses the derived peer ID.
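The derivation can be sketched as domain-separated HMAC. The exact byte layout and key encoding live in go-fula's Go code, so treat the function below as an illustration rather than the canonical implementation:

```python
import hmac
import hashlib

# Domain string from the peer-identity scheme above
KUBO_DOMAIN = b"fula-kubo-identity-v1"

def derive_kubo_seed(private_key_bytes: bytes) -> bytes:
    """Sketch: derive the kubo identity seed from the node's private key.

    HMAC-SHA256 keyed with the private key over a fixed domain string,
    so the kubo peer ID can never collide with the cluster peer ID.
    """
    return hmac.new(private_key_bytes, KUBO_DOMAIN, hashlib.sha256).digest()

key = bytes(32)                      # placeholder private key
cluster_seed = key                   # cluster identity uses the key directly
kubo_seed = derive_kubo_seed(key)    # kubo identity uses the derived seed
assert kubo_seed != cluster_seed     # distinct by construction
```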
The fula-pinning daemon replicates data from the remote Fula Storage API to this device's local IPFS node. The local blox is a sync consumer only — uploads go to the remote Fula Gateway S3 server, not to this device:
User uploads via S3 API -> remote Fula Gateway -> stores in Gateway's Kubo + pins on Remote Pinning Service
|
fula-pinning daemon (this device) <-- fetches pin list every 3 min (user's JWT) -------+
|
+---> pins missing CIDs on local Kubo (fetches data via IPFS P2P network)
User isolation: The daemon uses the paired user's JWT (auto_pin_token) to query the pinning service. The service only returns pins belonging to that token, so the local kubo only pins the paired user's data.
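At its core, each sync cycle is a set difference between the remote pin list (scoped by the user's JWT) and the local pins. A minimal sketch (the real daemon is written in Go; the CID values here are placeholders):

```python
def cids_to_pin(remote_pins: set, local_pins: set) -> set:
    """CIDs the pinning service reports for the paired user that the
    local kubo node has not pinned yet."""
    return remote_pins - local_pins

# The daemon would then call kubo's pin/add for each missing CID;
# kubo fetches the blocks over the IPFS P2P network.
remote = {"bafyA", "bafyB", "bafyC"}   # from the pinning service (user's JWT)
local = {"bafyA"}                      # from local kubo pin/ls
assert cids_to_pin(remote, local) == {"bafyB", "bafyC"}
```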
Configuration: Pairing credentials are stored in /home/pi/.internal/box_props.json:
{
"auto_pin_token": "user-jwt-token",
"auto_pin_endpoint": "https://api.pinata.cloud/psa",
"auto_pin_pairing_secret": "local-api-secret"
}

The daemon monitors this file every 30 seconds, so you can pair/unpair/rotate tokens without restarting the container. When unpaired (empty token/endpoint), the daemon idles.
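The hot-reload behavior can be sketched as mtime polling. The real daemon is Go and `PairingWatcher` is a hypothetical name, so this is illustrative only:

```python
import json
import os

class PairingWatcher:
    """Sketch: re-read box_props.json only when its mtime changes."""

    def __init__(self, path):
        self.path = path
        self.mtime = None
        self.props = {}

    def poll(self):
        """Reload if the file changed; returns True on reload."""
        try:
            mtime = os.path.getmtime(self.path)
        except OSError:
            return False  # file missing: keep previous state
        if mtime == self.mtime:
            return False
        self.mtime = mtime
        with open(self.path) as f:
            self.props = json.load(f)
        return True

    def paired(self):
        # Unpaired (empty token/endpoint) means the daemon idles.
        return bool(self.props.get("auto_pin_token")) and \
            bool(self.props.get("auto_pin_endpoint"))
```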
Local HTTP API (port 3501, requires `Bearer {pairing_secret}`):

- `GET /api/v1/auto-pin/status` - pinned count, last/next sync times
- `POST /api/v1/auto-pin/report-missing` - request immediate pinning of specific CIDs
See docker/fula-pinning/README.md for full documentation.
The fula-gateway container is a standalone Rust binary (fula-local-gateway) that provides an S3-compatible API for local file access. It uses fula-core and fula-blockstore crates from fula-api as shared libraries, but contains no cloud code.
FxFiles (client-side encryption/decryption)
|
+-- Remote: S3 API -> s3.cloud.fx.land:443 -> remote fula-cli -> remote kubo
|
+-- Local (LAN): S3 API -> blox-ip:9000 -> local fula-gateway -> local kubo
^
registry.cid written by
fula-pinning daemon
Key features:
- Bearer-only auth: uses `auto_pin_pairing_secret` from `box_props.json`. When unpaired, auth is disabled (safe: port 9000 is LAN-only via firewall).
- User scoping: derives the owner ID from a BLAKE3 hash of the JWT `sub` claim. All bucket operations are scoped to the paired user.
- Multipart uploads: full lifecycle support (`create_multipart_upload`, `upload_part`, `complete_multipart_upload`). Creates a unified DAG via `put_ipld()` for multi-part files.
- CID watcher: polls `/internal/fula-gateway/registry.cid` every 30 seconds and reloads the bucket registry when the CID changes.
- Content CID header: always returns an `X-Fula-Content-Cid` header on object responses.
- Registry persistence: uses `persist_registry()` (no token) for local operation.
- mDNS: go-fula advertises `s3Port=9000` in mDNS TXT records so FxFiles discovers the local gateway automatically.
Firewall: Port 9000 is restricted to RFC1918 private addresses only (192.168.0.0/16, 10.0.0.0/8, 172.16.0.0/12).
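That restriction is equivalent to admitting only RFC1918 source addresses. A quick check with Python's stdlib `ipaddress` module (the function name is illustrative):

```python
import ipaddress

# The RFC1918 private ranges the firewall allows on port 9000
PRIVATE_NETS = [
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
]

def lan_only(source_ip: str) -> bool:
    """True if the source address falls inside an RFC1918 range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in PRIVATE_NETS)

assert lan_only("192.168.1.50")
assert not lan_only("8.8.8.8")
```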
The readiness-check.py script runs as a systemd service and provides comprehensive health monitoring:
- Container health: monitors all 8 containers, detects crashes and error logs
- Config validation: detects invalid YAML control characters and auto-restores from backup if corrupted
- PeerID collision detection: detects and fixes kubo/cluster PeerID collisions by regenerating the cluster identity
- Proxy health: checks go-fula proxy ports (4020, 40001) for reachability
- Disk space: triggers `docker system prune` if <1GB free
- Kubo config: strips deprecated Provider/Reprovider fields (kubo 0.40+)
- LED status: green=healthy, yellow=restarting, blue=restarted, red=critical failure
- Auto-recovery: up to 4 restart attempts, then activates WireGuard fallback and triggers re-partition after 12+ hours of failure
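The disk-space rule above can be expressed as a simple threshold check; a sketch (function names are illustrative):

```python
import shutil

LOW_DISK_BYTES = 1 * 1024**3  # <1GB free triggers cleanup

def low_disk(free_bytes: int) -> bool:
    """Threshold test used to decide on cleanup."""
    return free_bytes < LOW_DISK_BYTES

def needs_prune(path: str = "/") -> bool:
    """True when free space at `path` drops below 1GB, at which point
    readiness-check would run `docker system prune`."""
    return low_disk(shutil.disk_usage(path).free)
```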
Optional plugins extend device functionality. Each plugin includes install.sh, start.sh, stop.sh, uninstall.sh, a docker-compose.yml, and an info.json manifest.
| Plugin | Purpose | Requirements |
|---|---|---|
| streamr-node | Runs a Streamr node to earn $DATA token rewards | Port 32200 forwarding |
| loyal-agent | Local AI agent using NPU (deepseek-llm-7b-chat model) | 32GB RAM, 10GB storage, ARM64 only |
Active plugins are tracked in /home/pi/active-plugins.txt. The PC installer excludes hardware-specific plugins (e.g. loyal-agent).
The commands.sh script watches /home/pi/commands/ via inotifywait for file-based commands:
| Command File | Action |
|---|---|
| `.command_partition` | Runs `resize.sh` for disk repartitioning |
| `.command_repairfs` | Runs `repairfs.sh` for external storage filesystem repair |
| `.command_led` | Sets LED color/duration (file content: `color duration`) |
| `.command_reboot` | Triggers system reboot |
The pc-installer/ directory contains an Electron desktop application that brings the full Fula node stack to Windows, Linux, and macOS PCs via Docker Desktop.
| Aspect | Armbian (fula.sh) | PC Installer (Electron) |
|---|---|---|
| Runtime | Bash scripts on bare metal | Electron GUI + Node.js managers |
| Docker networking | Mixed (host for go-fula, bridge for kubo) | All bridge (fula-net) |
| Firewall | iptables rules | Windows Firewall rules via Squirrel hooks |
| Storage paths | Fixed: `/home/pi`, `/media/pi` | User-chosen dataDir + storageDir |
| Health monitoring | `readiness-check.py` (systemd) | Real-time HealthMonitor + tray icon colors |
| mDNS | Python `advertisement.py` | Node.js `bonjour-service` with interface detection |
| Hardware ID | ARM cpuinfo hash | SHA-256 of first non-internal MAC address |
| UI | CLI only | Setup wizard (6 steps) + dashboard |
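The PC installer's hardware-ID scheme can be mirrored in Python for illustration. The real logic is Node.js, and the MAC normalization and zero-MAC filter shown here are assumptions:

```python
import hashlib

def hardware_id(mac: str) -> str:
    """SHA-256 hex digest of a MAC address string.

    Lowercase normalization is an assumption for illustration.
    """
    return hashlib.sha256(mac.lower().encode()).hexdigest()

def first_non_internal(macs: list) -> str:
    """Skip internal/virtual interfaces, which report an all-zero MAC."""
    return next(m for m in macs if m and m != "00:00:00:00:00:00")

hw = hardware_id(first_non_internal(["00:00:00:00:00:00", "AA:BB:CC:DD:EE:01"]))
assert len(hw) == 64  # stable per-machine identifier
```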
src/main/
index.js # Main entry, IPC handlers, lifecycle orchestration
constants.js # Ports, container names, bootstrap peers
config-store.js # Electron-store config persistence
docker-manager.js # Docker Compose lifecycle, waitForDocker(), ghost container cleanup
health-monitor.js # Continuous health checks, auto-recovery, Docker recovery
tray-manager.js # System tray icon + context menu (color = health status)
storage-manager.js # Directory tree init, template copying
mdns-advertiser.js # mDNS advertisement via bonjour-service
update-manager.js # Stale image detection
plugin-manager.js # Plugin extraction from fxsupport container
logger.js # Winston-based logging (file + console)
src/renderer/
wizard/ # Setup wizard UI (terms, Docker check, storage, ports, pull, launch)
dashboard/ # Dashboard UI (containers, logs, health, plugins)
- Setup wizard: 6-step guided setup (terms, Docker check, storage selection, port availability, image pull, launch)
- System tray: Color-coded health status (green/yellow/red/blue/cyan/grey)
- Health monitor: Continuous checks for container health, kubo API, relay connection, bootstrap peers, proxy ports, cluster errors, config validity, disk space, PeerID collision
- Docker Desktop recovery: Detects unresponsive Docker daemon, kills and relaunches Docker Desktop, waits for recovery. Health monitor triggers recovery after 3 consecutive failures.
- mDNS: advertises `_fulatower._tcp` with device properties (bloxPeerIdString, authorizer, hardwareID, ipfsClusterID, s3Port). Polls go-fula `/properties` every 60s to update.
- kubo network hardening: injects LAN + public IP into `AppendAnnounce`, forces `Libp2pForceReachability=private` for relay-v2 (AutoNAT can't work in Docker bridge)
- Template sync: `scripts/sync-shared-templates.js` copies kubo/cluster configs from Armbian sources, transforming bridge DNS names where needed (`127.0.0.1:5001` -> `kubo:5001`)
- Windows install hooks: Squirrel installer creates Start Menu shortcuts and adds Windows Firewall rules (mDNS UDP 5353, services TCP 3500/4001/9000/9094) via UAC-elevated PowerShell
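The bridge transform amounts to rewriting loopback endpoints to Compose service DNS names. A Python sketch (the actual script is Node.js, and any mapping beyond `127.0.0.1:5001` -> `kubo:5001` is an assumption):

```python
# Map host-network endpoints to Docker Compose service names used on
# the PC installer's fula-net bridge.
BRIDGE_REWRITES = {
    "127.0.0.1:5001": "kubo:5001",
}

def to_bridge_dns(config_text: str) -> str:
    """Rewrite loopback endpoints in a config template for bridge mode."""
    for host_addr, service_addr in BRIDGE_REWRITES.items():
        config_text = config_text.replace(host_addr, service_addr)
    return config_text

assert to_bridge_dns('{"API": "http://127.0.0.1:5001"}') == '{"API": "http://kubo:5001"}'
```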
{dataDir}/ # e.g. E:\.fula or %LOCALAPPDATA%\FulaData
config/
docker-compose.pc.yml # Container orchestration
.env.pc # Image tags and paths
.env.cluster # Cluster identity and bootstrap
.env.gofula # go-fula env (HARDWARE_ID injected at runtime)
kubo/config # Kubo config template
kubo-local/config-local # Local kubo config template
ipfs-cluster/ # Cluster init script
internal/
config.yaml # Fula device config
box_props.json # Pairing credentials
ipfs_data/config # Kubo runtime config
ipfs_data_local/config # Local kubo runtime config
fula-gateway/registry.cid # Bucket registry CID
plugins/ # Extracted plugins
storage/ # IPFS block storage (or separate storageDir)
logs/
fula-node.log # Application logs
How code changes in this repo reach devices:
1. Push to GitHub repo
2. GitHub Actions builds Docker images (on release) OR manual build
3. Images pushed to Docker Hub (functionland/fxsupport:release, etc.)
4. Watchtower on device detects updated images (polls hourly)
5. fula.service triggers: fula.sh start
6. First restart() — runs with CURRENT on-device files
7. docker cp fula_fxsupport:/linux/. /usr/bin/fula/ — copies NEW files from updated fxsupport image
8. If fula.sh itself changed -> second restart() — runs with NEW files
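Step 8's "if fula.sh itself changed" decision can be sketched as a content-hash comparison around the `docker cp` (the real check lives in bash; names here are illustrative):

```python
import hashlib

def content_digest(data: bytes) -> str:
    """SHA-256 of fula.sh's bytes (taken before and after docker cp)."""
    return hashlib.sha256(data).hexdigest()

def fula_sh_changed(digest_before: str, new_contents: bytes) -> bool:
    """True when the refreshed fula.sh differs, triggering restart #2."""
    return content_digest(new_contents) != digest_before

# Step 6 runs with the current script; step 7 copies new files in.
before = content_digest(b"#!/bin/bash\n# current fula.sh\n")
assert fula_sh_changed(before, b"#!/bin/bash\n# updated fula.sh\n")
```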
On every `fula.sh start` (before `docker-compose up`):

- `update_kubo_config.py` (or an inline fallback in `fula.sh`) runs
- Reads the template (`/usr/bin/fula/kubo/config`) and the deployed config (`/home/pi/.internal/ipfs_data/config`)
- Merges only managed fields (Bootstrap, Peering, Swarm, Experimental, Routing, etc.) from the template into the deployed config
- Preserved fields (Identity, Datastore paths, API/Gateway addresses) are never touched
- A dynamic `StorageMax` is calculated as 80% of `/uniondrive` total space (minimum 800GB floor)
- Writes the updated deployed config, then runs `docker-compose up`
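The merge-and-size logic can be sketched as follows. The managed/preserved field split comes from the list above, while the decimal-GB arithmetic and function names are assumptions:

```python
# Managed fields are overwritten from the template; everything else
# (Identity, Datastore, Addresses, ...) is preserved as deployed.
MANAGED = {"Bootstrap", "Peering", "Swarm", "Experimental", "Routing"}

def merge_managed(template: dict, deployed: dict) -> dict:
    """Copy only managed top-level fields from the template."""
    merged = dict(deployed)
    for field in MANAGED & template.keys():
        merged[field] = template[field]
    return merged

GB = 1000**3  # decimal GB assumed for illustration

def storage_max(uniondrive_total_bytes: int) -> str:
    """80% of /uniondrive capacity, with an 800GB floor."""
    return f"{max(int(uniondrive_total_bytes * 0.8) // GB, 800)}GB"

assert storage_max(2000 * GB) == "1600GB"
assert storage_max(500 * GB) == "800GB"   # floor applies
```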
- `.env.cluster` is loaded by docker-compose as `env_file` (injects env vars into the container)
- `.env.cluster` is also bind-mounted at `/.env.cluster` inside the container
- `ipfs-cluster-container-init.d.sh` runs as the entrypoint:
  - Waits for the pool name from `config.yaml`
  - Generates the cluster secret from the pool name
  - Runs `jq` to patch `service.json` with connection_manager, pubsub, batching, timeouts, and informer settings
  - Starts `ipfs-cluster-service daemon` with the bootstrap address
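The jq patch is equivalent to a nested-dict update. The values below are taken from the verification commands later in this document (connection_manager, concurrent_pins, batching); the function name is illustrative:

```python
def patch_service_json(service: dict) -> dict:
    """Apply the cluster tuning the init script sets via jq."""
    cluster = service.setdefault("cluster", {})
    cluster["connection_manager"] = {"high_water": 400, "low_water": 100}
    service.setdefault("pin_tracker", {}).setdefault("stateless", {})[
        "concurrent_pins"] = 5
    service.setdefault("consensus", {}).setdefault("crdt", {})["batching"] = {
        "max_batch_size": 100, "max_batch_age": "1m"}
    return service

patched = patch_service_json({})
assert patched["cluster"]["connection_manager"]["high_water"] == 400
```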
fula-ota/
docker/
build_and_push_images.sh # Builds all images and pushes to Docker Hub
env_release.sh # ARM64 release env vars (image names, tags)
env_release_amd64.sh # AMD64 release env vars
env_test.sh # ARM64 test env vars
env_test_amd64.sh # AMD64 test env vars
run.sh # Local dev docker-compose runner
fxsupport/
Dockerfile # alpine + COPY ./linux -> /linux
build.sh # buildx build + push
linux/ # All on-device scripts and configs
docker-compose.yml # Container orchestration (9 services)
.env # Docker image tags
.env.cluster # Cluster env var overrides
fula.sh # Main orchestrator (start/stop/restart/rebuild)
union-drive.sh # mergerfs mount management
update_kubo_config.py # Selective kubo config merger
readiness-check.py # Health monitoring and auto-recovery (1500+ lines)
commands.sh # File-based command handler (reboot, LED, partition)
firewall.sh # iptables firewall rules
kubo/
config # Kubo config template
kubo-container-init.d.sh
kubo-local/
config-local # Local kubo config template
kubo-local-container-init.d.sh
ipfs-cluster/
ipfs-cluster-container-init.d.sh
bluetooth.py # BLE setup and command handling
local_command_server.py # TCP command server
control_led.py # LED control for device status
plugins/ # Plugin system (loyal-agent, streamr-node)
...
fula-pinning/
Dockerfile # Go build stage + alpine runtime
build.sh # buildx build + push
*.go # Daemon source: config, sync loop, HTTP server, kubo/pinning clients
README.md # Full service documentation
fula-gateway/
build.sh # buildx build from fula-local-gateway/
download_image.sh # Pull pre-built image from Docker Hub
fula-local-gateway/ # Standalone Rust binary
Cargo.toml # Uses fula-core + fula-blockstore as git deps
Dockerfile # Rust build stage + debian runtime
src/
main.rs # CLI, config loading, kubo wait loop
server.rs # Axum router, CID watcher spawn
auth.rs # Bearer-token middleware
box_props.rs # JWT sub extraction, BLAKE3 owner ID
state.rs # AppState, IPFS connection, CID watcher
handlers/ # S3 API handlers (bucket, object, multipart, service)
...
go-fula/
Dockerfile # Go build stage + alpine runtime
build.sh # Clones go-fula repo, buildx build + push
go-fula.sh # Container entrypoint
ipfs-cluster/
Dockerfile # Go build stage + alpine runtime
build.sh # Clones ipfs-cluster repo, buildx build + push
pc-installer/ # Electron desktop app for Windows/Linux/macOS
package.json # Electron 40, electron-forge
forge.config.js # Build config (Squirrel/DEB/ZIP)
scripts/
sync-shared-templates.js # Copies Armbian configs with bridge transforms
templates/ # PC-specific Docker compose, env files
src/
main/ # Electron main process (10 manager modules)
renderer/ # Wizard + Dashboard UI
preload.js # IPC bridge
.github/workflows/
docker-image.yml # Release CI: ARM64 images + tar upload
docker-image-test.yml # ARM64 test build (all services)
docker-image-selective-test.yml # Selective service test build
docker-image-amd64-test.yml # AMD64 test build
docker-image-amd64-release.yml # AMD64 release build
pc-installer-release.yml # PC installer build (Windows .exe + Linux .deb)
tests/ # Shell-based test scripts
README.md # Test documentation
test-config-validation.sh # Scan for stale config references
test-docker-setup.sh # Validate Docker Compose syntax and env vars
test-container-dependencies.sh # Analyze service dependencies and startup order
test-fula-sh-functions.sh # Test fula.sh install/update/run operations
test-docker-services.sh # Test container runtime behavior
test-uniondrive-readiness.sh # Test uniondrive and readiness-check
test-go-fula-system-integration.sh # WiFi, hotspot, storage, network tests
test-device-hardening.sh # Full production update flow on real RK3588
install-ubuntu.sh # Ubuntu/Debian automated installer
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

Optionally manage Docker as a non-root user (docs):

sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker

The PC installer requires Docker Desktop on Windows and macOS. The setup wizard checks for Docker Desktop and provides install instructions if missing.

sudo curl -L "https://github.com/docker/compose/releases/download/v2.16.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

sudo systemctl start NetworkManager
sudo systemctl enable NetworkManager

sudo apt-get install gcc python3-dev python-is-python3 python3-pip
sudo apt-get install python3-gi python3-gi-cairo gir1.2-gtk-3.0
sudo apt install net-tools dnsmasq-base rfkill lshw

Raspberry Pi OS handles auto-mounting natively. On Armbian, set up automount:

sudo apt install net-tools dnsmasq-base rfkill git

sudo nano /usr/local/bin/automount.sh

#!/bin/bash
MOUNTPOINT="/media/pi"
DEVICE="/dev/$1"
MOUNTNAME=$(echo $1 | sed 's/[^a-zA-Z0-9]//g')
mkdir -p ${MOUNTPOINT}/${MOUNTNAME}
FSTYPE=$(blkid -o value -s TYPE ${DEVICE})
if [ "${FSTYPE}" = "ntfs" ]; then
mount -t ntfs -o uid=pi,gid=pi,dmask=0000,fmask=0000 ${DEVICE} ${MOUNTPOINT}/${MOUNTNAME}
elif [ "${FSTYPE}" = "vfat" ]; then
mount -t vfat -o uid=pi,gid=pi,dmask=0000,fmask=0000 ${DEVICE} ${MOUNTPOINT}/${MOUNTNAME}
else
mount ${DEVICE} ${MOUNTPOINT}/${MOUNTNAME}
chown pi:pi ${MOUNTPOINT}/${MOUNTNAME}
fi

sudo chmod +x /usr/local/bin/automount.sh
sudo nano /etc/udev/rules.d/99-automount.rules

ACTION=="add", KERNEL=="sd[a-z][0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="automount@%k.service"
ACTION=="add", KERNEL=="nvme[0-9]n[0-9]p[0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="automount@%k.service"
ACTION=="remove", KERNEL=="sd[a-z][0-9]", RUN+="/bin/systemctl stop automount@%k.service"
ACTION=="remove", KERNEL=="nvme[0-9]n[0-9]p[0-9]", RUN+="/bin/systemctl stop automount@%k.service"
sudo nano /etc/systemd/system/automount@.service

[Unit]
Description=Automount disks
BindsTo=dev-%i.device
After=dev-%i.device
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/automount.sh %I
ExecStop=/usr/bin/sh -c '/bin/umount /media/pi/$(echo %I | sed 's/[^a-zA-Z0-9]//g'); /bin/rmdir /media/pi/$(echo %I | sed 's/[^a-zA-Z0-9]//g')'

sudo udevadm control --reload-rules
sudo systemctl daemon-reload

git clone https://github.com/functionland/fula-ota
cd fula-ota/docker/fxsupport/linux
sudo bash ./fula.sh rebuild
sudo bash ./fula.sh start

Or use the automated installer:

curl -fsSL https://raw.githubusercontent.com/functionland/fula-ota/main/install-ubuntu.sh | sudo bash

The install-ubuntu.sh script handles OS detection, dependency installation, Docker setup, repo cloning, automount configuration, and systemd service enablement. It supports Ubuntu 22.04+ (Jammy, Lunar, Mantic, Noble).
Download the installer from GitHub Releases:
- Windows: `.exe` (Squirrel installer)
- Linux: `.deb` package
GUI mode (default): The setup wizard guides through Docker Desktop verification, storage selection, port checks, image pulling, and service launch.
CLI mode (headless): Pass --wallet to skip the GUI entirely. Useful for Linux servers, scripted deployments, and headless machines.
# Basic setup (identity + services, no pool):
fula-node --wallet 0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80 \
--password "mypassword" \
--authorizer 12D3KooWAazUofWiZPiTovPDTqk8pucGw5k9ruSfbwncaktWi7DN
# Full setup with pool joining:
fula-node --wallet 0xac09...f80 --password "mypass" --authorizer 12D3KooWAaz... \
--chain skale --poolId 1
# Custom data directory:
fula-node --wallet 0xac09...f80 --password "mypass" --authorizer 12D3KooWAaz... \
--data-dir /opt/fula-data

| Parameter | Required | Description |
|---|---|---|
| `--wallet <hex>` | Yes | Ethereum private key (64 hex chars, with or without `0x` prefix) |
| `--password <string>` | Yes | Password for identity derivation (same as in the mobile app) |
| `--authorizer <peerID>` | Yes | Peer ID of the authorizing mobile device (starts with `12D3KooW`) |
| `--chain <skale\|base>` | No | Blockchain for pool joining (default: skale) |
| `--poolId <number>` | No | Pool ID to join (requires `--chain`) |
| `--data-dir <path>` | No | Data directory (default: `%LOCALAPPDATA%\FulaData` on Windows, `~/.fula` on Linux) |
| `--help`, `-h` | No | Show help message and exit |
The CLI mode performs the full setup sequence: input validation, identity derivation from wallet+password, config.yaml and box_props.json generation, Docker image pull (with per-image progress output), service startup, and optional pool joining. On success it prints a summary and exits with code 0.
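The input-validation step can be sketched from the rules in the parameter table above (the regex and function names are illustrative):

```python
import re

def valid_wallet(key: str) -> bool:
    """64 hex chars, with or without the 0x prefix."""
    return re.fullmatch(r"(0x)?[0-9a-fA-F]{64}", key) is not None

def valid_authorizer(peer_id: str) -> bool:
    """Authorizing device peer IDs start with 12D3KooW."""
    return peer_id.startswith("12D3KooW")

assert valid_wallet("0x" + "ab" * 32)
assert not valid_wallet("0x1234")  # too short
assert valid_authorizer("12D3KooWAazUofWiZPiTovPDTqk8pucGw5k9ruSfbwncaktWi7DN")
```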
cd docker
source env_release.sh # ARM64
# or: source env_release_amd64.sh # AMD64
bash ./build_and_push_images.sh

This builds and pushes all images:

- functionland/fxsupport:release
- functionland/go-fula:release
- functionland/ipfs-cluster:release
- functionland/fula-pinning:release
- functionland/fula-gateway:release
Use test-tagged images (e.g. test147) to validate changes on a device before promoting to release.
env_test.sh defaults to test147. Override at runtime without editing the file:
cd docker
# Use default tag (test147):
source env_test.sh
# Or override:
TEST_TAG=test148 source env_test.sh

This exports all image tags with your tag and writes a matching .env file.
Option A: GitHub Actions (recommended)
Trigger one of these workflows manually from the Actions tab:
- `docker-image-test.yml`: builds all ARM64 images with the test tag
- `docker-image-amd64-test.yml`: builds all AMD64 images
- `docker-image-selective-test.yml`: selectively builds individual services (checkbox per service, custom tag override)
Option B: Local build + push to Docker Hub
cd docker
source env_test.sh # or: TEST_TAG=test148 source env_test.sh
bash ./build_and_push_images.sh

Option C: On-device build (no Docker Hub push)
# Build fxsupport (seconds — just file copies)
cd /tmp/fula-ota/docker/fxsupport
sudo docker build --load -t functionland/fxsupport:test147 .
# Build go-fula (requires Go compilation)
cd /tmp/fula-ota/docker/go-fula
git clone -b main https://github.com/functionland/go-fula
sudo docker build --load -t functionland/go-fula:test147 .
# Build ipfs-cluster (requires Go compilation)
cd /tmp/fula-ota/docker/ipfs-cluster
git clone -b master https://github.com/ipfs-cluster/ipfs-cluster
sudo docker build --load -t functionland/ipfs-cluster:test147 .

If you built with Options A or B (images pushed to Docker Hub), just update .env and restart:
# 1. Update .env to use test tags
sudo tee /usr/bin/fula/.env << 'EOF'
GO_FULA=functionland/go-fula:test147
FX_SUPPROT=functionland/fxsupport:test147
IPFS_CLUSTER=functionland/ipfs-cluster:test147
FULA_PINNING=functionland/fula-pinning:test147
FULA_GATEWAY=functionland/fula-gateway:test147
WPA_SUPLICANT_PATH=/etc
CURRENT_USER=pi
EOF
# 2. Restart fula — it will pull test147 images from Docker Hub and start them
sudo systemctl restart fula

That's it. fula.sh runs `docker-compose pull --env-file .env`, which pulls whatever tags are in .env. Watchtower also respects the running container's tag: it checks for updates to :test147, not :release.
The docker cp step (which copies files from fula_fxsupport to /usr/bin/fula/) is also safe: the fxsupport:test147 image was built with env_test.sh, so the .env baked inside it already has test tags.
Note: `FULA_PINNING` and `FULA_GATEWAY` are optional in `.env`. The `docker-compose.yml` uses default fallbacks (e.g. `${FULA_GATEWAY:-functionland/fula-gateway:release}`), so older `.env` files without these variables will still work.
If you built on-device (Option C), you must also block Docker Hub so fula.sh doesn't pull release images over your local builds:
# Block Docker Hub BEFORE restarting
sudo bash -c 'echo "127.0.0.1 index.docker.io registry-1.docker.io" >> /etc/hosts'
sudo systemctl restart fula

# Check running images have correct tags
sudo docker ps --format '{{.Names}}\t{{.Image}}'
# Expected: fula_go -> functionland/go-fula:test147, etc.
# Check logs
sudo docker logs fula_go --tail 50
sudo docker logs ipfs_host --tail 50
sudo docker logs ipfs_cluster --tail 50

# 1. If Docker Hub was blocked, unblock it
sudo sed -i '/index.docker.io/d' /etc/hosts
# 2. Restore release .env
sudo tee /usr/bin/fula/.env << 'EOF'
GO_FULA=functionland/go-fula:release
FX_SUPPROT=functionland/fxsupport:release
IPFS_CLUSTER=functionland/ipfs-cluster:release
FULA_PINNING=functionland/fula-pinning:release
FULA_GATEWAY=functionland/fula-gateway:release
WPA_SUPLICANT_PATH=/etc
CURRENT_USER=pi
EOF
# 3. Restart fula (will pull release images)
sudo systemctl restart fula

- On-device builds need Docker Hub blocked: `fula.sh` runs `docker-compose pull` on every restart if internet is available. This pulls from Docker Hub using the tags in `.env`. For remote-built test images (Options A/B) this is fine: it pulls your `:test147` images. But for on-device builds (Option C), the pull would overwrite your local images with Docker Hub versions. Block Docker Hub in `/etc/hosts` for on-device builds.
- Watchtower tag matching: Watchtower checks for updates to the exact tag the container is running. Containers running `:test147` won't be replaced with `:release`.
- `docker cp` and `.env`: on every restart, `fula.sh` copies files from the `fula_fxsupport` container to `/usr/bin/fula/`, including `.env`. This is safe when using test images built through `env_test.sh` (the `.env` inside the image matches your test tags). Use `stop_docker_copy.txt` only if you manually edited `.env` on-device without rebuilding fxsupport.
- `stop_docker_copy.txt` expires after 24 hours: if needed, this file in `/home/pi/` blocks the `docker cp` step. It expires after 24h; touch it again to extend.
- Docker Compose bridge != docker0: Docker Compose creates its own bridge interfaces (`br-<hash>`), not `docker0`. Firewall rules matching `-i docker0` don't cover Compose bridges; they must also match `-i br-+` (iptables wildcard). This can cause kubo->go-fula proxy traffic to be silently dropped.
- `((var++))` with `set -e`: in bash, the post-increment `((var++))` evaluates to 0 (falsy) when the variable is 0, returning exit code 1, which kills the script under `set -e`. Use the pre-increment `((++var))` instead.
If you only changed files under docker/fxsupport/linux/ (scripts, configs, env files):
# On the device:
cd /tmp && git clone --depth 1 https://github.com/functionland/fula-ota.git
cd /tmp/fula-ota/docker/fxsupport
sudo docker build --load -t functionland/fxsupport:release .
sudo docker stop fula_updater
sudo systemctl restart fula

# Build fxsupport (seconds)
cd /tmp/fula-ota/docker/fxsupport
sudo docker build --load -t functionland/fxsupport:release .
# Build ipfs-cluster (requires Go compilation)
cd /tmp/fula-ota/docker/ipfs-cluster
git clone -b master https://github.com/ipfs-cluster/ipfs-cluster
sudo docker build --load -t functionland/ipfs-cluster:release .
# Build go-fula (requires Go compilation)
cd /tmp/fula-ota/docker/go-fula
git clone -b main https://github.com/functionland/go-fula
sudo docker build --load -t functionland/go-fula:release .
# Stop watchtower, restart
sudo docker stop fula_updater
sudo systemctl restart fula

For rapid iteration, copy files directly and prevent fula.sh from overwriting them via docker cp:
# 1. Copy changed files
sudo cp /tmp/fula-ota/docker/fxsupport/linux/.env.cluster /usr/bin/fula/.env.cluster
sudo cp /tmp/fula-ota/docker/fxsupport/linux/ipfs-cluster/ipfs-cluster-container-init.d.sh /usr/bin/fula/ipfs-cluster/ipfs-cluster-container-init.d.sh
sudo cp /tmp/fula-ota/docker/fxsupport/linux/kubo/config /usr/bin/fula/kubo/config
sudo cp /tmp/fula-ota/docker/fxsupport/linux/fula.sh /usr/bin/fula/fula.sh
sudo cp /tmp/fula-ota/docker/fxsupport/linux/update_kubo_config.py /usr/bin/fula/update_kubo_config.py
# 2. Block docker cp from overwriting your files (valid for 24 hours)
touch /home/pi/stop_docker_copy.txt
# 3. Restart using your files
sudo /usr/bin/fula/fula.sh restart
# 4. When done testing, remove the block so normal OTA updates resume
rm /home/pi/stop_docker_copy.txt

The tests/ directory contains shell-based test scripts organized into three categories:
- `test-config-validation.sh`: scans scripts and configs for stale references
- `test-docker-setup.sh`: validates Docker Compose syntax and environment variables
- `test-container-dependencies.sh`: analyzes service dependencies and startup order

- `test-fula-sh-functions.sh`: tests fula.sh install/update/run operations
- `test-docker-services.sh`: tests container runtime behavior
- `test-uniondrive-readiness.sh`: tests the uniondrive service and readiness-check.py

- `test-go-fula-system-integration.sh`: WiFi, hotspot, storage, and network tests
- `test-device-hardening.sh`: full production update flow on real RK3588 hardware
Run tests from the project root:
./tests/test-config-validation.sh
./tests/test-docker-setup.sh
# Device tests (require hardware):
sudo bash ./tests/test-device-hardening.sh --build --deploy --verify --finish

See tests/README.md for full documentation.
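Where a quick end-to-end pass over the checks is wanted, a small runner helps. This is a sketch, not a script shipped in the repo; it only assumes that each test script exits non-zero on failure:

```shell
# Sketch: run every test-*.sh under a directory, stopping at the first
# failure. Not part of the repo -- the real scripts are run individually.
run_tests() {
  local dir="${1:-./tests}" t
  for t in "$dir"/test-*.sh; do
    # If the glob matched nothing, the literal pattern is left in $t.
    [ -e "$t" ] || { echo "no tests found in $dir"; return 0; }
    echo "== $t =="
    bash "$t" || { echo "FAILED: $t"; return 1; }
  done
  echo "all tests passed"
}

run_tests ./tests
```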
| Workflow | Trigger | Purpose |
|---|---|---|
| `docker-image.yml` | GitHub release | ARM64 release build, uploads watchtower tar |
| `docker-image-test.yml` | Manual | ARM64 test build (all services) |
| `docker-image-selective-test.yml` | Manual | Selective service build (checkbox per service) |
| `docker-image-amd64-test.yml` | Manual | AMD64 test build |
| `docker-image-amd64-release.yml` | GitHub release | AMD64 release build |
| `pc-installer-release.yml` | GitHub release | Windows .exe + Linux .deb installer build |
# Kubo StorageMax (should be ~80% of drive size, min 800GB)
cat /home/pi/.internal/ipfs_data/config | jq '.Datastore.StorageMax'
# Kubo AcceleratedDHTClient
cat /home/pi/.internal/ipfs_data/config | jq '.Routing.AcceleratedDHTClient'
# Expected: true
# Cluster connection_manager
cat /uniondrive/ipfs-cluster/service.json | jq '.cluster.connection_manager'
# Expected: high_water: 400, low_water: 100
# Cluster concurrent_pins
cat /uniondrive/ipfs-cluster/service.json | jq '.pin_tracker.stateless.concurrent_pins'
# Expected: 5
# Cluster batching
cat /uniondrive/ipfs-cluster/service.json | jq '.consensus.crdt.batching'
# Expected: max_batch_size: 100, max_batch_age: "1m"
# Fula-pinning status (requires pairing secret)
curl -s -H "Authorization: Bearer YOUR_PAIRING_SECRET" http://127.0.0.1:3501/api/v1/auto-pin/status | jq .
# Expected: {"paired":true,"total_pinned":N,...}
# Fula-pinning logs
docker logs fula_pinning --tail 20
# Fula-gateway health (no auth)
curl -s http://127.0.0.1:9000/healthz
# Expected: 200 OK
# Fula-gateway bucket listing (requires pairing secret)
curl -s -H "Authorization: Bearer YOUR_PAIRING_SECRET" http://127.0.0.1:9000/ | head -20
# Expected: XML bucket listing
# Fula-gateway logs
docker logs fula_gateway --tail 20
# Registry CID (written by fula-pinning, read by fula-gateway)
cat /home/pi/.internal/fula-gateway/registry.cid
# IPFS repo stats
docker exec ipfs_host ipfs repo stat
# IPFS DHT status (should show full routing table with AcceleratedDHTClient)
docker exec ipfs_host ipfs stats dht
# Cluster peers
docker exec ipfs_cluster ipfs-cluster-ctl peers ls | wc -l

| Command | Description |
|---|---|
| `start` | Start all containers (runs config merge, docker-compose up). |
| `restart` | Same as `start`. |
| `stop` | Stop all containers (docker-compose down). |
| `rebuild` | Full rebuild: install dependencies, copy files, docker-compose build. |
| `update` | Pull latest docker images. |
| `install` | Run the initial installer. |
| `help` | List all commands. |
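The command set above can be pictured as a case dispatch. A hedged sketch follows — the echoed strings are simplified stand-ins for what fula.sh actually executes, and `fula_dispatch` is a made-up name for illustration:

```shell
# Illustrative dispatch only -- fula.sh's real handlers do much more
# (config merge, dependency checks, dead-container purges).
fula_dispatch() {
  case "${1:-help}" in
    start|restart) echo "merge config; docker-compose up -d" ;;  # restart == start
    stop)          echo "docker-compose down" ;;
    rebuild)       echo "install deps; copy files; docker-compose build" ;;
    update)        echo "docker-compose pull" ;;
    install)       echo "run initial installer" ;;
    help|*)        echo "usage: fula.sh {start|restart|stop|rebuild|update|install|help}" ;;
  esac
}

fula_dispatch stop
```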
Env vars injected into the ipfs-cluster container. Key settings:
| Variable | Value | Purpose |
|---|---|---|
| `CLUSTER_CONNMGR_HIGHWATER` | 400 | Max peer connections (prevents 60-node ceiling) |
| `CLUSTER_CONNMGR_LOWWATER` | 100 | Connection pruning target |
| `CLUSTER_MONITORPINGINTERVAL` | 60s | How often peers ping the monitor |
| `CLUSTER_STATELESS_CONCURRENTPINS` | 5 | Parallel pin operations (lower for ARM I/O) |
| `CLUSTER_IPFSHTTP_PINTIMEOUT` | 15m0s | Timeout for individual pin operations |
| `CLUSTER_PINRECOVERINTERVAL` | 8m0s | How often to retry failed pins |
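In .env.cluster these map to lines like the following (a hypothetical excerpt; the real file carries more variables):

```shell
# Excerpt matching the table above; values are the documented ones.
CLUSTER_CONNMGR_HIGHWATER=400
CLUSTER_CONNMGR_LOWWATER=100
CLUSTER_MONITORPINGINTERVAL=60s
CLUSTER_STATELESS_CONCURRENTPINS=5
CLUSTER_IPFSHTTP_PINTIMEOUT=15m0s
CLUSTER_PINRECOVERINTERVAL=8m0s
```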
Template for IPFS node configuration. Managed fields are merged into the deployed config on every restart. Key settings:
- `AcceleratedDHTClient: true` — Full Amino DHT routing table with parallel lookups
- `StorageMax: "800GB"` — Static fallback; dynamically set to 80% of drive on startup
- `ConnMgr.HighWater: 200` — IPFS swarm connection limit
- `Libp2pStreamMounting: true` — Required for go-fula p2p protocol forwarding
Selectively merges managed fields from template into deployed config while preserving device-specific settings (Identity, Datastore). Runs on every fula.sh start.
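The merge idea can be sketched with jq on toy configs (the real script is Python and manages a larger field set; the file names below are scratch files, not the deployed paths):

```shell
# Toy demonstration of the selective merge: managed sections (Routing,
# Experimental) come from the template; device-specific Identity survives.
tmp=$(mktemp -d)
cat > "$tmp/deployed.json" <<'EOF'
{"Identity":{"PeerID":"12D3KooWexample"},"Routing":{"AcceleratedDHTClient":false}}
EOF
cat > "$tmp/template.json" <<'EOF'
{"Routing":{"AcceleratedDHTClient":true},"Experimental":{"Libp2pStreamMounting":true}}
EOF
jq -s '.[0] * {Routing: .[1].Routing, Experimental: .[1].Experimental}' \
  "$tmp/deployed.json" "$tmp/template.json" > "$tmp/merged.json"
cat "$tmp/merged.json"
```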
- Watchtower polls Docker Hub every 3600 seconds (1 hour). After pushing new images, devices update within the next polling cycle.
- `stop_docker_copy.txt` — when this file exists in `/home/pi/` and was modified within the last 24 hours, `fula.sh` skips the `docker cp` step. Useful for testing local file changes without them being overwritten.
- Kubo uses the upstream `ipfs/kubo:release` image (not custom-built). Only the config template and init script are customized.
- go-fula, ipfs-cluster, and fula-pinning are custom-built from source in their respective Dockerfiles using Go 1.25.
- fula-gateway is a standalone Rust binary built from `docker/fula-gateway/fula-local-gateway/`. It uses `fula-core` and `fula-blockstore` crates from fula-api as shared libraries but contains no cloud code.
- fula-pinning and fula-gateway run with `no-new-privileges:true` (no elevated privileges needed).
- All other containers except kubo run with `privileged: true` and `CAP_ADD: ALL`.
- Dead containers: Docker Compose can leave `Dead` containers with project labels. These prevent `--no-recreate` from creating new containers. `fula.sh` and the PC installer purge ghost containers via `docker ps -a --filter "label=com.docker.compose.project=fula" -q | xargs -r docker rm -f` before starting.