Self-hosted AI operations platform · v0.3.31

Your personal
AI command center.

Chat with local LLMs, manage project memory, track live transit,
and automate your devices — all from one self-hosted platform.

FastAPI + Python · Ollama Local LLM · Offline PWA · Docker Compose

Platform capabilities

AI Chat · Memory Core · Transit Live · Agent Ops · Device Control · Telegram Bot · Ollama LLM · Offline PWA · Screenshot Analysis · File Upload · Calendar · Weather · Task Manager · Reports

What's inside

Everything you need in one platform.

Six core modules, all running on your hardware. No subscriptions, no data leaks.

AI Chat

ChatGPT-style interface backed by your local Ollama models or Gemini. Full conversation history, smart prompts, and file attachments.

Open Chat →
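As a sketch of how a chat backend can talk to a local model, the snippet below builds a request body for Ollama's `POST /api/chat` endpoint, replaying the stored conversation history on each turn. The model name and messages are illustrative; only the endpoint shape follows Ollama's documented API.

```python
import json

def build_ollama_chat_request(model, history, user_message):
    """Build the JSON body for Ollama's POST /api/chat endpoint.

    `history` is a list of {"role": ..., "content": ...} dicts, so the
    full conversation is replayed to the model on every turn.
    """
    messages = list(history) + [{"role": "user", "content": user_message}]
    return {"model": model, "messages": messages, "stream": False}

payload = build_ollama_chat_request(
    "llama3",  # any model already pulled into Ollama
    [{"role": "system", "content": "You are a helpful assistant."}],
    "Summarise my open tasks.",
)
body = json.dumps(payload)

# To actually send it (requires a running Ollama instance):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:11434/api/chat", data=body.encode(),
#       headers={"Content-Type": "application/json"})
#   reply = json.loads(urllib.request.urlopen(req).read())["message"]["content"]
```

Keeping the payload builder separate from the network call makes the chat logic testable without a model running.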

Memory Core

Portable project memory that persists across devices. Save structured context, launch briefed chats, and reuse template packs instantly.

Open Memory Core →
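One way portable project memory can work is as a plain-JSON bundle that any device can rehydrate and replay into a fresh chat. The sketch below is an assumption about the data shape, not the platform's actual schema.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class MemoryPack:
    """A portable project-memory bundle: plain JSON, so it can be
    synced to any device and replayed into a fresh chat."""
    project: str
    facts: list = field(default_factory=list)
    templates: dict = field(default_factory=dict)

    def brief(self) -> str:
        """Render the stored context as a system-prompt prefix."""
        lines = [f"Project: {self.project}"] + [f"- {f}" for f in self.facts]
        return "\n".join(lines)

pack = MemoryPack(
    project="transit-dashboard",
    facts=["Backend is FastAPI", "DB is SQLite"],
    templates={"bug-report": "Steps to reproduce:\n1. ..."},
)

blob = json.dumps(asdict(pack))              # persist / sync this string
restored = MemoryPack(**json.loads(blob))    # rehydrate on another device
```

Because the bundle is a flat JSON string, it survives any transport: file sync, QR code, or a copy-paste between devices.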

Transit Live

Real-time Malaysia GTFS transit data. Live vehicle positions on an interactive map, route planning, and nearby stop detection.

Open Transit Live →
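Nearby-stop detection typically reduces to a great-circle distance check against GTFS `stops.txt` rows. A minimal sketch, with hypothetical stop IDs and illustrative coordinates:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_stops(stops, lat, lon, radius_m=500):
    """Return (stop_id, distance) pairs within radius, nearest first.
    `stops` mirrors GTFS stops.txt rows: (stop_id, stop_lat, stop_lon)."""
    hits = [(sid, haversine_m(lat, lon, slat, slon))
            for sid, slat, slon in stops]
    return sorted((h for h in hits if h[1] <= radius_m), key=lambda h: h[1])

# Hypothetical stops (coordinates are illustrative only)
stops = [
    ("KJ15", 3.1343, 101.6864),
    ("KA01", 3.1390, 101.6869),
    ("MR01", 3.1579, 101.7116),   # roughly 3 km away
]
near = nearby_stops(stops, 3.1340, 101.6860, radius_m=1000)
```

A linear scan like this is fine for a few thousand stops; a spatial index only becomes worthwhile at much larger scale.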

Agent Ops

Deploy lightweight daemon agents on your desktop machines. Get status reports, trigger automation, and monitor running processes remotely.

Open Agent Ops →
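The status reports an agent sends back usually boil down to a small periodic payload. This is a sketch under assumed field names and an illustrative endpoint, not the agent's actual wire format:

```python
import os
import json
import time
import platform

def heartbeat(agent_id: str) -> dict:
    """Status report a desktop agent could post back to the server
    on a fixed interval (field names are illustrative)."""
    return {
        "agent_id": agent_id,
        "host": platform.node(),
        "os": platform.system(),
        "pid": os.getpid(),
        "ts": int(time.time()),
    }

beat = heartbeat("desk-01")
wire = json.dumps(beat)   # e.g. POST to /api/agents/heartbeat every 30 s
```

The server marks an agent offline when heartbeats stop arriving for a couple of intervals, which is what drives the remote status view.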

Device Control

Browser-based panel for remote device management. Send commands, capture screenshots, and manage files on connected machines.

Open Control Panel →

Telegram Bot

Full Telegram bot integration. Receive messages, trigger commands, get AI responses and system reports directly in your Telegram app.

Open Telegram Bot →
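Pushing a report into Telegram comes down to one call against the Bot API's `sendMessage` method. The sketch below only builds the request; the token shown is a placeholder, never a real credential.

```python
from urllib.parse import urlencode

API_BASE = "https://api.telegram.org"

def send_message_request(token: str, chat_id: int, text: str):
    """Build the URL and form body for Telegram's sendMessage method."""
    url = f"{API_BASE}/bot{token}/sendMessage"
    body = urlencode({"chat_id": chat_id, "text": text})
    return url, body

# Placeholder token for illustration only.
url, body = send_message_request("123456:PLACEHOLDER", 42, "Report ready")

# Sending (requires network and a real bot token from @BotFather):
#   import urllib.request
#   urllib.request.urlopen(urllib.request.Request(
#       url, data=body.encode(),
#       headers={"Content-Type": "application/x-www-form-urlencoded"}))
```

Incoming messages work the same way in reverse: the server polls `getUpdates` or registers a webhook, then routes each message to a command handler.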

Quick access

Jump straight in.

Three core apps — each fully responsive and installable on your phone.

Under the hood

Built to run anywhere.

A fully self-hosted stack with zero mandatory cloud dependency. Your data stays on your hardware — LLM inference, database, and background workers included.

FastAPI backend

High-performance Python API server with SQLAlchemy + SQLite persistence and a Celery task queue.

Ollama + Gemini

Local LLM inference via Ollama (any model) with Gemini as a cloud fallback option.
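Local-first with cloud fallback can be expressed as a small routing policy. In this sketch the backends are injected as callables so the policy stays testable offline; the function names are assumptions, not the platform's actual API:

```python
def generate(prompt, local, cloud):
    """Local-first inference: try the Ollama-backed callable first and
    fall back to the cloud callable only if it raises."""
    try:
        return local(prompt), "local"
    except Exception:
        return cloud(prompt), "cloud"

def broken_local(prompt):
    """Simulates Ollama being down or the model not loaded."""
    raise ConnectionError("ollama not reachable")

text, source = generate("hi", broken_local, lambda p: f"cloud:{p}")
ok_text, ok_source = generate("hi", lambda p: f"local:{p}", lambda p: "never")
```

Returning the source alongside the text lets the UI flag which responses left the machine, which matters when privacy is the selling point.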

Redis + Celery

Distributed background job queue for reports, screenshots, and long-running tasks.

Offline PWA

Service worker with app shell caching. Installs on iOS Safari, Android Chrome, and desktop browsers.

UI Layer Progressive Web App

Vanilla HTML/CSS/JS · Service Worker · Offline Shell

API Layer FastAPI + Python

REST endpoints · SQLAlchemy · SQLite · Celery Workers

AI Layer Ollama / Gemini

Local inference · Vision models · Multi-model switching

Infra Layer Docker Compose

Redis · n8n Workflows · Self-hosted on VPS or local
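The four layers above map naturally onto Compose services. This is an illustrative layout under assumed service and image names, not the project's actual compose file:

```yaml
# Illustrative only — service names and commands are assumptions.
services:
  api:
    build: .
    ports: ["8000:8000"]
    depends_on: [redis, ollama]
  worker:
    build: .
    command: celery -A app.worker worker --loglevel=info
    depends_on: [redis]
  redis:
    image: redis:7-alpine
  ollama:
    image: ollama/ollama
    volumes: ["ollama:/root/.ollama"]
  n8n:
    image: n8nio/n8n
    ports: ["5678:5678"]
volumes:
  ollama: {}
```

Keeping the API and worker as two services from the same image is what lets long-running jobs run off the request path.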

Agent downloads

Install your agents directly from this server.

When the latest build files are present on the VPS, these buttons download the actual installer packages immediately.


Windows Desktop Agent

One-click Windows installer for screenshots, shell tasks, VS Code actions, and continuous heartbeats back to FitClaw.

Looking for the newest `.exe` in the server build folders.
Download Windows `.exe`

Android Agent APK

Install the Android agent companion directly from your VPS so phones and tablets can register, send heartbeats, and stay connected.

Looking for the newest `.apk` in the server build folders.
Download Android `.apk`

Install as app

Add FitClaw to your home screen.

Works as a native-feeling app on iOS Safari, Android Chrome, and desktop browsers. One tap — it appears in your launcher like any other app.

1

Open /app in your browser

2

Tap Share → Add to Home Screen on iOS, or the install icon on Chrome

3

FitClaw appears in your launcher and works offline

Open App Now