Offline · Air-Gapped

THE
SHARD

Offline AI. For organisations that cannot touch the cloud. Sovereign intelligence that stays inside the building.

The Cloud Problem

Every commercial AI service requires an internet connection. Your queries leave your network. They pass through a third-party server. Someone logs them. Someone processes them. Someone retains them. For most organisations, this is an inconvenience. For some, it is an unacceptable security risk.

Defence and intelligence environments. Legal teams handling privileged client material. Municipal bodies processing sensitive citizen data. Healthcare organisations under strict data residency obligations. Academic institutions with classified research. These organisations need AI. They cannot use cloud AI.

What The Shard Is

The Shard is a local, offline inference environment. A hardware unit — preloaded with a Sovereign SLM fine-tuned on your verified corpus — that runs entirely without internet access. No cloud. No API calls. No data leaves the building.

You query it from your local network. It answers from your data. It cites the source document every time. When the network cable is unplugged, it still works.
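As a sketch of what a response from a unit on the local network might look like: the endpoint, field names, and sample values below are illustrative assumptions, not the actual Shard API, which is deployment-specific. The point is the shape of the contract: every answer arrives paired with its source document.

```python
import json

# Hypothetical response payload from a Shard unit on the local network.
# The field names ("answer", "source", "page") are illustrative only.
sample = json.dumps({
    "answer": "The retention period is seven years.",
    "source": "records-policy-2024.pdf",
    "page": 12,
})

def parse_response(raw: str) -> tuple[str, str]:
    """Extract the answer and the source document it cites."""
    data = json.loads(raw)
    return data["answer"], data["source"]

answer, source = parse_response(sample)
print(f"{answer}  [source: {source}]")
```

Because the unit is reached over the local network only, the same client code works whether or not the building has an internet connection.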

Hardware Options

FIELD UNIT

Compact · Desktop-class

A compact, low-power inference unit for chambers, consultancies, and small professional teams. Runs 7B–14B parameter models. Sits on a desk. Silent. No specialist infrastructure required. Queries answered in seconds. Source cited with every response.

  • Desktop form factor
  • 7B–14B parameter models
  • No GPU required for inference
  • Standard power supply
  • Local network connection only

SOVEREIGN RACK

Ruggedised · Enterprise-class

A ruggedised rack-mounted inference server for government, defence, and enterprise environments. Runs 32B+ parameter models at production scale. Handles concurrent users across a team or department. Designed for environments with strict physical security requirements.

  • Rack-mounted, ruggedised chassis
  • 32B+ parameter model support
  • GPU-accelerated inference
  • Multi-user concurrent access
  • Air-gap and Faraday-cage compatible

The Ingest Subscription

The model is trained on your verified data at deployment. As your corpus changes — new documents, updated policies, revised records — the Ingest Subscription keeps it current.

Encrypted update packs are prepared, verified, and delivered via your chosen secure channel: secure physical media, encrypted transfer over a controlled network, or a defined air-gap update protocol. The intelligence stays current. The data never leaves. No internet required.
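The verification step can be illustrated in miniature. This is a minimal sketch, assuming an integrity digest delivered alongside the pack; a real air-gap protocol would also use cryptographic signatures and an encrypted payload, and the pack bytes and function name here are illustrative stand-ins, not the actual update tooling.

```python
import hashlib

# Hypothetical air-gap update check: before applying an update pack,
# compare its SHA-256 digest against the one shipped in the manifest.
# The pack contents below are an illustrative stand-in.
pack = b"model-weights-delta-v2"
manifest_digest = hashlib.sha256(pack).hexdigest()  # delivered with the pack

def verify_pack(data: bytes, expected_sha256: str) -> bool:
    """Reject any pack whose digest does not match the manifest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

assert verify_pack(pack, manifest_digest)
assert not verify_pack(pack + b"tampered", manifest_digest)
print("update pack verified")
```

The check runs entirely on the unit: nothing is fetched, and a pack that was altered in transit is refused before it touches the model.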