Enter the Basilica
Basilica is on-demand compute for AI. GPUs and CPUs, ready when you need them.
Built on the Bittensor network, Basilica aggregates datacenter supply at market rates, while decentralized miners compete whenever they can beat that baseline. Best price wins.
No gatekeepers. No waitlists. No genuflecting before cloud providers for quota increases.
Deploy containerized applications, serve LLMs, or claim raw compute nodes via SSH. Pay by the minute. Leave whenever you want.
```python
from basilica import BasilicaClient

client = BasilicaClient()

deployment = client.deploy(
    name="my-model",
    source="app.py",
    gpu_count=1,
    gpu_models=["H100"],
)

print(f"Blessed with a URL: {deployment.url}")
```

That's it. Your code is running on an H100.
Two Ways to Compute
Rentals
Grab a GPU node via SSH. Full root access, your container, your rules. Perfect for training runs, interactive development, or anything where you want direct control.
Managed Deployments
Ship your code, get a public URL. We handle containers, HTTPS, and routing. Your app runs, you pay per minute.
Both access models draw from the same underlying infrastructure.
Where the Compute Comes From
Every rental and deployment runs on one of two infrastructure sources:
The Bourse
A competitive marketplace where Bittensor miners bid for your workloads. Validators verify hardware. Prices flex with supply and demand. Often the best deals on GPUs.
The Citadel
Curated datacenter partners: DataCrunch, Lambda, Hyperstack, HydraHost. Fixed pricing, SLA guarantees, enterprise reliability. The fortress for production workloads.
Why the names? Basilica uses Knights Templar naming. The Bourse was the medieval merchant exchange. The Citadel, the fortified stronghold.
Why Developers Choose Basilica
- No YAML confessionals. Your infrastructure is defined in Python, not sprawling configuration files.
- Pay for what you use. Per-minute billing. Spin up an H100 for a few minutes of inference, pay for a few minutes.
- No quota purgatory. Access A100s, H100s, and B200s without begging for limit increases or waiting for approval.
- Decentralized by design. Built on Bittensor's incentive network. Miners compete to provide the best compute, validators keep them honest.
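Per-minute billing is easy to reason about. As a sketch, with a hypothetical hourly GPU rate (real prices are set by the marketplace or datacenter partner), a short burst costs only what it uses:

```python
# Sketch of per-minute billing math. The $2.40/hr H100 rate is a
# hypothetical placeholder; actual prices come from the marketplace.
HOURLY_RATE_USD = 2.40

def cost_for_minutes(minutes: int, hourly_rate: float = HOURLY_RATE_USD) -> float:
    """Cost of a workload billed per minute at a given hourly rate."""
    return round(minutes * hourly_rate / 60, 4)

# Five minutes of inference on one GPU:
print(cost_for_minutes(5))   # 0.2
# A full hour:
print(cost_for_minutes(60))  # 2.4
```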
The Essentials
| Concept | What It Means |
|---|---|
| Deployment | Your application running on Basilica. Gets a public URL automatically. |
| Rental | Direct compute access via SSH. For when you need raw control. |
| Credits | Your account balance. Funded by TAO deposits. |
What Can You Build?
LLM Inference
Run Llama, Mistral, or your fine-tuned models. vLLM or SGLang, your choice. OpenAI-compatible APIs out of the box.
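Because the serving endpoints speak the OpenAI chat-completions format, any OpenAI-compatible client can talk to your deployment. A minimal sketch of the request you would send (the URL and model id are placeholders, substitute your own deployment's values):

```python
import json

# Hypothetical deployment URL; substitute the one returned by client.deploy().
BASE_URL = "https://my-model.example.basilica.ai"

# Standard OpenAI-style chat-completions request body, the format
# vLLM and SGLang serve out of the box.
payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",  # placeholder model id
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64,
}

endpoint = f"{BASE_URL}/v1/chat/completions"
body = json.dumps(payload)

# Send with any HTTP client, e.g.:
#   requests.post(endpoint, data=body,
#                 headers={"Content-Type": "application/json"})
print(endpoint)
```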
ML Model Serving
PyTorch, TensorFlow, Hugging Face. GPU-accelerated, publicly accessible.
Web Applications
FastAPI, Flask, Gradio, Streamlit. If it runs in a container, it gets a public HTTPS URL. Simple as that.
Batch Processing
Train models, process datasets, generate embeddings. Jobs clean up after themselves.
Development Environments
Spin up in seconds. Tear down when you're done. No commitments, no lingering bills.
Begin Your Journey
- Installation: Install the CLI and SDK
- Authentication: Get your credentials
- Getting Started: Deploy your first application
- Core Concepts: Understand the fundamentals
Join the Community
The faithful gather here:
- GitHub: one-covenant/basilica
- Discord: Join the congregation
- X: @basilic_ai