Use Cases/Fine-Tuning

LoRA. Full fine-tunes.
Your weights, fully isolated.

Proprietary training data demands compute you can trust. Isolated environments, private storage, no cross-tenant access — ever.

Start a fine-tune Read the docs
Why Fine-Tuning Needs Isolated Compute

Your training data is the asset.
Don't let it touch shared infra.

When you fine-tune on proprietary data — internal documents, customer interactions, domain-specific corpora — that data cannot be on infrastructure where another tenant could infer its existence, let alone access it.

Model ownership follows the same logic. Your fine-tuned adapter weights represent IP. They live in isolated storage, accessible only within your namespace.

Isolated storage

Your dataset buckets are namespace-scoped. No shared mount points with other tenants' jobs.

No weight exfiltration risk

Adapter files and checkpoints are stored in your private object storage. Not cached on shared NFS.

Network isolation

Training jobs run in isolated VPCs. Data does not traverse shared network segments.

Auditability

Every data access is logged. You get a full audit trail for compliance — who ran what, when, against which dataset.

LoRA vs Full Fine-Tune: GPU Memory Considerations

LoRA / QLoRA

Parameter-efficient
Typical GPU: 1× A100 80GB or L40S 48GB
Model range: Gemma 4 27B, GLM 5.1 32B, MiniMax 2.7 9B
Peak VRAM: 16–48GB
Only adapter weights stored
Base model frozen — load in INT4/8
Fast iteration. Good for instruction-tuning and domain adaptation.
QLoRA fits Gemma 4 27B or GLM 5.1 32B on a single A100
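The single-A100 claim checks out with back-of-envelope arithmetic. The sketch below is illustrative, not a measured profile — the 4-bit byte count, adapter fraction, and flat overhead budget are assumptions:

```python
# Rough QLoRA VRAM estimate for a large base model (illustrative).
# Assumptions: 4-bit base weights (~0.5 bytes/param), bf16 LoRA
# adapters on ~1% of parameters, Adam states for adapters only,
# and a flat ~10 GB budget for activations, KV cache, and CUDA
# overhead.

def qlora_vram_gb(n_params_b: float, adapter_frac: float = 0.01,
                  overhead_gb: float = 10.0) -> float:
    base = n_params_b * 1e9 * 0.5 / 2**30              # 4-bit weights
    adapters = n_params_b * 1e9 * adapter_frac * 2 / 2**30  # bf16 adapters
    optimizer = adapters * 4                            # grads + fp32 Adam moments
    return base + adapters + optimizer + overhead_gb

# A 27B model lands in the mid-20s of GB — comfortably inside an
# L40S 48GB, let alone an A100 80GB.
print(round(qlora_vram_gb(27), 1))
print(round(qlora_vram_gb(32), 1))
```

The base weights dominate; since they are frozen and quantized, the optimizer footprint stays tiny, which is the whole point of QLoRA.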

Full Fine-Tune

Maximum flexibility
Typical GPU: 2–4× H100 80GB
Model range: GLM 5.1 ~355B MoE, MiniMax 2.7 ~230B MoE
Peak VRAM: 80–400GB+
All weights updated — highest quality delta
Requires optimizer states + activations in memory
FSDP or DeepSpeed ZeRO for GLM 5.1 / MiniMax 2.7 scale
Use spot with auto-checkpoint for cost control
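The VRAM gap between the two cards comes from optimizer state: with mixed-precision Adam, every trainable parameter drags roughly 16 bytes of weights, gradients, and optimizer state behind it before activations. That figure is a standard rule of thumb, not an Aircloud-specific number — a quick sketch:

```python
# Mixed-precision Adam rule of thumb: ~16 bytes per trainable
# parameter (bf16 weights 2 + bf16 grads 2 + fp32 master weights 4
# + fp32 Adam moments 8), activations not included. FSDP and
# DeepSpeed ZeRO-3 shard this total across GPUs.

def full_ft_vram_gb(n_params_b: float, n_gpus: int = 1) -> float:
    bytes_per_param = 16
    return n_params_b * 1e9 * bytes_per_param / 2**30 / n_gpus

# Even a dense 7B model needs ~104 GB of training state — more than
# one H100 — which is why full fine-tunes shard across 2–4+ GPUs.
print(round(full_ft_vram_gb(7), 1))
print(round(full_ft_vram_gb(7, n_gpus=4), 1))
```

For the MoE models in the range above, ZeRO-3 or FSDP with CPU/NVMe offload keeps the per-GPU slice inside 80GB.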
Aircloud Fit for Fine-Tuning Workloads

On-demand for experiments

Short LoRA runs, hyperparameter sweeps, dataset debugging. Pay per second. Kill it when you're done.

Spot for long runs

Full fine-tunes can run for hours. Spot pricing cuts cost by 40–60%. Auto-checkpoint handles preemptions.

Auto-checkpoint

Jobs save checkpoints to private object storage at configurable intervals. Interruptions resume — not restart.
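A resumable training loop reduces to a small pattern: on launch, read the last checkpoint if one exists, then continue from that step. The sketch below is illustrative — the file name, interval, and function names are stand-ins, not the Aircloud API:

```python
# Minimal resume-from-checkpoint loop. After a spot preemption the
# job is simply re-launched and picks up from the last saved step
# instead of step 0. Path and interval are illustrative stand-ins
# for private object storage and the configurable interval.
import json
import os

CKPT = "checkpoint.json"      # stands in for a private-bucket object
SAVE_EVERY = 100              # checkpoint interval, in steps

def load_step() -> int:
    """Return the last saved step, or 0 on a fresh start."""
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)["step"]
    return 0

def train(total_steps: int) -> int:
    step = load_step()        # resume — not restart
    while step < total_steps:
        step += 1             # one optimizer step would run here
        if step % SAVE_EVERY == 0 or step == total_steps:
            with open(CKPT, "w") as f:
                json.dump({"step": step}, f)
    return step

print(train(250))
```

Real frameworks hide this behind a flag — e.g. HF Trainer's `resume_from_checkpoint` — but the contract is the same: checkpoint often enough that a preemption costs minutes, not hours.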

Dataset Privacy

Your data never touches shared storage.

Datasets mount directly into your isolated job environment from your private storage bucket. There is no intermediary staging layer shared with other tenants. When the job finishes, the mount is gone.

Upload path: Direct to your private S3-compatible bucket
Mounting: Namespace-scoped bind mount, per job
Shared storage: None — no NFS or shared volume involved
Retention: You control lifecycle and deletion
Encryption: At-rest and in-transit
Supported Frameworks

HuggingFace Transformers + PEFT

The standard. LoRA, QLoRA, and full fine-tuning via Trainer. Pre-configured images available.

Axolotl

YAML-driven fine-tuning. Multi-GPU, LoRA, QLoRA, Flash Attention — out of the box.

LLaMA-Factory

WebUI and CLI for instruction tuning. Broad model support, easy dataset prep.

Unsloth

2× faster LoRA training. Lower VRAM usage with hand-written Triton kernels.

DeepSpeed

ZeRO-3 offloading for large full fine-tunes. Works with any HF Trainer.

Custom Docker

Bring your own image. Full root access to the container. No restrictions on framework.

Fine-tune on your data.
On your terms.

Start a LoRA run today. Isolated GPU, private storage, auto-checkpoint on spot.

Get Started Talk to Sales