Experiments:

Structuring Social Data for AI
How Vivly turned Reddit, X, and Hacker News discussions about Meta Ray-Ban glasses into a structured JSONL training dataset, processed it through a multi-stage pipeline, and ingested it into Aquin end-to-end.
Read case study →

Fine-tuning LLaMA 3.2 Instruct 1B with QLoRA on a Healthcare Dataset
Fine-tuning LLaMA 3.2 Instruct 1B with QLoRA on a healthcare dataset covering gene editing, regenerative medicine, AI-assisted diagnostics, and brain-computer interfaces, monitored end-to-end with the Aquin Experimental SDK.
Witness experiment →

The Weight Editing System
Agentic ROME on Pythia 2.8B: causal trace layer location, rank-one MLP updates, and a three-check validation loop that rolls back and retries on failure. Includes case studies on factuality, bias correction, and censor auditing.
Witness experiment →
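The rank-one MLP update mentioned above can be illustrated in a few lines. This is a minimal NumPy sketch, not the Pythia 2.8B implementation: the matrix shapes, the `passes_validation` check, and the variable names are all illustrative stand-ins for ROME's actual key-value update and the experiment's three-check loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MLP projection weight (d_out x d_in); shapes are
# illustrative, not Pythia 2.8B's actual dimensions.
W = rng.standard_normal((8, 4))

# A rank-one edit writes a new key -> value association into the layer:
# W' = W + u @ v.T, where v is the key direction (input space)
# and u is the value delta (output space).
v = rng.standard_normal((4, 1))          # key direction
u = rng.standard_normal((8, 1)) * 0.1    # value update, kept small

W_edited = W + u @ v.T

# The edit is rank one: the weight difference has matrix rank 1.
delta = W_edited - W
assert np.linalg.matrix_rank(delta) == 1

def passes_validation(W_new: np.ndarray) -> bool:
    # Placeholder for a real validation check (stand-in for the
    # three-check loop): reject edits that blow up the weight norm.
    return np.linalg.norm(W_new) < 2 * np.linalg.norm(W)

# Roll back to the original weights if validation fails.
W_final = W_edited if passes_validation(W_edited) else W
```

The rollback pattern is the key design point: because the update is a single additive rank-one term, reverting a failed edit is just restoring the saved original matrix before retrying.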
Documentation:
| Title | Tag |
|---|---|
| Security System | docs |
| Training Inspect System | docs |
| Data Inspection System | docs |
| Attribution System | docs |
| Eval System | docs |
| Benchmarks | docs |
Not sure if Aquin is right for you?
