# Provably Correct Machine Learning

Machine learning where bugs are caught at compile time, not in production.
Axiom.jl is a next-generation ML framework that combines:

- **Compile-time verification** - Shape errors caught before runtime
- **Formal guarantees** - Verification checks and certificate workflows
- **Optional acceleration** - Rust/GPU backend paths with explicit fallback behavior
- **Julia elegance** - Express models as mathematical specifications
```julia
using Axiom

model = Sequential(
    Dense(784, 128, relu),
    Dense(128, 10),
    Softmax()
)

x = Tensor(randn(Float32, 16, 784))
y = model(x)

result = verify(model, properties=[ValidProbabilities(), FiniteOutput()], data=[(x, nothing)])
@assert result.passed
```
```julia
# PyTorch: Runtime error after hours of training
# Axiom.jl: Compile error in milliseconds
@axiom BrokenModel begin
    input :: Tensor{Float32, (224, 224, 3)}
    features = input |> Conv(64, (3,3))
    output = features |> Dense(10)  # COMPILE ERROR!
    # "Shape mismatch: Conv output is (222,222,64), Dense expects vector"
end
```
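For contrast, a variant of the same model that does compile might flatten the convolution output before the dense layer. This is a minimal sketch, reusing the `Flatten` layer from the MNIST example further below; the layer arguments are illustrative:

```julia
@axiom FixedModel begin
    input :: Tensor{Float32, (224, 224, 3)}
    # Flatten the (222, 222, 64) feature map into a vector before Dense
    features = input |> Conv(64, (3,3)) |> Flatten
    output = features |> Dense(10)  # Now shape-checks at compile time
end
```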
```julia
@axiom SafeClassifier begin
    # ...
    @ensure valid_probabilities(output)  # Runtime assertion
    @prove ∀x. sum(softmax(x)) == 1.0    # Experimental proof workflow
end
```
```julia
# Generate verification certificates
cert = verify(model) |> generate_certificate
save_certificate(cert, "fda_submission.cert")
```
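A certificate can also be scoped to a specific property run; this sketch simply recombines the `verify`, `generate_certificate`, and `save_certificate` calls shown above (the filename is illustrative):

```julia
# Scope the certificate to specific properties (x is the sample batch from the quickstart)
result = verify(model, properties=[ValidProbabilities(), FiniteOutput()], data=[(x, nothing)])
cert = generate_certificate(result)
save_certificate(cert, "probabilities_finite.cert")  # illustrative filename
```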
```julia
# Import from a PyTorch checkpoint (.pt/.pth/.ckpt) via built-in Python bridge
# (requires python3 + torch in the selected runtime)
model = from_pytorch("model.pt")

# Or import canonical descriptor JSON
model2 = from_pytorch("model.pytorch.json")

# Export supported models to ONNX
to_onnx(model, "model.onnx", input_shape=(1, 3, 224, 224))
```

Current scope:
- `from_pytorch(…)`: canonical descriptor import + direct `.pt`/`.pth`/`.ckpt` bridge.
- `to_onnx(…)`: export for `Sequential`/`Pipeline` models built from Dense/Conv/Norm/Pool + common activations.
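A typical round trip composes only the calls above: import a checkpoint, verify it, then export. This is a sketch; the paths and input shape are illustrative:

```julia
# Import -> verify -> export round trip (illustrative paths and shape)
model = from_pytorch("checkpoint.pt")
result = verify(model)  # default property set, as in the certificate example
result.passed && to_onnx(model, "checkpoint.onnx", input_shape=(1, 3, 224, 224))
```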
```julia
# Development: Julia backend
model = Sequential(Dense(784, 128, relu), Dense(128, 10))

# Production path: optional Rust backend
prod_model = compile(model, backend=RustBackend("/path/to/libaxiom_core.so"), optimize=:aggressive)
```
```julia
# Non-GPU accelerator targets with safe fallback
cop = detect_coprocessor()  # TPU/NPU/PPU/MATH/FPGA/DSP or nothing
if cop !== nothing
    model_accel = compile(model, backend=cop, verify=false)
end
```
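The same fallback can be written as a one-line guard, using only the calls shown above:

```julia
# Keep the plain Julia model when no coprocessor is detected
model_accel = cop === nothing ? model : compile(model, backend=cop, verify=false)
```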
```julia
metadata = create_metadata(
    model;
    name="my-model",
    architecture="Sequential",
    version="1.0.0",
)
verify_and_claim!(metadata, "FiniteOutput", "verified=true; source=ci")
bundle = export_model_package(model, metadata, "build/model_package")
entry = build_registry_entry(bundle["manifest"]; channel="stable")
export_registry_entry(entry, "build/model_package/registry-entry.json")
```
```julia
reset_verification_telemetry!()
result = verify(model, properties=[FiniteOutput()], data=[(x, nothing)])
run_payload = verification_result_telemetry(result; source="inference-gate")
summary = verification_telemetry_report()
```
```julia
# REST
rest_server = serve_rest(model; host="0.0.0.0", port=8080, background=true)

# GraphQL
graphql_server = serve_graphql(model; host="0.0.0.0", port=8081, background=true)

# gRPC bridge server + contract generation
# - binary unary protobuf (`application/grpc`)
# - JSON bridge mode (`application/grpc+json`)
grpc_server = serve_grpc(model; host="0.0.0.0", port=50051, background=true)
generate_grpc_proto("axiom_inference.proto")
```
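A client call against the REST server might look like the following. This is a sketch only: the `/predict` route, the JSON payload shape, and the use of HTTP.jl/JSON3 are assumptions, not a documented contract:

```julia
using HTTP, JSON3

# Hypothetical request: route and JSON schema are assumptions; check the
# serving docs for the actual REST contract.
resp = HTTP.post("http://127.0.0.1:8080/predict",
                 ["Content-Type" => "application/json"],
                 JSON3.write(Dict("inputs" => rand(Float32, 1, 784))))
println(JSON3.read(String(resp.body)))
```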
```julia
using Axiom

# Define a simple classifier
model = Sequential(
    Dense(784, 256, relu),
    Dense(256, 10),
    Softmax()
)

# Generate sample data
x = randn(Float32, 32, 784)

# Inference
predictions = model(x)

# Verify properties
@ensure all(sum(predictions, dims=2) .≈ 1.0)
```
```julia
using Axiom

@axiom MNISTClassifier begin
    input :: Tensor{Float32, (:batch, 28, 28, 1)}
    output :: Probabilities(10)

    features = input |> Conv(32, (3,3)) |> ReLU |> MaxPool((2,2))
    features = features |> Conv(64, (3,3)) |> ReLU |> MaxPool((2,2))
    flat = features |> GlobalAvgPool() |> Flatten
    output = flat |> Dense(64, 10) |> Softmax

    @ensure valid_probabilities(output)
end

model = MNISTClassifier()
```
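A quick smoke test of the instantiated model, following the batch-first NHWC layout declared in the input spec (the batch size is illustrative):

```julia
# Run a random batch through the classifier and check the output property
x = Tensor(randn(Float32, 8, 28, 28, 1))
probs = model(x)
@ensure valid_probabilities(probs)
```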
ML models are deployed in critical applications:

- Medical diagnosis
- Autonomous vehicles
- Financial systems
- Criminal justice

Yet our tools allow bugs to slip through to production.
- Home - Start here
- User Guide - Install, infer, verify
- Developer Guide - Build/test/release workflow
- Release Checklist - Pre-release and release-day gates
- Vision - Why we built this
- `@axiom` DSL - Model definition guide
- Verification - `@ensure` and `@prove`
- Migration Guide - From PyTorch
- FAQ - Common questions
- Roadmap - Tracked commitments and delivery criteria
```text
Axiom.jl/
├── src/                 # Julia source
│   ├── Axiom.jl         # Main module
│   ├── types/           # Tensor type system
│   ├── layers/          # Neural network layers
│   ├── dsl/             # @axiom macro system
│   ├── verification/    # @ensure, @prove
│   ├── training/        # Optimizers, loss functions
│   └── backends/        # Julia/Rust backends
├── rust/                # Rust performance backend
│   ├── src/
│   │   ├── ops/         # Matrix, conv, activation ops
│   │   └── ffi.rs       # Julia FFI bindings
│   └── Cargo.toml
├── test/                # Test suite
├── examples/            # Example models
└── docs/                # Documentation & wiki
```
- ✓ v0.1 - Core framework, DSL, verification basics
- ❏ v0.2 - Full Rust backend, GPU support
- ❏ v0.3 - Hugging Face integration, model zoo
- ❏ v0.4 - Advanced proofs, SMT integration
- ❏ v1.0 - Production ready, industry certifications
We welcome contributions! See CONTRIBUTING.md.
- Bug reports and feature requests
- Documentation improvements
- New layers and operations
- Performance optimizations
- Verification methods
Axiom's proof system is Julia-native by default. SMT solving runs through
`packages/SMTLib.jl` with no Rust dependency. The Rust SMT runner is an optional
backend you can enable for hardened subprocess control.

Julia-native example:

```julia
@prove ∃x. x > 0
```

Optional Rust runner:
```bash
export AXIOM_SMT_RUNNER=rust
export AXIOM_SMT_LIB=/path/to/libaxiom_core.so
export AXIOM_SMT_SOLVER=z3
```

```julia
@prove ∃x. x > 0
```
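The same selection can be made from within a Julia session before invoking `@prove`; this sketch assumes the runner reads these environment variables at proof time:

```julia
# Mirror the shell exports via ENV (assumed to be read when @prove runs)
ENV["AXIOM_SMT_RUNNER"] = "rust"
ENV["AXIOM_SMT_LIB"] = "/path/to/libaxiom_core.so"
ENV["AXIOM_SMT_SOLVER"] = "z3"

@prove ∃x. x > 0
```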
Palimpsest-MPL-1.0 License - see LICENSE for details.

Axiom.jl builds on the shoulders of giants:
The future of ML is verified.