End-to-End Tutorial¶
Train a small classifier with the dg SDK, export its schema, and benchmark it on an MCU with Bitweaver.
Prerequisites¶
- dg installed (SDK Quickstart).
- A Bitweaver account with access to a project workspace. Don't have one yet? Get in touch.
1. Define a model¶
Subclass DLGModel and override build_model_graph().
import torch
from dg.base_model import DLGModel
from dg.layer import Linear, Norm


class TinyMLP(DLGModel):
    def __init__(self, in_features: int = 20, num_classes: int = 10):
        # Store sizes before the base class constructs the graph,
        # so build_model_graph() can read them.
        self.in_features = in_features
        self.num_classes = num_classes
        super().__init__()

    def build_model_graph(self):
        self.norm = Norm()
        self.fc1 = Linear(in_features=self.in_features, out_features=64, bias=True, act_func="relu")
        self.fc2 = Linear(in_features=64, out_features=self.num_classes, bias=True)

    def forward(self, x):
        x = self.norm(x)
        x = self.fc1(x)
        return self.fc2(x)
2. Train it¶
Use dg.Train to run the training loop.
from dg import Train
trainer = Train(model=TinyMLP(), ...)
trainer.fit(train_loader, val_loader, epochs=10)
See Train for full parameters.
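The train_loader and val_loader passed to trainer.fit are standard PyTorch data loaders. A minimal sketch of building them from synthetic data, assuming Train accepts ordinary torch.utils.data.DataLoader objects (the dataset sizes and batch size here are illustrative):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in data: 20 input features, 10 classes, matching TinyMLP.
features = torch.randn(256, 20)
labels = torch.randint(0, 10, (256,))

dataset = TensorDataset(features, labels)
train_set, val_set = torch.utils.data.random_split(dataset, [224, 32])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)
```

Replace the random tensors with your real feature matrix and label vector; the loader shapes are what matter to the trainer.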
3. Export the schema¶
Save the trained weights and the schema JSON that Bitweaver needs.
The schema JSON describes the model's I/O contract and layer layout.
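To make the contract concrete, here is a hand-written sketch of what such a schema file could contain for TinyMLP. The field names (inputs, outputs, layers, etc.) are illustrative assumptions, not the exact format dg emits; consult the export output for the authoritative layout.

```python
import json

# Hypothetical schema layout -- field names are illustrative only.
schema = {
    "model": "TinyMLP",
    "inputs": [{"name": "x", "shape": [1, 20], "dtype": "float32"}],
    "outputs": [{"name": "logits", "shape": [1, 10], "dtype": "float32"}],
    "layers": [
        {"type": "Norm"},
        {"type": "Linear", "in_features": 20, "out_features": 64, "act_func": "relu"},
        {"type": "Linear", "in_features": 64, "out_features": 10},
    ],
}

# Write the schema JSON that would be uploaded to Bitweaver.
with open("tinymlp_schema.json", "w") as f:
    json.dump(schema, f, indent=2)
```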
4. Benchmark on hardware¶
Follow Your First Benchmark to:
- Create a Bitweaver project.
- Upload the schema JSON.
- Pick a target board from Supported Hardware.
- Review on-MCU inference time, memory footprint, and flash.
5. Iterate¶
Use the measured numbers to decide what to change:
- Accuracy too low - retrain or fine-tune in dg.
- Inference time, memory, or flash too high - redesign the model (fewer parameters, smaller layers, quantization) and re-benchmark.
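Before retraining, a quick back-of-the-envelope parameter count shows how much a smaller hidden layer would shrink the model. A minimal sketch for the TinyMLP dimensions above (the 32-unit variant is a hypothetical alternative, not part of the tutorial model):

```python
def linear_params(in_features: int, out_features: int, bias: bool = True) -> int:
    """Weight count plus optional bias terms for one fully connected layer."""
    return in_features * out_features + (out_features if bias else 0)

# Tutorial TinyMLP: 20 -> 64 -> 10
current = linear_params(20, 64) + linear_params(64, 10)

# Hypothetical slimmer variant: 20 -> 32 -> 10
slim = linear_params(20, 32) + linear_params(32, 10)

print(current, slim)  # 1994 1002
```

Halving the hidden width roughly halves the parameter count (and therefore the weight storage in flash), which is why layer width is usually the first knob to turn when the benchmark reports a tight memory budget.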
Next steps¶
- API Reference - every public dg class and function.
- Supported Hardware - tested MCUs.