TRANSFORMER FELLOW 1

Edge LLM SoCs

Performance
Model size: 1.5B–70B parameters
Inference speed: >100 tokens/s
Innovative core architecture
Compute-in-memory technology
Dataflow computing
Outstanding cost-effectiveness and power efficiency
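The headline numbers above imply a steep memory-bandwidth requirement, which is what motivates compute-in-memory: in autoregressive decoding, every generated token reads the full weight set once. A back-of-envelope sketch (the 4-bit quantization assumption is illustrative, not a spec from this sheet):

```python
def min_bandwidth_gbps(params_b: float, tokens_per_s: float = 100.0,
                       bytes_per_param: float = 0.5) -> float:
    """Lower bound on weight-streaming bandwidth (GB/s) for decoding.

    Each token touches all weights once, so:
        bandwidth >= params * bytes_per_param * tokens_per_s
    bytes_per_param=0.5 assumes 4-bit weights (an assumption, not a spec).
    """
    return params_b * 1e9 * bytes_per_param * tokens_per_s / 1e9

for size in (1.5, 70.0):  # the model-size range quoted above
    print(f"{size}B params -> {min_bandwidth_gbps(size):,.0f} GB/s")
```

At 100 tokens/s this works out to roughly 75 GB/s for a 1.5B model and 3,500 GB/s for a 70B model, which is why keeping weights stationary inside the memory arrays pays off.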

Adapted for diverse edge inference scenarios

Digital SRAM DIMC
Stable, high-precision computation
Preferred choice for high-speed, low-power inference
Ideal for cloud-edge-device collaborative computing
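To make the digital SRAM DIMC idea concrete, here is a toy functional model of one common scheme: activation bits are broadcast one bit-plane per cycle, in-array logic and an adder tree form a per-plane partial sum, and a shift-and-add accumulator outside the array reassembles the full dot product. This is an illustrative sketch under assumed bit widths, not this chip's actual micro-architecture:

```python
def dimc_dot(weights, activations, a_bits=8):
    """Bit-serial digital DIMC dot product (functional model).

    weights: values held stationary in the SRAM macro.
    activations: unsigned inputs streamed in one bit-plane at a time.
    a_bits=8 is an assumed activation width, not a datasheet value.
    """
    acc = 0
    for bit in range(a_bits):
        # Broadcast one activation bit-plane across the array.
        plane = [(a >> bit) & 1 for a in activations]
        # In-array AND gates + adder tree produce the partial sum.
        partial = sum(w * p for w, p in zip(weights, plane))
        # Shift-and-add accumulation outside the array.
        acc += partial << bit
    return acc

# Matches an ordinary dot product:
ws, xs = [3, 5, 2], [10, 7, 4]
print(dimc_dot(ws, xs), sum(w * x for w, x in zip(ws, xs)))
```

Because the multiply-accumulate happens where the weights are stored, the per-token weight traffic estimated earlier never crosses a DRAM interface, which is the source of the speed and power advantage claimed here.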
New-Media DIMC (FeFET)
Extreme energy efficiency: up to 4× performance boost
Designed for next-generation, ultra-low-power edge inference deployment