Applied ML on FPGA: An End-to-End Perspective
Saturday Edition — First Online Cohort
Technical Workshop | Online
March 21 & 28, 2026 (two sessions × 3 hours) | 16:00 to 19:00 (CET)
This workshop provides a practitioner-oriented, experience-based introduction to deploying Machine Learning models on FPGAs, covering the complete ML → compression → hardware workflow.
It combines practical, guided examples with a strong emphasis on system-level understanding: where deployments commonly fail, which design decisions have the biggest impact, and how to navigate trade-offs when transitioning from software to hardware.
The workshop will also include a demo of KalEdge-Lite, a lightweight version of the upcoming ML-to-FPGA toolchain. KalEdge-Lite automates model compression, comparative analysis, and hls4ml project generation, providing a clear and reproducible end-to-end workflow.
What We'll Explore
- ML foundations for real deployment. What truly matters once a model leaves the notebook: training loops, evaluation metrics, and deployment-oriented thinking for constrained systems.
- Model compression in practice. Quantization, pruning, and knowledge distillation as practical engineering tools, understanding accuracy vs. efficiency trade-offs.
- FPGA fundamentals for ML engineers. Parallelism, dataflow, memory bottlenecks, and performance intuition (without requiring deep HDL expertise).
- End-to-end ML → FPGA workflows. How complete pipelines look in practice, common pitfalls, and effective iteration strategies from model design to hardware realization.
- Introduction to KalEdge-Lite. A web-based platform covering training, compression, and automated generation of ML accelerators on FPGA.
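To make the compression topic above concrete, here is a minimal sketch of symmetric post-training int8 quantization in NumPy. This is a hypothetical illustration of the accuracy-vs-efficiency trade-off discussed in the workshop, not KalEdge-Lite's or hls4ml's actual implementation; the function names and the toy weight tensor are invented for this example.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor post-training quantization to int8.

    The largest weight magnitude is mapped to 127; every value is
    rounded to the nearest representable step of size `scale`.
    """
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 codes back to float32 for error analysis."""
    return q.astype(np.float32) * scale

# Toy weight tensor standing in for a trained layer (hypothetical data)
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=(4, 4)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)

# Rounding error is bounded by half a quantization step
max_err = float(np.max(np.abs(w - w_hat)))
print(f"max quantization error: {max_err:.6f} (step size: {scale:.6f})")
```

The key intuition for FPGA deployment: int8 storage is 4x smaller than float32, and narrow fixed-point multipliers map far more efficiently onto FPGA DSP resources, at the cost of the bounded rounding error measured above.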
About the Instructor
Romina S. Molina is a Machine Learning & Hardware Acceleration Engineer (PhD in Computer Science / Industrial & Information Engineering), specialized in model efficiency, neural network compression, and on-device machine learning optimization.
Her doctoral research focused on FPGA/SoC acceleration, and she has over a decade of experience designing and deploying end-to-end machine learning pipelines, from hardware-aware model design to FPGA-based execution.
Format
Online via Discord | 2 sessions × 3 hours | Materials included | Short lectures + live demos | Certificate of participation provided.
The goal is to provide clarity and technical intuition, not to deliver production-ready designs.
Who Is This For?
- Engineers and researchers curious about ML deployment on FPGA.
- ML practitioners who want to understand hardware constraints.
- Students looking for a realistic end-to-end view of ML acceleration.
- Anyone exploring hls4ml, compression, or PYNQ for the first time.
Registration Fee
Saturday Edition — Special Pricing
This is the first online cohort of the workshop. Participants join at a reduced price in exchange for active feedback that will shape future editions. Same content, same instructor, but with the understanding that this is a first run.
Standard: €100 | Student ticket: €50 (limited seats)
This includes access to all live sessions, materials, Discord server, Q&A channel, and the private GitHub repository with all code and notebooks used throughout the workshop.
Registration is now open. The application deadline has been extended and now closes March 19, 2026.
🎓 Scholarship Program — 3 spots available
Three scholarship spots are available at a symbolic fee of €20, reserved for students who demonstrate a genuine interest in the topic.
To apply, complete the registration form and answer the scholarship question. Scholarship recipients will be notified by March 16.
Complete your registration
Register — €100 / €50 students