ZigZag is a novel HW Architecture-Mapping Design Space Exploration (DSE) framework for Deep Learning (DL) accelerators. It bridges the gap between algorithmic DL decisions and their acceleration cost on specialized hardware, providing fast and accurate HW cost estimation. Through its advanced mapping engines, ZigZag automates the discovery of optimal mappings for complex DL computations on custom architectures.
✔ ONNX Integration: Directly parse ONNX models for seamless compatibility with modern deep learning workflows.
✔ Flexible Hardware Architecture: Supports multi-dimensional (>2D) MAC arrays, advanced interconnection patterns, and high-level memory structures.
✔ Enhanced Cost Models: Detailed energy and latency analysis for memories with variable port structures, based on inferred spatial and temporal data-sharing and reuse patterns.
✔ Modular and Extensible: Fully revamped structure with object-oriented paradigms to support user-friendly extensions and interfaces.
✔ Integrated In-Memory Computing Support: Seamlessly define digital and analog in-memory-computing (IMC) cores via an intuitive user interface.
✔ Comprehensive Output Options: Outputs results in YAML format, enabling further analysis and integration.
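Since results are emitted as YAML, they can be loaded (e.g. with PyYAML's `yaml.safe_load`) into plain Python dictionaries for downstream analysis. The sketch below shows one way such a parsed result might be post-processed; the dictionary layout, field names, and units are illustrative assumptions for this example, not ZigZag's actual output schema.

```python
# Hypothetical post-processing of a parsed ZigZag-style YAML result.
# The dictionary layout below is an illustrative assumption, not the
# framework's actual output schema.

def total_energy(result: dict) -> float:
    """Sum operational energy and per-memory energy from a parsed result."""
    mem = sum(result["energy"]["memory_breakdown"].values())
    return result["energy"]["operational"] + mem

# Toy result, as it might look after yaml.safe_load() on an output file.
result = {
    "energy": {  # values in pJ (assumed unit)
        "operational": 1200.0,
        "memory_breakdown": {"dram": 800.0, "sram": 300.0, "rf": 50.0},
    },
    "latency": 4096,  # cycles (assumed unit)
}

print(total_energy(result))  # -> 2350.0
```

Keeping the analysis in plain dictionaries like this makes it easy to aggregate or compare results across many design points.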
Visit the Installation Guide for step-by-step instructions to set up ZigZag on your system.
Get up to speed with ZigZag using our resources:
- Check out the Getting Started Guide.
- Explore the Jupyter Notebook Demo to see ZigZag in action.
We are continuously improving ZigZag to stay at the forefront of HW design space exploration. Here’s what we’re working on:
- 🧠 ONNX Operator Support: Expanding compatibility for modern generative AI workloads.
- 📂 Novel Memory Models: Integrating advanced memory models and compilers for better performance analysis.
- ⚙️ Automatic Hardware Generation: Enabling end-to-end generation of hardware configurations.
- 🚀 Enhanced Mapping Methods: Developing more efficient and intelligent mapping techniques.
Learn more about the concepts behind ZigZag and its applications:
- ZigZag: Enlarging Joint Architecture-Mapping Design Space Exploration for DNN Accelerators
L. Mei, P. Houshmand, V. Jain, S. Giraldo, M. Verhelst
IEEE Transactions on Computers, vol. 70, no. 8, pp. 1160-1174, Aug. 2021.
- Uniform Latency Model for DNN Accelerators
L. Mei, H. Liu, T. Wu, et al.
DATE 2022.
- LOMA: Fast Auto-Scheduling on DNN Accelerators
A. Symons, L. Mei, M. Verhelst
AICAS 2021.
For more publications and detailed case studies, refer to the full list in our Documentation.
We welcome contributions! Feel free to fork the repository, submit pull requests, or open issues. Check our Contributing Guidelines for more details.