
CINM (Cinnamon): A Compilation Infrastructure for Heterogeneous Compute In-Memory and Compute Near-Memory Paradigms

An MLIR Based Compiler Framework for Emerging Architectures
Paper Link»

About The Project

Emerging compute-near-memory (CNM) and compute-in-memory (CIM) architectures have gained considerable attention in recent years, and some are now commercially available. However, their programmability remains a significant challenge. These devices typically require very low-level code that uses device-specific APIs directly, which restricts their use to device experts. With Cinnamon, we are taking a step closer to bridging the substantial abstraction gap between the application representation these architectures expect and the code users typically write. The framework is based on MLIR and provides domain-specific and device-specific hierarchical abstractions. This repository includes the sources for these abstractions as well as the transformation and conversion passes needed to progressively lower them. The conversions illustrate the intermediate representations (IRs) at each level, while the transformations demonstrate selected optimizations.

Getting Started

This is an example of how you can build the framework locally.

Prerequisites

CINM depends on a patched version of LLVM 18.1.6. Additionally, a number of software packages, such as CMake, are required to build it.
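
As a quick sanity check before building, you can verify that the basic host toolchain is available. This is only a minimal sketch: CMake, git, and a C++ compiler are clearly needed, while Python 3 and Ninja are common requirements for MLIR-based builds and are assumptions here rather than something this README specifies.

# Minimal toolchain check (python3 and ninja are assumptions, not confirmed requirements)
cmake --version
git --version
c++ --version
python3 --version
ninja --version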

Download and Build

The repository contains a script, build.sh, that installs all needed dependencies and builds the sources.

  • Clone the repo
    git clone https://github.com/tud-ccc/Cinnamon.git
  • Build the sources
    cd Cinnamon
    chmod +x build.sh
    ./build.sh
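
build.sh is the supported way to build the project. Purely for orientation, an out-of-tree MLIR project of this kind is conventionally configured roughly as sketched below; the source and build paths, the Ninja generator, and the assumption that a prebuilt LLVM/MLIR 18.1.6 installation sits under $LLVM_INSTALL_DIR are illustrative and not taken from build.sh.

# Illustrative manual configuration only; prefer ./build.sh.
# Assumes the CMake project is rooted in the cinnamon/ subdirectory and that
# a patched LLVM/MLIR 18.1.6 is installed under $LLVM_INSTALL_DIR.
cmake -S cinnamon -B build -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DMLIR_DIR="$LLVM_INSTALL_DIR/lib/cmake/mlir" \
  -DLLVM_DIR="$LLVM_INSTALL_DIR/lib/cmake/llvm"
cmake --build build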

Usage

All benchmarks at the cinm abstraction level are in this repository under cinnamon/benchmarks/. The compile-benches.sh script compiles all the benchmarks using the Cinnamon flow. The generated code and the intermediate IRs for each benchmark can be found under cinnamon/benchmarks/generated/.

chmod +x compile-benches.sh
./compile-benches.sh

The user can also run individual benchmarks by manually applying the individual conversions. Each benchmark file has a comment at the top giving the exact command used to lower it to the upmem IR; a rough sketch of what such a command looks like is shown below.
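
For illustration, a lowering command of this kind typically drives the project's MLIR optimizer tool through a sequence of conversion passes. The tool name cinm-opt follows the usual MLIR naming convention, and the pass names and file names below are hypothetical placeholders; always use the command recorded in the benchmark file itself.

# Hypothetical lowering pipeline; tool path, pass names, and file names are
# placeholders, not taken from this repository.
build/bin/cinm-opt cinnamon/benchmarks/some-bench.mlir \
  --convert-cinm-to-cnm \
  --convert-cnm-to-upmem \
  -o some-bench-upmem.mlir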

Roadmap

  • cinm, cnm and cim abstractions and their necessary conversions
  • The upmem abstraction, its conversions and connection to the target
  • The tiling transformation
  • PyTorch Front-end
  • The xbar abstraction
    • Associated conversions and transformations
    • Establishing the backend connection

See the open issues for a full list of proposed features (and known issues).

Contributing

If you have a suggestion, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Any other contribution is also greatly appreciated.

License

Distributed under the BSD 2-Clause License. See LICENSE.txt for more information.

Contributors