Special Session: XTA: Open Source eXtensible, Scalable and Adaptable Tensor Architecture for AI Acceleration
In: 2020 IEEE 38th International Conference on Computer Design (ICCD), 2020-10-01
Accelerator frameworks have gained prominence since the advent of AI applications. The limitation of current open source accelerator solutions is that they were not designed to be scalable and adaptable for commercial MPSoC products with differing network requirements and higher performance goals. We have implemented a new AI accelerator framework, XTA, derived from TVM-VTA, a popular and first known open source backend AI accelerator for Xilinx MPSoC. XTA is scalable and adaptable to the various network types and workloads of AI applications. XTA is a multi-core architecture that can dynamically scale and adapt to a given AI problem at both the hardware and software layers: at the hardware layer it can adapt to the compute and memory configurations of the system, and at the software layer it can hide hardware complexity and adapt to changing user workloads or data flows of a given AI problem. XTA also supports parallel, pipelined processing and autotuning of subgraphs in an MPSoC environment. We hope that with this open source AI accelerator, industry can not only push performance limits but also quickly innovate new AI applications based on the flexibility the architecture provides. A simulator version of XTA shows significant performance improvements over TVM-VTA for a wide range of networks and workloads.
| Title | Special Session: XTA: Open Source eXtensible, Scalable and Adaptable Tensor Architecture for AI Acceleration |
|---|---|
| Authors / Contributors | Jiang, Hua; Chakaravarthy, Ravikumar V. |
| Published in | 2020 IEEE 38th International Conference on Computer Design (ICCD), 2020-10-01 |
| Publisher | IEEE, 2020 |
| Media type | unknown |
| DOI | 10.1109/iccd50377.2020.00026 |