On the Viability of using LLMs for SW/HW Co-Design: An Example in Designing CiM DNN Accelerators
2023
Online
report
Deep Neural Networks (DNNs) have demonstrated impressive performance across a wide range of tasks. However, deploying DNNs on edge devices poses significant challenges due to stringent power and computational budgets. An effective solution is software-hardware (SW-HW) co-design, which jointly tailors DNN models and hardware architectures so that they make optimal use of the available resources. However, SW-HW co-design traditionally suffers from slow optimization because its optimizers start without any heuristic knowledge, the so-called "cold start" problem. In this study, we present a novel approach that leverages Large Language Models (LLMs) to address this issue. By incorporating the abundant knowledge embedded in pre-trained LLMs into the co-design optimization process, we bypass the cold start problem and substantially accelerate the design process, achieving a 25x speedup. This advancement paves the way for the rapid and efficient deployment of DNNs on edge devices.
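The record does not include the paper's actual method details. Purely as an illustration of the warm-start idea described above, the following minimal Python sketch contrasts a cold-start search with one seeded by prior knowledge; the design space, cost model, and the `llm_suggest_seeds` stub are all hypothetical stand-ins, not the paper's implementation.

```python
import random

# Toy CiM-accelerator design space: (crossbar size, weight bit-width).
# Illustrative only; the paper's real search space and cost model are
# not part of this record.
DESIGN_SPACE = [(xb, bits) for xb in (64, 128, 256, 512) for bits in (2, 4, 8)]

def cost(design):
    """Stand-in cost model: penalize large crossbars and precision far from 4 bits."""
    xb, bits = design
    return xb / 128 + abs(bits - 4)

def llm_suggest_seeds():
    """Placeholder for an LLM query. A real system would prompt a
    pre-trained LLM for promising starting designs; here a plausible
    answer is hard-coded to keep the sketch self-contained."""
    return [(128, 4), (64, 4)]

def search(seeds, budget=5, rng_seed=0):
    """Evaluate any seed designs first, then spend the remaining budget
    on random exploration; return the cheapest design found."""
    rng = random.Random(rng_seed)
    pool = list(seeds) + [rng.choice(DESIGN_SPACE) for _ in range(budget)]
    return min(pool, key=cost)

cold = search(seeds=[])                   # cold start: random exploration only
warm = search(seeds=llm_suggest_seeds())  # warm start: LLM-seeded exploration
```

Because the warm-started run evaluates the same random candidates plus the seeds, it can never do worse than the cold start under the same budget, which is the intuition behind using LLM knowledge to bypass the cold start problem.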
Authors: Yan, Zheyu ; Qin, Yifan ; Hu, Xiaobo Sharon ; Shi, Yiyu