Full text available after login or within the campus network.
4.6-Bit Quantization for Fast and Accurate Neural Network Inference on CPUs.
In: Mathematics (2227-7390), Vol. 12 (2024-03-01), Issue 5, pp. 651-672
Quantization is a widespread method for reducing the inference time of neural networks on mobile Central Processing Units (CPUs). Eight-bit quantized networks match the quality of full-precision models and fit the hardware architecture well, with one-byte coefficients and 32-bit dot-product accumulators. Lower-precision quantizations usually suffer noticeable quality loss and require specialized computational algorithms to outperform eight-bit quantization. In this paper, we propose a novel 4.6-bit quantization scheme that makes more efficient use of CPU resources. This scheme has more quantization bins than four-bit quantization and is therefore more accurate, while preserving the computational efficiency of the latter (it runs only 4% slower). Our multiplication uses a combination of 16- and 32-bit accumulators and avoids the multiplication-depth limitation of the previous 4-bit multiplication algorithm. Experiments with different convolutional neural networks on the CIFAR-10 and ImageNet datasets show that 4.6-bit quantized networks are 1.5–1.6 times faster than eight-bit networks on an ARMv8 CPU. In terms of quality, the results of a 4.6-bit quantized network are close to the mean of the four-bit and eight-bit networks of the same architecture. Therefore, 4.6-bit quantization may serve as an intermediate solution between fast but inaccurate low-bit quantizations and accurate but relatively slow eight-bit ones. [ABSTRACT FROM AUTHOR]
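The mixed 16-/32-bit accumulation mentioned in the abstract can be illustrated with a minimal sketch: products of low-bit values are summed into a 16-bit accumulator, which is flushed into a 32-bit accumulator before it can overflow. This is not the authors' implementation; the value range (0..23, roughly 4.6 bits of levels), the `FLUSH_EVERY` constant, and the function name `dot_mixed_accum` are illustrative assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Flush period is an assumption: 32 * 23 * 23 = 16928 < INT16_MAX,
 * so at most 32 products of values in 0..23 never overflow int16_t. */
#define FLUSH_EVERY 32

/* Dot product of two low-bit quantized vectors (values assumed in 0..23),
 * accumulating partial sums in 16 bits and totals in 32 bits. */
int32_t dot_mixed_accum(const uint8_t *a, const uint8_t *b, size_t n) {
    int32_t acc32 = 0;  /* wide accumulator, holds the running total   */
    int16_t acc16 = 0;  /* narrow accumulator, holds recent products   */
    for (size_t i = 0; i < n; ++i) {
        acc16 += (int16_t)(a[i] * b[i]);   /* each product fits in 16 bits */
        if ((i + 1) % FLUSH_EVERY == 0) {  /* flush before overflow is possible */
            acc32 += acc16;
            acc16 = 0;
        }
    }
    return acc32 + acc16;  /* fold in the unflushed remainder */
}
```

On real ARMv8 hardware the appeal of this pattern is that twice as many 16-bit lanes as 32-bit lanes fit in a SIMD register, so most of the accumulation runs at the narrower, faster width.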
Copyright of Mathematics (2227-7390) is the property of MDPI and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Title: | 4.6-Bit Quantization for Fast and Accurate Neural Network Inference on CPUs. |
---|---|
Author(s) / Contributors: | Trusov, Anton ; Limonova, Elena ; Nikolaev, Dmitry ; Arlazarov, Vladimir V. |
Journal: | Mathematics (2227-7390), Vol. 12 (2024-03-01), Issue 5, pp. 651-672 |
Published: | 2024 |
Media type: | academicJournal |
ISSN: | 2227-7390 (print) |
DOI: | 10.3390/math12050651 |