This book explores the technological developments of the new paradigm of approximate computing at various levels of abstraction. The authors describe, in a single source, the state of the art, covering the entire spectrum of research activities in approximate computing and bridging the device, circuit, architecture, and system levels. Content includes tutorials, reviews, and surveys of current theoretical and experimental results, design methodologies, and applications developed in approximate computing, addressing a wide readership of both generalists and specialists.
Computing systems at all scales (from mobile handheld devices to supercomputers, servers, and large cloud-based data centers) have seen significant performance gains, mostly through the continuous shrinking of the complementary metal-oxide-semiconductor (CMOS) feature size, which has doubled the number of transistors on a chip with every technology generation. However, power dissipation has become the fundamental barrier to scaling computing performance across all platforms. As classical Dennard scaling comes to an end, reducing on-chip power consumption while increasing throughput (as per Moore's Law) has become a serious challenge. Computation at the nanoscale necessitates fundamentally different approaches. These approaches rely on computational paradigms that exploit features of the targeted set of applications as well as unique interactions between the hardware, software, and processing algorithms of a computing system.
Approximate computing has been proposed as a novel paradigm for efficient, low-power design at the nanoscale. Efficiency here means computing approximate results at comparable or better performance and lower power consumption than the fully accurate counterpart. Approximate computing therefore generates results that are good enough rather than always fully accurate. Although computational errors are generally undesirable, applications such as multimedia, signal processing, machine learning (ML), pattern recognition, and data mining tolerate the occurrence of some errors. Approximate computing is thus most applicable to computing systems that relate to human perception/cognition and have inherent error resilience. Many of these applications are based on statistical or probabilistic computation, in which different approximations can be made to better suit the desired objectives. By relaxing the accuracy requirement for these applications, it is possible to achieve not only energy efficiency but also a simpler design and lower latency.
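As a minimal illustration of this accuracy-for-efficiency trade-off, the sketch below models one common approximate-arithmetic technique: truncating the low-order bits of the operands before multiplying. In hardware, dropping these bits removes partial-product rows and saves area and energy; here the truncation is only simulated in software. The function name and the `drop_bits` parameter are hypothetical choices for this example, not from the book.

```python
def approx_mul(a: int, b: int, drop_bits: int = 4) -> int:
    """Simulate a truncation-based approximate multiplier.

    Zeroing the low-order bits of each operand models a hardware
    multiplier that omits the corresponding partial-product rows,
    trading a small, bounded error for lower power and area.
    """
    a_trunc = (a >> drop_bits) << drop_bits  # clear low bits of a
    b_trunc = (b >> drop_bits) << drop_bits  # clear low bits of b
    return a_trunc * b_trunc

# Compare against the exact product to see the relative error.
exact = 1234 * 5678
approx = approx_mul(1234, 5678)
rel_err = abs(exact - approx) / exact  # small fraction of the exact value
```

For error-tolerant workloads such as image filtering, a relative error at this scale is typically imperceptible, while the simplified datapath reduces switching activity and thus dynamic power.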
Serves as a single-source reference to the state of the art in approximate computing; covers a broad range of topics, from circuits to applications; includes contributions by leading researchers from academia and industry.