- Added by: literator
- Date: 13-06-2023, 05:56
- Comments: 0
Title: Artificial Intelligence For Science: A Deep Learning Revolution
Author: Alok Choudhary, Geoffrey Fox, Tony Hey
Publisher: World Scientific Publishing
Year: 2023
Pages: 803
Language: English
Format: pdf (true)
Size: 172.5 MB
This unique collection introduces AI, Machine Learning (ML), and deep neural network technologies leading to scientific discovery from the datasets generated both by supercomputer simulation and by modern experimental facilities. To work with Deep Learning methods, practitioners rely on Deep Learning frameworks: specialized software that lets users design and train Artificial Neural Networks (ANNs). Popular frameworks include PyTorch, TensorFlow, and MXNet, among others. In summary, these frameworks allow users to (a) construct an ANN from basic tensor operations and a predefined set of building blocks (convolutional kernels, dense layers, recurrent units, various activation functions, etc.), (b) compute derivatives of the loss function for back-propagation, (c) execute forward and backward passes efficiently on given hardware, and (d) provide additional helper functions for distributed training, data handling, etc.
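As a minimal sketch of what item (b) means, the toy example below trains a single linear neuron by writing out the forward pass, the loss gradients, and the update step by hand in plain Python. The model, data, and hyperparameters are invented for illustration; frameworks such as PyTorch or TensorFlow compute exactly these derivatives automatically via back-propagation, so users never derive them on paper.

```python
import random

# Toy model: a single linear "neuron" y = w*x + b, fitted with
# mean-squared-error loss to data drawn from the target y = 2x + 1.
# The gradient formulas below are the hand-derived chain-rule
# derivatives that a Deep Learning framework would produce
# automatically during its backward pass.

def train(steps=500, lr=0.05, seed=0):
    rng = random.Random(seed)
    # synthetic training set (hypothetical, for illustration only)
    data = [(x, 2.0 * x + 1.0) for x in (rng.uniform(-1, 1) for _ in range(64))]
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        # backward pass: dL/dw and dL/db for L = mean((w*x + b - y)^2)
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / n
        gb = sum(2 * (w * x + b - y) for x, y in data) / n
        # gradient-descent update (the job of a framework's optimizer)
        w -= lr * gw
        b -= lr * gb
    return w, b

w, b = train()
print(w, b)  # converges toward w = 2, b = 1
```

In a real framework the same loop shrinks to a `loss.backward()` call followed by an optimizer step, which is precisely the convenience points (b) and (c) describe.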