Machine Learning and Non-volatile Memories

Author: Rino Micheloni
Publisher: Springer Nature
ISBN: 303103841X
Category: Technology & Engineering
Languages: en
Pages: 178

Book Description
This book presents the basics of both NAND flash storage and machine learning, detailing the storage problems the latter can help to solve. At first sight, machine learning and non-volatile memories seem very far from each other. Machine learning implies mathematics, algorithms, and a lot of computation; non-volatile memories are solid-state devices used to store information, with the remarkable capability of retaining that information even without a power supply. This book helps the reader understand how these two worlds can work together, bringing a lot of value to each other. In particular, the book covers two main fields of application: analog neural networks (NNs) and solid-state drives (SSDs).

After reviewing the basics of machine learning in Chapter 1, Chapter 2 shows how neural networks can mimic the human brain; to accomplish this, neural networks have to perform a specific computation called vector-by-matrix (VbM) multiplication, which is particularly power hungry. In the digital domain, VbM is implemented by means of logic gates, which dictate both area occupation and power consumption; the combination of the two poses serious challenges to hardware scalability, thus limiting the size of the neural network itself, especially in terms of the number of processable inputs and outputs. Non-volatile memories (phase change memories in Chapter 3, resistive memories in Chapter 4, and 3D flash memories in Chapters 5 and 6) enable an analog implementation of VbM (also called a “neuromorphic architecture”), which can easily beat the equivalent digital implementation in terms of both speed and energy consumption.

SSDs and flash memories are tightly coupled; as 3D flash scales, a significant amount of work has to be done to optimize the overall performance of SSDs, and machine learning has emerged as a viable solution in many stages of this process. After introducing the main flash reliability issues, Chapter 7 presents both supervised and unsupervised machine learning techniques that can be applied to NAND, along with algorithms and techniques for proactive reliability management of SSDs. The last section of Chapter 7 discusses the next challenge for machine learning in the context of so-called computational storage. Machine learning and non-volatile memories can undoubtedly help each other, but we are just at the beginning of the journey; this book helps researchers understand the basics of each field through real application examples, hopefully providing a good starting point for the next level of development.
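As a hedged illustration of the vector-by-matrix multiplication described above (a generic sketch, not code from the book), the snippet below compares the digital reference computation with an analog crossbar read, where input voltages applied to non-volatile conductances produce bitline currents I_j = sum_i V_i * G_ij by Ohm's and Kirchhoff's laws; the array sizes, conductance scale, and values are assumptions.

import numpy as np

# Hypothetical example: a neural-network layer as vector-by-matrix (VbM) multiplication.
# Digital reference: y = x @ W, computed with multipliers and adders (logic gates).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=4)          # input activations (applied as voltages)
W = rng.uniform(-1.0, 1.0, size=(4, 3))    # layer weights

y_digital = x @ W                          # conventional digital VbM

# Analog crossbar sketch: each weight is stored as a pair of non-volatile
# conductances (G_pos, G_neg) so that W = (G_pos - G_neg) / g_scale.
g_scale = 1e-5                             # assumed conductance per unit weight (S)
G_pos = np.where(W > 0, W, 0.0) * g_scale
G_neg = np.where(W < 0, -W, 0.0) * g_scale

# Ohm's law gives per-cell currents; Kirchhoff's current law sums them on each
# bitline, so the whole multiplication happens in a single read operation.
I_out = x @ G_pos - x @ G_neg              # output currents (A)
y_analog = I_out / g_scale                 # back to weight units

assert np.allclose(y_digital, y_analog)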

Non-Volatile In-Memory Computing by Spintronics

Author: Hao Yu
Publisher: Springer Nature
ISBN: 3031020324
Category: Technology & Engineering
Languages: en
Pages: 147

Book Description
Exa-scale computing requires a re-examination of the existing hardware platforms that support intensive data-oriented computing. Since the main bottleneck is memory, this book aims to develop an energy-efficient in-memory computing platform. First, models of the spin-transfer torque magnetic tunnel junction and of racetrack memory are presented. Next, the authors show that spintronics could be a candidate for future data-oriented computing, covering storage, logic, and interconnect. On this basis, spintronics-based in-memory computing is applied to data encryption and machine learning. The implementations of in-memory AES, the Simon cipher, and interconnect are explained in detail. In addition, in-memory machine learning and face recognition are also illustrated in this book.
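For context on the ciphers mentioned above, here is a plain software model of one Simon round (following the public Simon specification, not taken from the book); it shows the bit-level rotate/AND/XOR logic that an in-memory spintronic implementation has to realize. The word size and round keys below are placeholders.

# Plain software model of one Simon Feistel round; word size and keys are illustrative.
WORD = 16                      # Simon32 uses 16-bit words
MASK = (1 << WORD) - 1

def rotl(x, r):
    return ((x << r) | (x >> (WORD - r))) & MASK

def simon_round(x, y, k):
    """One round: (x, y) -> (y ^ f(x) ^ k, x) with f(x) = (x<<<1 & x<<<8) ^ x<<<2."""
    fx = (rotl(x, 1) & rotl(x, 8)) ^ rotl(x, 2)
    return (y ^ fx ^ k) & MASK, x

# Toy usage with made-up round keys (a real cipher derives them from a key schedule).
x, y = 0x6565, 0x6877
for k in (0x0100, 0x0908, 0x1110, 0x1918):
    x, y = simon_round(x, y, k)
print(f"state after 4 rounds: {x:04x} {y:04x}")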

Design Exploration of Emerging Nano-scale Non-volatile Memory

Author: Hao Yu
Publisher: Springer Science & Business Media
ISBN: 1493905511
Category: Technology & Engineering
Languages: en
Pages: 192

Book Description
This book presents the latest techniques for the characterization, modeling, and design of nano-scale non-volatile memory (NVM) devices. Coverage focuses on fundamental NVM device fabrication and characterization, internal state identification of memristive dynamics with physics-based modeling, NVM circuit design, and hybrid NVM memory system design-space optimization. The authors discuss design methodologies for nano-scale NVM devices from a circuits/systems perspective, including the general foundations of the memristive dynamics in NVM devices. Coverage includes physical modeling, as well as the development of a platform to explore novel hybrid CMOS and NVM circuit and system design.
• Offers readers a systematic and comprehensive treatment of emerging nano-scale non-volatile memory (NVM) devices;
• Focuses on the internal state of NVM memristive dynamics, novel NVM readout and memory cell circuit design, and hybrid NVM memory system optimization;
• Provides both theoretical analysis and practical examples to illustrate design methodologies;
• Illustrates design and analysis for recent developments in spin-torque-transfer, domain-wall racetrack, and memristor devices.
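To make the notion of memristive internal-state dynamics concrete, here is a minimal sketch using the generic linear ion-drift memristor model with a Joglekar window; it is an illustration under assumed parameter values, not the modeling framework developed in the book.

import numpy as np

# Generic linear ion-drift memristor: internal state x is the normalized doped-region
# width; memristance interpolates between R_ON and R_OFF. Parameters are illustrative.
R_ON, R_OFF = 100.0, 16e3          # low/high resistance states (ohm)
D = 10e-9                          # device thickness (m)
MU_V = 1e-14                       # dopant mobility (m^2 s^-1 V^-1)
P = 2                              # Joglekar window exponent

def simulate(v_of_t, dt=1e-4, x0=0.1):
    """Integrate the internal state x under a voltage drive and return (x(t), i(t))."""
    x, xs, currents = x0, [], []
    for v in v_of_t:
        m = R_ON * x + R_OFF * (1.0 - x)              # instantaneous memristance
        i = v / m                                     # Ohm's law
        window = 1.0 - (2.0 * x - 1.0) ** (2 * P)     # keeps x inside [0, 1]
        x += dt * (MU_V * R_ON / D**2) * i * window
        x = min(max(x, 0.0), 1.0)
        xs.append(x)
        currents.append(i)
    return np.array(xs), np.array(currents)

# A slow sinusoidal drive moves the state back and forth; plotting i against v would
# show the pinched hysteresis loop typical of memristive devices.
t = np.arange(0.0, 2.0, 1e-4)
x_hist, i_hist = simulate(1.2 * np.sin(2 * np.pi * 1.0 * t), dt=1e-4)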

Metal Oxides for Non-volatile Memory

Author: Panagiotis Dimitrakis
Publisher: Elsevier
ISBN: 0128146303
Category: Technology & Engineering
Languages: en
Pages: 534

Book Description
Metal Oxides for Non-volatile Memory: Materials, Technology and Applications covers the technology and applications of metal oxides (MOx) in non-volatile memory (NVM) technology. The book addresses all types of NVMs, including floating-gate memories, 3-D memories, charge-trapping memories, quantum-dot memories, resistance-switching memories and memristors, Mott memories, and transparent memories. Applications of MOx in DRAM technology, where they play a crucial role in the evolution of DRAM, are also addressed. The book offers a broad scope, encompassing discussions of materials properties, deposition methods, design and fabrication, and circuit- and system-level applications of metal oxides to non-volatile memory. Finally, the book addresses one of the most promising classes of materials that may lead to a solution to the challenges in chip size and capacity for memory technologies, particularly for mobile applications and embedded systems.
• Systematically covers metal oxide materials and their properties, with memory technology applications including floating-gate memory, 3-D memory, memristors, and much more
• Provides an overview of the most relevant deposition methods, including sputtering, CVD, ALD, and MBE
• Discusses the design and fabrication of metal oxides for a wide breadth of non-volatile memory applications, from 3-D flash technology and transparent memory to DRAM technology

Advances in Non-volatile Memory and Storage Technology

Author: Yoshio Nishi
Publisher: Woodhead Publishing
ISBN: 0081025858
Category: Science
Languages: en
Pages: 662

Book Description
Advances in Non-volatile Memory and Storage Technology, Second Edition, addresses recent developments across the non-volatile memory spectrum, from fundamental understanding to technological aspects. The book provides up-to-date information on current memory technologies, as presented by leading experts from both academia and industry. To reflect the rapidly changing field, many new chapters have been included to feature the latest in RRAM technology, STT-RAM, memristors, and more. The new edition describes emerging technologies, including oxide-based ferroelectric memories, MRAM technologies, and 3D memory. Finally, to further widen the discussion of the application space, neuromorphic computing aspects have been included. This book is a key resource for postgraduate students and academic researchers in physics, materials science, and electrical engineering. In addition, it will be a valuable tool for research and development managers concerned with electronics, semiconductors, nanotechnology, solid-state memories, magnetic materials, organic materials, and portable electronic devices.
• Discusses emerging devices and research trends, such as neuromorphic computing and oxide-based ferroelectric memories
• Provides an overview of developing non-volatile memory and storage technologies and explores their strengths and weaknesses
• Examines improvements to flash technology, charge trapping, and resistive random access memory

Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing

Author: Sudeep Pasricha
Publisher: Springer Nature
ISBN: 303119568X
Category: Technology & Engineering
Languages: en
Pages: 418

Book Description
This book presents recent advances towards the goal of enabling efficient implementation of machine learning models on resource-constrained systems, covering different application domains. The focus is on presenting new and interesting use cases of applying machine learning to innovative application domains; exploring the efficient hardware design of machine learning accelerators and memory optimization techniques; illustrating model compression and neural architecture search techniques for energy-efficient and fast execution on resource-constrained hardware platforms; and understanding hardware-software codesign techniques for achieving even greater energy, reliability, and performance benefits.
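As a concrete, hedged example of the model-compression techniques mentioned above, the sketch below applies symmetric per-tensor post-training int8 quantization to a made-up dense layer; the shapes and scale convention are assumptions for illustration, not a method from the book.

import numpy as np

# Minimal sketch of post-training weight quantization: symmetric per-tensor int8.
def quantize_int8(w):
    scale = np.max(np.abs(w)) / 127.0          # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w_fp32 = rng.normal(0.0, 0.1, size=(128, 64)).astype(np.float32)   # a toy dense layer
q, scale = quantize_int8(w_fp32)

# The int8 tensor needs 4x less memory than fp32 and maps onto integer MAC units;
# the price is a small reconstruction error that must be validated on real data.
error = np.mean(np.abs(w_fp32 - dequantize(q, scale)))
print(f"storage: {w_fp32.nbytes} B -> {q.nbytes} B, mean abs error: {error:.5f}")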

Machine Learning Algorithms for Industrial Applications

Author: Santosh Kumar Das
Publisher: Springer Nature
ISBN: 303050641X
Category: Technology & Engineering
Languages: en
Pages: 321

Book Description
This book explores several problems and their solutions regarding data analysis and prediction for industrial applications. Machine learning is a prominent topic in modern industries: its influence can be felt in many aspects of everyday life, as the world rapidly embraces big data and data analytics. Accordingly, there is a pressing need for novel and innovative algorithms to help us find effective solutions in industrial application areas such as media, healthcare, travel, finance, and retail. In all of these areas, data is the crucial parameter, and the main key to unlocking the value of industry. The book presents a range of intelligent algorithms that can be used to filter useful information in the above-mentioned application areas and efficiently solve particular problems. Its main objective is to raise awareness for this important field among students, researchers, and industrial practitioners.

Emerging Non-volatile Memory Technologies

Author: Wen Siang Lew
Publisher: Springer Nature
ISBN: 9811569126
Category: Science
Languages: en
Pages: 439

Book Description
This book offers a balanced and comprehensive guide to the core principles, fundamental properties, experimental approaches, and state-of-the-art applications of two major groups of emerging non-volatile memory technologies: spintronics-based devices and resistive switching devices, also known as Resistive Random Access Memory (RRAM). The first section presents different types of spintronics-based devices, i.e. magnetic tunnel junction (MTJ), domain wall, and skyrmion memory devices, and describes how their development has led to various promising applications, such as microwave oscillators, detectors, magnetic logic, and neuromorphic engineered systems. In the second half of the book, the underlying device physics of RRAM devices, supported by different experimental observations and by modelling, is presented together with memory-array-level implementation. Insight into the desired properties of RRAM as a synaptic element in neuromorphic computing platforms, from both a materials and an algorithms viewpoint, is also discussed, with a specific example in an automatic sound classification framework.
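As a generic illustration (not the book's device model) of an RRAM cell used as a synaptic element, the sketch below represents the weight as a bounded conductance updated by saturating potentiation/depression pulses; all parameter values and the update rule are assumptions for this sketch.

import numpy as np

# Synaptic weight as a bounded RRAM conductance with saturating pulse updates.
G_MIN, G_MAX = 1e-6, 100e-6        # conductance range (S), assumed
ALPHA = 0.1 * (1.0 - np.exp(-1.0 / 3.0))   # assumed per-pulse update fraction

def potentiate(g, n_pulses=1):
    for _ in range(n_pulses):
        g += (G_MAX - g) * ALPHA   # step shrinks as g approaches G_MAX (nonlinear LTP)
    return g

def depress(g, n_pulses=1):
    for _ in range(n_pulses):
        g -= (g - G_MIN) * ALPHA   # step shrinks as g approaches G_MIN (nonlinear LTD)
    return g

# Training a neuromorphic classifier (e.g., for sound classification) would translate
# each requested weight change into a number of such programming pulses.
g = 10e-6
g = potentiate(g, n_pulses=20)
g = depress(g, n_pulses=5)
print(f"synaptic conductance after updates: {g * 1e6:.2f} uS")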

Fundamentals

Author: Katharina Morik
Publisher: Walter de Gruyter GmbH & Co KG
ISBN: 3110785943
Category: Science
Languages: en
Pages: 506

Book Description
Machine learning has been part of Artificial Intelligence since its beginning; only a perfect being could show intelligent behavior without learning, and all others, be they humans or machines, need to learn in order to enhance their capabilities. In the 1980s, learning from examples and the modeling of human learning strategies were investigated in concert. The formal statistical basis of many learning methods was put forward later on and is still an integral part of machine learning. Neural networks have always been in the toolbox of methods, and integrating all the pre-processing, kernel-function, and transformation steps of a machine learning process into the architecture of a deep neural network increased the performance of this model type considerably. Modern machine learning is challenged on the one hand by the amount of data and on the other hand by the demand for real-time inference. This leads to an interest in computing architectures and modern processors. For a long time, machine learning research could take the von Neumann architecture for granted: all algorithms were designed for the classical CPU, and issues of implementation on a particular architecture were ignored. This is no longer possible; the time for investigating machine learning and computer architecture independently is over.

Computing architecture has experienced a similarly rapid development, from the mainframes and personal computers of the last century to very large compute clusters on the one hand and ubiquitous embedded systems in the Internet of Things on the other. The sensors of cyber-physical systems produce huge amounts of streaming data which need to be stored and analyzed, and their actuators need to react in real time; this clearly establishes a close connection with machine learning. Cyber-physical systems and systems in the Internet of Things consist of diverse components, heterogeneous in both hardware and software. Modern multi-core systems, graphics processors, memory technologies, and hardware-software codesign offer opportunities for better implementations of machine learning models. Machine learning and embedded systems together now form a field of research which tackles leading-edge problems in machine learning, algorithm engineering, and embedded systems.

Machine learning today needs to make the resource demands of learning and inference meet the resource constraints of the computer architectures and platforms used. A large variety of algorithms for the same learning method and, moreover, diverse implementations of an algorithm for particular computing architectures optimize learning with respect to resource efficiency while keeping some guarantees of accuracy. The trade-off between decreased energy consumption and an increased error rate, to give just one example, needs to be shown theoretically for both training a model and model inference. Pruning and quantization are ways of reducing the resource requirements by either compressing or approximating the model. In addition to memory and energy consumption, timeliness is an important issue, since many embedded systems are integrated into large products that interact with the physical world; if the results are delivered too late, they may have become useless. As a result, real-time guarantees are needed for such systems. To efficiently utilize the available resources, e.g., processing power, memory, and accelerators, with respect to response time, energy consumption, and power dissipation, different scheduling algorithms and resource management strategies need to be developed.

This book series addresses machine learning under resource constraints as well as the application of the described methods in various domains of science and engineering. Turning big data into smart data requires many steps of data analysis: methods for extracting and selecting features, filtering and cleaning the data, joining heterogeneous sources, aggregating the data, and learning predictions need to scale up. The algorithms are challenged on the one hand by high-throughput data and gigantic data sets, as in astrophysics, and on the other hand by high-dimensional data, as in genetics. Resource constraints are given by the relation between the demands for processing the data and the capacity of the computing machinery; the resources are runtime, memory, communication, and energy. Novel machine learning algorithms are optimized with regard to minimal resource consumption, and learned predictions are applied to program executions in order to save resources. The three volumes have the following subtopics:
Volume 1: Machine Learning under Resource Constraints - Fundamentals
Volume 2: Machine Learning and Physics under Resource Constraints - Discovery
Volume 3: Machine Learning under Resource Constraints - Applications
Volume 1 establishes the foundations of this new field (Machine Learning under Resource Constraints). It goes through all the steps from data collection, summarization, and clustering to the different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Several machine learning methods are inspected with respect to their resource requirements and how to enhance their scalability on diverse computing architectures, ranging from embedded systems to large computing clusters.
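As a hedged illustration of the pruning technique named above as a way of reducing resource requirements, the sketch below performs global magnitude pruning on made-up layers; the sparsity target and shapes are assumptions, and any accuracy impact would have to be measured on the actual model.

import numpy as np

# Global magnitude pruning: zero out the smallest weights so the model can be stored
# sparsely and evaluated with fewer multiply-accumulate operations.
def magnitude_prune(weights, sparsity=0.8):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    flat = np.concatenate([w.ravel() for w in weights])
    threshold = np.quantile(np.abs(flat), sparsity)
    return [np.where(np.abs(w) < threshold, 0.0, w) for w in weights]

rng = np.random.default_rng(0)
layers = [rng.normal(size=(256, 128)), rng.normal(size=(128, 10))]   # toy layers
pruned = magnitude_prune(layers, sparsity=0.8)

kept = sum(int(np.count_nonzero(w)) for w in pruned)
total = sum(w.size for w in layers)
print(f"kept {kept}/{total} weights ({kept / total:.0%}); "
      "accuracy impact must be measured and is usually recovered by fine-tuning")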

Compact and Fast Machine Learning Accelerator for IoT Devices

Author: Hantao Huang
Publisher: Springer
ISBN: 9811333238
Category: Technology & Engineering
Languages: en
Pages: 149

Book Description
This book presents the latest techniques for machine learning-based data analytics on IoT edge devices. A comprehensive literature review of neural network compression and machine learning accelerators is presented, covering both algorithm-level and hardware-architecture-level optimization. Coverage focuses on shallow and deep neural networks, with real applications in smart buildings. The authors also discuss hardware architecture design, covering both CMOS-based computing systems and the emerging Resistive Random-Access Memory (RRAM) based systems. Detailed case studies for smart buildings, such as indoor positioning, energy management, and intrusion detection, are also presented.
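As a hedged sketch of the kind of compact, shallow model that suits IoT edge devices (a generic random-feature network with a closed-form ridge-regression readout, not necessarily the method used in the book), consider the following; the data, layer sizes, and regularization value are all made up for illustration.

import numpy as np

# A fixed random hidden layer plus a least-squares readout: training requires no
# iterative back-propagation, which keeps memory and compute needs small.
rng = np.random.default_rng(0)
n_samples, n_features, n_hidden, n_classes = 200, 8, 64, 3

X = rng.normal(size=(n_samples, n_features))            # e.g., sensor readings
y = rng.integers(0, n_classes, size=n_samples)          # e.g., occupancy labels
Y = np.eye(n_classes)[y]                                # one-hot targets

W_in = rng.normal(size=(n_features, n_hidden))          # fixed random input weights
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W_in + b)                               # hidden activations

lam = 1e-2                                              # ridge regularization
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)

pred = np.argmax(np.tanh(X @ W_in + b) @ W_out, axis=1)
print(f"training accuracy on the toy data: {np.mean(pred == y):.2f}")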