The 78th JSAP Autumn Meeting, 2017

Presentation Information

General Session (Oral Presentation)

10 Spintronics and Magnetics » 10.3 Spin Devices, Magnetic Memory and Storage Technologies

[7a-C18-10~12] 10.3 Spin Devices, Magnetic Memory and Storage Technologies

Code-share session with 10.1, 10.2, and 10.3

Thursday, September 7, 2017, 11:15 – 12:00, Room C18 (C18)

Tomohiro Koyama (The Univ. of Tokyo)

11:15 – 11:30

[7a-C18-10] [JSAP Young Scientist Award Speech] Characterizing Analogue Spin-Orbit Torque Devices for Artificial Neural Networks

William Andrew Borders1, Hisanao Akima1, Shunsuke Fukami1, Satoshi Moriya1, Shouta Kurihara1, Aleksandr Kurenkov1, Yoshihiko Horio1, Shigeo Sato1, Hideo Ohno1 (1. Tohoku Univ.)

Keywords: Spin-Orbit Torque, Artificial Neural Networks

The development of nonvolatile memories for computers with the von Neumann architecture has been one of the main directions of spintronics research over the last few decades. Meanwhile, non-von Neumann artificial intelligence (AI) technologies have attracted great attention in information processing, as they complete complex tasks that conventional computers struggle with at high speed and low power consumption. In this work, we introduce a previously reported spin-orbit torque (SOT) induced switching device [1] and show its capability to demonstrate a brain-like associative memory operation [2]. The device, based on an antiferromagnet (AFM)/ferromagnet (FM) stack that shows analogue-like resistance switching, is first improved so that it can be characterized as an artificial synapse. These improvements include widening the dynamic switching range of the anomalous Hall resistance, which represents the perpendicular component of the FM-layer magnetization, and increasing the stability of the device against external disturbances [3]. An array of 36 fabricated devices is then implemented as synapses in a demonstration system that learns to associate several 3x3 block patterns. The system determines a synaptic weight matrix that relates each block to the other blocks, produces a "recalled" vector from this matrix, and compares it with a "memorized" vector stored in computer memory. If the "recalled" and "memorized" vectors differ, an iterative learning process [4] is carried out in which the synaptic weights of the devices are adjusted in an analogue manner. To test the system's learning ability, one block in a pattern is 'flipped' and the direction cosine, i.e., the agreement between the recalled and memorized vectors (1 being complete agreement), is evaluated. Over 100 tests, the neural network 'recovered' from a direction cosine of 0.601 before learning to 0.852 after learning, demonstrating the capability of the improved SOT device, as a synapse, to learn patterns for associative memory [2].
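The recall and evaluation procedure described above can be sketched in a few lines of code. The following is a minimal, illustrative Hopfield-style example, not the authors' implementation: the three memorized patterns, the 16 analogue weight levels, and the one-shot Hebbian storage rule are assumptions made for brevity, whereas the actual system programs the 36 SOT-device synapses through the iterative learning procedure of Refs. [2, 4]. It does, however, show the correspondence between a 3x3 block pattern (9 neurons), a symmetric zero-diagonal weight matrix (36 unique weights), and the direction-cosine figure of merit.

```python
import numpy as np

N = 9          # a 3x3 block pattern maps to 9 neurons (+1 / -1 blocks)
LEVELS = 16    # hypothetical number of programmable analogue resistance levels

def quantize(W, levels=LEVELS):
    """Snap continuous weights onto evenly spaced analogue levels,
    mimicking the finite resolution of an analogue SOT synapse."""
    w_max = float(np.max(np.abs(W))) or 1.0
    step = 2.0 * w_max / (levels - 1)
    return np.round(W / step) * step

def recall(W, x, steps=5):
    """Synchronous Hopfield-type recall from the synaptic weight matrix."""
    for _ in range(steps):
        x = np.sign(W @ x + 1e-12)   # small offset avoids sign(0)
    return x

def direction_cosine(a, b):
    """Agreement between recalled and memorized vectors (1 = identical)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Three hypothetical memorized patterns: top row, left column, main diagonal.
memorized = np.array([
    [ 1,  1,  1,  -1, -1, -1,  -1, -1, -1],
    [ 1, -1, -1,   1, -1, -1,   1, -1, -1],
    [ 1, -1, -1,  -1,  1, -1,  -1, -1,  1],
], dtype=float)

# Store the patterns with a Hebbian outer-product rule; zeroing the diagonal
# leaves 36 unique symmetric weights, matching the 36-device array.
W = sum(np.outer(m, m) for m in memorized)
np.fill_diagonal(W, 0.0)
W = quantize(W)

# Test: flip one block of the first pattern and compare direction cosines.
corrupted = memorized[0].copy()
corrupted[0] *= -1                      # one block 'flipped'
print("before recall:", direction_cosine(corrupted, memorized[0]))
print("after recall: ", direction_cosine(recall(W, corrupted), memorized[0]))
```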
A portion of this work was supported by the R&D Project for ICT Key Technology of MEXT, ImPACT Program of CSTI, JST-OPERA, and JSPS KAKENHI 17H06093.
[1] S. Fukami, C. Zhang, S. DuttaGupta, A. Kurenkov, and H. Ohno, Nature Mater. 15, 535 (2016).
[2] W. A. Borders et al., Appl. Phys. Express 10, 013007 (2017).
[3] W. A. Borders et al., IEEE Trans. Magn., doi: 10.1109/TMAG.2017.2703817 (2017).
[4] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski, Cognitive Sci. 9, 147 (1985).