Japan Geoscience Union Meeting 2021

Presentation information

[J] Poster presentation

Session code M (Multidisciplinary and Interdisciplinary) » M-IS Intersection

[M-IS17] Interface- and nano-phenomena on crystal growth and dissolution

June 5, 2021 (Sat) 17:15 – 18:30 Ch.21

Conveners: Yuki Kimura (Institute of Low Temperature Science, Hokkaido University), Hitoshi Miura (Graduate School of Science, Nagoya City University), Hisao Satoh (Disposal Business Division, Japan Nuclear Fuel Limited)

17:15 – 18:30

[MIS17-P05] Improvement of transmission electron microscope images using machine learning

*Hiroyasu Katsuno1, Shizuka Hirakawa1, Tomoya Yamazaki1, Ichigaku Takigawa2, Yuki Kimura1 (1. Institute of Low Temperature Science, Hokkaido University, 2. RIKEN Center for Advanced Intelligence Project)

Keywords: machine learning, transmission electron microscopy, neural network

Transmission electron microscopy (TEM) is a powerful tool in materials science that provides structural information by visualizing specimens at the atomic level. To obtain clear images, both hardware and software have been continuously improved. Despite these advances, applications remain limited by electron-beam damage and by time resolution. One important technique in high-resolution TEM imaging is the dictionary-learning method known as sparse coding [A. Stevens et al., Microscopy 63, 41 (2014)], which produces a denoised image from a noisy one by representing it as a linear combination of basis elements. Sparse coding has been applied successfully to scanning TEM imaging and electron holography [S. Anada et al., Ultramicroscopy 206, 112818 (2019)].

Recently, image improvement by machine learning has evolved remarkably. To support low-light image processing, a dataset consisting of short-exposure low-light images and corresponding long-exposure reference images was introduced [C. Chen et al., arXiv:1805.01934 (2018)]. In this study, we apply this idea to TEM imaging and demonstrate the improvement of TEM images with a convolutional neural network (CNN) model.
For training the CNN, we prepared a dataset of TEM image pairs, each consisting of a low-exposure image and a corresponding long-exposure reference image. The exposure times were about 1–10 ms for the low-exposure images and 1 s for the long-exposure images. The images were acquired with a CMOS camera (Flash, EM-Z15327TCMOS; JEOL Ltd.). Before training, the dataset was preprocessed by adjusting the positions and equalizing/rescaling the brightness of each image. Our network architecture is a U-Net with a ResNet encoder. During training, we minimize the L1 loss between the image predicted from the low-exposure image and the long-exposure reference image. With about 100 training images, noise is removed and edges become clear in the verification images. We will present the conditions of our model in detail, together with examples for several materials.
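The preprocessing of each image pair (position adjustment and brightness equalization/rescaling) could be sketched as follows. The specific choices here — integer-pixel alignment by FFT phase correlation and linear mean/std matching — are our assumptions about a reasonable implementation, not necessarily the authors' exact procedure.

```python
# Sketch of pair preprocessing: align the low-exposure frame to the
# long-exposure reference (assumed: FFT phase correlation, integer shifts)
# and rescale its brightness to match the reference (assumed: mean/std).
import numpy as np

def align_by_phase_correlation(ref, img):
    """Return img circularly shifted to best match ref (integer pixels)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(img)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12      # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return np.roll(img, shift=(dy, dx), axis=(0, 1))

def match_brightness(ref, img):
    """Linearly rescale img so its mean and std match ref."""
    img = (img - img.mean()) / (img.std() + 1e-12)
    return img * ref.std() + ref.mean()

# Tiny demo: the "low-exposure" frame is a shifted, dimmed copy of the
# reference, mimicking specimen drift and reduced dose between exposures.
rng = np.random.default_rng(1)
ref = rng.random((32, 32))
low = 0.2 * np.roll(ref, shift=(3, -5), axis=(0, 1)) + 0.05

aligned = align_by_phase_correlation(ref, low)
preprocessed = match_brightness(ref, aligned)
```

After this step the pair is pixel-registered and on a common intensity scale, so an L1 loss between the CNN prediction and the reference penalizes noise rather than misalignment or exposure differences.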