10:45 AM - 12:15 PM
[MIS14-P02] Development of in-situ imaging method using machine learning on liquid-cell transmission electron microscopy
Keywords: Machine Learning, LC-TEM, in-situ observation
Our machine learning model is built on the U-Net architecture with a residual neural network (ResNet) encoder/decoder. The U-Net is a convolutional neural network originally developed for image segmentation. The ResNet is a standard encoder for segmentation whose skip connections jump over some layers, which is effective for avoiding the vanishing-gradient problem. To train the model, we prepare a dataset containing images of gold nanoparticles acquired in a vacuum and in a liquid. Our transmission electron microscope is equipped with a field-emission gun (JEM-2100F, JEOL, Tokyo) operated at an acceleration voltage of 200 kV and a CMOS camera (OneView IS, Gatan, Inc., Pleasanton, CA, USA). The typical magnification is 20,000x. All images were acquired with drift correction using the acquisition software (Digital Micrograph, Gatan, Inc., CA, USA). First, images are acquired in a vacuum with a resolution of 4,096 x 4,096 pixels. Next, after water is introduced into the cell, images are acquired in the liquid at the same resolution. In most cases, a single image taken in the vacuum never matches a single image taken in the liquid exactly. We therefore build the dataset by cropping, from the images taken in the liquid, the locations corresponding to the images taken in the vacuum. The dataset contains about 300 image pairs.
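As one concrete reading of this architecture, the sketch below assembles a U-Net with a ResNet encoder in PyTorch. It assumes the segmentation_models_pytorch package; the specific encoder depth (resnet34), the single grayscale input/output channel, and the 512 x 512 crop size are illustrative assumptions, not details taken from the abstract.

import torch
import segmentation_models_pytorch as smp

# U-Net with a ResNet encoder: the ResNet skip connections help avoid
# vanishing gradients, and the U-Net decoder restores full resolution.
model = smp.Unet(
    encoder_name="resnet34",   # ResNet encoder (depth chosen for illustration)
    encoder_weights=None,      # train from scratch on the paired TEM crops
    in_channels=1,             # grayscale crop acquired in the liquid
    classes=1,                 # grayscale target: the matching vacuum crop
)

# A liquid-cell crop goes in, a "vacuum-like" restoration comes out.
liquid_crop = torch.randn(1, 1, 512, 512)
vacuum_like = model(liquid_crop)
print(vacuum_like.shape)       # torch.Size([1, 1, 512, 512])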
The model parameters are tuned to convert images acquired in the liquid to those acquired in the vacuum using the L1 loss function and the Adam optimizer. After training of our machine learning model, we obtain the model parameters. For a qualitative evaluation of the quality of the output images, the output images are evaluated with the peak signal to noise ratio (PSNR) and the structural similarity (SSIM). The PSNR indicates the degree of agreement of two images in pixels. The SSIM is used for measuring the similarity between two images. The PSNR of images acquired in the vacuum and in the liquid is 13.58. The PSNR of the images acquired in the vacuum and the output images from our model is 21.96. The SSIM is also improved from 0.25 to 0.63. In this talk, we will discuss the details of the improved images, and the relation between the output images and the model.
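A minimal sketch of this training and evaluation loop is given below, assuming the same segmentation_models_pytorch model as above and the PSNR/SSIM implementations from scikit-image. The synthetic image pairs, batch size, learning rate, and epoch count are placeholders for illustration; only the L1 loss, the Adam optimizer, and the two metrics follow the text.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
import segmentation_models_pytorch as smp
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

model = smp.Unet(encoder_name="resnet34", encoder_weights=None, in_channels=1, classes=1)
criterion = nn.L1Loss()                                    # pixel-wise L1 loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate is a placeholder

# Stand-in for the ~300 paired liquid/vacuum crops (random tensors for illustration).
liquid = torch.rand(8, 1, 256, 256)
vacuum = torch.rand(8, 1, 256, 256)
loader = DataLoader(TensorDataset(liquid, vacuum), batch_size=4, shuffle=True)

for epoch in range(2):                                     # a few epochs for the sketch
    for liquid_batch, vacuum_batch in loader:
        optimizer.zero_grad()
        restored = model(liquid_batch)                     # "vacuum-like" prediction
        loss = criterion(restored, vacuum_batch)           # L1 distance to the vacuum crop
        loss.backward()
        optimizer.step()

# Quantitative scores of one restored image against its vacuum reference.
with torch.no_grad():
    restored = model(liquid[:1]).squeeze().numpy()
reference = vacuum[0, 0].numpy()
print("PSNR:", peak_signal_noise_ratio(reference, restored, data_range=1.0))
print("SSIM:", structural_similarity(reference, restored, data_range=1.0))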