17:15 〜 18:45
[MIS18-P06] Validation of a machine-learning-based real-time image processing method for in-situ liquid-cell transmission electron microscopy images
Keywords: transmission electron microscopy, machine learning, in-situ observation, liquid cell
Liquid-cell transmission electron microscopy (LC-TEM) is a powerful tool for studying nano- and micro-scale phenomena and their dynamics in liquids. Despite the great progress in TEM, some problems are unavoidable, such as blurred images and unexpected reactions caused by radiolysis. To suppress unexpected reactions, the sample in liquid must be observed with a low electron dose. However, low-dose observation is difficult because the images become unclear owing to the interaction of the electrons with the liquid. We have developed a new machine-learning (ML) model that improves images captured by LC-TEM. Because the time required for image improvement by the deep-learning model is very short, our method is expected to be useful for in-situ observation.
Our transmission electron microscope (JEM-2100F, JEOL, Tokyo, Japan) was equipped with a field-emission gun operated at an acceleration voltage of 200 kV and a CMOS camera (OneView IS, Gatan, Inc., Pleasanton, CA, USA). The liquid cell consisted of a pair of silicon chips with an amorphous silicon nitride membrane of 50 nm thickness as the observation window. The LC-TEM holder (Poseidon, Protochips, Morrisville, NC, USA) is equipped with liquid injection ports, which were left open during our operation. All images were acquired with drift correction using the built-in function of the acquisition software (Digital Micrograph, Gatan, Inc.). To train the ML model, which was constructed with the U-Net architecture and a residual-neural-network encoder/decoder, we prepared a dataset. We chose Au nanoparticles as the sample and water as the liquid. Since Au and water do not react, images taken without the liquid can be used as ground-truth images and those taken with the liquid as noisy images. The total electron dose of the former was about 10³–10⁵ e⁻/nm² and that of the latter was 1–10³ e⁻/nm². In an ordinary setup, the sample would flow away during the injection of the liquid; to avoid this problem, the sample was located outside the membrane. In addition, a single image taken in vacuum never exactly matches a single image taken in the liquid because of a slight difference in the membrane bulge. Our dataset was therefore made by clipping, from the images taken in the liquid, the areas corresponding to the images taken without the liquid. The dataset contains more than 1000 image pairs.
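The abstract does not specify the exact network configuration; the following is a minimal PyTorch sketch of a U-Net-style denoiser with residual blocks, in which the depth, channel widths, and layer counts are illustrative assumptions rather than the authors' actual design.

    import torch
    import torch.nn as nn

    class ResBlock(nn.Module):
        def __init__(self, ch):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1),
            )

        def forward(self, x):
            return torch.relu(x + self.body(x))  # residual connection around two conv layers

    class TinyUNet(nn.Module):
        def __init__(self, ch=32):
            super().__init__()
            self.enc1 = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), ResBlock(ch))
            self.down = nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1)   # downsample by 2
            self.enc2 = ResBlock(2 * ch)
            self.up = nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)       # upsample by 2
            self.dec1 = nn.Sequential(ResBlock(2 * ch), nn.Conv2d(2 * ch, 1, 3, padding=1))

        def forward(self, x):                       # x: (batch, 1, H, W) grayscale tile
            e1 = self.enc1(x)                       # full-resolution features
            e2 = self.enc2(self.down(e1))           # half-resolution features
            u = self.up(e2)
            return self.dec1(torch.cat([u, e1], dim=1))  # U-Net skip connection

The encoder/decoder maps a noisy in-liquid tile to a denoised tile of the same size, so the paired vacuum image can be used directly as the training target.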
The model parameters were tuned to convert images acquired with the liquid into those acquired without the liquid, using the L1 loss function and the Adam optimizer. The PSNR between the images acquired without the liquid and the output images of our model was about 29 dB, and the SSIM was also improved, to 0.85. We incorporated our machine-learning model into Digital Micrograph and confirmed that the refinement was completed within a few tens of milliseconds. Therefore, even at a low electron dose, it is now possible to carry out observation while viewing improved images. In this talk, we will discuss some examples of in-situ observation.
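A minimal sketch of the corresponding training loop under the same assumptions is given below: paired (in-liquid, without-liquid) tiles, L1 loss, and the Adam optimizer, as stated in the abstract. The data loader, learning rate, and epoch count are placeholders, not values reported by the authors.

    import torch
    import torch.nn.functional as F

    def psnr(pred, target, data_range=1.0):
        # Peak signal-to-noise ratio for image tensors scaled to [0, data_range].
        mse = F.mse_loss(pred, target)
        return 10.0 * torch.log10(data_range ** 2 / mse)

    def train(model, loader, epochs=100, lr=1e-4, device="cpu"):
        model.to(device)
        opt = torch.optim.Adam(model.parameters(), lr=lr)   # Adam optimizer, as in the abstract
        for _ in range(epochs):
            for noisy, clean in loader:      # (in-liquid tile, corresponding vacuum tile)
                noisy, clean = noisy.to(device), clean.to(device)
                loss = F.l1_loss(model(noisy), clean)        # L1 loss, as in the abstract
                opt.zero_grad()
                loss.backward()
                opt.step()
        return model

After training, psnr(model(noisy), clean) on held-out pairs gives the figure of merit quoted above; SSIM would be computed with a separate implementation (e.g. from scikit-image), which is omitted here for brevity.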