13:45 〜 15:15
[SEM14-P02] A study of a multiple-parameter estimation method for sparse magnetic inversion analysis
Keywords: parameter search, magnetic inversion
When determining the subsurface structure from total-field magnetic data observed above the ground, we have to solve an ill-posed problem, because the number of unknown parameters is larger than the number of data. For this reason, it is common practice to constrain the solution when performing the inversion, and the properties of the obtained solution vary greatly depending on the constraint conditions. When we use the smoothing constraint (L2 penalty), which has been commonly used in previous studies, we usually obtain an unfocused solution that blurs the actual structure, and it has been pointed out that such a structure is difficult to interpret. On the other hand, sparse regularization, represented by the Lasso (Tibshirani, 1995), has recently attracted attention and is actively used in magnetic inversion analysis. This method imposes a penalty on the L1 norm (the sum of the absolute values of the components) of the solution vector, and it is known to yield sparse solutions.
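For reference, the two regularization schemes contrasted above can be written generically, in our own notation (d: observed data vector, G: forward kernel matrix, m: model vector of magnetizations), as

\[ \min_{m} \|d - Gm\|_2^2 + \lambda \|m\|_2^2 \quad \text{(smoothing / L2 penalty)}, \qquad \min_{m} \|d - Gm\|_2^2 + \lambda \|m\|_1 \quad \text{(Lasso / L1 penalty)}, \]

where λ is the regularization parameter discussed next.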
However, in sparse regularization, the character of the solution vector is strongly influenced by the regularization parameter λ, and how to determine this parameter is an essential problem for the inversion scheme. Conventionally, the optimal parameter has been estimated by cross validation, the L-curve method, or information criteria.
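As an illustration of conventional single-parameter selection, the following minimal cross-validation sketch uses a generic Lasso solver from scikit-learn as a stand-in (this is not the magnetic inversion code used in this study; the kernel G, model m_true, and data d are synthetic placeholders):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic stand-in for the linear forward problem d = G m + noise
# (G: forward kernel, m: sparse magnetization model, d: observed anomalies).
rng = np.random.default_rng(0)
n_data, n_cells = 100, 400                      # fewer data than unknowns
G = rng.normal(size=(n_data, n_cells))
m_true = np.zeros(n_cells)
m_true[rng.choice(n_cells, 10, replace=False)] = 1.0
d = G @ m_true + 0.05 * rng.normal(size=n_data)

# K-fold cross validation over a grid of regularization parameters
# (scikit-learn calls lambda "alpha"); the value minimizing the CV error is kept.
lambdas = np.logspace(-3, 0, 30)
model = LassoCV(alphas=lambdas, cv=5, fit_intercept=False).fit(G, d)
print("selected lambda:", model.alpha_)
```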
However, to impose more complex features on the solution, the penalty function has to consist of several penalty terms, and in that case we have to estimate multiple hyperparameters. The penalty terms used in this study are the L1-L2 penalty (Utsugi, 2019), which combines the L1 and L2 norms of the solution vector using a distribution factor α, and the L1-TV1 penalty (Utsugi, 2022), which combines the L1 and Total Variation norms of the solution vector using a distribution factor γ. In these cases, it is desirable to estimate at least two hyperparameters simultaneously: the regularization parameter λ and the distribution factor α or γ. Applying the conventional methods to the estimation of multiple hyperparameters involves some difficulties in terms of computational cost, and we believe it may be necessary to construct a new scheme to estimate the optimal set of hyperparameters.
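One illustrative way to write such combined penalties (a schematic form only; the exact definitions are given in Utsugi, 2019 and Utsugi, 2022) is

\[ P_{\mathrm{L1\text{-}L2}}(m) = \lambda \{\, \alpha \|m\|_1 + (1-\alpha) \|m\|_2^2 \,\}, \qquad P_{\mathrm{L1\text{-}TV}}(m) = \lambda \{\, \gamma \|m\|_1 + (1-\gamma) \|m\|_{\mathrm{TV}} \,\}, \]

so that λ sets the overall strength of the regularization while α or γ distributes it between the two norms.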
Our experience with real data analysis has shown that the hyperparameter estimation is affected by (1) the arrangement of the observation points, (2) the grid partitioning, and (3) the noise level of the observed data. We are now trying to construct a calculation scheme that estimates the multiple hyperparameters, in which the regularization parameters are estimated in advance through a synthetic test designed in accordance with (1) to (3). In this presentation, we will show the progress of this study.
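A minimal sketch of the idea behind such a synthetic test, assuming a generic elastic-net solver as a stand-in for the L1-L2 inversion (the function name, arguments, and grid-search logic here are ours and hypothetical, not the actual scheme):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def select_hyperparameters(G, m_synth, noise_std, lambdas, alphas, seed=0):
    """Grid search over (lambda, alpha) on a synthetic test.

    G, m_synth, and noise_std are assumed to be built so as to reproduce
    (1) the observation geometry, (2) the grid partitioning, and
    (3) the noise level of the actual survey.  Schematic sketch only.
    """
    rng = np.random.default_rng(seed)
    d_synth = G @ m_synth + noise_std * rng.normal(size=G.shape[0])
    best_pair, best_err = None, np.inf
    for lam in lambdas:
        for a in alphas:
            # scikit-learn's ElasticNet mixes L1 and L2 penalties via
            # l1_ratio, which plays a role analogous to the distribution factor.
            est = ElasticNet(alpha=lam, l1_ratio=a, fit_intercept=False,
                             max_iter=5000).fit(G, d_synth)
            err = np.linalg.norm(est.coef_ - m_synth)  # model recovery error
            if err < best_err:
                best_pair, best_err = (lam, a), err
    return best_pair
```

Because the true synthetic model is known, the recovery error can be evaluated directly, and the selected (λ, α) pair is then carried over to the inversion of the real data.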