16:15 〜 17:15
[SL] PRACTICAL MULTISCALE FRAMEWORK
The talk presents a practical multiscale framework with the following salient features: (i) computational efficiency, (ii) absence of the usual scale-separation assumption, and (iii) reliance on only limited experimental data for its calibration. The multiscale software developed from this framework, known as the Multiscale Designer, has been deployed by hundreds of industrial users around the globe, including, but not limited to: the aerospace industry (Lockheed-Martin, Northrop-Grumman, Boeing, Rolls-Royce, Airbus, General Electric, Blue Origin, etc.) for durability, life prediction and environmental degradation of ceramic- and polymer-based composite components; the automotive industry (General Motors, Ford, Chrysler, etc.) for crash prediction of composite cars; the electronics industry (HP, Motorola, etc.); and other industries such as healthcare, consumer goods and civil engineering, to name a few.

The formulation is endowed with fine-scale details, introduces no scale separation, makes no assumption about the infinitesimality of the fine-scale features, requires no higher-order continuity, introduces no new degrees of freedom, is free of higher-order boundary conditions, and employs a model hierarchy that permits reliance on a limited experimental database. The computational efficiency of the formulation stems from a residual-free formulation, which eliminates the bottleneck of satisfying fine-scale equilibrium equations, and a hybrid impotent-incompatible eigenstrain formulation, which alleviates locking arising from a lower-order approximation of eigenstrains.

The formulation employs models of varying fidelity. A high-fidelity model (HFM) is first calibrated to limited experimental data. Subsequently, the HFM is employed to train a low-fidelity model (LFM). Finally, the calibrated LFM is utilized for component analysis. The rationale for utilizing the HFM in the initial stage stems from the fact that the constitutive laws of the individual microphases in the HFM are rather simple, so the number of material parameters that need to be identified is smaller than in the LFM. The added complexity of the material models in the LFM is necessary to compensate for the simplified kinematic assumptions made in the LFM and for the smearing of the discrete defect structure. A first- or higher-order computational homogenization model, which resolves microstructural details including the structure of defects such as voids, dry spots and residual stresses resulting from the manufacturing process, is employed as the HFM, whereas a reduced-order approach is employed as the LFM.

The talk includes theory and applications in the aerospace, automotive, healthcare and civil engineering industries.
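As a rough illustration of the three-stage hierarchy described above (HFM calibration, LFM training from the HFM, then component analysis with the LFM), the following minimal Python sketch shows the data flow. All model functions, parameters and data in it are hypothetical placeholders chosen for the illustration; they are not the Multiscale Designer API or its actual constitutive models.

import numpy as np
from scipy.optimize import least_squares

def calibrate(model, params0, loads, targets):
    # Identify material parameters by least-squares fitting of the model's
    # predicted responses to the target responses.
    residual = lambda p: model(loads, p) - targets
    return least_squares(residual, params0).x

# Placeholder "high-fidelity" microphase response: few, simple parameters.
def hfm_response(loads, params):
    stiffness, softening = params
    return stiffness * loads / (1.0 + softening * loads)

# Placeholder "low-fidelity" reduced-order response: more phenomenological
# parameters, compensating for the simplified kinematics.
def lfm_response(loads, params):
    a, b, c = params
    return a * loads + b * loads**2 + c * loads**3

# Stage 1: calibrate the HFM to limited (sparse) experimental data.
exp_loads = np.array([0.1, 0.3, 0.5])
exp_resp  = np.array([10.0, 27.0, 41.0])
hfm_params = calibrate(hfm_response, [100.0, 0.1], exp_loads, exp_resp)

# Stage 2: use the calibrated HFM to train the LFM over a richer load range.
train_loads = np.linspace(0.05, 0.8, 50)
train_resp  = hfm_response(train_loads, hfm_params)
lfm_params  = calibrate(lfm_response, [1.0, 0.0, 0.0], train_loads, train_resp)

# Stage 3: the calibrated LFM is cheap enough for component-level analysis
# (evaluated here at a new load level as a stand-in).
print(lfm_response(np.array([0.6]), lfm_params))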