[3-C-1-03] Regulations and International Standardization to Realize "Trusted AI"
Trusted AI, Regulation of AI, Standardization of AI, Artificial Intelligence
With the recent rise of AI businesses across all industries worldwide, AI is increasingly used to make decisions that matter to society. However, using AI incorrectly, or without understanding its characteristics, can harm society and raises concerns about human rights and social values. Wisdom and knowledge are needed to utilize these new technologies properly, and many countries have now developed principles and guidelines for the use of AI. In particular, the European Union (EU), which takes a hard-law approach, announced the AI Act in April 2021 and plans to bring it into application in 2025. Companies that violate the EU AI Act will be subject to sanctions, including substantial fines, and since many countries and industries have indicated their willingness to accept the Act, developing concrete processes and measures to comply with it has become an urgent task. This article summarizes current regulatory and international standardization activities aimed at realizing trusted AI. It focuses on trends in and outlines of the international standards (harmonized standards) that underpin AI regulations, especially the EU AI Act, and on the details of their implementation. It also provides an overview of the international standards for realizing trusted AI, including standards related to certification and accreditation and their background, AI ethics, and the relationship between AI and humans.