The Thai government and regulatory bodies have taken a proactive approach to overseeing artificial intelligence technologies and systems. In 2022, the Digital Economy and Society (DES) Ministry spearheaded the approval of a new set of mandatory standards governing AI development and use.
These standards categorize AI into three tiers: low-, medium-, and high-risk. AI applications are designated high-risk if they could significantly impact people’s rights, opportunities, or access to critical services such as healthcare and education. Examples include AI used in job candidate screening, credit-lending decisions, and medical diagnosis.
Organizations creating or deploying high-risk AI systems will undergo substantial auditing and review processes. These include algorithmic bias testing, evaluations of data quality and security protections, assessments of transparency and explainability, and more. The goal is to minimize the dangers of inaccurate, discriminatory or otherwise faulty AI.
Critics argue the regulations take a heavy-handed approach compared to countries where AI governance is more voluntary and collaborative with industry. But supporters believe clear oversight and accountability mechanisms are crucial for guiding innovation in an ethical and socially beneficial direction.
Effective enforcement of the rules will require boosting institutional expertise in AI audits, technical standards, and bias detection. Funding these governance mechanisms also remains a challenge. Still, Thailand aims to strike a balance: enabling AI innovation that can deliver economic and societal gains while maintaining human oversight and cautious regulation, especially for high-impact applications.
The DES Ministry acknowledges AI governance policies will continue evolving. But by prioritizing mandatory assessments and restraints on high-risk systems, Thailand hopes to maximize AI’s potential through responsible development and deployment.