South Korea's 'World-First' AI Laws Face Criticism from Startups and Rights Groups

South Korea has launched what it describes as the world's first comprehensive set of laws specifically designed to regulate artificial intelligence, positioning itself as a potential global model for AI governance. The legislation, which took effect last week, represents a significant milestone in the country's ambition to become one of the world's three leading AI powers alongside the United States and China.

Comprehensive Framework with Controversial Provisions

The AI Basic Act establishes a detailed regulatory framework that requires companies providing AI services to implement specific measures depending on the type of artificial intelligence they deploy. For clearly artificial outputs such as cartoons or artwork, companies must add invisible digital watermarks. More significantly, for realistic deepfakes, visible labels are mandatory to alert users to the synthetic nature of the content.
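The law mandates the what rather than the how: neither the reporting here nor, apparently, the Act itself fixes a particular watermarking algorithm. Purely as an illustration of the distinction the law draws, the Python sketch below hides a machine-readable marker in the least significant bits of raw pixel bytes – one common family of invisible watermarking schemes – while a visible deepfake label would instead be rendered onto the image itself. The function names and the LSB scheme are illustrative assumptions, not anything drawn from an official standard.

```python
# Illustrative only: the AI Basic Act requires invisible watermarks for
# clearly artificial outputs, but this toy least-significant-bit (LSB)
# scheme is an assumption, not the mandated method. The mark is
# imperceptible to viewers yet recoverable by software that knows the layout.

def embed_invisible_watermark(pixels: bytes, marker: bytes) -> bytearray:
    """Write `marker`, bit by bit (MSB first), into the lowest bit of each byte."""
    bits = [(byte >> i) & 1 for byte in marker for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # replace only the lowest bit
    return out

def extract_invisible_watermark(pixels: bytes, marker_len: int) -> bytes:
    """Recover a marker of `marker_len` bytes embedded by the function above."""
    bits = [pixels[i] & 1 for i in range(marker_len * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

if __name__ == "__main__":
    fake_image = bytearray(range(256)) * 4  # stand-in for raw pixel data
    marked = embed_invisible_watermark(fake_image, b"AI-GENERATED")
    assert extract_invisible_watermark(marked, 12) == b"AI-GENERATED"
    # A visible deepfake label, by contrast, would be drawn onto the image
    # itself (e.g. a caption overlay) so that every viewer can see it.
```

Production systems would more likely use robust, standardised schemes, since a simple LSB mark is destroyed by ordinary re-encoding; but the legal distinction is the same either way – invisible marks for clearly artificial content, visible labels for realistic synthetic media.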

High-Impact AI Systems Face Stricter Requirements

The legislation introduces a category for "high-impact AI" systems, which includes technologies used for medical diagnosis, hiring processes, and loan approvals. Operators of these systems must conduct thorough risk assessments and maintain detailed documentation about how decisions are made. However, the law contains a notable exemption: if a human makes the final decision in the process, the system may fall outside this stringent regulatory category.
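To make the shape of that exemption concrete, here is a minimal, hypothetical sketch of the classification logic as described above. The Act defines the category in legal language, not code; the domain list and field names are simplifications for illustration only.

```python
# Hypothetical sketch of the "high-impact AI" classification. It encodes
# only the examples given above (medical diagnosis, hiring, loan approvals)
# and the human-in-the-loop exemption that critics call a loophole.
from dataclasses import dataclass

HIGH_IMPACT_DOMAINS = {"medical_diagnosis", "hiring", "loan_approval"}

@dataclass
class AISystem:
    domain: str
    human_makes_final_decision: bool

def is_high_impact(system: AISystem) -> bool:
    if system.domain not in HIGH_IMPACT_DOMAINS:
        return False
    # The exemption: a human taking the final decision may move an
    # otherwise high-impact system outside the stringent category.
    return not system.human_makes_final_decision

# Two loan-approval systems, identical except for the human sign-off:
assert is_high_impact(AISystem("loan_approval", human_makes_final_decision=False))
assert not is_high_impact(AISystem("loan_approval", human_makes_final_decision=True))
```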

For extremely powerful AI models, the legislation requires safety reports, though government officials acknowledge that the threshold is set so high that no model worldwide currently meets it. Companies that violate the new rules face fines of up to 30 million won (approximately £15,000), though authorities have promised a grace period of at least one year before penalties are imposed.

Industry Concerns and Compliance Challenges

The new regulations have sparked significant concern within South Korea's tech startup community. A December survey by the Startup Alliance found that 98% of AI startups were not ready to comply with the new requirements. Lim Jung-wook, co-head of the alliance, described widespread frustration in the industry, noting that many companies resent being the first to face such comprehensive regulation.

Companies must determine for themselves whether their systems qualify as high-impact AI, a process critics say is lengthy and creates substantial uncertainty. There are also concerns about competitive imbalance: every Korean company is regulated regardless of size, while foreign firms must comply only if they meet certain thresholds – a bar that captures giants such as Google and OpenAI but leaves smaller overseas competitors outside the rules.

Civil Society Groups Argue Protections Are Insufficient

While tech startups complain the regulations go too far, civil society groups argue they don't go nearly far enough to protect citizens from potential AI harms. South Korea faces particular challenges with AI-generated content: its citizens accounted for 53% of deepfake pornography victims worldwide, according to a 2023 report by Security Hero, a US-based identity protection firm.

Four prominent organisations, including Minbyun (a collective of human rights lawyers), issued a joint statement the day after the law was implemented, arguing that it contains almost no provisions to protect citizens from AI risks. The groups noted that while the law stipulates protection for "users," those users are defined as hospitals, financial companies, and public institutions that use AI systems – not the individuals who might be affected by AI decisions.

Regulatory Blind Spots and Enforcement Challenges

The country's human rights commission has criticised the enforcement decree for lacking clear definitions of high-impact AI, noting that those most likely to suffer rights violations remain in regulatory blind spots. Civil society groups have also pointed out that the law establishes no prohibited AI systems, and that exemptions for "human involvement" create significant loopholes that could undermine protections.

In response to these concerns, the Ministry of Science and ICT stated that it expects the law to "remove legal uncertainty" and build "a healthy and safe domestic AI ecosystem," adding that it would continue to clarify the rules through revised guidelines as implementation progresses.

A Distinct Approach to AI Governance

Experts note that South Korea has deliberately chosen a different regulatory path from other major jurisdictions. The European Union follows a strict risk-based model; the United States and United Kingdom take largely sector-specific, market-driven approaches; and China combines state-led industrial policy with detailed service-specific regulation. South Korea, by contrast, has opted for a more flexible, principles-based framework.

Melissa Hyesun Yoon, a law professor at Hanyang University who specialises in AI governance, describes this approach as centred on "trust-based promotion and regulation." She suggests that "Korea's framework will serve as a useful reference point in global AI governance discussions" as other nations develop their own regulatory responses to rapidly advancing artificial intelligence technologies.

The legislation's origins predate recent AI controversies: the first AI-related bill was submitted to parliament in July 2020. It stalled repeatedly, partly over provisions that critics said prioritised industry interests over citizen protection. Government officials now maintain that the law is 80-90% focused on promoting industry rather than restricting it, reflecting South Korea's broader ambition to establish itself as a global AI leader while navigating the complex challenges of regulating this transformative technology.