SOF 2026 Update Brief: Global Digital Compact
Introduction
Artificial intelligence (AI) is evolving at a rapid pace, transitioning from a specialized tool to a fundamental part of global infrastructure. As its influence grows, governments and technology experts are increasingly focused on the ethical challenges AI raises, particularly algorithmic bias and the unintentional spread of misinformation. Far from being a passive technology, AI can actively shape how the public perceives reality, making its regulation a critical priority for national safety and information integrity.
Government Attempts to Control Ideological Bias in AI
The rapid rise of large language models (LLMs) in 2025 brought the inadequacy of AI legislation around the world into sharp focus. Fears have spread that AI outputs will reflect the political views of their developers, ignore minority perspectives, or reinforce particular points of view.
Some governments have attempted to pass laws to limit the impact of LLMs on public viewpoints and health. The United States spent much of the fall debating an executive order that would prevent AI from violating established national values, while India extended its digital content rules to artificial intelligence with the goal of preventing the creation of harmful or misleading content. In both cases, the precise definition of harmful content remains vague, but the measures clearly signal government oversight of private companies. The European Union also began enforcement of the EU AI Act at the end of 2025, with the goal of further regulating harmful uses of AI.
Laws surrounding AI have been in progress for years, but 2025 and early 2026 have seen a major pivot toward process-based regulation. Rather than policing only the final output, such as a deepfake or a chat response, new laws scrutinize the how and where of AI development. This shift moves the focus to algorithmic transparency, the origin of training data, and the explainability of how a system reaches its conclusions. Controlling the bias behind AI has proven difficult for governments: regulating content is easier than regulating an algorithm built by a private corporation. However, the EU's transparency requirements for algorithms open a potential avenue for greater insight into how LLMs generate content. The role governments will ultimately play in controlling AI remains unclear, but AI bias moved to the forefront of political agendas in 2025.
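To make the idea of process-based transparency concrete, the sketch below shows one hypothetical form such a disclosure could take: a machine-readable record describing a model's training-data origins and a plain-language explainability note. The field names, categories, and structure here are illustrative assumptions for discussion, not the EU AI Act's actual documentation schema or any regulator's real format.

```python
# Hypothetical sketch of a machine-readable "transparency record" for an AI
# system, illustrating the kind of process-level disclosure discussed above.
# All field names and values are illustrative assumptions, not an official
# regulatory schema.
from dataclasses import asdict, dataclass, field
import json

@dataclass
class DataSource:
    name: str     # label for a training corpus
    origin: str   # where the data came from (e.g., web crawl, licensed archive)
    license: str  # terms under which the data may be used for training

@dataclass
class TransparencyRecord:
    model_name: str
    developer: str
    intended_use: str
    training_sources: list[DataSource] = field(default_factory=list)
    explainability_note: str = ""  # plain-language summary of how outputs arise

    def to_json(self) -> str:
        """Serialize the record so an auditor or regulator could ingest it."""
        return json.dumps(asdict(self), indent=2)

# Example usage with placeholder values.
record = TransparencyRecord(
    model_name="example-llm-1",
    developer="Example Corp",
    intended_use="general-purpose text generation",
    training_sources=[
        DataSource("public-web-crawl-2024", "web crawl", "mixed/unverified"),
    ],
    explainability_note=(
        "Outputs are generated token by token from patterns learned during "
        "training; no single document determines any given response."
    ),
)
print(record.to_json())
```

A standardized record along these lines is one way an oversight body could audit training-data provenance and intended use without needing direct access to a company's model weights or code.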
Increasing Use of AI to Spread Propaganda
AI-generated propaganda has been an early and consistent concern, and cases at the end of 2025 escalated the stakes. Whereas earlier propaganda campaigns appeared targeted, late 2025 saw the emergence of “AI slop”: low-quality misinformation spread in high volumes. Although such content is often easier to identify, its pace and rapid spread can overwhelm the media.
In December 2025, following gun violence at Bondi Beach in Australia, the internet was quickly swamped with shallow, unverified information. Whether images of the attackers, victim stories, or police reports, there was no shortage of material about the tragedy. News sources were quick to ask the public to wait for verified information, but the speed of social media made it nearly impossible for many outlets to establish the truth amid a stream of false claims. AI-generated content and AI-amplified errors contributed to this spread, creating realistic but misleading posts that complicated verification and fueled confusion.
The speed and scale of this spread are alarming in themselves: during emergencies and crises, factual news can be quickly drowned out. Nor is the problem limited to criminal acts. Following Hurricane Melissa in the Caribbean in October 2025, AI-generated images of destruction quickly circulated online, spreading false impressions of the damage. Historically, misinformation and propaganda have been used to gain support or political leverage. AI-driven misinformation presents a different challenge because there is frequently no clear beneficiary: individuals who share or amplify content may gain attention or followers, but no central actor orchestrates the spread. This absence of an identifiable motive makes the phenomenon particularly alarming, as the apparent goal is simply to create chaos.
Conclusion
States are taking markedly different approaches to shaping AI, and the diversity of views and priorities across the world is clear. Countries apply different definitions of what level of content control is appropriate for AI, making it essential to strike a balance between censorship and freedom of expression. As the technology develops, shared frameworks will be needed to guide AI standards, and the international community must decide the direction that AI law will take.
Bibliography
- Bhatia, Aatish. “AI’s Synthetic-Data Promise: How Artificial Intelligence Turns Real Data Into Virtual Datasets.” The New York Times. Last modified August 26, 2024. https://www.nytimes.com/interactive/2024/08/26/upshot/ai-synthetic-data.html.
- Britannica. “Artificial Intelligence (AI): Pros, Cons, Debate, Arguments.” Last modified November 19, 2025. https://www.britannica.com/procon/artificial-intelligence-AI-debate.
- Canadian Legal Information Institute. “Artificial Intelligence & Criminal Justice: Cases and Commentary.” 2024. https://www.canlii.org/en/commentary/doc/2024CanLIIDocs3035.
- Chayka, Kyle. “The Year in Slop.” The New Yorker. December 17, 2025. https://www.newyorker.com/culture/infinite-scroll/the-year-in-ai-slop.
- Conner, Adam. “President Trump’s AI National Policy Executive Order Is an Unambiguous Threat to States Beyond Just AI.” Center for American Progress. Last modified December 12, 2025. https://www.americanprogress.org/article/president-trumps-ai-national-policy-executive-order-is-an-unambiguous-threat-to-states-beyond-just-ai/.
- CTV News. “AI Chatbots Changing Online Threat Landscape as Ottawa Reviews Legislation.” Last modified September 3, 2025. https://www.ctvnews.ca/sci-tech/article/ai-chatbots-changing-online-threat-landscape-as-ottawa-reviews-legislation/.
- Durbin, Adam. “Fake Hurricane Videos Shared Online Including AI-Generated Sharks.” BBC. Last modified October 28, 2025. https://www.bbc.com/news/live/cge5qzwxgqvt.
- Kelley, Bradford J., and Andrew B. Rogers. “The Sound and Fury of Regulating AI in the Workplace.” Harvard Journal on Legislation Online. December 6, 2025. https://journals.law.harvard.edu/jol/2025/12/06/the-sound-and-fury-of-regulating-ai-in-the-workplace/.
- Matias, Yossi. “Google Research 2025: Bolder Breakthroughs, Bigger Impact.” Google Research Blog. Last modified December 18, 2025. https://research.google/blog/google-research-2025-bolder-breakthroughs-bigger-impact/.
- National Crowdfunding & Fintech Association. “EU AI Transparency Rules Take Effect Setting New Benchmark.” Last modified October 6, 2025. https://ncfacanada.org/eu-ai-transparency-rules-take-effect-setting-new-benchmark/.
- Romanishyn, Andrii. “AI-Driven Disinformation: Policy Recommendations for Democratic Resilience.” NCBI. 2025. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12351547/.
- Ronin Legal Consulting. “How Does India Regulate AI? 10 Things You Need to Know.” 2025. https://roninlegalconsulting.com/how-does-india-regulate-ai-10-things-you-need-to-know/.
- Samples, John. Why the Government Should Not Regulate Content Moderation of Social Media. Policy Analysis No. 865. Washington, DC: Cato Institute, April 9, 2019. https://www.cato.org/policy-analysis/why-government-should-not-regulate-content-moderation-social-media.
- Taylor, Josh. “Fake Minns, Altered Images and Psyop Theories: Bondi Attack Misinformation Shows AI’s Power to Confuse.” The Guardian. Last modified December 18, 2025. https://www.theguardian.com/australia-news/2025/dec/18/fake-minns-altered-images-and-psyop-theories-bondi-attack-misinformation-shows-ais-power-to-confuse-ntwnfb.