Written by Marijn Overvest | Reviewed by Sjoerd Goedhart | Fact Checked by Ruud Emonds
AI Regulations in 2024 — Blueprints For Responsible AI
- More companies and governments around the world are prioritizing AI regulation to address global concerns about transparency, ethics, and risk management.
- Clear guidelines for AI establish the importance of ethical practices, data security, and user protection.
- IBM reported that 25% of companies have adopted strategies that limit AI use to specific applications.
In 2024, AI regulations become even more crucial for a simple reason: innovation doesn’t stop. With advancements in artificial intelligence and machine learning technology, establishing guardrails and common-sense guidelines is non-negotiable.
In this article, I will discuss the key AI regulations in 2024, what they cover, and how they will shape the future of regulating AI.
AI is the latest game-changer for procurement professionals. But the question we aim to answer in this article is: How can we regulate our use of AI? Join me as we explore these five AI regulations and their importance.
Without further ado, let’s dive in!
5 AI Regulations in 2024
Around the world, governments, authorities, and organizations are looking into ways to monitor AI applications. A report by IBM showed that 25% of companies have strategies to limit AI’s use to specific applications.
In procurement, these regulations protect every level of the function from data breaches and potentially unethical behavior. Let’s dive into the AI regulations set to shape 2024.
1. European Union — The Digital Services Act
The Digital Services Act (DSA) entered into force in late 2022 and focuses on social media, other user-facing online services, and online advertising.
The main goal of the Digital Services Act is to enforce transparency and to prevent platforms from being left to regulate themselves.
The Digital Services Act covers online services, marketplaces, and platforms that operate in the EU, regardless of sector. In procurement, the DSA can help ensure AI is not misused when handling company data, assessing supplier information, or running e-sourcing activities.
The EU’s long-term initiative is to let companies test and develop new technologies while protecting users and the public from unsafe use.
2. United States — The Algorithmic Accountability Act
The Algorithmic Accountability Act (AAA) was first introduced in 2019 and reintroduced in both houses of Congress in 2022. If enacted, the AAA would be binding: companies would need to assess their automated systems for bias and overall quality.
It would cover any individual, partnership, or organization that uses an automated system. Those that manage the data of more than 1 million people would need to report to the Federal Trade Commission (FTC) in line with its guidelines.
This act would help organizations provide visibility into procurement automation as well as contract management, logistics, and more.
3. China — Deep Synthesis Provisions
China’s Deep Synthesis Provisions took effect in January 2023, reflecting the government’s aim to increase its oversight of deep synthesis technologies.
The provisions are built on four pillars:
- Data security and personal information protection
This is to safeguard sensitive data and personal information used in deep synthesis technologies.
It includes encryption measures, access control, and other protocols for securing data to prevent breaches or misuse.
- Transparency
This is essential for accountable and safe use in any AI policy, such as a ChatGPT policy in procurement. It means companies must disclose information about their deep synthesis technologies.
- Content management and labeling
Think of this as moderation and detection of hate speech or other harmful content. By requiring labeling and content moderation, companies must classify and provide visibility into their content sources.
This promotes accountability and helps users distinguish deep synthesis content from original content.
- Technical security
Measures for technical security are put in place to protect companies, clients, and external users from cybersecurity breaches.
This can involve protocols for AI and deep synthesis development, system structure, and data communications.
With the provisions applying across China, we can expect a major shift in how AI-generated content is produced and consumed by more than 1.4 billion people. The Deep Synthesis Provisions also explicitly target the regulation of deepfake content.
4. United Kingdom — Pro-innovation AI approach
The UK’s approach to AI does not include specific legislation thus far, but its government has shown dedication to the regulation of artificial intelligence.
A policy paper released jointly by the Department for Digital, Culture, Media & Sport (DCMS), the Department for Business, Energy & Industrial Strategy, and the Office for Artificial Intelligence highlights that AI regulations should be context-specific and based on impact.
Under this approach, regulators will define AI rules based on context and usage. The approach is pro-innovation and relies on risk analysis to determine where usage should be limited.
5. Australia — AI Ethics Framework
Australia aims to align the development and usage of AI with eight ethical principles. These principles will help bring safer outcomes for the public, lower risks for AI applications, and enforce ethical standards when developing AI models. The eight principles are:
- Human, social, and environmental wellbeing
- Human-centered values
- Fairness
- Privacy protection and security
- Reliability and safety
- Transparency and explainability
- Contestability
- Accountability
The Australian Government developed the AI Ethics Framework to set out principles that guide the public and businesses in using AI safely and responsibly.
The Importance of AI Regulations
Why are AI regulations passed and put into effect? Let’s look at the main reasons why regulating artificial intelligence is non-negotiable:
Transparency
Transparency should be present in all businesses, especially where AI is involved. Without it, a lack of visibility erodes trust in your brand and can ultimately damage the business itself.
Being transparent about AI means that organizations should be willing to disclose information on their AI usage, including the sources of AI-generated content and assessments of its accuracy.
Innovation
For AI technologies to move forward, there needs to be innovation. AI regulations support this by creating frameworks that guide people on how to use the technology correctly and efficiently.
Some regulations even encourage the testing of AI, which consequently leads to better research, learning, and innovation.
Risk Management
There are risks to using AI, including data sensitivity, ethics, bias, and accuracy. Adhering to AI regulations lowers the probability of these risks materializing.
Because AI regulations outline guidelines for AI applications, they improve data handling and enforce compliance, helping organizations avoid legal penalties for misuse.
An Example of AI Regulations
IBM is a global technology company with leading advancements in artificial intelligence and cloud solutions. The way IBM regulates its development and use of AI is driven by the belief that AI should augment human intelligence.
IBM states that its clients fully own personal data and that government policies about data should promote fairness and equity. Moreover, IBM’s AI policy is built on five pillars with corresponding toolkits: explainability, fairness, robustness, transparency, and privacy.
- AI Explainability 360: Focuses on transparency and easy interpretation of AI models, with algorithms that help users and developers understand how decisions are made.
- AI Fairness 360: Aims to address fairness and bias concerns, offering metrics to understand and tackle biases in AI systems (see the brief sketch after this list).
- Adversarial Robustness 360: Offers techniques for detecting and defending AI models against adversarial attacks.
- AI Factsheets 360: Generates factsheets for AI models and aims to build transparency and accountability in AI systems.
- AI Privacy 360 Toolkit: Protects privacy and data confidentiality in AI applications by offering data anonymization and secure multi-party computation.
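To make the fairness pillar more concrete, here is a minimal sketch of how IBM’s open-source AI Fairness 360 (aif360) Python toolkit can be used to measure bias in a simple dataset. The supplier-screening data, column names, and group definitions below are illustrative assumptions for this example, not an official IBM dataset or recommendation.

```python
# Minimal sketch: measuring group bias with IBM's open-source AI Fairness 360 toolkit.
# The dataframe, column names, and privileged/unprivileged groups are hypothetical
# examples invented for illustration, not taken from IBM's documentation.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy supplier-screening data: "approved" is the outcome (1 = approved, 0 = rejected),
# and "region" is the protected attribute we want to check for bias.
df = pd.DataFrame({
    "region":   [1, 1, 1, 0, 0, 0, 1, 0],
    "score":    [0.9, 0.7, 0.8, 0.6, 0.4, 0.5, 0.6, 0.7],
    "approved": [1, 1, 1, 0, 0, 1, 1, 0],
})

# Wrap the dataframe in an aif360 dataset that knows which column is the label
# and which column is the protected attribute.
dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["region"],
    favorable_label=1,
    unfavorable_label=0,
)

# Compare approval rates between the privileged and unprivileged groups.
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"region": 1}],
    unprivileged_groups=[{"region": 0}],
)

# Disparate impact close to 1.0 and statistical parity difference close to 0.0
# suggest similar approval rates across groups.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

In a procurement context, metrics like these could flag whether an automated supplier-screening model approves one group of suppliers far more often than another, which is exactly the kind of bias the regulations above ask organizations to assess.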
IBM’s regulation of AI doesn’t stop at its principles; the company’s AI Ethics Board puts them into action to build an environment of responsible AI throughout the organization.
Furthermore, IBM has built partnerships to support AI ethics initiatives, including with the US Chamber of Commerce, the European Commission’s High-Level Expert Group on AI, and others.
By solidifying its stance on AI ethics, IBM has become one of the tech world’s leading players in ethical and innovative AI development.
Conclusion
To sum up, we need AI regulations to ensure new technology is created, tested, and used responsibly. Whether it’s a policy or law, AI regulations come into effect to set clear parameters and guidelines.
Without regulations for AI, sensitive data such as identities, insights, and other private information is at risk of misuse.
While some AI laws have yet to come into effect, organizations and governments are already taking significant steps to protect users and ensure responsible use.
These regulations exist not only for security but also to ensure that we keep driving new advancements in AI development.
Frequently asked questions
Why are AI regulations necessary?
AI regulations help companies build transparency, innovation, and firm risk management into the core of their organization. They serve as guidelines for ethical and responsible AI usage.
How do AI regulations differ across countries?
AI regulations across countries will vary depending on their laws, culture, and technological advancements. The EU focuses on transparency and user protection, while China focuses on data security and responsible content.
What is important to remember about regulating AI?
When regulating AI, it’s crucial to prioritize transparency, responsible innovation, and risk management.
About the author
My name is Marijn Overvest, and I’m the founder of Procurement Tactics. I have a deep passion for procurement, and I’ve upskilled over 200 procurement teams from all over the world. When I’m not working, I love running and cycling.