EU Releases General Purpose AI Code of Practice: Key Steps for AI Developers to Meet AI Act Requirements

According to DeepLearning.AI, the European Union has published a 'General Purpose AI Code of Practice' that outlines voluntary steps developers can take to align with the AI Act's requirements for general-purpose models. The code specifically directs developers of models considered to pose 'systemic risks' to rigorously document data sources, maintain detailed logs, and adopt transparent development practices. This initiative provides AI companies with practical guidelines to ensure compliance, reduce regulatory uncertainty, and build trustworthy AI systems for the European market. The code is expected to accelerate adoption of responsible AI frameworks in commercial AI product development, highlighting business opportunities for compliance consulting, auditing, and data governance solutions (source: DeepLearning.AI, August 2, 2025).
Analysis
From a business perspective, the General Purpose AI Code of Practice opens new market opportunities while raising strategic considerations for monetization in the AI sector. Companies developing general-purpose AI can leverage compliance as a competitive differentiator, attracting investment and partnerships in regulated markets. For example, according to a 2024 PwC report, businesses that prioritize AI ethics see up to 15 percent higher revenue growth thanks to enhanced trust from consumers and regulators. The code could drive monetization strategies such as premium compliance consulting services or certified AI tools, especially in industries like autonomous vehicles, where systemic-risk models must adhere to strict logging requirements. However, implementation challenges include the high costs of documentation and auditing, which small enterprises may find burdensome, potentially leading to market consolidation that favors larger players like OpenAI or Google; a 2025 Gartner analysis predicts that 40 percent of AI startups could face compliance hurdles by 2026. To address these costs, businesses can adopt scalable solutions such as automated logging tools, creating opportunities for SaaS providers in AI governance. The competitive landscape is also shifting, with EU-based firms gaining an edge in global tenders that require ethical AI certifications. Regulatory considerations are paramount: under the AI Act, which entered into force in 2024, providers of general-purpose AI models face fines of up to 3 percent of global annual turnover (or 15 million euros) for non-compliance. Ethically, the code promotes best practices in bias detection and data privacy, aligning with GDPR standards in force since 2018 and encouraging companies to integrate ethical audits into their development cycles for sustainable growth.
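The automated logging tools mentioned above can be sketched as a simple decorator that appends one record per model call to a JSON Lines audit file. This is a minimal illustration under stated assumptions, not a production design: the `audit_logged` decorator, the `generate` stand-in function, and the `model_audit.jsonl` path are hypothetical names chosen for the example.

```python
import json
import time
import uuid
from functools import wraps

def audit_logged(log_path):
    """Decorator that appends one JSON record per call to an audit log file."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            record = {
                "id": str(uuid.uuid4()),        # unique identifier for this call
                "timestamp": time.time(),       # when the call happened
                "function": fn.__name__,        # which model entry point ran
                "inputs": repr((args, kwargs)), # inputs, serialized for the trail
            }
            result = fn(*args, **kwargs)
            record["output"] = repr(result)
            # Append-only write: one JSON object per line (JSON Lines format).
            with open(log_path, "a", encoding="utf-8") as f:
                f.write(json.dumps(record) + "\n")
            return result
        return wrapper
    return decorator

@audit_logged("model_audit.jsonl")
def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real model inference call.
    return prompt.upper()

print(generate("hello"))  # prints "HELLO"; a record is also appended to model_audit.jsonl
```

A real deployment would add log rotation, schema versioning, and access controls, but the pattern of wrapping model entry points so that every call leaves a structured trace is the core of most automated compliance-logging tools.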
On the technical side, the code outlines detailed requirements for logging model behaviors and documenting data sources, which involve advanced techniques like provenance tracking and audit trails, essential for models posing systemic risks. Implementation considerations include integrating these requirements into existing pipelines, where challenges such as data volume, which often exceeds petabytes per a 2024 IBM study, demand efficient storage solutions. Developers can address these by using blockchain-style immutable logs or cloud-based analytics, reducing overhead while ensuring compliance. Looking ahead, this could lead to standardized AI frameworks by 2027, with Forrester's 2025 projections predicting a 25 percent increase in interoperable AI systems. The outlook suggests accelerated adoption of safe AI practices, impacting global supply chains and fostering innovation in explainable AI. Key players must navigate these changes by investing in R&D for compliant architectures, while the ethical implications underscore the need for diverse datasets to mitigate bias and promote inclusive AI development.
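A hash-chained append-only log is one lightweight way to get the tamper-evidence that immutable audit trails require, without operating a full blockchain. The sketch below is illustrative only (the `HashChainedLog` class and its methods are the author's invention, not part of the Code of Practice): each entry's hash commits to the previous entry's hash, so altering any earlier record breaks verification of everything after it.

```python
import hashlib
import json

class HashChainedLog:
    """Append-only log where each entry commits to the previous one.

    Any later tampering with a stored event breaks the hash chain,
    giving tamper-evidence similar in spirit to a blockchain ledger.
    """

    GENESIS = "0" * 64  # placeholder hash before the first entry

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def append(self, event: dict) -> str:
        # Hash covers both the event and the previous hash, forming the chain.
        payload = json.dumps({"prev": self._last_hash, "event": event},
                             sort_keys=True)
        digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
        self.entries.append({"prev": self._last_hash,
                             "event": event,
                             "hash": digest})
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        # Recompute every hash from the genesis value; any mismatch means tampering.
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps({"prev": prev, "event": entry["event"]},
                                 sort_keys=True)
            expected = hashlib.sha256(payload.encode("utf-8")).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

log = HashChainedLog()
log.append({"source": "dataset-v1", "action": "ingested"})   # hypothetical provenance events
log.append({"source": "dataset-v1", "action": "filtered"})
print(log.verify())  # True

# Tampering with an earlier event invalidates the chain.
log.entries[0]["event"]["action"] = "deleted"
print(log.verify())  # False
```

The same chaining idea underlies provenance tracking for training data: each ingestion, filtering, or transformation step appends an event, and auditors can later verify that the recorded history has not been rewritten.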
Source: DeepLearning.AI (@DeepLearningAI), an education technology company with the mission to grow and connect the global AI community.