With 18 years of experience in IT and business transformation, Abhinav Mittal offers unique insight into the point where abstract ethical principles collide with the concrete realities of software development.

The Journey to Practical AI Governance

Mittal's journey to understanding the practical implementation of responsible AI was both deliberate and comprehensive. Having earned an MBA from the Stuart School of Business at Illinois Institute of Technology in Chicago and completed the Emerging Leaders Program at Harvard Business School, Mittal had focused his career on digital transformation. But, driven by a fundamental question about the purpose of technology, he enrolled in a six-month Certified Ethical Emerging Technologist (CEET) program, well before ChatGPT arrived.

Recognising a gap between theory and practice, Mittal took his education further. "I said, Okay, I do have a conceptual understanding of responsible AI. But I need to learn how to put my skills to practice," he explains. This led him to a further six months of study, pursuing the certification program offered by BABL AI.

The program demanded Mittal's full attention, requiring him to take a break from his IT cost optimisation work. "I spent a whole lot of time understanding the nuances of AI ethics and laws in the world, and how organisations can actually implement AI governance," Mittal recalls. This intensive study covered practical aspects such as conducting AI risk assessments, identifying potential harms, and establishing governance functions within organisations. The experience equipped Mittal with the tools to bridge the gap between ethical principles and their real-world application, a skillset he now employs in helping clients navigate the complexities of AI implementation, especially with regard to emerging regulation like the EU AI Act.

Organisational Challenges in Implementing the EU AI Act

Whilst ground-breaking, the EU AI Act presents challenges for organisations seeking compliance. Mittal paints a picture of confusion and uncertainty across various levels of the corporate hierarchy. At the developer level, there's a cry for clarity. "Their core concern is 'I don't have anything black and white,' and 'unless and until I get clear requirements, I don't think I can act,'" Mittal explains. This ambiguity poses a significant challenge for those accustomed to working with clear specifications, who now find themselves in uncharted territory where ethical considerations must be translated into code.

Product managers face similar dilemmas, struggling to classify their AI algorithms and features according to the Act's risk categories. The lack of clear rubrics for risk assessment makes it difficult to prioritise features and functionalities in product roadmaps, potentially slowing down innovation as companies grapple with compliance concerns.
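
To make the classification problem concrete, consider the kind of triage a product manager might run over a feature backlog. The four risk tiers below come from the Act itself; the example keywords, domain list, and classify_feature helper are purely illustrative assumptions, not an official rubric.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, from most to least restricted."""
    UNACCEPTABLE = "prohibited outright (e.g. social scoring by public authorities)"
    HIGH = "permitted, but subject to strict obligations before market entry"
    LIMITED = "subject to transparency duties (e.g. disclosing that a chatbot is AI)"
    MINIMAL = "largely unregulated (e.g. spam filters, game AI)"

# Hypothetical shortlist of high-risk domains. Annex III of the Act lists
# the actual high-risk categories; this table is only a sketch.
HIGH_RISK_DOMAINS = {"recruitment", "credit scoring", "education", "medical"}

def classify_feature(description: str) -> RiskTier:
    """Crude keyword triage of a product feature -- a starting point for
    a compliance review, not a substitute for legal analysis."""
    text = description.lower()
    if "social scoring" in text:
        return RiskTier.UNACCEPTABLE
    if any(domain in text for domain in HIGH_RISK_DOMAINS):
        return RiskTier.HIGH
    if "chatbot" in text or "generated content" in text:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

for feature in ["CV-ranking model for recruitment",
                "customer-support chatbot",
                "spam filter for the contact form"]:
    print(f"{feature!r}: {classify_feature(feature).name}")
```

Even a crude sketch like this makes the underlying difficulty visible: the Act's categories are defined in legal language, and any mapping from product features to risk tiers involves judgment calls that keyword matching cannot settle.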

The confusion extends to the C-suite, where executives contend with a multiplicity of potential regulations and standards. As one chief legal officer put it, "I really don't know which regulations we need to comply with. Is it only the EU AI Act? Is it something that will come from the United States? Is it something that will come from other parts of the world?" This regulatory uncertainty makes it challenging for companies to develop comprehensive compliance strategies, potentially exposing them to legal risks.

The role of standards bodies adds another layer of complexity. Organisations like ISO, IEEE, and NIST are all working on AI governance standards, but the lack of a unified approach compounds the confusion. Mittal cites examples such as ISO/IEC 42001 (which specifies requirements for an AI management system, addressing areas like bias mitigation, transparency, accountability, and governance) and ISO/IEC 23894 (which offers organisations guidance on managing risks connected to the development and use of AI), illustrating the multifaceted nature of these standards.

This uncertainty has a cascading effect on decision-making and resource allocation. CEOs and boards, faced with a lack of clear direction, are hesitant to allocate funds for compliance efforts. As Mittal summarises a common view he has heard from the C-suite: "My entire organisation is confused right now. If they cannot decide what they need to comply with and by when, I cannot spare any funding right now."

The political dimension adds another layer of complexity. In election years, legislators may be hesitant to upset companies with strict regulations, potentially leading to delays or watered-down implementation of AI governance measures. [Editor's note: The upcoming US elections pose a different dilemma, with Biden's and Trump's views often divergent. Biden appears to favour stronger regulation with the government as an active partner, while Trump favours a lighter regulatory approach and reduced government intervention. Add to this Trump's tendency to focus on American leadership and America First, which could lead to a unilateral approach to international cooperation that may not fully align with something like the EU AI Act.]

"Despite these challenges, some organisations are taking the lead in responsible AI implementation. Hyperscalers and cloud providers, in particular, are at the forefront, driven by both ethical considerations and business imperatives," according to Mittal. "Many of these companies own foundation models, and these models are fuelling a lot of the growth in cloud spending for providers. Recent financial results from some of these cloud providers show that AI-related spending is offsetting sluggish demand in other areas of digital transformation. This economic incentive is pushing cloud providers to invest heavily in responsible AI initiatives, with companies like Microsoft actively hiring for numerous positions in this field."

Mittal advocates for clearer guidelines and rubrics that can help companies self-audit and demonstrate compliance. He emphasises the need for specific documentation requirements and self-audit procedures that companies can follow to ensure they are meeting the EU AI Act's standards.
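
What might such a self-audit look like in practice? The sketch below assumes a simple checklist keyed to the documentation obligations the Act places on high-risk systems, which loosely track its requirements on risk management, data governance, technical documentation, logging, and human oversight; the ComplianceDossier structure and audit function are hypothetical, not drawn from any official template.

```python
from dataclasses import dataclass, fields

@dataclass
class ComplianceDossier:
    """One flag per documentation obligation for a high-risk AI system.
    The obligations loosely follow the EU AI Act's requirements for
    high-risk systems; the structure itself is an illustrative assumption."""
    risk_management_plan: bool = False      # documented and regularly updated
    data_governance_record: bool = False    # provenance and bias checks on training data
    technical_documentation: bool = False   # architecture, intended purpose, limitations
    event_logging_enabled: bool = False     # automatic logs for traceability
    human_oversight_measures: bool = False  # defined intervention and override points

def audit(dossier: ComplianceDossier) -> list[str]:
    """Return the obligations that still lack documented evidence."""
    return [f.name for f in fields(dossier) if not getattr(dossier, f.name)]

gaps = audit(ComplianceDossier(risk_management_plan=True, event_logging_enabled=True))
print("Outstanding items:", gaps)
```

A checklist of this kind does not prove compliance, but it gives teams the "black and white" starting point that, in Mittal's telling, developers keep asking for.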

The challenge of translating ethical principles into actionable guidelines is at the heart of the AI governance debate. Mittal defines responsible AI as "Transparent communication of inputs, processing, and output of an AI system, in any means that a high school student can understand." This emphasis on transparency and comprehensibility is crucial for building public trust in AI systems and ensuring their responsible deployment.
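
Mittal's definition suggests a concrete artefact: a transparency summary written in plain language. The sketch below shows what such a record might contain; the TransparencySummary structure and its example wording are illustrative assumptions rather than a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class TransparencySummary:
    """A plain-language description of an AI system, aimed at the
    'high school student' standard Mittal describes."""
    system_name: str
    inputs: str       # what data goes in, in everyday terms
    processing: str   # what the system does with that data
    outputs: str      # what comes out, and how it is used

    def render(self) -> str:
        return (f"{self.system_name}\n"
                f"  What it looks at: {self.inputs}\n"
                f"  What it does: {self.processing}\n"
                f"  What you get: {self.outputs}")

print(TransparencySummary(
    system_name="Loan pre-screening assistant",
    inputs="your income, existing debts, and repayment history",
    processing="compares your numbers against past loans that were repaid on time",
    outputs="a suggestion to a human loan officer, who makes the final decision",
).render())
```

The value of the exercise is the constraint itself: if a system's inputs, processing, and outputs cannot be described at this level of simplicity, that is a signal its behaviour may not be well enough understood to govern.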

The Future of AI Governance Standards

The question of which standards will ultimately prevail in the AI governance landscape remains open. The proliferation of standards from various sources reflects the complex ethical landscape surrounding AI, where different cultural, social, and economic values can lead to varying interpretations of what constitutes responsible AI.

Mittal believes that market forces will eventually determine which standards take the lead, with the EU and NIST (National Institute of Standards and Technology, part of the US Department of Commerce) policies likely to be frontrunners.

In Mittal’s view, the core principles of AI governance - fairness, transparency, privacy, accountability, explainability, and safety - will likely remain consistent across different frameworks. The real differentiation will come in the implementation, reporting, and enforcement of these principles. This observation points to the need for flexible governance frameworks that can adapt to rapidly evolving AI technologies while maintaining a consistent ethical foundation.

The implementation of the EU AI Act will serve as a testing ground for these governance principles. The first fines issued under the Act will be a crucial moment, signalling how seriously regulators intend to enforce the new rules and potentially setting precedents for AI governance globally.

The path to effective AI governance is far from straightforward. As Mittal's insights reveal, the industry stands at a crossroads, grappling with a patchwork of emerging standards and the monumental task of translating ethical principles into actionable code. The coming months will likely see a flurry of activity as organisations scramble to interpret and implement the EU AI Act's requirements.

However, amidst this uncertainty, one thing remains clear: the need for a common language of AI governance that bridges the gap between policymakers, developers, and the public. As Mittal suggests, the ultimate measure of success may not be in the complexity of our standards, but in our ability to make AI systems transparent and comprehensible to a high school student. This benchmark of clarity could well become the touchstone for responsible AI in the years to come.

