Hill Dickinson, an international law firm with over 1,000 employees worldwide, has implemented a strategic governance shift for AI tools after detecting more than 32,000 ChatGPT interactions in a single week, many of them unauthorised under the firm's AI policies. The intervention illustrates the growing challenge professional services firms face in balancing AI innovation with compliance requirements.
The episode demonstrates how difficult "shadow AI" usage is for enterprises to manage. Rather than imposing a complete ban, the firm's response balances innovation enablement with risk management by establishing a formal request process for AI tool access. According to the BBC report, this approach appears to be working, with multiple access requests approved within days of implementation.
The situation highlights a critical tension in enterprise AI adoption: 62% of UK solicitors anticipated increased AI usage in a September 2024 survey (as reported by the BBC), yet many organisations lack formal frameworks to govern this technology. The Information Commissioner's Office warns in the BBC article that outright bans simply drive staff to "use AI under the radar," creating greater security risks.
For regulated professional services firms handling sensitive client data, establishing an appropriate AI governance framework is a pressing business imperative. Hill Dickinson's approach offers a useful blueprint for enterprises seeking to move from reactive to proactive AI management.