Article Details

Scrape Timestamp (UTC): 2025-02-12 11:23:11.933

Source: https://thehackernews.com/2025/02/how-to-steer-ai-adoption-ciso-guide.html

Original Article Text


How to Steer AI Adoption: A CISO Guide

CISOs are finding themselves more involved in AI teams, often leading the cross-functional effort and AI strategy. But there aren't many resources to guide them on what their role should look like or what they should bring to these meetings. We've pulled together a framework for security leaders to help push AI teams and committees further in their AI adoption, providing them with the necessary visibility and guardrails to succeed.

Meet the CLEAR framework

If security teams want to play a pivotal role in their organization's AI journey, they should adopt the five steps of CLEAR to show immediate value to AI committees and leadership:

- Create an AI asset inventory
- Learn: shift to proactive identification of AI use cases
- Enforce an AI policy
- Apply AI use cases for security
- Reuse existing frameworks

If you're looking for a solution to help take advantage of GenAI securely, check out Harmonic Security.

Alright, let's break down the CLEAR framework.

Create an AI Asset Inventory

A foundational requirement across regulatory and best-practice frameworks, including the EU AI Act, ISO 42001, and the NIST AI RMF, is maintaining an AI asset inventory. Despite its importance, organizations struggle with manual, unsustainable methods of tracking AI tools. Security teams can take six key approaches to improve AI asset visibility.

Learn: Shift to Proactive Identification of AI Use Cases

Security teams should proactively identify the AI applications employees are using instead of blocking them outright; otherwise, users will simply find workarounds. By tracking why employees turn to AI tools, security leaders can recommend safer, compliant alternatives that align with organizational policies. This insight is invaluable in AI team discussions. Second, once you know how employees are using AI, you can deliver better training.
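As a rough sketch of what proactive identification could look like in practice, the snippet below mines simplified web-proxy logs for known GenAI services to seed an AI asset inventory. The domain list, log format, and function name are illustrative assumptions, not something prescribed by the guide; real deployments rely on far larger, continuously updated catalogs and richer log schemas.

```python
from collections import Counter

# Illustrative watch list of GenAI domains; a real program would
# maintain a much larger, regularly refreshed catalog.
GENAI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def find_ai_usage(proxy_log_lines):
    """Count hits to known GenAI domains in simplified proxy-log
    lines of the form '<user> <domain> <path>'."""
    usage = Counter()
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue
        user, domain = parts[0], parts[1].lower()
        if domain in GENAI_DOMAINS:
            usage[(user, domain)] += 1
    return usage

logs = [
    "alice chat.openai.com /chat",
    "bob claude.ai /new",
    "alice chat.openai.com /chat",
    "carol intranet.example.com /wiki",
]
print(find_ai_usage(logs))
```

The per-user, per-tool counts also answer the "why are employees turning to AI" question: high-volume tools are candidates for sanctioned, compliant alternatives and for targeted training rather than blanket blocks.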
These training programs will become increasingly important amid the rollout of the EU AI Act, which mandates that organizations provide AI literacy programs: "Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems…"

Enforce an AI Policy

Most organizations have implemented AI policies, yet enforcement remains a challenge. Many simply issue AI policies and hope employees follow the guidance. While this approach avoids friction, it provides little enforcement or visibility, leaving organizations exposed to security and compliance risks. Security teams typically take one of two approaches; either way, striking the right balance between control and usability is key to successful AI policy enforcement. And if you need help building a GenAI policy, check out our free generator: GenAI Usage Policy Generator.

Apply AI Use Cases for Security

Most of this discussion is about securing AI, but don't forget that the AI team also wants to hear about impactful AI use cases across the business. What better way to show you care about the AI journey than to implement some yourself? AI use cases for security are still in their infancy, but security teams are already seeing benefits in detection and response, DLP, and email security. Documenting these use cases and bringing them to AI team meetings can be powerful, especially when you can reference KPIs for productivity and efficiency gains.

Reuse Existing Frameworks

Instead of reinventing governance structures, security teams can integrate AI oversight into existing frameworks like the NIST AI RMF and ISO 42001.
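One hedged illustration of enforcement that goes beyond "issue the policy and hope": a DLP-style pre-filter that redacts sensitive values before a prompt is forwarded to an approved GenAI tool. The patterns and function name here are hypothetical, and production DLP tooling is far more sophisticated, but the sketch shows how a policy rule can become a concrete control.

```python
import re

# Illustrative patterns only; a real AI usage policy would cover
# many more data classes (PII, source code, credentials, etc.).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive values with labeled placeholders before the
    prompt leaves the organization for a GenAI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact_prompt("Summarize this ticket from jane.doe@example.com"))
```

Redaction rather than outright blocking is one way to strike the control-versus-usability balance the article calls for: employees keep the productivity benefit while regulated data stays inside the organization.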
A practical example is NIST CSF 2.0, which now includes the "Govern" function, covering:

- Organizational AI risk management strategies
- Cybersecurity supply chain considerations
- AI-related roles, responsibilities, and policies

Given this expanded scope, NIST CSF 2.0 offers a robust foundation for AI security governance.

Take a Leading Role in AI Governance for Your Company

Security teams have a unique opportunity to take a leading role in AI governance by remembering CLEAR. By following these steps, CISOs can demonstrate value to AI teams and play a crucial role in their organization's AI strategy. To learn more about overcoming GenAI adoption barriers, check out Harmonic Security.

Daily Brief Summary

MISCELLANEOUS // Guide for CISOs on Leading AI Governance and Security

CISOs are increasingly integral in guiding AI strategies and cross-functional teams within organizations.

The article introduces the "CLEAR" framework to aid security leaders in enhancing the adoption and governance of AI technologies.

A key aspect of the CLEAR framework is maintaining an AI asset inventory to comply with various regulatory requirements.

Security teams are encouraged to proactively identify and train on AI use cases, rather than restricting them, to enhance AI literacy in line with upcoming regulations like the EU AI Act.

The enforcement of AI policies is crucial, with emphasis on balancing usability with control to mitigate risks effectively.

Real-world AI applications for security tasks such as detection, response, and data loss prevention are highlighted as beneficial.

The integration of AI oversight into existing frameworks like NIST AI RMF and ISO 42001 is recommended to avoid creating redundant governance structures.

CISOs are advised to leverage the CLEAR approach to demonstrate their leadership and value in their organization's AI journey.