
ISACA Survey Reveals Cybersecurity Teams in India Are Largely Excluded from AI Policy Development

The latest State of Cybersecurity 2024 survey report from ISACA reveals that only 27% of cybersecurity professionals in India are involved in AI policy development within their organizations, with 50% reporting no involvement in developing, onboarding, or implementing AI solutions. Conducted in partnership with Adobe, the survey includes responses from over 1,800 global cybersecurity professionals, reflecting the broader impact of AI on the cybersecurity workforce and threat landscape.

Primary Applications of AI in Cybersecurity Operations

Indian cybersecurity teams that are leveraging AI are primarily focused on:

  • Endpoint security (31%)
  • Automating threat detection/response (29%)
  • Routine security task automation (27%)
  • Fraud detection (17%)

Jon Brandt, ISACA’s Director of Professional Practices and Innovation, emphasized that AI can relieve cybersecurity professionals of repetitive tasks, potentially easing workforce strain in a complex threat environment. However, he warned that security teams’ involvement in AI governance is critical to ensuring secure and responsible deployment.

A Call for Greater Cybersecurity Participation in AI Governance

RV Raghu, Director at Versatilist Consulting India Pvt Ltd and ISACA India Ambassador, expressed concern over the lack of cybersecurity team involvement in AI decision-making. “There is an urgent need for organizations to rethink how they integrate cybersecurity professionals in AI governance,” Raghu stated, underscoring that involving cybersecurity experts in AI policy development is essential for secure and responsible implementation.

AI Resources and Policy Guidance from ISACA

ISACA has responded to the increased integration of AI with new resources for cybersecurity professionals:

  • EU AI Act White Paper: A guide for organizations to prepare for the EU AI Act, effective from August 2026, with recommendations on audit and traceability, cybersecurity adaptations, and designating an AI lead.
  • Authentication in the Deepfake Era: Insights on adaptive authentication systems using AI, which can enhance security but also pose risks if manipulated. The report also discusses research on AI integration with quantum computing.
  • Generative AI Policy Considerations: A framework to help organizations adopt AI policies with guiding questions to ensure compliance and ethical standards.

Advancing AI Skills and Certifications

To help professionals keep up with AI advancements, ISACA has expanded its education offerings, including courses on Machine Learning, Neural Networks, and Deep Learning. In addition, ISACA will launch its Certified Cybersecurity Operations Analyst certification in early 2025, focusing on skills to address emerging AI-driven cyber threats.

Empowering Cybersecurity Teams for an AI-Driven Future

As AI transforms the cybersecurity field, ISACA’s initiatives aim to equip professionals with the knowledge and tools to meet these challenges. The report underscores the importance of embedding cybersecurity expertise into AI development processes, recognizing that collaborative governance is crucial for secure and responsible AI integration.
