Summary
On 16 October CTech launched its roundtable series, ‘AI – Stick or Twist’, in London. Dr Javier Andreu-Perez, Senior Lecturer at Essex University, and George Lynch, Director of Technology and Solutions at CTech, delivered the keynotes, sharing strategic insights on generative AI implementation and the challenges faced by technology leaders.
The main discussion addressed the question: ‘Should organisations invest in generative AI today, or are the risks too high?’
Highlights
- Data privacy and protection, cybersecurity and unreliable outputs were highlighted by attendees as the top three AI risks
- Lack of clarity around future regulation is hindering large-scale AI implementations; documentation will become a basic and essential process for ensuring regulatory compliance
- Skill gaps and talent shortages may widen as society becomes dependent on generative AI
- Generative AI should add quantifiable business value; not all use cases are equal, and many have yet to prove valuable outcomes
- Build capabilities gradually. Start with small-scale challenges in the organisation and demonstrate tangible value before expanding use cases to other areas
- Businesses need to be prepared to fail before seeing the full advantages of generative AI
The risks are irrefutable
The risks of generative AI adoption are widely felt at executive level, reflected in the technology’s low penetration rate across organisations. Data privacy and protection, cybersecurity and unpredictability of outputs were raised as the top three GenAI concerns by senior-level attendees.
1. Data privacy and protection
Generative AI tools increase the probability of data protection and privacy breaches, particularly where unstructured internal data is left unprotected. Attendees commented that off-the-shelf large language models (LLMs), such as Google’s Gemini, can leak sensitive or private enterprise data fed in during their training phase, for example unique client identifiers. The reputational and financial implications can be devastating for enterprises: according to a study by IBM, the average cost of a data breach in the UK alone reached £3.58 million in 2024.
Data governance programmes and monitoring were discussed as key mitigations, but many organisations have yet to define their data strategies or truly understand their total data availability. Attendees agreed that trust, security and privacy must be addressed before moving forward with generative AI implementations.
2. Input and output risks
Even with retrieval-augmented generation (RAG) or fine-tuning, generative AI can produce unreliable outputs that lead to costly errors or erroneous decision-making. This is exacerbated by the limited traceability and irreproducibility of GenAI outputs. In the US, lawyers were fined $5,000 after presenting fictitious, AI-generated court cases, highlighting the need for strict monitoring and, in some cases, retraining of AI models.
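As a minimal illustration of what traceability can look like in practice (our own sketch, not an approach presented at the roundtable), each generation can be logged with its model version, parameters and content hashes, so that any output can later be traced back to exactly how it was produced. The generate call it would wrap, and the audit-log format, are assumptions:

```python
# Illustrative sketch only: log every LLM generation with enough context
# to audit it later. Hashes let you verify an output without storing
# sensitive prompt or response text in the clear.
import hashlib
import json
import time

def log_generation(model: str, params: dict, prompt: str, output: str) -> dict:
    """Build and persist an audit record linking an output to how it was produced."""
    record = {
        "timestamp": time.time(),
        "model": model,                  # name plus pinned version
        "params": params,                # temperature, seed, etc.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    # Append-only JSON lines file; a production system would use a proper audit store.
    with open("genai_audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```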
One interesting hypothetical raised by an attendee: ‘If 98% of generative AI outputs are correct and only 2% are incorrect, wouldn’t the benefits outweigh any potential errors?’
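The answer depends on volume and on the cost of each error. To illustrate with assumed figures: an organisation producing 10,000 AI-assisted outputs a month at a 2% error rate would generate around 200 errors a month (10,000 × 0.02 = 200). If each error costs £50 to catch and correct, that is roughly £10,000 a month, while a single uncaught error of the court-filing kind could outweigh the technology’s savings entirely.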
3. Cybersecurity
According to HackerOne, 48% of security professionals consider AI to be the greatest security risk to their organisation. Attendees stressed that generative AI introduces new vulnerabilities across the supply chain by making attack methods such as phishing campaigns and deepfakes more sophisticated. In February 2024, a multinational finance firm paid $25 million to cybercriminals who used deepfake technology to impersonate its CFO – a frightening prospect for business leaders.
Forward-thinking organisations will strengthen their attack surface strategies by integrating GenAI into real-time incident analysis and threat intelligence.
Read more about how GenAI is changing the threat landscape: https://our-thinking.ctechglobal.com/insights/massive-threats-posed-by-generative-ai-and-quantum