Publication date : 02/28/2025

Course : Artificial intelligence security: challenges, risks and best practices

Securing models, agents and managing the risks of generative AI

Seminar - 2d - 14h00 - Ref. SIA
Price : 1850 € E.T.

Artificial intelligence security: challenges, risks and best practices

Securing models, agents and managing the risks of generative AI



AI is profoundly transforming businesses and opening up unprecedented prospects. But this revolution also brings new challenges, particularly in terms of security. This training course will help you identify and understand the specific risks associated with AI, such as prompt injection, hallucinations, shadow AI and data poisoning. You'll see how these threats impact the security of AI systems, and what strategies you can adopt to anticipate and mitigate them, and ensure the secure and compliant use of artificial intelligence in the enterprise.


INTER
IN-HOUSE
CUSTOM

Seminar in person or remote class
Available in English on request

Ref. SIA
  2d - 14h00
1850 € E.T.




AI is profoundly transforming businesses and opening up unprecedented prospects. But this revolution also brings new challenges, particularly in terms of security. This training course will help you identify and understand the specific risks associated with AI, such as prompt injection, hallucinations, shadow AI and data poisoning. You'll see how these threats impact the security of AI systems, and what strategies you can adopt to anticipate and mitigate them, and ensure the secure and compliant use of artificial intelligence in the enterprise.


Teaching objectives
At the end of the training, the participant will be able to:
Identifying new risks linked to artificial intelligence
Understanding AI-specific vulnerabilities and attacks
Master the methods and best practices for securing an AI project or application
Harnessing generative AI to strengthen cybersecurity

Intended audience
CIOs, CISOs, project managers, security managers, consultants, IT/IA project managers and digital transformation managers.

Prerequisites
General knowledge of information systems.

Course schedule

1
AI security: major issues and challenges

  • AI fundamentals: introduction to key concepts: generative AI, APIs, machine learning and deep learning.
  • AI's strategic impact on business: meteoric adoption and new security challenges.
  • Risk identification: exploring the threats posed by cyber attacks, algorithmic biases and AI models.
  • Comprehensive approach to AI security: technical, legal, organizational and ethical dimensions.

2
New threats and cyber attacks

  • AI-assisted social engineering and fraud: deepfake phishing, voice cloning, fraudulent site generation.
  • Malware: generative malware, polymorphic malware.
  • Enhanced keylogger, intelligent obfuscation, stealth spyware, self-evolving ransomware.
  • Attacks on authentication systems: bypassing biometric systems, assisted CAPTCHA resolvers.
  • Automated vulnerability discovery, zero-day vulnerability exploitation, neural fuzzing.
  • Improved evasion techniques, supply chain attacks.

3
AI system vulnerabilities

  • Top 10 for LLM Applications from OWASP.
  • Attacks on interactions: injection of prompts, output manipulation.
  • Attacks on data integrity: poisoning, falsification or corruption of training data sets.
  • Attacks on confidentiality and intellectual property: model extraction or inversion, data leakage, etc.
  • Attacks on learning and decision-making mechanisms: falsification of inputs/outputs, noise, etc.
  • Attacks on models: inversion, substitution, hijacking, exploitation of biases, hijacking of algorithms.
  • Attacks on infrastructures: exploiting vulnerabilities in frameworks and APIs.

4
Risk management in artificial intelligence projects

  • MIT risk mapping: AI Risk Repository.
  • Threat taxonomy: MITRE ATLAS, OWASP AI Exchange and NIST Adversarial ML.
  • Risk management frameworks: NIST AI Risk Management Framework (RMF), EBIOS RM methodology.
  • Risk management in the implementation of an ISO 42001 SMIA.
  • Personal data protection and privacy impact assessment (PIA/AIPD).
  • Risk management: technical and organizational measures.

5
AI application security (LLM, agents and APIs)

  • A: General safety principles for AI applications :
  • Secure AI by design approach.
  • Identify the attack surface and threats specific to AI models.
  • Access control and data encryption.
  • B: Model and algorithm security :
  • Dataset security (protection of training data, prevention of data poisoning and data leakage).
  • Hardening AI models: adversarial training, model watermarking.
  • Techniques for detecting bias and drift in AI models.
  • Protection against prompt injection and adversarial attacks on LLMs.
  • C: Infrastructure and API security :
  • API security (authentication, access control, input/output validation).
  • Securing MLOps and LLMOps pipelines: access management, model integrity.
  • Validation and monitoring of models in production, detection of anomalies.

6
Securing the use of generative AI in business

  • A: Identify and understand the risks associated with the use of generative AI :
  • Sensitive data leaks: inadvertent exposure of internal, personal information and industrial secrets.
  • Generation of incorrect or misleading answers that can impact decision-making.
  • Shadow AI and uncontrolled use: uncontrolled deployment of AI tools outside the framework authorized by the company.
  • Intellectual property and legal framework: responsibilities associated with AI-generated content.
  • B: Defining a policy for the use of generative AI in companies :
  • Drafting of a charter for internal use of IAGen solutions.
  • Delimitation of authorized and prohibited uses.
  • Raising awareness and training employees in best practices.
  • C: Awareness and governance of AI use in business :
  • Ongoing employee training on risks and best practices.
  • Appointment of a Chief AI Officer (CAIO) or an AI governance committee.
  • Set up monitoring and reporting on the use of AI in the company.
  • D: Best practices for safe use :
  • Limit decision-making based solely on generated recommendations.

7
Audit, transparency and safety assessment of AI systems

  • A: Compliance and audits of IA systems :
  • AI risk assessment with COMPL-AI, AI Risk Repository (MIT).
  • Alignment with ISO 42001 (SMIA) requirements.
  • Transparency policies and documentation of IA decisions.
  • B: Transparency and explicability of AI models :
  • Importance of transparency and auditability of AI models (XAI - Explainable AI).
  • Tools and methodologies for auditing an AI model (explicability, robustness, drifts).
  • Validation of the fairness of AI models and management of drifts over time.

8
Harnessing generative AI for cybersecurity

  • A: Governance and risk management (GRC) :
  • Automation of strategic tasks: assistance with the drafting of PSSI, summary of current regulations.
  • Proactive intelligence: analysis and structuring of information arising from regulatory developments.
  • B: Application development and security support :
  • Automated security testing: creation of attack scenarios and generation of polymorphic payloads.
  • C: Strengthening incident detection and response capabilities :
  • Advanced log analysis, incident reconstruction and automated response plan generation.
  • D: Cybersurveillance and Threat Intelligence :
  • Analysis and contextualization of cyberthreats: automatic translation and categorization of emerging attacks.
  • E: Regulatory compliance assistance :
  • Gap analysis and standards compliance: interpreting the requirements of regulatory frameworks (RGPD, DORA, NIS2).
  • F: Integrating AI into cybersecurity solutions:
  • XDR Firewall, SOAR, Microsoft Security Copilot, IBM Watson GenAI, Darktrace Prevent/Antigena, Zynamp, Cymulate.


Customer reviews
4,4 / 5
Customer reviews are based on end-of-course evaluations. The score is calculated from all evaluations within the past year. Only reviews with a textual comment are displayed.
FRÉDÉRIC B.
19/03/26
5 / 5

Contenu dense et intéressant, formateur très compétent sur le thème.Pas de questionnaire de validation des acquis (formateur non informé par Orsys sur la fourniture de ce questionnaire)
ALEXANDRE C.
19/03/26
5 / 5

Contenu assez complet et mériterait un jours de plus.
OLIVIER V.
19/03/26
5 / 5

Contenu trop dense pour 2 jours.



Dates and locations
Select your location or opt for the remote class then choose your date.
Remote class

Last places available
Guaranteed date, in person or remotely
Guaranteed session

REMOTE CLASS
2026 : 18 June, 24 Sep., 1 Dec.

PARIS LA DÉFENSE
2026 : 4 June, 22 Sep., 26 Nov.