
AI for software quality: checklists and best practices

Published on 15 January 2026

What if quality became a team reflex? With AI, your checklists become your co-pilot: test case generation, guided code reviews, risk prioritisation and automated checking of acceptance criteria... all to help you deliver better, faster and with confidence.


For a long time, software quality was seen as the preserve of developers, or even a constraint imposed at the end of a software development cycle. This view is now a thing of the past.

Today, quality involves the whole team: tech leads to define standards, QA to orchestrate the testing strategy, SRE and DevOps to guarantee reliability in production, Product Owners and managers to set quality objectives.

And AI is changing the game: it makes it possible to industrialise rigour without lengthening development cycles.

Quality, a shared responsibility

Before talking about tools, let's set the scene. Software quality rests on three pillars:

  • Prevention: avoid faults at the design stage
  • Detection: identify anomalies as early as possible
  • Continuous improvement: learn from every incident

Each member of the team contributes to these three pillars. The developer writes maintainable code, the QA defines critical scenarios, DevOps automates controls and the PO specifies acceptance criteria. AI amplifies each of these roles, provided you give it a clear framework.

Checklists enhanced by AI

Checklists are the simplest and most powerful tool for guaranteeing quality. But their limitations are well known: they are static, sometimes ignored and often incomplete. AI makes them dynamic and contextual.

Code review checklist

Don't let basic mistakes slip through.

In a classic checklist, before each merge request, you check:

  • ☐ The code respects the project's naming conventions
  • ☐ Complex functions (>20 lines) are documented
  • ☐ Error cases are explicitly managed
  • ☐ Added dependencies are justified and up to date
  • ☐ Unit tests cover new code paths
  • ☐ No secret (API key, password) is present in the code
  • ☐ Compatibility: migration/rollback, backward compatibility

An AI prompt to automate this review:

AI prompt
Act as a Senior Tech Lead expert in [Language].
Analyse this code and check the following points:
1. Compliance with the project's naming conventions based on [Language] (for example, camelCase or snake_case for variables, PascalCase for classes, etc.).
2. Presence of documentation for functions longer than 20 lines.
3. Explicit error handling depending on the language (try/except vs try/catch, None/null/nil value checking, timeouts/retries, etc.).
4. Detection of potential secrets (API key patterns, hard-coded passwords).
5. Likely security risks (injection, authz, deserialisation, SSRF, path traversal) and minimum recommendations.
6. Propose 1 to 3 relevant tests (unit/integration/E2E) to prove the corrections.

For each non-compliant point, indicate the line concerned and propose a correction.
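
Point 4 doesn't have to wait for the model: a deterministic pre-filter can scan files locally first, so obvious secrets never leave your machine. A minimal sketch in Python, with illustrative regexes only (dedicated scanners such as gitleaks ship far more patterns):

Script
# secret_scan.py - naive local pre-filter for hard-coded secrets (illustrative)
import re
import sys

SECRET_PATTERNS = [
    ("AWS access key", re.compile(r"AKIA[0-9A-Z]{16}")),
    ("Generic API key", re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]")),
    ("Hard-coded password", re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]+['\"]")),
]

def scan(path):
    """Return one human-readable finding per suspicious line."""
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, start=1):
            for label, pattern in SECRET_PATTERNS:
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: possible {label}")
    return findings

if __name__ == "__main__":
    hits = [finding for path in sys.argv[1:] for finding in scan(path)]
    print("\n".join(hits) or "No obvious secrets found.")
    sys.exit(1 if hits else 0)  # non-zero exit blocks the commit in a hook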

Definition of Done (DoD) checklist

For the Product Owner and the team, a ticket is only done once all the criteria have been validated.

In a classic checklist:

  • ☐ Acceptance criteria are all validated
  • ☐ Automated tests pass (unit, integration, E2E)
  • ☐ User documentation is updated
  • ☐ The code review is approved by at least one peer
  • ☐ Performance metrics are within acceptable thresholds
  • ☐ Deployment in a staging environment is validated

An AI prompt to check acceptance criteria:

AI prompt
Here are the acceptance criteria for a user story:
[Paste criteria]

Here is the implemented code:
[Paste code or diff]

Analyse whether each criterion is covered by the implementation.
Generate a table; for each criterion, indicate: ✅ Covered / ⚠️ Partially covered / ❌ Not covered.
Justify each answer with references to the code.

Integrating AI into your CI/CD pipeline

Automation is the key to making quality a reflex. Here's how to integrate AI controls into every stage of your pipeline.

Stage 1: Pre-commit (the first bulwark)

Configure pre-commit hooks (or IDE extensions) to intercept problems before they enter the Git repository:

Hook
# Example of pre-commit configuration with AI analysis
- repo: local
  hooks:
    - id: ai-code-review
      name: AI code analysis
      entry: python scripts/ai_review.py
      language: python
      types: [python]
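
The script referenced by the hook is yours to write. Here is a minimal sketch of what scripts/ai_review.py could look like, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment (any provider with a chat API, or a local model, works the same way); pre-commit passes the staged file paths as arguments:

Script
# scripts/ai_review.py - minimal AI review hook (sketch; the model choice and
# blocking convention are assumptions to adapt to your team)
import sys

from openai import OpenAI  # pip install openai

CHECKLIST = (
    "Act as a Senior Tech Lead. Review this code against our checklist: "
    "naming conventions, documentation of long functions, explicit error "
    "handling, hard-coded secrets. Flag issues with line numbers and fixes, "
    "and start your answer with NON-COMPLIANT if anything must block the commit."
)

def review(paths):
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    exit_code = 0
    for path in paths:
        with open(path, encoding="utf-8") as f:
            code = f.read()
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: use whatever model your team licenses
            messages=[
                {"role": "system", "content": CHECKLIST},
                {"role": "user", "content": f"File: {path}\n\n{code}"},
            ],
        )
        verdict = response.choices[0].message.content
        print(f"--- {path} ---\n{verdict}\n")
        if verdict.startswith("NON-COMPLIANT"):
            exit_code = 1  # non-zero exit makes pre-commit block the commit
    return exit_code

if __name__ == "__main__":
    sys.exit(review(sys.argv[1:]))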

Stage 2: Pull Request (the enhanced review)

Automatically trigger AI analysis (using tools such as Qodo (formerly CodiumAI), SonarQube AI or custom GitHub Actions) on each PR:

  • A summary of changes to facilitate human review
  • Detection of problematic patterns (duplicated or dead code, excessive complexity)
  • Contextual suggestions for improvement
  • A test coverage check, with suggestions for missing unit tests
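
If you prefer building over buying, a custom GitHub Action only needs a small script that reads the diff and posts a comment back on the PR. A sketch, assuming the same OpenAI SDK as above plus a GITHUB_TOKEN exposed to the workflow step (the PR number is passed as an argument):

Script
# pr_review.py - post an AI summary of a diff as a PR comment (sketch)
# Usage in a workflow step: git diff origin/main...HEAD | python pr_review.py 123
import os
import sys

import requests  # pip install requests
from openai import OpenAI

def summarise_diff(diff):
    """Ask the model for a reviewer-oriented summary of the changes."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model
        messages=[
            {"role": "system", "content": "Summarise this diff for a human reviewer: "
             "intent, risky spots, duplicated or dead code, missing tests."},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

def post_pr_comment(body, pr_number):
    """Publish the summary via the GitHub REST API."""
    repo = os.environ["GITHUB_REPOSITORY"]  # injected by GitHub Actions
    token = os.environ["GITHUB_TOKEN"]      # expose it explicitly in the step's env
    requests.post(
        f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments",
        headers={"Authorization": f"Bearer {token}"},
        json={"body": body},
        timeout=30,
    ).raise_for_status()

if __name__ == "__main__":
    post_pr_comment(summarise_diff(sys.stdin.read()), sys.argv[1])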

Stage 3: Post-deployment (ongoing monitoring)

AI doesn't stop at deployment. Use it to:

  • Analyse error logs and suggest corrections
  • Detect performance anomalies
  • Correlate incidents with recent changes
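
Grouping by root cause works much better if you normalise the logs before handing them to the model: otherwise volatile values (IDs, timestamps) fragment identical errors and inflate the prompt. A minimal normalisation sketch:

Script
# log_signatures.py - collapse error logs into signatures before AI analysis (sketch)
import re
import sys
from collections import Counter

# Replace volatile values so identical errors share one signature.
NORMALISERS = [
    (re.compile(r"\d{4}-\d{2}-\d{2}[T ][\d:.]+Z?"), "<timestamp>"),
    (re.compile(r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"), "<uuid>"),
    (re.compile(r"\d+"), "<n>"),
]

def signature(line):
    for pattern, placeholder in NORMALISERS:
        line = pattern.sub(placeholder, line)
    return line.strip()

def top_errors(log_lines, limit=10):
    """Most frequent error signatures: a compact payload for the AI prompt."""
    counts = Counter(signature(l) for l in log_lines if "ERROR" in l)
    return counts.most_common(limit)

if __name__ == "__main__":
    for sig, count in top_errors(sys.stdin.read().splitlines()):
        print(f"{count:6d}  {sig}")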

Useful prompts for each role

Copy and paste these prompts to speed up your daily routine.

For the developer

AI prompt
I'm working on [describe the feature].
Here's my code: [paste code].

Identify:
1. Any edge cases I may have missed
2. Potential security vulnerabilities
3. Possible performance optimisations

For QA (test generation)

AI prompt
Here is a user story: [paste story].
Here are the acceptance criteria: [paste criteria]

Generate a list of test cases including:
- Nominal scenarios (happy path)
- Edge cases (extreme values, empty fields)
- Error scenarios (timeout, insufficient permissions)
- Regression tests to consider
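
Each generated case then maps naturally onto a parametrised test. A sketch in pytest, with a hypothetical validate_order function (module, names and expected error codes are illustrative):

Script
# test_orders.py - AI-generated cases turned into a parametrised test (sketch)
import pytest

from orders import validate_order  # hypothetical module under test

# One tuple per AI-generated case: (description, payload, expected_error)
CASES = [
    ("happy path",        {"items": [{"sku": "A1", "qty": 2}]},  None),
    ("empty basket",      {"items": []},                         "EMPTY_ORDER"),
    ("negative quantity", {"items": [{"sku": "A1", "qty": -1}]}, "INVALID_QTY"),
]

@pytest.mark.parametrize("description,payload,expected_error", CASES)
def test_validate_order(description, payload, expected_error):
    result = validate_order(payload)
    if expected_error is None:
        assert result.ok, description
    else:
        assert result.error == expected_error, description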

For the Tech Lead (architecture)

AI prompt
This is the current architecture of our [name] module:
[Describe or paste diagram/code].

We need to add [new functionality].
Propose 2-3 implementation approaches. Present them in the form of a comparative table with their advantages/disadvantages in terms of maintainability, testability, performance, development cost and scalability.

For DevOps/SRE (incident response)

AI prompt
Here are the error logs for the last 24 hours:
[Paste logs]

Analyse these errors and:
1. Group them by probable root cause
2. Prioritise them by user impact
3. Suggest immediate corrective actions
4. Identify the alerts to be configured to prevent these incidents

Essential safeguards

AI is a powerful assistant, but it needs rigorous supervision to avoid misuse.

What AI does well:

  • Detect known patterns (vulnerabilities, anti-patterns)
  • Generate exhaustive test cases
  • Summarise and structure information
  • Speed up repetitive tasks

What AI does not replace:

  • Business judgement on priorities
  • Final validation by a human
  • Understanding the business context
  • Responsibility for decisions

Golden rule: AI makes suggestions, people make decisions. Each AI suggestion must be reviewed and validated by a competent member of the team.

Other AI usage rules must be respected:

Data confidentiality: never copy proprietary code, API keys or customer data into a public AI (free ChatGPT, etc.). Use "Enterprise" versions or local models (Ollama, LM Studio) to guarantee the security of your IP.
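
For the local option, the same review prompt can run entirely on your machine. A sketch against Ollama's default local HTTP endpoint (assuming a model has been pulled beforehand, e.g. ollama pull llama3):

Script
# local_review.py - run the review prompt against a local Ollama instance (sketch)
import requests  # pip install requests; no code ever leaves your machine

def local_review(code, model="llama3"):
    """Send the code-review prompt to Ollama's local API and return the answer."""
    reply = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        json={
            "model": model,
            "prompt": f"Review this code for naming, error handling and secrets:\n{code}",
            "stream": False,  # ask for a single JSON response instead of a stream
        },
        timeout=120,
    )
    reply.raise_for_status()
    return reply.json()["response"]

if __name__ == "__main__":
    print(local_review("def f(x): return 1/x"))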

Hallucinations: AI can invent libraries that don't exist or propose code that doesn't compile. Always test the generated code.

Business context: AI does not know your specific business constraints or the undocumented history of a project.

Measuring impact to convince stakeholders

To convince stakeholders and sustain the use of AI, track these KPIs:

  • Production defect rate: target -30 % in 6 months
  • Code review time: target -40 % thanks to AI pre-analysis
  • Test coverage: target +20 %
  • Bug detection time: from days to hours

Conclusion: quality as a team culture

AI can't work miracles, but it can democratise good practice. By equipping each team member with intelligent checklists and prompts adapted to their role, you can turn quality into a collective reflex.

Start small: choose a checklist, a prompt, a stage in the pipeline. Measure the results. Iterate. Software quality is no longer a distant goal, it's a continuous process that AI is finally making accessible to everyone.

Our expert

Jean-Louis Guenego

AI, software quality, software architecture

IT consultant and trainer since 1998. A former student at ENS Cachan, he has worked with major institutions [...].
