
Best practices for working as a team with AI

Published on 24 February 2026

Working as a team with AI is becoming a major managerial challenge as these tools become embedded in professional practice. For managers, it is no longer a question of applying the brakes or letting things run unchecked, but of establishing an operational framework. How do you allocate roles between employees and AI, secure usage and maintain responsibility for deliverables? Find out from Gauthier Lamothe, an experienced entrepreneur and trainer in AI for management, how to turn AI into an ally of teamwork and sustainable performance.

Working as a team with AI: best practices

Generative artificial intelligence has rapidly become an integral part of the day-to-day work of professional teams. According to McKinsey, almost 80% of organisations worldwide were using generative AI tools in 2025, compared with 65% a year earlier and 33% in 2023. Poorly managed, this adoption exposes companies to risks in terms of quality, data security and liability. Well integrated collectively, AI becomes a powerful performance lever.

Here are some concrete benchmarks for:

  • working effectively as a team with AI
  • clarifying human and automated roles
  • securing professional use

Why work as a team with AI?

In many organisations, generative AI is either banned or tolerated without a clear framework. A 2024 Cisco survey shows that 27% of companies had initially banned generative AI tools, while 63% restricted the types of data that could be submitted to them. These bans, however, do not reflect actual usage.

According to a Gartner study published in early 2025, 69% of employees use AI tools without official validation when their company does not provide them with a suitable solution. This shadow AI phenomenon creates a gap between formal rules and the teams' day-to-day practice.

Integrating AI collectively therefore means moving beyond binary logic. It is not a question of banning everything or delegating everything, but of defining a shared framework aligned with actual usage and business objectives.

Clarifying roles: what AI can do and what it should not do

Generative AI can be thought of as an augmented teammate. It speeds up certain tasks, but it can neither take decisions nor assume professional responsibility. The CNIL reminds us that the final decision and the associated responsibility always remain human. For example, the people who select the dataset may de facto be regarded as data controllers, and must therefore ensure compliance with the regulations, bearing liability in the event of non-compliance.

Recommended distribution of roles for working as a team with AI

Type of task | Contribution of AI | Essential human role
Editorial | Draft generation, reformulation | Validation of content and tone
Analysis | Suggestions, summaries | Choice, arbitration, interpretation
Decision | Scenario simulation | Final responsibility
Data | Formatting, summary | Reliability checks

This clarification limits the illusions of total delegation and secures the collective production of deliverables.

Working as a team with AI: transparency and a shared framework for use

To provide a framework for AI use, many organisations are introducing usage charters. The CNIL also offers numerous resources to help companies comply with the GDPR, whether when using AI with customer data or when training a customised AI.

Key principles of a responsible use charter

  1. Adapting the tool to the sensitivity of the data
  2. Making usage visible and shared within the team
  3. Differentiating use cases by business line

Example of an operational framework

Type of data | Recommended tool | Associated rule
Confidential information | Secure internal AI | Strictly controlled use
Anonymised data | Public AI (ChatGPT, Gemini, etc.) | Transparency between teams
Public data | Public AI | Free use

This type of framework is more effective than blanket bans and significantly reduces the risk of data leakage or misuse.
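As an illustration, here is a minimal sketch in Python of how such a framework could be encoded as a simple routing rule. The tool names merely mirror the table above; this is one possible shape, not a standard implementation.

```python
# A minimal sketch of the data-sensitivity framework above.
# Each sensitivity level maps to an authorised tool and its associated rule.

from enum import Enum


class Sensitivity(Enum):
    CONFIDENTIAL = "confidential"
    ANONYMISED = "anonymised"
    PUBLIC = "public"


# Mirrors the table: sensitivity -> (recommended tool, associated rule).
POLICY = {
    Sensitivity.CONFIDENTIAL: ("secure internal AI", "strictly controlled use"),
    Sensitivity.ANONYMISED: ("public AI (ChatGPT, Gemini, etc.)", "transparency between teams"),
    Sensitivity.PUBLIC: ("public AI", "free use"),
}


def route_request(sensitivity: Sensitivity) -> str:
    """Return the tool authorised for this level of data sensitivity."""
    tool, rule = POLICY[sensitivity]
    return f"Use {tool} ({rule})"


if __name__ == "__main__":
    print(route_request(Sensitivity.CONFIDENTIAL))
    # -> Use secure internal AI (strictly controlled use)
```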

"In training, I tend to tell participants that when you use a public AI, you take but you give nothing, as far as information is concerned."

Maintaining human responsibility for deliverables

European and French recommendations all agree on one principle: a deliverable produced with AI must always be attributable to an identified person. The European AI regulation (AI Act) stresses the need to maintain clear human responsibility, particularly in professional use.

In practice:

  • the project manager validates the deliverable
  • the manager arbitrates in case of doubt
  • the decision-maker accepts the consequences

This simple rule makes processes more secure and preserves trust within teams and with customers.

Encourage collective iteration rather than individual use

The most effective uses of AI are collective. According to the MIT Sloan Management Review, in collaboration with the Boston Consulting Group, organisations that support AI through shared practices achieve better results than those that let each employee experiment in isolation.

Some teams are setting up shared prompt libraries. This is good practice, but only if two conditions are met:

1/ include a chain of prompts rather than a single, overloaded prompt (see the sketch below)

2/ subject them to expert review

It is an employee's expertise that makes it possible to see that the same prompt in two different contexts can produce a deliverable that is sometimes satisfactory, sometimes unusable.
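To make the first condition concrete, here is a minimal sketch of a prompt chain: three small, reviewable steps instead of one overloaded prompt. The `call_model` helper is a hypothetical placeholder for whichever approved tool the team uses, not a real API.

```python
# A minimal sketch of a prompt chain. `call_model` is a hypothetical
# placeholder for the team's approved AI tool, not a real library call.

def call_model(prompt: str) -> str:
    """Placeholder: wire this to your organisation's approved AI tool."""
    raise NotImplementedError("Connect this to an approved tool.")


def draft_summary(source_text: str) -> str:
    # Step 1: extract the key points only (small, reviewable prompt).
    points = call_model(f"List the five key points of this text:\n{source_text}")
    # Step 2: turn those points into a first draft.
    draft = call_model(f"Write a one-paragraph summary from these points:\n{points}")
    # Step 3: adjust the tone; a human expert still validates the final output.
    return call_model(f"Rewrite this in a neutral, professional tone:\n{draft}")
```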

Best practice

  • Documenting prompts by use case (see the sketch after the checklist below)
  • Identifying limits and points to watch out for
  • Updating versions regularly

Validation checklist

  • Prompt tested in real-life situations
  • Clearly identified objective
  • Result reviewed and corrected by a human
  • Archived and accessible version
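As a sketch of what this documentation might look like in practice, here is one possible shape for a prompt library entry. The field names (use_case, limits, version, etc.) are illustrative assumptions, not a standard.

```python
# A minimal sketch of one entry in a shared prompt library.
# Field names are illustrative assumptions, not a standard schema.

from dataclasses import dataclass, field


@dataclass
class PromptEntry:
    use_case: str                    # business context the prompt was tested in
    prompt: str                      # the prompt text itself
    objective: str                   # clearly identified objective
    limits: list[str] = field(default_factory=list)  # known points to watch
    version: str = "1.0"             # updated whenever the prompt changes
    reviewed_by: str = ""            # the human who validated the output


entry = PromptEntry(
    use_case="Marketing - campaign variants",
    prompt="Propose three subject lines for the attached newsletter draft.",
    objective="Speed up first drafts; the final choice stays with the creative team",
    limits=["Tone drifts on technical products", "Re-test after tool updates"],
    version="1.2",
    reviewed_by="creative team lead",
)
```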

This approach encourages consistent quality of deliverables and collective skills development.

Respecting actual working hours

AI can speed up certain tasks, but it does not eliminate the complexity of professional work. An MIT study shows that professionals using AI save an average of 30 to 40% of their time on intermediate tasks, but that the analysis, decision and validation phases remain unavoidable.

Working as a team with AI: examples of uses by sector

Sector | Use of AI | Human validation
Marketing | Generation of campaign variants | Creative team
Human resources | Suggested interview grids | Recruiter
Legal | Simplification of contracts | Lawyer
Finance | Data visualisation and synthesis | Analyst
IT | Code proposals or documentation | Developer

In all cases, the AI assists, the human decides and takes responsibility.

Ultimately, working as a team with AI requires method, transparency and responsibility. Used as an augmented teammate, AI strengthens collective performance. Used without a framework, it amplifies existing weaknesses in the organisation. Implementing shared practices enables teams to take full advantage of AI while retaining control of their decisions, data and deliverables.

Our expert

Gauthier Lamothe

Management, entrepreneurship, education

Co-founder of the company MuKn, he is an experienced entrepreneur, particularly in audiovisual production […]
