Working as a team with AI is becoming a major managerial challenge as these tools become established in professional practice. For managers, it is no longer a question of applying the brakes or letting things run their course, but of establishing an operational framework. How do you allocate roles between employees and AI, secure usage, and maintain responsibility for deliverables? Gauthier Lamothe, an experienced entrepreneur and trainer in AI for management, explains how to turn AI into an ally of teamwork and sustainable performance.

Generative artificial intelligence has rapidly become an integral part of the day-to-day work of professional teams. According to McKinsey, almost 80% of organisations worldwide were using generative AI tools in 2025, compared with 65% a year earlier and 33% in 2023. Poorly managed, this adoption exposes companies to risks in terms of quality, data security and liability. Well integrated collectively, AI becomes a powerful performance lever.
Here are concrete benchmarks for:
- working effectively as a team with AI
- clarifying human and automated roles
- securing professional use
Why work as a team with AI?
In many organisations, generative AI is either banned or tolerated without a clear framework. A 2024 Cisco survey shows that 27% of companies initially banned generative AI tools, while 63% restricted the types of data that can be entered into them. However, these bans do not reflect actual usage.
According to a Gartner study published in early 2025, 69% of employees use AI tools without official approval when their company does not provide them with a suitable solution. This shadow AI phenomenon creates a gap between formal rules and the teams' day-to-day practice.
Integrating AI in a collective way therefore involves going beyond binary logic. It's not a question of prohibiting or delegating everything, but of defining a shared framework, aligned with actual usage and business objectives.
[Training]
What if artificial intelligence were to become your best ally for more humane and effective management? The training programme Managers, boosting performance with AI will show you how to take advantage of artificial intelligence to optimise the day-to-day management of your team, while strengthening proximity, communication and ethical decision-making.
Clarifying roles: what AI can do and what it should not do
Generative AI can be considered an augmented teammate. It speeds up certain tasks, but it can neither take decisions nor assume professional responsibility. The CNIL reminds us that the final decision, and the responsibility that goes with it, always remain human. The people who select the data used may thus be regarded as de facto data controllers: they must ensure regulatory compliance, and bear liability in the event of non-compliance.
Recommended distribution of roles for working as a team with AI
| Type of task | Contribution of AI | Essential human role |
| --- | --- | --- |
| Editorial | Draft generation, reformulation | Validation of content and tone |
| Analysis | Suggestions, summaries | Choice, arbitration, interpretation |
| Decision | Scenario simulation | Final responsibility |
| Data | Formatting, summary | Reliability checks |
This clarification limits the illusions of total delegation and secures the collective production of deliverables.
[Testimonial]
«AI is a fantastic tool for suggesting ways of developing our new products: sometimes it even allows us to innovate on the basis of our remaining stocks of materials. But every time we failed to check its suggestions carefully, small errors crept into the reasoning. Without this rigorous checking built into our work, a good proportion of the products we design with the help of AI would have been defective or built to impractical specifications, and therefore commercially unviable.»
Michael Ronsin - Product specifications manager for the Hook brand
[Also read] Legal AI: automation, ethics and data protection
Working as a team with AI: transparency and a shared framework for use
To provide a framework for the use of AI, many organisations are introducing user charters. The CNIL also provides numerous resources to help companies comply with the GDPR, whether in using AI with customer data or in training personalised AI models.
Key principles of a responsible use charter
- Adapting the tool to the sensitivity of the data
- Making usage visible and shared within the team
- Differentiating use cases by business line
Example of an operational framework
| Type of data | Recommended tool | Associated rule |
| --- | --- | --- |
| Confidential information | Secure internal AI | Strictly controlled use |
| Anonymised data | Public AI (ChatGPT, Gemini, etc.) | Transparency between teams |
| Public data | Public AI | Free use |
This type of framework is more effective than blanket bans, and it significantly reduces the risk of data leakage or misuse.
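One way to make such a framework operational is to encode it as a simple routing rule that an internal tool or AI gateway could apply before a request goes out. The sketch below is a minimal, hypothetical illustration using the categories from the example table; the function and category names are assumptions, not part of any product.

```python
# Minimal sketch of a data-sensitivity routing rule.
# Categories, tool names and rules mirror the example framework table;
# all identifiers here are illustrative assumptions.
RULES = {
    "confidential": ("secure internal AI", "strictly controlled use"),
    "anonymised": ("public AI", "transparency between teams"),
    "public": ("public AI", "free use"),
}

def recommended_tool(data_type: str) -> tuple[str, str]:
    """Return (recommended tool, associated rule) for a data type.

    Unknown or unclassified data defaults to the strictest rule,
    the safe choice when sensitivity is uncertain.
    """
    return RULES.get(data_type, RULES["confidential"])
```

Defaulting unknown categories to the strictest rule reflects the article's logic: the framework should fail safe rather than fail open.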
«In training, I tend to tell participants that when you use a public AI, you take information but you don't give any away.»
Maintaining human responsibility for deliverables
European and French recommendations all agree on one principle: a deliverable produced with AI always involves an identified person. The European regulation on AI (AI Act) stresses the need to maintain clear human responsibility, particularly in professional use.
In practice:
- the project manager validates the deliverable
- the manager arbitrates in case of doubt
- the decision-maker accepts the consequences
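The principle that a deliverable always involves an identified person can be enforced in tooling as well as in process: a deliverable record that cannot be marked releasable without a named human validator. The sketch below is purely illustrative; the class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deliverable:
    """Illustrative sign-off record for a (possibly AI-assisted) deliverable."""
    title: str
    ai_assisted: bool
    validated_by: str = ""  # the identified person who takes responsibility

def is_releasable(d: Deliverable) -> bool:
    # Whether or not AI was involved, a deliverable needs a named validator;
    # this encodes the "always an identified person" rule.
    return bool(d.validated_by.strip())
```

A blank validator field blocks release, which is exactly the behaviour the AI Act's human-responsibility principle calls for in professional use.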
This simple rule makes processes more secure and preserves trust within teams and with customers.
Encourage collective iteration rather than individual use
The most effective uses of AI are collective. According to the MIT Sloan Management Review, in collaboration with the Boston Consulting Group, organisations that support AI through shared practices achieve better results than those that let each employee experiment in isolation.
Some teams are setting up shared prompt libraries. This is good practice, but only on two conditions:
1/ favour chains of prompts rather than a single, overly heavy prompt
2/ subject them to expert review
It takes an employee's expertise to recognise that the same prompt, used in two different contexts, can produce a deliverable that is sometimes satisfactory and sometimes unusable.
Best practice
- Documenting prompts by use case
- Identifying limits and points to watch out for
- Updating versions regularly
Validation checklist
- Prompt tested in real-life situations
- Clearly identified objective
- Result reviewed and corrected by a human
- Archived and accessible version
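An entry in such a shared prompt library can be modelled as a small record whose fields mirror the validation checklist above. This is a minimal sketch under assumed field names, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class PromptEntry:
    """One entry in a shared prompt library; fields mirror the checklist (illustrative)."""
    use_case: str
    prompt: str
    objective: str                       # clearly identified objective
    tested_in_real_situation: bool = False
    human_reviewed: bool = False         # result reviewed and corrected by a human
    version: int = 1                     # archived, versioned copy
    limits: list[str] = field(default_factory=list)  # points to watch out for

    def passes_checklist(self) -> bool:
        # An entry is publishable only if it was tested, has a stated
        # objective, and its output was reviewed by a human.
        return (
            self.tested_in_real_situation
            and bool(self.objective.strip())
            and self.human_reviewed
        )
```

Keeping the `limits` field alongside the prompt itself is what turns a loose collection of snippets into documented, reviewable team knowledge.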
This approach encourages consistent quality of deliverables and a collective skills development.
[Training]
Prompt engineering aims to optimise communication with generative artificial intelligence models. Would you like to create effective prompts to generate specific responses or carry out precise tasks? Discover the training programme Prompt engineering, communicate effectively with artificial intelligence.
Respecting actual working hours
AI can speed up certain tasks, but it does not eliminate the complexity of professional work. An MIT study shows that professionals using AI save an average of 30 to 40% of their time on intermediate tasks, but that the analysis, decision and validation phases remain unavoidable.
For example:
A strategic analysis requiring two days of human effort cannot be produced reliably in ten minutes. AI, on the other hand, can speed up structuring, idea-finding and intermediate formatting.
Acknowledging this reality avoids unrealistic expectations and operational disappointments.
Working as a team with AI: examples of uses by sector
| Sector | Use of AI | Human validation |
| --- | --- | --- |
| Marketing | Generation of campaign variants | Creative team |
| Human resources | Suggested interview grids | Recruiter |
| Legal | Simplification of contracts | Lawyer |
| Finance | Data visualisation and synthesis | Analyst |
| IT | Code proposal or documentation | Developer |
In all cases, the AI assists, the human decides and takes responsibility.
Ultimately, working as a team with AI requires method, transparency and responsibility. Used as an augmented teammate, AI strengthens collective performance. Used without a framework, it amplifies existing weaknesses in the organisation. Implementing shared practices enables teams to take full advantage of AI while retaining control of their decisions, data and deliverables.