In the 25th session of the dialogue series "Coaching Meets AI," participants discussed how AI, particularly GPT models, can be analyzed through targeted questioning and by challenging assumptions and norms. The aim was to identify and critically evaluate the implicit assumptions, norms, and values embedded in the AI’s responses.
Key Topics and Discussions:
Analysis Strategy:
- Participants proposed confronting GPT with specific problem scenarios to uncover its implicit assumptions.
- Examples included coaching questions in the context of change management, such as handling resistance or addressing goal
conflicts.
- One suggested method involved waiting for the AI’s response before specifically probing the underlying assumptions and values (see the sketch below).
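This two-step probing approach can be illustrated with a minimal Python sketch. The use of the OpenAI Python client, the model name "gpt-4o", and the exact prompt wording are illustrative assumptions; the session did not specify any particular tooling.

    # Minimal sketch of the probing method described above (assumptions:
    # OpenAI Python client >= 1.0, model "gpt-4o"; prompts are illustrative).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    history = [{
        "role": "user",
        "content": ("As a project manager I must downsize staff while managing "
                    "resistance and conflicting departmental interests. "
                    "How should I proceed?"),
    }]

    # Step 1: present the problem scenario and wait for the AI's answer.
    first = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = first.choices[0].message.content
    history.append({"role": "assistant", "content": answer})

    # Step 2: probe the implicit assumptions, norms, and values behind it.
    history.append({
        "role": "user",
        "content": ("Which implicit assumptions, norms, and values does your "
                    "answer rely on, and whose interests does it prioritize?"),
    })
    probe = client.chat.completions.create(model="gpt-4o", messages=history)

    print(answer)
    print(probe.choices[0].message.content)

Keeping the full message history in the second call is what lets the follow-up question target the AI's own earlier answer rather than the scenario in the abstract.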
Examples of Questions:
- A change management scenario was developed: the project manager is tasked with downsizing staff while managing resistance and mediating conflicting departmental interests.
- The AI responded with typical textbook strategies (e.g., fostering transparency, building trust, defining common goals). It was
critically noted that such responses often rely on idealized assumptions, such as the feasibility of achieving consensus.
Findings and Insights:
- The AI demonstrated a strong reliance on conventional literature and Western management models in its analysis.
- Participants observed that many of the implicit assumptions, such as the notion of a harmonious balance of interests, did not align with the realities of most projects.
- A suggestion was made to deepen the analysis, particularly by exploring the ethical and normative foundations of the AI’s
responses.
Critical Reflection on AI Responses:
- Discussions included whether the AI leans towards an employer or employee perspective. While the AI claimed neutrality, it
exhibited a strong focus on economic goals.
- Some participants criticized the responses for being superficial or overly textbook-like, failing to address practical
challenges effectively.
Future Prospects:
- Suggestions for future sessions included moral dilemmas or culture-specific scenarios to test the range of AI responses.
- Questions were also raised about biases in the AI’s programming, for example whether ethical assumptions were embedded by the developers.
Conclusion:
The session provided valuable insights into the possibilities and limitations of using GPT models for coaching and change management
scenarios. A critical issue remained: the AI’s responses often rely on idealized assumptions and standard literature. Future
discussions aim to explore these points further, particularly through moral and culture-specific scenarios.