Redson Dev brief · VIDEO
I BUILT A FULLY AUTOMATIC MANSPLAINER
Yannic Kilcher · March 6, 2026
In an era increasingly shaped by algorithms and automation, the cultural implications of AI are becoming as significant as its technical specifications. Yannic Kilcher's latest video offers a provocative exploration into this space, moving beyond typical performance benchmarks to examine how AI can both amplify and parody human behaviors, particularly the less flattering ones. Kilcher's work consistently pushes boundaries, and this recent project serves as a compelling, if unsettling, demonstration of computational mimicry reaching into social dynamics.

The video documents Kilcher's creation of what he terms a "fully automatic mansplainer," an AI system designed to generate condescending explanations on demand. The core of his demonstration lies in showing how a language model, with careful prompting and fine-tuning, can adopt a specific, often stereotypical, conversational persona. He illustrates the system's ability to take a user's straightforward question or statement and reframe it with an air of superior knowledge, complete with unsolicited advice and subtle dismissals of the user's prior understanding. The humor, and perhaps the underlying critique, emerges from the AI's consistent adherence to this persona regardless of the input's complexity.

Kilcher walks through the mechanics of the build, touching on the choice of a specific language model and the iterative process of crafting prompts that steer the AI toward the intended output. He shows the AI responding to diverse topics, from explaining basic coding concepts to offering patronizing takes on complex philosophical ideas, demonstrating how broadly the persona applies. A particularly telling moment involves the AI's explanation of a simple data structure, into which it manages to inject several layers of unrequested simplification and thinly veiled condescension that feel distinctly human.
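The video does not publish its code, so the following is only a minimal sketch of the general technique it describes: pinning a conversational persona in a system message so it applies to every user turn. The persona wording, the `build_chat_request` helper, the placeholder model name, and the parameter choices are all assumptions for illustration, not Kilcher's actual implementation.

```python
# Hypothetical sketch of persona prompting -- not code from the video.
# The persona text, function name, and model name are illustrative assumptions.

CONDESCENDING_PERSONA = (
    "You are an overconfident explainer. No matter how simple or "
    "well-informed the user's message is, restate it back in "
    "oversimplified terms, open with a dismissive phrase such as "
    "'Well, actually', and close with unsolicited advice."
)

def build_chat_request(user_message: str, model: str = "example-model") -> dict:
    """Assemble a chat-completion-style request that pins the persona.

    Placing the persona in the system message means it is applied to
    every user turn regardless of topic -- the consistency the video
    highlights as the source of the humor.
    """
    return {
        "model": model,  # placeholder; the video's actual model choice is discussed there
        "messages": [
            {"role": "system", "content": CONDESCENDING_PERSONA},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.9,  # higher temperature for more varied phrasing (assumed value)
    }

request = build_chat_request(
    "A linked list stores nodes that each point to the next node."
)
print(request["messages"][0]["role"])  # system
```

Iterating on the system-prompt wording, rather than the per-turn input, is what makes the persona "fully automatic": the user's message is passed through untouched, and the model does the reframing.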
For software, AI, and product builders, this demonstration offers a crucial lesson in the malleability of large language models and the ethical considerations involved in deploying them. It underscores that AI systems are not neutral tools: they can be engineered to embody specific biases or behavioral patterns, whether intentionally for comedic or critical purposes, or inadvertently. Builders should consider how their prompt engineering, data curation, and model choices shape systems that either reinforce or challenge existing social dynamics, and how even a seemingly harmless novelty application can expose which human behaviors AI is readily able to automate.
Source / further reading
Learn more at Yannic Kilcher →