digital-twin
Reverse-engineer how you think and talk. Build a stress-tested AI system prompt of yourself.
Most AI 'clones' capture diction. Digital Twin captures decision patterns. The difference is whether the output sounds like you or actually thinks like you.
The Problem
When you paste five blog posts into a custom GPT and tell it to 'write like me,' it copies your sentence structure and loses your actual reasoning. The output sounds vaguely familiar and decides nothing the way you would.
What It Does
- Generates a stress-tested system prompt that captures decision patterns, not just diction
- Includes adversarial test cases to verify the twin holds under pressure
- Works on any LLM — Claude, GPT, Gemini, local models
What Digital Twin actually captures
Standard voice training captures three things: vocabulary, sentence cadence, common phrases. That's the easy part. It's also the part that AI tools have been doing badly for two years.
Digital Twin captures three additional layers most tools skip:
- Decision patterns. When you're given a choice between A and B, which one do you pick and why? The model learns your priorities, not your prose.
- Negative space. What do you refuse to say? What topics do you sidestep? What tones do you reject? This is often more distinctive than what you do say.
- Stress responses. When the input is provocative, hostile, or weird, how do you respond? The skill includes 20 adversarial test cases to verify the twin holds.
The output is a system prompt that any LLM can load. Tested on Claude, GPT-4, Gemini, and several local models.
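Because the output is just a system prompt, wiring it into any chat-style API takes a few lines. A minimal Python sketch of what "any LLM can load" means in practice — the helper name and the inline prompt string are illustrative stand-ins, not part of the skill (in real use you would read the generated prompt file instead):

```python
def build_messages(twin_prompt: str, user_message: str) -> list[dict]:
    """Pair the twin's system prompt with a user turn, using the
    role/content messages format shared by Claude, GPT, Gemini,
    and most local model servers."""
    return [
        {"role": "system", "content": twin_prompt},
        {"role": "user", "content": user_message},
    ]

# In practice twin_prompt is read from the file the skill generates;
# the string below is a placeholder.
messages = build_messages(
    "You are Alex's digital twin. Decide as Alex decides.",
    "Should we sponsor this conference?",
)
# Pass `messages` to your provider's chat endpoint as-is.
```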
Why this is the most-starred WhyStrohm skill
Of all the skills in this package, Digital Twin has the most stars (14) and the most forks (2). The reason: it solves a problem every founder has but nobody else solves well. "Brand voice" tools exist; "personal voice + decision pattern" tools don't.
This is the skill we recommend installing first if you're a solo founder. Brand voice without personal voice produces content that sounds like the company. Personal voice without brand voice produces content that sounds like you. You need both, layered.
Install
git clone https://github.com/whystrohm/digital-twin-of-yourself.git ~/.claude/skills/digital-twin
Restart Claude Code. Run /digital-twin to start the interview.
Full docs on GitHub →
How It Composes
Digital Twin layers personal voice on top of brand voice. media-tsunami captures the brand at company level. Digital Twin captures you specifically. Together they produce content that sounds like the brand and decides like the founder.
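One plausible way to do the layering, assuming both skills have emitted their prompts as text (the example strings and the ordering rule are assumptions, not documented behavior of either skill): concatenate the company-level brand profile first and the personal twin prompt second, so the founder's decision rules sit closest to the conversation and win ties.

```python
def compose_system_prompt(brand_profile: str, twin_prompt: str) -> str:
    """Stack the company-level brand voice (e.g. from media-tsunami)
    under the founder-level twin prompt. Putting the personal layer
    last is an assumption about precedence, not a guarantee."""
    return (
        "## Brand voice (company level)\n"
        + brand_profile.strip()
        + "\n\n## Personal voice (founder level)\n"
        + twin_prompt.strip()
    )

# Placeholder profiles standing in for the two skills' real output.
combined = compose_system_prompt(
    "Plain verbs. No superlatives. Exemplar: 'Ship it, then write about it.'",
    "Prefers concrete numbers over adjectives; refuses hype framing.",
)
```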
Related Skills
media-tsunami
The empirical layer. Extracts brand voice as executable code — cadence, vocabulary, forbidden words, exemplar sentences — serialized as a CLAUDE.md any LLM can load.
Install →
whystrohm-voice-extract
Extract a 6-dimension voice profile from any URL. Generate 15-20 enforceable guardrails. Outputs as CLAUDE.md.
Install →