1. Vision
The ability to conceive what to build and why
Human: THIS purpose, cultural mission 85%
AI: GENERAL options 50%
2. Research
The ability to find information and examples
Human: deep in few, expert navigation 75%
AI: broad scan, fast pull 90%
3. Pattern Recognition
The ability to see structures and apply templates
Human: tacit, intuitive, felt 70%
AI: documented, explicit 80%
4. Contextual Awareness
The ability to understand THIS specific situation
Human: THIS situation, local, tacit 90%
AI: GENERAL common, documented 60%
5. Judgment
The ability to evaluate quality and decide
Human: taste, values, cultural fit 85%
AI: correctness, syntax, logic 75%
6. Execution
The ability to act fast and iterate tirelessly
Human: careful 30%
AI: fast iteration, tireless 95%
Key insight: Human leads in Vision, Contextual Awareness, Judgment (the "THIS specific situation" capacities).
AI leads in Research, Pattern Recognition, Execution (the "general and fast" capacities).
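The split above can be derived mechanically from the listed values. A minimal Python sketch — the percentages come from the capacity list above; the dictionary layout and function name are illustrative assumptions:

```python
# Human vs AI strength per capacity, using the values listed above.
CAPACITIES = {
    "Vision":               {"human": 85, "ai": 50},
    "Research":             {"human": 75, "ai": 90},
    "Pattern Recognition":  {"human": 70, "ai": 80},
    "Contextual Awareness": {"human": 90, "ai": 60},
    "Judgment":             {"human": 85, "ai": 75},
    "Execution":            {"human": 30, "ai": 95},
}

def leader(capacity: str) -> str:
    """Return which side is stronger for the given capacity."""
    scores = CAPACITIES[capacity]
    return "Human" if scores["human"] > scores["ai"] else "AI"

human_led = [c for c in CAPACITIES if leader(c) == "Human"]
ai_led = [c for c in CAPACITIES if leader(c) == "AI"]
# human_led -> ['Vision', 'Contextual Awareness', 'Judgment']
# ai_led    -> ['Research', 'Pattern Recognition', 'Execution']
```

Running this reproduces the key insight: the human-led list is exactly the "THIS specific situation" capacities, the AI-led list the "general and fast" ones.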
6 Capacities × 2 Dimensions
Each capacity has two values: Stakes (how critical the work is) and Domain (what type of knowledge it needs).
■ High Stakes = Human leads
■ Technical = AI helps more
Stakes: Low (AI can handle) ←→ High (Human must lead)
Domain: Cultural (Human expertise) ←→ Technical (AI excels)
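The two axes can be read as a simple decision rule. A hedged sketch — the 0-to-1 scales, the 0.5 thresholds, and the function name are illustrative assumptions, not part of the framework:

```python
def suggested_lead(stakes: float, technical: float) -> str:
    """Map a capacity's position on the two axes to a working mode.

    stakes:    0.0 = low (AI can handle)        ... 1.0 = high (human must lead)
    technical: 0.0 = cultural (human expertise) ... 1.0 = technical (AI excels)
    The 0.5 cutoffs are an illustrative assumption.
    """
    if stakes >= 0.5 and technical < 0.5:
        return "Human-led"   # high stakes in a cultural domain
    if stakes < 0.5 and technical >= 0.5:
        return "AI-led"      # low stakes in a technical domain
    return "Balanced"        # mixed signals: collaborate

suggested_lead(0.9, 0.2)  # -> 'Human-led'  (e.g. cultural mission)
suggested_lead(0.2, 0.9)  # -> 'AI-led'     (e.g. boilerplate code)
suggested_lead(0.8, 0.8)  # -> 'Balanced'   (e.g. technical exploration)
```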
The Complete Framework: 3 Paths to Output
Each path has different quality determinants and ceilings
= typical case for THIS research
Human Alone
No AI assistance — output is determined by:
Capacity: Low 40% | Med 70% | Expert 95%
Time: Hobby 20% | Part-time 50% | Full-time 90%
Breadth: Generalist 50% | Specialist 90%
Learning: New 30% | Mixed 50% | Known 95%
Bottleneck: output is limited by the weakest factor.
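The bottleneck rule is a minimum over the factor scores. A minimal sketch — the weakest-factor rule comes from the text above; the function name and example values are illustrative:

```python
def human_alone_output(capacity: int, time: int, breadth: int, learning: int) -> int:
    """Output quality for the Human Alone path: limited by the weakest factor."""
    return min(capacity, time, breadth, learning)

# An expert (95) working part-time (50) as a specialist (90) on known
# material (95) is still capped at 50 by the time factor:
human_alone_output(95, 50, 90, 95)  # -> 50
```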
AI Alone
No human direction — output is determined by:
Model: Weak 40% | Med 70% | Strong 90%
Task fit: Cultural 30% | Mixed 60% | Tech 90%
Docs: None 20% | Some 50% | Well-documented 80%
Bottleneck: lacks THIS context and cultural values; documentation is sparse.
Human + AI Collaboration
Output affected by QUALITY FACTORS: Low ↔ High
Ceilings: Human alone 75% | AI alone 55% | Human+AI 85%
← HUMAN-LED: You control these — AI just responds
Iteration: 1 shot 20% | accept first 30% | no refine 25% ↔ 5+ rounds 80% | rebuild OK 85% | pivot freely 90%
Context given: "fix it" 15% | no context 25% | unclear goal 30% ↔ examples 80% | constraints 85% | why + what 90%
Verification: trust blindly 10% | copy-paste 20% | no check 15% ↔ run tests 85% | check logic 90% | expert eye 95%
Scoping: "build app" 15% | huge scope 20% | no steps 25% ↔ 1 file 85% | 1 function 90% | clear scope 95%
Feedback: "wrong" 10% | no why 20% | vague reject 25% ↔ explain why 85% | show correct 90% | teach pattern 95%
AI-SIDE: Depends on the AI's training data & capabilities →
Domain knowledge: Vietnamese 40% | niche 40% | đàn tranh 30% ↔ JS/Web 90% | audio APIs 85% | common libs 95%
Tooling: chat only 30% | no exec 40% | no files 50% ↔ code exec 90% | file I/O 85% | web search 80%
Context & memory: 4K tokens 30% | no memory 40% | fresh start 50% ↔ 200K+ 85% | file access 90% | project mem 80%
CEILING 85%
Limited by: Domain (35%) — Vietnamese music is obscure to AI
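The same weakest-link logic explains the 85% ceiling and the Domain limit. A hedged sketch — the min-and-cap composition is an assumption about how the factors combine; the example values echo the bars above:

```python
PATH_CEILINGS = {"human_alone": 75, "ai_alone": 55, "human_ai": 85}

def collaboration_output(quality_factors, ai_factors,
                         ceiling=PATH_CEILINGS["human_ai"]):
    """Output is the weakest factor on either side, capped by the path ceiling."""
    return min(min(quality_factors), min(ai_factors), ceiling)

# Strong human-led practice (iteration 85, context 90, verification 90,
# scoping 90, feedback 90) is still bottlenecked by the AI's weak domain
# knowledge of Vietnamese music (35):
collaboration_output([85, 90, 90, 90, 90], [35, 85, 85])  # -> 35
```

With no weak factor on either side, the same function returns the 85% path ceiling, matching the "Human+AI 85%" figure above.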
Quality Factors are the BRIDGE — they're the skills that make Human + AI work together
Key Insight: No stage has a fixed color — each can be Human-led, Balanced, or AI-led depending on the scenario.
Internal Feedback (self, team, iteration; ↩ any stage) → Delivery (ship to higher tier) → External Feedback (users, stakeholders; ↩ any stage)
Both feedback types can return to ANY stage, depending on what the feedback reveals.
Key Insight: Same stage, different scenarios = different Human/AI balance. Cultural mission = human-dominant. Technical exploration = balanced. Brainstorm = AI generates, human judges.
Key Insight: Cultural research = human-led (ethnographic). Literature scan = balanced (AI gathers, human filters). API survey = AI-led (technical knowledge).
Key Insight: Cultural core = human decides. Impact/effort matrix = true partnership. Sprint planning = AI generates, human approves.
Key Insight: Cultural framework = human owns. Component architecture = true partnership. Technical scouting = AI explores fast.
Key Insight: Core logic = human owns. Integration = true partnership. Boilerplate = AI generates fast.
Key Insight: Cultural quality = human only. Functional tests = partnership. Automated tests = AI runs, human interprets.
Key Insight: Public narrative = human owns. Thesis defense = human defends, AI helps prepare. CI/CD = AI automates.
Reading the Bars:
Format: Human%:AI% per capacity (e.g., 80H:20A means 80% Human, 20% AI for that capacity)
Totals can exceed 100% because Human and AI effort are independent. High total = intensive task. Low total = light task.
Grayed bars (--) = capacity not relevant for this approach. Yellow (50:50) = balanced, both contribute equally.
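The bar notation can be parsed mechanically. An illustrative parser — the function name is an assumption; the "80H:20A" and "--" formats come from the legend above:

```python
import re

def parse_bar(bar: str):
    """Parse one capacity bar, e.g. '80H:20A' -> human/ai shares, '--' -> None."""
    if bar == "--":
        return None  # capacity not relevant for this approach
    m = re.fullmatch(r"(\d+)H:(\d+)A", bar)
    human, ai = int(m.group(1)), int(m.group(2))
    # Totals may exceed 100 because human and AI effort are independent.
    return {"human": human, "ai": ai, "total": human + ai}

parse_bar("80H:20A")  # -> {'human': 80, 'ai': 20, 'total': 100}
parse_bar("60H:70A")  # -> {'human': 60, 'ai': 70, 'total': 130} (intensive task)
parse_bar("--")       # -> None
```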
Key Insight: Same goal, different approaches yield different effort profiles and tradeoffs. Cultural feedback needs depth (A1) or speed (A2). Pivot decisions can be balanced (B1), gut-led (B2), or data-led (B3). Analytics can be automated (C1) or human-interpreted (C2).