AI Product Researcher — GenAI Systems for Community Services
A Better Community • 2025.03 — Present
User Research
Capability Assessment
Prompt & Flow Design
RAG Systems
Knowledge Structuring
Behavior Analysis
AI Product Strategy
Human-Centered AI
Product Research · User Understanding · AI Systems Design · Capability Evaluation
What I Worked On
Focused on understanding how AI systems interact with real-world users and organizational workflows, especially in low-digital-literacy contexts:
• Research Track 1: User Capability & Interaction Modeling
- Designed structured interview frameworks to assess user capability (digital literacy, comprehension, decision behavior) across different groups (e.g., elderly vs. educators).
- Translated qualitative interview data into personas, intent structures, and behavioral patterns to guide AI system design.
- Identified key gaps between “what users can do” vs. “what AI expects,” informing interaction simplification strategies.
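The capability-gap analysis above can be illustrated with a minimal sketch: compare a user's assessed capability levels against what an interaction step demands. The dimensions, the 1–5 scale, and the example profiles are hypothetical, not the actual interview framework.

```python
# Hypothetical capability-gap check: flag dimensions where an
# interaction step demands more than the user's assessed level.
# Dimension names and the 1-5 scale are illustrative only.

def capability_gaps(user: dict, step_demands: dict) -> dict:
    """Return each dimension where the step's demand exceeds the user's level."""
    return {dim: step_demands[dim] - user.get(dim, 0)
            for dim in step_demands
            if step_demands[dim] > user.get(dim, 0)}

# Example: an elderly user facing an open-ended prompt step.
elderly_user = {"digital_literacy": 2, "reading_comprehension": 4, "abstraction": 2}
open_prompt_step = {"digital_literacy": 3, "abstraction": 4}

print(capability_gaps(elderly_user, open_prompt_step))
# -> {'digital_literacy': 1, 'abstraction': 2}: simplify this step for this group
```

Any non-empty result signals an interaction that should be simplified for that user group.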
• Research Track 2: AI System Design & Knowledge Structuring
- Designed multi-turn interaction frameworks (discovery → intent clarification → execution → fallback → handoff) to improve task success and reduce confusion.
- Built structured knowledge bases that combine text and video content with retrieval pipelines to support consistent, scalable responses.
- Developed prompt strategies, guardrails, and role definitions to improve reliability and reduce hallucination risks.
- Diagnosed system-level issues such as retrieval gaps, latency, and response inconsistency, and proposed iterative improvements.
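The multi-turn framework above can be sketched as a small turn-level state machine. This is a minimal illustration under assumptions: the stage names, retry budget, and transition rules here are stand-ins, not the deployed design.

```python
from enum import Enum, auto

class Stage(Enum):
    DISCOVERY = auto()  # learn what the user is trying to do
    CLARIFY = auto()    # confirm or narrow intent
    EXECUTE = auto()    # attempt the task
    FALLBACK = auto()   # recover from a failed attempt
    HANDOFF = auto()    # escalate to a human
    DONE = auto()       # task completed

def next_stage(stage: Stage, intent_clear: bool, task_succeeded: bool,
               retries: int, max_retries: int = 2) -> Stage:
    """Advance the conversation by one turn (illustrative transition rules)."""
    if stage is Stage.DISCOVERY:
        return Stage.CLARIFY
    if stage is Stage.CLARIFY:
        return Stage.EXECUTE if intent_clear else Stage.CLARIFY
    if stage is Stage.EXECUTE:
        return Stage.DONE if task_succeeded else Stage.FALLBACK
    if stage is Stage.FALLBACK:
        # retry via clarification until the budget is spent, then hand off
        return Stage.HANDOFF if retries >= max_retries else Stage.CLARIFY
    return stage  # HANDOFF and DONE are terminal
```

Making the fallback-to-handoff path explicit is what keeps low-literacy users from looping: after a bounded number of failed attempts, the system routes to a person rather than re-prompting.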
• Research Track 3: Deployment, Adoption & Organizational Integration
- Studied real-world deployment challenges including permission control, workflow integration, and operational alignment.
- Designed onboarding, training materials, and scenario-based demos to enable non-technical teams to use AI systems effectively.
- Analyzed user interaction data and feedback loops to refine system behavior and improve adoption outcomes.
• Research Track 4: AI Learning & Capability Development
- Participated as a technical mentor in AI co-learning (“vibe coding”) programs, supporting participants in building 0→1 AI applications.
- Observed differences in learning patterns, abstraction ability, and execution across participants, forming early insights into capability-based evaluation.
Key Takeaways
• AI is ultimately a human problem, not a technical one: the biggest gap is not model capability, but the mismatch between system design and human understanding.
• Capability-aware design is critical: users differ significantly in cognitive load tolerance, abstraction ability, and interaction behavior. Systems must adapt to people, not the other way around.
• From “feature building” to “system thinking”: effective AI products require integrating prompts, workflows, knowledge systems, and guardrails into one coherent structure.
• Knowledge structuring is more valuable than model tuning: well-organized, modular knowledge systems consistently outperform ad-hoc or prompt-only solutions.
• Real-world deployment is a socio-technical problem: adoption depends on trust, clarity, and usability as much as on system performance.
• Interviews are not just for requirements — they reveal capability: user behavior, confusion patterns, and decision-making processes are strong signals of underlying ability.
• Early signals of talent are observable in learning environments: through AI co-building, differences in learning speed, abstraction, and execution become visible beyond traditional credentials.
• Bridging AI and people requires interdisciplinary thinking: combining product logic, behavioral understanding, and technical intuition is key to designing meaningful systems.