A decade delivering AI to Korea's largest enterprises — now building LLM-based agent systems as a Forward Deployed Engineer. 13 AI patents. 3 AI agents deployed. 20,000+ pharmacy network.
Direct alignment between Wonderful's FDE & Field CTO requirements and my 10+ year track record of building and deploying AI systems for enterprise customers.
Currently serving as CTO at Pevo across two enterprise engagements in South Korea's healthcare ecosystem, owning system architecture, database design, DevOps, MLOps, and security.
Market leader in pharmacy POS systems and prescription processing with ~50% domestic market share, serving a 20,000+ pharmacy network nationwide. Deployed 2 AI agents (document + voice) on top of its existing infrastructure.
One of South Korea's largest non-life insurers. Conducted a proof-of-concept for automated insurance claim damage assessment using multi-agent AI architecture with human-in-the-loop verification.
Led all technical decisions across healthcare and insurance AI systems at Pevo. Prior enterprise track record detailed below.
Built and operated AI chatbot systems for major Korean enterprises: Amore Pacific (Etude House cosmetics recommendation), Hyundai Motor Group (Kona test-drive chatbot), and Gangnam District Office (public service inquiries). Managed end-to-end delivery including regulatory compliance, data security, and production operations.
Presented "AI Platforms for Game Customer Service" at Inven Game Conference 2016 (IGC), one of Korea's largest gaming conferences. Demonstrated NLP-based customer inquiry classification using multi-dimensional vector transformation — the same foundational approach that later evolved into today's LLM-based agent systems.
Each agent addresses a distinct operational challenge — from document processing to real-time voice interaction — connected through shared OCR/LLM infrastructure.
Multi-agent system that automates the labor-intensive process of reviewing veterinary medical receipts and determining insurance coverage eligibility. Built for KB Insurance's pet insurance division, reducing claim processing time while maintaining accuracy through human-in-the-loop verification.
OCR Module → LLM structured extraction. Handles receipts, ID cards, pet registration docs, and bank account documents. Vision AI + Medical AI agents digitize messy handwritten receipts into normalized JSON.
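To make the OCR → LLM handoff concrete, here is a minimal sketch of the normalization layer that would sit between the LLM's JSON output and downstream classification. The field names, KRW amounts, and the `normalize_llm_output` helper are illustrative assumptions, not the production schema.

```python
import json
from dataclasses import dataclass

@dataclass
class ReceiptItem:
    name: str
    quantity: int
    unit_price: int  # KRW
    amount: int      # KRW

def normalize_llm_output(raw_json: str) -> list[ReceiptItem]:
    """Validate the LLM's structured extraction and normalize amounts."""
    items = []
    for entry in json.loads(raw_json).get("items", []):
        qty = int(entry.get("quantity", 1))
        unit = int(str(entry.get("unit_price", 0)).replace(",", ""))
        amount = int(str(entry.get("amount", qty * unit)).replace(",", ""))
        if amount != qty * unit:
            # OCR noise on handwritten receipts: trust the line total
            unit = amount // max(qty, 1)
        items.append(ReceiptItem(entry["name"], qty, unit, amount))
    return items
```

A validation pass like this keeps hallucinated or misread numbers from propagating into coverage calculations.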
Strategy pattern per insurer (KB, DB, Meritz, Samsung, Hyundai). Common classification agent (1 LLM call) → per-insurer rule calculation (0 LLM calls). 9-category exclusion framework with surgical procedure linking.
Certified loss adjusters review AI classifications via drag-and-drop interface. Override capability with structured reason codes. Every manual decision is logged to create a continuous feedback loop for model improvement.
Reduced LLM calls from N (one per insurer) to 1 via common classification layer. Individual strategy calculations are pure rule-based — no LLM, no latency. DB-driven rules allow A/B testing without code changes.
Receipt Image → OCR Module → Text Lines → LLM Extraction → Structured Items
                        ↓
          Common Classification Agent        ← 1 LLM call
               ↓               ↓
        covered_items    excluded_items
               ↓               ↓
    KB Strategy   DB Strategy   Meritz   Samsung   ← 0 LLM calls
                        ↓
        Loss Adjuster Review → Approve / Reject / Override
                        ↓
        Feedback Loop → Model Retraining Data
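The routing above can be sketched as a classic strategy pattern: one shared classification pass (the single LLM call), then pure rule-based payout math per insurer. Class names, deductibles, and rates below are assumed examples, not actual policy terms.

```python
from abc import ABC, abstractmethod

class InsurerStrategy(ABC):
    @abstractmethod
    def payout(self, covered_total: int) -> int: ...

class KBStrategy(InsurerStrategy):
    DEDUCTIBLE, RATE = 10_000, 0.7  # assumed example terms (KRW)
    def payout(self, covered_total: int) -> int:
        return max(0, int((covered_total - self.DEDUCTIBLE) * self.RATE))

class DBStrategy(InsurerStrategy):
    DEDUCTIBLE, RATE = 30_000, 0.8
    def payout(self, covered_total: int) -> int:
        return max(0, int((covered_total - self.DEDUCTIBLE) * self.RATE))

STRATEGIES: dict[str, InsurerStrategy] = {"KB": KBStrategy(), "DB": DBStrategy()}

def assess(items: list[tuple[int, bool]], insurer: str) -> int:
    """items = [(amount_krw, covered?)] as labeled by the shared
    classification agent; the per-insurer step is LLM-free."""
    covered_total = sum(amount for amount, covered in items if covered)
    return STRATEGIES[insurer].payout(covered_total)
```

Because the strategies are plain arithmetic, swapping or A/B-testing an insurer's rules is a data change, not a model change.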
Extracts structured data from 20+ hospital-specific diabetes prescription formats using the same OCR + LLM pipeline as Agent 01 — then automatically submits reports to South Korea's National Health Insurance Service (NHIS). This agent's prescription analysis output directly powers Agent 03's personalized voice interactions.
Auto-detects 3 prescription types (general diabetes, medical aid, CGM continuous monitoring). Type-specific LLM prompts extract 30+ fields: patient info, diagnosis codes (E10-E14), medication items, dosage schedules, insulin usage, and institutional codes.
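A minimal sketch of the type auto-detection step, assuming keyword cues in the OCR text select one of three type-specific prompts. The keywords and prompt stubs are illustrative, not the production detection rules.

```python
PROMPTS = {
    "cgm": "Extract CGM electrode fields: sensor model, wear period, ...",
    "medical_aid": "Extract medical-aid fields: aid class, copay tier, ...",
    "general": "Extract general diabetes fields: diagnosis codes, meds, ...",
}

def detect_type(ocr_text: str) -> str:
    """Route OCR text to one of the three prescription types."""
    text = ocr_text.lower()
    if "연속혈당" in ocr_text or "cgm" in text:  # continuous glucose monitoring
        return "cgm"
    if "의료급여" in ocr_text:                   # medical aid
        return "medical_aid"
    return "general"

def build_prompt(ocr_text: str) -> str:
    return PROMPTS[detect_type(ocr_text)]
```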
Extracted patient info + diagnosis codes are automatically formatted and submitted to the National Health Insurance Service. Dynamic protocol adaptation based on diabetes type and level. End-to-end automation from paper prescription to government submission.
Prescription Image → OCR Module → Text → Type Auto-Detection
                         ↓
    ┌────────────────────┼────────────────────┐
General Diabetes    Medical Aid        CGM Electrode
    └────────────────────┼────────────────────┘
                         ↓
              LLM (type-specific prompt)
                         ↓
            Structured Prescription JSON
                 ↓                 ↓
         NHIS Auto-Report   Agent 03 (Voice AI)
                            medication data feeds personalized calls
A real-time voice AI agent that calls patients at scheduled times to verify medication intake, check for side effects, and escalate safety concerns to their prescribing pharmacy or hospital. Built on Agent 02's prescription analysis — the system knows exactly which medications each patient takes, their dosage schedule, and relevant drug interactions.
Instead of streaming raw audio to the server (which degrades quality over mobile networks), the system performs on-device STT and sends only text over a dedicated text channel. A separate audio channel handles TTS playback from the server. This dual-channel WebSocket design minimizes latency and preserves voice quality — critical for elderly patients who make up the majority of medication adherence users.
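The dual-channel design can be sketched as session bookkeeping on the server: each call binds one text socket (STT transcripts in, LLM replies out) and one audio socket (TTS chunks out) under a shared session id. Class and function names are illustrative.

```python
class CallSession:
    def __init__(self, session_id: str):
        self.session_id = session_id
        self.text_ws = None    # set when the text channel connects
        self.audio_ws = None   # set when the audio channel connects

    def attach(self, channel: str, ws) -> None:
        if channel == "text":
            self.text_ws = ws
        elif channel == "audio":
            self.audio_ws = ws
        else:
            raise ValueError(f"unknown channel: {channel}")

    @property
    def ready(self) -> bool:
        # A call goes live only once both channels are attached.
        return self.text_ws is not None and self.audio_ws is not None

SESSIONS: dict[str, CallSession] = {}

def register(session_id: str, channel: str, ws) -> CallSession:
    """Called from each WebSocket handshake handler."""
    session = SESSIONS.setdefault(session_id, CallSession(session_id))
    session.attach(channel, ws)
    return session
```

Keeping the channels separate means a congested audio stream never delays transcript delivery, and vice versa.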
Server triggers FCM (Android) / APNs VoIP Push (iOS) at scheduled times. CallKit integration presents native phone UI — patients answer a "real" phone call. Cross-platform: iOS uses AVAudioEngine + CallKit; Android uses WebRTC + FCM data messages.
Pluggable STT providers: OpenAI Whisper, ElevenLabs Scribe, Google Cloud, Naver Clova, and iOS/Android native speech recognition. VAD (Voice Activity Detection) with RMS threshold tuning for speaker vs. earpiece modes. 0.7s silence detection triggers transcription; a 5s max utterance limit caps each turn.
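The endpointing rules above can be sketched as a small state machine: cut the utterance after 0.7 s of trailing silence or at the 5 s hard cap, with mode-specific RMS thresholds. The threshold values and frame size here are assumed for illustration.

```python
import math

THRESHOLDS = {"speaker": 0.02, "earpiece": 0.008}  # assumed RMS levels
SILENCE_SEC, MAX_UTTERANCE_SEC = 0.7, 5.0

def rms(frame: list[float]) -> float:
    return math.sqrt(sum(x * x for x in frame) / len(frame))

class Endpointer:
    def __init__(self, mode: str, frame_sec: float = 0.1):
        self.threshold = THRESHOLDS[mode]
        self.frame_sec = frame_sec
        self.elapsed = self.silence = 0.0

    def push(self, frame: list[float]) -> bool:
        """Feed one audio frame; return True when the utterance
        should be finalized and handed to STT."""
        self.elapsed += self.frame_sec
        if rms(frame) >= self.threshold:
            self.silence = 0.0            # voice activity resets silence
        else:
            self.silence += self.frame_sec
        return self.silence >= SILENCE_SEC or self.elapsed >= MAX_UTTERANCE_SEC
```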
LLM with dynamic system prompt injected with patient's prescription data from Agent 02. Korean AI pharmacist persona with sliding window memory (last 10 turns). Structured conversation flow: greeting → medication check → side-effect screening → safety escalation → farewell.
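A minimal sketch of the dynamic prompt injection and sliding-window memory, assuming a chat-style messages API. The persona wording and prescription fields are placeholders; only the 10-turn window matches the design stated above.

```python
WINDOW_TURNS = 10  # sliding window: keep only the last 10 turns

def build_messages(prescription: dict, history: list[dict],
                   user_text: str) -> list[dict]:
    """Inject the patient's prescription data (from Agent 02) into the
    system prompt and trim conversation history to the window."""
    system = (
        "You are a Korean AI pharmacist. Patient medications: "
        + ", ".join(f"{m['name']} {m['dose']}" for m in prescription["medications"])
        + f". Schedule: {prescription['schedule']}."
    )
    window = history[-WINDOW_TURNS:]
    return [{"role": "system", "content": system}, *window,
            {"role": "user", "content": user_text}]
```

Rebuilding the system prompt per call keeps each conversation grounded in that patient's current prescription rather than a generic persona.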
TTS providers: ElevenLabs (primary), Google WaveNet, Naver Clova. Server generates TTS audio chunks → streams over WebSocket → native audio engine playback. Echo cancellation: the microphone is automatically muted during TTS playback, and the buffer is cleared when TTS ends. TTS caching strategy: pre-generates and caches TTS responses based on patient profile data (medications, schedule, common dialogue patterns), maximizing cache hits to minimize latency during live calls.
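The caching idea can be sketched as follows: deterministic utterances (greetings, per-medication questions) are keyed by voice plus normalized text and pre-generated before the scheduled call. The key scheme and Korean phrasings are illustrative assumptions.

```python
import hashlib

def tts_cache_key(voice_id: str, text: str) -> str:
    """Stable cache key: same voice + same normalized text → same audio."""
    normalized = " ".join(text.split()).lower()
    return f"{voice_id}:{hashlib.sha256(normalized.encode()).hexdigest()[:16]}"

def prewarm_utterances(profile: dict) -> list[str]:
    """Utterances worth pre-generating before the scheduled call."""
    lines = ["안녕하세요! AI 약사입니다."]  # greeting
    for med in profile["medications"]:
        # per-medication adherence question, e.g. "Did you take X today?"
        lines.append(f"오늘 {med['name']} {med['dose']} 복용하셨나요?")
    return lines
```

Because greetings and medication questions are known before the call, most of the first seconds of audio can come straight from cache.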
Server checks dosage schedule → sends VoIP push notification at configured time (e.g., 1:00 PM for morning + lunch meds)
Patient sees incoming call from "AI Pharmacist" — standard phone UI, works on lock screen. Patient taps Accept.
Text channel: carries STT transcripts + LLM responses. Audio channel: carries TTS audio chunks. Both over WSS with session management.
AI: "Good afternoon! Did you take your Metformin 500mg with lunch today?" — generated from Agent 02's prescription data. Patient responds naturally; on-device STT converts to text.
LLM probes for common side effects based on prescribed drugs. "Have you experienced any nausea or dizziness since starting this medication?"
If risk signal detected (adverse reaction, missed doses, concerning symptoms) → system connects patient to prescribing pharmacy or hospital. Transcript saved for clinician review.
Dosage Scheduler → FCM / APNs VoIP Push → CallKit (iOS) / FCM (Android)
                         ↓
                Patient accepts call
                         ↓
          ┌───── Dual WebSocket ─────┐
          │                          │
     Text Channel              Audio Channel
          │                          │
  On-device STT → text        TTS audio chunks
          ↓                          ↑
  LLM + Prescription DB  →    ElevenLabs TTS
  (dynamic prompt injection)
          ↓
  Adherence logged to DB
  Risk? → Pharmacy / Hospital alert
Designed and operated the complete technical stack across both projects — from infrastructure and databases to ML pipelines and the mobile clients.
What draws me to Wonderful is the local-first strategy. AI agents don't succeed through one-size-fits-all deployment — they succeed through deep local adoption, understanding each market's regulations, language nuances, and customer workflows. This is exactly how I've operated: navigating Korean healthcare regulations, building Korean-language voice AI, and adapting to local enterprise procurement processes. Wonderful's approach of embedding with customers locally to drive real adoption, rather than selling from a distance, resonates deeply with how I've built my career.
I'm especially excited about Wonderful's focus on Voice AI agents. Voice is rapidly becoming the defining trend in enterprise AI — and South Korea is the perfect proving ground. The Korean market is saturated with voice-based customer service operations across telecom, banking, insurance, and healthcare. These are high-volume, high-cost call centers ripe for AI transformation. Having already built a production Voice AI agent that makes VoIP calls, handles natural conversation, and integrates with backend systems, I know firsthand both the technical challenges and the massive business opportunity. Voice AI agents aren't just a feature — they're a standalone business model that can replace a significant portion of manual CS costs.
That's why I'm drawn to Wonderful: it's the work I've already been doing, the market I understand deeply, and the future I want to help build at global scale.