AI-native initiatives
AI-led bug fix workstream
What: Partnered with engineering on an LLM-assisted bug-fix program: prompting, reviewing, and validating AI-generated code fixes for open production bugs across iOS and Android codebases
Scale: 500+ bugs processed through the workstream; participated in a weekly buddy system with engineers to iterate on AI-generated fixes
Result: Produced at-scale insights into AI-driven bug resolution; demonstrated that QA engineers can contribute directly to code-level remediation, expanding the traditional QA role
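The review-and-validate loop above can be sketched as a simple decision gate. This is a minimal illustration, not the real pipeline: the `FixCandidate` fields and the ship/iterate/reject rules are hypothetical stand-ins for re-running the bug's reproduction test, checking for regressions, and the buddy-system sign-off.

```python
from dataclasses import dataclass

@dataclass
class FixCandidate:
    bug_id: str
    repro_test_passes: bool      # does the originally failing test now pass?
    regression_failures: int     # new failures introduced by the patch
    reviewed_by_engineer: bool   # buddy-system sign-off

def validate_fix(fix: FixCandidate) -> str:
    """Classify an AI-generated fix: ship, iterate, or reject."""
    if not fix.repro_test_passes:
        return "reject"    # the fix does not address the bug
    if fix.regression_failures > 0:
        return "iterate"   # send back through the AI loop with feedback
    if not fix.reviewed_by_engineer:
        return "iterate"   # needs engineer review before landing
    return "ship"
```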
Reverse task search agent
What: Built an AI agent for task and issue reporting — batch-converts task IDs into compact query URLs, enabling engineers to view dozens of related tasks simultaneously instead of looking them up one by one
Scale: Handles batches of 75+ tasks with automatic URL splitting; promoted through staged rollout (Dev → RC) and shipped as a standalone CLI skill
Result: Eliminated manual one-by-one task lookups; increased operational efficiency across cross-functional teams
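The batching step can be sketched as below. This is a hedged illustration under stated assumptions: the base URL, the `ids` query parameter, and the 2000-character limit are all hypothetical, standing in for whatever the real task tracker accepts.

```python
from urllib.parse import urlencode

MAX_URL_LEN = 2000  # conservative browser-safe limit (assumption)
BASE = "https://tasks.example.com/search"  # illustrative endpoint

def build_query_urls(task_ids: list[str]) -> list[str]:
    """Pack task IDs into as few query URLs as possible,
    splitting automatically when a URL would exceed MAX_URL_LEN."""
    urls: list[str] = []
    batch: list[str] = []
    for tid in task_ids:
        candidate = batch + [tid]
        url = f"{BASE}?{urlencode({'ids': ','.join(candidate)})}"
        if len(url) > MAX_URL_LEN and batch:
            # Current batch is full; emit its URL and start a new one.
            urls.append(f"{BASE}?{urlencode({'ids': ','.join(batch)})}")
            batch = [tid]
        else:
            batch = candidate
    if batch:
        urls.append(f"{BASE}?{urlencode({'ids': ','.join(batch)})}")
    return urls
```

A greedy fill like this keeps the URL count minimal for a fixed ordering, which is why a 75-task batch collapses into a handful of links rather than 75 lookups.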
Self-healing test automation framework
What: Designed a self-healing test automation framework that detects outdated test cases from code diffs and auto-generates updated golden path tests, removing the need for manual test maintenance after code changes
Scale: Achieves F1 scores of 0.97–1.0 on test sets; manages E2E journey test cases with automated weekly summaries
Result: Reduced manual test maintenance burden; distinguished product bugs from documentation drift; enabled auto-generation of test cases from engineering code changes
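The staleness-detection step can be sketched as follows. This is an illustrative sketch, assuming tests are indexed by the source symbols they exercise; the function names, the test index shape, and the unified-diff parsing are hypothetical, not the framework's actual implementation.

```python
import re

def changed_symbols(diff_text: str) -> set[str]:
    """Extract function names whose definitions appear on changed
    lines ('+' or '-') of a unified diff."""
    symbols = set()
    for line in diff_text.splitlines():
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
            m = re.search(r"def\s+(\w+)", line)
            if m:
                symbols.add(m.group(1))
    return symbols

def stale_tests(diff_text: str, test_index: dict[str, set[str]]) -> set[str]:
    """Return tests whose covered symbols intersect the diff —
    candidates for auto-regeneration rather than manual repair."""
    changed = changed_symbols(diff_text)
    return {test for test, covered in test_index.items() if covered & changed}
```

Flagged tests would then feed the generation stage, which rewrites the golden path test against the new signature instead of letting it rot.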
Quality operations
Golden Path user journey program
What: Led partnership with internal beta testing team, UXR, and Product Design to establish "Golden Path" user journey flows, enabling internal testers to provide structured feedback on existing and new features
Scale: 2,600+ survey responses and nearly 400 bugs filed by non-QA testers in a single quarter
Result: Scaled quality signal collection beyond QA; turned internal users into structured, high-signal feedback contributors
VR Engagement oncall and release readiness
What: Served as primary QA oncall and release gatekeeper for the VR companion app across iOS and Android — player UI, content sharing, navigation, media galleries, CSS rendering
Scale: 490+ oncall-tagged bugs filed across 47 product areas (344 oncall, 228 team-attributed); performed release QA signoffs for 5+ major release trains; 2 production oncall rotations spanning ~3 years
Result: 20% reduction in mis-triaged issues; 40% reduction in production bugs in Meta Horizon Mobile; release reports kept engineering and product aligned on blockers and feature readiness
Cross-platform integration testing
What: Led QA for a social media platform's CSS-based share sheet integration with the VR companion app — covering FOA bar rendering, share sheet orientation, and media download flows across iOS and Android
How: Created and maintained structured exploratory testing guides with 7+ wiki revisions; verified fixes across multiple app versions and form factors
Result: Ensured cross-platform share sheet quality at launch
QA documentation and technical output
What: Authored 10+ technical documents including weekly auto-updated QA summaries, user journey test reports, golden path automation guides, and tool comparison analyses; maintained 3 wiki pages covering mobile testing, player UI, and oncall procedures
Result: Established reusable QA documentation standards adopted across the team; enabled other QA engineers to adopt AI-driven testing workflows; organized 59 meetings for team syncs, buddy system check-ins, and tooling coordination
AI agent development
LLM bug fixing
Agentic workflows
Prompt engineering
Release readiness
Triage systems
Cross-platform QA
iOS + Android