I'm Osama, a QA who builds.

I spent 5+ years at Meta making sure products shipped clean — filing bugs, owning release readiness, and running QA for a VR companion app used by millions. Then AI changed what was possible, and I stopped just reporting problems and started building systems to fix them. Now I build products on the side that apply the same thinking: what breaks, why it breaks, and how to design around it from the start.

5+
Years at Meta Reality Labs
2,000+
Regression bugs verified
500+
Bugs tested via AI fix workstream
3,000+
Test cases managed
47
Product areas covered
10+
Technical docs authored
Aug 2024 — Present
Senior Product Quality Analyst
Meta · Reality Labs · Burlingame, CA
AI-native initiatives
AI-led bug fix workstream
What: Partnered with engineering on an LLM-assisted bug fix program — prompting, reviewing, and validating AI-generated code fixes for open production bugs across iOS and Android codebases.
Scale: 500+ bugs processed through the workstream; participated in a weekly buddy system with engineers to iterate on AI-generated fixes.
Result: Generated insights at scale into AI-driven bug resolution; demonstrated that QA engineers can contribute directly to code-level remediation — expanding the traditional QA role.
Reverse task search agent
What: Built an AI agent for task and issue reporting — batch-converts task IDs into compact query URLs, enabling engineers to view dozens of related tasks simultaneously instead of looking them up one by one.
Scale: Handles batches of 75+ tasks with automatic URL splitting; promoted through staged rollout (Dev → RC) and shipped as a standalone CLI skill.
Result: Eliminated manual one-by-one task lookups; increased operational efficiency across cross-functional teams.
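The batching idea behind this can be sketched roughly. The domain, URL format, and length limit below are illustrative assumptions for the sketch, not the internal tool's actual scheme:

```javascript
// Hypothetical sketch: pack task IDs into as few query URLs as possible,
// splitting a batch whenever the next ID would push the URL past a length budget.
const MAX_URL_LENGTH = 2000; // conservative URL length budget (assumption)
const BASE = "https://tasks.example.com/search?ids="; // placeholder URL format

function buildQueryUrls(taskIds) {
  const urls = [];
  let batch = [];
  for (const id of taskIds) {
    const candidate = BASE + [...batch, id].join(",");
    if (batch.length > 0 && candidate.length > MAX_URL_LENGTH) {
      // Current batch is full — emit its URL and start a new batch with this ID.
      urls.push(BASE + batch.join(","));
      batch = [id];
    } else {
      batch.push(id);
    }
  }
  if (batch.length > 0) urls.push(BASE + batch.join(","));
  return urls;
}
```

A batch of 75 short task IDs fits in a single URL under this budget; longer ID lists split automatically across several.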
Self-healing test automation framework
What: Designed a self-healing test automation framework that detects outdated test cases from code diffs and auto-generates updated golden path tests — removing the need for manual test maintenance after code changes.
Scale: Achieves F1 scores of 0.97–1.0 on test sets; manages E2E journey test cases with automated weekly summaries.
Result: Reduced manual test maintenance burden; distinguished product bugs from documentation drift; enabled auto-generation of test cases from engineering code changes.
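One possible signal for detecting outdated tests from a diff — shown here as a hypothetical helper, not the framework's actual implementation — is flagging tests that reference identifiers a code change removed:

```javascript
// Illustrative sketch only: a test is "stale" if any selector or function name
// it references was removed by the diff under analysis.
function findStaleTests(tests, removedIdentifiers) {
  // tests: [{ name, references: ["#selector", "fnName", ...] }]
  return tests.filter(t =>
    t.references.some(ref => removedIdentifiers.includes(ref))
  );
}
```

In practice a framework like this would combine several such signals before deciding whether a failing test indicates a product bug or drift in the test itself.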
Quality operations
Golden Path user journey program
What: Led partnership with internal beta testing team, UXR, and Product Design to establish structured "Golden Path" user journey flows — enabling internal testers to provide structured feedback on existing and new features.
Scale: 2,600+ survey responses and nearly 400 bugs filed by non-QA testers in a single quarter.
Result: Scaled quality signal collection beyond QA; turned internal users into structured, high-signal feedback contributors.
VR Engagement oncall and release readiness
What: Served as primary QA oncall and release gatekeeper for the VR companion app across iOS and Android — player UI, content sharing, navigation, media galleries, CSS rendering.
Scale: 490+ oncall-tagged bugs filed across 47 product areas (344 oncall, 228 team-attributed); performed release QA signoffs for 5+ major release trains; 2 production oncall rotations spanning ~3 years.
Result: 20% reduction in mis-triaged issues; 40% reduction in production bugs in Meta Horizon Mobile; release reports kept engineering and product aligned on blockers and feature readiness.
Cross-platform integration testing
What: Led QA for a social media platform's CSS-based share sheet integration with the VR companion app — covering FOA bar rendering, share sheet orientation, and media download flows across iOS and Android.
How: Created and maintained structured exploratory testing guides with 7+ wiki revisions; verified fixes across multiple app versions and form factors.
Result: Ensured cross-platform share sheet quality at launch.
QA documentation and technical output
What: Authored 10+ technical documents including weekly auto-updated QA summaries, user journey test reports, golden path automation guides, and tool comparison analyses; maintained 3 wiki pages covering mobile testing, player UI, and oncall procedures.
Result: Established reusable QA documentation standards adopted across the team; enabled other QA engineers to adopt AI-driven testing workflows; organized 59 meetings for team syncs, buddy system check-ins, and tooling coordination.
AI agent development · LLM bug fixing · Agentic workflows · Prompt engineering · Release readiness · Triage systems · Cross-platform QA · iOS + Android
Jul 2021 — Aug 2024
Product Quality Analyst
Meta · Reality Labs · Burlingame, CA
Coverage and scale
Multi-team QA across Meta Quest and VR companion apps
What: Supported VR Engagement QA, VR Regression QA, and Ray-Ban Stories QA — providing test coverage and quality insights across Meta Quest products and VR companion apps.
Scale: Identified and verified fixes for 2,000+ regression issues, including 300+ critical launch-blocking bugs; managed a repository of 3,000+ test cases across onshore and offshore testers.
Result: Optimized issue verification pipelines; enabled faster bug resolution prior to key product launches.
Major feature launch testing
What: Led testing strategy and triage for high-profile launches: Reels in VR, Home Feed redesign, VR Search refresh.
How: Coordinated test coverage across onshore and offshore testers; improved issue verification pipelines for pre-launch quality gates.
Result: Smooth rollouts with timely issue resolution on all three launches.
VR/MR testing · Regression QA · Launch readiness · Test case management · Ray-Ban Stories · Offshore coordination
Oct 2020 — Jul 2021
QA Playtester / Product Specialist
Meta · Reality Labs · Remote
Initiative
Ray-Ban Stories beta and dogfooding program
What: Designed and ran an internal beta testing and rotational dogfooding program for the Ray-Ban Stories hardware launch — device distribution, user onboarding, office hours, and structured feedback collection.
Scale: 134 users across 5 weeks; 40+ pre-production devices in rotation; office hours 3× per week; supported 100+ internal beta testers; surfaced 300+ bugs pre-launch; unblocked 100+ participants encountering setup and firmware issues.
Result: Improved pre-release product stability; generated structured dogfooding feedback that directly informed pre-launch product improvements; accelerated issue resolution across distributed teams.
Hardware QA · Beta program design · Dogfooding · Bug triage · Device management
AI + Automation
AI agent development · Prompt engineering · LLM-assisted bug fixing · Agentic workflow design · Test case generation · Self-healing test suites
Quality + Testing
Release readiness · VR/MR QA · Cross-platform mobile · Bug triage + lifecycle · Exploratory testing · Regression QA · Beta program design
Engineering + Tooling
Python · JavaScript · GraphQL · SQL · CI/CD · CLI development · JIRA · Tableau
Product + Operations
Cross-functional leadership · Release reporting · Data-driven triage · Stakeholder alignment · Dogfooding programs
B.S. Business Administration
San Jose State University
Data Analytics Certificate
General Assembly

Personal projects.

Building outside of work — applying the same systems thinking that keeps products shippable to products I'm shipping myself.

Coming soon
Prototype + product decisions in progress
pointd.fyi
Travel rewards optimizer that helps users maximize point and mile value across loyalty programs using real-time award availability data. Built out of a personal obsession with credit card rewards and the frustration of not having a simple tool that actually shows you where your points go furthest.
Frontend
React + Vite · JavaScript · Zustand · Tailwind CSS
Backend & Data
Supabase · Seats.aero API · Google Places API · Pexels API · Live scrapers (Hyatt)
Infra & Deploy
Cloudflare Pages · GitHub Actions · Wrangler · Resend (email)
Testing
Vitest · React Testing Library · Playwright (e2e) · 500+ tests
Coming soon
Prototype + product decisions in progress
BIP — Build in Public
Platform for AI builders to share shipping logs, embed live prototypes, and collect micro-payments — turning the build process itself into content. Targeting resourceful AI builders who are somewhere between non-technical and semi-technical and want a home for their work that isn't just Twitter threads.
Frontend
Next.js 16 (App Router) · React 19 · TypeScript (strict) · Tailwind CSS 4 · Lucide React
Backend & Data
Supabase (auth + db) · TanStack Query · Anthropic SDK
Infra & Deploy
Cloudflare Workers · OpenNext (CF adapter) · Wrangler · GitHub Actions
Testing
Vitest · React Testing Library · Playwright (e2e) · 175+ tests
Active
Deployed across pointd.fyi and BIP via GitHub Actions
qa-agent github ↗
Config-driven QA agent that runs on every PR and on-demand regression. Analyzes diffs, scores risk by surface, identifies coverage gaps, proposes new tests, executes scoped test suites, and posts a structured report as a PR comment. Propose-only output — the agent drafts, you promote. Same agent, different config per repo.
Pipeline (6 steps)
1. Diff analysis
2. Risk scoring
3. Coverage gap detection
4. Test proposals
5. Test execution
6. Synthesis + report
Stack
Node.js (ESM) · Anthropic API (claude-sonnet) · GitHub API · Vitest · Playwright · GitHub Actions
Modes
PR mode — scoped diff, @smoke e2e only, <3 min CI
Regression mode — full Vitest + Playwright suite
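As a rough illustration of the config-driven, propose-only design — every name, weight, and step body below is invented for the sketch, not qa-agent's real internals — the six-step pipeline could be wired like this:

```javascript
// Hypothetical skeleton: same step sequence for every repo, behavior tuned
// entirely by a per-repo config object. The agent only drafts; humans promote.
const defaultConfig = {
  mode: "pr", // "pr" = scoped diff + smoke e2e; "regression" = full suite
  riskSurfaces: { auth: 3, payments: 3, ui: 1 }, // per-surface risk weights (assumption)
};

const steps = [
  function analyzeDiff(ctx) {
    ctx.files = ctx.diff.map(d => d.file);
  },
  function scoreRisk(ctx) {
    // Weight each changed file by the risk of its top-level surface.
    ctx.risk = ctx.files.reduce(
      (score, f) => score + (ctx.config.riskSurfaces[f.split("/")[0]] ?? 1), 0);
  },
  function findCoverageGaps(ctx) {
    ctx.gaps = ctx.files.filter(f => !ctx.coveredFiles.includes(f));
  },
  function proposeTests(ctx) {
    // Propose-only: drafts go into the report, never committed by the agent.
    ctx.proposals = ctx.gaps.map(f => `draft test for ${f}`);
  },
  function runTests(ctx) {
    ctx.results = { passed: true }; // stub for the real Vitest/Playwright run
  },
  function synthesize(ctx) {
    ctx.report = `risk=${ctx.risk} gaps=${ctx.gaps.length} proposals=${ctx.proposals.length}`;
  },
];

function runPipeline(diff, coveredFiles, config = defaultConfig) {
  const ctx = { diff, coveredFiles, config };
  for (const step of steps) step(ctx);
  return ctx;
}
```

Keeping the steps as a flat ordered list makes the "same agent, different config per repo" property cheap: nothing in the pipeline branches on the repo, only on the config it is handed.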