Kasey Schoeff

Full-Stack Engineer | AI demos, prototypes, and customer-facing systems

Rapid prototypes backed by real systems.

Built across 5M+ production records, 95+ shipped endpoints, and 26-agent orchestration systems. I scope use cases quickly, ship working AI prototypes, and turn complex technical behavior into demos customers and stakeholders can actually follow.

OpenAI API experience | Government stakeholder demos | Secret clearance

Contact

kasey.a.schoeff@gmail.com | LinkedIn profile

Experience

Work contexts that shape how I build and demo.

Defense analytics, startup product engineering, founder-led AI implementations, and a teaching foundation that keeps technical systems legible to different audiences.

KBR

Defense analytics

Technical owner of a defense analytics platform

I own a regulated analytics platform, optimize dashboards over 5M+ records, and deliver demos to government product owners.

Cracked Gaming

Startup product

Founding engineer on a zero-to-production startup build

I ship payments, real-time systems, Twitch integrations, and creator and operator tooling across 95+ API endpoints.

Founder-led AI product work

Applied AI

Founder-led AI implementations

I build OpenAI-powered prototypes across evaluation workflows, semantic matching, job intelligence, and multi-agent orchestration.

STEM tutoring

Teaching

Teaching and technical communication

Tutoring foundation across calculus, physics, chemistry, statistics, and algebra for hundreds of AP students.

Selected work

Projects framed for customer-facing AI work.

Each case study focuses on the problem, what was built, and why it mattered.

Trust-sensitive evaluation workflow

Built an evidence-backed credentialing workflow for trust-sensitive AI evaluation.

Made model judgments legible with grounded evidence, review stages, and a verification layer.

4-stage evaluation pipeline
500+ challenge library entries
19+ tables with RLS
Evaluation | Human review | Verifiable outputs

Why it matters

Evaluation, review loops, and verification make model decisions easier to trust and deploy.

What shipped

Shipped a four-stage workflow for submission review, evidence grounding, adversarial checks, and credential verification.
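As a rough sketch of how a staged workflow like this can be structured (stage names, data shapes, and checks below are illustrative assumptions, not the shipped implementation), each stage gates the next, and the final credential is issued only if every prior stage passed:

```python
# Illustrative sketch only: stage names and checks are assumptions,
# not the actual platform's implementation.
from dataclasses import dataclass, field

@dataclass
class Submission:
    content: str
    evidence: list[str] = field(default_factory=list)
    passed_stages: list[str] = field(default_factory=list)

def review(sub: Submission) -> bool:
    # Stage 1: submission review (here: non-empty content).
    return bool(sub.content.strip())

def ground_evidence(sub: Submission) -> bool:
    # Stage 2: evidence grounding — every submission must cite evidence.
    return len(sub.evidence) > 0

def adversarial_check(sub: Submission) -> bool:
    # Stage 3: placeholder adversarial probe (e.g. prompt-injection patterns).
    return "ignore previous instructions" not in sub.content.lower()

STAGES = [("review", review),
          ("evidence", ground_evidence),
          ("adversarial", adversarial_check)]

def run_pipeline(sub: Submission) -> bool:
    # Stage 4: credential verification — issue only if all stages passed in order.
    for name, check in STAGES:
        if not check(sub):
            return False
        sub.passed_stages.append(name)
    return sub.passed_stages == ["review", "evidence", "adversarial"]
```

The ordered stage list makes the trust chain auditable: a credential exists only when the record shows every check it passed.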

Key patterns

Evidence grounding | Review workflow | Verification layer

Core stack

Next.js | TypeScript | Supabase | OpenAI | Playwright | Vitest

What stands out

  • Explains evaluation and trust in plain language.
  • Pairs human review with model judgment.
  • Transfers well to regulated or high-stakes workflows.

Why OpenAI

Why this profile maps to OpenAI's Demo Experience Engineer role.

This is the overlap I already work in: customer-facing demos, rapid prototypes, and OpenAI API implementations that have to feel useful and credible to the audience in front of them. The strongest evidence is the mix of government stakeholder demos, startup product delivery, and applied OpenAI work across evaluation, retrieval, matching, and orchestration.

Customer-facing demos

Government stakeholder demos and startup operator presentations with an emphasis on clarity and business value.

Rapid prototyping

Prototype development across evaluation pipelines, market intelligence, matching systems, and internal AI tooling.

OpenAI API experience

OpenAI API work across structured outputs, embeddings, evaluation, retrieval, and multi-step workflow orchestration.
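The retrieval and matching pattern behind this kind of work can be sketched minimally: embed texts, rank by cosine similarity. Here `embed()` is a toy stub standing in for a real embeddings call (e.g. an OpenAI text-embedding model); the vocabulary and vectors are illustrative assumptions.

```python
# Illustrative retrieval sketch: embed() is a stub in place of a real
# embeddings API; the tiny vocabulary below is a toy assumption.
import math

def embed(text: str) -> list[float]:
    # Toy bag-of-words "embedding" over a fixed vocabulary.
    vocab = ["python", "payments", "dashboard", "twitch"]
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity; 0.0 for zero-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_match(query: str, documents: list[str]) -> str:
    # Rank candidate documents by similarity to the query embedding.
    qv = embed(query)
    return max(documents, key=lambda d: cosine(qv, embed(d)))
```

Swapping the stub for a real embeddings endpoint leaves the ranking logic unchanged, which is what makes the pattern easy to demo first and productionize later.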

Trust-sensitive systems

Regulated environments, review loops, verification, moderation, and reliability constraints relevant to enterprise AI.

Data and retrieval

Real systems behind the demos: dashboards, job intelligence, semantic matching, retrieval flows, and operational tooling.

Technical translation

Model behavior, product tradeoffs, and architecture decisions translated into demonstrations non-technical audiences can follow.