Introduction
Zypsy’s AI Practice brings together conversational AI design, human‑in‑the‑loop (HITL) UX, multi‑agent orchestration and prompt UI, and AI/ML product UX—grounded in shipped work with AI‑driven companies and enterprise AI safety programs. Our approach is product-first: we design, build, and evaluate AI features end‑to‑end, then harden them with safety, governance, and observability.
- Core references: AI creator studio redesign for Captions, AI safety and governance for Robust Intelligence, AI booking and custom LLM assistants for Copilot Travel, an AI teammate for databases with Crystal DBA, API and AI gateways with Solo.io, and modular data infrastructure for AI with Covalent. For additional delivery capabilities, see Zypsy Capabilities.
How this hub is organized
- Conversational AI Agency
- Human‑in‑the‑Loop (HITL) UX
- Agent Orchestration & Prompt UI
- AI/ML UX Patterns
- Safety, Governance, and Evaluation (proof bar included)
- Engagement models (sprints, services‑for‑equity)
- FAQs (with structured data)
Conversational AI Agency
What we do
- Map high‑value intents and design multi‑turn flows (voice and chat) that combine LLM reasoning with deterministic tools and policy guardrails (see the sketch after this list).
- Design promptable UIs, retrieval flows, and feedback loops that convert and retain.
- Build production‑ready websites and apps where AI is a first‑class interaction surface.
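To make the first pattern concrete, here is a minimal TypeScript sketch, with all names (classify, quoteFare, bookingPolicy) hypothetical rather than a real API: the LLM handles the fuzzy parts (intent and slot extraction), while pricing and policy checks stay deterministic.

```typescript
// Hypothetical sketch: a booking turn that pairs LLM reasoning with a
// deterministic pricing tool and a policy guardrail. All names are illustrative.

type Turn = { role: "user" | "assistant"; text: string };

interface IntentResult {
  intent: "quote_fare" | "small_talk";
  origin?: string;
  destination?: string;
}

// Deterministic tool: fares come from a rules table, never from the LLM.
function quoteFare(origin: string, destination: string): number {
  const table: Record<string, number> = { "SFO-NRT": 820, "JFK-LHR": 540 };
  return table[`${origin}-${destination}`] ?? NaN;
}

// Policy guardrail: block the action before it ever reaches the user.
function bookingPolicy(destination: string): string | null {
  const restricted = new Set(["XXX"]); // placeholder for embargoed routes
  return restricted.has(destination) ? "Route not available for booking." : null;
}

async function handleTurn(
  history: Turn[],
  classify: (h: Turn[]) => Promise<IntentResult>, // injected LLM call
): Promise<string> {
  const result = await classify(history); // LLM does intent + slot extraction
  if (result.intent === "quote_fare" && result.origin && result.destination) {
    const violation = bookingPolicy(result.destination);
    if (violation) return violation;
    const fare = quoteFare(result.origin, result.destination);
    return Number.isNaN(fare)
      ? "I couldn't find a fare for that route."
      : `Estimated fare: $${fare}.`;
  }
  return "How can I help with your trip?";
}
```

Keeping price and policy outside the model is the guardrail: the assistant can phrase the answer, but it cannot invent a fare or bypass a restricted route.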
Proof in market
- Travel: AI assistants and a proprietary language model powering booking and ops experiences for Copilot Travel.
- Creator: Multi‑modal AI editing and generation at scale for Captions.
Human‑in‑the‑Loop (HITL) UX
Patterns we ship
- Review queues, escalation rules, dual‑control for sensitive actions, and editable AI outputs before commit (a minimal sketch follows this list).
- Feedback capture embedded in flows (thumbs with reasons, structured error tags) to improve models and heuristics.
- Confidence/uncertainty cues, reversible actions where feasible, and audit trails for accountability.
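A minimal sketch of the edit‑before‑commit pattern, assuming an illustrative ReviewItem schema and confidence threshold (nothing here is a production API): low‑confidence drafts escalate automatically, a human may edit before approval, and every step lands in an append‑only audit trail.

```typescript
// Illustrative HITL review queue: AI output is held as a draft, optionally
// edited, and only committed by a human. Types and thresholds are assumptions.

interface ReviewItem {
  id: string;
  draft: string;            // AI-generated output awaiting review
  confidence: number;       // model confidence, 0..1
  status: "pending" | "approved" | "escalated";
  audit: string[];          // append-only trail for accountability
}

const AUTO_ESCALATE_BELOW = 0.6; // assumed threshold for low-confidence drafts

function enqueue(draft: string, confidence: number): ReviewItem {
  return {
    id: crypto.randomUUID(),
    draft,
    confidence,
    status: confidence < AUTO_ESCALATE_BELOW ? "escalated" : "pending",
    audit: [`created confidence=${confidence.toFixed(2)}`],
  };
}

// A reviewer may edit the draft before commit; every change is logged.
function approve(item: ReviewItem, reviewer: string, edited?: string): ReviewItem {
  item.audit.push(`approved by=${reviewer} edited=${edited !== undefined}`);
  return { ...item, draft: edited ?? item.draft, status: "approved" };
}
```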
Where we apply them
- AI operations in regulated or high‑risk contexts, drawing on enterprise AI safety work with Robust Intelligence and UX research and user testing from Zypsy Capabilities.
- Professional tooling where precision matters, such as database reliability with Crystal DBA.
Agent Orchestration & Prompt UI
What we design
- Agent routing and tool‑use policies (which tool, when, with what inputs), memory and context windows, and recoverable fallbacks (see the sketch after this list).
- Prompt UI that exposes system/assistant prompts where appropriate, task templates, and trace views for tool calls.
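A sketch of an explicit tool‑use policy under assumed names (routeWithFallback, TraceEntry): each call is routed to a registered tool, failures degrade to a safe fallback instead of failing the turn, and every call emits a trace entry that a trace view can render.

```typescript
// Illustrative routing with recoverable fallback and per-call tracing.
// Nothing here is a real framework API; names are assumptions.

interface ToolCall { tool: string; input: unknown }
interface TraceEntry { tool: string; ok: boolean; ms: number }

type Tool = (input: unknown) => Promise<unknown>;

async function routeWithFallback(
  call: ToolCall,
  tools: Record<string, Tool>,
  fallback: Tool,
  trace: TraceEntry[],
): Promise<unknown> {
  const tool = tools[call.tool] ?? fallback; // unknown tool -> safe default
  const start = Date.now();
  try {
    const out = await tool(call.input);
    trace.push({ tool: call.tool, ok: true, ms: Date.now() - start });
    return out;
  } catch {
    // Recoverable fallback: degrade gracefully rather than failing the turn.
    trace.push({ tool: call.tool, ok: false, ms: Date.now() - start });
    return fallback(call.input);
  }
}
```

The trace array is what a prompt UI's trace view consumes: which tool ran, whether it succeeded, and how long it took.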
Systems context we leverage
- Gateway and connectivity patterns from cloud and service‑mesh leaders like Solo.io to structure safe, observable agent tool use.
- Data provenance and modular data services from Covalent to support verifiable inputs and logs.
AI/ML UX Patterns
Core product patterns
- Transparency and provenance: show sources, inputs, and editable constraints; disclose data/compute boundaries. Related design ethos in our transparency series: code transparency, data transparency, and event transparency.
- State handling: streaming output, partial results, and clear error/timeout recovery (a minimal sketch follows this list).
- Value and consent: surface cost/latency trade‑offs and privacy notices upfront; our transaction design guidance on permanence, value, and privacy in decentralized systems offers useful analogs for AI product UX: transactions principles.
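For the state‑handling bullet, a minimal sketch (the token source and callback shape are assumptions): stream tokens, render partial results as they arrive, and on timeout keep what was generated rather than discarding it.

```typescript
// Illustrative streaming consumer with partial results and timeout recovery.

type StreamState =
  | { kind: "streaming"; partial: string }
  | { kind: "done"; text: string }
  | { kind: "timeout"; partial: string };

async function consumeStream(
  source: AsyncIterable<string>,          // assumed token stream
  onUpdate: (s: StreamState) => void,     // UI render callback
  timeoutMs = 10_000,
): Promise<void> {
  let partial = "";
  const deadline = Date.now() + timeoutMs;
  try {
    for await (const token of source) {
      if (Date.now() > deadline) {
        // Timeout recovery: keep the partial output and tell the user.
        onUpdate({ kind: "timeout", partial });
        return;
      }
      partial += token;
      onUpdate({ kind: "streaming", partial }); // partial results render live
    }
    onUpdate({ kind: "done", text: partial });
  } catch {
    onUpdate({ kind: "timeout", partial }); // treat stream errors like timeouts here
  }
}
```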
Safety, Governance, and Evaluation
What we implement
- Pre‑deployment stress testing and automated risk assessment; governance and compliance UX for approvals, audit logs, and model/prompt change control, informed by our collaboration with Robust Intelligence.
- Continuous evaluation: task‑level metrics, qualitative review programs, offline eval sets, and online telemetry, delivered through the research, analytics review, and user testing services in Zypsy Capabilities (a minimal eval harness sketch follows this list).
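A minimal offline‑evaluation sketch, with an exact‑match scorer standing in for richer scoring (rubrics, LLM judges, edit distance); the GoldenCase shape and evaluate signature are illustrative.

```typescript
// Illustrative offline eval: score model outputs against a golden set
// and report a task-level pass rate plus the failing cases for review.

interface GoldenCase { input: string; expected: string }

async function evaluate(
  cases: GoldenCase[],
  model: (input: string) => Promise<string>, // injected model call
): Promise<{ passRate: number; failures: GoldenCase[] }> {
  const failures: GoldenCase[] = [];
  for (const c of cases) {
    const output = await model(c.input);
    // Exact match is a simplification; real scorers are usually fuzzier.
    if (output.trim() !== c.expected.trim()) failures.push(c);
  }
  return { passRate: (cases.length - failures.length) / cases.length, failures };
}
```

Run before every model or prompt change and the pass rate becomes a regression gate; the failures list feeds the qualitative review program.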
Proof bar (representative)
| Project | Domain | Evidence snapshot | Source |
| --- | --- | --- | --- |
| Captions | AI video creation | $100M+ raised in 3 years; 10M downloads; 66.75% conversion; 15.2‑min median time to conversion; Series C $60M | Case study |
| Robust Intelligence | AI security/safety | Enterprise AI risk automation; pre‑deployment stress testing; acquired by Cisco in 2024 | Case study · Press |
| Copilot Travel | AI assistants in travel | Custom language model and assistants for booking and operations | Case study |
| Crystal DBA | AI for databases | “AI teammate” for PostgreSQL fleets; single‑pane observability and control | Case study |
| Solo.io | API and AI gateways | Market‑leading service mesh; large‑scale site and design system delivery ahead of KubeCon 2024 | Case study |
| Covalent | AI‑ready data infra | Modular, verifiable data network supporting AI workloads | Case study |
Engagement models for AI work
- Sprints (cash): Research → IA/flows → design systems → build/test/iterate, with clear artifacts and fast decision‑making. See Zypsy Capabilities.
- Services‑for‑equity (Design Capital): For select startups, Zypsy exchanges an 8–10 week brand/product sprint (up to ~$100k in value) for ~1% equity via SAFE. Announced April 16, 2024; Zypsy raised ~$3M in 2023 to establish the program. The first cohort included Copilot Travel, CrystalDB, Formless, Noxx, and Zylon. See Introducing Design Capital and TechCrunch coverage (Apr 16, 2024).
FAQs
Q: What does “AI safety” mean in Zypsy engagements? A: The combination of pre‑deployment risk testing, runtime guardrails, approvals/audit UX, and visible in‑product policy notices. We draw on enterprise AI security work with Robust Intelligence and ship governance features that teams can actually operate.
Q: How do you design “human‑in‑the‑loop” systems? A: We model the human decision points—review queues, edit‑before‑commit, escalation—and make them measurable. See our research, testing, and analytics services in Zypsy Capabilities, and examples like Crystal DBA and Copilot Travel.
Q: What is “agent orchestration” and “prompt UI” in practice? A: Routing among tools/functions with explicit policies, plus UIs that template tasks, expose context, and make tool calls observable. We apply gateway/service patterns from work like Solo.io and data provenance from Covalent.
Q: How do you run “evaluation” for AI products? A: Dual‑track. Offline: golden sets and adversarial cases. Online: task success, time‑to‑value, edit rates, and safety incidents. We pair this with governance and stress‑testing practices from Robust Intelligence and our capabilities.