
Tango Code — AI-First
Design Transformation

"Gabe has been a force of nature when it comes to research, develop, test, learn, repeat in our new path."
Role: Design Engineer
Team: CEO, CAIO, 4 Designers, Dev Team
Duration: 1 week to implement, ongoing
Tools: Claude Code, Figma MCPs, GitHub, Vercel, CodeRabbit
01 Thesis

Clients expect the speed of vibe coders and the quality of a professional software house. Tango Code, a global software house, was shipping quality work in 2-3 month engagements, but most of that time was process overhead: handoffs, review cycles, waiting for fixes.

50% — reduction in project costs
3x — faster delivery (3 months → 1 month)
Zero — UI review Jira tasks created

McKinsey estimated $4.4 trillion in annual productivity potential from generative AI. GitHub reported developers completing tasks 55% faster with Copilot. The numbers were there. What was missing was someone willing to rewire the actual process, test it, validate it, and bring the whole team along.

[1] McKinsey Global Institute, Jun 2023 — estimated $2.6T-$4.4T in annual value; design and software engineering cited as top-impact functions.
[2] GitHub / Microsoft Research, 2022 — developers using Copilot completed tasks 55.8% faster; strongest gains in boilerplate and repetitive code.
[3] Forrester, 2024 — companies with structured AI adoption saw a 40-60% reduction in development cycle time.
[4] Gartner — by 2028, 75% of enterprise software engineers will use AI coding assistants, up from under 10% in early 2023.
02 New Machine

The transformation happened in one week. I worked with the CEO and CAIO to define the vision, then ran sessions with every designer on the team.

It wasn't clean. The first real friction was decentralized AI: every designer had their own workflow, their own tools, their own prompting habits. Nothing was shared, nothing was consistent. We fixed that by standing up a Team Claude account with Company Skills: shared prompt templates, project context, and coding conventions baked in from day one.

The second friction was the terminal. Designers had never used it. Dropping them straight into Claude Code would have killed momentum. So we built a ramp: Figma Make first to show that code was just another output, then v0 for layout generation, then Claude UI for component work, and finally Claude Code with the full terminal workflow. Each step came paired with dev team sessions where engineers explained what was actually happening under the hood. By the end, designers weren't afraid of the terminal. They owned it.

Before
1. Designer creates screens in Figma
2. Handoff document written for devs
3. Developer builds from static mockups
4. Designer opens staging, files UI bugs in Jira
5. Dev fixes bugs, pushes again
6. Repeat steps 4-5 (3-5 rounds)
Average project delivery: 8-12 weeks

After
1. Designer creates screens in Figma
2. Figma MCP connects design to Claude Code
3. Designer builds navigable prototype in code
4. Designer spots UI bug, fixes it directly
5. PR created, CodeRabbit + dev team reviews
6. Merge and deploy via Vercel
Average project delivery: 4 weeks
Claude Code — AI coding agent. Designers write code through conversation, connected to Figma via MCPs.
Figma MCPs — Model Context Protocol servers. Design tokens, components, and layouts available as code context.
Custom Skills — reusable prompt templates shared across the team for component generation, bug fixes, PRs.
GitHub + PRs — all code goes through pull requests. Branch protection enforced. Every change traceable.
CodeRabbit — AI code review on every PR. Catches logic errors, security issues, style inconsistencies.
Vercel — preview deployments on every PR. Stakeholders review real, running code.
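Claude's skills are markdown files with YAML frontmatter that the agent loads as reusable instructions. The sketch below is a hypothetical SKILL.md for one of the shared team skills; the skill name and steps are illustrative, not Tango Code's actual library:

```markdown
---
name: component-from-figma
description: Generate a React component from a selected Figma frame, following the team's coding conventions.
---

# Component from Figma

1. Pull the selected frame's layout and tokens via the Figma MCP.
2. Map every color and spacing value to an existing CSS custom property; never hard-code hex values.
3. Reuse components from the shared library before creating new ones.
4. Open a PR on a `feat/` branch with a one-paragraph description of the design intent.
```

Because the file lives in the shared team account, every designer invokes the same conventions instead of reinventing their own prompts.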
Activity | What it solved | Impact
Daily workshops | Some designers had never touched a terminal | Full team onboarded to Claude Code within 5 days
Skill library | Inconsistent prompting; each person reinventing the wheel | Standardized skill set via CLI; predictable outputs
Best practices | Fear of breaking things in real codebases | Clear PR workflow, branch naming, commit guidelines
1-on-1 pairing | Individual blockers with Git or CSS architecture | Personalized support; nobody left behind
03 Proof

The first project built entirely under the new process was a brand monitoring platform for AI models. The client needed to track how ChatGPT, Claude, Gemini, and Perplexity mention and recommend brands. Estimated at 2-3 months. Shipped in one. Half the cost. I worked alongside Giovanna Souza, who co-designed the product screens using the same AI-first workflow.

These are live components from the production codebase. Each illustration showcases a specific design decision made possible by the AI-first workflow.

Mentions by Model — Live Filtering
[Interactive component: "Mentions by Model" chart (475 mentions) with a dropdown to toggle models on or off; disabled models are hidden from the chart]

Use the model selector dropdown to toggle AI models on and off. Each column shows the total mention count and growth rate for that model. The component reacts instantly — identical to how the production dashboard filters work.
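The filtering logic behind this component can be sketched in a few lines of TypeScript. The data shape, function names, and numbers below are illustrative assumptions, not code from the production dashboard:

```typescript
// Hypothetical data shape for the "Mentions by Model" chart.
type ModelName = "ChatGPT" | "Claude" | "Gemini" | "Perplexity";

interface ModelMentions {
  model: ModelName;
  mentions: number;
  growthPct: number; // period-over-period growth rate shown per column
}

// Keep only the rows for models the designer has toggled on,
// and recompute the headline total shown next to the chart title.
function filterMentions(
  data: ModelMentions[],
  enabled: Set<ModelName>
): { rows: ModelMentions[]; total: number } {
  const rows = data.filter((d) => enabled.has(d.model));
  const total = rows.reduce((sum, d) => sum + d.mentions, 0);
  return { rows, total };
}

// Example: disabling Perplexity hides its column and updates the total.
const data: ModelMentions[] = [
  { model: "ChatGPT", mentions: 180, growthPct: 12 },
  { model: "Claude", mentions: 150, growthPct: 18 },
  { model: "Gemini", mentions: 95, growthPct: 7 },
  { model: "Perplexity", mentions: 50, growthPct: 22 },
];
const { rows, total } = filterMentions(
  data,
  new Set<ModelName>(["ChatGPT", "Claude", "Gemini"])
);
// total is 425 and rows holds the 3 enabled models
```

Because the filter is a pure function over the dataset, the chart can re-render instantly on every toggle with no server round trip.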
Dev Area — Widget State Machine
Every widget in the product has 4 states: ready, loading, empty, and stressed (high-volume data). We built a Dev Area so the design team could test all states without needing real API data. Toggle each tab to see the state machine in action.
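A state machine like this is naturally expressed as a TypeScript discriminated union, with the Dev Area feeding synthetic fixtures instead of real API data. The type, fixture values, and function names below are a sketch of the idea, not the production implementation:

```typescript
// The four widget states from the description: ready, loading, empty, stressed.
type WidgetState =
  | { kind: "loading" }
  | { kind: "empty" }
  | { kind: "ready"; items: number[] }
  | { kind: "stressed"; items: number[]; dropped: number };

// The Dev Area tab bar maps each tab to a synthetic fixture,
// so designers can exercise every state without real API data.
function fixtureFor(tab: WidgetState["kind"]): WidgetState {
  switch (tab) {
    case "loading":
      return { kind: "loading" };
    case "empty":
      return { kind: "empty" };
    case "ready":
      return { kind: "ready", items: [3, 1, 4] };
    case "stressed": {
      // High-volume data: keep the first 500 points, record how many were dropped.
      const all = Array.from({ length: 10_000 }, (_, i) => i);
      return { kind: "stressed", items: all.slice(0, 500), dropped: all.length - 500 };
    }
  }
}

// The compiler forces every renderer to handle all four states.
function label(state: WidgetState): string {
  switch (state.kind) {
    case "loading":  return "Loading…";
    case "empty":    return "No data yet";
    case "ready":    return `${state.items.length} points`;
    case "stressed": return `${state.items.length} points (+${state.dropped} dropped)`;
  }
}
```

The discriminated union is the guardrail: if a fifth state is ever added, every switch that forgets to handle it fails to compile.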
Design Tokens — Connected to Code
[Interactive component: design token table with live theme toggle]
Design tokens in Figma map 1:1 to CSS custom properties in code. When a designer updates a color in Figma, the MCP pulls it into the codebase. Click the theme toggle to see every token swap in real time. This is how dark mode works with zero manual overrides.
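The zero-override dark mode falls out of this 1:1 mapping. A minimal sketch, assuming illustrative token names and values rather than the production palette:

```typescript
// Each token carries a value per theme; the names mirror Figma token names.
type Theme = "light" | "dark";

const tokens: Record<string, Record<Theme, string>> = {
  "color-bg":     { light: "#ffffff", dark: "#0b0b0f" },
  "color-text":   { light: "#111111", dark: "#f4f4f5" },
  "color-accent": { light: "#4f46e5", dark: "#818cf8" },
};

// Emit the CSS custom properties for one theme. Components only ever
// reference var(--color-bg) etc., so switching themes is a single
// attribute flip with no per-component overrides.
function cssFor(theme: Theme): string {
  const body = Object.entries(tokens)
    .map(([name, values]) => `  --${name}: ${values[theme]};`)
    .join("\n");
  const selector = theme === "light" ? ":root" : '[data-theme="dark"]';
  return `${selector} {\n${body}\n}`;
}
```

Flipping `data-theme="dark"` on the root element swaps every token at once; when the MCP pulls an updated Figma value into `tokens`, both themes pick it up on the next build.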
Projected (original estimate: 2-3 months)
  • 60+ screens designed
  • 6 feature modules
  • 3 design-dev handoff rounds
  • 4-5 UI review cycles in Jira
  • 100% of projected budget

Reality (actual delivery: 1 month)
  • 60+ screens shipped
  • 6 feature modules live
  • 0 design-dev handoff rounds
  • 0 UI review Jira tasks
  • 50% of projected budget
04 Ship Fast, Break Nothing

Speed without reliability is just chaos. Vibe coding produces demos that collapse in production. We were building real software for a paying client. Every shortcut had to be earned through testing, review, and validation.

Layer 1: Claude Code generation
Catches
Syntax errors, type mismatches, component API violations. Most code arrives at the PR stage already functional and linted.
How
The AI agent catches issues during generation. Designers describe what they want, Claude Code writes it, and if something breaks, the agent fixes it before the code ever leaves the terminal.

Layer 2: CodeRabbit
Catches
Logic errors, security vulnerabilities, performance anti-patterns, naming inconsistencies, accessibility gaps.
How
Automated AI review that runs in seconds on every PR, commenting directly on diffs. It catches things humans miss during review fatigue.

Layer 3: Dev team review
Catches
Architectural decisions, business logic correctness, edge cases specific to the domain, design intent alignment.
How
The dev team reviews every PR. AI review is fast; human review adds judgment. Together they create a process stronger than either alone.

Layer 4: Vercel previews
Catches
Visual regressions, responsive breakpoints, interaction bugs, real-device behavior.
How
Every PR gets a unique preview URL. Stakeholders click through real, running code. No more "it works on my machine."

Designer autonomy with guardrails

1. Designer spots UI bug (in the deployed product, not in Figma)
2. Opens Claude Code and describes the issue in natural language
3. Fix generated + PR created (branch, commit, push in one step)
4. CodeRabbit: AI review in seconds
5. Dev review: human judgment layer
6. Vercel preview: live URL to verify
7. Merged in hours. No Jira task. No waiting. No handoff.