I'm a Senior Designer and AI educator. My process combines rigorous UX fundamentals with practical, production-grade AI thinking.
I design for products at scale: complex systems, global users, and measurable business impact.
I teach designers and teams how to use AI as a creative partner — not a black box.
01 · Discovery & Alignment
"Why does this matter now?"
Before any pixels are drawn, I work with product, engineering, and data partners to make sure everyone is solving the same problem.
I clarify the problem, success metrics, constraints, and risks with every stakeholder in the room.
This often means aligning multiple teams and legacy systems around a single narrative — before a single wireframe exists.
The output is a shared north star: what we're building, for whom, and why it matters now.
02 · Research & Synthesis
"The user's words beat any assumption."
I combine qualitative and quantitative methods to find where design and AI can create disproportionate value — not just incremental improvements.
Qualitative: interviews, contextual inquiry, heuristic reviews, diary studies.
Quantitative: product analytics, funnel analysis, market scans, competitive audits.
I map user journeys, jobs-to-be-done, and system constraints to expose the real friction — not the reported one.
Outputs: problem framing, opportunity areas, and crisp design principles that guide everything that follows.
03 · Experience Strategy
"Who are we designing for, and how will we know we succeeded?"
I translate research into a concrete experience strategy: who we're designing for, what outcomes we're targeting, and the metrics that will tell us we got there.
Service blueprints and information architecture that keep teams aligned as we iterate.
Experience north stars — vision prototypes that communicate the 3-year ambition, not just the next sprint.
For complex, large-scale products, I pay particular attention to cross-product consistency and long-term maintainability.
04 · AI‑First Design (my specialty)
"If users can't understand it, they can't trust it."
I define precisely where AI should — and should not — appear in the experience. The difference between assistance and automation. Between a recommendation and a decision.
I collaborate with data science to shape model requirements, guardrails, and evaluation criteria rooted in user value and safety.
I design AI interactions that are transparent, controllable, and trustworthy — making capabilities legible to everyday users, not just engineers.
I define the full failure surface: what happens when the model is wrong, slow, or absent.
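Designing the failure surface means the UI knows, for every AI response, whether to show it, soften it, or fall back to the non-AI path. A minimal sketch of that idea (all names here are hypothetical, and `call` stands in for whatever actually reaches the model service):

```python
import time

def assist(query, call, budget_s=2.0):
    """Classify an AI response so the UI can pick a treatment.

    Returns (state, text), where state is one of:
      "ok"     -> show the suggestion normally
      "slow"   -> arrived past budget: offer it, don't block on it
      "absent" -> model errored or returned nothing: use the classic flow
    ("wrong" is handled in the UI itself, by making every suggestion
    dismissible and correctable -- code can't detect it here.)
    """
    start = time.monotonic()
    try:
        text = call(query)  # in production: a network call that can raise or hang
    except Exception:
        return ("absent", None)
    if not text:
        return ("absent", None)
    if time.monotonic() - start > budget_s:
        return ("slow", text)
    return ("ok", text)
```

The point of the sketch is that "absent" and "slow" are first-class states with their own designs, not exceptions left for engineering to improvise.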
05 · Prototyping & Validation
"The fastest way to know if it works is to make it."
I move quickly from lo-fi sketches to interactive prototypes that teams can click, critique, and test — long before engineering writes a line of code.
For AI experiences, I prototype not just the UI but the behaviour — using prompt design, simulated responses, and wizard-of-oz techniques.
I run usability tests, heuristic reviews, and A/B experiments where appropriate, then fold every learning back into the design.
Nothing ships without at least one round of real users telling me where it breaks.
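The simulated-response technique above can be as simple as a canned-answer stub behind the prototype, so test participants experience a "live" assistant with no model in the loop. A hypothetical sketch:

```python
# Wizard-of-oz stub: keyword-matched canned replies stand in for a real model.
# Everything here is illustrative -- the replies, keywords, and fallback copy
# would come from the scenarios being tested.
CANNED = {
    "summarize": "Here's a two-line summary of your document...",
    "translate": "Here's that passage in French...",
}
FALLBACK = "Sorry, I can't help with that yet."

def fake_model(prompt):
    """Return the first canned reply whose keyword appears in the prompt."""
    prompt_lower = prompt.lower()
    for keyword, reply in CANNED.items():
        if keyword in prompt_lower:
            return reply
    return FALLBACK  # testing the empty/failure state is part of the study
```

Because the stub is deterministic, the same session can be replayed with every participant — which is exactly what you want when the question is "does the interaction pattern work?", not "is the model good?".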
06 · Delivery & Evangelism
"Shipping is not the finish line. It's the starting gun."
I document flows, interaction patterns, and content guidelines so engineering can ship with confidence — and I stay close through implementation to protect the experience as constraints appear.
I partner closely with engineers during build, reviewing pull requests and flagging drift before it becomes debt.
I evangelise design decisions across the organisation using narrative decks, live demos, and walkthroughs that make the rationale stick.
After launch, I set up measurement frameworks so we know within weeks what's working and what isn't.
07 · Teaching & Speaking
"The best way to sharpen your process is to explain it to someone else."
Beyond product work, I teach designers, PMs, and students how to design with AI — through talks, workshops, and curriculum built around what teams actually struggle with when adopting AI.
I've spoken at institutions and conferences about AI-assisted workflows, prompt craft for designers, and the future of human–AI collaboration.
I run hands-on workshops where teams go from "AI is scary" to shipping AI-native features with confidence.
This teaching practice feeds directly back into my product work — keeping my process grounded in real adoption challenges, not theory.
Six things I've learned — about design, technology, and what it means to stay irreducibly human when working with AI every day.
01
Every real breakthrough came from sitting with users, not from a model output. AI amplifies your empathy — it can't replace it.
02
Intuition is pattern recognition built from thousands of decisions. Don't abandon it because a language model sounds more confident.
03
Teams that rush to AI solutions skip the thinking that makes AI useful. Define the problem deeply first. Always.
04
Most people don't trust AI yet. Design for their fear, not your enthusiasm.
05
The moment you stop learning, you become irrelevant. In AI, that window is measured in months — not years.
06
AI is clay. What you sculpt with it — the trust, the usefulness, the human dignity — that's what matters.
I run workshops and keynotes for design teams, PMs, and leadership on AI-assisted workflows, responsible AI design, and building products users actually trust.