AI-Assisted UX Research · 2026
Pet Health App Research
Team: UX Researcher, Software Developer
My role: UX Researcher
Methods: Discovery Interviews, Competitive Audit, AI Tools
AI Tools: Fathom AI, Claude Sonnet 4.6, ChatGPT 5.2
The Challenge

To build a genuinely useful app for pet owners, the team needed to understand how owners actually manage their pets' health today, and what would make that process so much easier that they'd be willing to pay for it.

The questions that shaped the work

Four goals framed the entire research effort and determined which methods to use, what to ask in interviews, and what to look for in the competitive landscape.

Goal 01

Understand the pain points

Learn what pain points pet owners experience when organizing their pet's health records and tracking medication intake.

Goal 02

Map current workarounds

Understand how pet owners tackle these problems today, and where their workarounds break down.

Goal 03

Establish willingness to pay

Learn how much owners are willing to pay for better organization, and which specific features would be the deciding factor for upgrading.

Goal 04

Explore cat owner needs

Learn whether cat owners face meaningfully different organizational challenges around their pet's health data compared to dog owners.

Why these two methods

Two methods were chosen to answer the research goals from different angles: one to understand the market, one to understand the people.

Method 01

Competitive Audit

To understand the existing landscape: who's already in this space, what features they offer, what they charge, and where the gaps are. This would answer the business questions: what to build, how to price it, and what's worth differentiating on.

Method 02

Discovery Interviews

To understand the humans behind the problem: how pet owners actually behave today, what their real workarounds look like, and what would make them willing to pay. Market data tells you what exists; only users can tell you what's missing and what matters.

A three-track research sprint

This project ran three research tracks in parallel: a competitive audit of 14 apps across 3 market segments, a UX teardown of best-in-class human health apps for transferable patterns, and discovery interviews with pet owners. The full project took 2 weeks, but only because of interview scheduling constraints; without them, the same work would have taken 2–3 days.

📊
Data Structuring
Structured XLSX templates
🔍
Competitive Audit
14 apps, 3 segments
🎙️
Discovery Interviews
Fathom AI recording
🤖
AI Synthesis
Claude analysis
📄
Strategy Outputs
3 key deliverables

Mapping the landscape

Before talking to users, I needed to understand the market. I defined a structured set of criteria to evaluate each competitor consistently: not just what they built, but how they monetized it and how users responded to it.

Criteria evaluated: company name, summary, URL, Google Play and App Store links, rating, number of users, business model, price, free features, paid features, strengths, and weaknesses.
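To make "consistently" concrete, here is a minimal sketch of scaffolding that template in code, assuming pandas with openpyxl available; the column names mirror the criteria above, and the filename is hypothetical:

```python
import pandas as pd

# Columns mirror the audit criteria listed above; one row per competitor.
AUDIT_COLUMNS = [
    "company_name", "summary", "url",
    "google_play_link", "app_store_link",
    "rating", "num_users", "business_model", "price",
    "free_features", "paid_features",
    "strengths", "weaknesses",
]

# Write an empty, consistently structured template for all 14 apps.
pd.DataFrame(columns=AUDIT_COLUMNS).to_excel("competitive_audit.xlsx", index=False)
```

Keeping one fixed column list is what makes 14 separate app write-ups comparable row by row.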

ChatGPT · Data Collection

Researching 14 competitors

With 14 competitors to analyze across three market segments, I used ChatGPT to systematically research and populate the audit criteria for each app. The acceleration was real: what would have taken days of manual research compressed into hours of structured prompting. An illustrative per-app prompt is sketched after the list below.

Significantly accelerated data collection across 14 apps
Organized and summarized competitor information consistently
Several times it analyzed the wrong product, pulling information about a different app than the one requested
Occasionally failed to find App Store or Google Play links that did exist
Every output required verification against actual app listings before use
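For illustration, a per-app prompt along these lines keeps outputs consistent enough to paste straight into the template. This is a hedged sketch, not the prompts used verbatim; the wording and helper function are hypothetical:

```python
# Hypothetical per-app research prompt; the fields match the audit template.
PROMPT_TEMPLATE = """You are researching the pet health app "{app_name}".
Fill in ONLY these fields, one per line, and write "unknown" if unsure:
company name, summary, URL, Google Play link, App Store link, rating,
number of users, business model, price, free features, paid features,
strengths, weaknesses.
Do not substitute a similarly named app."""

def build_prompt(app_name: str) -> str:
    return PROMPT_TEMPLATE.format(app_name=app_name)

print(build_prompt("ExamplePetApp"))
```

The "do not substitute" line and the explicit "unknown" escape hatch target the two failure modes above: wrong-product answers and unverifiable fields.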
Claude · Expanding the Lens

Learning from human health apps

Once the pet app audit was complete, I asked Claude to analyze five best-in-class human health tracking apps (Flo, Medisafe, MyFitnessPal, Apple Health, and Headspace) and identify which UX patterns proven to work for human health could transfer meaningfully to a pet context.

Identified UX patterns pet apps hadn't solved: reliable notifications, honest paywalls, AI credibility, vet-ready exports
Proposed features grounded in what users genuinely love about human health apps
Delivered a structured, navigable teardown report directly usable as a research artifact

Talking to the people

Nine pet owners were recruited for 1:1 discovery interviews: a mix of dog and cat owners, varying in how actively they managed their pets' health.

Recording & Notes · Fathom AI

Capturing the sessions

All sessions were recorded using Fathom AI, which generated timestamped transcripts and session highlights automatically. In parallel, I took notes in a structured spreadsheet, capturing observations, direct quotes, and emerging patterns in real time rather than relying solely on the transcript. An illustrative version of that note template is sketched after the list below.

Eliminated post-interview transcription time entirely
Timestamps allowed rapid jumping to exact quotes for verification
Transcripts required review: accents and mixed-language sessions occasionally introduced errors
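For illustration, a minimal sketch of that parallel note template (column names are illustrative, not the exact spreadsheet):

```python
import csv

# One row per observation, logged live during each session.
NOTE_COLUMNS = [
    "participant", "timestamp", "observation",
    "direct_quote", "emerging_pattern",
]

with open("interview_notes.csv", "w", newline="") as f:
    csv.writer(f).writerow(NOTE_COLUMNS)
```

Logging a timestamp per row is what ties each live note back to the exact moment in the Fathom transcript.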
Synthesis · Claude

Analyzing the data

After all nine interviews were complete, I gave the full dataset to Claude and asked it to analyze the data and produce a findings report. The synthesis was largely strong: themes were organized clearly, patterns surfaced across participants, and the output was well-structured and readable. A possible quote-verification safeguard is sketched after the list below.

Compressed multi-day analysis into hours
Produced a well-designed, thorough report with recommendations
Hallucinated on at least one participant, attributing a statement not present in the data
Made assumptions about user motivations that went beyond what participants actually said
Missed several important findings, including one-time payment preference and nuanced AI skepticism, which were added back manually after a full review against raw data
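Given the hallucination above, one mechanical safeguard is to check that every quote the synthesis attributes to a participant actually appears in that participant's transcript. A minimal sketch, assuming transcripts are saved as one plain-text file per participant; the paths, filenames, and example quote mapping are hypothetical:

```python
from pathlib import Path

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so formatting differences
    # don't cause false mismatches.
    return " ".join(text.lower().split())

def verify_quotes(quotes: dict[str, list[str]], transcript_dir: str) -> list[tuple[str, str]]:
    """Return (participant, quote) pairs NOT found in that participant's transcript."""
    missing = []
    for participant, participant_quotes in quotes.items():
        transcript = normalize(Path(transcript_dir, f"{participant}.txt").read_text())
        for quote in participant_quotes:
            if normalize(quote) not in transcript:
                missing.append((participant, quote))
    return missing

# Flag any synthesis quote that has no source in the raw data.
suspect = verify_quotes(
    {"sergey_n": ["I would feel the need to check it every time."]},
    "transcripts/",
)
print(suspect or "All quotes verified against transcripts.")
```

A check like this catches verbatim fabrications; paraphrased attributions and invented motivations still need the human review described below.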

What the research actually revealed

1. Current systems are fragile and ad hoc
Most owners cobbled together their own "system", and most were one relocation or vet change away from losing it.

Not one participant used a dedicated pet health app. The actual "system" was a combination of: calendar reminders (for medications), paper folders (for documents), email inboxes (for vet records), and memory (for everything else).

Sergey, the most medically complex case with 6 concurrent conditions, had written the dog's medication name on sticky notes on his apartment walls. Kaity maintained a manual packing checklist in Apple Notes. Daria's husband filmed video instructions before each sitter visit because there was no written source of truth.

"If something gives me a very easy way of tracking all the history of the pet, sitter notes, everything, maybe I will pay for it. If it is very easy to use."

— Sergey N., dog owner managing 6 concurrent health conditions
Calendar reminders universally used · No single source of truth for any participant · Paper folders still common
2. Document chaos surfaces at friction points
Records are rarely needed, but when they are, the process is genuinely painful.

Day-to-day, most owners are fine. But specific scenarios exposed the fragility of the current approach: changing vets, urgent groomer appointments, dog park entry, airline travel, and moving apartments all required vaccination records, and in almost every case, finding them was stressful.

Lauren couldn't access her dogs' records quickly when the groomer asked: the vet portal was inconvenient, and she had to contact the office and wait. Sergey had to contact his vet when nose work classes required a rabies certificate. Another participant described searching a 50,000-email inbox to find a vaccination record sent by a previous vet.

"The vet portal was not convenient. I had to contact the office. They emailed it. But it took too much time."

— Lauren S., 2 dogs, Cavaliers
Groomer, airline, daycare, park = recurring trigger points · Email is the de facto document store, and a bad one
3. Low perceived urgency masks real health stakes
Owners felt on top of things, but the specifics revealed meaningful gaps.

Most participants said they felt "fine" about managing their pet's health. Yet when pressed on specifics, the gaps appeared: Hans found a tick on his dog only after discovering he'd missed a dose of flea/tick prevention. Another participant forgot chlorhexidine application multiple times, and cat acne returned as a result.

The low stakes of these specific incidents masked the pattern: missed doses and late vaccinations were common, just rarely consequential enough to create alarm. For high-stakes medications, owners were extremely diligent. The risk of consequence correlated directly with how vigilant they were.

"If they stopped giving it to her, she will have an anxiety episode."

— Kaity H., about her dog Hope's daily anti-anxiety medication
High-stakes meds: near-perfect adherence · Preventatives: frequent "day or two late" misses · Consequence awareness drives behavior more than reminders
4. Sharing pet care information is genuinely difficult
When multiple caregivers are involved, current tools fail them.

For households with shared pet responsibility, coordination happened primarily through WhatsApp/Telegram messages or verbal communication. Sergey maintained a long written instruction document that he rewrote before every sitter visit. Daria's husband filmed video instructions. No participant had a shared digital space that caregivers, sitters, and family members could actually access.

"Kolya films a video instruction every time, there's no written source of truth."

— Daria K., on how her husband communicates care instructions to sitters
Family sharing was cited as a top desired feature · Sitter handoff is a consistent pain point
5. Payment signals were weak, but specific triggers existed
Most participants couldn't imagine paying, but concrete use cases unlocked willingness.

Asked abstractly if they'd pay for a pet health app, most participants hesitated or said no. But when discussing specific friction points (the groomer document scramble, the sitter instruction problem, the medication adherence anxiety), payment willingness emerged.

A notable finding not surfaced in the initial AI synthesis: several participants disliked subscription models and preferred one-time payments. "Better to be a one-time payment, easier to decide." This created a case for exploring lifetime purchase tiers as a conversion pathway.

$3–5/month seen as fair when use case is specific · Subscription fatigue was real, some preferred one-time purchase · Value prop must be concrete, not abstract
6. AI skepticism was nuanced, not categorical
Users weren't anti-AI. They were anti-wrong-AI.

Questions about AI-powered document extraction revealed a consistent pattern: participants were open to AI help, but wanted to verify everything it touched. The key insight: trust was conditional on stakes. For low-stakes suggestions (breed care tips, seasonal reminders), acceptance was high. For high-stakes data (medication dosage, vaccine dates), users expected to verify everything.

"AI can hallucinate, dates, correctness of fields, different invoices have different structure. I would feel the need to check it every time."

— Sergey N., tech-literate dog owner
Open to AI for proactive health nudges · Will verify AI-extracted medical data every time · Trust scales inversely with health stakes

From insight to strategy

Three concrete, actionable outputs emerged from this research sprint, each grounded in both user data and market analysis.

Feature Priority

Prioritized Feature Set

Research findings determined which features were true table stakes, which could wait, and which weren't worth building at all, including clear boundaries around what AI should and shouldn't do in this context.

Business Model

Structured Subscription Tiers

User willingness-to-pay signals shaped a tiered pricing structure, defining what belongs in a free tier, what justifies a premium, and which payment model would convert the most skeptical users.

Development Strategy

Phased Build Roadmap

Findings informed the sequence of development, establishing what needed to work flawlessly before anything else was built, and when to introduce more complex features once trust was established.

What worked, what needed a human

↑
Significant acceleration

AI tools compressed what would have been multi-day analysis into hours. The competitive audit across 14 apps, including a full feature matrix, gap analysis, and market opportunity map, would have taken days manually. Navigable reports were generated directly from raw data.

△
Where human judgment was essential

Claude's synthesis occasionally made assumptions about user motivations not present in the raw data. Several findings (subscription fatigue, the one-time payment preference, nuanced AI skepticism) weren't surfaced initially and had to be added manually. AI excelled at breadth; researchers were still needed for depth.

↑
Fathom's specific value

Timestamped transcripts made quote verification fast and reliable. Being able to jump directly to a specific moment when cross-checking a quote, or checking whether a claim was a participant's own words versus interviewer framing, saved significant time and improved accuracy.

△
What to do differently

Prompt Claude to distinguish "universal finding" from "notable exception worth surfacing." Some of the most actionable insights came from edge cases. Also: ask explicitly what was not included in the synthesis. A two-pass review (synthesis, then gap audit) would reduce the risk of losing signal in noise; an illustrative version of that second pass is sketched below.
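A hedged sketch of what that second, gap-audit pass could look like as a prompt (wording is illustrative only):

```python
# Hypothetical second-pass "gap audit" prompt, run after the synthesis.
GAP_AUDIT_PROMPT = """Here is the raw interview data, followed by the
findings report you produced from it.

1. List any finding supported by the raw data that the report omits.
2. Label each theme "universal finding" or "notable exception worth
   surfacing", and name the participants it rests on.
3. List any claim in the report you cannot trace to a specific quote.
"""
```

Step 3 doubles as a hallucination check: anything untraceable either gets a citation or gets cut.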

Let's work together

Reach out and I'll share more about how I can solve your problem!