To build a genuinely useful app for pet owners, the team needed to understand how owners actually manage their pet's health today, and what would make that process so much easier that they'd be willing to pay for it.
The questions that shaped the work
Four goals framed the entire research effort and determined which methods to use, what to ask in interviews, and what to look for in the competitive landscape.
Understand the pain points
Learn what pain points pet owners experience when organizing their pet's health records and tracking medication intake.
Map current workarounds
Understand how pet owners tackle these problems today, and where their workarounds break down.
Establish willingness to pay
Learn how much owners are willing to pay for organization, and which specific features would be the deciding factor for them to upgrade.
Explore cat owner needs
Learn whether cat owners face meaningfully different organizational challenges around their pet's health data compared to dog owners.
Why these two methods
Two methods were chosen to answer the research goals from different angles: one to understand the market, one to understand the people.
Competitive Audit
To understand the existing landscape: who's already in this space, what features they offer, what they charge, and where the gaps are. This would answer the business questions: what to build, how to price it, and what's worth differentiating on.
Discovery Interviews
To understand the humans behind the problem: how pet owners actually behave today, what their real workarounds look like, and what would make them willing to pay. Market data tells you what exists; only users can tell you what's missing and what matters.
A three-track research sprint
This project ran three research tracks in parallel: a competitive audit across 14 apps in 3 market segments, a UX teardown of best-in-class human health apps for transferable patterns, and discovery interviews with pet owners. The full project took 2 weeks, but only because of interview scheduling constraints; without them, the same work would have taken 2–3 days.
Mapping the landscape
Before talking to users, I needed to understand the market. I defined a structured set of criteria to evaluate each competitor consistently: not just what they built, but how they monetized it and how users responded to it.
Criteria evaluated: company name, summary, URL, Google Play and App Store links, rating, number of users, business model, price, free features, paid features, strengths, and weaknesses.
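For illustration, here is a minimal sketch of one audit row as a data structure; the field names mirror the criteria above, but the shape and types are my assumptions, not the actual spreadsheet used in the project.

```python
from dataclasses import dataclass, field

@dataclass
class CompetitorAuditRow:
    """One row of the competitive audit; fields mirror the criteria above."""
    company_name: str
    summary: str
    url: str
    google_play_link: str
    app_store_link: str
    rating: float                 # average store rating, e.g. 4.6
    num_users: str                # stores report ranges like "100K+", not exact counts
    business_model: str           # e.g. "freemium", "subscription", "one-time purchase"
    price: str                    # e.g. "$4.99/month" or "free"
    free_features: list[str] = field(default_factory=list)
    paid_features: list[str] = field(default_factory=list)
    strengths: list[str] = field(default_factory=list)
    weaknesses: list[str] = field(default_factory=list)
```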
Researching 14 competitors
With 14 competitors to analyze across three market segments, I used ChatGPT to systematically research and populate the audit criteria for each app. The acceleration was real: what would have taken days of manual research compressed into hours of structured prompting.
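One way to structure that prompting: a single templated prompt per competitor, requesting every audit criterion in a fixed order so answers map straight onto the audit fields. A hypothetical sketch follows; "ExamplePetApp" and the prompt wording are placeholders, not details from the project.

```python
# Hypothetical prompt template; the wording is illustrative, not the
# exact prompt used in the project.
AUDIT_PROMPT = """Research the pet health app "{app_name}".
Answer each item below in one short line; write "unknown" if you
cannot find reliable information:
1. Company name and a one-sentence summary
2. Website URL, Google Play link, and App Store link
3. Store rating and approximate number of users
4. Business model and price
5. Free features vs. paid features
6. Key strengths and weaknesses"""

# One prompt per competitor keeps the output aligned with the audit fields.
print(AUDIT_PROMPT.format(app_name="ExamplePetApp"))
```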
Learning from human health apps
Once the pet app audit was complete, I asked Claude to analyze five best-in-class human health tracking apps (Flo, Medisafe, MyFitnessPal, Apple Health, and Headspace) and identify which UX patterns proven in human health could transfer meaningfully to a pet context.
Talking to the people
Nine pet owners were recruited for 1:1 discovery interviews: a mix of dog and cat owners, varying in how actively they managed their pet's health.
Capturing the sessions
All sessions were recorded using Fathom AI, which generated timestamped transcripts and session highlights automatically. I also took notes in a structured spreadsheet in parallel, capturing observations, direct quotes, and emerging patterns in real time rather than relying solely on the transcript.
Analyzing the data
After all nine interviews were complete, I gave the full dataset to Claude and asked it to analyze the data and produce a findings report. The synthesis was largely strong: themes were organized clearly, patterns surfaced across participants, and the output was well-structured and readable.
What the research actually revealed
Not one participant used a dedicated pet health app. The actual "system" was a combination of calendar reminders (for medications), paper folders (for documents), email inboxes (for vet records), and memory (for everything else).
Sergey, whose dog was the most medically complex case with 6 concurrent conditions, had written the dog's medication names on sticky notes on his apartment walls. Kaity maintained a manual packing checklist in Apple Notes. Daria's husband filmed video instructions before each sitter visit because there was no written source of truth.
"If something gives me a very easy way of tracking all the history of the pet, sitter notes, everything, maybe I will pay for it. If it is very easy to use."
Day-to-day, most owners are fine. But specific scenarios exposed the fragility of the current approach: changing vets, urgent groomer appointments, dog park entry, airline travel, and moving apartments all required vaccination records, and in almost every case, finding them was stressful.
Lauren couldn't access her dogs' records quickly when the groomer asked: the vet portal was inconvenient, so she had to contact the office and wait. Sergey had to contact his vet when nose work classes required a rabies certificate. Another participant described searching a 50,000-email inbox to find a vaccination record sent by a previous vet.
"The vet portal was not convenient. I had to contact the office. They emailed it. But it took too much time."
Most participants said they felt "fine" about managing their pet's health. Yet when pressed on specifics, the gaps appeared: Hans discovered he'd missed a dose of flea/tick prevention only after finding a tick on his dog. Another participant forgot to apply chlorhexidine multiple times, and the cat's acne returned as a result.
The low stakes of these specific incidents masked the pattern: missed doses and late vaccinations were common, just rarely consequential enough to create alarm. For high-stakes medications, owners were extremely diligent; their vigilance correlated directly with the risk of consequence.
"If they stopped giving it to her, she will have an anxiety episode."
For households with shared pet responsibility, coordination happened primarily through WhatsApp/Telegram messages or verbal communication. Sergey maintained a long written instruction document that he rewrote before every sitter visit. Daria's husband filmed video instructions. No participant had a shared digital space that caregivers, sitters, and family members could actually access.
"Kolya films a video instruction every time, there's no written source of truth."
Asked abstractly if they'd pay for a pet health app, most participants hesitated or said no. But when discussing specific friction points (the groomer document scramble, the sitter instruction problem, the medication adherence anxiety), willingness to pay emerged.
A notable finding not surfaced in the initial AI synthesis: several participants disliked subscription models and preferred one-time payments. "Better to be a one-time payment, easier to decide." This created a case for exploring lifetime purchase tiers as a conversion pathway.
Questions about AI-powered document extraction revealed a consistent pattern: participants were open to AI help, but trust was conditional on stakes. For low-stakes suggestions (breed care tips, seasonal reminders), acceptance was high. For high-stakes data (medication dosage, vaccine dates), users expected to verify everything.
"AI can hallucinate, dates, correctness of fields, different invoices have different structure. I would feel the need to check it every time."
From insight to strategy
Three concrete, actionable outputs emerged from this research sprint, each grounded in both user data and market analysis.
Prioritized Feature Set
Research findings determined which features were true table stakes, which could wait, and which weren't worth building at all, including clear boundaries around what AI should and shouldn't do in this context.
Structured Subscription Tiers
User willingness-to-pay signals shaped a tiered pricing structure, defining what belongs in a free tier, what justifies a premium, and which payment model would convert the most skeptical users.
Phased Build Roadmap
Findings informed the sequence of development, establishing what needed to work flawlessly before anything else was built, and when to introduce more complex features once trust was established.
What worked, what needed a human
Significant acceleration
AI tools compressed multi-day analysis into hours. The competitive audit across 14 apps, including a full feature matrix, gap analysis, and market opportunity map, would have taken days manually; instead, navigable reports were generated directly from raw data.
Where human judgment was essential
Claude's synthesis occasionally made assumptions about user motivations not present in the raw data. Several findings (subscription fatigue, the one-time payment preference, nuanced AI skepticism) weren't surfaced initially and had to be added manually. AI excelled at breadth; researchers were still needed for depth.
Fathom's specific value
Timestamped transcripts made quote verification fast and reliable. Being able to jump directly to a specific moment when cross-checking a quote, or checking whether a claim was a participant's own words versus interviewer framing, saved significant time and improved accuracy.
What to do differently
Prompt Claude to distinguish "universal finding" from "notable exception worth surfacing." Some of the most actionable insights came from edge cases. Also: ask explicitly what was not included in the synthesis. A two-pass review (synthesis, then gap audit) would reduce the risk of losing signal in the noise.
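As a rough sketch of what that two-pass prompting could look like (the wording is hypothetical, not the prompts used in this project):

```python
# Hypothetical two-pass prompts; illustrative wording only.
# Pass 1: synthesis, with universal findings separated from exceptions.
SYNTHESIS_PROMPT = """Analyze the attached interview notes and transcripts.
Organize findings into themes. For each theme, label it either a
universal finding (reported by most participants) or a notable
exception (1-2 participants), and include supporting quotes."""

# Pass 2: gap audit, asking the model what its own report left out.
GAP_AUDIT_PROMPT = """Compare your findings report against the raw data.
List everything present in the notes or transcripts that the report
did not surface: minority opinions, contradictions, and one-off
remarks about pricing, trust, or workarounds."""
```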
Let's work together
Reach out and I'll share more about how I can solve your problem!