Homage Mobile App
A one-stop app for caregiving support, at home and on demand.
Rated 4.7 on the App Store and 4.5 on Google Play.
What’s the Homage app all about?
The Homage app is a one-stop solution for families seeking caregiving services at a location of their choice. With the app (available on both iOS and Android), users can:
Book caregiving services for themselves or their loved ones
Manage their schedule of upcoming and past care visits, with notifications
View their trained and certified caregiver’s profile before the visit
Communicate with the caregiver through the in-app chat
Read reports and view photos after the care visit
Learn more about caregiving through the content in the app
My Role
I was the sole Product Manager and designer on the Homage app from its inception in March 2017, when it was but a twinkle in our CEO’s eye. The startup had just raised seed funding, and we were moving full steam ahead on productising what we were learning through our offline operations.
I was responsible for defining the Minimum Viable Product (MVP) and for the UX design. I worked very closely with the engineering lead to plan and coordinate Agile sprints. I also took on Quality Assurance (QA) testing and coordinated with stakeholders on all aspects of the launch.
After the initial launch, I product-managed feature additions and iterations to existing features.
I reported directly to our CEO, who approved all product decisions, and later to the CPO after he joined the team.
Challenges
Tight timeline: to get our end-to-end MVP designed, built, tested, and launched within 6 months.
Launch dependencies: Since Homage is a two-sided managed marketplace, we would need to simultaneously launch the Care Professional App AND our enterprise backend tool for either app to actually work. These didn’t exist yet, either!
Change Management: Offline operations were growing strongly every week. We’d have to transition both our end users and our internal Operators to the new systems.
Gathering Feature Requirements
How did we know what to build? I gathered and interpreted inputs from several sources, namely:
the CEO and co-founders’ vision
our offline processes and operations which were already in full swing
other internal stakeholders
research on local and overseas companies operating similar two-sided B2C marketplaces
I approached as many sources as possible, with the objective of understanding diverse viewpoints and where the company might go. However, the fundamental product goal was to build and launch an initial V1 as quickly as possible. Naturally, not everything would (or should!) make it into the MVP.
Still, in that initial phase, I learned a lot from listening as much as possible and absorbing information from outside Product & Engineering. It was also crucial in those early days to build strong personal relationships and keep open channels of communication as the company grew.
I documented the requirements in a structured Product Requirements Document (PRD) that was stored in Atlassian Confluence.
Roadmap Prioritisation
With the goal of “closing the loop”, i.e. having the entire customer experience digitalised and automated, we worked backwards. Essentially, we knew:
what we wanted, e.g. the end-to-end care management flow for customers and caregivers to be managed through an integrated system
non-negotiables, e.g. obvious features like new user signup, login/logout, onboarding
the desired launch timeline, e.g. Sep 2017, approximately 6 months from kickoff.
Furthermore, since we were building the app from inception, we couldn’t build more complex features until the foundation was done, so feature planning followed a logical sequence based on the customer journey.
(Later on when the product was more mature, I explored other prioritisation frameworks like RICE, the Business vs User Value Matrix, as well as internal workflow sizing to gauge where the biggest opportunities lay.)
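For context, RICE scores each candidate feature as Reach × Impact × Confidence ÷ Effort and ranks the backlog by the result. A minimal sketch of how such a scoring pass might look (the backlog items and numbers below are entirely hypothetical, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: int         # users affected per quarter
    impact: float      # 0.25 (minimal) up to 3 (massive)
    confidence: float  # 0.0 to 1.0
    effort: float      # person-months

    @property
    def rice(self) -> float:
        # RICE = (Reach x Impact x Confidence) / Effort
        return self.reach * self.impact * self.confidence / self.effort

# Hypothetical backlog items, for illustration only
backlog = [
    Feature("Video attachments in chat",  reach=800,  impact=1.0, confidence=0.8, effort=3.0),
    Feature("Recurring visit scheduling", reach=2500, impact=2.0, confidence=0.9, effort=2.0),
    Feature("Caregiver profile redesign", reach=4000, impact=0.5, confidence=0.7, effort=1.0),
]

# Rank the backlog by RICE score, highest first
for f in sorted(backlog, key=lambda f: f.rice, reverse=True):
    print(f"{f.name}: RICE = {f.rice:.0f}")
```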
Prioritising Sub-features Within a Sprint
Each two-week sprint would focus on one or two main features. In the PRD, I would break down the requirements into individual user stories with a proposed priority. While I formed my own viewpoint on each flow and sub-feature and shared that with my CEO, she had the final say in guiding what was in or out. I would then hand the PRD over to the engineering lead, who would assign each sub-feature an estimated size.
One of the guiding principles I internalised during this period was to always ask “what is a must-have versus a nice-to-have?”, in other words, “what can we carve out for later?” A simple example was the MVP for chat, where we decided to support only text and photos rather than additional file formats like video.
One area I also developed experience in was managing stakeholder expectations vis-à-vis the inputs they had provided, which didn’t always appear in the final product. To this end, I would return to the larger company objective and explain how the tradeoffs helped with speed-to-market. Tact and a positive balance of goodwill also went a long way.
User Experience Design
My UX Portfolio showcases my design work more extensively.
From Mar 2017 to Jun 2018, I wore two hats as both Product Manager and UX designer. After finalising the requirements down to the level of user stories, I executed the designs in Sketch and InVision:
Low-fidelity wireframing
High-fidelity mockups
Interactive prototypes for key flows
Usability testing
Features would typically go through multiple rounds of review and iteration with the CEO and, where appropriate, other relevant stakeholders. During the design process, I also looped in the Engineering team to assess technical feasibility.
Finally, after each design was locked and approved, I held a design transition session to brief the engineering team and field questions. This typically happened the week before the designs were scheduled to go into build.
Sprint Kickoff & Progress
During each sprint (typically two weeks), I would participate in the Engineering daily stand-up to check in on the sprint progress.
I had regular check-ins with the Engineering lead to ensure that we were on track to ship the features by the end of the sprint.
Quality Assurance Testing
For the first 18 months or so, I also took on manual quality assurance testing, based on the user acceptance criteria defined in the PRD and design spec. I executed the testing towards the end of each sprint and communicated feedback to the engineers via Atlassian JIRA.
Later on, as the team grew, I handed the testing responsibilities over to a new team member, together with a detailed 80-page testing script for end-to-end regression testing.
Go-to-Market Planning
For feature-level deployments before the initial launch of the app, launches were straightforward: once testing passed, I gave the QA go-ahead to the Engineering team. For the actual launch of the apps, communication was key due to the sheer number of moving parts, including but not limited to:
Synchronise with the Engineering and Operations teams to ensure precise cutovers, with contingency planning in place
Coordinate with Marketing on the outgoing communications to customers and caregivers, as well as App Store and Play Store metadata
Train internal Operators on the new systems and prepare them to handle customer feedback
To ensure stakeholders were on board, I distributed Gantt charts and called alignment meetings where I outlined individual responsibilities, timelines, and potential dependencies. I sent weekly and daily updates over email and Slack.
After countless feature launches over the years, I’ve built my own standard GTM launch template and checklist to cover all the bases.
Product Marketing
Every launch would have a set of product marketing deliverables to ensure that customers were informed of the new features.
For the initial app launch and subsequent iterations, I created and/or coordinated:
App Store and Play Store copy and images
EDMs (electronic direct mailers)
Demonstration Videos & GIFs
FAQs, Cheat Sheets, Manuals
Push notifications
Analytics
For each feature, we would also define a set of key tracking events for user actions and screens, which we sent to Mixpanel and Amplitude (a sketch of this instrumentation follows the list below). With the events and screens in place, I was able to monitor:
Unique users
Unique event totals over a defined time range
Conversions and drop-offs (down to the exact step)
Conversion percentages over time
And more
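To give a flavour of that instrumentation, here is a hypothetical sketch using Mixpanel’s Python library. In practice the events fired from the mobile clients via the native SDKs, and the token, user ID, event name, and properties below are illustrative, not our actual schema:

```python
from mixpanel import Mixpanel  # pip install mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")  # placeholder project token

# A hypothetical booking-funnel event, with properties we could
# later segment on (funnel step, service type, platform, etc.)
mp.track(
    distinct_id="user_123",  # illustrative user ID
    event_name="Booking Step Completed",
    properties={
        "step": "select_service",
        "service_type": "home_care",
        "platform": "ios",
    },
)
```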
If I noticed a significant drop-off at one of the steps, I would consider whether there were immediate copy or design improvements that could reduce it. Similarly, I would monitor conversion over time, note any significant fluctuations, and look into them together with the Engineering team. In so doing, we were able to spot and fix unforeseen technical hitches and bring the best experience to our users!
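To illustrate the kind of funnel analysis this enabled (Mixpanel and Amplitude provide it out of the box), here is a simplified sketch that computes step-by-step conversion from raw events, using made-up event names and data:

```python
# Hypothetical raw events for a booking funnel: (user_id, event_name)
events = [
    ("u1", "Viewed Services"), ("u1", "Selected Service"), ("u1", "Confirmed Booking"),
    ("u2", "Viewed Services"), ("u2", "Selected Service"),
    ("u3", "Viewed Services"),
]

FUNNEL = ["Viewed Services", "Selected Service", "Confirmed Booking"]

# Which users fired each event at least once
reached = {step: {uid for uid, name in events if name == step} for step in FUNNEL}

# Walk the funnel: a user counts at a step only if they also reached every prior step
cohort = reached[FUNNEL[0]]
print(f"{FUNNEL[0]}: {len(cohort)} users")
for step in FUNNEL[1:]:
    converted = cohort & reached[step]
    pct = 100 * len(converted) / len(cohort) if cohort else 0.0
    print(f"{step}: {len(converted)} users ({pct:.0f}% conversion from the previous step)")
    cohort = converted
```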