By Aimee Castro, RN PhD candidate
Jose B. Londono Velez, BScN
Tracy Nghiem, RN, MSc, PCNP candidate
Karyn Moffatt, PhD, Associate Professor
Antonia Arnaert, RN PhD, Associate Professor
Ariana Pagnotta, RN, BSc
Ariane Gautrin, RN, MSc
Argerie Tsimicalis, RN, PhD
Citation: Castro, A., Londono Velez, J. B., Nghiem, T., Moffat, K., Arnaert, A., Pagnotta, A., Gautrin, A. & Tsimicalis, A. (2024). A Systematic Search of Publicly Available mHealth Apps for Respite Care Coordination. Canadian Journal of Nursing Informatics, 19(1). https://cjni.net/journal/?p=12734
Respite care is frequently requested by family caregivers. Yet, accessible services remain underutilized due to challenges in scheduling and coordination. mHealth applications (“apps”) have the potential to resolve these challenges. However, little research has been conducted to map and assess currently available apps for respite care coordination. Therefore, a systematic search was conducted across the Apple iOS and the Google Play stores. Using the Mobile App Rating Scale (MARS) (Stoyanov et al., 2015) and the Enlight tool (Baumel et al., 2017), this study reviewed, appraised, and characterized 40 apps that facilitate the provision of in-person respite care services for supporting family caregiving.
The results indicated that while respite care apps can create and sustain a market for respite care, they often underutilize their potential to deliver respite care, due in part to poor advertisement of their own functions. In keeping with previous literature, the scarcity and limited accessibility of respite care providers on these platforms were seen to hinder their practical usage. This study is among the first of its kind to provide both subjective and objective summaries of respite care app features and functions. As such, it may offer insights to future app developers and care providers, serving as a benchmark for future mHealth coordination app development.
Keywords: caregivers, eHealth, health services accessibility, home care services, mHealth, short break care, mHealth appraisal
At some point in their lives, half of Canadians will provide unpaid care to a family member or friend with a chronic incapacitating illness or age-related need (Sinha, 2013). These family caregivers will provide various forms of support (e.g., assistance with transportation, personal care, housekeeping, and/or care coordination) on a non-professional basis to promote the recovery and quality of life of the individual receiving care (Castro et al., 2022). These acts of caregiving amount to exceptional contributions to society and the healthcare system, amounting to an estimated $25-$72 billion in unpaid work per year in Canada (Barylak, 2016; The Change Foundation, 2019). However, too often, family caregivers exhaust themselves in their duties and experience burnout when they do not have adequate support (Denham et al., 2020; Oliva-Moreno et al., 2018).
Distress among caregivers has been associated with negative effects on mental and physical health, and even life expectancy, highlighting the importance of addressing the need for improved family caregiving support (The Change Foundation, 2019). Different interventions exist to prevent caregivers from feeling overburdened. Of these interventions, respite care remains one of the most frequently requested by family caregivers (Buscemi et al., 2010; Rose et al., 2015). Respite care is a healthcare model that permits community nurses or healthcare aides to visit the homes of family caregivers and take over their caregiving acts, temporarily relieving the caregiver and care recipient of their family caregiving duties and roles (Edelstein et al., 2017; Evans, 2013). Current primary healthcare recommendations urge caregivers to use respite care services when needed (Swartz & Collins, 2019). However, these services are often underutilized because family caregivers are unaware of them and, even when accessed, the services lack the flexibility and coordination capabilities to adequately accommodate the needs of patients and families (Robinson et al., 2017; Rose et al., 2015).
Information and communication technologies (ICTs), such as mobile health (mHealth) and applications (apps), have emerged as a means to render respite care services accessible by delivering flexible support to family caregivers (Castro et al., 2023). These apps have the potential to improve remote communication and care coordination among healthcare providers, family caregivers, and care recipients (Gagnon et al., 2012; Sala-González et al., 2021). Current evidence demonstrates a strong interest by family caregivers in adopting mHealth solutions, especially when these are supported by empirical evidence (Lau et al., 2021; Phongtankuel et al., 2018). Improving apps with desired key features such as portability, GPS location, and instant messaging may help to better coordinate respite care services; however, the extent and form in which these desired features are integrated into such apps remain unknown. Moreover, the overall quality and functionality of these apps have yet to be rigorously assessed using mHealth app assessment tools (Castro et al., 2021). Hence, the aim of this systematic search was to review, compare, appraise, and characterize all publicly accessible mobile apps facilitating the provision of in-person respite care services for family caregivers. Doing so could help prospective respite care recipients understand their current respite care app options and create a launchpad that future mHealth app developers and researchers can refer to for information on the evidence-informed features and qualities of other apps in this industry.
A systematic “hybrid” design proposed by Lau et al. (2021) was used to guide the app store searches, synthesize the results, and analyze the data. This design suggests that a traditional search of academic library databases can be conducted and used to inform a further systematic app store review, to produce a complete picture of current publicly available apps for respite care across both the academic and industry domains.
As recommended by the hybrid mHealth search methodology (Lau et al., 2021), the search terms of a recent scoping review of the academic literature on respite technologies were used to inform the search strategy (Castro et al., 2023). In addition, the expertise of an academic librarian was used to help devise, pilot test, and finalize the search list of English and French keywords included in the search. This final list is available in Table 1.
Table 1
Table of Keywords for App Search
The search for publicly available apps was conducted on two platforms and their corresponding app stores: the Google Play Store via an Android operating system, and the Apple App Store via the Apple iOS operating system. These two app stores were selected as they represent the largest market share of apps (Anthony, 2021). Each app store was searched independently by a few members of the research team. Cookies were cleared prior to searching to avoid inadvertent bias (Donnelly & Thompson, 2015). The search was conducted for Android from April to October 2022 and for iPhone from January to October 2022 using the latest software versions available. To increase reproducibility of this study’s search, the keywords by language (French and English) for both app store searches were recorded (Table 1), and screenshots and screen recordings for each search result were kept. The screenshots provided a record of the first 100 apps that appeared from each keyword search and were kept in folders using a dating system to keep track of progress.
Appendix 1 provides a more detailed overview of the seven steps for screening and selecting (Steps 1-5) and analyzing (Steps 6-7) the respite care apps.
Appendix 1
Detailed Steps of Search, Selection, and Data Extraction Process
The app screening and selection process was divided into five steps.
Step 1 entailed recording the device type (iPhone or Android), device owner, platform, software version, and search date.
Step 2 involved extracting the first 100 results of each keyword search into Excel spreadsheets to facilitate name sorting.
Step 3 entailed the removal of duplicate apps present in the same app store. Duplicates across the two app stores were further screened in case features varied between the two operating systems.
Step 4 involved selecting apps based on title and description and independently assessing the apps for inclusion by two reviewers (initial screening).
Step 5 entailed downloading the selected apps to further assess eligibility based on app features (secondary screening).
Most importantly, included apps had to provide users with the ability to schedule in-person respite care services. These services had to afford the family caregiver the freedom to leave the care recipient attended by the respite service provider for a predetermined number of hours, such as by offering “accompaniment”; by this criterion, apps offering only discrete home care tasks such as wound care or grocery shopping would not qualify as respite care. Apps that only ambiguously met the criteria were flagged, cross-checked, and discussed by the reviewers until consensus was reached.
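To illustrate the record-keeping and deduplication described in Steps 2 and 3, the minimal sketch below shows one way the exported keyword results could be combined and de-duplicated programmatically. The file and column names are hypothetical, and the study itself performed these steps in Excel; the sketch simply mirrors the logic of removing within-store duplicates while retaining cross-store duplicates for further screening.

```python
import pandas as pd

# Assumption: each store's keyword results were exported to a CSV with
# hypothetical columns: app_name, store, keyword, search_date, rank
results = pd.concat(
    [pd.read_csv(f) for f in ["android_results.csv", "ios_results.csv"]],
    ignore_index=True,
)

# Step 3: remove duplicate apps within the same store, keeping the first hit.
# Duplicates across the two stores are retained so that feature differences
# between operating systems can still be screened.
deduplicated = (
    results
    .assign(app_key=lambda df: df["app_name"].str.strip().str.lower())
    .drop_duplicates(subset=["app_key", "store"], keep="first")
    .drop(columns="app_key")
)

print(f"{len(results)} raw results -> {len(deduplicated)} after within-store deduplication")
```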
Data from the final selection of apps were extracted from the apps and their own official websites and sorted into an Excel spreadsheet to produce a descriptive summary of the apps. Extracted data included: the purpose of the app; demographic data such as country of app origin, app language(s), and target user healthcare condition or group; specific app layout and functions; type of respite care provided; marketing aspects such as the price of the app and user comments, if available; and any other information necessary for the quality appraisal.
Two complementary digital health appraisal tools were used to analyze the final selection of respite care apps: the Mobile App Rating Scale (MARS) (Stoyanov et al., 2015) and the Enlight tool (Baumel et al., 2017).
The MARS tool presents a quick, reliable, and multidimensional method for appraising mHealth apps. It has 23 items, each rated on a 5-point scale (1 = Inadequate, 2 = Poor, 3 = Acceptable, 4 = Good, 5 = Excellent). The first 19 items are grouped into four objective app quality sections:
section A – Engagement (entertainment, interest, customisation, interactivity, target group);
section B – Functionality (performance, ease of use, navigation, gestural design);
section C – Aesthetics (layout, graphics, visual appeal); and
section D – Information Quality (accuracy of app description, goals, quality of the information, quantity of information, visual information, credibility of the developer, evidence base/testing).
The items in each of the sections are averaged to give a “Mean Section Score” out of 5 points. The Mean Section Scores are themselves then averaged to give an overall “App Quality Mean Score” out of 5 points.
The fifth and final section of the MARS, section E (items 20 through 23), provides the “App Subjective Quality Score”. This score is kept independent from the objective assessment sections because its questions are directed towards the evaluator (e.g., “Would you recommend this app to people who might benefit from it?”). The items in this section are each scored out of a possible 5 points and then averaged to obtain the “App Subjective Quality Score”.
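To make the MARS scoring arithmetic concrete, the minimal sketch below computes the Mean Section Scores, the App Quality Mean Score across sections A-D, and the App Subjective Quality Score for a single hypothetical app. The item ratings, dictionary keys, and function names are illustrative assumptions and are not drawn from any app in this review.

```python
from statistics import mean

# Illustrative item ratings (1-5) for one hypothetical app, grouped by MARS section.
# Section A has 5 items, B has 4, C has 3, D has 7 (items 1-19),
# and section E has 4 subjective items (items 20-23).
ratings = {
    "A_engagement":    [3, 4, 3, 4, 4],
    "B_functionality": [4, 4, 5, 4],
    "C_aesthetics":    [3, 4, 4],
    "D_information":   [4, 3, 3, 4, 3, 4, 2],
    "E_subjective":    [3, 3, 4, 3],
}

# Each section's items are averaged to a Mean Section Score out of 5.
section_means = {section: mean(items) for section, items in ratings.items()}

# The App Quality Mean Score is the average of the four objective sections (A-D).
app_quality_mean = mean(
    section_means[s] for s in ("A_engagement", "B_functionality",
                               "C_aesthetics", "D_information")
)

# The App Subjective Quality Score (section E) is reported separately.
app_subjective_quality = section_means["E_subjective"]

print(f"App Quality Mean Score: {app_quality_mean:.2f}/5")
print(f"App Subjective Quality Score: {app_subjective_quality:.2f}/5")
```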
While many items from the MARS tool help assess critical elements of an mHealth app, no items appraise the privacy, transparency, and security of user information. These domains were therefore analyzed using the criteria set by the Enlight tool’s Privacy Explanation and Basic Security checklists (Bining et al., 2022). These checklists encompass multiple criteria, each rated as “Yes”, “Not Applicable”, or “No or can’t tell”. A point is given for every criterion not met, so the best possible score is 0/8 for the Privacy Explanation checklist and 0/4 for the Basic Security checklist: a 0/8 indicates the app fully meets the Privacy Explanation requirements, and a 0/4 indicates user data is reasonably secured.
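The Enlight checklist arithmetic can be illustrated with a similar minimal sketch. The responses shown are hypothetical, and the assumption that “Not Applicable” items add no points follows the scoring description above.

```python
def enlight_checklist_score(responses):
    """Count one point for every criterion not met ('No or can't tell').
    'Yes' and 'Not Applicable' responses add no points, so 0 is the best score."""
    return sum(1 for r in responses if r == "No or can't tell")

# Hypothetical ratings for one app on the 8-item Privacy Explanation checklist
privacy_responses = ["Yes", "Yes", "No or can't tell", "Yes",
                     "No or can't tell", "Yes", "Not Applicable", "Yes"]
# ...and on the 4-item Basic Security checklist
security_responses = ["Yes", "No or can't tell", "No or can't tell", "Yes"]

print(f"Privacy Explanation: {enlight_checklist_score(privacy_responses)}/8")
print(f"Basic Security: {enlight_checklist_score(security_responses)}/4")
```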
To test inter-researcher variability in scoring, a blind second analysis of three Apple apps and three Android apps was conducted by another researcher. There were no discrepancies greater than one point noted for any score in any of the MARS or Enlight sections assessed for any of the apps. The complete MARS and Enlight scores for all apps can be found in Appendix 2 and Appendix 3.
Appendix 2
18 Apple iOS App Store Applications Analyzed
Appendix 3
22 Android Google Play Store Applications Analyzed
The data were analyzed both quantitatively and qualitatively in Excel by three researchers using descriptive content analysis techniques (Elo & Kyngäs, 2008). Visual data in the form of tables and clusters were generated from the information gathered by the MARS and Enlight tools to outline trends in the quantitative data. Qualitative features were analyzed and grouped into themes to facilitate comparison across the apps.
In total, 4,711 apps were screened (3,193 from the Android store and 1,518 from the Apple iOS store). After deduplication, 2,510 apps remained (1,577 from Android and 933 from iOS). Three apps were added through snowball sampling after the deduplication process, for a total of 2,513 apps. From there, apps underwent two additional rounds of screening, each with its own set of exclusion criteria. Ultimately, 18 apps from the Apple store and 22 apps from the Google Play store met the inclusion criteria for download, data extraction, and analyses, for a combined total of 40 respite care apps analyzed. Eight of the apps had Apple-Android interoperability; however, these apps were still counted separately in case there were differences in design across the two stores.
See Figures 1 and 2 for flow charts outlining the app screening stages and inclusion criteria. See Appendix 1 for detailed reasons for app inclusion and exclusion. The most common reason for exclusion was Reason 1 (i.e., the brief app description in the returned results gave no indication of being respite care oriented; specifically, there was no indication that the app could be used to coordinate some form of in-person support that would allow the family caregiver to leave the home).
Figure 1
Android Google Play Apps Screening Flow Chart
Figure 2
Apple Apps Screening Flow Chart
Of the final list of apps assessed as being able to provide respite care services, only seven explicitly advertised “respite care” or related terms, such as “short break care” or “accompaniment”, as an official service. The remaining 33 apps were nonetheless included because their services could function as respite care: they provided users with the ability to coordinate in-home care services for a care recipient for a selectable number of hours, where the home care services provided did not require supervision by a family caregiver. Such respite care activities included psychosocial support, accompaniment, and overnight care for sleep. Therefore, while not explicitly using the term “respite care” in their advertisements or service listings, these apps could be used for coordinating respite care services.
The most highly represented countries by number of operating respite care apps were Canada with 11 apps, the United States with 10 apps, and Singapore with 6 apps. The number of app users could not be reliably determined from the app stores or apps themselves, so these statistics were not recorded. Of the 40 apps, only two apps had more than 10 reviews online; these apps were Carelinx (Android) with 1,160 reviews giving a mean of 4/5 stars, and Curam (Android) with 286 reviews giving a mean of 4.3/5 stars.
Most apps had multiple target populations: 21 apps mentioned senior/elder care as an option, 19 apps mentioned services to all ages and populations, 8 apps explicitly mentioned serving populations with chronic illnesses or disabilities, and 3 apps mentioned care for children. Only one app (Ianacare) had “Primary caregiver” as a target population.
Below, we share the key quantitative appraisal and thematic results for these respite care app analyses.
The following is a summary of the findings gathered with the MARS scores; the complete tables can be found in Appendices 2 and 3.
The App Quality Mean Score section of the MARS indicated the apps were, on average, between “acceptable” and “good” (Stoyanov et al., 2015), with an average Quality Mean Score across the 40 apps of 3.33. Apple apps scored higher on average (mean = 3.66, SD = 0.53) than Android apps did (mean = 3.08, SD = 0.54). For the overall MARS Quality Mean Score, averaged across sections A-D, the United Arab Emirates-based Dardoc (4.7) and the United States-based Carelinx (4.5) scored highest. The Canada-based Vytality at Home app scored highest on the App Subjective Quality Score (4.5/5).
As shown in Figure 3, Apple apps outperformed Android apps in every section except the perceived (subjective) score, which was slightly lower for Apple apps. Among sections A to D, the Functionality Mean Score (Section B) was the only criterion showing a major difference between Apple (mean = 4.02) and Android (mean = 3.68) apps. See Figure 3 for a complete visual comparison of the Apple and Android MARS sections.
Figure 3
Apple iOS Versus Android App MARS Quality
The following is a summary of the findings of the Enlight checklists; the complete tables can be found in Appendices 2 and 3.
Both Android and Apple apps performed well on average in the Enlight privacy checklist (Apple = 2.29/8, Android = 2.3/8), but poorly in the security checklist (Apple = 2.18/4, Android = 2.56/4).
For both Android and Apple, the privacy criterion least met by all apps was question (Q.) 5: “Does system explicitly tunnel users through terms of use before program utilization.” For Apple, 11/18 apps did not meet the criterion, and for Android 20/22 apps did not meet this criterion.
The criterion most often met by Apple apps on the privacy checklist was Q.1: “Terms of use informs users of data journey in detail and sources of exposure.” 16/18 apps met this criterion. For Android, the criterion most often met was Q.8: “The system warns users about providing private identifiable information”; 20/22 apps met this criterion.
The worst performing apps on the Enlight Privacy checklist were Respite Now (Apple) and Good Homecare (Android), each scoring 7/8 points with one N/A. Both apps did have a Terms of Service page and a Privacy Policy page, though neither met any of the criteria outlined by the Enlight Privacy checklist. The terms of service on the Respite Now website did not appear to make any reference to the online aspects of data privacy, such as measures for data protection, how or where personal data will be stored, whether information will be encrypted, or whether personally identifiable information will be kept secured.
The best performing app on the Enlight Privacy checklist was Damava. Present on both the Android and Apple stores, it scored 0/8. Damava describes which data users consent to have collected when they agree to the Terms of Service and Privacy Policy, how the data are used, how the data cannot be used (with examples such as “your email address will not be shared with third parties”), and how all personally identifiable information will be removed before data are shared with third parties. These, as well as all other points on the Enlight Privacy checklist, were met.
For the security checklist scores, both Apple and Android apps scored lowest on Q.3: “Is there documentation of data exposure through monitoring of login activities on platform servers and data.” 14/18 Apple apps did not meet this criterion, and 20/22 Android apps did not meet this criterion.
The criterion most often met on the security checklist for both stores was Q.1: “Is there encryption protection and de-identification of data as well as device password protection.” 14/18 Apple apps met this criterion; 14/22 Android apps met this criterion.
While objective tools like the MARS and Enlight are necessary to provide results against a comparable and referable set of criteria, they do not explain how, or with what functions, the various apps have been built to successfully provide their respite care services. The following section describes the respite care apps’ operational features and categorizes them into themes.
The apps assessed came in two general layouts for which we coined the names “Care Package” and “Filter Pick”.
“Care Package” apps had simple layouts: the respite care recipient or family caregiver could sign up by creating a personal account, and then pick from a short list of predetermined services (“Care Packages”) with assigned hourly rates or monthly/contractual rates. These “Care Packages” did not disclose any information (e.g., profile) about the care provider hired for the service before the booking process commenced, nor did these apps include a filter for specific needs of the family caregiver or care recipient.
For “Filter Pick” apps, after signing in, users were given the opportunity to “get care”, “search care providers”, or “find nearby care providers”, whereupon they were presented with a listing of care providers, who could be sorted through using a filter (i.e., a “Filter Pick”). Filter options could be laid out in terms of the specific tasks needed by the prospective family seeking care (e.g., hygiene care or PEG tube feeding); or by respite care provider credentials, hourly charge, or certifications/licensures. After setting filters on care providers, users (i.e., family caregivers or care recipients) could click on the care provider’s profile and request to schedule care. The next page was typically a fillable form with instructions on how to schedule care, and which additional services would be needed on the day of care as part of the respite care (e.g., solely accompaniment/companionship, hygiene care, cooking, etc.). Once the request for a care provider and their services had been submitted, family end-users could contact the selected care provider, or they could be asked to pay in advance, depending on the app.
Besides creating an account for family caregivers to log in, some apps, like Respite Now, Carer, and Carers, required the creation of at least one care recipient profile. Profile information varied per app, but the most common categories included: name, age, sex, medical conditions, medical needs (e.g., help with ambulation, feeding assistance) of the care recipient, and preferences for incoming respite care providers (e.g., must be male, must be an RN, etc.). Once completed, the profile would be saved within the app and any care sought thereafter could be directed to a specific care recipient profile.
Each “Filter Pick” app presented varying degrees of information about the care providers listed from the search. After setting filtering options, “Filter Pick” apps had two main pages to navigate: (1) the list of care providers and (2) the provider’s personal information. The first page would be a list of care provider profile names with pictures, which the care recipient could scroll down and click on to reach the second page. This second page allowed users to view individual care provider profiles more closely and review additional information. The type of information found at each stage of the search is summarized in Table 2.
Table 2
Care provider profile contents in Filter Pick apps
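As a rough illustration of how a “Filter Pick” flow might be modelled, the sketch below defines a hypothetical care provider profile and filters providers by requested services, credential, and hourly rate, mirroring the first page of the flow described above. The data structure, field names, and example providers are assumptions for illustration only and are not drawn from any specific app reviewed here.

```python
from dataclasses import dataclass, field

@dataclass
class CareProvider:
    # Hypothetical profile fields a "Filter Pick" app might expose
    name: str
    credentials: set = field(default_factory=set)   # e.g., {"RN", "PSW"}
    services: set = field(default_factory=set)      # e.g., {"accompaniment", "hygiene care"}
    hourly_rate: float = 0.0

def filter_providers(providers, required_services, max_rate=None, required_credential=None):
    """Return the providers matching the family's filter selections (page 1 of the flow)."""
    matches = []
    for p in providers:
        if not required_services <= p.services:        # provider must offer every requested task
            continue
        if max_rate is not None and p.hourly_rate > max_rate:
            continue
        if required_credential and required_credential not in p.credentials:
            continue
        matches.append(p)
    return matches

# Hypothetical listings
providers = [
    CareProvider("A. Singh", {"RN"}, {"accompaniment", "PEG tube feeding"}, 42.0),
    CareProvider("B. Tremblay", {"PSW"}, {"accompaniment", "hygiene care"}, 28.0),
]

# A family filters for accompaniment plus hygiene care under $35/hour
for provider in filter_providers(providers, {"accompaniment", "hygiene care"}, max_rate=35):
    print(provider.name, provider.hourly_rate)
```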
For most of the apps, care provider performance, reliability, and safety could not be assessed because app hiring policies and certification standards were difficult to identify and not always available. Hiring information and criteria were not centralized, but instead were fragmented across the app websites, terms and conditions, and in-app information blurbs. Nonetheless, useful qualitative data on the apps’ care coordination reliability were gathered after thorough review of each app’s filter capabilities, and a scan of the resulting care providers.
The most unreliable aspect of the apps’ care coordination functions was the availability (or lack thereof) of care providers based on user location. Some apps internally kept a maximum distance limit, and these often returned “0 care providers in your area”. Those that did not have a maximum distance limit, or that allowed users to pick a maximum commute, returned care providers thousands of miles away. Furthermore, the apps did not give clear indications as to providers’ willingness to commute. See Table 3 for the different methods of location filtering that the apps provided.
Table 3
Location filtering functions
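To illustrate the kind of distance-based filtering summarized in Table 3, the sketch below applies a simple great-circle (haversine) radius filter and reproduces the “0 care providers in your area” outcome when no providers fall within the chosen radius. The coordinates, radius, and function names are hypothetical; actual apps may rely on driving distance, postal codes, or other methods that were not visible to us.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two latitude/longitude points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def providers_within_radius(user_location, providers, max_km=50):
    """Keep only providers within max_km of the user; an empty result corresponds
    to the '0 care providers in your area' message described above."""
    lat, lon = user_location
    return [p for p in providers
            if haversine_km(lat, lon, p["lat"], p["lon"]) <= max_km]

# Hypothetical coordinates: a Montreal-area user, one nearby and one distant provider
providers = [
    {"name": "Provider 1", "lat": 45.52, "lon": -73.58},
    {"name": "Provider 2", "lat": 43.65, "lon": -79.38},
]
nearby = providers_within_radius((45.50, -73.57), providers, max_km=50)
print(nearby if nearby else "0 care providers in your area")
```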
Another aspect of reliability was the quality of the respite care provider profiles on the apps. Most in-app care providers’ profiles did not have any client reviews despite the availability of the common “Leave a Rating” function, and the profiles gave no other indication that providers could deliver the services they claimed to offer.
Of the 40 assessed apps, four provided training options for their staff or clientele. Nurse on Call and Respite Now sent emails with links to Zoom invitations in which they offered app demos to their users and to prospective care recipients. Respite Now went one step further and offered care provider training, sending monthly emails such as: “Strategies virtual training: Here you will learn some strategies on how to effectively support individuals with Fetal Alcohol Spectrum Disorder.” The app Carer advertised eight training modules, each lasting two hours, tailored to the caregiver-recipient dyad, which could take place in the dyad’s home with an instructor (e.g., Stoma Care, Caring for Dementia Patients, etc.). The app Carelinx also provided online educational care classes for families via a company website called CareAcademy. These classes consisted of asynchronous modules that one could complete at one’s own pace on any device. All the apps providing training were from the Apple store; the Android versions of Carelinx and Carer did not provide links to their training modules.
Before providing care to new users, two apps (Vytality and Homage) required that their employees perform environmental scans of care recipients’ homes and care environments. Vytality specified that patient care requirements would be determined by their assessment findings made during the preliminary visit. For Homage, although the visit was stated to be “optional but highly recommended”, the company would not allow new users to progress to order care without agreeing to the preliminary visit.
To develop an app that provides family caregivers with useful tools, existing apps first needed to be searched, dissected, and appraised. The following discussion analyzes the above findings and explores the implications of the MARS Quality Mean Scores, the apps’ performance on the Enlight checklists, noted issues regarding respite care accessibility, and the inadequate marketing of respite care services. Throughout this discussion, we compare the results of this app store search with the companion academic scoping review (Castro et al., 2023) completed as part of the chosen hybrid methodology (Lau et al., 2021). We finish with a discussion of this study’s strengths and limitations and of future opportunities for research in the areas of respite care and home care app development.
The overall App Quality Mean Score from the MARS for the 40 tested apps was 3.33, translating into an average app score between the MARS scale’s “acceptable” and “good” marks. This finding is consistent with other app store reviews that evaluated average MARS mean scores for other mHealth apps. In a study of 23 mHealth apps, Kim et al. (2018) reported an average mean score of 3.23 for apps that mapped out drug interactions, while another study of 17 apps for informal caregivers of people with dementia, conducted by Werner et al. (2022), calculated an average mean score of 3.08. Richardson et al. (2019) also reported a mean score of 3.37 in a study of 18 mHealth apps for parents of children in neonatal intensive care units. The proximity in Quality Mean Scores between this study’s findings and others’ work indicates that respite care app development is at a similar level to that of other mHealth app services.
Lack of security was apparent among the apps assessed. Our companion scoping review of academic studies of respite care information and communication technologies also noted the importance of end-user trust in the data privacy standards and usability of respite care platforms (Castro et al., 2023). Lack of security in the assessed apps could be problematic for some consumers, as many people wish to know their risk of data exposure before downloading an app (Madden et al., 2013). Security information relayed by the apps’ terms of service and privacy policies varied, and these details were frequently scattered across various documents (e.g., terms of service vs. privacy policy) or locations (e.g., the accompanying website for the app vs. the app itself). Poor accessibility of security and privacy information could make it more difficult for users to address their security concerns. Poor privacy standards could be a serious barrier to respite app use, as existing literature suggests that privacy concerns have a strong influence on willingness to provide personal information. This is especially true for sensitive data, as privacy concerns over health information have been shown to negatively impact the use of mHealth app interventions and could even prevent individuals from obtaining other healthcare services (Nurgalieva et al., 2020; Wu et al., 2012). One solution proposed by Albrecht and colleagues could come in the form of an “app synopsis” produced by the manufacturer (Albrecht et al., 2015). This synopsis would yield a structured and highly legible document that can “aid users in evaluating whether an app meets their needs and can be used in a safe manner, even if they are not familiar with performing such evaluations” (Albrecht et al., 2015, p. en10).
The issues of limited care provider supply and accessibility are common topics of discussion within the respite care literature (Cooke et al., 2020) and were demonstrated once again in this study. “Filter Pick” apps that internally kept a maximum distance limit, or that allowed users to filter by distance, often returned “0 care providers in your area”. Furthermore, very few apps made it clear which locations had care providers available and which did not. To prevent respite care recipients and family caregivers from wasting their time, it is important that respite care coordination apps make it swiftly clear to their users when the app does not have the personnel to provide care services in the user’s area.
Care provider results often mapped to residences in large US cities, possibly suggesting a larger concentration of respite care providers in urban areas. Access to respite care services in rural areas has been described as a challenge in other respite care studies (Cooke et al., 2020). The results of this app store search once again showed that care provider accessibility is highly dependent on region and that, at this stage, a large subset of the population cannot reliably use apps to arrange respite care in their homes. By marketing such apps to schools of nursing, homecare training programs, and employment agencies, respite care app development teams could potentially build their supply of respite care providers on the apps (Winston et al., 2023). Furthermore, by partnering with existing institutions, home care support agencies, and community health organizations, these apps may also be perceived as more trustworthy due to their established affiliations.
Of the 40 respite care apps identified through our search strategy, only seven explicitly advertised “respite care” as a core service; the others described services that could count as respite care but did not use this term or related terms such as “short break care”. Our results suggest that care coordination apps are failing to advertise respite care as part of their services, despite being capable of coordinating such services. These results align with our complementary scoping review of respite care ICTs, in which the need for adequate promotion and marketing of technologies to support respite care services was emphasized (Castro et al., 2023). Building a well-designed technology is insufficient for successfully facilitating respite care services; families and providers must be aware of these novel avenues for coordinating respite care if these platforms are to be used. Participatory design methods with families, respite care providers, and homecare agencies may help to ensure the successful implementation of respite care apps (Castro et al., 2023).
Failing to adequately promote the respite care nature of these apps can be seen as a missed opportunity for the apps, respite care providers, and family caregivers who could mutually benefit from this service. After all, respite care services remain underutilized, due partly to too few and too inflexible respite care options (Cooke et al., 2020). However, this study suggests that the problem goes beyond a lack of options, as poor advertising of respite care services may be limiting the pool of visible apps. Developers of apps whose care models allow users to specify a type of medical service and book a healthcare professional for a selectable number of hours to fulfill that service independently already have the capacity to provide respite care (i.e., the family caregiver can leave the care recipient alone with the trusted care provider); these developers should advertise this capacity and include “respite care” or “short break care” explicitly as a service option in their app.
There were several limitations to this systematic app store search. The most notable limitation was that members of the research team did not actually coordinate respite care events through the apps. This app store search aimed to determine the state and quality of respite care apps, not the quality of the caregivers hired through the apps to provide services. As a result, we did not assess the caregiving services beyond the point of digital contact with respite care workers. Knowing how an app behaves after payment, and how its features provide communication, location, and timing information in a safe and reliable way to connect users to care providers, is essential for fully comprehending the user experience and the limitations of current respite care coordination apps.
Another constraint lies with the assessment tools. Previous systematic searches of mHealth apps have highlighted that the MARS and Enlight tools evaluate certain aspects of mHealth apps in complementary ways (Dogtiev, 2021). However, these complementarities are not exhaustive (Belen Sotillos, 2021). Even with the combination of the MARS and Enlight tools, to our knowledge, there is no validated tool available to assess app scalability, interoperability, or the care provider certification and competency verification policies of mHealth apps coordinating community services. In this respect, the two tools are not comprehensive in their ability to analyze app features specific to supporting the coordination of in-person healthcare services.
It will be important for future researchers to find a method of assessing apps beyond the point of care provider contact or payment prompt, as data beyond this point are essential to capturing the full user experience and synthesizing a comprehensive view of respite care apps. Furthermore, similar to other respite care studies, this app store search found that accessibility to respite care remains a problem. Many studies have also found that existing respite care services are often in demand yet underutilized due to issues such as scheduling flexibility and trust, pointing to a disconnect between the chains of supply and demand of respite care (Robinson et al., 2017; Rose et al., 2015). Based on these findings, the next steps for respite care software developers and respite care organizations should be to conduct research into how respite care apps can break down barriers to accessibility, weave together the chains of supply and demand, and increase recruitment and engagement of respite care providers in respite care apps across various communities.
This systematic search for respite care coordination apps revealed that many apps are capable of creating and sustaining a market for respite care but are not realizing the potential of their platforms to deliver respite care, due in part to a lack of supply of respite care providers or connections with existing agencies. Furthermore, these apps often fail to explicitly advertise respite care in their app store descriptions. In agreement with other respite care research, this study again highlighted how the short supply of, and poor accessibility to, app-based care providers negatively affect the practical use of these apps and keep them severely limited by user location.
Strengths of our study included the rigour of the methodology, the size of our app sample, and the descriptive explanations of respite care apps’ functions and layouts. Our methodology followed a hybrid approach, which searched the app stores using keywords suggested from the results of a traditional systematic search of academic library databases for research on respite care technologies conducted by Castro et al. (2023). This search strategy ensured that the greatest number of words relating specifically to respite care, both in English and French, would be used to scour the app stores for relevant respite care applications. Our search was also conducted over two app stores, further increasing the breadth of our sample, and allowing for a more thorough examination of the mHealth respite care app market.
This respite care app store search is among the first of its kind in that it provides objective and descriptive summaries of respite care apps’ features and functions. Our study provides respite care app developers and service providers with a comprehensive summary of what other respite care app creators have done to automatize and simplify the process of coordinating respite care. This study can serve as a benchmark that future developers of mHealth coordination technologies can use to guide the development of similar respite care and home care coordination apps.
The authors would like to thank McGill University Research Librarian Francesca Frati for her feedback in drafting the early search strategies for this review; and McGill University Associate Professor John Kildea, PhD, for his support in co-supervising Jose B. Londono Velez in an independent study course that led to a preliminary draft of this manuscript.
This project was funded with generous support from the Rossy Cancer Network Cancer Care Quality and Innovation Program (2020) and from the McGill Nursing Collaborative for Education and Innovation in Patient-and-Family-Centered Care (2023). Tsimicalis is supported by a Chercheur-Boursier Junior 1 from the Fonds de Recherche du Québec-Santé (Québec Medical Research Council). Castro is a Canadian Nurses Foundation Scholar. She is supported by a Fonds de Recherche du Québec-Santé Formation de Doctorat (Doctoral Research Award) and the MEC-BSI bourses (2023) for doctoral students in nursing in Quebec.
ARC and AT designed the study. JLV, TN, and AP collected the data. JLV, TN, AP, ARC, and AG analyzed the data. All authors contributed to writing the manuscript, finalizing the discussion points, and/or approving the final submission.
Albrecht, U. V., Pramann, O., & von Jan, U. (2015). Medical apps – The road to trust. European Journal for Biomedical Informatics, 2015 (11), en7-en12. https://doi.org/10.24105/ejbi.2015.11.3.3
Anthony, J. (2021). Number of apps in leading app stores in 2021/2022: Demographics, Facts, and Predictions. Finances Online. https://financesonline.com/number-of-apps-in-leading-app-stores/
Barylak, L., & Guberman, N. (2016). Beyond recognition–Caregiving & human rights in Canada. http://www.carerscanada.ca/wp-content/uploads/2021/03/CCC_Policy_brief_Human_rights_EN.pdf
Baumel, A., Faber, K., Mathur, N., Kane, J. M., & Muench, F. (2017). Enlight: A comprehensive quality and therapeutic potential evaluation tool for mobile and web-based eHealth interventions. Journal of Medical Internet Research, 19(3), e82. https://doi.org/10.2196/jmir.7270
Baumel, F., Kane, J.M., & Muench, F. (2016). Enlight tool. Mind Tools.
Bellan Sotillos, B., Vázquez, M., Birov, S., Müller, A., Prodan, A., Jacinto, S., Martinez, S., Schulz, R., Smaradottir, B., Forjan, M., Frohner, M., Pasteka, R., Sauermann, S. Ravic, M., Skorin, M., Eskandar, H., Pestina, S., Elkin, J., Joshi, S., Pérez, V., Pujari, S. & Shokralla, M. (2021). D2.1 Knowledge Tool 1. Health apps assessment frameworks. https://www.ehtel.eu/activities/eu-funded-projects/mhealth-hub.html
Castro, A. R., Arnaert, A., Moffatt, K., Kildea, J., Bitzas, V., & Tsimicalis, A. (2021). Developing an mHealth application to coordinate nurse-provided respite care services for families coping with palliative-stage cancer: Protocol for a user-centered design study. JMIR Research Protocols, 10(12), e34652. https://doi.org/10.2196/34652
Castro, A. R., Arnaert, A., Moffatt, K., Kildea, J., Bitzas, V., & Tsimicalis, A. (2022). “Informal Caregiver” in nursing: An evolutionary concept analysis. Advances in Nursing Science, 46(1):p e29-e42. https://doi.org/10.1097/ANS.0000000000000439
Castro, A. R., Brahim, L. O., Chen, Q., Arnaert, A., Quesnel-Vallée, A., Moffatt, K., Kildea, J., Bitzas, V., Pang, C., & Hall, A.-J. (2023). Information and communication technologies to support the provision of respite care services: Scoping review. JMIR Nursing, 6(1), e44750. https://doi.org/10.2196/44750
Cooke, E., Smith, V., & Brenner, M. (2020). Parents’ experiences of accessing respite care for children with Autism Spectrum Disorder (ASD) at the acute and primary care interface: A systematic review. BMC Pediatrics, 20(1), 244. https://doi.org/10.1186/s12887-020-02045-5
Denham, A. M. J., Wynne, O., Baker, A. L., Spratt, N. J., Turner, A., Magin, P., Palazzi, K., & Bonevski, B. (2020). An online survey of informal caregivers’ unmet needs and associated factors. PLOS ONE, 15(12), e0243502. https://doi.org/10.1371/journal.pone.0243502
Dogtiev, A. (2021). App Stores List (2020). Business of Apps. Retrieved 2021-08-05 from https://www.businessofapps.com/guide/app-stores-list/
Donnelly, K. Z., & Thompson, R. (2015). Medical versus surgical methods of early abortion: protocol for a systematic review and environmental scan of patient decision aids. BMJ Open, 5(7), e007966. https://doi.org/10.1136/bmjopen-2015-007966
Edelstein, H., Schippke, J., Sheffe, S., & Kingsnorth, S. (2017). Children with medical complexity: a scoping review of interventions to support caregiver stress. Child Care Health Development, 43(3), 323-333. https://doi.org/10.1111/cch.12430
Elo, S., & Kyngäs, H. (2008). The qualitative content analysis process. Journal of Advanced Nursing, 62(1), 107-115. https://doi.org/10.1111/j.1365-2648.2007.04569.x
Evans, D. (2013). Exploring the concept of respite. Journal of Advanced Nursing, 69(8), 1905-1915. https://doi.org/10.1111/jan.12044
Gagnon, M.-P., Desmartis, M., Labrecque, M., Car, J., Pagliari, C., Pluye, P., Frémont, P., Gagnon, J., Tremblay, N., & Légaré, F. (2012). Systematic review of factors influencing the adoption of information and communication technologies by healthcare professionals. Journal of Medical Systems, 36(1), 241-277. https://doi.org/10.1007/s10916-010-9473-4
Kim, B. Y. B., Sharafoddini, A., Tran, N., Wen, E. Y., & Lee, J. (2018). Consumer mobile apps for potential drug-drug interaction check: Systematic review and content analysis using the Mobile App Rating Scale (MARS). JMIR Mhealth and Uhealth, 6(3), e74. https://doi.org/10.2196/mhealth.8613
Lau, N., O’Daffer, A., Yi-Frazier, J., & Rosenberg, A. R. (2021). Goldilocks and the three bears: A just-right hybrid model to synthesize the growing landscape of publicly available health-related mobile apps [Viewpoint]. Journal of Medical Internet Research, 23(6), e27105. https://doi.org/10.2196/27105
Madden, M., Lenhart, A., Cortesi, S., & Gasser, U. (2013). Teens and mobile apps privacy. Pew Internet and American Life Project. https://www.pewresearch.org/internet/2013/08/22/teens-and-mobile-apps-privacy/
Nurgalieva, L., O’Callaghan, D., & Doherty, G. (2020). Security and privacy of mHealth applications: A scoping review. IEEE Access, 8, 104247-104268. https://doi.org/10.1109/ACCESS.2020.2999934
Oliva-Moreno, J., Peña-Longobardo, L. M., Mar, J., Masjuan, J., Soulard, S., Gonzalez-Rojas, N., Becerra, V., Casado, M., Torres, C., Yebenes, M., Quintana, M., & Alvarez-Sabín, J. (2018). Determinants of informal care, burden, and risk of burnout in caregivers of stroke survivors: The CONOCES study. Stroke, 49(1), 140-146. https://doi.org/10.1161/strokeaha.117.017575
Phongtankuel, V., Shalev, A., Adelman, R. D., Dewald, R., Dignam, R., Baughn, R., Prigerson, H. G., Teresi, J., Czaja, S. J., & Reid, M. C. (2018). Mobile health technology is here – But are hospice informal caregivers receptive? American Journal of Hospice & Palliative Medicine, 35(12), 1547-1552. https://doi.org/10.1177/1049909118779018
Richardson, B., Dol, J., Rutledge, K., Monaghan, J., Orovec, A., Howie, K., Boates, T., Smit, M., & Campbell-Yeo, M. (2019). Evaluation of mobile apps targeted to parents of infants in the neonatal intensive care unit: Systematic app review. JMIR Mhealth and Uhealth, 7(4), e11620. https://doi.org/10.2196/11620
Robinson, C. A., Bottorff, J. L., McFee, E., Bissell, L. J., & Fyles, G. (2017). Caring at home until death: Enabled determination. Support Care Cancer, 25(4), 1229-1236. https://doi.org/10.1007/s00520-016-3515-5
Rose, M. S., Noelker, L. S., & Kagan, J. (2015). Improving policies for caregiver respite services. The Gerontologist, 55(2), 302-308.
Sala-González, M., Pérez-Jover, V., Guilabert, M., & Mira, J. J. (2021). Mobile apps for helping informal caregivers: A systematic review. International Journal of Environmental Research and Public Health, 18(4), 1702. https://www.mdpi.com/1660-4601/18/4/1702
Sinha, M. (2013). Spotlight on Canadians: Results from the General Social Survey Portrait of caregivers, 2012. Statistics Canada. https://www150.statcan.gc.ca/n1/en/catalogue/89-652-X
Stoyanov, S. R., Hides, L., Kavanagh, D. J., Zelenko, O., Tjondronegoro, D., & Mani, M. (2015). Mobile App Rating Scale: A new tool for assessing the quality of health mobile apps. JMIR mHealth and uHealth, 3(1), e27. https://doi.org/10.2196/mhealth.3422
Swartz, K., & Collins, L. G. (2019). Caregiver care. American Family Physician, 99(11), 699-706.
The Change Foundation. (2019). 2nd Annual Spotlight on Ontario’s Caregivers. https://ontariocaregiver.ca/wp-content/uploads/2019/12/Spotlight-on-ontarios-caregivers-2019_EN.pdf
Werner, N. E., Brown, J. C., Loganathar, P., & Holden, R. J. (2022). Quality of mobile apps for care partners of people with Alzheimer Disease and related dementias: Mobile app rating scale evaluation. JMIR Mhealth and Uhealth, 10(3), e33863. https://doi.org/10.2196/33863
Wu, K.-W., Huang, S. Y., Yen, D. C., & Popova, I. (2012). The effect of online privacy policy on consumer privacy concern and trust. Computers in Human Behavior, 28(3), 889-897. https://doi.org/https://doi.org/10.1016/j.chb.2011.12.008
Institution: McGill University
E-mail: aimee.castro2@mail.mcgill.ca
ORCID: 0000-0002-6461-0866
Biography: Aimee Castro is a PhD candidate in Nursing at McGill University. She has been a family caregiver, a homecare worker, and an entrepreneur making mobile technologies more accessible to older adults. Her research focuses on the uses of digital health technologies for supporting family caregiving.
Institution: McGill University
E-mail: jose.londonovelez@mail.mcgill.ca
ORCID: 0009-0001-4691-4380
Biography: Jose Londono is a McGill Nursing alumnus and a current University of Saskatchewan medical student. He has taught English and conducted independent fieldwork at the Shaolin Monastery in Henan province, China, and is currently involved in mobile health app research through McGill.
Institution: Ingram School of Nursing, McGill University
E-mail: tracy.nghiem@mail.mcgill.ca
Biography: Tracy Nghiem is a primary care nurse practitioner currently practicing in the Montreal region, serving the pediatric, adult, and geriatric populations.
Institution: School of Information Studies, McGill University
E-mail: karyn.moffatt@mcgill.ca
ORCID: 0000-0002-7081-0709
Biography: Karyn Moffatt is an Associate Professor and Canada Research Chair in Inclusive Social Computing. She leads the Accessible Computing Technologies Research Group where her research seeks to harness the capabilities of technology to enable older adults and people with disabilities to better share, communicate, and connect with those around them.
Institution: Ingram School of Nursing, McGill University
E-mail: antonia.arnaert@mcgill.ca
ORCID: 0000-0001-8086-7058
Biography: Antonia Arnaert is an Associate Professor at the McGill Ingram School of Nursing. She is an educator, researcher, innovator, and entrepreneur with a specific interest in the use of digital health technologies for the development of healthcare delivery and medicinal products.
Institution: Ingram School of Nursing, McGill University
E-mail: ariana.pagnotta@mail.mcgill.ca
ORCID: 0000-0002-2320-3367
Biography: Ariana Pagnotta is a registered nurse clinician from the Ingram School of Nursing. She has worked as a research assistant with PhD Candidate, Aimee Castro, in developing an app to better deliver respite care. Ariana is currently working in the radiation oncology department at the Jewish General Hospital.
Institution: Research Assistant, McGill University
E-mail: ariane.gautrin@mcgill.ca
ORCID: 0009-0002-5418-5170
Biography: Ariane Gautrin is a registered nurse with a graduate degree in Nursing and an undergraduate degree in Philosophy from McGill University. She is a family caregiver and has worked in a palliative respite care center for children. Her research and practice focus on improving nursing care provided to children.
Institution: Associate Professor, McGill University & Nurse Scientist, Shriners Hospital for Children
E-mail: argerie.tsimicalis@mcgill.ca
ORCID: 0000-0002-5963-9728
Biography: Argerie Tsimicalis, RN, PhD, is an Associate Professor at the Ingram School of Nursing and Program Director, Global Oncology, in the Gerald Bronfman Department of Oncology, Faculty of Medicine and Health Sciences, McGill University, and a Nurse Scientist at the Shriners Hospitals for Children® – Canada. Dr. Tsimicalis is the recipient of a Junior 2 Research Scholar award from the Fonds de recherche du Québec-Santé.