Canadian Journal of Nursing Informatics

App misinformation   

Software Column

by Allen McLean, RN, MN, MSc, PhD(c)

Allen is currently a PhD Candidate in Health Sciences at the University of Saskatchewan (Saskatoon) in the Computational Epidemiology and Public Health Informatics Lab. His research interests include the development of computer modeling and simulation software for addressing health systems challenges, chronic diseases and health inequities at the population level, as well as mobile technologies applied in long-term care facilities. Allen previously attended the University of Victoria earning an MN and MSc (Health Information Science) in a unique dual degree program for Nursing Informatics professionals. Allen has over 20 years’ experience in healthcare as an ultrasound technologist, clinical educator, team leader and community health RN.

Citation: McLean, A. (2021). App misinformation. Software Column. Canadian Journal of Nursing Informatics, 16(3-4). https://cjni.net/journal/?p=9376

COLUMN

App misinformation

Reporting guidelines and checklists such as CONSORT, STROBE, and PRISMA are meant to enhance the quality and transparency of health research (Equator Network, 2022). These guidelines are often consulted at the end of the research process, just before journal submission; however, experienced nurse researchers know that it is useful to follow the appropriate guidelines throughout each step of a research project. The same could be said for mHealth app development.

The quality, efficacy, and transparency of mHealth apps vary widely, and this is an important problem when people rely on these apps for health information, their own self-care, or the care of others. Fortunately, we have many checklists for evaluating the quality of mHealth apps; some are well validated, and others are also available in multiple languages or designed for specific patient populations or clinical conditions. Unfortunately, there is no single ‘gold standard’ tool for evaluating mHealth apps, but several stand out and can be used together. And while these evaluation tools were designed to assess completed apps, I would argue that if nurses working in mHealth app development refer to them throughout the software development process, we will all benefit from better quality apps.

Mobile health apps can be evaluated on a number of criteria, though most reviews focus only on content and the user experience. I recommend a more inclusive approach based on theories and research from the field of health communication. One framework I use, the World Health Organization’s ‘strategic framework for effective communications’ (WHO, 2017), identifies six principles of effective communication: accessible, actionable, credible and trusted, relevant, timely, and understandable. While this framework was not designed specifically for mHealth app development or evaluation, its principles can be a useful guide.

One of the validated tools seen most often in the research literature is the Mobile App Rating Scale (MARS) (Stoyanov et al., 2015). This tool incorporates many of the principles from the WHO framework and rates overall app quality as the average of four core subscales: engagement, functionality, aesthetics, and information quality. The tool also offers two optional subscales to support the evaluation: subjective app quality and the perceived impact of the app on user knowledge and behaviours.
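To make that scoring concrete, here is a minimal Python sketch of a MARS-style calculation. It assumes the convention described by Stoyanov et al. (2015) that items are rated on a five-point scale, each subscale score is the mean of its items, and overall app quality is the mean of the four core subscale scores; the example ratings and the function name are illustrative only and are not part of the MARS instrument itself.

# A minimal sketch of MARS-style scoring, assuming five-point items,
# subscale scores as item means, and overall quality as the mean of the
# four core subscale scores. The ratings below are hypothetical.

from statistics import mean

def mars_scores(item_ratings):
    """Return each subscale mean plus the overall app quality mean."""
    scores = {name: mean(items) for name, items in item_ratings.items()}
    scores["overall_app_quality"] = mean(scores.values())
    return scores

# Hypothetical ratings for one app across the four core MARS subscales
ratings = {
    "engagement":          [4, 3, 4, 3, 4],
    "functionality":       [5, 4, 4, 5],
    "aesthetics":          [3, 4, 3],
    "information_quality": [4, 4, 3, 4, 3, 4, 4],
}

print(mars_scores(ratings))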

The Patient Education Materials Assessment Tool for Audiovisual Materials (PEMAT-AV) is an important tool because it is one of the few that assess whether materials are understandable and whether key messages are clear to a diverse population with varying levels of health literacy (Shoemaker et al., 2014).

The Persuasive Systems Design framework is a well-validated tool designed specifically for evaluating eHealth and mHealth applications in terms of how they encourage or discourage behaviours that affect people’s health (Oinas-Kukkonen & Harjumaa, 2009).

Although the name might suggest it is not a serious resource, Trust It or Trash It (Access to Credible Genetics Resource Network [ATCG], 2013) provides guidance on how to critically evaluate the quality of health information provided in health resources. This tool uses six questions to help determine the validity and reliability of a resource: Who wrote the information you are reading? Who provided the facts? Where did the facts come from? Who paid for it? When was it written or updated? How do you know this information pertains to you?
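If you wanted to build these six questions into an app review workflow, a small checklist like the following Python sketch is one way to do it. This is not part of the Trust It or Trash It tool itself; the question keys and the sample record are illustrative assumptions.

# A minimal sketch: record answers to the six Trust It or Trash It questions
# for a resource and flag the ones left unanswered. Keys and sample data are
# hypothetical, not part of the published tool.

TRUST_OR_TRASH_QUESTIONS = {
    "author":    "Who wrote the information you are reading?",
    "source":    "Who provided the facts?",
    "evidence":  "Where did the facts come from?",
    "funding":   "Who paid for it?",
    "currency":  "When was it written or updated?",
    "relevance": "How do you know this information pertains to you?",
}

def unanswered(record):
    """Return the checklist questions a resource record does not answer."""
    return [q for key, q in TRUST_OR_TRASH_QUESTIONS.items() if not record.get(key)]

# Hypothetical record for one app's information sources
app_record = {"author": "Registered dietitian", "currency": "2021", "funding": ""}
for question in unanswered(app_record):
    print("Unanswered:", question)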

There are other evaluation tools you may find useful, and it is worth taking the time to find good ones if you will be evaluating or developing mHealth apps. In my opinion, poor-quality mHealth apps are a dangerous form of misinformation, and I believe nurses working in informatics have the values, skills, and vision to address this problem.

References

Access to Credible Genetics Resource Network (ATCG). (2013). Trust It or Trash It. http://www.trustortrash.org/

Equator Network. (2022). https://www.equator-network.org/

Oinas-Kukkonen, H. & Harjumaa, M. (2009). Persuasive Systems Design: Key Issues, Process Model, and System Features. Communications of the Association for Information Systems, 24, Article 28. https://aisel.aisnet.org/cgi/viewcontent.cgi?article=3424&context=cais

Shoemaker, S., Wolf, M., & Brach, C. (2014). Development of the Patient Education Materials Assessment Tool (PEMAT): A new measure of understandability and actionability for print and audiovisual patient information. Patient Education and Counseling, 96(3), 395-403.

Stoyanov, S. R., Hides, L., Kavanagh, D. J., Zelenko, O., Tjondronegoro, D., & Mani, M. (2015). Mobile App Rating Scale: A new tool for assessing the quality of health mobile apps. JMIR mHealth and uHealth, 3(1), e27.

World Health Organization (WHO). (2017). Communicating for Health. https://www.who.int/about/communications
