Established guidelines recommend that patient educational materials be written at no higher than a sixth-grade reading level in order to be adequately comprehensible to the general public. Our study objective was to assess the readability of online patient resources related to sentinel lymph node biopsy (SLNB) performed as part of melanoma treatment.
Materials and methods
The top 50 results from a Google search (search terms: “sentinel lymph node biopsy melanoma”) were analyzed using seven established readability formulae to determine their adherence to current guidelines.
We found that the readability of available online patient resources is currently very poor: only 12% of the websites met the sixth-grade reading level criterion according to at least one measure, and none met it according to all seven assessment tools. Furthermore, half of the search results were peer-reviewed academic journal articles not intended for the general public.
Discussion and conclusions
Online patient resources related to SLNB performed as part of melanoma treatment have poor readability. Several simple measures may be taken to make these resources more accessible and comprehensible to a broader audience. These resources should undergo ongoing evaluation, with the ultimate goal of improved readability and patient education.
In recent years, the Internet has become a ubiquitous source of information for patients seeking to learn more about their symptoms, conditions, upcoming procedures, or any aspect of their own care or that of a friend or family member [1,2]. Powerful internet search engines such as Google are often queried for medical advice well before a patient actually sees a medical professional in consultation. In many ways, an online consultation with “Doctor Google” is understandable, given the appeal of the easily accessible, and seemingly limitless, healthcare-related information the Internet provides. However, while the Internet has an important role to play in patient education, its unfiltered and unvetted nature makes it imperative that all users carefully scrutinize the accuracy and reliability of the information it provides. Even a resource with a credible author may not be of adequate quality or accessibility for the general public. Readability is a key aspect of accessibility that is often ignored, and guidelines formulated by the American Medical Association (AMA) and the US Department of Health and Human Services (USDHHS) make it clear that patient reading material should be written at no higher than a sixth-grade reading level in order to be considered accessible and comprehensible to the majority of the general public. Readability may be readily evaluated using several validated formulae.
Melanoma, also referred to as malignant melanoma or cutaneous melanoma, is the least common but most lethal form of skin cancer. It is nevertheless among the most prevalent cancer types diagnosed in young and middle-aged adults, and it is therefore responsible for a substantial burden of disease. Early melanoma detection, staging, and treatment may significantly improve patient outcomes. Sentinel lymph node biopsy (SLNB) is a surgical lymph node mapping technique commonly utilized to identify, for sampling, the first lymph node(s) to which a particular malignancy would metastasize. Compared with routine lymphadenectomy, SLNB also reduces procedure-related morbidity. It helps the clinician to stage, prognosticate, and select appropriate treatment for melanoma patients.
The objective of this study was to assess the readability of online patient resources specifically related to SLNB carried out as part of melanoma treatment, and to evaluate these observations within the context of currently established readability guidelines. Recommendations for making reliable, evidence-based, high-quality, and easily understandable health-related information available to a broader audience are also reviewed. Several recently published articles have reported that online patient resources for a variety of surgical procedures and other health conditions, including several cancer types, tend to be written at too high a grade level to allow for adequate comprehensibility [7-14], and we hypothesized that this finding would also hold true for resources related to SLNB performed as part of melanoma treatment.
Materials & Methods
The Google search engine was used to run our search, as it has been reported to be the most prominent and most popular search engine utilized by patients. Prior to conducting the search, the browser data (including cookies and cache) was cleared, and we confirmed the browser was not logged into any user account. The following search terms were entered into a Google Chrome web browser, using the incognito browsing mode: “sentinel lymph node biopsy melanoma”. The search was performed on July 1st, 2017. The top 50 unique web pages yielded by the search that met the study inclusion criteria, and did not meet the study exclusion criteria, were investigated. The inclusion criteria required the website to: (1) be written completely in English, (2) be free-to-access, and (3) specifically contain information regarding SLNB performed for melanoma. The exclusion criteria required the website not to: (1) have obvious financial conflicts of interest (i.e., be specifically sponsored by an external organization or company, or advertise products for sale), (2) be a news article, or (3) solely be a video.
The readability of each website was evaluated using seven different validated readability formulae (Figure 1). The Flesch-Kincaid Grade Level (FKGL) and the Flesch Reading Ease (FRE) both primarily take into account average sentence length and average syllables per word [16,17]. The Gunning-Fog Score (GFS) is based on average sentence length and the number of polysyllabic words (words containing three or more syllables). The Coleman-Liau Index (CLI) is based on the average number of letters per 100 words (L) and the average sentence length (S). The Automated Readability Index (ARI) is similar to the CLI in that it also considers the number of characters per word. The Simple Measure of Gobbledygook (SMOG) index takes three 10-sentence samples near the beginning, middle, and end of a piece of text. The number of polysyllabic words is counted and used to calculate a specific grade level; if there are fewer than 30 sentences, the formula includes a correction factor. The New Dale-Chall Score (NDC) uses a list of 3000 words an American fourth-grader can reliably understand, and any word not on this list is considered a “difficult word”. The FRE is a 100-point scale with higher scores indicating more easily understood text. Precise scoring is outlined in Table 1. The FKGL, GFS, CLI, ARI, and SMOG indicate the US academic grade level, or number of years of education, required to comprehend the text. The NDC has a similar but distinct scoring system that is outlined in Table 2.
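For illustration, several of the grade-level formulae described above can be sketched directly from their published coefficients. The syllable counter below is a rough vowel-group heuristic, not the validated tokenization used by dedicated readability calculators, so scores from this sketch may differ slightly from those of the tool used in the study.

```python
import math
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels (incl. y)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid(words: int, sentences: int, syllables: int) -> float:
    # FKGL = 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59
    return 0.39 * words / sentences + 11.8 * syllables / words - 15.59

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    # FRE = 206.835 - 1.015 * (words/sentence) - 84.6 * (syllables/word)
    return 206.835 - 1.015 * words / sentences - 84.6 * syllables / words

def gunning_fog(words: int, sentences: int, polysyllables: int) -> float:
    # GFS = 0.4 * (average sentence length + percentage of polysyllabic words)
    return 0.4 * (words / sentences + 100.0 * polysyllables / words)

def smog(sentences: int, polysyllables: int) -> float:
    # SMOG grade = 1.0430 * sqrt(polysyllables * 30/sentences) + 3.1291
    return 1.0430 * math.sqrt(polysyllables * 30.0 / sentences) + 3.1291
```

For example, a 100-word, 5-sentence passage with 150 syllables yields FKGL = 0.39(20) + 11.8(1.5) - 15.59 = 9.91, i.e., roughly a tenth-grade reading level.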
In order to minimize the risk of bias and human error during calculations, and for ease of use, all seven readability tests were administered using a specific online readability calculator recommended by the National Institutes of Health (NIH) (https://readability-score.com/). Prior to analyzing the web pages, the “ideal” readability criteria for online resources were established. The USDHHS recommends health-related materials be written at a sixth-grade level. Thus, for this study, the level of acceptable readability was determined to be greater than or equal to 80.0 for the FRE, less than or equal to 5.9 for the NDC, and less than or equal to 6.9 for the FKGL, GFS, CLI, ARI and SMOG.
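The acceptability thresholds above can be expressed as a simple compliance check. This is a minimal sketch using the cutoffs stated in this section (FRE is the only scale where higher is better); the function and dictionary names are our own, not part of any calculator's API.

```python
# Acceptability thresholds from the Methods: FRE >= 80.0, NDC <= 5.9,
# and <= 6.9 for FKGL, GFS, CLI, ARI, and SMOG.
THRESHOLDS = {
    "FRE": (80.0, "min"),
    "NDC": (5.9, "max"),
    "FKGL": (6.9, "max"),
    "GFS": (6.9, "max"),
    "CLI": (6.9, "max"),
    "ARI": (6.9, "max"),
    "SMOG": (6.9, "max"),
}

def meets_guideline(scores: dict) -> dict:
    """Return, per formula, whether a website's score meets the guideline."""
    results = {}
    for name, value in scores.items():
        limit, kind = THRESHOLDS[name]
        results[name] = value >= limit if kind == "min" else value <= limit
    return results
```

A website would be counted as meeting the criteria "by at least one measure" if any value in the returned dictionary is true, and "by all seven" only if every value is true.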
Data entry and analyses were performed using a Microsoft® Excel spreadsheet (Microsoft®, Redmond, Washington). Standard independent upper-tailed hypothesis tests were conducted for each readability index, comparing the mean score of websites that were identified in our search with the grade level, or other measure of readability, recommended by the AMA and USDHHS. Differences were considered statistically significant when they yielded a p-value equal to or less than 0.05.
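The upper-tailed comparison described above can be sketched as a one-sample test of the observed mean score against the recommended level. This sketch approximates the p-value with the standard normal CDF, which is reasonable at n = 50 but not identical to an exact t-distribution calculation; the sample scores in the usage note are hypothetical.

```python
import math
from statistics import NormalDist, mean, stdev

def upper_tailed_one_sample_test(scores, mu0):
    """Test H0: mean <= mu0 against H1: mean > mu0.
    Returns the t statistic and a normal-approximation p-value
    (close to the exact t-based p-value for moderately large n)."""
    n = len(scores)
    t = (mean(scores) - mu0) / (stdev(scores) / math.sqrt(n))
    p = 1.0 - NormalDist().cdf(t)
    return t, p
```

For instance, hypothetical FKGL scores of [10.0, 11.0, 12.0, 9.0, 13.0] tested against a recommended level of 6.9 give a large positive t statistic and a p-value well below 0.05, so the mean readability would be judged significantly worse than the guideline.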
The first 50 websites yielded by the search that met all of the inclusion criteria, and none of the exclusion criteria, were analyzed for their readability (Table 3). A webpage was categorized as “specialty” if the root website focused on any of the following topics: melanoma, lymph node biopsy, skin cancer, dermatology, or general surgery (10 out of 50); otherwise, it was categorized as “general” (40 out of 50). Moreover, half of the 50 websites either were themselves peer-reviewed journal articles, contained links to such articles, or presented guidelines derived from them.
All seven assessment tools reported statistically significant results, in that the calculated p-value was less than the standard alpha value of 0.05. In other words, all seven average readability scores (averaged over the top 50 websites), as calculated by the assessment tools, were significantly different from, and worse than, their respective “acceptable” readability scores outlined in the Methods section. This observation remained unchanged whether or not journal articles, a subgroup that could potentially bias observations towards worse readability scores, were included in the analysis. Distributions of readability scores of the health information websites are shown in box-and-whisker plots (Figure 2a, 2b). A comparison of mean readability scores between the websites classified as “general” and those classified as “specialty” is shown in Table 4. Specialty websites tended to have better readability than general websites, but these differences were not statistically significant. A comparison of mean readability scores between peer-reviewed journal articles and all other websites is shown in Table 5. Journal articles were significantly less readable than non-journal articles. A comparison of mean readability scores between the top 10 websites identified by the search and the remaining 40 websites is shown in Table 6. There was no appreciable difference in readability identified by this comparison. Of the 50 websites evaluated, only six (12.0%) were within the limits of the recommended sixth-grade reading level as evaluated by at least one assessment tool, while no website was at or under the sixth-grade reading level according to all seven assessment tools (Table 7). This observation did not change when journal articles were either included or excluded from the analysis.
Despite readability scores varying between the different formulae utilized, a consistent observation emerged from the analysis: none of the 50 websites were consistently written at a reading level below the sixth-grade level recommended by the AMA and USDHHS guidelines, and this observation overwhelmingly highlights a need for major change. The rationale behind these readability guidelines is based upon evidence that the average reading level among American adults is between grades eight and nine, and that approximately one in five of these people read below a grade five level and are considered functionally illiterate. According to a report by Statistics Canada, the Canada-United States literacy gap is sizable, with the average Canadian adult more literate than the average American adult by almost a full year of schooling; even so, 48% of Canadian adults 16 years of age or older did not possess the literacy skills required by the current workforce, with many of them being new immigrants. This is especially problematic as literacy is known to be highly positively correlated with socioeconomic status (SES). People with lower literacy rates, and thereby more commonly of lower SES, are also more likely to suffer from a plethora of chronic health conditions, including type 2 diabetes, hypertension, ischemic heart disease, mental illness, and cancer. Populations with low literacy rates also have many risk factors for these conditions, including smoking, alcohol and illegal drug use, adverse childhood experiences, and others [26,27]. Interestingly, one study found that the relative risk of developing melanoma was actually decreased in people with lower annual household incomes, but despite this observation, individuals with lower SES were more likely to have later-stage disease at diagnosis.
This observation could potentially be due to substantial barriers that these individuals face when accessing health care (e.g., waitlists in Canada and the lack of universal health care in the United States). This was an important observation because the prognosis of melanoma is highly dependent on its stage at the time of diagnosis.
Table 4 suggests that there were no significant differences in the readability scores of general medical websites as compared to specialty-specific websites by any of the measures utilized. However, specialty websites tended to be more readable, perhaps because their authors are experts on the subject and are better able to present the relevant information in terms readily comprehended by the general public. Table 5 identifies a significant discrepancy (p-values well below 0.01 across all measures) between the readability of journal articles and non-journal articles. This observation is not surprising because journal articles are written for an academic audience that would be expected to have higher literacy than the general population. Regardless, these journal articles appear in the top online search results accessed by the general population. Interestingly, articles published in peer-reviewed journals, or links to such articles, represented half of the top 50 search results, most likely because the search phrase was very specific. This suggests that the breadth of online educational resources is actually even worse for people with limited literacy than was previously believed. Table 6 shows that there was no significant difference or trend in the readability scores of the top 10 search results when compared to the next 40. This is important because the top 10 websites are much more likely to be accessed by patients, in part because they comprise the first page of search results. The top 10 websites were evenly split between five general websites and five specialty websites. The specialty websites were all authored by organizations or research institutes focused on the study and/or treatment of melanoma (Table 3).
It is important for individuals with either a suspected or confirmed melanoma diagnosis to learn about their cancer and to understand their recommended management plan, which often includes SLNB. It is expected that members of the patient's healthcare management team, especially the surgeon, will review with them the risks and benefits of SLNB. However, online resources still represent an important and readily available source of supplementary educational information, so it is crucial that the treating physician is cognizant of website readability when recommending or directing patients toward particular resources. This is the first study to evaluate the readability of online resources for SLNB, and one of few to assess the proportion of journal articles amongst the search results.
The current study has several limitations that must be reviewed. The scope and coverage of available online patient resources may not have been ideal, as only a single search phrase and a single search engine were utilized, and only 50 websites were analyzed in detail. However, we found that variations on the search phrase did not appreciably change the search results, and Google is currently the most favored search engine for finding health-related information, often cited as being the most reliable and least redundant. Furthermore, Google metrics and analytics for the United States suggest that most users evaluate a median of two to four pages of results (20-40 search results) for a given search, thus providing a good rationale for our 50-website focus. The study is also limited in the applicability of its observations to more ethnically and geographically diverse patient populations, as only English-language websites were included and analyzed. People living in other countries, who may speak other languages, may utilize other search engines that yield different search results, and may have unique measures or standards of reading literacy. The cross-sectional nature of our study must also be considered, as the Internet is dynamic and constantly evolving. The same search performed even months earlier or later would likely have yielded somewhat different results; in fact, a repeat search was performed on November 6th, 2017, and the list of top 50 websites it yielded was 86% similar to the original list. Despite these limitations, we intended this study to inform others about the current state of online patient resource readability, and to provide suggestions that could help facilitate much-needed change.
The observation that the websites’ readability scores were inadequate across multiple measures suggests that complex sentence structure, semantics, and vocabulary are all key issues that must be addressed in the future. When editing or developing online resources, medical jargon must also be avoided whenever possible. This is especially crucial in the first few lines of text, or in any sort of overview paragraph, because if a user is unable to initially comprehend a resource they may cease reading altogether. If using medical terminology is unavoidable, it would be helpful and intuitive to add either a hyperlink to an understandable definition, or a mouse-over pop-up box that contains the definition. Alternatively, dictionary, thesaurus, or translation support could be provided, though this would require more time and effort by the user. Keywords and phrases should also be highlighted and emphasized. Educational objectives should be targeted at the sixth-grade reading level, and should be carefully reviewed so that authors have a better understanding of the most appropriate language structures to utilize. If re-writing a resource is necessary, authors should focus on utilizing plainspoken words, simple sentence structures, and the active voice whenever possible. Other areas for future study include investigating whether the timing and character of patient internet searches (e.g., before versus after meeting with the treating surgeon) are impacted by preoperative counselling. Projects such as the Health on the Net Foundation show promise in addressing other important dimensions of health information accessibility, such as accuracy and quality, which were not the focus of the current study.
The findings of the current study are consistent with several other reports that have evaluated the readability of online patient educational resources for many different medical procedures and conditions. In 2014, Vargas et al. evaluated 102 articles for patient-directed content obtained from the top 12 websites found with the Google search terms “hernia repair surgery”, as well as the top 10 articles from each of the top 12 consumer magazines (comparison group), and found that all 102 articles identified were above the recommended sixth-grade reading level. Jayaweera and De Zoysa’s 2016 report assessed the top 50 hits from several different internet search engines (for the search terms “laparoscopic cholecystectomy”). After removing duplicates, they analyzed the resources identified for usability, accessibility, reliability, and readability (using the FRE and GFS, among other tools). They found that the websites varied greatly in all of the domains examined, but were especially poor in reliability. In terms of readability, the average FRE and GFS scores identified major shortcomings, though some websites did meet the AMA/USDHHS guidelines' standards. Kher et al.’s 2017 study identified the top 100 Google hits using the search terms “congestive heart failure”, and analyzed 70 webpages that met study selection criteria for readability using six of the seven tools that we used in the current study. They found that only five of the 70 websites had readability at or below the sixth-grade reading level. All three of these recent studies highlighted the importance of improving the readability of internet-based, patient-directed healthcare information resources, and are part of a growing body of literature.
Overall, we recommend that current and future online patient educational resources be regularly monitored, evaluated, updated, and edited by their authors, not only for the accuracy of their factual content (with no single website representing the sole authority), but also for their readability, with the ultimate goal of improving patient accessibility and education through awareness of, and adherence to, current guidelines.
- Schwartz KL, Roe T, Northrup J, Meza J, Seifeldin R, Neale AV: Family medicine patients' use of the internet for health information: a MetroNet study. J Am Board Fam Med. 2006, 19:39-45. 10.3122/jabfm.19.1.39
- Baker L, Wagner TH, Singer S, Bundorf MK: Use of the internet and e-mail for health care information: results from a national survey. JAMA. 2003, 289:2400-2406. 10.1001/jama.289.18.2400
- Wald HS, Dube CE, Anthony DC: Untangling the web - the impact of Internet use on health care and the physician-patient relationship. Patient Educ Couns. 2007, 68:218-224. 10.1016/j.pec.2007.05.016
- Edmunds MR, Barry RJ, Denniston AK: Readability assessment of online ophthalmic patient information. JAMA Ophthalmol. 2013, 131:1610-1616. 10.1001/jamaophthalmol.2013.5521
- Faries MB, Thompson JF, Cochran AJ, et al.: Completion dissection or observation for sentinel-node metastasis in melanoma. N Engl J Med. 2017, 376:2211-2222. 10.1056/NEJMoa1613210
- Coit D: The enigma of regional lymph nodes in melanoma. N Engl J Med. 2017, 376:2280-2281. 10.1056/nejme1704290
- Vargas CR, Chuang DJ, Lee BT: Online patient resources for breast reconstruction - analysis of readability. J Surg Res. 2014, 186:673. 10.1016/j.jss.2013.11.758
- Vargas CR, Chuang DJ, Lee BT: Online patient resources for hernia repair: analysis of readability. J Surg Res. 2014, 190:144-150. 10.1016/j.jss.2014.03.045
- Jayaweera JM, De Zoysa MI: Quality of information available over internet on laparoscopic cholecystectomy. J Minim Access Surg. 2016, 12:321-324. 10.4103/0972-9941.186691
- Hansberry DR, Agarwal N, Shah R, et al.: Analysis of the readability of patient education materials from surgical subspecialties. Laryngoscope. 2014, 124:405-412. 10.1002/lary.24261
- Kher A, Johnson S, Griffith R: Readability assessment of online patient education material on congestive heart failure. Adv Prev Med. 2017, 2017:1-8. 10.1155/2017/9780317
- Patel CR, Sanghvi S, Cherla DV, Baredes S, Eloy JA: Readability assessment of internet-based patient education materials related to parathyroid surgery. Ann Otol Rhinol Laryngol. 2015, 124:523-527. 10.1177/0003489414567938
- Ellimoottil C, Polcari A, Kadlec A, Gupta G: Readability of websites containing information about prostate cancer treatment options. J Urol. 2012, 188:2171-2176. 10.1016/j.juro.2012.07.105
- Edmunds MR, Denniston AK, Boelaert K, Franklyn J, Durrani O: Patient information in Graves' disease and thyroid-associated ophthalmopathy: readability assessment of online resources. Thyroid. 2014, 24:67-72. 10.1089/thy.2013.0252
- Brophy J, Bawden D: Is Google enough? Comparison of an internet search engine with academic library resources. Aslib Proc. 2005, 57:498-512. 10.1108/00012530510634235
- Flesch R: A new readability yardstick. J Appl Psychol. 1948, 32:221-233. 10.1037/h0057532
- Kincaid JP, Fishburne RP Jr, Rogers RL, Chissom BS: Derivation of new readability formulas (automated readability index, fog count and Flesch reading ease formula) for navy enlisted personnel. Institute for Simulation and Training. 1975, 1-40. 10.21236/ada006655
- Gunning R: The Technique of Clear Writing. McGraw-Hill, New York, NY; 1952.
- Coleman M, Liau TL: A computer readability formula designed for machine scoring. J Appl Psychol. 1975, 60:283-284. 10.1037/h0076540
- Smith EA, Kincaid JP: Derivation and validation of the automated readability index for use with technical materials. Hum Factors. 1970, 12:457-564. 10.1177/001872087001200505
- McLaughlin GH: SMOG grading - a new readability formula. J Read. 1969, 12:638-646.
- Chall JS: Readability Revisited: The New Dale-Chall Readability Formula. Brookline Books, Cambridge, MA; 1995.
- Doak C, Doak L, Root J: Teaching Patients with Low Literacy Skills, 2nd Edition. Belcher M (ed): JB Lippincott Company, Philadelphia; 1996.
- Literacy and numeracy in Canada is declining and only action will reverse this trend. (2013). Accessed: June 24, 2018: https://abclifeliteracy.ca/workplace-literacy-facts.
- Rikard RV, Thompson MS, McKinney J, Beauchamp A: Examining health literacy disparities in the United States: a third look at the National Assessment of Adult Literacy (NAAL). BMC Public Health. 2016, 16:975. 10.1186/s12889-016-3621-9
- Winkleby MA, Jatulis DE, Frank E, Fortmann SP: Socioeconomic status and health: how education, income, and occupation contribute to risk factors for cardiovascular disease. Am J Public Health. 1992, 82:816-820. 10.2105/ajph.82.6.816
- Clegg LX, Reichman ME, Miller BA, et al.: Impact of socioeconomic status on cancer incidence and stage at diagnosis: selected findings from the surveillance, epidemiology, and end results: National Longitudinal Mortality Study. Cancer Cause Control. 2009, 20:417-435. 10.1007/s10552-008-9256-0
- Wang L, Wang J, Wang M, Li Y, Liang Y, Xu D: Using internet search engines to obtain medical information: a comparative study. J Med Internet Res. 2012, 14:e74. 10.2196/jmir.1943
- Silverman J, Kurtz SM, Draper J: Skills for Communicating with Patients, 3rd Edition. Radcliffe Publishing, London; 2013.
- Yu C-H, Miller RC: Enhancing web page readability for non-native readers. Proc CHI 10. 2010, 2523-2532. 10.1145/1753326.1753709
Poor Readability of Online Patient Resources Regarding Sentinel Lymph Node Biopsy for Melanoma
Ethics Statement and Conflict of Interest Disclosures
Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Cite this article as:
Yen P P, Wiseman S M (January 13, 2019) Poor Readability of Online Patient Resources Regarding Sentinel Lymph Node Biopsy for Melanoma. Cureus 11(1): e3877. doi:10.7759/cureus.3877
Received by Cureus: October 01, 2018
Peer review began: October 03, 2018
Peer review concluded: January 07, 2019
Published: January 13, 2019
© Copyright 2019
Yen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License CC-BY 3.0., which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.