"Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has."

Margaret Mead
Original article

A CONSORT Clinical Trial Reporting Compliance Audit of the Oncology Randomized Controlled Trial Literature


Introduction: The Consolidated Standards of Reporting Trials (CONSORT) checklist was formulated to improve the reporting of randomized controlled trials (RCTs). This investigation aims to determine predictors of CONSORT checklist compliance in the oncology literature.

Methods: Eight hundred and fifty articles assessing interventions in adult breast, prostate, colorectal, and lung cancer published between 1992 and 2010 were identified by a systematic search.

Exclusion criteria included investigations reporting interim, secondary, or long-term update analyses, as well as pilot, phase 2, and non-parallel design studies. After full article review, 408 RCTs were eligible for inclusion. RCT descriptive variables, including number of authors and study patients, 2009 journal impact factor, journal classification, type of cancer and therapeutic intervention, publication year, primary study country, and cooperative group involvement, were captured for all trials. Two qualified auditors assessed all manuscripts in order to generate average and between-observer difference CONSORT checklist scores.

Results: Mean average CONSORT score was 16.6 (SD 3, maximum 25) and median difference score was two (interquartile range one to three). Kappa agreement for each checklist item ranged from 0.02 to 0.92, with an overall two-way intra-class correlation coefficient of 0.71 (95% CI: 0.61-0.78) for the comparison of overall CONSORT scores between raters. Recent year of publication, increasing author number, and higher impact factor were associated with higher average CONSORT scores (p<0.0001).

Conclusions: Improvements in RCT reporting have been observed over time. Further work in the assessment of the inter-observer reliability of individual CONSORT items is warranted.


In 2011, the Canadian Cancer Society stated that lung, prostate, breast, and colorectal cancer account for 50% of all cancer deaths and 50% of all new cancer cases [1]. Of new incident cancer cases, breast cancer accounts for 28% of cases in women and prostate cancer accounts for 27% of cases in men, with colorectal cancer accounting for about 12% of cases in both sexes.

Randomized controlled trials (RCTs) are an indispensable tool for investigating various cancer management methods. RCTs are frequently used to examine a broad range of therapies, including new or existing anti-cancer therapies, new approaches to cancer prevention and screening, and complementary and alternative cancer therapies. The ability to evaluate the methodological quality of RCTs is central to the critical appraisal of individual trials and the conduct of unbiased systematic reviews. Additionally, misinterpretation of trial results can lead to incorrect changes to clinical practice, potentially negatively affecting patients' medical care [2]. Furthermore, studies have shown that RCTs with poor reporting quality are associated with biased findings [3]. Awareness concerning the quality of RCT reporting is growing, as inadequately conducted trials are viewed as a waste of time, effort, and increasingly scarce health-care resources. Conversely, well-conducted trials with suboptimal reporting can also represent a squandering of these same resources [4]. Beyond these significant financial considerations, well-documented incidents of research fraud play an equal and certainly more alarming role in motivating the examination and improvement of reporting quality [5]. The highest possible standards should be sought in the reporting of medical research, particularly with regard to RCTs.

Various guidelines have been created to alleviate problems arising from inadequate RCT reporting. These guidelines are currently encapsulated by CONSORT (CONsolidated Standards Of Reporting Trials), developed by the CONSORT Group (http://www.consort-statement.org) [6-9]. This document was first published in 1996, with subsequent revisions in 2001 and 2010. The CONSORT statement "offers a standard way for authors to prepare reports of trial findings, facilitating their complete and transparent reporting, and aiding their critical appraisal and interpretation" [7].

Although there have been numerous recommendations over the past 30 years to enhance RCT reporting, and more intensely over the past 15 years to adhere to the CONSORT statement, reviews in several fields of medicine have repeatedly shown that reporting quality remains problematic, with poor adherence [10-11]. To our knowledge, a systematic assessment of the quality of RCT reporting and adherence to the CONSORT statement in cancer research has not been previously conducted. A contemporary audit of RCT reporting quality is warranted given the important health policy implications and disease burden related to cancer and its associated therapies.

Materials & Methods

Study design

A cross-sectional CONSORT compliance audit of published parallel two-arm RCTs assessing oncological interventions in adult breast, prostate, colorectal, and lung cancer between 1992 and 2010 was performed. This study was conducted as a collaboration between the Departments of Oncology and Epidemiology & Biostatistics, University of Western Ontario, London, Ontario, Canada. Sample selection was conducted between May and June 2010, data collection between June and November 2010, and data analysis between November 2010 and January 2011.

Study population

Randomized controlled trials on the topic of the four most common solid malignancies were identified for potential inclusion in the final CONSORT database for quality analysis. The following inclusion and exclusion criteria were utilized in order to obtain a representative sample of oncology RCTs over the past 20 years while maintaining a feasible number of papers to review within the context of the research team.

Inclusion criteria:

1. Published phase III RCTs between 1992-2010
2. English language
3. Involving adults: A specific age range is not specified, as many of the RCTs included in the study did not report such ranges.
4. Cancer type: Breast, Prostate, Colon, and Lung
5. Parallel group design
6. Studies published in journals that published more than four RCTs in the last 20 years

Exclusion criteria:

1. Non-English language reports
2. Investigations reporting interim analysis that did not result in stopping the trial
3. Secondary and long-term update analyses
4. Pilot/phase 2 studies
5. Trials that did not employ a parallel design, such as crossover, factorial, cluster, split-body, and multiple-arm trials
6. Duplicate reports
7. Cost effectiveness and economic studies
8. Trials studying benign tumours or pre-cancerous lesions
9. Trials studying cancers other than the four mentioned in the inclusion criteria, or a combination of two or more of these four cancers.

Selection methods

A professional librarian at the London Regional Cancer Program conducted a search of PubMed database for RCTs in compliance with the study inclusion criteria.

Search strategy: randomized controlled trials as topic [mh] AND (quality control [mh] OR guideline adherence [mh] OR guidelines as topic [mh] OR publishing/standards [mh] OR publication/standards [mh]). Eight hundred and fifty parallel two-arm RCTs assessing oncological interventions in adult breast, prostate, colorectal, and lung cancer between 1992 and 2010 were identified.

One reviewer (IA) screened the titles and abstracts of the 850 retrieved reports to exclude any obviously non-eligible trials (Figure 1). Of these, 515 RCTs were deemed eligible for inclusion in a full article review, and a copy of the full article was obtained for each of the 515 included reports. Two trained physician reviewers (IA and GR) conducted a full article review of the 515 reports. This review had three goals: first, to exclude any reports of non-eligible trials; second, to independently score the included RCTs' reporting quality using the CONSORT checklist; and third, to collect data on predictive clinical trial descriptive variables for further analyses. Of the 515 reports, 408 RCTs met all inclusion and exclusion criteria for insertion into the final study database.

RCT scoring

Two reviewers (IA and GR) used a standardized form to generate average (primary assessment of overall study quality) and difference (overall estimate of item reliability) CONSORT scores. Our extraction form consisted of the 25 core items (excluding follow-up sub-items) from the 2010 CONSORT checklist that would be common to all trials irrespective of design or intervention; see the Appendix (Table 3) for further details. Each item was given equal weighting; therefore, each RCT was given a score out of 25, reflecting how many of the 25 extraction form items were in compliance with the CONSORT checklist.
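As a minimal sketch of this scoring scheme (the item judgements below are invented for illustration, not taken from the study data), the two per-trial outcome measures can be computed as:

```python
def consort_scores(reviewer_a, reviewer_b):
    """Return (average score, absolute difference) for one RCT.

    reviewer_a, reviewer_b: lists of 25 binary item judgements
    (1 = compliant with the CONSORT checklist item, 0 = not).
    """
    if len(reviewer_a) != 25 or len(reviewer_b) != 25:
        raise ValueError("Expected 25 core CONSORT items per reviewer")
    score_a = sum(reviewer_a)
    score_b = sum(reviewer_b)
    average = (score_a + score_b) / 2     # primary outcome: overall quality
    difference = abs(score_a - score_b)   # secondary outcome: item reliability
    return average, difference

# Invented example: reviewer A scores 17/25, reviewer B scores 15/25.
a = [1] * 17 + [0] * 8
b = [1] * 15 + [0] * 10
avg, diff = consort_scores(a, b)  # avg = 16.0, diff = 2
```

Equal weighting means each compliant item contributes exactly one point, so the average score is directly comparable across trials and reviewers.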

Statistical considerations

The sample size was calculated as a function of the number of coefficients that can be safely included in the study's regression model. A commonly used rule in epidemiology is that 10 observations (RCTs in our study) per coefficient are sufficient to provide adequate precision. Because we intended to analyze the association between two outcome measures and 11 predictors (13 coefficients), any ratio greater than 10:1 (N>130 RCTs) should generate precise results.
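The arithmetic behind this rule of thumb can be sketched in a few lines:

```python
# Rule-of-thumb sample size check described above: at least 10 observations
# (RCTs) per regression coefficient.
coefficients = 13          # 11 predictors expand to 13 model coefficients
obs_per_coefficient = 10   # common epidemiological rule of thumb
minimum_rcts = coefficients * obs_per_coefficient  # 130
```

The final sample of 408 RCTs comfortably exceeds this minimum.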

We calculated kappa statistics for each individual CONSORT item and for the total scores on the entire sample (408 RCTs) in order to assess the chance-adjusted reliability of all CONSORT checklist items. We calculated descriptive summary statistics for all variables. For the main analysis, we constructed two main effects models. The first model examined the association between the predictors (Intervention, Year of Publication, Trial Site, Cooperative Group, Oncology Journal Type, Number of Authors, Number of Patients, and IF) and the CONSORT Average Score. A second model examined the association between the same predictors and the CONSORT Difference of Scores. A p-value of <0.05 was considered statistically significant. All statistical analyses were conducted using SAS software 9.2 (SAS Institute Inc., Cary, North Carolina, USA).
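For readers unfamiliar with the chance-adjusted reliability statistic used here, a minimal sketch of Cohen's kappa for a single binary checklist item follows (the rating vectors are invented; the study itself computed kappa in SAS):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater1)
    # observed proportion of trials on which the two raters agree
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # agreement expected by chance from each rater's marginal frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum((c1[k] / n) * (c2[k] / n) for k in set(rater1) | set(rater2))
    if expected == 1.0:  # degenerate case: both raters always use one category
        return 1.0
    return (observed - expected) / (1 - expected)

# Invented example: two raters judging one CONSORT item across 8 trials.
r1 = [1, 1, 1, 0, 0, 0, 1, 0]
r2 = [1, 1, 0, 0, 0, 1, 1, 0]
kappa = cohens_kappa(r1, r2)  # 0.5: raw agreement is 75%, chance is 50%
```

This correction for chance is why an item can show high raw percent agreement yet a low kappa, a paradox discussed later in this article [13].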


Results

Results from the descriptive analyses carried out to assess demographic characteristics of the 408 RCTs were as follows. Frequency by year of publication shows that the numbers of RCTs published in the three time periods related to changes in CONSORT reporting (1992-1996, 1997-2001, and 2002-2010) were 51, 84, and 273 RCTs, respectively. Frequency by trial site shows that most of the trials (377 RCTs) were conducted in more than one site vs. 31 RCTs in one site. Frequency by number of countries shows that 156 RCTs were conducted in multiple countries vs. 252 RCTs in one country. Frequency by type of intervention shows that 13 RCTs investigated radiation therapy, 349 RCTs investigated chemotherapy, 1 RCT investigated surgical therapy, and 45 RCTs investigated a combination of the previous three therapies.

Frequency by type of cancer shows that 273 RCTs investigated lung cancer, 135 RCTs investigated breast cancer, 41 RCTs investigated prostate cancer, and 59 RCTs investigated colorectal cancer. Frequency by journal shows that the Journal of Clinical Oncology published a significant proportion of our sample (178 RCTs), and Annals of Oncology published 52 RCTs. Each of the other journals published <21 RCTs.

Frequency by primary country shows that 107 RCTs originated in the United States; each of the other countries contributed <35 RCTs. Frequency by oncology vs. non-oncology journal shows that 374 RCTs were published in oncology journals vs. 34 in non-oncology journals.

Four hundred and eight articles were included in the descriptive analysis. Our primary outcome, the mean average CONSORT score, was 16.6 (SD 3, maximum 25). Our secondary outcome, the median CONSORT Difference of Scores, was two (interquartile range 1-3). Figure 2 presents the distribution of the two outcome measures. Figure 3 presents a scatter plot of the CONSORT scores generated by the two reviewers; the intraclass correlation coefficient (ICC) between raters was 0.71 (95% CI 0.61-0.78).

Reliability was also assessed on the entire sample in the final analysis. Kappa agreement and percent agreement for each individual CONSORT checklist item ranged from 0.02 to 0.92 and from 30.9% to 97.8%, respectively (Table 1). The results of the main effect model analysis of the 408 articles were as follows:

Table 1: Kappa statistics and percent agreement for each individual CONSORT checklist item

Checklist Item Kappa Statistic Percent Agreement
Randomization (1a) 0.93 96.32
Design Summary (1b) 0.88 96.57
Background (2a) 0.14 * 87.50
Objectives (2b) 0.16 * 87.85
Design (3a) 0.66 * 90.67
Participants Eligibility (4a) 0.30 * 97.79
Settings and Locations (4b) 0.55 * 87.25
Interventions (5) 0.37 * 94.12
Primary and Secondary (6a) 0.55 * 80.88
Sample Size (7a) 0.56 * 85.30
Sequence Generation (8a) 0.49 * 82.60
Sequence Generation (8b) 0.59 * 84.32
Allocation Mechanism (9) 0.39 * 71.32 ^
Implementation (10) 0.10 ** 76.71 ^
Blinding (11a) 0.52 * 82.00
Statistical Methods (12a) 0.56 * 96.80
Flow for Patients (13a) 0.24 ** 71.32 ^
Flow for Loss to Follow Up (13b) 0.27 ** 61.03 ^
Recruitment Dates (14a) 0.88 95.10
Baseline Data (15) 0.28 * 96.57
Number of Patients Analyzed (16) 0.05 * 58.82 ^
Primary and Secondary (17a) 0.05 * 51.96 ^
Harms (19) 0.40 * 88.73
Limitations (20) 0.39 * 70.59 ^
Generalizability (21) 0.04 ** 30.89 ^
Interpretation (22) 0.03 ** 92.65
Registration (23) 0.81 97.30
Protocol (24) 0.18 ** 94.37
Funding (25) 0.78 * 90.20

CONSORT average Score

Year of Publication: The results demonstrate a dose-response relationship in which a later publication year reflects an increase in reporting quality. There was an average increase of 3.1 CONSORT score points comparing an RCT published between 2002-2010 to an RCT published between 1992-1996 (p<0.0001), and an increase of 1.8 CONSORT score points comparing an RCT published between 2002-2010 to an RCT published between 1997-2001 (p<0.0001).

Author Number: Higher author number was associated with a higher CONSORT Average score (p<0.0001). There was an increase of one point in the CONSORT Score in published RCTs for an increase of about seven in the number of authors.
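As a back-of-envelope check, the "about seven" figure follows directly from the No. of Authors estimate of 0.15 reported in Table 2:

```python
# The model estimates 0.15 CONSORT points per additional author (Table 2),
# so one full CONSORT point corresponds to roughly 1 / 0.15 ≈ 6.7 authors.
coefficient_per_author = 0.15
authors_per_point = 1 / coefficient_per_author  # ≈ 6.7, i.e. about seven
```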

Impact Factor (IF): Higher IF was associated with a higher CONSORT average score (p<0.0001). RCTs published in journals with a high IF had a higher CONSORT average score (1.46 points higher) compared to RCTs published in journals with a low IF.

CONSORT difference Score

Recent Year of Publication: This was the only predictor associated with a decrease in the CONSORT difference of scores (p=0.0085). Table 2 presents the results of the main effect model of the association between predictors on one hand and CONSORT average score (reporting quality) and CONSORT score difference (predictors of reliability) on the other hand.

Table 2: Main effect models of the association between predictors and the CONSORT average score and CONSORT difference of scores

Variable | CONSORT Average: Estimate, p-value | CONSORT Difference: Estimate, p-value
Intervention * Radiation vs. Multiple ^ | 1.54, 0.0481 | -0.48, 0.3694
Chemo vs. Multiple ^ | 0.63, 0.1139 | -0.28, 0.302
Surgical vs. Multiple ^ | 1.21, 0.6306 | 1.91, 0.2697
Year of Publication ** (1992-1996) vs. (2002-2010) ^ | -3.1, <0.0001 | 0.7, 0.0085
(1997-2001) vs. (2002-2010) ^ | -1.82, <0.0001 | 0.07, 0.7491
Trial Site (Single vs. Multiple ^) | -0.71, 0.151 | -0.07, 0.8405
Cooperative Group (Non-Cooperative vs. Cooperative ^) | -0.31, 0.2043 | -0.08, 0.6186
Journal Type (Non-Oncology vs. Oncology ^) | 0.46, 0.3406 | 0.02, 0.9636
No. of Authors | 0.15, <0.0001 | -0.03, 0.1899
No. of Patients | 0.0001, 0.6602 | -0.0002, 0.167
Impact Factor (High vs. Low ^) | 1.46, <0.0001 | -0.25, 0.1661


Discussion

Our primary aim was to determine predictors of CONSORT checklist compliance in the oncology literature over the past two decades and the magnitude of their effects. This audit and its findings are important due to the size and scope of the analysis performed in terms of RCT reporting quality, as well as providing some preliminary information on CONSORT item reliability. We identified three statistically significant predictors (Year of Publication, IF, and Author Number). Year of Publication had the highest impact on CONSORT score: there was an increase of 3.1 in the CONSORT score comparing RCTs published in 2002-2010 to those published in 1992-1996, and an increase of 1.8 comparing RCTs published in 2002-2010 to those published in 1997-2001. These results may be explained by several factors that have changed over the two decades.

One possible factor is the increase in the number of researchers trying to publish, which has made publishing more competitive. Journals have raised the publication standards and the editorial instructions have become stricter. Also, the process of peer review has become more regulated. The number of peer reviewers has increased, and strict rules have been put in place to minimize potential biases, such as blinding the peer reviewers to the names of authors and institutions. Another possible factor is the advancement in technology. This advancement has a clear impact on the way research is conducted.

The predictor with the second largest coefficient was IF. RCTs in high-IF journals had a higher CONSORT average score (1.46 points higher) compared to RCTs in low-IF journals. Journals with a high IF may have stricter publication instructions and peer review processes, as well as increased use of the CONSORT methodology. The predictor with the smallest coefficient was the total author number: there was an increase of one point in the CONSORT score for an increase of about seven in the number of authors. Intuitively, larger research teams have the advantages of more feedback and internal reviews, and members from different backgrounds bring different experiences and perspectives. Although this association is statistically significant, it does not seem to be of practical importance.

For each individual item, kappa agreement ranged from 0.02 to 0.92, and percent agreement ranged from 30.9% to 97.8%. The following items were the least clear for the two reviewers to interpret: Allocation Mechanism (Item 9), Implementation (Item 10), Flow for Patients (Item 13a), Flow for Loss to Follow-up (Item 13b), Generalizability (Item 21), and Interpretation (Item 22). Previous work in the field of CONSORT checklist compliance has shown similar results [9, 12].

Recent year of publication was the only factor associated with an increase in reliability. This result could be explained by the same possible reasons mentioned above to explain the increase of the reporting quality in recent studies.

Our data showed that a few of the items have poor Kappa values yet high levels of agreement which is a phenomenon previously seen in the literature [13].

The variability in the level of agreement for some items is likely multi-factorial. One possible explanation is that journals adopting the CONSORT statement do not actually use the checklist as their guidelines; rather, they integrate the statement's recommendations into their own guidelines. A second possible explanation is that some items may be more applicable to certain specialties or procedures. For example, allocation concealment may be easier to explain and report in drug trials than in surgical trials. A third possible explanation is that the wording of some of the items is indeed unclear. In all cases, a baseline level of variation is expected with any interpretative activity.

Although improvements in RCT reporting have been observed over time in the cancer literature, the overall quality of reporting remains suboptimal (mean average CONSORT score of 16.6 [SD 3, maximum 25]). In this study, we found that 50% of the published literature has a reporting quality of 66.4% or less, and 85% has a reporting quality of 78.4% or less, based on the CONSORT statement's consensus definition of reporting standards. These findings mirror those of other investigators [9, 11].
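The percentages quoted above are direct conversions of CONSORT scores (out of 25) to percentage reporting quality; a quick sketch of the arithmetic (the score of 19.6 implied by the 78.4% figure is our back-calculation, not a value reported separately):

```python
def score_to_percent(score, max_score=25):
    """Convert a CONSORT score out of max_score to a percentage."""
    return 100 * score / max_score

median_pct = score_to_percent(16.6)   # ≈ 66.4%: the "50% or less" threshold
implied_score = 78.4 / 100 * 25       # ≈ 19.6: score behind the 78.4% figure
```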

Of the 515 reports included in the full article review, 107 had unclear abstracts. Many of these abstracts presented the reports as novel RCTs, while the full article review revealed that they were not. Many research consumers depend only on the abstract to obtain the research results and level of evidence. It is therefore concerning that up to a fifth (107/515) of the literature might provide inaccurate information in this regard. In addition to CONSORT reporting for RCTs, separate criteria regarding a standardized abstract format and nomenclature may assist with this issue. Other studies have found similar or less optimal results [11].

The study sample was derived from a single database, PubMed. This increases the reliability (internal validity) of the study results, yet potentially reduces the generalizability (external validity) to RCTs published in other databases. Plint, et al. [10] and Farrokhyar, et al. [12] reached similar conclusions regarding validity in their investigations. Their data were obtained from electronic searches of MEDLINE, EMBASE, and Cochrane CENTRAL and of MEDLINE, the Cochrane Library, CINAHL, HealthSTAR, and EMBASE, respectively, and both used the CONSORT checklist to score quality.

This study investigated the four most common cancer types. Since research studying other cancers is conducted in a similar pattern (same countries and journals), it might be possible, with caution, to generalize the study results to research dealing with other cancers. The study sample included RCTs published only in English; we cannot infer whether it is safe to generalize the study results to RCTs published in other languages. The article review was done by qualified reviewers with different research backgrounds (oncology, epidemiology), which increased the generalizability. The study sample includes RCTs conducted in many different institutions, groups, and countries, and this variety also supports good generalizability.

This study investigated RCTs published in journals that had published more than four RCTs meeting our eligibility criteria in the past 20 years. Therefore, the study results may not be applicable to RCTs published in journals that publish cancer research infrequently.

This study has a number of limitations and sources of bias. One source of bias is the fact that the two reviewers could not be blinded to the journals' names or authors; there might be a theoretical inclination to give high-impact journals a higher score because of their reputation. If this bias did occur, it would similarly affect older and newer publications, resulting in no effect on our study's conclusions. There is no clear way to evaluate how many of the included reports were "improved" by the journal editors and peer reviewers after submission to the journal. If this "improvement" in fact exists, its variation from journal to journal remains unknown.

We did not examine our data to see whether some of the RCTs were published by the same author (research team). The reporting quality of RCTs published by the same author is likely to be more similar than that of RCTs published by different authors; hence, the assumption of statistical independence, which is central to the validity of the hypothesis testing, could be violated. We did not factor this effect into our analysis because of its small magnitude, as the number of RCTs published by the same author in the sample is extremely small. Another limitation of our analysis is the use of the 2010 CONSORT item checklist for clinical trials that were reported before this time.


Conclusions

Based on our results, there are several directions for future study. Given the observed kappa agreement heterogeneity, further work in the assessment of the reliability of individual CONSORT items is warranted. Going forward, conducting a study with a prospective database might provide a higher level of evidence with regard to questions related to reporting quality. Our study provides complementary results to those from Moher, et al., highlighting the need for a more standardized method to assess the reporting of RCTs [14].

Appendix

Table 3: Core items from the 2010 CONSORT checklist used in the extraction form

Section/Topic Item No Checklist item
Title and Abstract
  1a Identification as a randomised trial in the title
1b Structured summary of trial design, methods, results, and conclusions (for specific guidance see CONSORT for abstracts)
Background and Objectives 2a Scientific background and explanation of rationale
2b Specific objectives or hypotheses
Trial Design 3a Description of trial design (such as parallel, factorial) including allocation ratio
3b*^ Important changes to methods after trial commencement (such as eligibility criteria), with reasons
Participants 4a Eligibility criteria for participants
4b Settings and locations where the data were collected
Interventions 5 The interventions for each group with sufficient details to allow replication, including how and when they were actually administered
Outcomes 6a Completely defined pre-specified primary and secondary outcome measures, including how and when they were assessed
6b*^ Any changes to trial outcomes after the trial commenced, with reasons
Sample Size 7a How sample size was determined
7b*^ When applicable, explanation of any interim analyses and stopping guidelines
Sequence Generation 8a Method used to generate the random allocation sequence
8b Type of randomization; details of any restriction (such as blocking and block size)
Allocation Concealment Mechanism 9 Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned
Implementation 10 Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions
Blinding 11a*^ If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how
11b*^ If relevant, description of the similarity of interventions
Statistical Methods 12a* Statistical methods used to compare groups for primary and secondary outcomes
12b*^ Methods for additional analyses, such as subgroup analyses and adjusted analyses
Participant flow (a diagram is strongly recommended)  13a For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome
13b For each group, losses and exclusions after randomisation, together with reasons
Recruitment 14a Dates defining the periods of recruitment and follow-up
14b*^ Why the trial ended or was stopped
Baseline Data 15 A table showing baseline demographic and clinical characteristics for each group
Numbers Analysed 16 For each group, number of participants (denominator) included in each analysis and whether the analysis was by original assigned groups
Outcomes and Estimation 17a For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval)
17b*^ For binary outcomes, presentation of both absolute and relative effect sizes is recommended
Ancillary Analyses 18*^ Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from exploratory
Harms 19 All important harms or unintended effects in each group (for specific guidance see CONSORT for harms)
Limitations 20 Trial limitations, addressing sources of potential bias, imprecision, and, if relevant, multiplicity of analyses
Generalizability 21 Generalizability (external validity, applicability) of the trial findings
Interpretation 22 Interpretation consistent with results, balancing benefits and harms
Other information
  23 Registration number and name of trial registry
  24 Where the full trial protocol can be accessed, if available
  25 Sources of funding and other support (such as supply of drugs), role of funders
*We strongly recommend reading this statement in conjunction with the CONSORT 2010 Explanation and Elaboration for important clarifications on all the items. If relevant, we also recommend reading the CONSORT extensions for cluster randomised trials, non-inferiority and equivalence trials, non-pharmacological treatments, herbal interventions, and pragmatic trials.



  1. Canadian Cancer Society's Steering Committee on Cancer Statistics: Canadian Cancer Statistics 2011. Canadian Cancer Society, Toronto, ON; 2011.
  2. Devereaux PJ, Manns BJ, Ghali WA, Quan H, Guyatt GH: The reporting of methodological factors in randomized controlled trials and the association with a journal policy to promote adherence to the consolidated standards of reporting trials (CONSORT) checklist. Control Clin Trials. 2002, 23:380-8.
  3. Moher D, Pham B, Jones A, Cook DJ, et al.: Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses?. Lancet. 1998, 352:609-13.
  4. Johnston SC, Rootenberg JD, Katrak S, Smith WS, Elkins JS: Effect of a US national institutes of health programme of clinical trials on public health and costs. Lancet. 2006, 367:1319-27.
  5. Altman DG: The scandal of poor medical research. BMJ. 1994, 308:283-4.
  6. Schulz KF, Altman DG, Moher D, CONSORT Group: CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. BMJ. 2010, 340:c332.
  7. Altman DG: Better reporting of randomised controlled trials: The CONSORT statement. BMJ. 1996, 313:570-1.
  8. Schulz KF, Altman DG, Moher D, CONSORT Group: CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. J Clin Epidemiol. 2010, 63:834-40.
  9. Moberg-Mogren E, Nelson DL: Evaluating the quality of reporting occupational therapy randomized controlled trials by expanding the CONSORT criteria. Am J Occup Ther. 2006, 60:226-35.
  10. Plint AC, Moher D, Morrison A, et al.: Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006, 185:263-7.
  11. Mathoulin-Pelissier S, Gourgou-Bourgade S, Bonnetain F, Kramar A: Survival end point reporting in randomized cancer clinical trials: A review of major journals. J Clin Oncol. 2008, 26:3721-6.
  12. Farrokhyar F, Chu R, Whitlock R, Thabane L: A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007, 50:266-77.
  13. Feinstein AR, Cicchetti DV: High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol. 1990, 43:543-9.
  14. Moher D, Jadad AR, Nichol G, Penman M, Tugwell P, Walsh S: Assessing the quality of randomized controlled trials: An annotated bibliography of scales and checklists. Control Clin Trials. 1995, 16:62-73.

Author Information

Ian Arra

Public Health and Preventive Medicine, Northern Ontario Medical School- East Campus - Laurentian University, Health Sciences Education Resource Centre

Public Health and Preventive Medicine Centre, Health Sciences Education Resource Centre

Vikram Velker

London Health Sciences Centre, London, Ontario, CA

Tracy Sexton

Department of Radiation Oncology, London Regional Cancer Program, London, Ontario, CA; Schulich School of Medicine & Dentistry, Western University, London, Ontario, CA

Brian W. Rotenberg

University of Toronto

R. Gabriel Boldt

London Health Sciences Centre

George Rodrigues Corresponding Author

Department of Radiation Oncology, London Regional Cancer Program, London, Ontario, CA; Schulich School of Medicine & Dentistry, Western University, London, Ontario, CA

Ethics Statement and Conflict of Interest Disclosures

Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.



