
Year : 2019  |  Volume : 20  |  Issue : 1  |  Page : 32-33

Improper presentation of statistical results misleads into interpretations of study conclusions: An incidence of outcome reporting bias

Clinical Psychologist, PsyClinic, New Delhi, India

Date of Web Publication: 20-Jun-2019

Correspondence Address:
Dr. Tarun Verma
B-3/141, 2nd Floor, Paschim Vihar, New Delhi - 110 063

Source of Support: None, Conflict of Interest: None

DOI: 10.4103/AMH.AMH_49_18


How to cite this article:
Verma T. Improper presentation of statistical results misleads into interpretations of study conclusions: An incidence of outcome reporting bias. Arch Ment Health 2019;20:32-3



Suresh et al.[1] wrote an article on internet addictive behavior and subjective well-being (SWB) among 1st-year medical students. They concluded that individuals with higher levels of internet addiction showed reduced subjective happiness, and they drew this conclusion on the basis of a descriptive analysis of the two variables. They created four groups of internet addiction levels (normal, mild, moderate, and severe) and three levels of SWB (low, average, and high), but did not mention how many participants fell under each of the internet addiction and SWB groups. Only for the severe internet addiction test (IAT) group did they state that there was a single participant. This indicates a possibility of large sampling bias in the generalization of results: it is not certain whether the remaining three IAT groups had comparable numbers of participants, and for the 149 remaining participants the distributions could take any form, which could bias the results significantly. The authors stated that they took care of the normality of the data for the dependent variable (DV) (which differs across analyses), but no DV or independent variable was identified. Why they used two statistical packages, Statistical Package for the Social Sciences (SPSS) version 18.0 (SPSS Inc., Chicago, USA) and the R environment, and for which types of analyses, is again not clear. Means and standard deviations were nowhere mentioned for any variable or group.

On the other hand, they did report the gender distribution across the three SWB groups. However, the gender analysis was not important, as the study nowhere describes it as a chief hypothesis, and the authors did not discuss its results in detail. Here, the study mentions only one Chi-square test, with P = 0.508 (ns), and no Chi-square value. It is apparent from the data that the average SWB group (4.5–5.5) and the total SWB scores showed large differences in participants between the two genders. It was necessary to calculate four such values, one for each SWB level and one for the total scores, showing the differences between males and females.
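Reporting a Chi-square test properly means giving the statistic and its degrees of freedom alongside the P value. The sketch below, using entirely hypothetical cell counts (the paper's actual gender × SWB counts are not reproduced here) and SciPy's chi2_contingency, shows what a complete report of such a test looks like:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = gender (male, female),
# columns = SWB level (low, average, high).
table = [[12, 35, 28],
         [10, 40, 25]]

chi2, p, dof, expected = chi2_contingency(table)

# Report the full result, not just the P value:
# the statistic, its degrees of freedom, and P.
print(f"chi2({dof}) = {chi2:.2f}, P = {p:.3f}")
```

A reader given only "P = 0.508 (ns)" cannot check the computation; a reader given the statistic, the degrees of freedom, and the cell counts can.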

Only P = 0.401 (ns) for differences between the IAT groups is mentioned, in the Discussion section (not in the Results section), again with no F, Chi-square, or t value. The authors indicate that, despite no significant differences in the two analyses described, there is a “trend” that helps in concluding that individuals with higher internet addiction had lower levels of SWB.
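The source does not say which test produced P = 0.401. For a comparison of SWB scores across several IAT groups, a one-way ANOVA would be a typical choice; the sketch below, on hypothetical scores with SciPy's f_oneway, illustrates the reporting the letter asks for, with the F statistic and its degrees of freedom stated explicitly:

```python
from scipy.stats import f_oneway

# Hypothetical SWB scores per IAT group (normal, mild, moderate).
normal   = [5.1, 4.8, 5.3, 4.9, 5.0]
mild     = [4.7, 4.9, 4.6, 5.0, 4.8]
moderate = [4.5, 4.8, 4.4, 4.6, 4.7]

f_stat, p = f_oneway(normal, mild, moderate)

# Degrees of freedom: k - 1 between groups, N - k within
# (k = 3 groups, N = 15 observations here).
df_between, df_within = 2, 12
print(f"F({df_between}, {df_within}) = {f_stat:.2f}, P = {p:.3f}")
```

Whatever test the authors actually ran, omitting the statistic and reporting only a bare P value in the Discussion leaves the analysis unverifiable.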

The points described above indicate serious limitations of the study: the authors have neglected important standards in the reporting of results and have not made proper use of statistical tests. They “inferred” the findings from descriptive statistics rather than through inferential tests of significance. Their “trend” analysis of the relationship between internet addiction and SWB enabled them to conclude positively (confirming the alternative hypothesis) despite no significant differences. They should instead have emphasized the null findings of the study, but committed a Type I error by “interpreting” the findings as indicating a difference when there was none. This shows negligence on the part of the authors and misleading conclusions drawn from statistical tests. Such outcome reporting bias is quite commonly seen in research;[2] here, however, the researchers have not only omitted certain important findings but also interpreted the reported findings inappropriately. Authors often fear reporting negative results because of competition in the publication culture and concerns over the publish-or-perish phenomenon.[3] Negative results have their significance in the literature; however, researchers state that the chief reasons for focusing on positive and biased results are lack of time and priority, incompleteness of the study, work not initially intended for publication, low-quality study design, fear of rejection by the scientific community as well as by the publisher, and others.[4] HARKing is one common practice that leads to such distortions of findings.[5]

The article[1] should be cited as an instance of reporting bias and not for the merits it claims. Its findings are not conclusive, and its manner of reporting results contains several biases.

Financial support and sponsorship


Conflicts of interest

There are no conflicts of interest.

References

1. Suresh VC, Silvia WD, Kshamaa HG, Nayak SB. Internet addictive behaviors and subjective well-being among 1st-year medical students. Arch Ment Health 2018;19:24-9.
2. Sterne JA, Egger M, Moher D. Addressing reporting biases. In: Higgins JP, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. England: Wiley; 2008. p. 297-334.
3. Mlinarić A, Horvat M, Šupak Smolčić V. Dealing with the positive publication bias: Why you should really publish your negative results. Biochem Med (Zagreb) 2017;27:030201.
4. Fanelli D. Do pressures to publish increase scientists' bias? An empirical support from US states data. PLoS One 2010;5:e10271.
5. Kerr NL. HARKing: Hypothesizing after the results are known. Pers Soc Psychol Rev 1998;2:196-217.