By Aaron S. Edlin and Michael Love:
Knowing the magnitude and standard error of an empirical estimate is much more important than simply knowing the estimate’s sign and whether it is statistically significant. Yet we find that even in top journals, when empirical social scientists choose their headline results (the results they put in abstracts), the vast majority ignore this teaching and report neither the magnitude nor the precision of their findings. They provide no numerical headline results in 63% ± 3% of empirical economics papers and in a whopping 92% ± 1% of empirical political science or sociology papers published between 1999 and 2019. Moreover, they essentially never report precision (0.1% ± 0.1%) in headline results. Many social scientists appear wedded to a null-hypothesis-testing culture instead of an estimation culture. There is another way: medical researchers routinely report numerical magnitudes (98% ± 1%) and precision (83% ± 2%) in headline results. Trends suggest that economists, but not political scientists or sociologists, are warming to numerical reporting: the share of empirical economics articles with numerical headline results has doubled since 1999, and economics articles with numerical headline results receive more citations (+19% ± 11%).
Via somebody on Twitter?