NRIN Research Conference 2018

April 20, 2018 – MeetUp Jaarbeurs, Utrecht, The Netherlands
Last modified: October 3, 2018

2nd NRIN Research conference
Connecting researchers in an emerging field
Friday 20 April 2018, Jaarbeurs MeetUp, Utrecht

Following the highly successful first NRIN Research Conference in 2016, NRIN organized its second one-day Research Conference in Utrecht on April 20th, 2018. The conference aimed to showcase scientific research on Research Integrity, Research Ethics, and Meta-research in the Netherlands and Flanders, and to encourage and facilitate exchange, mutual learning, and collaboration among researchers in this field.

Scroll down for the program or download the PDF.
Find the PDFs of the presentations next to the titles in the program below (for the presentations in parallel sessions, click on the title to view the abstract and find the PDF).
To view all photos of the conference, please scroll all the way down.

Photo: Fenneke Blom

 

Program

09.00 Registration
09.30 Opening
09.45 Keynote lecture: Fixing the replicability crisis in science – Prof. dr. Jelte Wicherts | view PDF
10.35 “As-you-go” instead of “after-the-fact”: Better practices by design – Chris Hartgerink | view PDF

11.00 Break

11.30 Parallel sessions 1

  • 1.1 Statistical issues
  • 1.2 QRP, prevalence and prediction
  • 1.3 Initiatives to foster responsible research practices

12.45 Lunchbreak

13.45 Discussion: Future of scholarly communication – dr. Gerben ter Riet & dr. Mario Malički
14.30 Parallel sessions 2

  • 2.1 Biases and solutions
  • 2.2 Incentive structures
  • 2.3 Education

15.30 Break

16.00 Discussion: Redefinition of QRP – dr. Stephanie Meirmans & dr. Gerben ter Riet
16.45 Closing
17.00 Drinks

1.1 Statistical issues

Chair: Jelte Wicherts, Tilburg University

KNAW advice on Replication studies: what's next? - Jean Philippe de Jong

Presenter: Jean Philippe de Jong, Koninklijke Nederlandse Akademie van Wetenschappen (KNAW)

Systematically repeating other researchers' work should become more normal. That is the core message of the advisory report Replication studies – Improving reproducibility in the empirical sciences, which the KNAW published on 15 January 2018. Funders and scientific journals should make more room for replication research. In addition, the reproducibility of research results can be improved through better research methods and more transparent reporting. For this advisory report, a KNAW committee interviewed researchers, conducted a literature study, and organised a workshop with experts from the Netherlands and abroad. The committee observes that in the medical sciences, the life sciences and psychology much can still be done to improve the quality, and thereby the reproducibility, of research. This includes repeating research more often. To date, replication research appears to make up at most a few per cent of all research. The KNAW recommends that researchers, research institutions, funders and scientific journals jointly ensure that replication research becomes easier, among other things through proper storage of research data. Furthermore, journals should issue guidelines for reporting research and ensure that methods and hypotheses are fixed in advance. Scientists should help other researchers who want to replicate their work. Funders should free up more money for replication research. Finally, institutions should attach more value to replication research when evaluating staff. The next step after publishing the advisory report is to examine to what extent its conclusions and recommendations also apply to fields within the social sciences, humanities and natural sciences. Interim findings from this follow-up will be presented.

Perceptions on Null Hypothesis Significance Testing: Results from interviews with researchers - Jonáh Stunt

Presentation PDF
Presenter: Jonáh Stunt, VU University
Authors: J.J. Stunt; L.E. van Grootel; D. Trafimow; L.M. Bouter; J.T.C.M. de Kruif; L.D.J. Kuijper; A.G.J van de Schoot; I.H.M. Steenhuis; M.R. de Boer

Background: Methodologists and statisticians have long argued that there are substantial problems inherent in NHST. However, NHST is still used by the majority of empirical researchers, potentially leading them to draw erroneous conclusions from their research. The aim of our ongoing study is to explore perceptions of using NHST or its alternatives among all relevant stakeholders in the scientific system.
Methods: We are conducting interviews and focus groups with scientists, editors, representatives of funding agencies and lecturers of statistics. Interviews take place at the respondent’s workplace, typically last 45 to 75 minutes, and are conducted using a topic guide.
Preliminary results: So far, twelve interviews have been conducted with scientists and lecturers of statistics. First results indicate that respondents vary considerably in the extent to which they are aware of the debate around NHST. Respondents also differ in their understanding of the interpretation of p-values and NHST. All researchers think it is very important to draw conclusions in a valid way. Finally, most of the respondents are very open to using alternatives to NHST.
Discussion: From the interviews conducted so far, we have a fair idea of the perceptions of researchers and lecturers of statistics. We have planned interviews with representatives of funding agencies and are seeking contact with editors. This latter group presumably plays a vital role in the current ubiquitous use of NHST.
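For readers less familiar with the debate: the property of NHST that the interview questions about p-value interpretation turn on can be shown in a few lines of code. The sketch below is an illustration added here, not part of the study: under a true null hypothesis, p-values are uniformly distributed, so roughly 5% of tests come out "significant" at alpha = .05.

```python
# Minimal simulation (illustrative only): when the null hypothesis is true,
# about alpha of all tests are nevertheless declared 'significant'.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments, n_per_group = 10_000, 30

p_values = []
for _ in range(n_experiments):
    a = rng.normal(0, 1, n_per_group)  # both groups drawn from the
    b = rng.normal(0, 1, n_per_group)  # same population: H0 is true
    p_values.append(stats.ttest_ind(a, b).pvalue)

print(f"significant at alpha=.05: {np.mean(np.array(p_values) < .05):.3f}")
# ~0.050: a single 'significant' result alone says little about the hypothesis
```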

What heuristics do researchers use when assessing the outcomes of multiple studies? - Olmo van den Akker

Presentation PDF
Presenter: Olmo van den Akker (Tilburg University)
Co-authors: Linda Dominguez Alvarez, Marcel van Assen, Marjan Bakker, Jelte Wicherts

In social and experimental psychology, single studies are generally considered insufficient to test a theory, and multiple-study papers are the norm. In this project, we consider how researchers assess the validity of a theory when they are presented with the results of multiple studies that all test that theory. More specifically, we consider what researchers’ beliefs in the theory are as a function of the number of significant vs. nonsignificant studies, and whether this relationship depends on the type of studies (direct or conceptual replications) and the role of the respondent (researcher or reviewer). We find that researchers’ belief in the theory increases with the number of significant outcomes and that replication type and the respondent’s role do not affect response patterns. In a preregistered follow-up analysis we look at individual researcher data to find out which heuristics researchers use when assessing the outcomes of multiple studies. We assign each researcher to one of six categories: those who use Bayesian inference (i.e. the normative approach), those who use deterministic vote counting, those who use proportional vote counting, those who average prior beliefs with the proportion of significant results, those with irrational response patterns, and those whose response patterns are inconsistent with any of the previous categories. This follow-up study highlights mistakes researchers make when assessing the outcomes of scientific papers and sheds light on the ways we can educate current and future researchers to avoid making these mistakes.
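To make the normative benchmark concrete, here is a minimal sketch (a hypothetical illustration assuming independent studies with known power and alpha; not the authors' materials) of the Bayesian update an ideal researcher would perform after seeing a set of study outcomes:

```python
# Hedged sketch of the normative Bayesian benchmark: update belief in a
# theory after observing k significant results in n independent studies.
def posterior_belief(prior, k, n, power=0.8, alpha=0.05):
    """P(theory true | k of n studies significant), assuming independence."""
    like_true = power**k * (1 - power)**(n - k)    # P(data | theory true)
    like_false = alpha**k * (1 - alpha)**(n - k)   # P(data | theory false)
    return prior * like_true / (prior * like_true + (1 - prior) * like_false)

# Deterministic vote counting would call the theory 'supported' iff k > n/2;
# the Bayesian update is more nuanced: one nonsignificant study among three
# significant ones barely dents a moderate prior.
print(posterior_belief(prior=0.5, k=3, n=4))  # ≈ 0.999
```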

Study pre-registration: the need for improved guidance - Coosje Veldkamp

Presentation PDF
Authors: Veldkamp, C. L. S., Bakker, M., van Assen, M. A. L. M., Crompvoets, E. A. V., Ong, H. H., Soderberg, C. K., Mellor, D., Nosek, B. A., & Wicherts, J. M.

In studies using Null Hypothesis Significance Testing (NHST), researchers face many, often seemingly arbitrary, choices in formulating the hypotheses of their study, designing their experiment, collecting their data, analyzing their data, and reporting their results. Opportunistic use of these ‘researcher degrees of freedom’ aimed at obtaining statistical significance, however, increases the likelihood of obtaining false positive results, leads to overestimated effect sizes, and lowers the reproducibility of published results. In a recent study, we compared the effectiveness of two types of study pre-registration (i.e. stipulating all planned aspects of a study in advance) as a means to restrict opportunistic use of researcher degrees of freedom. Both types (Standard Pre-Data Collection Registrations and Prereg Challenge Registrations) are currently available on the Open Science Framework and differ in the extent to which they provide authors with detailed instructions and requirements on how to write the pre-registration. In this talk, we will discuss the results, the benefits and limitations of pre-registration, and suggestions on how to improve pre-registration formats.

Preregistration and Statistical Power - Marjan Bakker

Presentation PDF
Presenter: Marjan Bakker, Tilburg University
Co-authors: Veldkamp, C. L. S., van Assen, M. A. L. M., Crompvoets, E. A. V., Ong, H. H., Van den Akker, O., Soderberg, C. K., Mellor, D., Nosek, B. A., & Wicherts, J. M

Many studies in the psychological literature are underpowered (Bakker, van Dijk, & Wicherts, 2012; Cohen, 1990; Maxwell & Delaney, 2004). Specifically, in light of the typical effect sizes (ES) and sample sizes seen in the literature, typical power is estimated to be less than .50 (Cohen, 1990) or even .35 (Bakker et al., 2012). A recent study showed that the intuitions of researchers about power are also flawed, especially when ES are small to medium sized, and only half of the researchers use power analyses to make sample size decisions (Bakker, Hartgerink, Wicherts, & van der Maas, 2016).
A possible solution is a formal power analysis that is reported a priori. Two ways to force researchers to make a formal power analysis before starting the experiment are to incorporate the power analysis in a preregistration or in an Institutional Review Board (IRB) proposal. We present the results of a preregistered study in which we investigated whether the statistical power of a study is better when researchers are asked to make a formal power analysis before collecting the data. Furthermore, we discuss the problems with and quality of the power analyses, and give some concrete recommendations to improve power analyses.
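To make the notion of a formal a-priori power analysis concrete, here is a minimal sketch using statsmodels (an illustration assuming a generic two-sample t-test design, not the study's own code):

```python
# Hedged sketch of an a-priori power analysis for a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a 'medium' effect (d = 0.5)
# with 80% power at alpha = .05:
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"required n per group: {n:.0f}")  # ~64

# Conversely, the power a typical underpowered design actually achieves:
power = analysis.solve_power(effect_size=0.3, nobs1=25, alpha=0.05)
print(f"power with n=25, d=0.3: {power:.2f}")  # ~0.18, far below .80
```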

1.2 QRP, prevalence and prediction

Chair: Mario Malički, Academic Medical Center, University of Amsterdam

The Living Room of Science: Do We Need Questionable Research Practices To Survive in Academia? - Rens van de Schoot

Presentation PDF
Presenter: Rens van de Schoot, Utrecht University

Science has always been a dynamic process with continuously changing rules and attitudes. While innovation and new knowledge production are essential in academia, it is vital to make sure that best practices in research are widely known. However, rules and traditions on responsible research practices differ greatly between research disciplines, and different rules often apply in different fields. Most of these rules are subjective and in fact ‘unwritten’, which makes them difficult to identify, differentiate and grasp. The current debate about appropriate scientific practices is fierce and lively and has moved from academia to the public domain, resulting in many public opinions that are not driven solely by objective information but are also loaded with emotion. I will present the results of a vignette study about responsible research practices among PhD students in the Netherlands, replicated in Belgium. I also asked deans and heads of departments to provide expert opinions about the prevalence among PhD students of: data fabrication, deleting outliers to obtain significant effects, salami slicing, gift authorship, and excluding information from a paper. Their opinions are confronted with the data. Will the experts be able to predict the prevalence of questionable research practices?

Measuring Questionable Conclusions and Messages in Health Services Research - Reinie Gerrits

Presentation PDF
Presenter: R.G. Gerrits, Department of Public Health, Academic Medical Center – University of Amsterdam, Amsterdam Public Health Research Institute
Authors: R.G. Gerrits, T. Jansen, M.J. van den Berg, N.S. Klazinga, D.S. Kringos

Background:
Scientific publications in biomedical fields have been found to be prone to Questionable Research Practices (QRPs). It is likely that the Health Services Research (HSR) field is vulnerable to these practices as well. This study focusses on the phase of research in which scientists have the most freedom to give their own interpretation and potentially the largest impact with their work, i.e. making inferences from results by drawing conclusions in scientific publications. QRPs in reporting conclusions would be particularly worrisome considering the direct impact of HSR on policy and practice.

Aim: To construct a method for measuring the prevalence of questionable conclusions and messages in scientific Health Services Research publications applicable across research designs.

Methods: A literature review was performed to identify checklists and definitions of QRPs. This review formed the input for a consensus meeting and 14 individual interviews with the research leaders of 13 HSR groups in the Netherlands. QRPs derived from these interviews were listed and reviewed by 5 leading international health services researchers.
Results: We constructed a list of 35 QRPs in the reporting of messages and conclusions. The identified QRPs concern conclusions and messages in relation to, for instance, the abstract, objectives, design, findings, context of evidence, limitations, generalisations, causation, and language use. A Data Extraction Form was constructed in Excel to facilitate the systematic assessment of QRPs in the reporting of conclusions and messages in scientific HSR publications.

Can we predict responsible research practices in Randomized Controlled Trials? Study protocol for a big-data approach - Herm J Lamberink

Presentation PDF
Presenter: Herm Lamberink, Utrecht University Medical Center, Utrecht University
Co-authors: Willem Otte, Joeri Tijdink, Christiaan Vinkers

Background. The highest standard of evidence in medicine comes from randomized controlled trials (RCTs) and meta-analyses thereof. RCTs, however, are also subject to questionable research practices, such as performing underpowered studies, changing the primary outcome of the study, not adequately limiting sources of bias, or statistical misreporting. The aim of the current study is to find predictors of these four outcomes.
Method. Data collection. RCTs will be identified via PubMed. Basic information regarding the journal, article and authors will be downloaded. We have developed a script which downloads PDFs of trials available via our institutional license. PDFs will be converted to XML text-data files. Articles for which no XML version could be created will be excluded.
Variables. Outcome measures: statistical power (from Cochrane Database of Systematic Reviews), outcome reporting bias (via ClinicalTrials.gov), risk of bias (using RobotReviewer software) and correct statistical reporting (using StatCheck software). Potential predictors are related to the authors (number of authors, and for the first and last author: sex, country, Hirsch-index, academic age, uninterrupted publication presence, and number of collaborations), the institution of the first and last author (ranking by the Times Higher Education), the trial (financial support, lexicon, medical field, acknowledgements), and the journal (journal impact factor, journal impact factor stability).
Analysis. Logistic or linear regression will be used depending on the outcome. Predictors will be selected using backward selection.
Conclusion. The results of this study will allow for prediction of study power, outcome reporting bias, risk of bias and statistical rigor in clinical trials.
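For readers unfamiliar with StatCheck: it recomputes p-values from reported test statistics and flags inconsistencies. A minimal Python sketch of that idea follows (illustrative only; the actual StatCheck is an R package with far more elaborate text parsing):

```python
# Hedged sketch of a StatCheck-style consistency check: recompute the
# two-sided p-value from a reported t statistic and degrees of freedom,
# then compare it with the p-value reported in the article.
from scipy import stats

def check_t_report(t, df, p_reported, tol=0.01):
    """Return the recomputed p and whether it matches the report."""
    p_recomputed = 2 * stats.t.sf(abs(t), df)
    return p_recomputed, abs(p_recomputed - p_reported) <= tol

# e.g. an article reporting "t(28) = 2.10, p = .02":
p, consistent = check_t_report(t=2.10, df=28, p_reported=0.02)
print(f"recomputed p = {p:.3f}, consistent: {consistent}")  # p ≈ .045 → flagged
```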

Analysis of research misconduct cases - Shila Abdi

Presenter: Shila Abdi, KU Leuven, University of Leuven, Belgium
Authors: Shila Abdi, Benoit Nemery, Kris Dierickx

Background: Within the national guidelines in Europe on research integrity and misconduct, a remarkable discrepancy can be found as to whether intention should be considered a key factor in defining a practice as misconduct. Therefore, a thorough investigation of the role of intention when dealing with misconduct cases is essential.
Method: Twelve European countries were selected, representing three levels of regulation of the investigation of misconduct: local commissions, national advisory commissions, and national commissions with a legal mandate. The countries were contacted by e-mail regarding the collection of misconduct cases. A content analysis of the retrieved cases was conducted, and the data were analyzed for how intention was used in handling research misconduct.
Results: At this stage, data analysis is still ongoing. An analysis of 143 cases has already been done, providing an initial view of the role of intention in committing research misconduct. At the time of the conference, the results for the Netherlands will be presented, and different types of acts referring to the intentionality of actions will be discussed.
Conclusion: The results of the content analysis can be framed within the limited literature on the intentionality of a person’s actions. According to the literature, three levels of intentionality/culpability can be distinguished: intentional acts, grossly negligent acts, and careless/negligent acts. With the results of our research, we can also enter into discussion with the European Code of Conduct, under which it is no longer required to demonstrate that misconduct was committed intentionally, knowingly or recklessly.

1.3 Initiatives to foster responsible research practices

Chair: Willem Halffman, Radboud University

INSPIRE: Inventory in the Netherlands of Stakeholders’ Practices and Initiatives on Research integrity to set an Example - Fenneke Blom

Presentation PDF
Presenter: dr. Fenneke Blom, VU University Medical Center / Netherlands Research Integrity Network
Co-authors: Prof. dr. Lex Bouter, Dr. Daan Andriessen, Prof. dr. Yvo Smulders, Dr. Gerard Swaen, Dr. Joeri Tijdink, Dr. Coosje Veldkamp, Prof. dr. Guy Widdershoven, Dr. Hans Berkhof.

In 2018, the revised version of the Netherlands Code of Conduct on Research Integrity will become effective. However, having such a code does not imply that all stakeholders in scientific research are aware of the current rules of conduct and, more importantly, live up to them. Further steps need to be taken to facilitate dissemination and implementation among stakeholders in their day-to-day scientific research practice. Although many of the major stakeholders (including researchers, research policy makers, journal editors, funding agencies, supervisors, and review boards) have developed initiatives to foster responsible research practices (FRRP), exchange and mutual learning are essential to help stakeholders strengthen their initiatives, to effectively implement the code of conduct, and to avoid ‘reinventing the wheel’.
The Netherlands Research Integrity Network (NRIN) collects and shares a wide variety of relevant information on its website (www.nrin.nl), with the ultimate goal of uniting the research integrity community and facilitating collaboration, exchange and mutual learning. This project aims to elaborate on and augment the NRIN initiative by performing a more exhaustive and systematic inventory and assessment of current and planned initiatives to FRRP in the Netherlands. The project goes beyond merely sharing knowledge: it activates policy makers, administrators, teachers, researchers, supervisors, editors, and other stakeholders to contribute and reflect on their own practices. Consequently, we expect to inspire and enable all stakeholders in scientific research to facilitate local measures that foster responsible research practices.

The project comprises four parts. In Part 1, an ‘FRRP-checklist’ will be developed. This instrument aims to assess (on aspects such as effectiveness and quality of dissemination and evaluation) and classify (e.g. type of initiative, stakeholders, topics and research stage that are addressed) all FRRP-initiatives in the Netherlands. Moreover, it can be used by initiators to assess their (developing) initiatives. The FRRP-checklist will be developed by a diverse team of experts and stakeholders, by means of a Delphi procedure.

In Part 2, several strategies will be used to conduct the inventory, including a call, personal invitations to submit initiatives in our project team’s networks, consultation of previous inventories (such as www.equator-network.org, www.printeger.eu, www.rri-tools.eu etc.), and an automated search (web crawler) on the websites of research institutions in the Netherlands. The web crawler will be repeated after one year to find changes and new initiatives, and its scripts will be made available publicly for others to apply later again.
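As an illustration of the web-crawler component just described, here is a minimal sketch (the URL and keyword lists are hypothetical placeholders, not the project's actual scripts):

```python
# Hedged sketch of a crawler that fetches institutional pages and flags
# those mentioning research-integrity initiatives. Requires the third-party
# packages 'requests' and 'beautifulsoup4'.
import requests
from bs4 import BeautifulSoup

KEYWORDS = {"research integrity", "code of conduct", "responsible research"}
SEED_URLS = ["https://www.example-university.nl/research"]  # hypothetical

for url in SEED_URLS:
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        continue  # skip unreachable sites; a real run would log these
    text = BeautifulSoup(html, "html.parser").get_text(" ").lower()
    hits = sorted(k for k in KEYWORDS if k in text)
    if hits:
        print(url, "->", hits)
```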

In Part 3, the checklist will be applied to assess and classify the FRRP-initiatives collected in Part 2, and initiators will receive the results of the application of the checklist. Initiatives that meet our selection criteria (i.e. 1. sufficient information available, 2. within our scope, and 3. soon to be or currently implemented) will be assessed by two reviewers. All selected initiatives will be collected in an online toolbox to inspire and enable others to easily implement them or adjust them to the setting of another institution. FRRP-initiatives that are classified on the basis of the checklist as either ‘good practices’ or ‘unique initiatives’ will be described in more detail in mini case studies. This will ultimately result in a repository which may help the relevant national and international stakeholders in their efforts to foster responsible research practices.

In Part 4, the results of Part 3 will be shared in an open online toolbox designed for easily browsing through the included FRRP-initiatives. The toolbox will be made available via the online platform of the EU project Mapping Normative Frameworks for Ethics and Integrity of Research (EnTIRE), which is currently being developed.

The Bonn PRINTEGER Statement: guidance for research performing organisations to foster research integrity - Henriette Bout

Presentation PDF
Presenter: Henriette Bout, Integrity Agency of the municipality of Amsterdam & University of Amsterdam
Authors: Ellen-Marie Forsberg, Henriette Bout, Serge Horbach, Barend van der Meulen & Sarah de Rijcke

Research performing organisations are increasingly expected to promote responsible conduct in research, foster research integrity and reduce the risk of research misconduct. While the initial focus in the debate on research integrity was on the individual researcher, scholars increasingly recognise the crucial role of organisations and the research system in establishing conditions under which research integrity can thrive. This has recently led to heightened attention to integrity policy in research performing and supporting organisations. To further this agenda, the PRINTEGER project initiated a consensus panel of experts in the field of research integrity, tasked with setting up a framework supporting organisations in strengthening research integrity. The resulting document, the Bonn PRINTEGER Statement, emphasises that responsibility for ethical research lies with everyone who is active in research, but especially with leaders in research performing organisations. It acknowledges that individual researchers’ virtues alone cannot ensure research integrity. Instead, good conditions for exercising integrity must also be created at the organisational and systemic level. The statement intends to make research integrity challenges recognisable from the work-floor perspective. It provides concrete advice on organisational measures to strengthen integrity, thereby complementing existing codes and regulations that commonly address the individual researcher. On behalf of the consensus panel, we briefly elucidate the process that formed the Bonn PRINTEGER Statement, which was recently published in Science and Engineering Ethics. We further elaborate on the statement’s content and comment on how organisations may implement its advice, taking into account differences in their size, goals, structure and legal embedding.

Research Integrity in Dutch Universities of Applied Sciences - Daan Andriessen

Presentation PDF
Presenter: D. Andriessen HU University of Applied Sciences Utrecht
Co-authors: R. van der Sande, E. Wouters and A. van Gorp

Abstract: Ever since the introduction of professors in Dutch universities of applied sciences (UASs) in 2001, practice-oriented research has held a structural and unmistakable position within these UASs. The aim of the research activities in the UASs is to foster scientific competencies among teachers in higher vocational education, to feed the curriculum with state-of-the-art knowledge, and to contribute to the innovation of professional practice. For the purposes of research and education, UASs work closely together with industry, small and medium-sized enterprises and other organizations. So far, little is known about the policies, measures and good practices of the UASs to foster research integrity. This research project aims to increase the understanding of current practices in the UASs with regard to research integrity and to identify which measures should be developed according to those involved. The study will aid UASs in formulating policies and (educational) measures to ensure research integrity and in implementing the new code of conduct for research integrity that is currently under development. The study consists of a web-based search and survey among all UASs into policies and measures with regard to research integrity, a survey among professors, and focus groups and interviews among relevant stakeholders. A first impression is that research integrity practices at UASs are rather diverse. Some UASs have a rather complete research integrity policy, including possibilities for ethical review of research, complaints procedures, data management plans, formats and systems. Other UASs are at the beginning of developing a research integrity policy.

Peer review innovations and their effectiveness as a self-regulating mechanism - Serge Horbach

Presentation PDF
Presenter: Serge Horbach, Institute for Science in Society – Radboud University & CWTS – Leiden University
Authors: Serge Horbach & Willem Halffman

The effectiveness of self-regulating mechanisms is crucial to the scientific enterprise. Among these, the peer review system in particular is often considered the best available gatekeeper of both quality and integrity in science. However, several recent studies suggest that peer review is under threat. They demonstrate that faulty and even fraudulent research slips through editorial peer review at alarming rates. Numerous measures have been proposed to improve the quality of the peer review system, but their effectiveness remains uncertain. This study assesses the effectiveness of measures used by scientific journals to improve peer review’s ability to detect misconduct and error. It does so in several stages. First, it identifies which innovative peer review practices are currently suggested and implemented. Second, it creates an inventory of which journals have implemented precisely which peer review format, showing similarities and differences in processes of quality control between various research disciplines. This part uses a web-based survey among journal editors, asking questions on 12 characteristics of their journal’s review model. Third, the study examines how effectively these review practices spot problematic publications, with bibliometric analysis techniques that use retractions of journal publications as an indicator of problematic publications. We will present the results of our project, focussing on the survey results and the bibliometric analysis, drawing conclusions concerning the use of peer review practices and their effectiveness as a self-regulating mechanism.
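A hedged sketch of the bibliometric idea described above, using retractions as an indicator of problematic publications (hypothetical data, not the study's results):

```python
# Illustrative only: compare retraction rates across journals grouped by
# their peer review model, from a hypothetical journal-level dataset.
import pandas as pd

df = pd.DataFrame({
    "journal":      ["A", "B", "C", "D"],          # hypothetical journals
    "review_model": ["single-blind", "single-blind", "open", "open"],
    "articles":     [12000, 8000, 5000, 7000],
    "retractions":  [18, 10, 3, 5],
})

rates = (df.groupby("review_model")[["retractions", "articles"]].sum()
           .assign(rate_per_10k=lambda g: 1e4 * g.retractions / g.articles))
print(rates)
```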

Discussion: Future of Scholarly Communication

Dear colleagues,

We would like to invite you to a discussion session on the future of scholarly communication, which will be facilitated using MeetingSphere software. MeetingSphere guarantees anonymity and also allows online reflection after the discussion. Please bring your laptop, tablet, iPad or smartphone with you. You will be able to join the discussion using the link: https://eu01.meetingsphere.com/01028638/nrin, which will also be provided during the session (Note: when using an iPad, download the MeetingSphere app via the iTunes store).

The topic we would like to tackle is the future of scholarly communication.

Description of the problem: The replication crisis in science, alongside p-hacking and hypothesizing after the results are known (HARKing), involves practices believed to be preventable or mitigated by prospective registration of study protocols (before data collection or subject recruitment starts), for both experimental and observational research. As the effectiveness of current peer review practices has been called into question, and the agreement between peer reviewers has been found to be very low, there has been advocacy for switching to preprints (publishing non-peer-reviewed versions of manuscripts) and for a greater focus on post-publication peer review.

Question: What would be the pros and cons of switching to preprint servers and post-publication peer review?

In hopes that you will join us,
On behalf of the project members,

Mario Malicki
Postdoc at the Academic Medical Center, UVA, Amsterdam
Project website
ORCID

2.1 Biases and solutions

Chair: Jelte Wicherts, Tilburg University

Selective citation in the published literature on the association between swimming in chlorinated water and childhood asthma: a citation analysis - Bram Duyx

Presentation PDF
Presenter: Bram Duyx, Care and Public Health Research Institute (School CAPHRI), Maastricht University, Maastricht, The Netherlands.
Authors: Bram Duyx, Miriam J.E. Urlings, Gerard M.H. Swaen, Lex M. Bouter, Maurice P. Zeegers

Introduction: The development of knowledge depends on an unbiased representation of the evidence. Selective citation may thwart this unbiased representation. Research on the relationship between swimming in chlorinated water and childhood asthma has caused some controversy in recent years, raising the question of the role of selective citation. Specifically, we aimed to assess which factors determine the citation of previous publications in this field.

Methods: We identified all published literature on this topic in the Web of Science Core Collection. We extracted data on publication characteristics related to content (e.g. study outcome, article type, sample size), content-unrelated characteristics (e.g. funding source, impact factor), and author characteristics (e.g. gender, authority). We also looked at self-citation. To assess the impact of these factors on citation, we performed a series of univariate random-effects logistic regressions, with the potential citation paths as the unit of analysis.

Results: A total of 36 publications were identified. There was strong evidence for self-citation in this network: publications that had at least one author in common cited each other 5.2 times more often than publications that did not. Other predictors of citation were: empirical publications (odds ratio 4.2), big sample size (odds ratio 5.8), and high authority (odds ratio 4.1).

Conclusions: There is clear evidence of selective citation in this research field. Authors particularly prefer to cite their own work rather than that of others.
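The headline self-citation figure is, at its core, an odds ratio over potential citation paths. A minimal sketch with hypothetical counts chosen to land near the reported 5.2 (the study's actual analysis used univariate random-effects logistic regression):

```python
# Illustrative only: odds ratio over potential citation paths,
# cross-classified by shared authorship and whether the citation occurred.
cited_shared, uncited_shared = 40, 25    # paths with a common author (hypothetical)
cited_other,  uncited_other  = 90, 290   # paths without one (hypothetical)

odds_ratio = (cited_shared / uncited_shared) / (cited_other / uncited_other)
print(f"self-citation odds ratio: {odds_ratio:.1f}")  # ≈ 5.2
```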

Abstract Reporting Bias: a research proposal to assess a new type of bias - Bram Duyx

Presentation PDF
Presenter: Bram Duyx, Care and Public Health Research Institute (School CAPHRI), Maastricht University, Maastricht, The Netherlands
Authors: Bram Duyx*, Gerard M.H. Swaen, Miriam J.E. Urlings, Lex M. Bouter, Maurice P. Zeegers

Reporting bias refers to the phenomenon that negative findings are less likely to be reported in publications. This distorts the development of knowledge; even systematic reviews – generally considered to provide the highest level of evidence – can be biased because of this.

Something similar may occur in the case of abstract reporting bias (or abstract bias, for short). This is the phenomenon that negative findings are less likely to be reported in abstracts, even if they are conscientiously reported in the full text of a publication. Because the main academic databases, such as Web of Science and PubMed, are not capable of searching within full texts, such studies are less likely to be identified by systematic searches of the literature. Consequently, they are also less likely to appear in systematic reviews and meta-analyses. If this is indeed the case, then the conclusions of systematic reviews may be skewed in the positive direction. We believe that epidemiological fields studying multiple health outcomes are particularly prone to this type of bias.

In our presentation we will propose a study to estimate the occurrence of abstract reporting bias within a specific research field and to assess its consequences. For this study we will focus on the association between diesel emissions and bladder cancer. We intend to run a broad search that contains all cancer outcomes, and compare its results with a more specific search that is restricted to bladder cancer alone.

Past and Future Dependencies in Meta-Analysis: Safe Statistics for Reducing Health Research Waste - Judith ter Schure & Peter Grünwald

Presentation PDF
Presenter: Judith ter Schure, Centrum Wiskunde & Informatica
Authors: Judith ter Schure, Peter Grünwald

In 2009, a paper estimated that 85% of our global health research investment is wasted each year. It recommended reducing this waste by basing study design and reporting on meta-analyses, involving both decision making (which issues to research and how) and interpretation (how new trial results relate to previous research). However, conventional meta-analysis reporting – p-values and confidence intervals – is neither suitable for such decisions nor straightforwardly interpretable. As a decision procedure, it treats a sequence of trials as an independent sample, while in reality both (1) whether additional trials are performed, and how many, and (2) the timing of the meta-analysis (the ‘stopping rule’) often depend on previous trial results. Ignoring this introduces (1) ‘Accumulation Bias’ and (2) optional stopping problems, while the existence of such dependencies has been demonstrated empirically, e.g. the ‘Proteus effect’ and ‘citation bias’. To solve both (1) and (2), we propose ‘Safe Tests’ and a reporting framework. Which tests are ‘Safe’ (e.g. the Bayesian t-test) and which are not (both Bayesian and frequentist tests) is discussed intuitively in the meta-analysis context; mathematical detail is postponed to a forthcoming paper. The reporting framework is also focused on the meta-analysis context, but is based on an individual-study setup put forward by Bayarri et al. (2016) as ‘Rejection Odds and Rejection Ratios’. Apart from decision making, our proposal also improves meta-analysis interpretation: reported values can be related to gambling earnings and to both frequentist and Bayesian analyses, and thus, apart from reducing waste, also contribute to the recently revived p-value debate.
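The optional stopping problem the abstract refers to is easy to demonstrate by simulation. The sketch below is an added illustration and does not implement Safe Tests themselves: an analyst who re-tests the accumulating data after every new trial and stops at the first p < .05 inflates the type I error rate well beyond the nominal 5%, even though every null hypothesis is true.

```python
# Illustrative simulation of optional stopping in a trial sequence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_meta, max_trials, n_per_trial = 2000, 10, 30

false_positives = 0
for _ in range(n_meta):
    effects = []
    for _ in range(max_trials):
        effects.extend(rng.normal(0, 1, n_per_trial))  # true effect is zero
        p = stats.ttest_1samp(effects, 0).pvalue       # re-test pooled data
        if p < 0.05:                                   # stop at 'significance'
            false_positives += 1
            break

print(f"type I error with optional stopping: {false_positives / n_meta:.2f}")
# well above .05 — Safe Tests are designed to keep their guarantee
# under such data-dependent stopping rules
```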

Effect of Financial Conflicts of Interest on Reported Outcomes and Conclusions in Evaluations of Public Employment Programmes: A Meta-Analysis - Arnaud Vaganay

Presentation PDF
Presenter: Arnaud Vaganay, Director of Meta-Lab
Authors: Arnaud Vaganay, Maria Monge-Larrain, Stephan Bruns

Studies assessing the effect of health or behavioural interventions on their target groups are frequently funded or co-authored by the very organisations that implemented the intervention in the first place. This situation creates a financial conflict of interest (FCOI). Of the five systematic reviews published so far, three (including 120 studies) found that FCOIs were associated with favourable outcomes and two (including 18 studies) found no such association. The overwhelming majority of this literature is in the area of health. This study extends this literature to evaluations of active labour market policies (ALMPs). The study considered two definitions of FCOI: (a) a narrow definition, based on the proportion of authors affiliated with a conflicted organisation, and (b) a broader definition, considering both affiliations and/or the provision of external funding. For both definitions, we investigated whether there was a positive association with reported (1) effect sizes, (2) estimates, and (3) conclusions. We meta-analysed 207 evaluations of ALMPs to answer these questions. Our preliminary results provide evidence of a positive and significant association between FCOI and our three outcomes based on the strict definition of FCOI. However, we found no such association when using the broader definition. These results suggest that, at the very least, users and readers of these evaluations should discount the conclusions of studies with conflicted co-authors. Ideally, researchers affiliated with a policy-making organisation should refrain from evaluating the interventions developed or implemented by that organisation.

2.2 Incentive structures

Chair: Joeri Tijdink, VU University

Research Integrity in Norway (RINO) - Jeroen van der Sluijs

Presentation PDF
Update on the RINO project (http://www.uib.no/en/rino)

Proxy Economics – An interdisciplinary theory of competition with imperfect information - Oliver Braganza

Presentation PDF
Presenter: Dr. rer. nat. Oliver Braganza, University of Bonn

In many areas of society we rely on competition to better achieve societal goals. However, due to imperfect information, competition generally relies on quantitative proxy measures to assess performance. Examples include: in science, the publication count of an author; in healthcare, the number of patients treated; or in economics, the profit achieved. Importantly, in many circumstances it may be possible to make decisions which optimize ‘proxy performance’ but not the actual societal goal, such that individual decisions and cultural practices may shift toward the proxy. In fact, prominent voices have argued that this is precisely what has happened in the current scientific ‘reproducibility crisis’ [1]. Unfortunately, we lack a coherent theory on the basis of which to assess such claims. Here, I develop an interdisciplinary theory of ‘proxy economics’. The central concept is captured by a law attributed to Charles Goodhart or Donald T. Campbell, most pithily phrased as: “When a measure becomes a target it ceases to be a good measure.” We observe that any proxy measure in a competitive societal system becomes a target for the competing individuals (or groups). A progressive cultural evolution towards proxy-oriented practices may ensue, as has recently been proposed for scientific practices [2]. The theory is developed around an agent-based computational model to provide a formal description of the minimal components required to capture the tension between moral/social considerations and competitive pressures.

1. Bénabou, R. & Tirole, J. Bonus Culture: Competitive Pay, Screening, and Multitasking. J. Polit. Econ. 124, 305–370 (2016).
2. Smaldino, P. E. & McElreath, R. The natural selection of bad science. R. Soc. Open Sci. 3, 160384 (2016).
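To give a flavour of what such an agent-based model might look like, here is a deliberately minimal toy sketch (an illustration constructed for this page, not the author's model): agents differ in how much effort they put into the proxy, selection rewards noisy proxy performance, and the population drifts toward pure proxy orientation.

```python
# Toy proxy-economics dynamic: selection on a proxy measure drives the
# population toward proxy-oriented practice ('Goodhart's law' in action).
import random

random.seed(1)
agents = [random.random() for _ in range(100)]  # effort share put into the proxy

for generation in range(50):
    # competition: proxy performance (plus noise) decides who gets copied
    scores = [a + random.gauss(0, 0.1) for a in agents]
    ranked = [a for _, a in sorted(zip(scores, agents), reverse=True)]
    survivors = ranked[:50]
    # reproduction with small mutation: successful practices spread
    agents = [min(1.0, max(0.0, a + random.gauss(0, 0.02)))
              for a in survivors for _ in (0, 1)]

print(f"mean proxy orientation after selection: {sum(agents)/len(agents):.2f}")
# drifts toward 1.0: 'when a measure becomes a target...'
```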

Optimizing the Responsible Researcher - Joeri Tijdink & Govert Valkenburg

Presentation PDF
Presenters: Joeri Tijdink, VU University & Govert Valkenburg, CWTS Leiden
Authors: J.Tijdink, G. Valkenburg, S. de Rijcke

Introduction: The project Optimizing the Responsible Researcher aims to articulate received ideals of responsible research and responsible researchers, and compare these to the systems of recruitment, assessment and promotion of biomedical researchers. These systems will be studied through the lens of cultural analysis, focusing on how ideals circulate in research practices, how they become (transformed and) codified into rules and regulations, and how individual researchers in turn make sense of what these rules and regulations seem to demand from them.
Methods: In our project, we use focus groups and semi-structured interviews to determine an optimal profile of a responsible researcher from a biomedical scientist’s perspective, and compare these outcomes with document analyses of recruitment, assessment and promotion practices within several biomedical institutions in the Netherlands.
Results: At the NRIN conference, we will discuss: 1. preliminary insights from empirical research (semi-structured interviews and focus groups) into what ideals of responsible research are held within the biomedical community, and 2. what kind of ideal seems to emerge de facto from policies regarding recruitment, assessment and promotion. 3. We interpret these results in a framework of cultural analysis, in which both institutional arrangements and endorsed ideals are articulated. 4. Finally, we discuss whether other assessment and promotion practices are warranted and, if so, what these practices should contain.

Publication pressure among academic researchers in Amsterdam: a cross-sectional survey study among 1000+ researchers - Tamarinde Haven

Presentation PDF
Presenter: Tamarinde Haven, MSc. – VU University, Department of Philosophy
Co-authors: Dr. Joeri Tijdink (VU), Prof. Dr. Lex Bouter (VU/VUmc), Prof. Dr. Frans Oort (UvA)

Publications are the standard performance criterion for individual researchers and research institutions at large. To be successful in academia, one must publish plenty, and preferably in high-impact journals. The rising prestige of impact-factor measures and the emphasis on the quantity of publications intensify competition between researchers. This competition was traditionally considered an incentive to produce high-quality work, but this hyper-competitive climate has unwanted side-effects such as publication pressure. Publication pressure may tempt researchers to cut corners in ways that compromise their research integrity. Publication pressure has also been linked to burnout in academics, massive dropout among young researchers, and mental health problems. Little empirical evidence is available on the level of publication pressure, researchers’ opinions on publication culture, and the resources researchers are provided with to avoid cutting corners or burning out. Our research question was: What is the level of publication pressure in the four academic institutions in Amsterdam, stratified by academic rank and disciplinary field? We used the Publication Pressure Questionnaire in a cross-sectional survey to investigate this. The talk starts with a brief outline of the study’s context and the conceptual models we use to investigate publication pressure. We will briefly explain how we revised the Publication Pressure Questionnaire and describe our data collection procedure. Then we will present the first survey results, focusing on our main research question. We will end by discussing potential implications of our findings and explain how our data will be used in future interventions.

2.3 Education (discussion session)

Chair: Evert van Leeuwen, RadboudUMC

Some initiatives on education in the responsible conduct of research (RCR) will be presented, followed by a discussion on how to study the effectiveness of RCR training. Are we doing the right thing?

Teaching Scientific Integrity with Upright - Vincent Coumans

Presentation PDF
Vincent Coumans, Institute for Science in Society, Radboud University
Co-authors: Luca Consoli, Hub Zwart

The PRINTEGER project started in 2015. Its mission is to enhance research integrity by promoting a research culture in which integrity is part and parcel of what it means to do excellent research.

Educational tools are key in promoting such a research culture. Hence, part of the PRINTEGER project is the development of Upright, an educational tool for scientific integrity. Upright is envisioned as a web-based interactive learning environment with a strong focus on a virtue ethics approach in teaching scientific integrity.

Apart from providing an informative foundation, Upright will allow users to discuss and reflect on issues of scientific integrity. Furthermore, a key element of Upright is that it can be used not only as a standalone tool, but also as a complementary tool for course-contexts.

In this presentation we will first lay out the stated learning goals. Then we will describe the current development process of Upright, with particular attention to the feedback received on the tool and to how these learning goals are addressed in Upright. After this conceptual introduction, there will be an interactive demonstration of the current prototype in which the main elements of Upright are shown. We conclude the presentation with a description of future work in the form of try-outs and concrete improvements and implementations of Upright. After the presentation, the audience can provide feedback on the educational tool Upright.

Two innovative teaching methods in RCR training: Moral Case Deliberation and Fiction movies - Fenneke Blom

Presentation PDF
A short introduction to two innovative interactive methods we apply in our RCR course for PhD students of the VU University Medical Center: Moral Case Deliberation on cases brought in by the participants themselves, and discussion initiated by fragments of fiction movies.

Higher Education Institutions and Responsible Research and Innovation (HEIRRI) - Jeroen van der Sluijs

Discussion: Redefinition of QRP

Discussion about questionable research practices (QRP)
led by Stephanie Meirmans and Gerben ter Riet (both AMC, University of Amsterdam)

The discussion question will be whether the commonly used meaning of “questionable research practices” in the research integrity community might be in need of change. We will start the session with a dialogue on how one might view questionable research practices differently when approaching them from clinical epidemiology (Gerben ter Riet) versus philosophy of science in practice (Stephanie Meirmans). We will then open the discussion to the audience, also (but not exclusively) using the digital discussion tool MeetingSphere.
