High-quality systematic reviews in Dentistry provide the most definitive overarching evidence for clinicians, guideline developers, and healthcare policy makers to judge the foreseeable risks, anticipated benefits, and potential harms of dental treatment. When carrying out a systematic review, it is essential that authors appraise the methodological quality of the primary studies they include, because poorly conducted studies can seriously weaken the overall strength of the evidence and the recommendations that can be drawn from it. In Endodontology, systematic reviews of laboratory studies have relied on quality assessment criteria developed subjectively by individual authors, as no comprehensive, well-structured, and universally accepted criteria exist that can be applied objectively to the individual studies included in such reviews. These subjective criteria are likely to be imprecisely defined, unreliably applied, inadequately analysed, biased, and non-repeatable. The aim of the present paper is to outline the process to be followed in developing comprehensive methodological quality assessment criteria for appraising laboratory studies, that is, research not conducted in vivo on humans or animals, included in systematic reviews within Endodontology. The development of these new criteria will follow a three-stage process. First, a steering committee formed by the project leaders will develop a preliminary list of assessment criteria by modifying and adapting those already available and adding several new items relevant to Endodontology.
This initial draft will be reviewed and refined by a Delphi Group (n = 40), whose members will rate the relevance of each item for inclusion on a nine-point Likert scale. Second, the agreed items will be discussed in an online or face-to-face meeting by a group of experts (n = 10) to refine the assessment criteria further. Third, based on the feedback received from that meeting, the steering committee will revise the quality assessment criteria, and a group of authors will then be selected to pilot the new system. Based on the feedback collected, the criteria may be revised further before final approval by the steering committee. The assessment criteria will be published in relevant journals, presented at national and international congresses/meetings, and made freely available on a dedicated website. The steering committee will update the criteria periodically based on feedback received from end-users.