From: How Delphi studies in the health sciences find consensus: a scoping review
No | Main category | Subcategory | Definition and categories |
---|---|---|---|
1 | General aspects | Area | 1. Clinical patient care = diagnosis and therapy of diseases in inpatient settings, e.g., ID5a 2. Healthcare services/public health = management of diseases, availability of care, access to healthcare, policy implications, e.g., ID35 3. Medical education = teaching and studying in health science programs, competencies of healthcare professionals, e.g., ID83 4. Methodological health research = methods in healthcare, research on research, e.g., ID134 |
2 | General aspects | Delphi variant | 1. Classic = reported as a classic Delphi study or not reported as a modified Delphi study 2. Modified = reported as a modified Delphi study |
3 | General aspects | Consensus criterion for rating scales | 1. Standardized measure of dispersion = e.g., coefficient of variation, interquartile range, standard deviation 2. Standardized measure of central tendency = e.g., median, mean 3. Percentage agreement (one scale point) = proportion of agreement with a single value, e.g., 70% vote for 5 on a 5-point scale 4. Percentage agreement (adjacent scale points) = proportion of agreement with two adjacent values, e.g., 70% vote for 3 or 4 on a scale of 1–5 5. Percentage agreement (other conditions) = other criteria for measuring percentage agreement, e.g., less than 15% vote 1 or 2 and at least 70% vote 6 or 7 on a 7-point scale, or proportion of agreement within specific subgroups 6. Percentage agreement (unclear) = unclear definition of consensus, e.g., unclear which scale points were used to measure percentage agreement 7. Dependency analyses = e.g., Kendall’s coefficient of concordance, Spearman’s rho 8. Other criteria = e.g., number of outcomes predefined, content validity index, RAND/UCLA disagreement index, diversity of responses. Criteria 1–4 are illustrated in the first sketch after the table |
4 | General aspects | Percentage level consensus | Reported percentage level of consensus, e.g., 75%. Criteria may differ between Delphi rounds; in that case, all criteria were noted |
5 | Panel of experts | Sampling strategy | 1. Snowball sampling = researcher relies on participant referrals to recruit new participants, e.g., recruiting colleagues from one's own network 2. Purposive sampling = researcher seeks out participants with specific characteristics, e.g., recruiting researchers on the topic of artificial intelligence in clinical patient care 3. Purposive quota/random sampling = researcher randomly selects cases from within several different subgroups/quotas, e.g., random selection of a number of the identified researchers on the topic of artificial intelligence in clinical patient care 4. Pool from a previous project or register = researchers select cases from a previous project or register, e.g., participants from a previous study 5. Convenience sampling = the authors reported selecting participants by convenience, e.g., the researcher gathers data from whatever cases happen to be convenient 6. Open calls = researchers recruit through open calls, e.g., through professional societies, regional networks, and advertisements on social media platforms |
6 | Panel of experts | Number of participants first round | Reported number of experts completing the first survey round |
7 | Panel of experts | Number of participants last round | Reported number of experts completing the last survey round |
8 | Panel of experts | Heterogeneity of expertise | 1. Homogeneous = only one group of participants, e.g., nurses 2. Heterogeneous = the panel consists of participants from different disciplines and/or subject areas, e.g., nurses, care managers, nursing researchers 3. Heterogeneous including everyday life experts (e.g., patients) = the panel consists of participants from different disciplines and/or subject areas including affected persons, e.g., patients, patient representatives, affected persons |
9 | Panel of experts | Scope | 1. National = one country, e.g., Germany 2. International = two or more countries without local scope, e.g., Germany and South Africa 3. Local = local cross-national focus, e.g., German-speaking region 4. Regional = specific region in one country, e.g., central Berlin |
10 | Questionnaire design | Survey software | Name of the digital platform for conducting the survey rounds, e.g., SurveyMonkey, LimeSurvey, Google Forms, Microsoft Excel |
11 | Questionnaire design | Question types first Delphi round | 1. Closed questions = questions with one or more answer options to choose from, e.g., rating, ranking, and multiple-choice questions 2. Closed questions with the possibility to comment = questions with one or more answer options to choose from and the possibility to comment on answers, e.g., including the option to reformulate or suggest new items 3. Open-ended questions = exclusively questions with free-text fields, e.g., in a first qualitative Delphi round |
12 | Questionnaire design | Number of items first Delphi round | Reported number of items or questions of the first survey round. A breakdown into items versus questions was not given in every case |
13 | Questionnaire design | Question types last Delphi round | 1. Closed questions 2. Closed questions with the possibility to comment 3. Open-ended questions |
14 | Questionnaire design | Number of items last Delphi round | Reported number of items or questions of the last survey round. A breakdown into items versus questions was not given in every case |
15 | Questionnaire design | Width of rating scales | Width of rating scales, e.g., 4-point scale (1 = strongly agree, 4 = strongly disagree). If the response options or scale endpoints are not reported or are unclear, only the scale width is noted, e.g., 4-point scale |
16 | Questionnaire design | Rating scale, evasive category | Use of an evasive category, e.g., an “unsure” or “don’t know” answer option, or an option to answer “absent” due to a lack of perceived expertise. Recorded as reported or not reported |
17 | Questionnaire design | Randomization of questionnaire content | Randomization of question blocks, of questions within question blocks, or of answer options within questions, e.g., through the survey software randomly assigning respondents to different item orders. Recorded as reported or not reported |
18 | Process and feedback design | Timing of consensus definition | 1. A priori = determined before the Delphi round 2. A posteriori = determined after the Delphi round |
19 | Process and feedback design | Method or literature reference for the analysis of qualitative data | 1. Content analysis 2. Thematic analysis 3. Inductive approach (for 1–3, the method is recorded as mentioned in the text, e.g., thematic analysis) 4. Other = e.g., grounded theory, quantitative analysis |
20 | Process and feedback design | Feedback designed to reconsider the judgments | 1. Group response or summary = summary of qualitative data, e.g., comments from open-ended questions, or summary of quantitative data, e.g., statistical feedback of results from closed-ended questions (illustrated in the second sketch after the table) 2. Group response or summary of different groups of participants = peer feedback from one or more groups of participants, e.g., caregivers received feedback from patients or only from caregivers 3. Individual response = display of the respondent's own answers from the previous round |
21 | Process and feedback design | Termination criterion | 1. Consensus reached = achieving consensus in the Delphi study on all or the majority of the issues 2. Number of rounds = terminate the Delphi study after a predefined number of rounds, e.g., after two rounds of voting 3. Stability of judgments = terminate the Delphi study if the judgments are stable, e.g., determined through the interquartile range or changes in mean scores (illustrated in the third sketch after the table) 4. Other criteria = other criteria to terminate the Delphi study, e.g., when no new items are proposed, the judgments align, or the response rate drops below a certain value |
22 | Process and feedback design | Number of rounds | Reported number of survey rounds/iterations, e.g., three Delphi rounds |
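
The consensus criteria catalogued in rows 3 and 4 are simple descriptive statistics. As a rough illustration only (not code from the review), the following Python sketch computes the measures behind criteria 1–4 for a single item rated on an assumed 5-point scale; the 75% threshold is likewise an assumption, taken from the example level in row 4.

```python
import statistics

def consensus_metrics(ratings, threshold=0.75):
    """Common Delphi consensus measures for one item on a 1-5 rating scale."""
    n = len(ratings)
    q1, _, q3 = statistics.quantiles(ratings, n=4)   # quartile cut points
    agree_adjacent = sum(r in (4, 5) for r in ratings) / n
    return {
        "median": statistics.median(ratings),    # central tendency (criterion 2)
        "iqr": q3 - q1,                           # dispersion (criterion 1)
        "agree_one_point": ratings.count(5) / n,  # one scale point (criterion 3)
        "agree_adjacent": agree_adjacent,         # adjacent points (criterion 4)
        "consensus": agree_adjacent >= threshold,
    }

# 7 of 8 panellists vote 4 or 5, so the assumed 75% criterion is met
print(consensus_metrics([5, 4, 5, 3, 4, 5, 4, 5]))
```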
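Row 20's first feedback category, the statistical group response, can be sketched the same way. This is a hypothetical illustration, not taken from any reviewed study: it aggregates the previous round's ratings per item into the kind of summary a panel might receive before re-voting.

```python
import statistics

def group_feedback(responses, scale=range(1, 6)):
    """Summarize each item's ratings from the previous round for the panel."""
    feedback = {}
    for item, ratings in responses.items():
        q1, _, q3 = statistics.quantiles(ratings, n=4)
        feedback[item] = {
            "n": len(ratings),
            "median": statistics.median(ratings),
            "iqr": q3 - q1,
            # full distribution, so panellists see where the group stands
            "counts": {point: ratings.count(point) for point in scale},
        }
    return feedback

print(group_feedback({"item_1": [5, 4, 5, 3, 4, 5]}))
```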
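Finally, the stability criterion in row 21 (criterion 3) compares judgments across consecutive rounds. A minimal sketch, assuming stability is declared when per-item medians and interquartile ranges shift by no more than a tolerance; the 0.5 tolerance and the item names are assumptions for illustration, not values from the review.

```python
import statistics

def iqr(ratings):
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    return q3 - q1

def judgments_stable(previous_round, current_round, tol=0.5):
    """True if every item's median and IQR moved by at most `tol` (assumed rule)."""
    for item, current in current_round.items():
        previous = previous_round[item]
        if abs(statistics.median(current) - statistics.median(previous)) > tol:
            return False
        if abs(iqr(current) - iqr(previous)) > tol:
            return False
    return True

round_2 = {"item_1": [4, 5, 4, 4, 5, 3], "item_2": [2, 3, 3, 4, 2, 3]}
round_3 = {"item_1": [4, 5, 4, 5, 5, 4], "item_2": [3, 3, 3, 3, 2, 3]}
print(judgments_stable(round_2, round_3))  # a study could terminate once True
```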