The process of writing a review depends on the type of review you are writing. While there are similarities, there are also differences.
The differences:
- Writing a literature review: A literature review focusses on elucidating a research gap and comparing and contrasting studies that relate to this research gap. This is described in detail in Writing a literature review.
- Writing a scoping review: A scoping review is written to map key concepts and evidence, ascertain the breadth of a field of research and to inform future research. A scoping review might outline a broad research question as a precursor to posing a narrowly-focussed research question in a systematic review later. Typically, however, they outline objectives in an aim statement.
- Writing a systematic review: A systematic review is written to answer a very specific research question. While there are different purposes for writing a systematic review, the question is central.
- Writing a meta-analysis: A meta-analysis uses statistical methods on data from a number of homogeneous quantitative studies in order to arrive at a synthesis of the results of those studies (a minimal sketch of one common pooling calculation follows this list).
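To make the idea of statistical synthesis concrete, the sketch below shows one common pooling calculation, fixed-effect inverse-variance weighting, in which each study's effect estimate is weighted by the inverse of its variance. The function name and the study figures are invented for illustration only; many published meta-analyses use random-effects models or dedicated software instead.

```python
import math

def pool_fixed_effect(estimates, standard_errors):
    """Illustrative fixed-effect (inverse-variance) pooling of study effect estimates.

    Each study's estimate is weighted by 1 / SE^2; all figures used below are invented.
    """
    weights = [1 / se ** 2 for se in standard_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    return pooled, (ci_low, ci_high)

# Hypothetical effect estimates (e.g., standardised mean differences) from three studies
print(pool_fixed_effect([0.30, 0.45, 0.10], [0.12, 0.20, 0.15]))
```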
The key differences between the various review types are provided in more detail on the following pages: Systematic reviews, scoping reviews and meta-analyses and Writing review articles: Review types.
The differences in review types are clear, but what are the similarities when writing a review?
The similarities:
All types of review:
- are planned carefully with either the research question/hypothesis or key concepts/evidence clearly in focus
- involve discrete stages or steps
- compare and contrast primary or secondary research on the issue they are investigating
- are clearly structured with no extraneous sections; everything discussed is tied to the purpose of the review
- will narrow down from broadly related literature to increasingly specific details drawn from the literature. This is sometimes informally called the “eye of the storm” in a review paper
- are written with clarity and economy, using precise, complete sentences. There is no room for sentence fragments, vague statements, imprecision, or “waffle”; avoid redundancy
- use academic writing style, and pay careful attention to unity and coherence in terms of paragraph structure. They require careful drafting and redrafting before editing and proofreading.
- have a singular purpose. They must clearly elucidate either: a) the research gap (literature review); b) the key concepts and evidence (scoping review); c) an answer to a specific research question (systematic review); or d) statistical evidence from a synthesis of homogeneous quantitative studies (meta-analysis).
We look at writing scoping and systematic reviews below. These instructions can also be used for meta-analyses with the addition of the statistical methods used for data analysis.
Review Structure
A review typically follows the proforma structure provided by JBI, PRISMA, or Cochrane (Tricco et al., 2018). Review writers may use the proforma as mere guidelines and not follow them prescriptively. If publishing, remember to carefully check the journal you intend to submit to for author guidelines and look at other reviews they have previously published.
The proforma is essentially the same format as an empirical or scientific report, otherwise known as an AIMRaD report after its main sections (Abstract, Introduction, Methods, Results and Discussion). This is the same structure as a typical scientific research paper. However, there are inclusions that are particular to reviews in specific sections. These are outlined below.
Abstract
A Structured Abstract is usually written for a scoping or systematic review. This is an abstract written with section headings for each part. See Abstracts, structured abstracts and executive summaries for how to write a structured abstract.
Introduction
The Introduction to a review proceeds in the same manner as for any scholarly paper. It first a) introduces the general topic, then b) provides background information (possibly with definitions of terms included), then c) narrows the focus to a specific issue or problem. The thesis statement or main claim can be added here. Finally, d) the Introduction outlines the areas or main points to be discussed in the paper. These steps are described on our helpsheet: Writing an Introduction. In addition, a rationale and statement of objectives, or aim statement, is included.
Rationale: This outlines the justification for the review. Make your rationale clear and accessible for the non-expert. It should be a clear statement about why your review needs to be done. The rationale is best made by articulating the research gap (see underlined examples below), although a rationale can be clearly made without doing so. In a systematic review, the rationale may be to examine new methodologies, address a research gap, or inform practice.
- ‘Concern needs to be directed at non-verbal communication [NVC] and its different modalities as critical contributors to high quality care which plays a significant role in demonstrating respect for patients, fostering empathy and trusting provider-patient relationships’ (Wanko Keutchafo et al., 2020, Background section).
- ‘A scoping review was conducted in 2013 on social media use by health professionals and trainees, rather than health researchers. However, there is no evidence synthesis of the ways in which health researchers, as a specific population group, are using social media across platforms, and there remains uncertainty about how to best harness the potential of this medium in health research’ (Dol et al., 2019, Introduction section).
- ‘By focusing on actual behaviour change and patient outcomes in emergency situations, this review provided an opportunity to identify the essential components of effective emergency training. If this can be achieved, then the factors that are required to deliver the best possible training can be incorporated into emergency training courses to facilitate improvement in patient and organisational outcomes across specialities’ (Merriel et al., 2019, “Why it is important” section).
- ‘The findings from this meta-analysis can provide guidance to policymakers about the efficacy of single-track YRE as an intervention to increase student achievement, and for which schools and students it is most likely to be effective’ (Fitzpatrick & Burns, 2019, “Why it is important” section).
Objectives: This section requires an aim statement. The aim statement is not a research question, though it may lead to one. It is a statement as to what your review will accomplish. Use the word “aim(s)” or “objective”: ‘This review aims to …’, ‘The aim of this review is to …’, ‘The objective of the scoping study …’ and so on. Leave no room for doubt as to the precise purpose of the review. JBI advises that the objectives should capture the core elements of the inclusion criteria (Peters et al., 2020).
- ‘The primary aim of the study was to identify the type of NVC strategies used by nurses to communicate with older adults’ (Wanko Keutchafo et al., 2020, Background section).
- ‘The objective for this scoping review was to map the literature on the ways in which health researchers report on their use of social media from the existing literature’ (Dol et al., 2019, Introduction section).
- ‘To assess the effects of interactive training of healthcare providers on the management of life‐threatening emergencies in hospital on patient outcomes, clinical care practices or organisational practices, and to identify essential components of effective interactive emergency training programmes’ (Merriel et al., 2019, Objectives section).
- ‘The main objective is to identify, across studies published in the post-NCLB era, how single-track YRE affects student achievement. Along with this, we investigate the effect of YRE on different subgroups of students’ (Fitzpatrick & Burns, 2019, Research questions section).
The research question(s): Some scoping reviews raise a broadly-defined research question. Typically, however, a scoping review will outline an aim statement rather than a research question. If included, the research question follows the statement of aim in the Introduction. This is a question that you hope to answer by the end of the review. See Designing research questions. The research question(s) are kept quite general in a scoping review but can be followed with sub-questions relating to different concepts in the review, such as population groups or outcome measures. This ensures the scoping review meets its aim of determining how a field of research can be pursued systematically in a later systematic review. Here is an example of a general research question suitable for a scoping review:
- ‘The main question for this review was: What is the evidence of nonverbal communication [NVC] between nurses and older adults?’ (Wanko Keutchafo et al., 2020, Research questions section).
Compare this with the highly refined research questions used in a systematic review. Examples below:
- What is the estimated effect of single-track YRE for math achievement and for reading achievement?
- What is the effect size (of math and reading achievement) for only low-income students and for only minority students?
- What is the relationship between characteristics of YRE (calendar structure, duration of the longest remaining break) and the effect size estimate? (Fitzpatrick & Burns, 2019, Research questions section).
A scoping review might have a research question; a systematic review must have one. In a systematic review the research question is clearly defined and narrowly focussed. This is designed to delimit the aims of the review and ensure that all relevant evidence surveyed is illuminated during the review process. The role of a systematic review is ‘to collate evidence that fits pre-specified eligibility criteria in order to answer a specific research question’ (Chandler et al., 2022, para. 1). Importantly, the research question should be researchable, i.e., there needs to be a means to answer the question (it cannot be overly ambitious, it should be time-bound and limited in scope, and there should be access to suitable data).
Establishing a research question is best done by first establishing a topic, researching the topic and finding a sub-theme, narrowing down the theme, and turning the narrow and focussed theme into a question. Note that a ‘lukewarm’ topic is better than a ‘hot’ or ‘cold’ topic, and topics can also be refined by using SMART goals or a framework such as PICO. Designing a research question is further described in Designing a research question and Writing review articles: Research questions.
The Introduction should finish with a statement about the searches that were conducted for previous reviews on the topic, including the sources searched (e.g., Cochrane, JBI, Campbell Library etc.), e.g., ‘Previous reviews consulted … This review … [how your review differs]’.
Methods
Protocol and registration: A protocol statement makes clear if a protocol already exists and makes note of its registration number, if applicable. If using an existing protocol in the literature, be sure to acknowledge and make explicit reference to it. The explicit use of another protocol helps to make research advances clear when a researcher is using a similar methodological protocol as a previous study. If the protocol is new it might need to be registered to facilitate future research in the same area. Scoping review protocols are often registered with the Center for Open Science, FigShare or at the systematic review register of the JBI. Systematic review protocols are often registered with Cochrane, PROSPERO or Campbell.
- ‘The protocol for this review is available at Fitzpatrick & Burns, 2017’ (Fitzpatrick & Burns, 2019, Types of studies section).
- ‘A scoping review protocol was created to guide the process and is available from the corresponding author upon request. This paper adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for scoping reviews’ (Dol et al., 2019, Overview section).
Variations on this language are, of course, possible. The important thing is that if a previous protocol is being followed it should be acknowledged formally. Some reviews link directly to the protocol.
Eligibility criteria: This states the eligibility criteria for the literature sought, as shown above. This can be done using the PCC or PICO framework or in other ways. The types of evidence included must be mentioned, along with justification for any limits to the source type. Eligibility criteria might also be mentioned briefly and the details relegated to an appendix or table.
- ‘Articles were determined eligible for inclusion if they discussed the use of social media by health researchers, including but not limited to use of social media for recruitment, data mining, social media initiatives or campaigns, hashtag communities, and journal clubs. Articles could be from health researchers at any stage of their career (trainee to faculty member) and across any types of health research (policy, services, outcomes, medical, and basic)’ (Dol et al., 2019, “Inclusion and exclusion” section).
- ‘We included randomised trials and cluster‐randomised trials investigating training interventions where there was the comparison of interactive training and no or standard training’ (Merriel et al., 2019, Types of studies section).
- ‘Studies must be of K-12 schooling (students). Additionally, we consider studies of whole schools or of only regular-education students (who are in some cases the only students for whom achievement data are available), but not any studies of special education students’ (Fitzpatrick & Burns, 2019, Types of participants section).
Information sources: This describes the databases used with coverage dates.
- ‘A range of sources were used to ensure a comprehensive coverage of the literature. An initial search was conducted in August 2017, repeated and finalized in November 2019. The search made use of the following databases: …’ (Wanko Keutchafo et al., 2020, Database searching section).
- ‘An experienced information specialist (RP) developed comprehensive search strategies for 6 electronic citation databases: MEDLINE (OvidSP, 1946 to May 2018), EMBASE (Elsevier, 1947 to May 2018), CINAHL (EBSCOHost, 1971 to May 2018), PsycINFO (EBSCOHost, 1967 to May 2018), ERIC (ProQuest, 1966 to May 2018), and Web of Science Core Collection (Clarivate Analytics, 1900 to May 2018)’ (Dol et al., 2019, Search strategy section).
- ‘We also searched the following trial registries on 11 March 2019: World Health Organization International Clinical Trials Registry Platform (WHO ICTRP); US National Institutes of Health Ongoing Trials Register (ClinicalTrials.gov). We used the sensitivity and precision‐maximising filter for retrieving randomised trials from MEDLINE and Embase as recommended in the Cochrane Handbook for Systematic Reviews of Interventions (Higgins, 2011), which we adapted for the other databases’ (Merriel et al., 2019, Electronic searches section).
Search strategy conducted: This section outlines the search criteria and limits placed on the search for the study. Some reviews include a full search strategy as an appendix or figure.
- ‘The search terms for this review originated from indexed subject headings, keywords of relevant studies, that recurred repetitively, and the Medical Subject Headings (MeSH) terms. The term ‘nonverbal communication’ was used…‘ (Wanko Keutchafo et al., 2020, Search strategy section).
- ‘We include a database search log in an online appendix to this review. This log contains, for each database that was searched, the terms, phrases, and Boolean operators that were used to identify relevant studies; fields that were searched; and restrictions or filters that were used. The log also includes comments on the search strategy used for each database to describe any database-specific procedures that were used to identify studies. Finally, the log indicates the number of records that were retrieved from each database along with the number of full-text studies that were downloaded from this pool’ (Fitzpatrick & Burns, 2019, Electronic searches section).
- ‘Our general/starting-point search terms for this meta-analysis include those used by Cooper et al. (2003), augmented by terms used in pertinent research published after that meta-analysis. The basic form of the search terms is: “year-round school*” or “year-round education” or (school AND (“alternative calendar” or “modified calendar” or “balanced calendar”) or (“year-round calendar” AND school). We modified the precise terms, phrases, and Boolean operators to take advantage of the search features, index terms identified in the resource’s thesaurus, and tools within each of 22 specific search/retrieval resource’ (Fitzpatrick & Burns, 2019, Electronic searches section).
Selection of sources of evidence: This section outlines how the sources were selected, i.e., the criteria used and how disputes were resolved.
- ‘The titles were reviewed against the eligibility criteria by EW. This initial search was monitored, exported into EndNote X9 reference manager, for abstract and full text screening by EW. The duplicated studies were deleted, followed by independent reviewing of the abstracts by EW and JK. Studies deemed ‘unclear’ were advanced to the subsequent screening stage. Assistance from the study university library services was requested when full texts could not be retrieved from the databases and five full texts were provided’ (Wanko Keutchafo et al., 2020, Study selection section).
- ‘Two review authors (of AM, JF, and KB) independently screened all titles and abstracts for eligibility. We retrieved the full‐text articles for all studies deemed by any review author to be potentially eligible. Two review authors (of AM, JF, and KB) assessed the full‐text articles against the inclusion criteria. Any disagreements between the two review authors were resolved by discussion with the review team’ (Merriel et al., 2019, “Selection” section).
Data charting process: Sometimes referred to as ‘data extraction’, this section outlines how the data were charted from the included sources of evidence, whether this was done independently or in duplicate, and notes any processes of confirmation that might have been used. Importantly, this section tabulates or otherwise visually presents how the data were selected.
- ‘Microsoft Excel was utilized for this stage. The collected data points were author(s), title, publication year, country of first author’s affiliated university, research setting, purpose, participant demographics, research methods, measures, interventions, key findings, and limitations. The authors, participants, measures, interventions, and findings are summarized in Table 1. Aggregate data have been presented in the results section. As the first author compiled the data for Table 1, 12 articles were removed from the study because they were not primary research (n = 3), measured stereotype and not SF (n = 5), did not isolate PA from other activities (n = 1), did not include a PA component (n = 2), or did not include a SF component (n = 1). The discarded articles were approved by authors two and five before the analysis was complete’ (Reinders et al., 2019, Chart the data section).
- ‘Two review authors (of AM, JF, and KB) independently extracted data from each study onto a data collection form based upon the Cochrane Effective Practice and Organisation of Care (EPOC) Group data collection checklist (EPOC 2013). Review authors (of AM, JF, and KB) piloted the form and ensured that it was fit for purpose and that there was consistency of approach. We refined the form as we progressed in the data extraction process by adding further fields or categories to the existing fields’ (Merriel et al., 2019, “Data extraction” section).
Data items: This section explains what data is obtained from the studies reviewed. It needs to include a summary of the results based on the inclusion criteria, for example PCC, if one is using it.
- ‘Extracted data included bibliographic details, country and setting, aim/objective, study design, targeted population, nurses’ nonverbal strategies used while communicating with older adults, older adults’ interpretation of nurses’ nonverbal behaviors, and relevant outcomes of interest’ (Wanko Keutchafo et al., 2020, Data extraction section).
- ‘We extracted the student outcome data needed for calculating the effect size(s) from each study. To consider our second research question, we recorded two independent variables of interest: calendar structure and the duration of summer break. For each study, we recorded standard information on the study and report itself. This included the report author, year of publication or release, published/unpublished status, and the matching protocol used to identify the comparison school(s). For the treatment schools examined, this included the state in which the schools were located, years of student testing data included, and the type of score used for the outcome measurement. We also recorded sample/student characteristics associated with each estimate’ (Fitzpatrick & Burns, 2019, Student outcomes section).
Critical appraisal of individual sources of evidence: Scoping reviews do not require critical appraisal of the evidence included, as the purpose is generally to map what evidence exists, regardless of the quality. However, some may include this step if it is relevant to the objectives. If critical appraisal is included, a rationale of how the appraisal aligns with the objectives must be included, as well as the approach used, such as tools or checklists, the process, number of reviewers and how disputes were resolved, and how the findings from the appraisal were used. This section is sometimes titled ‘Quality Appraisal’ and consists of a statement showing how the evidence sources were treated and discarded as appropriate. This is based on the exclusion criteria. A scoring system might be used in this section.
- ‘Of the 22 included studies, 16 studies underwent methodological quality assessment using the MMAT version 2018 [reference omitted]. The remaining six [references omitted] were excluded from the quality appraisal because they were not primary studies. The 16 studies which underwent methodological quality assessment showed high methodological quality and scored between 80 and 100%. Of these studies, 15 studies [references omitted] scored 100%, and one [references omitted] scored 80%’ (Wanko Keutchafo et al., 2020, Quality of evidence section).
Systematic reviews must include quality appraisal and assess the risk of bias. These are often conducted using standard tools, which must be described. State who assessed each study and how disputes were resolved. Explain any system used to rank the studies. Any other potential source of bias, and its relative risk, needs to be explained and resolved.
- ‘We used the EPOC ‘Risk of bias’ tool to assess the risk of bias (EPOC, 2015). The areas of bias addressed by the tool cover the domains outlined in the Cochrane Handbook for Systematic Reviews of Interventions (Higgins, 2011). Two review authors (of AM, JF, and KB) independently assessed the risk of bias of each included study, and assessment was compared and reconciled, if necessary, with the help of an arbitrator. We categorised each study as having low, high, or unclear risk of bias using the EPOC ‘Risk of bias’ tool (EPOC, 2015). Any disagreements were resolved by discussion or by consulting the senior review author’ (Merriel et al., 2019, “Assessment of risk of bias” section).
- ‘Examining the studies included in this meta-analysis revealed two potential sources of bias in our results: publication bias and bias arising from the internal validity of included studies. While publication bias is a concern in any meta-analysis, we argue that the risk of publication bias in this review is low because the majority of studies in the final sample are unpublished dissertations and reports’ (Fitzpatrick & Burns, 2019, “Assessment of risk of bias” section).
Measures of effect: Systematic reviews and meta-analyses will have additional sections on measures of treatment effect, which examine the effect estimates and confidence intervals of the included studies (a worked sketch of an effect estimate with a confidence interval is given after the excerpt below).
- ‘From each study we collected the outcomes relevant to this review, regardless of whether they were the primary outcome for each individual study or not. We extracted the effect estimate and confidence intervals of the intervention from the data provided in the publication. We were unable to perform a meta‐analysis due to the heterogeneity of outcomes reported in the included studies. We presented a structured synthesis of the results as reported by the authors’ (Merriel et al., 2019, “Measures of treatment” section).
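As an illustration of what an effect estimate with a confidence interval looks like in practice, the sketch below computes a standardised mean difference (Cohen's d) with an approximate 95% confidence interval from group summary statistics. The function name and all figures are hypothetical; in a review, such estimates are extracted from, or calculated for, each included study.

```python
import math

def smd_with_ci(m1, sd1, n1, m2, sd2, n2):
    """Illustrative standardised mean difference (Cohen's d) with an approximate 95% CI.

    Inputs are group means, standard deviations and sample sizes; the values below are invented.
    """
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))  # approximate SE of d
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical intervention vs control group summary data
print(smd_with_ci(m1=78.0, sd1=10.0, n1=40, m2=72.0, sd2=11.0, n2=42))
```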
Other typical sections in systematic reviews and meta-analyses may include assessments of heterogeneity, reporting bias, units of analysis, certainty assessments and missing data; one common way of imputing missing standard deviations is sketched after the examples below.
- ‘Studies that did not report all data necessary to calculate an effect size were handled in one of three ways. First, authors were contacted in order to seek supplemental information to allow for standard calculations. For a subset of studies whose authors could not provide additional data, the N and mean but not SD figures were provided. However, SDs can be imputed for effect size calculations with continuous outcomes’ (Fitzpatrick & Burns, 2019, Dealing with missing data section).
- ‘Due to the nature of this review, we expected significant statistical heterogeneity between studies. In addition, it was difficult to anticipate a priori the sources of heterogeneity. We therefore extracted all important sources of heterogeneity in the data abstraction form, which included methodological and contextual aspects of the included studies’ (Merriel et al., 2019, Assessment of heterogeneity section).
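The Fitzpatrick and Burns excerpt above notes that missing standard deviations (SDs) can be imputed for continuous outcomes. A minimal sketch of two common conversions, assuming a reported standard error or 95% confidence interval of a mean and a large-sample normal approximation, is shown below; the function names and figures are invented for illustration.

```python
import math

def sd_from_se(se, n):
    """Recover a standard deviation from a reported standard error of the mean."""
    return se * math.sqrt(n)

def sd_from_ci(ci_low, ci_high, n):
    """Approximate a standard deviation from a reported 95% CI of a mean (large-sample assumption)."""
    return math.sqrt(n) * (ci_high - ci_low) / 3.92

# Hypothetical reported values for a single study arm
print(sd_from_se(se=1.8, n=50))
print(sd_from_ci(ci_low=68.5, ci_high=75.5, n=50))
```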
Results
See The Results section for an explanation of how to write a Results section.
Search results: This describes how many sources were found and chosen. A description of the search process, along with a flow chart, presents how sources were selected; the PRISMA flow diagram, or a variation of it, is generally used to visually display the selection process in this section.
- ‘We identified 3261 references from electronic database searching and handsearching of reference lists after de‐duplication. Full‐text screening of 75 records resulted in 11 studies being included in the review. Three studies were ongoing. The PRISMA flow diagram is shown in Figure 1’ (Merriel et al., 2019, Results of the search section).
- ‘Two hundred and fifty-seven (257) studies met the eligibility criteria following the deletion of 478 duplicates from the 735 studies identified at the title screening stage (see Fig. 1)’ (Wanko Keutchafo et al., 2020, Results section).

Characteristics of sources of evidence/Critical appraisal within sources of evidence/Results of individual sources of evidence/Synthesis of results: Some reviews have separate sections here, and some provide a general Results section that outlines, compares and contrasts the findings from the review of the literature. Importantly, interpretation of the findings is not provided here; it is relegated to the Discussion section, as is typical of the experimental report genre. If separate sections are used, the Results section might first a) compare and contrast the articles obtained in terms of general trends, and then b) compare and contrast them according to specific characteristics such as the PCC or PICO elements used.
General trends:
- ‘The study identified 414 unique articles across 278 different journals. The number of articles published on health researchers’ use of social media has increased significantly over time (see Figure 2), ranging from 1 publication in 2007 to 88 in 2017’ (Dol et al., 2019, Article characteristics section).
Specific characteristics:
- ‘A third of the articles included were nonspecific and covered health broadly (33.1% [137/414]), touching on 61 different health areas’ (Dol et al., 2019, “Area of health” section).
Study type:
- ‘Atypically, the majority of the studies in Tables 1 and 2 are dissertations. Published works, perhaps in order to increase their sample size to make statistically significant findings easier to achieve, tended to look at mixed single- and multitrack YRE. As a result, excluding mixed studies resulted in a final sample with three reports, two conference presentations, five articles, and 20 dissertations’ (Fitzpatrick & Burns, 2019, Included studies section).
There may also be sections describing and summarising the results of the included studies according to the framework characteristics used, for example comparing the participants in the included studies, describing the interventions used, and the outcomes reported.
- ‘We identified 11 randomised studies for inclusion in this review. Four focused exclusively on obstetric training (Nielson, 2007; Riley, 2011; Sorensen, 2015; Fransen, 2017), three on obstetric and neonatal training (Nisar, 2011; Walker, 2014; Gomez, 2018), two exclusively on neonatal training (Opivo, 2008; Xu, 2014), one on trauma (Knudson, 2008), and one on general adult resuscitation (Weidman, 2010). There were approximately 2000 healthcare workers randomised to different forms of training in these studies. Outcome data were collected on over 300,000 patients’ (Merriel et al., 2019, Included studies section).
- ‘Given that summer learning loss is most evident among students from disadvantaged groups, the estimated effects for low‐income and minority students are unexpectedly about the same magnitude or smaller than for the full sample, and are not statistically significant’ (Fitzpatrick & Burns, 2019, “Effect by student” section).
Systematic reviews will also report on additional areas, such as the risk of bias, results of syntheses and analyses, potential reporting biases, and certainty of evidence. This could be presented visually in a table, or in a narrative description. These elements need to be justified, and any tools or statistical methods used should be explained.
See the attached annotated scoping review for highlighted sections showing parts of a representative scoping review.
Discussion
The Discussion section of a scoping review is written in the same way as a Discussion in scientific reports and research papers. This is described in detail in The Discussion section.
Other elements in the discussion section are:
Strengths and limitations: A systematic review includes sections on the limitations of the evidence and review process. For example, the studies found may have small sample sizes or a high risk of bias, or the authors may have only searched for English language material. The information given in the assessment of confidence section can be used to justify this.
- ‘Some limitations should be noted regarding the quality of this scoping review. While the first two authors took many steps to ensure all relevant articles were included in the review, it is possible some studies were missed due to the selection of databases and search terms. Second, the first two authors each conducted title reviews independently, meaning if one author determined an article was irrelevant based on the title, the other author would not have seen it’ (Reinders et al., 2019, Limitations section).
Study implications: As a scoping review does not typically conduct a quality or risk of bias assessment, implications for policy or practice cannot be given. In a systematic review, by contrast, these implications need to be included to explain to stakeholders what action they need to take based on the review findings. All review types can make explicit recommendations for further research.
- ‘Logically, it seems important to train staff for in‐hospital‐based emergencies. However, due to the heterogeneity of outcomes within this review, it was not possible to provide firm conclusions as to whether interactive training works. Having said this, the structured synthesis of the evidence showed that most of the studies included in this review reported improvements in patient, staff, or organisational outcomes. The certainty of the evidence for these results is very low’ (Merriel et al., 2019, Implications for practice section).
Further material
Conclusion and/or recommendations: In a systematic review, the study implications and recommendations are often included at the end of the Discussion and form the conclusion. A scoping review usually has a separate Conclusion section. The conclusions need to match the review objectives and questions, summarising what was found.
- ‘In summary, this scoping review provides insight into the relationship between PA and SF in young people with ASD. From the current literature, PA may be related to the social interactions and behaviors of young people with ASD. This review has summarized the relevant literature regarding PA and SF and suggests future directions for research. It has become evident that PA is a viable intervention option to target some of the primary concerns associated with ASD. Further, interventions educating young people with ASD about how to engage in PA may enhance quality of life through increased PA participation and diversified social relationships’ (Reinders et al., 2019, Conclusion section).
Reference list: Check the author guidelines for the journal you plan to publish in to find the required referencing style. For example, the JBI Evidence Synthesis journal requires references in Vancouver, while other journals may have developed their own particular style. It is a good idea to use EndNote to manage references, as it is easy and quick to alter the reference style in a document.
Additional files/Appendixes: It is now encouraged to make data freely available for others to use. This aids research integrity, accountability, and reproducibility. For a review, this may include the data extracted from studies, the full search strategy, a list of excluded sources, clean datasets in a form able to be re-analysed, metadata or analytic software code. It is important to check copyright restrictions as well; for example, records exported from databases may include copyright material and therefore cannot be shared. These files can be appended to the review text, or shared on a site such as FigShare and linked to.
- ‘The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2019.00120/full#supplementary-material’ (Reinders et al., 2019, Supplementary material section).
Reference List
- Chandler, J., Cumpston, M., Thomas, J., Higgins, J. P. T., Deeks, J. J., & Clarke, M. J. (2022). Introduction. In J. P. T. Higgins, J. Thomas, M. Cumpston, T. Li, M. J. Page, & V. A. Welch (Eds.), Cochrane handbook for systematic reviews of interventions (Version 6.3). Cochrane. https://training.cochrane.org/handbook/current/chapter-i#section-i-1
- Dol, J., Tutelman, P. R., Chambers, C. T., Barwick, M., Drake, E. K., Parker, J. A., Parker, R., Benchimol, E. I., George, R. B., & Witteman, H. O. (2019). Health researchers’ use of social media: Scoping review. Journal of Medical Internet Research, 21(11), Article e13687. https://doi.org/10.2196/13687
- Fitzpatrick, D., & Burns, J. (2019). Single-track year-round education for improving academic achievement in U.S. K-12 schools: Results of a meta-analysis. Campbell Systematic Reviews, 15(3), e1053. https://doi.org/10.1002/cl2.1053
- Merriel, A., Ficquet, J., Barnard, K., Kunutsor, S. K., Soar, J., Lenguerrand, E., Caldwell, D. M., Burden, C., Winter, C., Draycott, T., & Siassakos, D. (2019). The effects of interactive training of healthcare providers on the management of life‐threatening emergencies in hospital. Cochrane Database of Systematic Reviews. https://doi.org/10.1002/14651858.CD012177.pub2
- Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., McGuinness, L. A., Stewart, L. A., Thomas, J., Tricco, A. C., Welch, V. A., Whiting, P., & Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71
- Peters, M. D. J., Godfrey, C. M., McInerney, P., Munn, Z., Tricco, A. C., & Khalil, H. (2020). Scoping reviews. In E. Aromatis & Z. Munn (Eds.), JBI manual for evidence synthesis. The Joanna Briggs Institute. https://doi.org/10.46658/JBIMES-20-12
- Reinders, N. J., Branco, A., Wright, K., Fletcher, P. C., & Bryden, P. J. (2019). Scoping review: Physical activity and social functioning in young people with autism spectrum disorder. Frontiers in Psychology, 10, Article 120. https://doi.org/10.3389/fpsyg.2019.00120
- Tricco, A. C., Lillie, E., Zarin, W., O’Brien, K. K., Colquhoun, H., Levac, D., Moher, D., Peters, M. D. J., Horsley, T., Weeks, L., Hempel, S., Akl, E. A., Chang, C., McGowan, J., Stewart, L., Hartling, L., Aldcroft, A., Wilson, M. G., Garritty, C., Lewin, S., Godfrey, C. M., Macdonald, M. T., Langlois, E. V., Soares-Weiser, K., Moriarty, J., Clifford, T., Tunçalp, Ö., & Straus, S. E. (2018). PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and explanation. Annals of Internal Medicine, 169(7), 467-473. https://doi.org/10.7326/M18-0850
- Wanko Keutchafo, E. L., Kerr, J., & Jarvis, M. A. (2020). Evidence of nonverbal communication between nurses and older adults: A scoping review. BMC Nursing, 19, Article 53. https://doi.org/10.1186/s12912-020-00443-9
For a downloadable helpsheet, see Writing a Scoping Review or a Systematic Review. See also:
- Abstracts, Structured Abstracts and Executive Summaries
- Designing a Research Question
- The Introduction and the research gap
- Writing a Literature Review
- Systematic Reviews, Scoping Reviews, and Meta-analyses
- The Methodology, Methods and Procedure Sections
- The Results section
- Presenting data
- The Discussion Section