In Step 7, you will review the full text of included articles to collect information about each study in a table format (data extraction), summarizing the studies and making them easier to compare.
For accuracy, two or more people should extract data from each study. This process can be done by hand or by using a computer program.
If you reach the data extraction step and choose to exclude articles for any reason, update the number of included and excluded studies in your PRISMA flow diagram.
Covidence allows you to assemble a custom data extraction template, have two reviewers conduct extraction, then send their extractions for consensus.
A librarian can advise you on data extraction for your systematic review.
In this step of the systematic review, you will develop your evidence tables, which give detailed information for each study (perhaps using a PICO framework as a guide), and summary tables, which give a high-level overview of the findings of your review. You can create evidence and summary tables to describe study characteristics, results, or both. These tables will help you determine which studies, if any, are eligible for quantitative synthesis.
Data extraction requires a lot of planning. We will review some of the tools you can use for data extraction, the types of information you will want to extract, and the options available in the systematic review software used here at UNC, Covidence.
The Cochrane Handbook and other sources strongly recommend that at least two reviewers extract data independently to reduce the number of errors.
Each type of data extraction tool is described below, along with information on using it and what UNC has to offer.
Most systematic review software tools have data extraction functionality that can save you time and effort. Here at UNC, we use a systematic review software called Covidence. You can see a more complete list of options in the Systematic Review Toolbox. Covidence allows you to create and publish a data extraction template with text fields, single-choice items, section headings and section subheadings; perform dual and single reviewer data extraction; review extractions for consensus; and export data extraction and quality assessment to a CSV with each item in a column and each study in a row.
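To give a concrete sense of the wide, one-row-per-study layout such an export uses, here is a minimal sketch in Python. The column names and study details are invented for illustration; they are not Covidence's actual export headers.

```python
import csv
import io

# Hypothetical Covidence-style export: one row per study,
# one column per extraction item (headers are assumptions).
csv_text = """Study,Population,Sample Size,Intervention,Outcome
Smith 2020,Adults with T2D,120,Exercise program,HbA1c
Lee 2021,Older adults,85,Dietary counseling,HbA1c
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
print(len(rows))               # 2 studies
print(rows[0]["Sample Size"])  # 120
```

Because each study occupies one row, the export can be opened directly in a spreadsheet or loaded into analysis software without reshaping.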
You can also use spreadsheet or database software to create custom extraction forms. Spreadsheet software (such as Microsoft Excel) has functions such as drop-down menus and range checks that can speed up the process and help prevent data entry errors. Relational databases (such as Microsoft Access) can help you extract information in different categories, such as citation details, demographics, participant selection, intervention, and outcomes.
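As a rough illustration of what a range check does, the sketch below validates entries against allowed intervals before they are recorded. The field names and limits here are assumptions chosen for the example, not a standard.

```python
# Illustrative range checks for a hand-built extraction form.
# Field names and allowed ranges are assumptions, not a standard.
RANGES = {
    "sample_size": (1, 100_000),
    "mean_age": (0, 120),
    "followup_months": (0, 600),
}

def check_entry(field, value):
    """Return True if value falls within the allowed range for field."""
    lo, hi = RANGES[field]
    return lo <= value <= hi

print(check_entry("mean_age", 54))    # True
print(check_entry("sample_size", 0))  # False: likely a data entry error
```

Spreadsheet data-validation rules do the same job interactively, flagging out-of-range cells as they are typed.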
RevMan offers collection forms for descriptive information on population, interventions, outcomes, and quality assessments, as well as for analysis data and forest plots. The form elements cannot be changed, and data must be entered manually. RevMan is a free software download.
Survey or form tools can help you create custom forms with many different question types, such as multiple choice, drop downs, ranking, and more. Content from these tools can often be exported to spreadsheet or database software as well. Here at UNC we have access to the survey/form software Qualtrics & Poll Everywhere.
In the past, people often used paper and pencil to record the data they extracted from articles. Handwritten extraction is less popular now that electronic tools are widespread. You can record extracted data in electronic tables or forms created in Microsoft Word or another word processor, but this may take longer than many of the methods listed above. Electronic-document and paper-and-pencil extraction should be reserved for small reviews, as larger sets of articles can become unwieldy, and both are more prone to data entry errors than the more automated methods.
There are benefits and limitations to each method of data extraction, so weigh the trade-offs before choosing a tool.
For example, in Covidence you may spend more time building your data extraction form, but save time later in the process because Covidence can automatically highlight discrepancies between extractors for review and resolution. Excel may require less time investment to create an extraction form, but matching and comparing data between extractors may take longer. A more in-depth comparison of the benefits and limitations of each extraction tool appears below.
- Systematic Review Software (Covidence)
- Spreadsheets (Excel, Google Sheets)
- Survey or Form Software (Poll Everywhere, Qualtrics, etc.)
- Electronic documents (Word, Google Docs)
It may help to consult other similar systematic reviews to identify what data to collect or to think about your question in a framework such as PICO.
Helpful data for an intervention question may include details about the population, intervention, comparator, and outcomes.
If you plan to synthesize data, you will want to collect additional information such as sample sizes, effect sizes, dependent variables, reliability measures, pre-test data, post-test data, follow-up data, and statistical tests used.
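For instance, extracted group means, standard deviations, and sample sizes are enough to compute a standardized mean difference (Cohen's d) for later synthesis. A minimal sketch, with invented numbers for illustration:

```python
import math

# Cohen's d (standardized mean difference) from extracted summary
# statistics -- the kind of quantities recorded for synthesis.
# All numbers below are invented for illustration.
def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (m1 - m2) / pooled_sd

d = cohens_d(7.1, 1.2, 60, 7.9, 1.3, 58)
print(round(d, 2))  # -0.64
```

If any of these summary statistics is missing from an article, it is worth recording that gap during extraction so you can contact the authors or exclude the study from quantitative synthesis.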
Extraction templates and approaches should be determined by the needs of the specific review. For example, if you are extracting qualitative data, you will want to extract data such as theoretical framework, data collection method, or role of the researcher and their potential bias.