Time Saving in Literature Reviews with CAPTIS
Celegence’s medical writing teams have been using CAPTIS™ since 2020 and have found that it makes literature reviews noticeably more efficient and more enjoyable. With features such as automatic metadata and full-text PDF retrieval, article tagging, the ability to edit the search strategy at any point during the review, literature reports, and PRISMA diagrams, CAPTIS lets medical writers and reviewers spend their time analyzing content rather than splitting it between manual, mundane data-gathering tasks and analysis. Because the platform takes on the bulk of the data gathering and record keeping, there is no longer any need to maintain multiple Excel sheets and other documents to track a literature review.
The time required for a literature review depends on several parameters, including the number of articles, the type of device, the therapeutic area, and the experience of the reviewer. We wanted to find out which parts of the review process benefit most from the technology, so we put CAPTIS to the test and quantified the time saved (as percentages) by our medical writers conducting a literature review for the same device with CAPTIS versus a traditional manual approach.
Check out our behind-the-scenes look at CAPTIS in our exclusive on-demand video to see how you and your business can benefit from CAPTIS AI.
1. Aim
The aim of this study was to compare the time taken for the literature search, article metadata compilation, and review when performed manually versus with CAPTIS.
2. Approach
Five medical writers participated in the study. Literature reviews for five medical devices from different therapeutic areas (Physical Medicine, Orthodontics, Orthopedics, Ophthalmology and Laryngology) were reconducted with and without CAPTIS. PubMed and Google Scholar were the databases leveraged for the literature search.
Table 1: Considerations

Therapeutic Areas:

| Device | Therapeutic Area |
|---|---|
| Device 1 | Ophthalmology |
| Device 2 | Laryngology |
| Device 3 | Orthopedics |
| Device 4 | Physical Medicine |
| Device 5 | Orthodontics |

Inputs:

- Safety and Performance Objectives
- Claims
- IFU
- Old CEP/CER, if any
- Product Description/Specifications
- Search strings used

| Dos | Don’ts |
|---|---|
| Screen a minimum of 200 articles | Utilize varying search strings outside CAPTIS, to eliminate the chance of getting different results |
| Use search strings created within the project deliverable | |
| Predefine the Level 1 screening and Level 2 appraisal criteria | Introduce new elements which are not defined in the deliverables |
| Use a third-party source to download full-text articles | |
Each medical writer was randomly assigned two workflows: CAPTIS, manual, or one of each (see Table 2). The CAPTIS and manual workflows consisted of identical tasks (see list below). The time taken for each task within a workflow was recorded, along with the number of articles processed at each step.
The following tasks were executed for each workflow:
- Literature search
- Article metadata (reference and abstract) retrieval and consolidation
- Pre-processing (deduplication)
- Level 1 (L1) Review: Title and Abstract screening
- Full-text PDF search and save
- Level 2 (L2) Review: Full-text appraisal
Table 2: Workflow Assignments

| Team Member | Workflow 1 | Workflow 2 |
|---|---|---|
| Writer 1 | CAPTIS – PM | CAPTIS – PM |
| Writer 2 | Manual – PM | CAPTIS – GS |
| Writer 3 | Manual – GS | Manual – GS |
| Writer 4 | CAPTIS – GS | CAPTIS – GS |
| Writer 5 | Manual – GS | Manual – PM |

PM: PubMed; GS: Google Scholar
3. Results
A total of 10 workflows (5 manual and 5 CAPTIS) were executed by 5 medical writers. The time taken for article metadata (reference and abstract) retrieval and consolidation, pre-processing (deduplication), L1 title and abstract review, full-text PDF search and save, and L2 full-text appraisal was recorded, along with the number of articles included at each review stage.
Table 3: Observations

| Device | Workflow | # of Hits | Duplicate Articles | De-Duplication (min) | # Missing Abstracts | Processing Time (min) | L1 Screening (min) | Moved to L2 | # Full Texts Downloaded | FT Download (min) | L2 Review (min) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Device 1 | CAPTIS | 201 | 15 | 0 | 2 | 12 | 292 | 66 | 40 | 63 | 325 |
| Device 1 | Manual | 201 | 15 | 9 | 0 | 66 | 236 | 73 | 73 | 84 | 348 |
| Device 2 | CAPTIS | 216 | 17 | 0 | 145 | 32 | 88 | 21 | 21 | 12 | 120 |
| Device 2 | Manual | 216 | 18 | 30 | 215 | 267 | 185 | 59 | 59 | 86 | 125 |
| Device 3 | CAPTIS | 206 | 39 | 0 | 155 | 156 | 84 | 22 | 8 | 8 | 24 |
| Device 3 | Manual | 206 | 29 | 7 | 199 | 156 | 71 | 6 | 6 | 6 | 22 |
| Device 4 | CAPTIS | 214 | 33 | 0 | 0 | 9 | 189 | 2 | 1 | 1 | 10 |
| Device 4 | Manual | 214 | 33 | 6 | 0 | 71 | 49 | 1 | 1 | 1 | 0 |
| Device 5 | CAPTIS | 207 | 29 | 0 | 114 | 104 | 138 | 49 | 25 | 23 | 56 |
| Device 5 | Manual | 207 | 26 | 30 | 207 | 214 | 157 | 35 | 35 | 25 | 107 |
3.1 Article Metadata (reference and abstract) Retrieval and Consolidation
Article metadata retrieval and consolidation was faster with CAPTIS for 4 out of 5 devices, with an average time reduction of 62% across the five devices.
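As a quick cross-check, the 62% figure can be recomputed from the per-device processing times in Table 3. Here is a minimal Python sketch (values transcribed from Table 3; an unweighted average across the five devices is assumed):

```python
# Metadata processing times (minutes) from Table 3, Devices 1-5
captis = [12, 32, 156, 9, 104]
manual = [66, 267, 156, 71, 214]

# Per-device reduction: fraction of manual time saved with CAPTIS
reductions = [1 - c / m for c, m in zip(captis, manual)]
for device, r in enumerate(reductions, start=1):
    print(f"Device {device}: {r:.0%} time reduction")

# Simple (unweighted) average across the five devices -> ~62%
average = sum(reductions) / len(reductions)
print(f"Average reduction: {average:.0%}")
```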
3.2 Pre-processing (Deduplication)
CAPTIS identified a higher number of duplicates than the manual deduplication method in 2 out of 5 devices. Medical writers spent 0 minutes on deduplication in CAPTIS versus 6–30 minutes identifying duplicates manually, depending on the number of hits.
3.3 Full-text Search and Save
CAPTIS automatically searches for and consolidates the full-text PDFs of open-access articles, leaving users a much shorter list of articles whose full texts still need to be found. Doing this manually entails searching for the full-text PDF of every article included in L1, renaming and saving the PDFs that are available, and marking those that need to be purchased.
Searches for full-text articles were done after L1 title and abstract screening. However, because the same set of articles was reviewed independently by two different writers (CAPTIS versus manual), there were slight variations in the number of articles included at L1. To estimate the potential time savings for full-text search and save more accurately, we normalized the number of L1-included articles across the two workflows by treating the manual L1 count as equal to the CAPTIS count. We then extrapolated the manual full-text search-and-save time by dividing the time recorded for this activity by the number of full texts searched, and multiplying the resulting “time per article” value by the adjusted number of articles.
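To make the normalization concrete, here is a minimal Python sketch that reproduces the adjusted manual times and the percentage savings reported in Table 4, using the “Moved to L2” counts and full-text download times from Table 3 (small differences are rounding in the reported figures):

```python
# Values transcribed from Table 3 (Devices 1-5)
manual_ft_time  = [84, 86, 6, 1, 25]   # minutes on full-text search/save, manual workflow
manual_included = [73, 59, 6, 1, 35]   # articles moved to L2, manual workflow
captis_included = [66, 21, 22, 2, 49]  # articles moved to L2, CAPTIS workflow
captis_ft_time  = [63, 12, 8, 1, 23]   # minutes on full-text search/save, CAPTIS workflow

savings = []
for device, (t_man, n_man, n_cap, t_cap) in enumerate(
        zip(manual_ft_time, manual_included, captis_included, captis_ft_time), start=1):
    per_article = t_man / n_man            # manual "time per article"
    adjusted_manual = per_article * n_cap  # manual time normalized to the CAPTIS article count
    saving = 1 - t_cap / adjusted_manual   # fraction of time saved with CAPTIS
    savings.append(saving)
    print(f"Device {device}: adjusted manual {adjusted_manual:.0f} min, savings {saving:.0%}")

print(f"Average savings: {sum(savings) / len(savings):.0%}")  # ~45%
```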
CAPTIS users consistently spent less time finding full texts and flagging articles for purchase, since the platform automatically downloads all freely available full texts. On average, users saved 45% of the time spent on this activity when using CAPTIS.
Table 4: Percentage time savings for full-text search and save

| Device | Manual Time (min) | CAPTIS Time (min) | % Savings with CAPTIS |
|---|---|---|---|
| Device 1 | 76 | 63 | 17% |
| Device 2 | 31 | 12 | 61% |
| Device 3 | 22 | 8 | 64% |
| Device 4 | 2 | 1 | 50% |
| Device 5 | 35 | 23 | 34% |
3.4 Level 1 and Level 2 Reviews
Level 1 review (title and abstract screening) and Level 2 full-text appraisal durations were not compared between the CAPTIS and manual processes, since these two steps constitute the “analysis” portion of the literature review. As expected, time variations were observed in these steps because the same dataset was reviewed by two different reviewers: screening and appraisal are subjective tasks, and the time they take varies with each reviewer’s experience and knowledge of the therapeutic area in question.
4. Extrapolations
Let’s look at the average percentage of time each task took in the manual review process.
With reductions of 62%, 100%, and 45% (as calculated in the sections above) applied to article metadata (reference and abstract) retrieval and consolidation, deduplication, and full-text PDF search and save respectively (the non-analysis components of the literature review), we saw a reduction of 62% across non-analysis tasks, which translates to a reduction of approximately 28% on the literature review as a whole.
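One way to see how the per-task reductions roll up into the 62% and 28% figures is to weight each task’s reduction by its share of the total manual review time. The following Python sketch, using the manual task-time totals from Table 3 (summed across the five devices), lands very close to the reported numbers; the exact weighting used in the study may differ slightly:

```python
# Manual task-time totals (minutes) summed across the five devices in Table 3
manual_minutes = {
    "metadata":   66 + 267 + 156 + 71 + 214,   # retrieval & consolidation = 774
    "dedup":       9 +  30 +   7 +  6 +  30,   # deduplication            =  82
    "full_text":  84 +  86 +   6 +  1 +  25,   # full-text search & save  = 202
    "l1_review": 236 + 185 +  71 + 49 + 157,   # L1 screening (analysis)  = 698
    "l2_review": 348 + 125 +  22 +  0 + 107,   # L2 appraisal (analysis)  = 602
}

# Reductions reported in the sections above; analysis tasks assumed unchanged
reductions = {"metadata": 0.62, "dedup": 1.00, "full_text": 0.45,
              "l1_review": 0.0, "l2_review": 0.0}

saved = sum(manual_minutes[t] * reductions[t] for t in manual_minutes)
non_analysis = sum(manual_minutes[t] for t in ("metadata", "dedup", "full_text"))
total = sum(manual_minutes.values())

print(f"Non-analysis reduction: {saved / non_analysis:.0%}")  # ~62%
print(f"Overall reduction:      {saved / total:.0%}")         # ~28%
```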
5. CAPTIS Time Saving Study Conclusion
CAPTIS utilization resulted in a time reduction of 62% across non-analysis tasks (article metadata (reference and abstract) retrieval and consolidation, deduplication, and full-text PDF search and save), translating to a reduction of approximately 28% on the literature review as a whole.
6. Additional Possible Conclusions
We compared time savings with benchmark values for manual tasks. We assumed around 400 articles as the initial number of articles retrieved, with a 25% inclusion rate (i.e., 100 articles passed L1 title and abstract screening). We further assumed 1.5 min/article for manually capturing article metadata, 0.5 min/article for deduplication and 5 min/article for full-text search and save. These benchmark values assume a realistic normal-paced working day for a typical medical writer. With time savings of 62%, 100% and 45% applied for article metadata (reference and abstract) retrieval and consolidation, deduplication and full-text PDF search and save activities, respectively, we can expect savings of 13.28 hours on the non-analysis components of the literature review.
Table 5: Additional Possible Conclusions

| Task | Manual Benchmark per Article (min) | No. of Articles Assumed | Total Manual Time (min) | Time Savings with CAPTIS | Time Savings (min) | Time Savings (hrs) |
|---|---|---|---|---|---|---|
| Article metadata retrieval | 1.5 | 400 | 600 | 62% | 372 | 6.20 |
| Deduplication | 0.5 | 400 | 200 | 100% | 200 | 3.33 |
| Full-text search and save | 5 | 100 | 500 | 45% | 225 | 3.75 |
| Total Savings on Non-analysis Tasks (hrs) | | | | | | 13.28 |

Total Manual Time = assumed articles x manual benchmark; Time Savings (min) = total manual time x time savings percentage.
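The Table 5 figures follow directly from the stated benchmark assumptions; a short Python sketch that reproduces them:

```python
# Benchmark assumptions stated above: (minutes per article, articles assumed, CAPTIS savings)
tasks = {
    "Article metadata retrieval": (1.5, 400, 0.62),
    "Deduplication":              (0.5, 400, 1.00),
    "Full-text search and save":  (5.0, 100, 0.45),
}

total_hours = 0.0
for name, (min_per_article, n_articles, savings) in tasks.items():
    manual_min = min_per_article * n_articles   # total manual time for the task
    saved_min = manual_min * savings            # time saved with CAPTIS
    total_hours += saved_min / 60
    print(f"{name}: {manual_min:.0f} min manual, "
          f"{saved_min:.0f} min saved ({saved_min / 60:.2f} h)")

print(f"Total savings on non-analysis tasks: {total_hours:.2f} h")  # ~13.28 h
```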
7. CAPTIS Time Saving Study Discussion
How does CAPTIS help writers save so much time?
- Article Metadata Retrieval: While databases like PubMed make it fairly easy to export data, CAPTIS makes short work of Google Scholar articles too. Google Scholar is notorious for not offering an export option for abstracts, which normally leaves medical writers to copy and paste each abstract manually.
- Deduplication: Users save 100% of the time they would have spent on deduplicating articles since CAPTIS automatically deduplicates their entire article list for them. CAPTIS also allows users to manually mark duplicates, if required.
- Full-text search and save: From the full list of articles requiring full texts, CAPTIS takes care of all the open-access articles whose full texts are freely available. This means the work of searching for, downloading, renaming, and saving available PDFs, and marking the articles that need to be purchased, is significantly reduced: users only need to handle the articles whose full texts could not be located automatically (because they are paywalled or not available on the source webpage).
Download CAPTIS Time Saving Study
Download a copy of this CAPTIS Time Saving Study now to review at a later date.
Schedule Your CAPTIS Demo
Your medical writing team can benefit from CAPTIS with faster turnaround times for systematic literature reviews and more accurate end-to-end MDR/IVDR documentation support. To learn more and view a comprehensive demo of CAPTIS, reach out to info@celegence.com today or contact us online to connect with a Celegence representative.
Check out our behind-the-scenes look at CAPTIS in our exclusive on-demand video to see how you and your business can benefit from CAPTIS AI.