In September 2016 we launched the first full version of the Explore tool, where you could input a research paper and Iris.ai would match it against what was then a connected database of 30 million Open Access papers. We knew we had built some cool NLP technology and a user interface and design that were new and different, but the big question was: would it work better than regular search engines? Since then, we have worked continuously to prove, systematically and scientifically, that the tools simplify and improve the systematic literature review process.


Scithons: Proving the Explore tool

To prove the Explore tool, we needed to develop an entire framework that would make the evaluations fair, unbiased and systematic. We came up with the concept of Science Hackathons, or Scithons, where multiple teams of researchers would sit down over a set number of hours and compete to solve the same R&D challenge. The teams were evenly balanced, with participants ranging from Master's and PhD students all the way to professors, and none were deep domain experts in all of the interdisciplinary fields to be explored. The results were blindly evaluated by an external panel of experts on (1) spot-on papers found, (2) demonstrated overview of the field, and (3) conclusions drawn. The scores were then compared with keylogger data from the teams, to see how scores correlated with usage of Iris.ai.
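To illustrate that last step, here is a minimal sketch of how such a score/usage comparison could be computed. The data layout, field names and numbers are hypothetical illustrations, not the actual Scithon evaluation pipeline.

```python
# Hypothetical per-team data: fraction of session time spent in Iris.ai
# (derived from keylogger data) and the blinded panel's aggregate score.
from statistics import correlation  # Python 3.10+

teams = [
    {"team": "A", "iris_usage": 0.72, "panel_score": 88},
    {"team": "B", "iris_usage": 0.55, "panel_score": 79},
    {"team": "C", "iris_usage": 0.31, "panel_score": 64},
    {"team": "D", "iris_usage": 0.12, "panel_score": 51},
]

usage = [t["iris_usage"] for t in teams]
scores = [t["panel_score"] for t in teams]

# Pearson correlation between tool usage and panel score; a value near 1
# would indicate that heavier Iris.ai usage went with higher scores.
print(f"Pearson r: {correlation(usage, scores):.2f}")
```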

To our massive excitement, the results repeatedly showed that the highest usage of Iris.ai was directly correlated with the highest scores. This held true for research on how to introduce Augmented Reality in medical surgery training, effective interventions to sustain healthy lifestyles, trans-disciplinary research for governance of social-ecological systems, and building reusable rockets with composite materials.

A comprehensive review of the framework and the results was peer-reviewed and published at the 11th edition of the Language Resources and Evaluation Conference (LREC 2018), 7-12 May 2018, Miyazaki, Japan.


Systematic mapping study collaboration: Proving the Focus tool

In early 2018, the Focus tool was introduced. This tool was the result of a collaboration with the Computer Science department at Chalmers University of Technology. The team, led by Christian Berger, was performing the world's largest systematic research landscape mapping on autonomous vehicles. The goal was to use our technology as a support for the manual review and compare it with the manual-only review performed initially. This way we could measure both the accuracy of the screening and the speed with which papers could be reviewed. The plan was to publish both the mapping study itself and the results of the manual vs. AI-aided review in one large paper.

The Chalmers team had a collection of 11,000 papers they had manually marked for inclusion or exclusion: first by reading all titles and marking papers for inclusion or exclusion, then by reading the abstracts of the papers not yet excluded. In this process, experienced researchers could perform the evaluation at a much higher speed than the less experienced PhD and postdoc candidates.
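As a minimal sketch of that two-stage funnel, the following toy code screens on titles first and only reads the abstracts of the survivors. The decision functions stand in for a human reviewer's judgment and are hypothetical placeholders, not part of the actual Chalmers pipeline.

```python
def screen_collection(papers, keep_by_title, keep_by_abstract):
    """Title pass first; abstract pass only on papers that survive it."""
    after_titles = [p for p in papers if keep_by_title(p["title"])]
    return [p for p in after_titles if keep_by_abstract(p["abstract"])]

# Toy data and toy decision rules, purely for illustration.
papers = [
    {"title": "Perception for autonomous vehicles", "abstract": "We study autonomous driving."},
    {"title": "Vehicle routing heuristics", "abstract": "A logistics optimisation study."},
    {"title": "Protein folding dynamics", "abstract": "A molecular biology study."},
]

included = screen_collection(
    papers,
    keep_by_title=lambda t: "vehicle" in t.lower(),
    keep_by_abstract=lambda a: "autonomous" in a.lower(),
)
print([p["title"] for p in included])  # ['Perception for autonomous vehicles']
```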

The researchers were then given the Iris.ai machine-identified concepts and topics for all papers, as support for their manual evaluation. Measuring their performance and comparing it with the fully manual process, we found that they could increase the speed of the evaluation by 78% - and, just as importantly, that the less experienced researchers were given tools allowing them to work at the speed of their far more experienced colleagues.
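Below is a minimal sketch of what such concept-assisted triage could look like, assuming each paper arrives with a set of machine-identified concepts. The concept lists and the decision rule are hypothetical illustrations, not the actual Iris.ai output format; the closing lines also show what a 78% speed increase means in raw hours.

```python
# Hypothetical concept sets for the review's inclusion/exclusion criteria.
INCLUDE_CONCEPTS = {"autonomous vehicle", "self-driving", "vehicle perception"}
EXCLUDE_CONCEPTS = {"unmanned aerial vehicle", "maritime navigation"}

def triage(concepts: set[str]) -> str:
    """Suggest a decision; the human reviewer still makes the final call."""
    if concepts & EXCLUDE_CONCEPTS:
        return "likely exclude"
    if concepts & INCLUDE_CONCEPTS:
        return "likely include"
    return "needs full read"

for paper_id, concepts in {
    "paper-001": {"autonomous vehicle", "lidar"},
    "paper-002": {"maritime navigation", "route planning"},
    "paper-003": {"traffic simulation"},
}.items():
    print(paper_id, "->", triage(concepts))

# A 78% speed increase multiplies throughput by 1.78, so screening time
# drops to roughly 56% of the manual baseline (illustrative figures).
manual_hours = 100.0
print(f"assisted: ~{manual_hours / 1.78:.0f} h vs {manual_hours:.0f} h manual")
```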

Research is a lengthy process, and by the time the research team was ready with the first set of results, almost two years had passed. A new query was run to see whether any new papers had been published in that time - and to the team's dismay, another 11,000 potentially relevant papers had appeared. A decision was made that there was no point in a fully manual review, and the Iris.ai tools were used for the review instead.

As frequently happens, several obstacles got in the way of the final paper publication. Because the comparison between the manual and machine-aided reviews is intrinsically linked to the results of the mapping study, the results are still not peer-reviewed or published. A less comprehensive article is being produced.

External parties' publications

  • Schoeb D, Suarez-Ibarrola R, Hein S, et al. Use of Artificial Intelligence for Medical Literature Search: Randomized Controlled Trial Using the Hackathon Format. Interactive Journal of Medical Research. 2020 Mar;9(1):e16606. DOI: 10.2196/16606. Available at: https://europepmc.org/article/pmc/pmc7154940 [last accessed 10.12.2020]


Continuous work

We currently have several ongoing collaborations with reputable research groups performing side-by-side comparisons of our tools, other tools, and manual approaches. We welcome these efforts and are happy to provide the premium tools for these kinds of open collaborations. Please get in touch at founders@iris.ai to discuss!

 
