== Welcome to the ShARe Project ==
 
Welcome to the Shared Annotated Resources (ShARe) project.
 
Much of the clinical information required for accurate clinical research, active decision support, and broad-coverage surveillance is locked in the text of electronic medical records (EMRs). The only feasible way to leverage this information for translational science is to extract and encode it using natural language processing (NLP). Over the last two decades, several research groups have developed NLP tools for clinical notes, but a major bottleneck to progress in clinical NLP is the lack of standard, annotated data sets for training and evaluating NLP applications. Without such standards, individual NLP applications abound, with no way to train different algorithms on common annotations, share and integrate NLP modules, or compare performance. We propose to develop the standards and infrastructure needed to extract scientific information from textual medical records, and to pursue this research as a collaborative effort involving NLP experts across the U.S.
