Brain. 2016 Jun;139(Pt 6):1713-22. doi: 10.1093/brain/aww045. Epub 2016 Mar 31.

Crowdsourcing reproducible seizure forecasting in human and canine epilepsy.

Author information

1. Mayo Systems Electrophysiology Laboratory, Departments of Neurology and Biomedical Engineering, Mayo Clinic, Rochester, MN 55905, USA. Brinkmann.Benjamin@mayo.edu
2. University of Pennsylvania, Penn Center for Neuroengineering and Therapeutics, Philadelphia, PA, USA.
3. AiLive Inc., Sunnyvale, CA, USA.
4. University of Queensland, Centre for Advanced Imaging, Queensland, Australia.
5. Hemedics Inc., Boston, MA, USA.
6. CEU Cardenal Herrera University, Valencia, Spain.
7. Sydney, Australia.
8. New York, NY, USA.
9. Ghent University, Ghent, Belgium.
10. Kaggle, Inc., New York, NY, USA.
11. University of Pennsylvania, School of Veterinary Medicine, Philadelphia, PA, USA.
12. University of Minnesota, Veterinary Medical Center, St. Paul, MN, USA.
13. Mayo Systems Electrophysiology Laboratory, Departments of Neurology and Biomedical Engineering, Mayo Clinic, Rochester, MN 55905, USA.

Abstract

See Mormann and Andrzejak (doi: 10.1093/brain/aww091) for a scientific commentary on this article. Accurate forecasting of epileptic seizures has the potential to transform clinical epilepsy care. However, progress toward reliable seizure forecasting has been hampered by a lack of open access to long-duration recordings with an adequate number of seizures for investigators to rigorously compare algorithms and results. A seizure forecasting competition was conducted on kaggle.com using open access chronic ambulatory intracranial electroencephalography from five canines with naturally occurring epilepsy and two humans undergoing prolonged wide-bandwidth intracranial electroencephalographic monitoring. Data were provided to participants as 10-min interictal and preictal clips, with approximately half of the 60 GB data bundle labelled (interictal/preictal) for algorithm training and half unlabelled for evaluation. Contestants developed custom algorithms and uploaded their classifications (interictal/preictal) for the unknown testing data; a randomly selected 40% of data segments were scored and the results broadcast on a public leaderboard. The contest ran from August to November 2014, and 654 participants submitted 17 856 classifications of the unlabelled test data. The top-performing entry scored 0.84 area under the classification curve. Following the contest, additional held-out unlabelled data clips were provided to the top 10 participants, who submitted classifications for the new unseen data. The resulting areas under the classification curves remained well above chance forecasting, but showed a mean 6.54 ± 2.45% (min, max: 0.30, 20.2) decline in performance. The kaggle.com model, using open access data and algorithms, generated reproducible research that advanced seizure forecasting. The overall performance of multiple contestants on unseen data was better than a random predictor, demonstrating the feasibility of seizure forecasting in canine and human epilepsy. Video abstract: 10.1093/brain/aww045_video_abstract.
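The leaderboard metric described above, area under the classification (ROC) curve for preictal (1) versus interictal (0) clips, can be sketched as follows. This is an illustrative example only, not the contest's actual scoring code; the labels and classifier scores are hypothetical.

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formulation.

    labels: 0 = interictal clip, 1 = preictal clip (hypothetical examples).
    scores: classifier outputs, higher = more preictal-like.
    """
    pairs = sorted(zip(scores, labels))
    n = len(pairs)
    rank_sum_pos = 0.0
    idx = 0
    while idx < n:
        # Find the run of tied scores and give them their average rank.
        j = idx
        while j < n and pairs[j][0] == pairs[idx][0]:
            j += 1
        avg_rank = (idx + 1 + j) / 2.0  # average of 1-based ranks idx+1 .. j
        for k in range(idx, j):
            if pairs[k][1] == 1:
                rank_sum_pos += avg_rank
        idx = j
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # Fraction of (preictal, interictal) pairs ranked correctly.
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

labels = [0, 0, 1, 1, 0, 1]                 # hypothetical clip labels
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]    # hypothetical classifier outputs
print(round(roc_auc(labels, scores), 3))    # prints 0.889
```

An AUC of 0.5 corresponds to the chance-level random predictor mentioned above, and 1.0 to perfect separation of preictal from interictal clips, which is why the 0.84 top score is well above chance.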

KEYWORDS:

epilepsy; experimental models; intracranial EEG; refractory epilepsy

PMID: 27034258
PMCID: PMC5022671
DOI: 10.1093/brain/aww045
[Indexed for MEDLINE]
Free PMC Article
