Abstract— Goal: We hypothesized that COVID-19 subjects, especially asymptomatic ones, could be accurately discriminated using only a forced-cough cell phone recording and Artificial Intelligence. To train our MIT Open Voice model, we built a data collection pipeline of COVID-19 cough recordings through our website (opensigma.mit.edu) between April and May 2020 and created the largest audio COVID
....
Results: When validated with subjects diagnosed using an official test, the model achieves COVID-19 sensitivity of 98.5% with a specificity of 94.2% (AUC: 0.97). For asymptomatic subjects it achieves a sensitivity of 100% with a specificity of 83.2%.
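For readers unfamiliar with the reported metrics, the sketch below shows how sensitivity, specificity, and AUC are conventionally computed from a model's scores against test-confirmed labels. The arrays, the 0.5 decision threshold, and the use of scikit-learn are illustrative assumptions only, not the authors' evaluation pipeline.

```python
# Minimal sketch: computing sensitivity, specificity, and AUC for a binary
# COVID-19 classifier. The labels and scores below are made up for
# illustration; the 0.5 threshold is an assumption, not the paper's choice.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # 1 = positive by official test (hypothetical)
y_score = np.array([0.92, 0.85, 0.75, 0.60,    # model output scores (hypothetical)
                    0.55, 0.30, 0.20, 0.10])
y_pred = (y_score >= 0.5).astype(int)          # assumed decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate (the quantity reported as 98.5%)
specificity = tn / (tn + fp)   # true-negative rate (the quantity reported as 94.2%)
auc = roc_auc_score(y_true, y_score)  # threshold-free ranking quality (reported as 0.97)

print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}, AUC={auc:.3f}")
```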