Thought this paragraph was particularly interesting:
“The notion that any risk assessment instrument can account for bias ignores the racial disparities in current and past policing practices.” There are abundant theoretical and empirical reasons to support this claim, since risk assessments are typically based on data on arrests, convictions, or incarcerations, all of which are poor proxies for individual behaviors or predispositions. The coalition continued, “Ultimately, risk-assessment tools create a feedback-loop of racial profiling, pre-trial detention and conviction. A person’s freedom should not be reduced to an algorithm.” By contrast, the Partnership’s statement focused on “minimum requirements for responsible deployment,” spanning such topics as “validity and data sampling bias, bias in statistical predictions; choice of the appropriate targets for prediction; human-computer interaction questions; user training; policy and governance; transparency and review; reproducibility, process, and recordkeeping; and post-deployment evaluation.”
Seems like they're only superficially addressing irresponsible AI deployment while ignoring some of the deeper, more complex ethical issues raised by technologies like facial recognition.