I have many personal grievances against Apple, most of which are very likely the product of my own biases.
Still, this sudden outbreak of negative responses to this new feature seems unjustified to me, and I fully agree with what you wrote. Apple has plenty of ways to do evil, and this is just one more. What matters here, in my opinion, is that they are trying to help authorities address a complex problem, one that the majority of society ranks among the worst crimes.
The publicly available specification quite clearly shows that Apple has done its homework: it explains how the system works, how false positives are reduced, and how abuse is prevented. It's not perfect yet, and it can most certainly be improved.
What matters most to me with these technologies is the safeguards. Is the use of the technology monitored by third parties? Can it be verified independently? What are the ethical boundaries we agree must not be overstepped, and what actions will be taken if they ever are? In that sense, I am not very satisfied.
Still, unless someone catches Apple or one of its customers abusing this technology, and given that almost any modern technology can be used against its users' own interests, I don't see how the slippery slope argument holds at this point. Too many things would have to be banned if we followed it.
Regarding the use of the feature as a backdoor (which I understand as a means for third parties to reveal which users saw or shared a particular piece of content), that's a risk. Risks are managed and mitigated, not necessarily "cancelled" as (too) many people seem to wish.