What will happen if a deep-learning bug is found in Tesla Autopilot where it misclassifies a fire truck as a bridge?
Will they ground all the Tesla cars remotely, or disable Autopilot remotely, until they have gathered new training data and updated the software?
I am starting to wonder if we're pretending to live a little further into the future than we realistically can right now. Modern Silicon Valley is built around businesses with software margins. If you can scale via software, it's huge. If we actually saddled these companies with their externalities, or held them to doing what they say they do, would they still have been able to have software margins, or would they have had to wait until the tech/science was better?
It gives me a wary feeling when people talk about tech regulation and warn that it would change the internet as we know it. Like, if putting the externalities on the company means the company can’t exist as it does today, is that really so bad?
Point is, we/they all knew about AI's unavoidable flaws well before the s*it hit the fan, and about the underlying sci-fi assumptions undermining the science. The problem is that some states' wild deregulation has allowed deployment in public, communal spaces (real and virtual) that are managed like a gigantic theme park, with joe mcjoes becoming the guinea pigs for such "well-advised" experiments.
Question for people here: Is it justifiable to accept these flaws in the short term if it results in a car that has a lower error rate than human drivers in the medium/long term?
> Businesses are also shifting their focus away from “AI-as-a-service” vendors who promise to carry out tasks straight out of the box, like magic. Instead, they are spending more money on data-preparation software, according to Brendan Burke, a senior analyst at PitchBook. He says that pure-play AI companies like Palantir Technologies Inc. and C3.ai Inc. “have achieved less-than-outstanding outcomes,” while data science companies like Databricks Inc. “are achieving higher valuations and superior outcomes.”
Palantir is now a "pure-play AI company"? (And, for that matter, a market cap of $50b is 'less than outstanding'?)
It seems the flaws are too binary: 90% of the results are great, the other 10% absolute trash.
You can see this everywhere. For example, those apps that generate a non-existent person. A lot of the time the results are great, except for that one spot which makes the overall result useless.
Another example is NVIDIA's OptiX denoiser. You can get very nice renders in a few seconds, which speeds up the workflow, but every time there are areas with a lot of flaws. That doesn't matter while you're still working on something, but for production it's useless.
ML has its uses in a lot of areas where the outcome doesn't have to be perfect, but I'm still not convinced it's "production ready" (a rough illustration of why follows below).
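A back-of-the-envelope way to see why "one bad spot" ruins so many otherwise-great outputs: if each region of an output can fail independently, even a small per-region flaw rate compounds fast. The flaw rate and region counts here are made-up numbers for illustration, not measurements of any real model:

    # Illustrative sketch with assumed numbers, not measurements of a real model.
    # If an output has n independent regions and each is flawed with probability p,
    # the chance the WHOLE output is clean is (1 - p) ** n.

    def clean_output_rate(p_region_flaw: float, n_regions: int) -> float:
        """Probability that every region of a single output is flaw-free."""
        return (1.0 - p_region_flaw) ** n_regions

    # Even a 1% per-region flaw rate ruins most outputs once regions multiply:
    for n in (10, 100, 500):
        print(f"{n:>3} regions: {clean_output_rate(0.01, n):.1%} fully clean")
    # -> 90.4%, 36.6%, and 0.7% respectively

So a model can be "99% accurate" locally and still produce almost no fully usable outputs, which matches the "great except for that one spot" experience.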
And at the same time, the UK government is looking at scrapping the GDPR right to human review of automated decisions:
> Article 22 guarantees that people can seek a human review of an algorithmic decision, such as an online decision to award a loan, or a recruitment aptitude test that uses algorithms to automatically filter candidates.
> In May, a government task force set up to look for deregulatory dividends from Brexit, led by the leading Brexiter Iain Duncan Smith, argued that Article 22 should be removed because it made it “burdensome, costly and impractical” for organisations to use AI to automate routine processes.
> The idea is part of broad-based plans for a big overhaul of the UK data regime after Brexit which ministers say will boost innovation, and deliver what Oliver Dowden, the culture secretary, has called a “data dividend” for the UK economy.
https://www.ft.com/content/519832b6-e22d-40bf-9971-1af3d3745...
(Edit: formatting/link)
"For Tesla, Facebook and Others, Misusing AI and Pretending It's a Magic Bullet Is Getting Harder To Ignore"
“AI” is just a buzzword name for “software” these days
Harder to debug and harder to fix
It's pretty easy to ignore Bloomberg; they're a bunch of dopes.
Yes, would you trust a car with an error rate of 0.001%?
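To make that rhetorical question concrete: a per-decision error rate of 0.001% still compounds quickly over an hour of driving. The decisions-per-second figure below is a hypothetical assumption for illustration, not a published Autopilot spec:

    # Back-of-the-envelope sketch; the decision rate is an assumption.
    error_rate = 0.001 / 100           # 0.001% chance of error per decision
    decisions_per_second = 10          # assumed perception-decision rate
    decisions_per_hour = decisions_per_second * 3600

    expected_errors = decisions_per_hour * error_rate
    p_at_least_one = 1 - (1 - error_rate) ** decisions_per_hour

    print(f"Expected errors per driving hour: {expected_errors:.2f}")  # ~0.36
    print(f"P(>=1 error in an hour):          {p_at_least_one:.1%}")   # ~30.2%

Under those assumptions, "five nines" of per-decision accuracy still means roughly one mistake every three hours behind the wheel.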
I've lost count of the number of people I've talked to who think that neural nets mean we've created brains that will just magically learn how to do new tasks. So we just need more training, and then automated <whatever> is just around the corner.
Tesla et al. don't have the luxury of ignorance to explain that away, however: they know what the technology is and isn't currently capable of, but they don't want to admit it.