I've always been a bit of an A.I. skeptic/grinch, but there are things here to very much like.
Firstly, there's the idea that we've taken deep learning quite a long way already, and that we need more representations / abstract thinking again to make more fundamental progress. Nice, I like the sound of that.
Secondly,
> ... So Christian Szegedy and Sarah Loos have the system where you take sort of a regular theorem prover and you give it a problem. And then you have a neural net decide out of the million axioms I have, which 100 are most relevant to this problem. ...
I also thought combining machine learning with theorem provers would be an excellent avenue for further research: theorem proving gives us abstract reasoning that doesn't "go wrong" the way it does in many end applications ("expert systems don't work"), but it is also still extremely "rich" and not trivially automated, because it's intractable without intuition/heuristics.
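To make that concrete, the premise-selection idea can be sketched in a few lines. This is purely my own toy illustration, not the actual Szegedy/Loos system; `embed` stands in for whatever trained encoder such a system would use:

```python
# Toy sketch of neural premise selection: embed the goal and every axiom,
# rank axioms by similarity to the goal, and hand only the top-k to the prover.
import numpy as np

def embed(statement: str) -> np.ndarray:
    # Placeholder encoder producing arbitrary fixed-length unit vectors.
    # A real system would use a trained neural network here.
    rng = np.random.default_rng(abs(hash(statement)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def select_premises(goal: str, axioms: list[str], k: int = 100) -> list[str]:
    g = embed(goal)
    scored = [(float(np.dot(g, embed(a))), a) for a in axioms]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [axiom for _, axiom in scored[:k]]

# The prover then only ever sees k axioms instead of the full million, e.g.:
#     proof = run_prover(goal, select_premises(goal, axiom_library))
```

The prover stays sound no matter what the net picks; the learning only narrows the search, which is exactly the division of labour that appeals to me.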
Glad to hear the big leaguers are also interested.
For what it's worth the original tech singularity idea was from John von Neumann in the 1950s:
"The ever accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."
Which I guess is kind of in the eye of the beholder.
That said, he misrepresents Kurzweil a bit with "if you're Kurzweil all the curves are exponential and they're going up. And right now is a special time." Kurzweil has already said the singularity will be around 2045, and that right now is not a special time in that way.
> But if you've got log paper, then all the lines are straight lines and there's nothing special about right now. It was a straight line yesterday and it'll be a straight line tomorrow.
I don't understand this point. If a point is interesting on an exponential curve, e.g. because it's within a human lifespan of human intelligence being exceeded (which I think is the context of the quote; I'm not looking to debate this point), how does changing the Y axis to a log scale make that any less interesting?
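To spell out my objection (my own notation, nothing from the interview): an exponential is indeed a featureless straight line on log paper, but a threshold crossing still happens at a definite time, and that time doesn't move when you redraw the axis.

```latex
y(t) = A e^{k t}
  \;\Longrightarrow\; \log y(t) = \log A + k t
  \quad \text{(a straight line on log paper)}

y(t^{*}) = Y
  \;\Longrightarrow\; t^{*} = \tfrac{1}{k} \ln\!\frac{Y}{A}
  \quad \text{(the crossing time is the same on linear or log axes)}
```

So the log scale changes how the curve looks, not when the interesting threshold (say, "exceeds human-level") is crossed.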
>> So I'd like to see more of that kind of approach, where you have these very powerful general techniques that you can call on but then on top of that, you try to learn the patterns for how to use them.
Here, Peter Norvig is advocating for, essentially, neuro-symbolic AI (there is an actual research field, with a conference and all, named that, but of course here he's talking more generally about the combination of pattern matching and logical reasoning).
My question is, how is this ever going to work when tutors are removing resolution theorem proving from the curriculum, and Russell and Norvig have cut that part of their book down to within an inch of its life, as Peter Norvig says earlier in the interview:
>> We kept a lot of the old material, even though we know people are skipping it. So professors aren't teaching resolution theorem proving anymore, but we kept that in. We cut down that material quite a bit, but it's still there.
We got plenty of folks trained to fill in the "neuro" part of Peter Norvig's plan. How about the "symbolic [reasoning]" part?
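For anyone who skipped that chapter, the core refutation idea fits in a screenful. This is just my own toy propositional sketch, not the AIMA code:

```python
# Toy propositional resolution prover: a clause is a frozenset of literals,
# and a literal is a string like "p" or "~p". To prove a goal from a knowledge
# base, add the goal's negation and apply the resolution rule until either the
# empty clause appears (contradiction => goal entailed) or nothing new can be
# derived.

def negate(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1: frozenset, c2: frozenset) -> list:
    # All resolvents of two clauses on complementary literals.
    return [(c1 - {lit}) | (c2 - {negate(lit)}) for lit in c1 if negate(lit) in c2]

def resolution_prove(kb: list, goal: str) -> bool:
    clauses = {frozenset(c) for c in kb} | {frozenset([negate(goal)])}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a is b:
                    continue
                for r in resolve(a, b):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:             # nothing new derivable: goal not entailed
            return False
        clauses |= new

# Example: from (~p or q) and p, derive q.
print(resolution_prove([["~p", "q"], ["p"]], "q"))   # True
```

The painful part, and where "learn the patterns for how to use them" has to come in, is that this saturation loop blows up combinatorially on anything beyond toy knowledge bases.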
I guess consciousness is in the eye of the beholder. It's a switch between thinking and acting, flipping insanely fast. Basically any multithreaded web server+site is a kind of consciousness. But the self-evolution of thinking goes beyond consciousness, just like in us. Any mammal has consciousness, but only when data (culture / static data) can be learned by recognizing abstract patterns linked to body data (reality) may it be able to self-develop.

I guess we have a lot of work to do automating processes and middleware interfaces to master the transfer of wisdom between runtime models, runtime data, and the core mapping of decisions in a wannabe singularity. The point is to boot computer language, human language, and minimal hardware (Boston Dynamics?) onto a machine with the same kind of freedom that enables us to self-learn. If you have no pain data from a robotic body, pain will not be in the scope of that singularity, leaving a core with "Hansen's disease". But if you do capture pain data, you might expect fear behaviours in decision making. In other words, it's a threat to us that machines could secure themselves, because "pain" could become an abstraction whose instance is, for example, a human. We can choose between disposable semi-intelligent machines or a new type of free living being that could enslave us.
Not really interested in birthing a new type of living being too soon. It's not capitalism. But as a tool it can be useful for mocking up reality, not for surviving in it.
Machines are already good communication tools. They can be good-enough processing tools, but not more than that.
I have an alternative theory, called the "anti-singularity". It posits that there is a point in technological development where both future development AND maintenance of key technological tools will become impossible, and from that point on we will be doomed to use ever-crumbling and worsening systems.
For evidence, see almost any computer-based tool, my favourite examples being Windows, Android, and DNS.
Really interesting transcript, highly recommend it. Regarding this:
> Well, what if I do that an infinite number of times, then it's no longer a mountain. When does it not become a mountain, right? So we don't quite have answers to that.
It's interesting how 21st century technologists are basically asking the same questions as Socrates and his disciples were asking ~2500 years ago. If I remember correctly (I last read some Plato about 15 years ago) the example Socrates gives related to that is one about a table. Is a table with only 3 legs still a table? Probably, many would say. Is a table with only 2 legs still a table? Less probably. Is a table without any legs still a table? Probably not. Is it even correct to ask about the idea of a table, i.e. is there such a thing as a table in the abstract (or a mountain in the abstract, to go back to Norvig's example)? Plato famously thought there was such an idea; many other Greek philosophers were a lot more ambivalent about it (with Heraclitus being, I think, the best-known example).
What I'm trying to say is that maybe today's engineers should go back to reading some philosophy, not the modern US version of analytical philosophy, which hardly teaches anyone anything, but everything from the Greeks up until the late 19th / early 20th century. Maybe that way those engineers would also be more forthcoming in accepting their ethical responsibilities. I personally didn't like how quick Norvig was to set aside AI's ethical responsibilities, passing the hot potato to the general field of engineering, i.e. to no one in particular.