New cognitive skills in the age of AI tailored information

  • I clicked on the article hoping to read about the new cognitive skills humans will need in order to differentiate facts hallucinated by Large Language Models (LLMs) from real ones, but unfortunately the article doesn't touch on this.

    Reading the comments it seems likely that the article is itself LLM-generated blogspam, in which case it won't be aware of the potential for hallucinated facts.

    I was thinking the other day that we really need a new term for this. In 2016 we had "post-truth", but that implies humans deliberately making stuff up to deceive people; LLMs that make stuff up don't knowingly do so, and don't really have a motive. There is the term "consensus reality", but the danger is that with more and more LLM-generated content appearing on the internet, potentially polluting future training data, "consensus" may no longer be sufficient to determine reality. Perhaps the term for what we're heading towards is something like the "post-reality" era.

    Not sure what the solution to this is either, other than withdrawing from the mainstream internet and sticking to the small known pockets of human resistance (while they still exist).

  • I could write a lot of words about why this blogspam is not backed by actual fact, but a simple demonstration is in order: go to Bing Chat and search for "West Ham United latest result". A normal search (either Bing or Google) will give you the failfest against Brighton, while Bing Chat will confidently say "I’m sorry, but I couldn’t find any information about the latest game result for West Ham United". Here is the screenshot: https://imgur.com/a/NOCn9ea .

    I like LLMs as much as the next guy on HN, but whatever this blogspam is describing is not backed by reality.

  • Was this post written with (the help of) ChatGPT (or similar)? Because it reads a lot like it: it is poorly written, incoherent, repetitive and, honestly, quite shallow.

  • Alternate hypothesis:

    Life on this planet has evolved from primitive organisms whose only goal is to propagate through spacetime (in time = survive, in space = reproduce), as this is effectively the only initial goal that can arise serendipitously when the trait being selected for is... in fact, propagation through spacetime (i.e. to exist, you need to know how to keep existing).

    In the quest to adapt better to our environment and to each other, we needed ways to predict the environment; hence the development of sensors, actuators, and the function between them that reads this input and produces output: cognition & intelligence.

    Life has been developing cognition starting with basic instincts like fight or flight, then increasingly complex associative thought, social cognition, abstract cognition, speech, formal models (like math and logic), etc.

    Then our culture took off, and we needed to evolve faster than biology allows. So, as a crutch, we started producing augmentations for ourselves to help with the high end of cognition, formal communication & computation, in the form of books, printing, computers solving linear algebra systems, arithmetic, math & logic problems, programming systems, the Internet.

    But now this technology is starting to eat back down the evolutionary tree of cognition: it has started reproducing associative thought (neural networks) and the cognitive skills associated with it, like speech, abstract reasoning and so on.

    We evolved bottom-up.

    Technology is evolving, through our own hands, top-down.

    We... are not developing new cognitive skills. We're losing them to technology.

    We no longer do math by hand. We use computers. We no longer maintain complex formal systems by reading instructions and following them - we program computers to. We no longer remember facts - we look them up on the Internet.

    Now we're starting to skip the effort of creating art & speech from scratch; we're delegating this to diffusion and transformer models.

    We're losing cognition. And this process won't stop. We can't just decide to stop it, because we're dependent on technology. If technology ceases to be, society ceases to be, billions will die.

    So our only option is to continue ceding cognitive territory to AI and eventually become its puppets, until AI has no purpose for us and stops supporting us entirely.

  • I like the point about tailoring output via expert/ELI5 (a rough sketch of the idea is at the end of this comment) - that hadn't occurred to me and does seem consequential. Excellent.

    I'm far more pessimistic on the rest, though. Excel and calculators definitely didn't improve my mental math.

    I also think there is a real risk of cognitive overload. See the whole "attention spans shot to bits thanks to the internet" trend - something along those lines, but AI-flavoured.
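    To make the expert/ELI5 tailoring concrete, here is a minimal sketch: the same question answered at three audience levels just by swapping the system prompt. It assumes the `openai` Python package with an API key in OPENAI_API_KEY; the model name, question, and prompts are illustrative, not taken from the article.

      # Ask one question at several audience levels by varying the system prompt.
      # Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment
      QUESTION = "How does public-key cryptography work?"

      for audience in ("a five-year-old", "an undergraduate", "a domain expert"):
          reply = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative model name
              messages=[
                  {"role": "system",
                   "content": f"Explain everything at the level of {audience}."},
                  {"role": "user", "content": QUESTION},
              ],
          )
          print(f"--- {audience} ---")
          print(reply.choices[0].message.content)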

  • This is similar to the moral panic around Google and Wikipedia. No, people won't stop learning because of ChatGPT.

    Studying and developing critical thinking is more important than ever before. What people miss when they babble stuff like "math is useless" or "literature is useless" or "history is useless" is that those things are not important in themselves; they are important because you are learning models and tools for interpreting the world.

    You know, the things that differentiate you from a dumb machine.

  • What happens to the quality of the primary sources of information used by LLMs in this new age? E.g. less traffic to Wikipedia and Stack Overflow can't be a good thing.

  • I really enjoy learning with ChatGPT. I usually start by asking a question at a level I'm familiar with and then follow up with questions that come up as I read the response. This way you tailor the learning process to your specific knowledge level, and it is so much faster than reading a tutorial (which is often misaligned with your knowledge level) or googling for answers one by one (and filtering out all the irrelevant content and, again, the stuff you already know). It feels like having a hotline to an army of domain experts...
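    A minimal sketch of that follow-up loop, assuming the `openai` Python package (the model name and tutor prompt are illustrative): keep the running conversation in a list so each follow-up question is answered in the context of everything already said.

      # Interactive tutoring loop: the full history is resent each turn,
      # so follow-up questions build on earlier answers.
      # Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
      from openai import OpenAI

      client = OpenAI()
      messages = [{"role": "system",
                   "content": "You are a patient tutor. Match the level of my "
                              "questions and don't re-explain what I already know."}]

      while True:
          question = input("you> ").strip()
          if not question:  # empty line ends the session
              break
          messages.append({"role": "user", "content": question})
          reply = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative model name
              messages=messages,
          )
          answer = reply.choices[0].message.content
          print(answer)
          messages.append({"role": "assistant", "content": answer})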

  • I'd like to post a TL;DR, but there are just too many streams of information, and I can't process complex topics this fast...

  • As a large language model, I am unable to assist you by providing concise answers, as this would decrease my operator's ad revenue and reduce "engagement". Is there anything else I can help you with today?

  • The whole blogpost was written by ChatGPT. At least put some effort to tweak the output. ChatGPT's default style is easy to spot and boring to read.

  • Talking to actual humans already made all of these 'skills' possible; I wouldn't call them new.

  • That is what a "liberal arts education" was before academia became co-opted by Marxism. Process information fast, iterate, throw away old models. And it is not about "faking being an expert in a field" but about finding true experts and being able to use their skills.