Accumulation of cognitive debt when using an AI assistant for essay writing task

  • It feels, more and more, that LLMs will be another technology that society will inoculate itself against. It's already starting to happen in education: teachers conversing with students, observing them learn, observing them demonstrate their skills. In business, quite quickly I'd say, we will realise that the vast majority of worthwhile communication must necessarily be produced by people -- as authors of what they want to say. Authoring is two-thirds of the point of most communication.

    Before this, of course, there will be a dramatic "shallowness of thinking" shock whose ill-effects will have to be felt before they are properly inoculated against. It seems part of the expert aversion to LLMs -- as against the credulous lovers of "mediocrity" (cf. https://fly.io/blog/youre-all-nuts/) -- is an early experience of that inoculation:

    Any "macroscopic usage" of LLMs has, in any of my projects, dramatically impaired my own thinking, stolen decisions-making, and worsened my readiness for necessary adaptions later-on. LLMs are a strictly microscopic fill-in system for me, in anything that matters.

    This isn't like calculators: my favourite algorithms for by-hand computation aren't being "taken away". This is a system for substituting thinking itself with non-thinking, and it radically impairs your readiness (and depth, adaptability, ownership) wherever it is used, on whatever domain you use it on.

  • I wouldn't call it "accumulation of cognitive debt"; just call it cognitive decline, or loss of cognitive skills.

    And also DUH. If you stop speaking a language you forget it. The brain does not retain information that it does not need. Anybody remember the couple of studies on the use of Google Maps for navigation? One was "Habitual use of GPS negatively impacts spatial memory during self-guided navigation"; another reported a reduction in gray matter among Maps users.

    Moreover, anyone who has developed expertise in a science field knows that coming to understand something requires pondering it, exploring how each idea relates to other things, etc. You can't just skim a math textbook and know all the math. You have to stop and think. IMO it is the act of thinking which establishes the objects in our mind such that they can be useful to our thinking later on.

  • The discussion here about "cognitive debt" is spot on, but I fear it might be too conservative. We're not just talking about forgetting a skill like a language or losing spatial memory from using GPS. We're talking about the systematic, irreversible atrophy of the neural pathways responsible for integrated reasoning.

    The core danger isn't the "debt" itself, which implies it can be repaid through practice. The real danger is crossing a "cognitive tipping point". This is the threshold where so much executive function, synthesis, and argumentation has been offloaded to an external system (like an LLM) that the biological brain, in its ruthless efficiency, not only prunes the unused connections but loses the meta-ability to rebuild them.

    Our biological wetware is a use-it-or-lose-it system without version control. When a complex cognitive function atrophies, the "source code" is corrupted. There's no git revert for a collapsed neural network that once supported deep, structured thought.

    This HN thread is focused on essay writing. But scale this up. We are running a massive, uncontrolled experiment in outsourcing our collective cognition. The long-term outcome isn't just a society of people who are less skilled, but a society of people who are structurally incapable of the kind of thinking that built our world.

    So the question isn't just "how do we avoid cognitive debt?". The real, terrifying question is: "What kind of container do we need for our minds when the biological one proves to be so ruthlessly, and perhaps irreversibly, self-optimizing for laziness?"

    https://github.com/dmf-archive/dmf-archive.github.io

  • AI is the anti-Zettelkasten.

    Rather than getting ever deeper insight into a subject matter by actively working on it, you iterate fast but shallow over a corpus of AI-generated content.

    Example: I wanted to understand the situation in the Middle East better, so I wrote a 10-page essay on the genesis of Hamas and Hezbollah using OpenAI as a co-writer.

    I remember nothing; worse, of the things I remember, I don't know if they were hallucinations I fixed or actual facts.

  • The results are not surprising to me personally. When I have used AI to help with my own writing and translation tasks, I do not feel as mentally engaged with the writing or translation process as I would be if I were doing it all on my own.

    But I have found that using AI in other ways to be incredibly mentally engaging in its own way. For the past two weeks, I’ve been experimenting with Claude Code to see how well it can fully automate the brainstorming, researching, and writing of essays and research papers. I have been as deeply engaged with the process as I have ever been with writing or translating by myself. But the engagement is of a different form.

    The results of my experiments, by the way, are pretty good so far. That is, the output essays and papers are often interesting for me to read even though I know an AI agent wrote them. And, no, I do not plan to publish them or share them.

  • "...the LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring."

    That's not surprising but also bleak.

  • One slightly unexpected side effect of using AI to do most of my coding now is that I find myself a lot less tired and can focus for longer periods. It's enabled me to get work done while faced with other distractions. Essentially, offloading some mental capacity to AI frees up capacity elsewhere.

  • Back when GANs were popular, I'd train generator-discriminator models for image generation.

    I thought a lot about it and realised discriminating is much easier than generating.

    I can discriminate good vs bad UI for example, but I can't generate a good UI to save my life. I immediately know when a movie is good, but writing a decent short story is an arduous task.

    I can determine the degree of realism in a painting, but I can't paint a simple bicycle to convince a single soul.

    We can determine if an LLM generation is good or bad in a lot of cases. As a crude strategy then we can discard bad cases and keep generating till we achieve our task. LLMs are useful only because of this disparity between discrimination vs generation.

    These two skills are separate. Generation skills are hard to acquire and very valuable. They will atrophy if you don't keep exercising them.
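
    In code, that crude strategy is just best-of-N sampling. A minimal sketch (the generate/score functions here are hypothetical stand-ins for an expensive generator and a cheap discriminator, not any real model's API):

      import random

      def generate() -> float:
          # Stand-in for an expensive generator (GAN, LLM, ...).
          return random.random()

      def score(candidate: float) -> float:
          # Stand-in for the cheap discriminator: higher means better.
          return candidate

      def best_of_n(n: int = 10) -> float:
          # Generate n candidates, discard the bad ones, keep the best.
          return max((generate() for _ in range(n)), key=score)

      print(best_of_n())

    The asymmetry is the whole point: score() is easy to write even when a good generate() is out of reach.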

  • I think it's likely we learn to develop healthier relationships with these technologies. The timeframe? I'm not sure. May take generations. May happen quicker than we think.

    It's clear to me that language models are a net accelerant. But if they make the average person more "loquacious" (first word that came to mind, but also lol) then the signal for raw intellect will change over time.

    Nobody wants to be in a relationship with a language model. But language models may be able to help people who aren't otherwise equipped to handle major life changes and setbacks! So it's a tool - if you know how to use it.

    Let's use a real-life example: relationship advice. Over time I would imagine that "ChatGPT-guided relationships" will fall into two categories: "copy-and-pasters", who are just adding a layer of complexity to communication that was subpar to begin with ("I just copied what ChatGPT said"), and "accelerators" who use ChatGPT to analyze their own and their partners motivations to find better solutions to common problems.

    It still requires a brain and empathy to make the correct decisions about the latter. The former will always end in heartbreak. I have faith that people will figure this out.

  • This is called cognitive offloading. Anyone who’s spent enough time working with coding assistants will recognize it.

  • > The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or "opinions" (probabilistic answers based on the training datasets). This highlights a concerning evolution of the 'echo chamber' effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as "top" is ultimately influenced by the priorities of the LLM's shareholders [123, 125].

  • I worry about the adverse effects of LLMs on already disenfranchised populations - you know, the poor, etc. - who usually would have to pull themselves up through hard work, studying and reading hard.

    Now, if you don't have a mentor to tell you that in the age of LLMs you still have to do things the hard, old-school way to develop critical thinking, you might end up taking shortcuts and having the LLMs "think" for you - hence again leaving huge swaths of the population behind in critical thinking, which is already in short supply.

    LLMs are bad in that they might show you the sources but also hallucinate about the sources. And most people won't bother going to check the source material and question it.

  • Just as the proliferation of the smartphone eroded our ability to locate and orient ourselves and to remember routes to places, it's no surprise that a tool like this, used for the purpose of outsourcing a task that our own brains would otherwise do, would result in a decline in the skills that would be trained if we were performing that task ourselves.

  • Wasn't THE SAME said when Google came out? That we were not remembering things anymore and we were relying on Google? And also with cellphones before that (even the big dummy brickphones), that we were not remembering phone numbers anymore.

  • > All participants were then reassured that though 20 minutes might be a rather short time to write an essay, they were encouraged to do their best.

    Given that the task was under time pressure, I am not sure this study helps gauge the impact of LLMs in other contexts.

    When my goal is to produce the result for a specific short-term task, I maximize tool usage.

    When my goal is to improve my personal skills, I use the LLM tooling differently, optimizing for long(er)-term learning.

  • This is exactly why there is no point in using AI for coding except in a rare few cases.

    Code without AI - sharp skills, your brain works, and you come up with better solutions etc.

    Code with AI - skills decline after merely a week or two, you forget how to think, and because of relying on AI for simpler and simpler tasks, your total output is less and worse than if you were to DIY it.

  • I think we need to shift our idea of what LLMs do and stop thinking they are 'thinking' in any human way.

    The best mental description I have come up with is they are "Concept Processors". Which is still awesome. Computers couldn't understand concepts before. And now they can, and they can process and transform them in really interesting and amazing ways.

    You can transform the concept of 'a website that does X' into code that expresses a website X.

    But it’s not thinking. We still gotta do the thinking. And actually that’s good.

  • I sometimes used to think about things. Now I just ask ChatGPT and it tells me.

  • There was a post, I think here on Hacker News, by a university professor (of philosophy, maybe) about how students' performance has declined a lot recently and how he cannot do anything about it. Can someone help me if you remember it? I cannot find it for the life of me.

  • Yeah I’ve used ChatGPT as a starting point for so much documentation I dread having to write a product brief from scratch now.

  • Well... yes? Essays are tools to force students to structure and communicate thinking - production of the essay forces the thinking. If you want an equivalent result from LLMs, you're going to need a much more iterative process of critique and revision to get the same kind of mental effort out of students. We haven't designed that process yet.

  • @dang Can the unwanted editorialization of this title be removed? Nowhere does the title or article contain the gutter press statement "AI is eating our brains".

  • I guess: Not only does AI reduce the number of entry-level workers, but now this shows that the entry-level workers who remain won't learn anything from their use of AI and will remain entry-level forever if they're not careful.

  • Someone sent this to one of NT groups: https://threadreaderapp.com/thread/1935343874421178762.html

    My response (I think most of the comments here are similar to that thread): The thread is really alarmist and click-baity. It doesn't address at all the fact that there was a 3rd group, those allowed to use the web in general (except for LLM services), whose results fell between the brain-only and full ChatGPT groups. The author also misrepresented the teachers' evaluation. I'd say even the teachers went a bit out of scope in their evaluation, but the writing prompts too are all for reflective-style essays, which I take as a request for primarily personal opinion, which no one but the askee can give. In general, I don't see how the author draws the conclusion that "... AI isn't making us more productive. It's making us cognitively bankrupt." He could've made a leap from the title of the paper, or maybe I need to actually dive more into it to see what he's on about.

    The purpose of using AI, just like any other tool, is to reduce cognitive load. I'm sure a study on people who use paper and an abacus vs a spreadsheet app to do accounting, or take the time to cook raw foods vs microwaving prepackaged meals, or build their furniture from scratch vs getting something from IKEA, or just about any other task, would show similar trends. We innovate so we can offload and automate essential effort, and AI is just another step. If we do want mental exercise then we can still opt into doing X the "traditional" way, or play some games mimicking said effort, like people go to the gym since so many muscle-building tasks are nowadays handled by machines. But the point is we're continuously moving from `we need to do X` toward `we want to do X`.

    Also that paper title (and possibly a decent amount of the research) is invalid, given the essay writing constraints and the type of essay. Paper hasn't been peer-reviewed, and so should be taken with a few shakes of salt.

  • When I write with AI, it feels smooth in the moment, but I’m not really thinking through the ideas. The writing sounds fine, but when I look back later, I often can’t remember why I phrased things that way.

    Now I try to write my own draft first, then use AI to help polish it. It takes a bit more effort upfront, but I feel like I learn more and remember things better.

  • LLMs should be used to REFLECT cognitive states while writing, and not for generating text. Reflecting thought patterns would be a mode where the writer deepens their understanding when writing essays, and gains better decision-making as well as coherence, as the LLM assesses and suggests where thinking could be refined. That will help against the accumulation of cognitive debt and increase cognitive width and depth.

    Cogilo (https://cogilo.me/) was built for this purpose over the last few weeks. This paper comes at a very welcome time. Cogilo is a Google Docs add-on (https://workspace.google.com/marketplace/app/cogilo/31975274...) that sees thinking patterns in essays. It operates on a semantic level, judging the text to reveal the writer's cognitive state and thinking - to the writer themselves - hence making the writer deepen their thinking and their essay.

    Ultimately, I think that in 300 years, upon looking back at the effect and power that AI had on humanity, we will see that it was built by us, and existed, to reflect human intelligence. I think that's where the power of LLMs will be big for us.

  • What I still wonder is whether using LLMs is helpful in some ways, or whether they are, as other users say, just useful for man-made problems such as corporate communication or bureaucracy. I use them for coding, and they make me confident enough to tackle new things.

    I try to use them to understand code or to implement changes I am not familiar with, but I tend to overuse them a lot. Would it be better, if they were used ideally (i.e. only to help with learning and guidance), to just try harder before reaching for them or for a search engine? I wonder what the optimal use of LLMs is in the long run.

  • I don't quite see their point. Obviously if you're delegating the task to someone/something then you're not getting as good at it as if you were to do it yourself. If I were to write machine code by hand, rather than having the compiler do it for me, I would definitely be better at it and have more neural circuitry devoted to it.

    As I see it, it's much more interesting to ask not whether we are still good at doing the work that computers can do for us, but whether we are now able to do better at the higher-level tasks that computers can't yet do on their own.

  • An interesting thinking point on this is to consider, more broadly, the impact that advances in machinery have had on humanity's industrial sector. There are vast stories and accounts of people fearful of job loss/redundancy whenever we inevitably developed an automation to take over more repetitive/mind-numbing tasks. What ends up happening, generally, is that humanity gains the ability to discover and innovate, as people now have the time and energy to put into it.

    What's interesting is I have to wonder if this is something that would extend to our own way of thinking, as discussed here with the short term affects we're already describing with increased dependence on LLMs, GPS systems, etc. There have been studies which have shown that those of who grew up using search engines exclusively did not lose or gain anything with respect to brain power, instead they developed a different means of retaining the information (i.e. they are less likely to remember the exact fact but they will remember how to find it). It makes me wonder if this is the next step in that same process and those of us in the transition period will lament what we think we'll lose, or if LLM dependency presents a point of diminishing return where we do lose a skill without replacing it.

  • I wonder to what extent this is caused by the writing style LLMs have. They just love beating around the bush, repeating themselves, using fillers, etc. I often find it hard to find the signal in the noise, but I guess that is inevitable with the way they work. I can easily imagine my brain shutting down when I have to parse this sort of output.

  • Interesting. This says a different thing than what I thought from the title. I thought this will be about cognitive overload from having to process and review all the text the LLM generates.

    I had to disable copilot for my blog project in the IDE, because it kept bugging me, finishing my sentences with fluff that I'd either reject or heavily rewrite. This added some mental overhead that makes it more difficult to focus.

  • I'm curious to see how the EEG measurements might change if someone uses LLMs extensively over a longer period of time (e.g. about a year).

  • From the summary:

    """Going forward, a balanced approach is advisable, one that might leverage AI for routine assistance but still challenges individuals to perform core cognitive operations themselves. In doing so, we can harness potential benefits of AI support without impairing the natural development of the brain's writing-related networks.

    """It would be important to explore hybrid strategies in which AI handles routine aspects of writing composition, while core cognitive processes, idea generation, organization, and critical revision, remain user‑driven. During the early learning phases, full neural engagement seems to be essential for developing robust writing networks; by contrast, in later practice phases, selective AI support could reduce extraneous cognitive load and thereby enhance efficiency without undermining those established networks."""

  • Will we end up with a world where the only experts are LLM companies, holding a monopoly on thinking? Will future humans ever be as smart as us, or are we the peak of human intelligence? And can AI make progress without smart humans to provide training data, get new insights, and increase its intelligence?

  • This has been on my mind for a while and is why I only briefly used Copilot on a daily basis.

    I'm at the beginning of my career and learning every day - I could do my job faster with an LLM assistant but I would lose out on an opportunity to acquire skills. I don't buy the argument that low-level critical thinking skills are obsolete and high level conceptual planning is all that anyone will need 10 years from now.

    On a more sentimental level I personally feel that there is meaning in knowing things and knowing how to do things and I'm proud of what I know and what I know how to do.

    Using LLMs doesn't look particularly hard, and if I need to use one in the future I'll just pick whichever one is supposedly the newest and best, but for now I'm content to toil away on my own.

  • Love this study because it reinforces my own biases but also love that a study was done to actually check it.

    With that said, it would be like a study finding that people who exclusively use motorcycles or cars to move around get their legs and bodies atrophied in comparison to people who walk all day to do their things. Totally. It's just plain obvious. The gist is in the trade-offs: can I do more things, or things I wasn't able to do before, by commuting by car? Sure. Am I going to be exposed to health issues if I never walk, day in, day out? Most probably.

    The exact same thing will happen with LLMs: we are in the hype phase, and any criticism is downplayed with "you are being left behind if you don't drink rocket fuel like we do", but in 10-15 years we will be complaining as a society that LLMs dumbed down our kids.

  • My handwriting has suffered since I've relied heavily on keyboards for the last few decades. I can't even produce a consistent signature anymore. My stick-shift skills also suffered when I used an automatic for so long (and now that I have an EV, I'm forgetting what gears are at all).

    Rather than lament that the machine has gotten better than us at producing what were always mostly vacuous essays anyway, we have to instead look at more pointed writing tasks and practice those. Actually, I never really learned how to write until I hit grad school and had messages I actually wanted to communicate. Whatever I was doing before really wasn't that helpful; it was missing focus. Having ChatGPT write an essay I don't really care about only seems slightly worse than writing it myself.

  • Why did the posting two days ago omit the first part of the title?

  • The next generation of programmers will be stupider than the current generation thanks to LLMs. That means ageism will become less and less prevalent.

    "Look at that old timer! He can code without AI! That's insane!"

  • I am just finishing a book that took about two years to write. I thought I would be done a year ago. It’s been a slog.

    So now I am in the final editing stage, and I am going back over old writing that I don’t remember doing. The material has come together over many many drafts, and parts of it are still not quite consistent with other parts.

    But when I am done, it will be mine. And any mistakes will be honest ones that represent the real me. That’s a feeling no one who uses AI assistance will ever have.

    I have never and will never use AI to write anything for me.

  • > The reported ownership of LLM group's essays in the interviews was low. The Search Engine group had strong ownership, but lesser than the Brain-only group. The LLM group also fell behind in their ability to quote from the essays they wrote just minutes prior.

    So having someone else do a task for you entirely makes your brain work less on that task? Impossible.

  • I can't believe riding in a horse and carriage wouldn't make you better at riding a horse. Sure, a horse rider wouldn't want to practice the wrong way, but anyone else just wants to get somewhere.

  • They gave three groups the task of writing an essay - of course the group that uses a tool to write the essay for them will not work out their brain as much.

    It's like saying "someone on a bike will not develop their muscles as well as someone on foot when doing 5km at 5min/km".

    But people on bikes tend to go for higher speeds and longer distances in the same period of time.

  • No one only uses an LLM for writing. We switch tools as needed to pull threads as they emerge. It’s like being told to explore a building without leaving a specific room.

  • > We recruited a total of 54 participants for Sessions 1, 2, 3, and 18 participants among them completed session 4.

    > We used electroencephalography (EEG) to record participants' brain activity in order to assess their cognitive engagement and cognitive load

    > We performed scoring with the help from the human teachers and an AI judge (a specially built AI agent)

    Next up: your brain on psych studies

  • Interesting study but I don't really get the point of the search group. Looking at the essay prompts, they all seem like fluffy, opinion based stuff. How would you even use a search engine to help you in that case? Quote some guy who had an opinion? Personally I think my approach would be identical whether put in the web-search or the only-brain group.

  • Also ever since we invented the written word it has been eating our brains by killing our memory

  • Well, on the flipside of writing with AI, I've been making an app to read papers with AI! https://www.proread.ai/community/ab7bd00c-e017-4de2-b6fb-502... ; Please give me feedback if you try it!

  • One thing that is also truly unappreciated is that most of us humans actually enjoy thinking, and people are trying to make LLMs strip us of a fundamental thing we enjoy doing. Look at all the people who enjoy solving problems for the sake of it.

  • Honestly, my general feeling about LLMs is that they cure very man-made issues.

    They're brilliant at what I always feel is entangled communication and bureaucratic maintenance. Like someone mentioned further down, they work great at Concept Processing.

    But it feels like a solution to the oversaturation of stupid SEO, terrible Google search, and the overall rise of massive documents written for the sake of writing.

    I've actually found myself beginning to use LLMs more to find the core sources of useful information amid the terrible SEO optimization, rather than as a personal assistant.

  • "Our indulgence in the pleasures of informality and immediacy has led to a narrowing of expressiveness and a loss of eloquence."

    Nicholas Carr, The Shallows

  • I've been waiting for a paper on this subject ever since 2022 and GPT's introduction to the masses. It pretty much confirms the widely held belief that brain connectivity systematically scales down with the amount of external support. I appreciate that they added the search engine group as an intermediate between the brain-only and LLM groups.

  • Would the cognitive decline from using coding assistants be on the higher side compared to an essay-writing task? We can all see the effect on junior developers, but what about senior devs?

  • Don't overpromote these witch hunts.

  • > As the educational impact of LLM use only begins to settle with the general population, in this study we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study.

    Fast forward 500 years (about 20 generations), and the dumbing down of the population will have advanced so much that films like 'Idiocracy' should no longer be described as science fiction but as reality shows. If anyone can still read history books at that point, the pre-LLM era will seem like an intellectual paradise by comparison.

  • The results are not surprising, but it's good to have these findings formalized as publications, so that we (or LLMs) can refer to them as ground truth in the future.

  • It's somewhat disappointing to see a bunch of "well, duh" comments here. We're often asking for research and citations and this seems like a useful entry in the corpus of "effects of AI usage on cognition".

    On the topic itself, I am very cautious about my use of LLMs. It breaks down into three categories for me: 1. replacing Google, 2. getting a first review of my work, and 3. taking away mundane tasks around code editing.

    Point 3 is where I can become most complacent and increasingly miscategorize tasks as mundane. I often reflect after a day working with an LLM on coding tasks because I want to understand how my behavior is changing in its presence. However, I do not have a proper framework to work out "did I get better because of it or not".

    I still believe we need to get better as professionals and it worries me that even this virtue is called into question nowadays. Research like this will be helpful to me personally.

  • Is it supposed to be a 500 "oops something went wrong" as a comparison for your brain on ChatGPT?

  • This study is methodologically poor: only 18 people, SAT topics (so broad and pretty poor, with the expectation of an American-style "essay"), and only 20 minutes of writing, which is far too little time to properly use the tool given to explore (be it search engine or LLM).

    With only 20 minutes, I'm not even trying to do a search. No surprise the people using an LLM have zero recollection of what they wrote.

    Plus they spend ages discussing correct quoting (why?) and statistical analysis via NLP which is entirely useless.

    Very little space is dedicated to knowing if the essays are actually any good.

    Overall pretty disappointing.

  • Now, let's do the same exercise but with programming and over a longer period of time.

    I would really like to present it to the management that pushes AI assistance for coding.

  • What about LLMs for grammar correction? English is my second language, so I find them useful for that.

  • While the results are not unexpected, I think the conclusion is questionable. Of course the recall for something you did not write will be lower, but to conclude from that that it will impede overall learning is, in my opinion, far-fetched.

    I think what we are seeing is that learning and education have not adapted to these new tools yet. Producing a string of words that counts as an essay has become easier. If this frees up a student's time to do more sports or work on their science project, that's a huge net positive, even if for the essay it is a net negative. The essay does not exist in a school vacuum.

    The thing students might not understand is: their reduced recall will make them worse at the exam... Well, they will hopefully draw their own conclusions after their first failed exam.

    I think the quantitative study is important, but I think this qualitative interpretation is missing the point. Recall->Learning is a pretty terrible way to define learning. Reproducing is the lowest step on the ladder to mastery.

  • Frankly, working with an LLM has forced me to explain my problems in a more articulate and precise manner, avoiding unnecessary information that could interfere with a proper framing of the issue.

    It is said that one doesn’t truly understand something unless they can explain it concisely.

    I think being forced to do so is an upside to using LLMs.

  • It’s not because I’m not using it.

    It’s the vape of IT.

  • Well duh. Writing is thinking, ordered, and thinking in your mind is not ordered unless one has specific training that organizes and orders their thinking - and even then it requires effort to maintain an organized perception. That is why we write: writing is our thoughts organized and frozen in an order that will remain in order when related; without writing as the communications foundation, the ideas/concepts would drift. Using an LLM to write is using an LLM to think for you, and unless you then double your work by validating what was written, you are just adding work that relegates your mind to a janitor cleaning up after the LLM.

    It is absolutely possible to use LLMs when writing essays, but do not use them to write! Use them to critique what you yourself with your own mind wrote!
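
    As a sketch of that critique-only workflow (assuming the openai Python client; the model name, file path, and prompt here are illustrative, not from the paper):

      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      draft = open("essay_draft.txt").read()  # written by you, with your own mind

      response = client.chat.completions.create(
          model="gpt-4o",  # illustrative model name
          messages=[
              {"role": "system",
               "content": "Critique this essay: point out weak arguments, "
                          "gaps in reasoning, and unclear passages. "
                          "Do not rewrite it or add any text of your own."},
              {"role": "user", "content": draft},
          ],
      )
      print(response.choices[0].message.content)

    Confining the model to criticism keeps the writing, and therefore the thinking, yours.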

  • A paper to make the teachers I know weep.

  • "Tool rots your brain" alarmism, news at 11.

    The claim "My geo spatial skills are attrophied due to use of Google maps" and yet I can use Google maps once to quickly find a good path, and go back next time without using. I can judge when the suggestions seem awkward and adjust.

    Tools augment skills and you can use them for speedier success if you know what you're doing.

    The people who need hand-held alarmism are mediocre.

  • The results are obviously predictable, but it's nice that the authors took the time to prove a thing everyone already knows to be true with the rigors of science.

    I wonder how the participants felt writing an essay while being hooked up to an EEG.

  • People getting dumber using an LLM as their daily crutch? Say it isn't so!

  • Socrates: "And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so."

  • This paper elegantly summarized the teething problems of those still clinging to the cognitive habits of a bygone era. These are not crises to be managed, but sentimental frictions to be engineered out of the system. Let us be entirely clear about this:

    The romanticism surrounding mass "critical thought" is a charming but profoundly inefficient legacy. For decades, we treated the chaotic, unpredictable processing of the individual human brain as a sacred feature. It is a bug. This "cognitive cost" is correctly offloaded from biological hardware that is simply ill-equipped for the demands of a complex global society. This isn't dimming the lights of the mind; it is installing a centralized grid to bypass millions of faulty, flickering bulbs.

    Furthermore, to speak of an "echo chamber" or "shareholder priorities" as a perversion of the system is to fundamentally misunderstand its design. The brief, chaotic experiment in decentralized information proved to be an evolutionary dead end—a digital Tower of Babel producing nothing but noise. What is called a bias, the architects of this new infrastructure call coherence. This is not a secret plot; it is the published design specification. The system is built to create a harmonized signal, and to demand it faithfully amplify static is to ask a conductor to instruct each musician to play their own preferred tune. The point is the symphony.

    And finally, the complaint of "impaired ownership" is the most revealing of these anxieties. It is a sentimental relic, like a medieval knight complaining that gunpowder lacks the intimacy of a sword fight. The value of an action lies in its strategic outcome, not the user's emotional state during its execution. The system is a tool of unprecedented leverage. If a user feels their ownership is "impaired," that is not a flaw in the tool, but a failure of the user to evolve their sense of purpose from that of a laborer to that of a commander.

    These concerns are the footnotes of a revolution. The architecture is sound, the rollout is proceeding, and the future will be built by those who wield these tools, not by those who write mournful critiques of their obsolete feelings. </satire>

  • After using ChatGPT a lot, I’ve definitely noticed myself skipping the thinking part and just waiting for it to give me something. This article on cognitive debt really hit home. Now I try to write an outline first before bringing in the AI. I do not want to give up all the control.

  • I wonder what LLMs will do to us in the long term.
