Revisiting Minsky's Society of Mind in 2025

  • In 2004 I previewed Minsky's chapters-in-progress for "The Emotion Machine" and exchanged some comments with him (which was a thrill for me). Here is an excerpt from that exchange:

    Me: I am one of your readers who falls into the gap between research and implementation: I do neither. However, I am enough of a reader of research, and have done enough implementation and software project management, that when I read of ideas such as yours, I evaluate them for implementability. From this point of view, "The Society of Mind" was somewhat frustrating: while I could well believe in the plausibility of the ideas, and saw their value in organizing further thought, it was hard to see how they could be implemented. The ideas in "The Emotion Machine" feel more implementable.

    Minsky: Indeed it was. So, in fact, the new book is the result of 15 years of trying to fix this, by replacing the 'bottom-up' approach of SoM by the 'top-down' ideas of the Emotion machine.

  • MIT OpenCourseWare course including video lectures taught by Minsky himself:

    https://ocw.mit.edu/courses/6-868j-the-society-of-mind-fall-...

  • Finally someone mentions this. Maybe I've been in the wrong circles, but I've been wishing I had the time to implement a society-of-mind-inspired system ever since llamacpp got started, and I never saw anyone else reference it until now.

  • Good timing, I just started rereading my copy last week to get my vibe back.

    Not only is it great for tech nerds such as ourselves, but it's also a great philosophy for thinking about and living life. Such a phenomenal read: easy, simple, wonderful format. I wish more tech-focused books were written in this style.

  • As a teen in the '90s, I dismissed Marvin Minsky’s 1986 classic, The Society of Mind, as outdated. But decades later, as monolithic large language models reach their limits, Minsky’s vision—intelligence emerging from modular "agents"—seems strikingly prescient. Today’s Mixture-of-Experts models, multi-agent architectures, and internal oversight mechanisms are effectively operationalizing his insights, reshaping how we think about building robust, scalable, and aligned AI systems.
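    The Mixture-of-Experts parallel can be made concrete: a gating network scores a pool of specialist sub-networks ("agents" in Minsky's framing) and blends only the top-k of them for each input. A minimal, untrained sketch in plain NumPy (all names and sizes here are illustrative, not from any particular system):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy mixture-of-experts: each "agent" (expert) is a small linear map;
    # a gating network picks the top-k experts per input and blends them.
    D_IN, D_OUT, N_EXPERTS, TOP_K = 4, 3, 8, 2

    experts = [rng.normal(size=(D_IN, D_OUT)) for _ in range(N_EXPERTS)]
    gate_w = rng.normal(size=(D_IN, N_EXPERTS))

    def softmax(z):
        z = z - z.max()          # subtract max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    def moe_forward(x):
        """Route input x to its top-k experts, weighted by the gate."""
        scores = softmax(x @ gate_w)               # one gate score per expert
        top = np.argsort(scores)[-TOP_K:]          # indices of the k best experts
        weights = scores[top] / scores[top].sum()  # renormalize over chosen experts
        return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

    y = moe_forward(rng.normal(size=D_IN))
    print(y.shape)  # (3,)
    ```

    In a real MoE layer the experts are full feed-forward blocks and the gate is trained jointly with them, but the routing idea is the same: no single monolithic network sees every input.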

  • Jürgen Schmidhuber's team is working on this, applying these ideas in a modern context:

    https://arxiv.org/abs/2305.17066

    https://github.com/metauto-ai/NLSOM

    https://ieeexplore.ieee.org/document/10903668

  • Having studied sociology and psychology in my previous life, I am now surprised how relevant some of those almost forgotten ideas have become to my current life as a dev!

  • > Eventually, I dismissed Minsky’s theory as an interesting relic of AI history, far removed from the sleek deep learning models and monolithic AI systems rising to prominence.

    That was my read of it when I checked it out a few years ago: obsessed with explicit rule-based Lisp expert systems and "good old-fashioned AI" ideas that never made much sense, were nothing like how our minds work, and were obvious dead ends that did little of anything actually useful (imo). All that stuff made the AI field a running joke for decades.

    This feels a little like falsely attributing new ideas that work to old work that was pretty different? Is there something specific from Minsky that would change my mind about this?

    I recall reading that there were some early papers suggesting neural network ideas closer to the modern approach (iirc), but the hardware just didn't exist at the time for them to be tried. That stuff was pretty different from the mainstream ideas of the day, though, and distinct from Minsky's work (I thought).

  • Minsky disliked how Harry Harrison changed the ending of "The Turing Option", and wrote a different ending himself.

    (not directly related to the post but anyway)