Tim Bray on Grokipedia

  • I checked a topic I care about, and that I have personally researched because the publicly available information is pretty bad.

    The article is even worse than the one on Wikipedia. It follows the same structure but fails to tell a coherent story. It references random people on Reddit (!) who don't even support the point it's trying to make. Not that the information on Reddit is particularly good to begin with, even if it were properly interpreted. It cites Forbes articles parroting pretty insane and unsubstantiated claims; I thought mainstream media was not to be trusted?

    In the end it's longer, written in a weird style, and doesn't really bring any value. Asking Grok about the same topic and instructing it to be succinct yields much better results.

  • Maybe it's just me, but reading through LLM generated prose becomes a drag very quickly. The em dashes sprinkled everywhere, the "it's not this, it's that" style of writing. I even tried listening to it and it's still exhausting. Maybe it's the ubiquity of it nowadays that is making me jaded, but I tend to appreciate terrible writing, like I'm doing in this comment, more nowadays.

  • Wondering if the project will get better from the pushback or will just be folded like one of Elon's many ADHD experiments. In a sense, encyclopedias should be easy for LLMs: they are meant to survey and summarize well-documented material rather than contain novel insights; they are often imprecise and muddled already (look at https://en.wikipedia.org/wiki/Binary_tree and see how many conventions coexist without an explanation of their differences; it used to be worse a few years ago); the writing style is pretty much that of GPT-5. But the problem type of "summarize a biased source and try to remove the bias" isn't among the ones I've seen LLMs being tested for, and this is what Elon's project lives and dies by.

    If I were doing a project like this, I would hire a few dozen topical experts to go over the WP articles relevant to their fields and comment on their biases rather than waste their time rewriting the articles from scratch. The results can then be published as a study, and can probably be used to shame the WP into cleaning their shit up, without needlessly duplicating the 90% of the work that it has been doing well.

  • "Wikipedia, in my mind, has two main purposes: A quick visit to find out the basics about some city or person or plant or whatever, or a deep-dive to find out what we really know about genetic linkages to autism or Bach’s relationship with Frederick the Great or whatever."

    Completely agree with the first purpose but would never use Wikipedia for the second. It's only good at basics and cannot handle complex information well.

  • Not sure it still does this, but for a while, if you asked Grok a question about a sensitive topic and expanded the thinking, it said it was searching Elon's twitter history for its ground truth perspective.

    So instead of a Truth-maximizing AI, it's an Elon-maximizing AI.

  • I looked at Grokipedia today and spot-checked for references to my own publications, which exist in Wikipedia. As is often reported, it very directly plagiarizes Wikipedia. But it did remove dead links. This is pretty underwhelming even on the Musk hype scale.

  • Why give it oxygen?

  • Grokipedia seems to serve no purpose to me. It's AI slop fossilized. Like if I wanted the AI opinion on something I would just ask the AI. Having it go through and generate static webpages for every topic under the sun seems pointless.

  • Grokipedia is a joke. A lot of the articles I've checked are AI slop at its worst, and at the bottom each says "The content is adapted from Wikipedia, licensed under Creative Commons Attribution-ShareAlike 4.0 License."

  • Grokipedia might have a better present-tense understanding as it hoovers up data.

    One great feature of Wikipedia is being able to download it and query a local snapshot.

    As a technical matter, Grokipedia could do something like that eventually. It does not appear to support snapshots at the 0.1 version.

  • Interesting that I'm only now learning about Grokipedia. I'd never heard of it until someone said it's bad, so my natural instinct is to check it out.

    Guess that's plus one for "it doesn't matter what they say as long as they say."

  • So, how often does it awkwardly bring up white genocide in South Africa in unrelated contexts?

  • Dead Internet Theory is no longer a theory huh?

  • > Woke/Anti-Woke · The whole point, one gathers, is to provide an antidote to Wikipedia’s alleged woke bias

    According to the Manhattan Institute, as cited by The Economist, even Grok has a leftward bias (roughly on par with all the other big models).

    https://www.economist.com/international/2025/08/28/donald-tr...

  • I don’t really know who Tim Bray is and until now I had never been to Grokipedia. I don’t really like Grok - I tried Superheavy and it was slow, bloated and no better than Claude Opus.

    But I have a bad habit of fact checking. It’s the engineer in me. You tell me something, I instinctively verify. In the linked article, in the sub-section ‘References’, Mr. Bray opines that a reference does not directly relate to the content cited. So I went to Grokipedia for the first time ever and checked.

    Mr. Bray’s quote of a quote he says he couldn’t find is misleading. The sentence on Grokipedia includes two references, of which he includes only the first. This first reference relates to his work with the FTC. The second part of the sentence relates to the second reference. Specifically, in the linked Tim Bray article on Grokipedia, reference number 50, paragraph 756, cleanly addresses the issue raised by Mr. Bray.

    After that I stopped reading, still don’t know or care who Tim Bray is and don’t plan on using either Grokipedia or Grok in the near future.

    Perhaps Mr. Bray did not fully explore the references, or perhaps there was malice. I don’t know. Horseshoe theory applies: pure pro- positions and pure anti- positions are idiotic and should be filtered accordingly. Filter thusly applied.

  • Wikipedia is a great educational resource and one I've donated to for over a decade. That said, I like the idea of Grokipedia in the sense that it's another potential source I can look at for more information and get multiple perspectives. If there's anything factual in Grokipedia that Wikipedia is missing, Wikipedia can be updated to include it.

    I hope we can keep growing freely available sources of information, even if some of that information is incorrect or flat out manipulative. This isn't anything new; it's what the web has always been.

  • It is a disinformation project aimed at morons and morally bankrupt monsters, powered and funded by one of history’s bloodiest mass murderers. Not sure why this takes four pages to investigate.

  • On the other hand, I click on a Wikipedia article and I'm immediately bombarded with "[blank] is an alt-right neo-nazi fascist authoritarian homophobic transphobic bigoted conspiracy theory (Source: PLEASE PLEASE PLEASE HATE THIS TOPIC I BEG YOU)"

    At least Grokipedia tries to look like it was written with the intent to inform, not spoonfeed an opinion.

  • These hot takes are somewhat useless honestly. People give these point-in-time opinions ignoring that the rate of improvement is exponential when it comes to software. The last three, four years of heavy AI utilization have been refreshing.

    I personally treat these things the same way I treat car accidents: if an autonomous system still has accidents but has less than human drivers do, it’s a success. Given the amount of nonsense and factually incorrect things people spout, I’d still call Grok even at this early stage a major success.

    Also I’m a big fan of how it ties in nuanced details to present a more comprehensive story. I read both TBray’s Wiki and Groki entries. The Groki version has some solid info that I suppose I should expect of an AI that can pull in a larger corpus of data. A human editor would of course omit that, or change it, and then Wiki admins would have to lock the page as changes erupt into a silly flame war over what’s factually accurate. Because we can’t seem to agree.

    Anyway - good stuff! Looking forward to more of Grok. Very fitting name, actually.

  • [flagged]

  • Grokipedia is VERY rough to read at the moment, and has a clear pro-capitalist / 'classical right wing' bias (reading the economic pages).

    However, it's still v0.1; we'll see what v1 looks like.

  • At a glance, Grokipedia seems quite promising to me, considering how new it is. There are plenty of external citations, so rather than relying on a model to recall information internally, it’s likely effectively just summarizing external references. The fact that it’s automatically generated at scale means it can be iterated on to improve fact checking reliability, exclude certain known sources as unreliable, and ensure it has up-to-date and valid citation links. We’ll have to wait and see how it changes over time, but I expect an AI driven online encyclopedia to eventually replace the need for a fully human wikipedia.

  • We may joke about it, but the fact is that it's by releasing dumb ideas like this that you sometimes get masterpieces. Maybe this one really is just one of the bad ones, but eventually Elon will have some good ones, just like he already has.

    And a lot of us would be better off releasing our dumb ideas too. The world has a lot of issues, and nothing gets fixed if all you do is talk things down without trying to fix anything yourself. Maybe it's time to get off the web a little and do something else.