Have LLMs solved natural language parsing?

  • It feels like it's sort of its own thing. LLMs are really good at morphing or fuzzy finding.

    An interesting example – I had a project where I needed to parse out addresses and dates in documents, but the formats were not standardized from one document to the next. Using LLMs was way easier than trying to regex or pattern match across the text (a rough sketch of that approach is at the end of this comment).

    But if you're trying to take a text document and break it down into some sort of structured output, the results you get from an LLM will be much more variable.
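
    A minimal sketch of that kind of extraction, assuming the OpenAI Python client; the model name, prompt, JSON shape, and document_text value are illustrative, not the original poster's setup:

      import json
      from openai import OpenAI  # assumes the openai package is installed and OPENAI_API_KEY is set

      client = OpenAI()

      # Hypothetical input with non-standard address/date formats.
      document_text = "Shipped on 3rd of March '24 to 12 Elm St, Apt 4, Springfield IL 62704."

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative model choice
          response_format={"type": "json_object"},  # ask the model to reply in JSON
          messages=[
              {"role": "system",
               "content": "Extract every postal address and date from the user's text. "
                          'Reply with JSON of the form {"addresses": [...], "dates": [...]}, '
                          "with dates normalized to YYYY-MM-DD."},
              {"role": "user", "content": document_text},
          ],
      )

      extracted = json.loads(response.choices[0].message.content)
      print(extracted["addresses"], extracted["dates"])

    The win over regex is that the format variation ("3rd of March", "03/03/24", "March 3, 2024") is absorbed by the model instead of an ever-growing pile of patterns, at the cost of the variability mentioned above.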

  • No. Word2Vec takes in words and converts them to high-dimensional vectors. The cosine distance between those vectors generally indicates similarity of meaning, and the difference between vectors can indicate a relationship: for example, [father]-[mother] is close in distance to [male]-[female]. (A short sketch of both properties follows at the end of this comment.)

    There's nothing like an abstract syntax tree, nor anything programmatic in the traditional sense of programming, going on inside the math of an LLM. It's all just weights and wibbly-wobbly, timey-wimey stuff in there.
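
    A small sketch of those two properties, assuming gensim and its downloadable pretrained Google News Word2Vec vectors (any pretrained word embeddings would behave similarly):

      import numpy as np
      import gensim.downloader as api  # assumes gensim is installed

      def cosine(a, b):
          # Cosine similarity: near 1.0 means the vectors point the same way.
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      wv = api.load("word2vec-google-news-300")  # large one-time download

      # Nearby vectors -> related meanings.
      print(cosine(wv["father"], wv["mother"]))   # relatively high
      print(cosine(wv["father"], wv["banana"]))   # much lower

      # Vector offsets capture relationships: the father->mother direction
      # roughly parallels the male->female direction.
      print(cosine(wv["father"] - wv["mother"], wv["male"] - wv["female"]))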

  • I think it’s useful to draw a Chomsky-esque distinction here between understanding and usefulness.

    I think LLMs haven’t advanced our understanding of how human language syntax/semantics work, but they’ve massively advanced our ability to work with it.

  • Not perfect, but pretrained embeddings from an LLM will handle >80% of your NLP problems.
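
    A minimal sketch of that workflow, assuming the sentence-transformers package and its small all-MiniLM-L6-v2 pretrained model; the texts and the retrieval task are illustrative:

      import numpy as np
      from sentence_transformers import SentenceTransformer

      model = SentenceTransformer("all-MiniLM-L6-v2")  # small pretrained embedding model

      docs = ["Refund my last order, please.",
              "Where is my package?",
              "How do I reset my password?"]
      query = "I never received my delivery."

      # Embed everything into the same vector space.
      doc_vecs = model.encode(docs, normalize_embeddings=True)
      query_vec = model.encode(query, normalize_embeddings=True)

      # With normalized vectors, a dot product is cosine similarity.
      scores = doc_vecs @ query_vec
      print(docs[int(np.argmax(scores))])  # nearest doc by meaning, not by shared keywords

    For classification, clustering, or search, the same vectors can be fed into ordinary tools (logistic regression, k-means, a vector index) with no grammar or parser anywhere in the pipeline.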

  • I think they show that parsing is not needed; it's a limited idealization. Why is parsing a goal?

  • Turns out, grammars and ASTs to represent natural language are a dead end in NLP.