We call our approach Luna: LLM-powered unstructured analytics. We do see a world where LLMs are used to answer questions, but it's a more complex compound AI system that references a corpus (a knowledge source) and uses LLMs to process that data.
The discussion around context sizes is a red herring: they can't grow as fast as the demand for data.
The discussion around agents needs a lot more thinking through. They're likely to specialize; ours will come with strategies for answering questions, i.e., query planning.
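To make the "compound system with query planning" idea concrete, here is a minimal, hypothetical sketch: a planner decomposes a question into operations over a corpus (filter, map, reduce), and an LLM processes the data at each step. All names are illustrative assumptions, the LLM is stubbed out, and this is not Aryn's actual API or implementation.

```python
# Hypothetical sketch of LLM-powered unstructured analytics:
# a query planner produces a pipeline of corpus operations, and a
# (stubbed) LLM does the per-document processing. Illustrative only.

from dataclasses import dataclass


@dataclass
class Step:
    op: str      # one of "filter", "map", "reduce" in this toy sketch
    prompt: str  # instruction handed to the LLM for this step


def plan_query(question: str) -> list[Step]:
    """Toy planner: a real system would ask an LLM to produce this plan."""
    return [
        Step("filter", f"Keep documents relevant to: {question}"),
        Step("map", f"Extract facts answering: {question}"),
        Step("reduce", "Combine the extracted facts into one answer"),
    ]


def stub_llm(prompt: str, text: str) -> str:
    """Stand-in for a real LLM call; echoes input to keep the sketch runnable."""
    return f"({prompt.split()[0].lower()}) {text}"


def execute(plan: list[Step], corpus: list[str]) -> str:
    """Run the planned pipeline over the corpus."""
    docs = list(corpus)
    for step in plan:
        if step.op == "filter":
            # A real LLM would drop irrelevant docs; the stub keeps them all.
            docs = [d for d in docs if stub_llm(step.prompt, d)]
        elif step.op == "map":
            docs = [stub_llm(step.prompt, d) for d in docs]
        elif step.op == "reduce":
            docs = [" | ".join(docs)]
    return docs[0]


plan = plan_query("What were Q3 revenues?")
answer = execute(plan, ["Q3 revenue was $10M.", "Headcount grew 20%."])
print(answer)
```

The point of the sketch is the shape, not the stubs: the LLM is invoked per planned operation over the corpus, rather than being handed one big retrieved context.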
We think that RAG is fundamentally limited:
https://www.aryn.ai/post/rag-is-a-band-aid-we-need-llm-power...