Just a little more context: Jumprun lets you connect different data sources (like web searches/pages, APIs, X, YouTube videos, Notion, etc.) and use LLMs to analyze and visualize the data.
We support rich components like tables, timeseries, charts, and maps. We're working on automations at the moment, so that you can provide natural language conditions that trigger actions (like sending you an email or updating a page in Notion).
Our long-term vision is that canvases become more interactive and interconnected, so that you end up building mini applications without it feeling like you're using a low-code app builder.
How hard would it be to do a comparative analysis, say of Vision Pro, Oculus, and Quest? Does that capability exist? Or is it more of a summarization/vision board thing?