I (or, more accurately, A.I.) made the FOSS mobile app https://www.crayeye.com to make it easier to experiment with multimodal vision prompts augmented by device data (e.g., location and date/time).
While the tool still defaults to GPT-4V / GPT-4o, it now supports configuring custom engines (via OpenAPI spec) that can point to any API/model; this has been tested with LLaVA (and BakLLaVA) running locally via Ollama.
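For anyone who wants to verify the local-model path before wiring it into the app, here is a minimal Python sketch that queries a LLaVA model served by Ollama. It assumes Ollama's default endpoint (http://localhost:11434/api/generate) and that you have pulled the model with `ollama pull llava`; the file name and prompt are placeholders, and this is not Crayeye's internal engine config, just the kind of endpoint a custom engine would point at.

```python
import base64
import json
import urllib.request

# Default Ollama endpoint (assumption: standard local install on port 11434).
OLLAMA_URL = "http://localhost:11434/api/generate"

def describe_image(image_path: str, prompt: str) -> str:
    """Send an image plus a text prompt to a local multimodal model via Ollama."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    payload = {
        "model": "llava",       # or "bakllava", if that model has been pulled
        "prompt": prompt,
        "images": [image_b64],  # Ollama accepts base64-encoded images
        "stream": False,        # return a single JSON object, not a stream
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # "photo.jpg" is a placeholder; device data such as location or time
    # could be interpolated into the prompt string, as the app does.
    print(describe_image("photo.jpg", "What is in this image?"))
```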