The paper link on that site doesn't work -- here's a working link:
Out of interest, why does it depend on, or at least recommend, such an old version of Python (3.10)?
Can someone explain why this would be more effective than a system prompt? (Or just point me to where it was tested against that, I suppose.)
Interesting work on adapting LoRA adapters. A similar idea applied to VLMs: https://arxiv.org/abs/2412.16777
An alternative to prefix caching?
What is such a thing good for?
LoRA adapters effectively modify the model's internal weights, by adding trainable low-rank updates on top of the frozen base weights.
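For anyone unfamiliar, a minimal sketch of the idea (my own illustration, not code from the paper): the frozen weight matrix W is augmented with a low-rank product A @ B, and only A and B are trained. The dimensions and scaling factor below are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 8, 2, 4   # r << d: the "low rank" part

W = rng.normal(size=(d_in, d_out))        # frozen base weight
A = rng.normal(size=(d_in, r)) * 0.01     # trainable low-rank factor
B = np.zeros((r, d_out))                  # zero-init, so the adapter
                                          # starts as an exact no-op
x = rng.normal(size=(1, d_in))

y_base = x @ W
y_lora = x @ W + (alpha / r) * (x @ A @ B)  # adapter adds a low-rank delta

# With B = 0 the two outputs match; training A and B changes the
# effective weights without ever touching W itself.
assert np.allclose(y_base, y_lora)
```

The point is that W is never updated in place: the adapter stores only A and B (2 * d * r parameters instead of d * d), which is why adapters are cheap to train, swap, and merge.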
Sounds like a good candidate for an MCP tool!
I got very briefly excited that this might be a new application layer on top of Meshtastic.