Wow, it's so empty here, and this was posted 3 days ago... I wonder why?
I have a question if someone can answer: on GitHub they state 8x80GB for the 14B models, but I found no information on how long this fine-tuning takes.
Given the toolchain, it probably takes significant time.
Another question: wouldn't it be fun to hijack the training loop with some tasks set by humans? Would it improve results, or the opposite?
I also wonder whether at some point all tasks will degrade into "uh-oh moment" tasks: the most complex and perplexing ones, with no actual productive yield.