Running result
About
This demonstration is based on the following settings:
Pre-trained model: Vicuna 7B in 8-bit
Fine-tuning adapter: LoRA
Participants: 4 (2 Proposers, 2 Voters)
Total communication rounds: 4
Local epochs per participant: 1
Stake requirement: 10
Total reward pool: 1000
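The settings above can be summarized in a small configuration sketch. All field names below are hypothetical illustrations for readability; they are not FLockLLM's actual configuration schema.

```python
# Hypothetical configuration sketch mirroring the demo settings above.
# Field names are illustrative only, not FLockLLM's real schema.

demo_config = {
    "base_model": "vicuna-7b",     # pre-trained model
    "load_in_8bit": True,          # 8-bit quantization
    "adapter": "lora",             # LoRA fine-tuning adapter
    "num_proposers": 2,
    "num_voters": 2,
    "communication_rounds": 4,     # total rounds of aggregation
    "local_epochs": 1,             # epochs per participant per round
    "stake_requirement": 10,
    "reward_pool": 1000,
}

# Sanity check: proposers plus voters equals the total participant count.
total_participants = demo_config["num_proposers"] + demo_config["num_voters"]
assert total_participants == 4
```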
Training logs
Below are the training logs from the proposers' and voters' local machines.
Fine-tuned model demo
We compared the fine-tuned Vicuna model against the original pre-trained Vicuna model. The results are shown below:
Note: the left screen shows FLockLLM (fine-tuned); the right screen shows Vicuna (original)