Gemma3NPC
A collection of Gemma3n E4B models fine-tuned on the PIPPA dataset, intended as a general roleplaying model.
This is the Q8_0 quantized version of Gemma3NPC-Float16.
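As a quick usage sketch, the GGUF file can be loaded with llama-cpp-python. The file name, prompt, and settings below are illustrative assumptions, not taken from this repository:

```python
# Minimal usage sketch with llama-cpp-python; the model file name is
# hypothetical -- substitute the actual GGUF file from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="Gemma3NPC.Q8_0.gguf",  # assumed file name
    n_ctx=4096,                        # context length; adjust as needed
)

# Simple completion-style call; chat templating is omitted for brevity.
output = llm("You are a tavern keeper. Greet the adventurer:", max_tokens=128)
print(output["choices"][0]["text"])
```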
We trained this model as a rank-16 LoRA adapter for one epoch over PIPPA on a 40 GB A100 in Google Colab. For this run, we used a learning rate of 2e-5, a batch size of 1 with 16 gradient accumulation steps (an effective batch size of 16), a cosine learning rate scheduler with an 800-step warmup, and gradient clipping at 0.4.
Check out our training notebook here.
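For reference, those hyperparameters correspond roughly to the configuration below. This is a minimal sketch assuming a Hugging Face TRL + PEFT setup; the LoRA alpha value and output path are assumptions, and the actual notebook may differ:

```python
# Sketch of the training configuration described above; assumes TRL + PEFT.
from peft import LoraConfig
from trl import SFTConfig

lora_config = LoraConfig(
    r=16,                      # rank-16 LoRA adapter
    lora_alpha=16,             # assumption: alpha is not stated in the card
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    output_dir="gemma3npc-lora",     # hypothetical output path
    num_train_epochs=1,              # one epoch over PIPPA
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,  # effective batch size of 16
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_steps=800,
    max_grad_norm=0.4,               # gradient clipping at 0.4
)
```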
Here is a graph of the Step Training Loss, saved every 10 steps: