Experimental and negative results
Models that didn't always quite work out, but may still be of interest.
This is a merge of pre-trained language models created using mergekit.
This model constitutes an interesting negative result: it simply echoes its input back. Feel free to verify the failure mode, but don't expect anything interesting at the output level.
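A minimal sketch of that check, assuming the transformers library; the repository path below is a placeholder, since this card doesn't name the merged model's repo id:

```python
# Quick check of the echo failure mode: generation tends to repeat the prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "path/to/this-merge"  # placeholder: substitute the actual merged checkpoint
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

prompt = "What is the capital of France?"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))  # expect the prompt echoed back
```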
However, there is something very interesting going on in how the layernorm weights progress across layers in Gemma 3 12B compared with models like Nemo 2407 12B.
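One way to eyeball this, sketched under the assumptions that both checkpoints load through transformers in bf16 and that the RMSNorm scale vectors carry "norm" in their parameter names (true for both architectures); mistralai/Mistral-Nemo-Base-2407 is taken here to be the Nemo 2407 12B checkpoint:

```python
# Sketch: per-layer mean magnitude of norm scale weights, to compare how they
# progress through the stack in Gemma 3 12B versus Mistral Nemo 2407 12B.
import torch
from transformers import AutoModelForCausalLM

def norm_profile(repo: str) -> dict[str, list[float]]:
    # Loads the full checkpoint; each 12B model needs roughly 24 GB in bf16.
    model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)
    profile: dict[str, list[float]] = {}
    for name, param in model.named_parameters():
        if "norm" in name and param.ndim == 1:  # RMSNorm scale vectors
            kind = name.split(".")[-2]  # e.g. input_layernorm, post_attention_layernorm
            profile.setdefault(kind, []).append(param.float().abs().mean().item())
    del model
    return profile

for repo in ("google/gemma-3-12b-pt", "mistralai/Mistral-Nemo-Base-2407"):
    for kind, means in norm_profile(repo).items():
        print(repo, kind, [round(m, 3) for m in means])
```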
This model was merged using the DeLERP merge method, with google/gemma-3-12b-pt as the base.
The following models were included in the merge:

* google/gemma-3-12b-it
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: google/gemma-3-12b-it
  - model: google/gemma-3-12b-pt
merge_method: delerp
base_model: google/gemma-3-12b-pt
parameters:
  t:
    - value: 0.999
dtype: bfloat16
```
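For intuition about t = 0.999: if delerp's t behaves like the interpolation parameter in mergekit's other interpolation methods (t = 0 keeps the base model, t = 1 keeps the other model), the merge sits almost entirely at google/gemma-3-12b-it, with only a ~0.1% contribution from the base. A plain-lerp sketch of that weighting (the exact delerp formula lives in mergekit; this is only the t-weighting for intuition):

```python
# How close t = 0.999 sits to the non-base endpoint under plain interpolation.
import torch

torch.manual_seed(0)

def lerp(base: torch.Tensor, other: torch.Tensor, t: float) -> torch.Tensor:
    # t = 0 returns the base tensor, t = 1 the other; t = 0.999 is nearly `other`.
    return (1.0 - t) * base + t * other

base, other = torch.randn(4), torch.randn(4)
merged = lerp(base, other, t=0.999)
print(merged - other)  # residual on the order of 1e-3 * (base - other)
```

To rerun the merge itself, the configuration above can be saved as config.yaml and passed to mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-model-directory`.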