constant fine tuning - next steps to finetune this model?
#2 by fblgit - opened
What do you mean by that exactly?
This model is a merge of the two Qwen models, followed by a further SFT & DPO session?
According to the "continuous finetuning" theory, how would this model be finetuned again?
Separately, and just to be fair: the mergekit reproduction is the real deal.
MD5 checksums of the uploaded weight blobs:
898cd7942ca145c7828f36af58f62ddf 07bfe0640cb5a0037f9322287fbfc682806cf672
41e93fdb943f5257e5483a8f9fa585c1 16d6b0aaf9d2a3862c32982015675857d9accd437b2da5dfa793873d7fd9d90d
5c4cd47da7fba88377ed1a942f2be1e2 225ce09e4887b0c7a202cf020403c4656c07dd4b9864196d01f6ac3117b70140
6a948be62cb67c66ab5b7dbea0cfb645 31d3ed19260801686e3d68bb5958101dc054d4e25efc4f936407a2177e21a148
1627fac78f91294a9723d46eb6e1717e 3c345e465205526d22d6c9284c6c98b60e5b088c539c7b5ce5716cec9cc6a8dd
8b9f7a05c3c5443dc816ab1bc88613e0 3f50ee76a093924572a082e44c1232de15f78d19035ce48a4d15a7b8433805be
3f99a313c3754208a7f3acfccadc53cd 443909a61d429dff23010e5bddd28ff530edda00
613b8e4a622c4a2c90e9e1245fc540d6 4783fe10ac3adce15ac8f358ef5462739852c569
33ed539e1e1c7728812613cd603bf0f2 4c08fd18bb52efaa5275adf2d39ae21c69b782f5c91f99533bcb96a5e2ec60c3
5444a2b371258a2688548dad4ef185a7 55c1a40ef0d58afa9fd6b29976f65af355edbbc3
d45eb169863953385758fabac4d70ad3 5709b6f3fed9a5235386110727d475784d0adc75
d9ffc8321aee6a87285f7caef26adb21 6497ad4cf0b12fcef224823b7b12e44c0ddab24adb25a91580958eb3db1cec37
f6c91e5203c4e4caa51739be0dd69913 7297420e866d397f63efe21615b685240520ecdd14863e0432af3c184ca6ba53
8f71d03930529c93afd579d70f06a63c 7cba08164fcea1c7a24678c8355a9c43fe46d63726089d007a8165026b0fc876
0f19c4d756b967db0ef93c0c8ef77b11 989289c4009026e35063c943c2a228e2c0873d31
a859f8a89685747ffd4171b870540c41 a6344aac8c09253b3b630fb776ae94478aa0275b
63921a3fc45e2de0d116c96b35661bb2 a8e152ca85985311500959d35c25c74eb33d044b124d0dace0502a274c17f7ac
29e4ee1f63af5989778778dbfcfc5e59 bf077f03dc569cfb8a90b3ec1ad20365a620bad6
903bdd80a972a00c6362e6d6f442be08 ca40ca7bd3de63a1577396a23071309913a83da6181322ad5238982d3dd0de04
5688a2079a8778748a0afdafd366b321 da8ac9bf0f00150f6b0220d05b06a8a780f5182848bd5e6c04bcb1c0dc4afb04
f9437358554ea1c86d54450ca02f6c8c df9da63046cfffd4f1ef9c09e33e4894e37a8e75ffa521894c8c6ac7ea62a8d6
454ed993b32bda0ec6d6ce937d518eff e3419048e5da8ade678dc83b69e2753d781f65c0e9cb9a6762b3fdd268a61297
6030665179300710befea45c62d5524e ef819a886f85df01023efb9e9cbac1bedc5702c3
MD5 checksums of the output weights produced from the author's mergekit.yml:
b9a7a9cfe915c587f6fa31268b8529b0 /data/tools/mergekit/output-model-4/added_tokens.json
3d15398b8af55ec498aead0f6c838399 /data/tools/mergekit/output-model-4/config.json
6a1938cc79fe5b8ab3491f1b0d834076 /data/tools/mergekit/output-model-4/mergekit_config.yml
72dacf1de43bc354dc3c521e44ed0b24 /data/tools/mergekit/output-model-4/merges.txt
33ed539e1e1c7728812613cd603bf0f2 /data/tools/mergekit/output-model-4/model-00001-of-00014.safetensors
63921a3fc45e2de0d116c96b35661bb2 /data/tools/mergekit/output-model-4/model-00002-of-00014.safetensors
8f71d03930529c93afd579d70f06a63c /data/tools/mergekit/output-model-4/model-00003-of-00014.safetensors
8b9f7a05c3c5443dc816ab1bc88613e0 /data/tools/mergekit/output-model-4/model-00004-of-00014.safetensors
1627fac78f91294a9723d46eb6e1717e /data/tools/mergekit/output-model-4/model-00005-of-00014.safetensors
f9437358554ea1c86d54450ca02f6c8c /data/tools/mergekit/output-model-4/model-00006-of-00014.safetensors
454ed993b32bda0ec6d6ce937d518eff /data/tools/mergekit/output-model-4/model-00007-of-00014.safetensors
d9ffc8321aee6a87285f7caef26adb21 /data/tools/mergekit/output-model-4/model-00008-of-00014.safetensors
41e93fdb943f5257e5483a8f9fa585c1 /data/tools/mergekit/output-model-4/model-00009-of-00014.safetensors
f6c91e5203c4e4caa51739be0dd69913 /data/tools/mergekit/output-model-4/model-00010-of-00014.safetensors
6a948be62cb67c66ab5b7dbea0cfb645 /data/tools/mergekit/output-model-4/model-00011-of-00014.safetensors
5688a2079a8778748a0afdafd366b321 /data/tools/mergekit/output-model-4/model-00012-of-00014.safetensors
903bdd80a972a00c6362e6d6f442be08 /data/tools/mergekit/output-model-4/model-00013-of-00014.safetensors
5c4cd47da7fba88377ed1a942f2be1e2 /data/tools/mergekit/output-model-4/model-00014-of-00014.safetensors
5444a2b371258a2688548dad4ef185a7 /data/tools/mergekit/output-model-4/model.safetensors.index.json
d21783a0a8dbf28e194f79e185a42e16 /data/tools/mergekit/output-model-4/README.md
3ba08f4449a309b27e5c4ab64315daf0 /data/tools/mergekit/output-model-4/special_tokens_map.json
9254fb07cdd4da44092a7df03cf6bc33 /data/tools/mergekit/output-model-4/tokenizer_config.json
aa2988694a177776324112349a8df055 /data/tools/mergekit/output-model-4/tokenizer.json
613b8e4a622c4a2c90e9e1245fc540d6 /data/tools/mergekit/output-model-4/vocab.json
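For anyone who wants to repeat the check, the comparison above can be sketched in Python. The paths in the commented-out usage are placeholders, not the author's actual layout; the idea is simply to hash every shard in each directory and compare the digest sets (the uploaded blob filenames differ from the shard names, so digests are matched by value rather than by name):

```python
import hashlib
from pathlib import Path

def md5_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through MD5 so multi-GB shards don't need to fit in RAM."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def md5_set(directory: str, pattern: str = "*") -> set[str]:
    """MD5 digests of every file matching `pattern` under `directory`."""
    return {md5_of_file(p) for p in Path(directory).glob(pattern) if p.is_file()}

# Hypothetical paths -- adjust to wherever the downloaded blobs and the
# local mergekit output actually live:
# uploaded = md5_set("path/to/hf-cache/blobs")
# merged = md5_set("/data/tools/mergekit/output-model-4", "model-*.safetensors")
# print("all merged shards present in upload:", merged <= uploaded)
```

Comparing as sets sidesteps the filename mismatch between the hub's content-addressed blobs and the local shard names, which is exactly why the two lists above pair the same digests against different identifiers.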
I was able to reproduce the MD5 hashes of the uploaded weights against the merged weights, no doubt; this confirms the reproducibility of the experiment.
It does not confirm or empirically prove the "infinite loss-less training" theory, but it has something "special" worth considering in a tier-1 bloodline modeling approach.
In any case, Kudos to the author.
fblgit changed discussion status to closed