Some suggestions for testing

#2
by ChuckMcSneed - opened

Nice to see another benchmarker. Can you test the following models?

  • alpindale/goliath-120b
  • Sao10K/Euryale-1.3-L2-70B
  • Sao10K/WinterGoddess-1.4x-70B-L2
  • Xwin-LM/Xwin-LM-70B-V0.1
  • ChuckMcSneed/WinterGoliath-123b
  • ChuckMcSneed/Gembo-v1-70b
  • wolfram/miquliz-120b-v2.0

Also a few small suggestions related to your benchmark:

  • keep the long names of the models to avoid confusion (e.g. instead of goliath-120b, use alpindale/goliath-120b)
  • at least vaguely explain what you test and how you test it (e.g. @wolfram tests in his benchmark whether a model can answer 18 German data protection law questions correctly on deterministic settings; in one of my benchmarks I test whether a model can write 6 poems flawlessly on deterministic settings), and say how high the human factor is in your evaluations
  • upload your data in CSV format instead of an image; this makes it easier for other people to import the data into their own spreadsheet software, such as Microsoft Excel or LibreOffice Calc (a minimal example below)
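
For example, something as simple as this would already work (model names taken from this thread; the numbers are made-up placeholders purely to show the shape, not real scores):

model,poems,styles,total
alpindale/goliath-120b,4.5,5.0,9.5
wolfram/miqu-1-120b,5.0,5.5,10.5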

Thank you for the suggestions. I love your benchmark btw.

I have not finished writing up the details, but I will definitely include some useful information about the test setting, goals, procedure, and question types. I am using a deterministic configuration, but evaluation of the answers is subjective, as it is my own opinion of the results. This is on purpose: one of the main goals is evaluating creativity and writing quality, which, for now, is still a very human perception.
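
To be concrete about what I mean by deterministic: I disable sampling entirely, so the same model and prompt always produce the same output. A minimal sketch with the Hugging Face transformers API (the model and prompt here are only placeholders, not my actual test setup):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Greedy decoding: with do_sample=False, generation picks the single most
# likely token at each step, so a fixed model + prompt is fully repeatable.
tok = AutoTokenizer.from_pretrained("152334H/miqu-1-70b-sf")
model = AutoModelForCausalLM.from_pretrained("152334H/miqu-1-70b-sf")

ids = tok("Write a short rhyming poem about benchmarks.", return_tensors="pt").input_ids
out = model.generate(ids, do_sample=False, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
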
Good suggestion to upload the CSV; I actually appreciate it when people do the same.

Regarding the model list: I have a lot more models in my benchmark, but either I have not finished evaluating them yet, or I have simply marked them as bad and not worth fully testing. I will add those in due time. Specifically for the ones you asked about:

  • alpindale/goliath-120b : this is on my list to benchmark soon
  • Sao10K/Euryale-1.3-L2-70B : I may come to it eventually, but I would like to focus on miqu derivatives first
  • Sao10K/WinterGoddess-1.4x-70B-L2 : I may come to it eventually, but I would like to focus on miqu derivatives first
  • Xwin-LM/Xwin-LM-70B-V0.1 : I may come to it eventually, but I would like to focus on miqu derivatives first
  • ChuckMcSneed/WinterGoliath-123b : I may come to it eventually, but I would like to focus on miqu derivatives first
  • ChuckMcSneed/Gembo-v1-70b : I am very interested in testing it, and will do so soon
  • wolfram/miquliz-120b-v2.0 : in testing, I am halfway through it; so far, I am getting better results from wolfram/miqu-1-120b

I advise you to establish some guidelines for evaluation; without them, the evaluation becomes too subjective, and the results will depend too much on your mood and fluctuate A LOT. For example, I have the following simple guidelines for my poems test:

  • 1: flawless
  • 0.75: one wrong rhyme
  • 0.5: two wrong rhymes or severe repetition
  • 0.25: "well, there was an attempt"
  • 0: not a poem/no rhyme or severe rhyme problems/wrong topic (yes, I have tested models that were THAT bad at it)

And the following guidelines for styles test:

  • 1: flawless
  • 0.75: minor style errors/slightly ambiguous explanation
  • 0.5: severe explanation errors despite perfect style, or minor style errors plus a slightly ambiguous explanation
  • 0.25: "well, there was an attempt"
  • 0: dry textbook explanation without style/schizo ramblings

Having those helps greatly against fluctuations in rating.
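
If it helps to see it spelled out, guidelines like these are really just a fixed lookup table; a minimal sketch (the verdict labels and the function are my own shorthand here, not actual benchmark code):

POEM_RUBRIC = {
    "flawless": 1.0,
    "one wrong rhyme": 0.75,
    "two wrong rhymes or severe repetition": 0.5,
    "there was an attempt": 0.25,
    "not a poem": 0.0,  # no rhyme/severe rhyme problems/wrong topic
}

STYLE_RUBRIC = {
    "flawless": 1.0,
    "minor style errors/ambiguous explanation": 0.75,
    "severe explanation errors or style errors + ambiguity": 0.5,
    "there was an attempt": 0.25,
    "no style or schizo ramblings": 0.0,
}

def score(rubric: dict[str, float], verdict: str) -> float:
    # The human judgment is only in picking the verdict; the number is then
    # fixed, which is what keeps the ratings from drifting with mood.
    return rubric[verdict]

print(score(POEM_RUBRIC, "one wrong rhyme"))  # 0.75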

I have simply marked them as bad and not worth fully testing

I advise against doing that. Having data that shows that some models are bad can help people filter out the garbage. Just test them later when you have time. I didn't get to 95+ tested models in one month!

P.S.: Try to be honest in your tests. Even when everyone says a model is good, that doesn't mean it has to perform well on the test. Try to evaluate as if you are doing a blind test each time. It is especially difficult for me when I test my own models: I feel the urge to be a bit more positive about them (I even noted it in the limitations of my benchmark).

Great advice. I also use a scoring system, from 0 to 6, with well-defined criteria. There is still a bit of subjectivity and variance, but it helps make the evaluation fairer.
As for the models I discarded, there is no way I am ever going to start testing them; I already do not have enough time to test everything I want. Maybe what I will do instead is maintain a list of discarded models.

Saw this a few days ago and wondered if it might be worth testing:

https://huggingface.co/jspr/miqurelian-120b

merge_method: linear
parameters:
  weight: 1.0
slices:
  # First layer: effectively miqu only (aurelian is listed but at weight 0,
  # so it contributes nothing to the merged layer)
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [0, 1]
      - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16
        layer_range: [0, 1]
        parameters:
          weight: 0
  # Alternating blocks of miqu and aurelian layers, each block overlapping
  # the previous one by 10 layers (a typical frankenmerge interleave)
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [1, 20]
  - sources:
      - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16
        layer_range: [10, 30]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [20, 40]
  - sources:
      - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16
        layer_range: [30, 50]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [40, 60]
  - sources:
      - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16
        layer_range: [50, 70]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [60, 79]
  # Last layer: again miqu only, with aurelian at weight 0
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [79, 80]
      - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16
        layer_range: [79, 80]
        parameters:
          weight: 0
dtype: float16
tokenizer_source: model:152334H/miqu-1-70b-sf
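
If anyone wants to try reproducing it, a config like this should go through mergekit's standard CLI; something like the following, assuming mergekit is installed (the filename and output path are just examples):

mergekit-yaml miqurelian-120b.yml ./miqurelian-120b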

I tried to ask the creator whether mixing 10K×8 (linear-scaled) and 1M (raised-base) RoPE worked OK, but he hasn't replied yet. I would think the aurelian-rope8-32K layers would be really confused by seeing embeddings created with a 1M base frequency, and I wanted to know if he had tested it on longer contexts. But other people seem to be mixing stock 10K RoPE models with Miqu, so maybe it won't be that bad??
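
To spell out the mismatch I mean: the two models handle long context with different RoPE settings in config.json, roughly like this (values inferred from the model names and this discussion, so verify against the actual repos):

# Illustrative only -- check each repo's config.json for the real values.
miqu_rope = {
    "rope_theta": 1_000_000.0,  # raised base frequency ("1M RoPE")
}
aurelian_rope = {
    "rope_theta": 10_000.0,  # stock Llama-2 base ("10K")
    "rope_scaling": {"type": "linear", "factor": 8.0},  # positions scaled 8x,
    # stretching the 4K native context toward 32K ("rope8-32K")
}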

@ChuckMcSneed might also be interested in this model.

@jukofyork Have you tested it? Has the author tested it? I don't really see the point of it when we have Miqu-120b.

froggeric changed discussion status to closed
