Current progress on the evaluation module. See the README and documentation for usage.


Feel free to look over this and try it yourself, but it's around 800 lines, so don't feel pressured to review too deeply.

I'm leaning on the methods described in the structural feature extraction section of this paper: https://staff.aist.go.jp/m.goto/PAPER/TIEICE202309watanabe.pdf. There's no public implementation of it, though, so everything here is reimplemented from scratch (and potentially buggy).

Basically, the way it works is that I've defined a ScoreAccumulator that evaluates a set of metrics (either "comparison" or "non-comparison") on the target and prediction text; a small sketch follows the list below.

  • Comparison metrics compare the structure of similar stanzas as defined by the model/target data. For example, if two stanzas are both labeled "chorus", they will be compared with each other but not with the verses, and similarly for other stanza labels. I extracted around 30 different stanza labels in English and Spanish from the data (see text_processing_utils.py), which makes this a bit hacky, so feel free to contribute ideas.
  • Non-comparison metrics (currently the only one is lexical diversity via MTLD) are just run over the entire text.
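To make that concrete, here's a minimal sketch of the accumulator's flow. All names and signatures here are illustrative, not the module's actual API; the MTLD call assumes the lexical_diversity package (pip install lexical_diversity). The sketch pairs like-labeled stanzas within a single stanza list; wiring up target-vs-prediction pairs follows the same grouping idea.

```python
from collections import defaultdict
from itertools import combinations

from lexical_diversity import lex_div as ld  # assumed dependency for MTLD

class ScoreAccumulator:
    """Illustrative sketch only, not the real module's API."""

    def __init__(self, comparison_metrics, non_comparison_metrics):
        self.comparison_metrics = comparison_metrics          # fn(stanza_a, stanza_b) -> float
        self.non_comparison_metrics = non_comparison_metrics  # fn(full_text) -> float

    def evaluate(self, labeled_stanzas, full_text):
        """labeled_stanzas: list of (label, stanza_text) pairs, e.g. ("chorus", "...")."""
        results = defaultdict(list)

        # Bucket stanzas by label so choruses are only compared with choruses,
        # verses with verses, and so on.
        groups = defaultdict(list)
        for label, stanza in labeled_stanzas:
            groups[label].append(stanza)

        # Comparison metrics: every pair of stanzas sharing a label.
        for stanzas in groups.values():
            for a, b in combinations(stanzas, 2):
                for name, fn in self.comparison_metrics.items():
                    results[name].append(fn(a, b))

        # Non-comparison metrics: run once over the entire text.
        for name, fn in self.non_comparison_metrics.items():
            results[name].append(fn(full_text))

        # Average each metric's accumulated scores.
        return {name: sum(vals) / len(vals) for name, vals in results.items() if vals}

acc = ScoreAccumulator(
    comparison_metrics={
        # Toy structural metric: absolute difference in line counts.
        "line_count_diff": lambda a, b: abs(len(a.splitlines()) - len(b.splitlines())),
    },
    non_comparison_metrics={"mtld": lambda text: ld.mtld(text.split())},
)
```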

NOTE: Currently the function requires that stanzas are delineated by bracketed markers like [verse], [chorus], etc. Some model outputs (llama) delineate with parentheses instead, which creates an ambiguity problem: "()" also appears in actual lyrics, with highly variable text inside.
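For reference, here's one way to split on bracketed markers while ignoring parenthesized ad-libs, assuming a fixed label whitelist. The label set below is a placeholder; the real list lives in text_processing_utils.py.

```python
import re

# Placeholder label set; the actual ~30 English/Spanish labels come from
# text_processing_utils.py.
STANZA_LABELS = ("verse", "chorus", "bridge", "intro", "outro")

# Matches a whole line like "[Verse 1]" or "[chorus]"; parentheses never match.
MARKER_RE = re.compile(
    r"^\[\s*(" + "|".join(STANZA_LABELS) + r")[^\]]*\]\s*$",
    re.IGNORECASE | re.MULTILINE,
)

def split_stanzas(text):
    """Yield (label, stanza_text) pairs from marker-delineated lyrics."""
    matches = list(MARKER_RE.finditer(text))
    for i, m in enumerate(matches):
        start = m.end()
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        yield m.group(1).lower(), text[start:end].strip()
```

Anchoring on a whitelist of bracketed labels is what sidesteps the "()" ambiguity: a parenthesized ad-lib like "(oh oh oh)" can never be mistaken for a stanza boundary, because only bracketed, recognized labels count as markers.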
