Defines the metric used to evaluate the model's performance.

We provide several metric options for evaluating your model's performance.
In addition to the BLEU score and Perplexity, we offer GPT metrics that use the OpenAI API to judge whether
the predicted answer is preferable to the ground truth answer.
To use these metrics, either export your OpenAI API key as an environment variable before starting LLM Studio,
or specify it in the Settings menu within the UI.
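For example, on Linux or macOS the export step might look like the sketch below. The variable name `OPENAI_API_KEY` is the one conventionally read by the OpenAI client libraries; confirm the exact name your LLM Studio version expects before relying on it.

```shell
# Set the API key in the current shell session.
# OPENAI_API_KEY is the conventional variable name; the key value here
# is a placeholder, not a real credential.
export OPENAI_API_KEY="sk-your-key-here"

# Start LLM Studio from this same shell so the variable is inherited
# by the application process.
```

Note that the variable must be set in the same shell (or a parent process) that launches LLM Studio; exporting it in a different terminal after startup has no effect.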