DaliaGala committed on
Commit 6b6493e
1 Parent(s): a107237

Copy model accuracy rates description

pages/2_📈_Visualize_the_Results.py CHANGED
@@ -490,6 +490,16 @@ def model_vis(full_df):
 
 def filter_for_protected(data):
     st.markdown('''Sometimes, the overall model metrics can be deceptive when it comes to predicting the results for different groups under consideration. Ideally, these metrics would be similar across the different groups, indicating that the overall model performance reflects how the model performs for any given group. This is often not the case: you will likely see that models A and B perform differently on these metrics, and even the same model can have different metrics for different subgroups.''')
+    st.markdown('''We can represent each model in terms of its accuracy rates, calculated using the numbers of candidates in each group:''')
+    st.markdown('''
+    - True Positive Rate (TPR), also called recall, sensitivity or hit rate, is the probability of a positive test result when the actual outcome is positive.
+    - True Negative Rate (TNR), or specificity/selectivity, is the probability of a negative test result when the actual outcome is negative.
+    - Positive Predictive Value (PPV), or precision, is the ratio of truly positive results to all positive results.
+    - Negative Predictive Value (NPV) is the ratio of truly negative results to all negative results.
+    - False Positive Rate (FPR), or fall-out, is the probability of a positive test result when the actual outcome is negative.
+    - False Negative Rate (FNR), or miss rate, is the probability of a negative test result when the actual outcome is positive.
+    - False Discovery Rate (FDR) is the ratio of false positive results to the total number of positive results.
+    ''')
     model = st.selectbox('Choose which model outputs to assess', pred_dict.keys())
     test = data.loc[data[pred_dict[model]] != "train"]
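
All of the rates listed in the added markdown can be derived from the four confusion-matrix counts (TP, TN, FP, FN). Below is a minimal sketch of how they might be computed per group; the `accuracy_rates` helper, the `actual`/`predicted` column names, and the `group` column are illustrative assumptions, not part of this app's code.

```python
import pandas as pd

def _safe_div(n: int, d: int) -> float:
    """Return n/d, or NaN when the denominator is zero (e.g. an empty group)."""
    return n / d if d else float("nan")

def accuracy_rates(df: pd.DataFrame,
                   actual: str = "actual",
                   predicted: str = "predicted") -> dict:
    """Compute the confusion-matrix rates described above from 0/1 label columns."""
    tp = int(((df[actual] == 1) & (df[predicted] == 1)).sum())
    tn = int(((df[actual] == 0) & (df[predicted] == 0)).sum())
    fp = int(((df[actual] == 0) & (df[predicted] == 1)).sum())
    fn = int(((df[actual] == 1) & (df[predicted] == 0)).sum())
    return {
        "TPR": _safe_div(tp, tp + fn),  # recall / sensitivity / hit rate
        "TNR": _safe_div(tn, tn + fp),  # specificity / selectivity
        "PPV": _safe_div(tp, tp + fp),  # precision
        "NPV": _safe_div(tn, tn + fn),
        "FPR": _safe_div(fp, fp + tn),  # fall-out
        "FNR": _safe_div(fn, fn + tp),  # miss rate
        "FDR": _safe_div(fp, fp + tp),
    }

# Per-group comparison, e.g. across a hypothetical protected attribute column "group":
# rates_by_group = {g: accuracy_rates(sub) for g, sub in df.groupby("group")}
```

Comparing `rates_by_group` entries side by side is one way to surface the per-subgroup disparities the added markdown describes, even when the overall metrics of two models look similar.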