""" 
Purpose: Benchmark different features of Phase 2 drug interventional clinical trials for machine-learning prediction of the outcomes of the associated Phase 3 clinical trials

Background:
   (1) 923 pairs of drug interventional clinical trials (abbreviated as 'clinical trials', 'drug trials' or simply 'trials' below) were obtained through a series of analyses. Each pair comprised a successful Phase 2 drug interventional clinical trial and the same drug's associated Phase 3 clinical trial, launched/initiated after the success of that Phase 2 trial. Both trials in a pair focused on and tested the same drug.

   (2) A variety of factors (data and text features) were extracted from the datasets and records of the Phase 2 drug interventional clinical trials.

   (3) The chemical and text data were embedded using different methods: chemical structure data were embedded into chemical fingerprints and molecular descriptors, and text features were embedded (vectorized) by different BERT-based models (Bidirectional Encoder Representations from Transformers).

   (4) Using these feature vectors one by one, and also in combination (i.e., the multi-modal way), machine learning models were trained, validated and benchmarked to identify which type(s) of features of the Phase 2 clinical trials can efficiently predict the outcome (success or failure) of the associated Phase 3 clinical trials.

Conclusion: 
  Amongst the various features tested, the Phase 2 drug interventional clinical trials' "(people) inclusion and exclusion criteria" proved the most useful feature for predicting the outcomes of the Phase 3 clinical trials, scoring about 0.97 in Accuracy, F1-measure, AUROC and related metrics.


Author : H. Lin, Ph.D., https://orcid.org/0000-0003-4060-7336 

This script contains the Python code for the benchmark.

Version: Created on 2nd May 2025, and updated on 29th Aug. 2025 (latest)

PS: We do not cover the details of BERT and its variants here. Briefly, Clinical BERT is a pre-trained (not fine-tuned) large language model built on the BERT architecture and trained on large amounts of clinical documents, e.g., electronic health records (EHRs). Similarly, PubMed BERT is also a BERT-architecture large language model, but pre-trained on literature from the PubMed database. For more details, please look up these BERT variants online.
"""
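# The BERT-style text embeddings mentioned above reduce each trial's free text to a
# fixed-length vector. A common recipe (an illustrative sketch only, not necessarily
# the exact pipeline used for this dataset) is to mean-pool the token embeddings
# under the attention mask; the toy function below shows just that pooling step,
# with made-up arrays standing in for a real PubMed BERT model's outputs.
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token vectors, ignoring padded positions (mask == 0)."""
    mask = np.asarray(attention_mask, dtype=float)[:, None]   # (n_tokens, 1)
    return (token_embeddings * mask).sum(axis=0) / mask.sum()

# Toy example: 4 tokens (the last one is padding), hidden size 3.
toy_tokens = np.array([[1.0, 2.0, 3.0],
                       [3.0, 2.0, 1.0],
                       [2.0, 2.0, 2.0],
                       [9.0, 9.0, 9.0]])   # padding row, excluded by the mask
toy_mask = [1, 1, 1, 0]
# mean of the first three rows -> [2.0, 2.0, 2.0]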


# Load embedded vectors of features of phase 2 drug interventional clinical trials
import pandas


""" 4 kinds of embedded vectors of features were to be tested in this script
(1) RDKit Fingerprint 
(2) Molecular Descriptors 
(3) PubMed BERT embedded texts of "inclusion and exclusion criteria"
(4) PubMed BERT embedded texts of "primary outcome/endpoint measurements"  """



# Load embedded vectors of the inclusion and exclusion criteria feature of phase 2 drug interventional clinical trials from local disk into the Python environment
vec_inexclusion = pandas.read_csv(
  "/local_disk_path_phase2InExclusionCriteria.csv",
  header=None)


vec_moldesc = pandas.read_csv( # Load embedded vectors of molecular descriptor feature of phase 2 drug interventional clinical trials from local disk into python env.
  '/local_disk_path_Molecular_descriptor_Vectors.csv', header=None)


vec_fp = pandas.read_csv( # Load embedded vectors of RDKit Fingerprint feature of phase 2 drug interventional clinical trials from local disk into python env.
  '/local_disk_path_RDKit_Fingerprint_Vectors.csv', header=None)


vec_endpoint = pandas.read_csv( # Load embedded vectors of primary outcome/endpoint measurement feature of phase 2 drug interventional clinical trials from local disk into python env.
  '/local_disk_path_primary_Endpoint_vectors.csv', header=None)




""" Combine the imported vectors, then check and confirm the shape """
ls_df = [vec_fp, vec_moldesc, vec_endpoint, vec_inexclusion]
jd = pandas.concat(ls_df, axis=1)
print(jd.shape)
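# pandas.concat(..., axis=1) pastes the feature tables side by side, aligning rows on
# the index; the tiny sketch below (hypothetical 3-row frames, not the real feature
# files) illustrates how the column counts add up while the row count stays fixed.
import pandas
_a = pandas.DataFrame([[1, 2], [3, 4], [5, 6]])
_b = pandas.DataFrame([[7], [8], [9]])
_ab = pandas.concat([_a, _b], axis=1)
# _ab.shape == (3, 3): the same 3 rows, with 2 + 1 columns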


# Splitting data ------ PubMed BERT's embedded vectors against all of lazypredict's predictors
from sklearn.model_selection import train_test_split


# Labels for machine learning: outcomes (success/failure) of the associated Phase 3
# trials. NOTE: this load statement is a hypothetical placeholder, following the
# pattern of the feature files above; substitute the actual label file path.
labels_y = pandas.read_csv('/local_disk_path_phase3_outcome_labels.csv',
                           header=None).squeeze("columns")


X_train, X_test, y_train, y_test = train_test_split(

  # Input feature set for machine learning
  jd, # Option No.1: all 4 types of feature sets combined, i.e., heterogeneous (multi-modal) feature data for machine learning

  # Other options: pass a single feature set instead of `jd` above:
  # vec_inexclusion, # Option No.2: embedded vectors of the trials' inclusion and exclusion criteria.
  # vec_fp,          # Option No.3: embedded vectors of the drugs' RDKit fingerprints.
  # vec_endpoint,    # Option No.4: embedded vectors of the trials' primary outcome/endpoint measurements.
  # vec_moldesc,     # Option No.5: embedded vectors of the drugs' molecular descriptors.

  labels_y, # the label set for machine learning training and validation
  test_size=0.2,
  random_state=99,
  stratify=labels_y) # "stratify" maintains the original positive/negative class ratio in both splits
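# Why stratify matters: an illustrative sketch with a made-up imbalanced label vector,
# showing that a stratified split preserves the original positive rate in both halves.
from sklearn.model_selection import train_test_split as _tts
_toy_y = [1] * 20 + [0] * 80                      # 20% positive, 80% negative
_toy_X = [[i] for i in range(100)]
_Xtr, _Xte, _ytr, _yte = _tts(_toy_X, _toy_y, test_size=0.2,
                              random_state=99, stratify=_toy_y)
# both splits keep the 20% positive rate of the full label vector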



# Machine learning process starts: load the classifier class first
from lazypredict.Supervised import LazyClassifier 


# Specify the arguments/parameters for the machine learning classifiers to be trained
clf = LazyClassifier(
  verbose=110,
  ignore_warnings=True,
  custom_metric=None,
  predictions=True,
  random_state=99,
  classifiers="all")


# Model fitting
models, predictions = clf.fit(
  X_train,
  X_test,
  y_train,
  y_test)


#  Show metrics of validation results of machine learning
print(models) 
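# lazypredict fits a zoo of scikit-learn classifiers in one call; the hand-rolled loop
# below (a minimal sketch on synthetic data, not lazypredict's internals) does the same
# for two of the classifiers from the tables, computing Accuracy / F1 / AUROC with
# plain scikit-learn so the reported metrics are easy to reproduce independently.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split as _tts2

_X2, _y2 = make_classification(n_samples=400, n_features=20, random_state=99)
_Xtr2, _Xte2, _ytr2, _yte2 = _tts2(_X2, _y2, test_size=0.2,
                                   random_state=99, stratify=_y2)

bench = {}
for _name, _est in [("LogisticRegression", LogisticRegression(max_iter=1000)),
                    ("KNeighborsClassifier", KNeighborsClassifier())]:
    _est.fit(_Xtr2, _ytr2)
    _pred = _est.predict(_Xte2)
    bench[_name] = {"Accuracy": accuracy_score(_yte2, _pred),
                    "F1 Score": f1_score(_yte2, _pred),
                    "ROC AUC": roc_auc_score(_yte2, _est.predict_proba(_Xte2)[:, 1])}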



"""
#################### Metrics of machine learning classification using the multi-modal (all 4 combined) feature dataset 

Conclusion: results were unsatisfactory.
 
                                  Accuracy  Balanced Accuracy  ROC AUC  F1 Score  Time Taken
Model                                                                                    
SGDClassifier                      0.65               0.71     0.71      0.66        0.53
NuSVC                              0.63               0.68     0.68      0.63        0.66
GaussianNB                         0.59               0.62     0.62      0.60        0.12
BernoulliNB                        0.58               0.61     0.61      0.59        0.14
NearestCentroid                    0.58               0.61     0.61      0.59        0.13
RandomForestClassifier             0.71               0.60     0.60      0.65        0.71
DecisionTreeClassifier             0.69               0.59     0.59      0.64        0.21
AdaBoostClassifier                 0.70               0.59     0.59      0.64        1.15
BaggingClassifier                  0.70               0.59     0.59      0.64        0.74
ExtraTreeClassifier                0.69               0.59     0.59      0.64        0.10
ExtraTreesClassifier               0.69               0.58     0.58      0.63        0.63
SVC                                0.70               0.58     0.58      0.63        0.71
RidgeClassifierCV                  0.68               0.58     0.58      0.63        0.22
LGBMClassifier                     0.70               0.58     0.58      0.62        0.66
XGBClassifier                      0.68               0.57     0.57      0.63        0.48
LogisticRegression                 0.68               0.57     0.57      0.63        0.28
LinearDiscriminantAnalysis         0.68               0.57     0.57      0.63        0.54
RidgeClassifier                    0.68               0.57     0.57      0.63        0.15
LabelSpreading                     0.65               0.56     0.56      0.62        0.17
LabelPropagation                   0.65               0.56     0.56      0.62        0.20
PassiveAggressiveClassifier        0.62               0.55     0.55      0.60        0.30
KNeighborsClassifier               0.65               0.54     0.54      0.59        0.18
QuadraticDiscriminantAnalysis      0.65               0.54     0.54      0.59        0.63
CalibratedClassifierCV             0.67               0.53     0.53      0.56       25.33
Perceptron                         0.60               0.53     0.53      0.58        0.23
LinearSVC                          0.61               0.51     0.51      0.57        6.45
DummyClassifier                    0.65               0.50     0.50      0.51        0.09 








#################### Metrics of machine learning classification using the drug molecular descriptor feature dataset 

Conclusion: results were unsatisfactory.

                                  Accuracy  Balanced Accuracy  ROC AUC  F1 Score  Time Taken
Model                                                                                    
QuadraticDiscriminantAnalysis      0.60               0.66     0.66      0.60        0.05
PassiveAggressiveClassifier        0.57               0.62     0.62      0.57        0.03
BernoulliNB                        0.57               0.59     0.59      0.58        0.01
NearestCentroid                    0.54               0.59     0.59      0.53        0.02
Perceptron                         0.68               0.59     0.59      0.64        0.02
LGBMClassifier                     0.70               0.59     0.59      0.64        0.24
NuSVC                              0.70               0.58     0.58      0.63        0.06
KNeighborsClassifier               0.70               0.58     0.58      0.63        0.03
ExtraTreesClassifier               0.70               0.58     0.58      0.63        0.15
RandomForestClassifier             0.70               0.58     0.58      0.63        0.21
BaggingClassifier                  0.69               0.58     0.58      0.63        0.12
XGBClassifier                      0.69               0.58     0.58      0.62        0.24
DecisionTreeClassifier             0.68               0.57     0.57      0.63        0.03
RidgeClassifierCV                  0.68               0.57     0.57      0.63        0.03
GaussianNB                         0.48               0.57     0.57      0.44        0.01
LinearDiscriminantAnalysis         0.67               0.57     0.57      0.62        0.03
RidgeClassifier                    0.67               0.57     0.57      0.62        0.02
LogisticRegression                 0.67               0.57     0.57      0.62        0.03
ExtraTreeClassifier                0.68               0.56     0.56      0.61        0.02
LinearSVC                          0.67               0.56     0.56      0.61        1.07
LabelSpreading                     0.65               0.56     0.56      0.61        0.03
LabelPropagation                   0.65               0.56     0.56      0.61        0.02
CalibratedClassifierCV             0.66               0.54     0.54      0.58        4.75
AdaBoostClassifier                 0.68               0.54     0.54      0.57        0.19
SVC                                0.67               0.54     0.54      0.57        0.06
SGDClassifier                      0.65               0.54     0.54      0.58        0.04
DummyClassifier                    0.65               0.50     0.50      0.51        0.02 






#################### Metrics of machine learning classification using the RDKit fingerprint feature dataset 

Conclusion: results were unsatisfactory.

                                  Accuracy  Balanced Accuracy  ROC AUC  F1 Score  Time Taken
Model                                                                                    
GaussianNB                         0.59               0.62     0.62      0.60        0.07
NearestCentroid                    0.58               0.62     0.62      0.59        0.08
BernoulliNB                        0.58               0.61     0.61      0.59        0.07
LGBMClassifier                     0.70               0.60     0.60      0.65        0.40
RandomForestClassifier             0.70               0.59     0.59      0.64        0.50
DecisionTreeClassifier             0.69               0.59     0.59      0.64        0.12
AdaBoostClassifier                 0.70               0.59     0.59      0.64        0.63
ExtraTreesClassifier               0.69               0.59     0.59      0.64        0.40
SVC                                0.70               0.58     0.58      0.63        0.38
BaggingClassifier                  0.69               0.58     0.58      0.63        0.40
LogisticRegression                 0.68               0.58     0.58      0.63        0.20
NuSVC                              0.68               0.58     0.58      0.63        0.33
XGBClassifier                      0.68               0.57     0.57      0.63        0.27
RidgeClassifier                    0.68               0.57     0.57      0.63        0.09
LinearDiscriminantAnalysis         0.68               0.57     0.57      0.63        0.35
RidgeClassifierCV                  0.68               0.57     0.57      0.63        0.16
ExtraTreeClassifier                0.67               0.57     0.57      0.62        0.06
KNeighborsClassifier               0.66               0.57     0.57      0.62        0.07
LabelPropagation                   0.65               0.56     0.56      0.62        0.08
LabelSpreading                     0.65               0.56     0.56      0.62        0.09
SGDClassifier                      0.62               0.54     0.54      0.60        0.34
PassiveAggressiveClassifier        0.61               0.54     0.54      0.59        0.18
LinearSVC                          0.62               0.53     0.53      0.59        3.52
CalibratedClassifierCV             0.67               0.53     0.53      0.56       13.81
QuadraticDiscriminantAnalysis      0.64               0.53     0.53      0.58        0.55
Perceptron                         0.59               0.53     0.53      0.58        0.13
DummyClassifier                    0.65               0.50     0.50      0.51        0.06  






#################### Metrics of machine learning classification using the text feature dataset of primary outcome/endpoint measurements 

Conclusion: results were better than the above, but still far from ideal.

                                  Accuracy  Balanced Accuracy  ROC AUC  F1 Score  Time Taken
Model                                                                                    
KNeighborsClassifier               0.78               0.74     0.74      0.78        0.03
NuSVC                              0.79               0.73     0.73      0.78        0.21
LinearSVC                          0.76               0.72     0.73      0.76        1.00
RidgeClassifier                    0.74               0.72     0.72      0.74        0.05
SGDClassifier                      0.76               0.71     0.71      0.75        0.07
LogisticRegression                 0.76               0.71     0.71      0.75        0.08
GaussianNB                         0.75               0.71     0.71      0.75        0.03
LinearDiscriminantAnalysis         0.72               0.71     0.71      0.72        0.15
SVC                                0.78               0.70     0.70      0.76        0.18
RidgeClassifierCV                  0.74               0.70     0.70      0.74        0.13
BaggingClassifier                  0.75               0.70     0.70      0.74        3.47
AdaBoostClassifier                 0.76               0.70     0.70      0.75        2.74
RandomForestClassifier             0.77               0.69     0.69      0.74        1.20
XGBClassifier                      0.76               0.69     0.69      0.74        1.20
LGBMClassifier                     0.76               0.69     0.69      0.74        1.31
BernoulliNB                        0.69               0.68     0.68      0.70        0.03
DecisionTreeClassifier             0.71               0.68     0.68      0.71        0.54
QuadraticDiscriminantAnalysis      0.66               0.68     0.68      0.67        0.15
ExtraTreesClassifier               0.75               0.68     0.68      0.73        0.36
NearestCentroid                    0.69               0.68     0.68      0.69        0.04
PassiveAggressiveClassifier        0.71               0.68     0.68      0.71        0.10
ExtraTreeClassifier                0.69               0.67     0.67      0.69        0.03
Perceptron                         0.68               0.65     0.65      0.68        0.06
LabelSpreading                     0.58               0.63     0.63      0.58        0.04
LabelPropagation                   0.57               0.62     0.62      0.58        0.04
CalibratedClassifierCV             0.71               0.61     0.61      0.66        4.02
DummyClassifier                    0.65               0.50     0.50      0.51        0.02 






#################### Metrics of machine learning classification using the inclusion and exclusion criteria (text) feature dataset 

Conclusion: metric scores were high and the results were good. A highly informative feature was thus identified for predicting Phase 3 drug interventional clinical trial outcomes from the data of the associated Phase 2 clinical trials.

                                  Accuracy  Balanced Accuracy  ROC AUC  F1 Score  Time Taken
Model                                                                                    
LogisticRegression                 0.97               0.97     0.97      0.97        0.04
PassiveAggressiveClassifier        0.97               0.96     0.96      0.97        0.07
LinearSVC                          0.96               0.95     0.95      0.96        0.65
Perceptron                         0.96               0.95     0.95      0.96        0.05
SGDClassifier                      0.96               0.95     0.95      0.96        0.04
RidgeClassifierCV                  0.95               0.94     0.94      0.95        0.13
CalibratedClassifierCV             0.95               0.94     0.94      0.95        2.79
SVC                                0.95               0.93     0.93      0.95        0.19
RidgeClassifier                    0.91               0.91     0.91      0.91        0.04
XGBClassifier                      0.92               0.91     0.91      0.92        0.69
NuSVC                              0.92               0.90     0.90      0.92        0.21
LGBMClassifier                     0.92               0.90     0.90      0.92        1.31
ExtraTreesClassifier               0.90               0.87     0.87      0.90        0.26
AdaBoostClassifier                 0.87               0.85     0.85      0.87        2.86
RandomForestClassifier             0.89               0.84     0.84      0.88        1.04
BaggingClassifier                  0.85               0.84     0.84      0.85        3.10
KNeighborsClassifier               0.85               0.83     0.83      0.85        0.03
GaussianNB                         0.81               0.82     0.82      0.81        0.03
BernoulliNB                        0.80               0.79     0.79      0.80        0.04
DecisionTreeClassifier             0.79               0.77     0.77      0.79        0.57
QuadraticDiscriminantAnalysis      0.84               0.77     0.77      0.82        0.15
NearestCentroid                    0.77               0.77     0.77      0.78        0.04
ExtraTreeClassifier                0.78               0.76     0.76      0.78        0.02
LinearDiscriminantAnalysis         0.75               0.75     0.75      0.75        0.22
LabelSpreading                     0.48               0.60     0.60      0.41        0.07
LabelPropagation                   0.48               0.60     0.60      0.41        0.04
DummyClassifier                    0.65               0.50     0.50      0.51        0.02 
"""

