| column | dtype | min | max |
| --- | --- | --- | --- |
| repo | string (length) | 8 | 116 |
| tasks | string (length) | 8 | 117 |
| titles | string (length) | 17 | 302 |
| dependencies | string (length) | 5 | 372k |
| readme | string (length) | 5 | 4.26k |
| __index_level_0__ | int64 (value) | 0 | 4.36k |
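For orientation, here is a minimal sketch of how rows with this schema could be loaded and queried in pandas. The file name `repos_with_dependencies.parquet` is hypothetical, and parsing `tasks`/`titles` with `ast.literal_eval` assumes they are stored as Python-style list literals, as they appear in the rows below.

```python
# Hypothetical loading/inspection of the dataset rows shown below.
import ast
import pandas as pd

df = pd.read_parquet("repos_with_dependencies.parquet")  # assumed file name

# tasks/titles look like Python list literals stored as strings; parse them.
df["tasks"] = df["tasks"].apply(ast.literal_eval)
df["titles"] = df["titles"].apply(ast.literal_eval)

# Example query: repositories tagged with a given task.
seg = df[df["tasks"].apply(lambda ts: "semantic segmentation" in ts)]
print(seg[["repo", "titles"]].head())
```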
0bserver07/One-Hundred-Layers-Tiramisu
['semantic segmentation']
['The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation']
camvid_data_loader.py helper.py fc-densenet-model.py model-dynamic.py model-tiramasu-103.py model-tiramasu-67-func-api.py model-tiramasu-56.py train-tiramisu.py model-tiramasu-67.py load_data Tiramisu normalized one_hot_it Tiramisu Tiramisu Tiramisu Tiramisu Tiramisu step_decay rollaxis print len normalized append range one_hot_it zeros float32 equalizeHist zeros range pow floor
0bserver07/One-Hundred-Layers-Tiramisu
0
101vinayak/Neural-Style-Transfer
['style transfer']
['A Neural Algorithm of Artistic Style']
images2gif.py checkImages writeGif get_cKDTree readGif NeuQuant GifWriter intToBin append uint8 astype copy int int checkImages hasattr handleSubRectangles GifWriter writeGifToFile convertImagesToPIL open fromarray asarray seek tell convert append enumerate open
# Neural-Style-Transfer Neural style transfer is an optimization technique that takes two images—a content image and a style reference image (such as an artwork by a famous painter)—and blends them together so the output image looks like the content image but “painted” in the style of the style reference image. Implementation of the research paper https://arxiv.org/abs/1508.06576. STYLE TRANSFER PROCESS: <h3 align="center"> <img src="result.gif"> </h3> The process uses the VGG19 CNN to extract the content and style features used in the optimization. <img src="VGG19.png">
1
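The 101vinayak/Neural-Style-Transfer row above describes the classic content/style blending objective. As a rough NumPy illustration of that objective (a sketch, not the repository's code; the feature shapes and loss weights below are assumptions), the style loss compares Gram matrices of VGG19 feature maps while the content loss compares the feature maps directly:

```python
# Illustrative content/style losses for neural style transfer.
import numpy as np

def gram_matrix(feat):
    """feat: (channels, height, width) feature map -> (channels, channels) Gram matrix."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def content_loss(gen_feat, content_feat):
    # Mean squared error between generated and content feature maps.
    return np.mean((gen_feat - content_feat) ** 2)

def style_loss(gen_feat, style_feat):
    # Mean squared error between Gram matrices captures style/texture statistics.
    return np.mean((gram_matrix(gen_feat) - gram_matrix(style_feat)) ** 2)

# Toy stand-ins for VGG19 feature maps (hypothetical shapes).
gen = np.random.rand(64, 32, 32)
style = np.random.rand(64, 32, 32)
content = np.random.rand(64, 32, 32)
total = 1e-3 * content_loss(gen, content) + 1.0 * style_loss(gen, style)
```

In the full algorithm this total loss, summed over several VGG19 layers, is minimized with respect to the generated image's pixels.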
12kleingordon34/NLP_masters_project
['word embeddings']
['Gender Bias in Contextualized Word Embeddings']
process_winogender_data.py main process_wino_data process_occ_stats float int list items replace append process_wino_data process_occ_stats
# NLP_masters_project Code base used for the NLP project 2020. By Daniel de Vassimon Manela, Boris van Breugel, Tom Fisher, David Errington --- ## Contents * `process_ontonotes.ipynb`: Loads the OntoNotes Release 5.0 data from [Github](https://github.com/yuchenlin/OntoNotes-5.0-NER-BIO.git) and processes the raw data into a format suitable for modelling. The notebook also depends on data loaded from the [UCLA NLP Group](https://github.com/uclanlp/gn_glove.git); both data files are loaded within the notebook. Running the notebook from start to finish produces `original_data.csv` and `flipped_data.csv`, which are used to train a BERT Masked Language Model. * `BERT_fine_tuning_full_ontonotes.ipynb`: Loads the output of `process_ontonotes.ipynb` and trains a BERT Masked Language Model using the [Huggingface](https://github.com/huggingface/transformers) library, as described in Section 4.3 of our paper. Be sure to comment/uncomment the data-loading lines correctly: data-augmented models require loading both `original_data.csv` and `flipped_data.csv`, while regular unaugmented fine-tuned models only need `original_data.csv`. * `bias_analysis.ipynb`: Analyses stereotype and skew bias in out-of-the-box ELMo, BERT, DistilBERT and RoBERTa using the [WinoBias dataset](https://arxiv.org/pdf/1904.03310.pdf). Also includes Online Skewness Mitigation for different BERTs, visualisations of professional embedding bias, and a comparison of bias in BERT to industry statistics. ## Extra Dependencies
2
131250208/TPlinker-joint-extraction
['relation extraction']
['TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking']
tplinker_plus/config.py common/utils.py tplinker/config.py tplinker/train.py tplinker_plus/train.py setup.py common/components.py tplinker/tplinker.py preprocess/__init__.py tplinker_plus/tplinker_plus.py LayerNorm HandshakingKernel DefaultLogger Preprocessor DataMaker4BiLSTM TPLinkerBert HandshakingTaggingScheme DataMaker4Bert TPLinkerBiLSTM MetricsCalculator train_step train_n_valid MyDataset bias_loss valid_step DataMaker4BiLSTM TPLinkerPlusBert TPLinkerPlusBiLSTM HandshakingTaggingScheme DataMaker4Bert MetricsCalculator train_step sample_equal_to train_n_valid MyDataset valid_step to CrossEntropyLoss get_sample_accuracy backward rel_extractor zero_grad loss_func step get_rel_cpg get_sample_accuracy join format valid state_dict glob print save train range len add format set_trace set repeat gather long get_cpg long
# TPLinker **TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking** This repository contains all the code of the official implementation for the paper: **[TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking](https://www.aclweb.org/anthology/2020.coling-main.138.pdf).** The paper has been accepted to appear at **COLING 2020**. \[[slides](https://drive.google.com/file/d/1UAIVkuUgs122k02Ijln-AtaX2mHz70N-/view?usp=sharing)\] \[[poster](https://drive.google.com/file/d/1iwFfXZDjwEz1kBK8z1To_YBWhfyswzYU/view?usp=sharing)\] TPLinker is a joint extraction model that resolves the issues of **relation overlapping** and **nested entities**, is immune to the influence of **exposure bias**, and achieves SOTA performance on NYT (TPLinker: **91.9**, TPlinkerPlus: **92.6 (+3.0)**) and WebNLG (TPLinker: **91.9**, TPlinkerPlus: **92.3 (+0.5)**). Note that the details of TPLinkerPlus will be published in an extended paper, which is still in progress. **Note: Please check the Q&A and closed issues for your question before opening a new issue.** - [Model](#model) - [Results](#results) - [Usage](#usage) * [Prerequisites](#prerequisites) * [Data](#data)
3
15saurabh16/Multipoles
['time series']
['Mining Novel Multivariate Relationships in Time Series Data Using Correlation Networks']
COMET_fMRI_Multiple_Parameters.py COMET_Climate.py bronk_kerb.py empirical_observations.py SignifTesterMultipole.py quick-cliques/utils/edges2snap.py quick-cliques/utils/invertdimacs.py Misc_Modules.py get_graph_in_txt.py COMET_ADVANCED.py RandomSearch_Climate.py COMET_fMRI.py quick-cliques/utils/edge2dimacs.py quick-cliques/utils/edges2graph.py BruteForce_fMRI.py BruteForce_Climate_New.py COMET.py compare_with_brute_force.py quick-cliques/utils/dimacs2edges.py completeness_eval.py generate_random_ts.py RandomSearch_fMRI.py Comet_scalability_analysis.py BruteForce_fMRI_New.py GraphLASSObaseline.py BruteForce_Climate.py convert_txt_to_mat_lasso.py COMET_Climate_Multiple_Parameters.py CLIQUE_multipoles_algorithm.py compare_multipole_algorithms.py COMET_Synthetic_Multiple_Parameters.py levg_null_dist_expt3.py degeneracy_ordering find_cliques_pivot find_cliques get_inputs_for_groups brute_search mkdirnotex find_max_edgewt brute_search_parallel2 brute_search_group get_maxedgewt_vec get_inputs_for_groups brute_search mkdirnotex find_max_edgewt brute_search_parallel2 brute_search_group get_maxedgewt_vec mkdirnotex mkdirnotex remove_neg_cliques extract_multipole find_good_multipoles remove_duplicate_cliqs_2 remove_duplicate_cliqs construct_GraphF remove_g2_cliques get_supersets CLIQ_ALGO get_inputs_for_groups extract_multipoles_from_cliq cliq_brute_search COMET_EXT find_good_multipoles_complete_parallel extract_multipoles_from_cliqgroup get_inputs_for_groups extract_multipoles_from_cliq cliq_brute_search COMET_EXT extract_multipoles_from_cliqgroup find_good_multipoles_complete_parallel remove_non_maximals mkdirnotex mkdirnotex mkdirnotex mkdirnotex mkdirnotex mkdirnotex get_sorted_mps find_MP_in_MPList get_MPs_of_M1_missed_by_M2_NOLEV get_MPs_of_M1_missed_by_M2 get_MPs_of_M1_missed_by_M2_ALTER gen_dist_kxk_mats_edge_range get_lev_and_levg get_lev_minors gen_dist_kxk_mats_min_edge mkdirnotex make_sym_matrix generate_std_rand_ts_corrmat get_graph_in_txt filter_multipoles get_lev_and_levgs_all_cands get_multipole_cands pval_anal3_SLP get_null_dist_expt3 pval_anal3_fMRI get_inputs_for_groups get_lev_levg_and_weakest_pole key_generator remove_redundant_multipoles_alter_parallel_recursive get_lev_and_levg find_next_level_multipoles remove_redundant_multipoles_alter get_lev_minors remove_redundant_multipoles_alter_group find_multipoles_for_P2 remove_redundant_multipoles_alter_parallel object_to_list_arr remove_redundant_multipoles get_inputs_for_groups brute_search random_search_parallel2 generate_final_output mkdirnotex brute_search_group get_inputs_for_groups brute_search random_search_parallel2 generate_final_output mkdirnotex brute_search_group pval_anal_fMRI get_pval_multipole get_lev_and_levg_all_wins get_lev_and_levg_group pval_parallel pval_anal_SLP get_pval_multipole_group get_multiple_groups pval_anal_model_SLP degeneracy_ordering remove find_cliques_pivot union set add intersection keys remove add difference iter intersection append next union pop remove defaultdict set add reverse append len dirname makedirs size arange len str list time combinations get_lev_and_levg print append range list time get_lev_and_levg zip append max Pool str list map append sum next range factorial get_inputs_for_groups close zip combinations print min divide repeat len triu_indices argmin eig where max zeros len range find_max_edgewt append issubset range len tolist array range list tolist append array range len list tolist append array range len int list min tolist copy split append array range len list min 
tolist copy append array range len list get_lev_and_levg min argmin divide mean eye abs range extract_multipole str list time print zeros range len remove_neg_cliques str time find_good_multipoles print tolist remove_redundant_multipoles_alter copy find_cliques construct_GraphF range combinations list get_lev_and_levg min tolist divide mean eye append abs array range len int cliq_brute_search min range len extract_multipoles_from_cliq time list str get_lev_and_levg print zip range len get_inputs_for_groups list print close map repeat zip append Pool range len str time print tolist copy find_cliques find_good_multipoles_complete_parallel remove_duplicate_cliqs remove_redundant_multipoles_alter_parallel construct_GraphF range len extract_multipoles_from_cliq list sorted sort append range len format get_graph_in_txt chdir getcwd system remove_non_maximals mkdir remove_duplicate_cliqs_2 tolist argsort append array range len get_sorted_mps list set_trace set append range len get_sorted_mps list issubset set append range len list print issubset set append range len len issubset range set zeros triu_indices pop list eig min zeros range eig get_lev_minors min triu_indices get_lev_and_levg argmin eig rand where append max range make_sym_matrix gen_dist_kxk_mats_edge_range str arange print size range normal int zscore matmul cholesky zeros range format len write close range open list min append abs range len range get_lev_and_levg len append range len list get_lev_and_levg corrcoef sample zeros range len str time format print getcwd get_null_dist_expt3 append loadmat range time format print getcwd transpose size get_null_dist_expt3 append loadmat range list sort issubset zip append zeros range len list remove_redundant_multipoles_alter zip get_inputs_for_groups list min close map remove_redundant_multipoles_alter zip Pool len get_inputs_for_groups list format print min close map remove_redundant_multipoles_alter zip Pool len str list time print sort zip append zeros array range len argmin eig get_lev_minors min reverse list get_lev_and_levg sort set append range len list format print close map set repeat zip append sum Pool range str list time format generate_final_output print min close map choice repeat zip append sum Pool range len str format print getcwd savemat mkdirnotex append remove_non_maximals range len print size arange len zeros range get_pval_multipole len pop list get_lev_levg_and_weakest_pole get_lev_and_levg size corrcoef sample zeros float range len list get_lev_and_levg corrcoef zip zeros range len max list close map repeat array zip get_multiple_groups zeros Pool range len time list format concatenate print tuple close delete map repeat zip get_multiple_groups zeros Pool max range len str get_lev_and_levg_all_wins pval_parallel savemat append expanduser loadmat range str getcwd get_lev_and_levg_all_wins pval_parallel append loadmat range getcwd transpose size get_lev_and_levg_all_wins pval_parallel append loadmat range
# Multipoles This repository contains the code we used to find "multipoles", a new class of multivariate relationship patterns, in time series datasets from the climate and neuroscience domains. See [1],[2] for further details. <b>Instructions to run the code:</b> 1) The empirical observations discussed in Section 4.1 of the technical report can be reproduced using empirical_observations.py. 2) Please keep all code and data files in the same directory. 3) There are two implementations of the COMET algorithm: COMET.py and COMET_ADVANCED.py. They mainly differ in the algorithm used to solve the clique enumeration problem in one of the intermediate steps. COMET.py uses bronk_kerb.py, a Python implementation of the Bron-Kerbosch algorithm (1973), whereas COMET_ADVANCED.py uses the C++ implementation (in the quick-cliques folder) provided by its authors (available at https://github.com/darrenstrash/quick-cliques). Each contains the primary module COMET_EXT, which can be called to find all multipoles in a given dataset. COMET_EXT takes five inputs:<br> CorrMat: correlation matrix of the entire dataset. <br> sigma: minimum threshold on the linear dependence of desired multipoles. <br> delta: minimum threshold on the linear gain of desired multipoles.<br> edge_filt: same as parameter \mu of CoMEtExtended (see the <a href = "https://www.researchgate.net/publication/323129038_Mining_Novel_Multivariate_Relationships_in_Time_Series_Data_Applications_to_Climate_and_Neuroscience"> technical report </a> for further details). Higher values of edge_filt recover more multipoles but increase computational time, and vice versa.<br>
4
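To make the parameter description in the 15saurabh16/Multipoles row concrete, here is a hypothetical usage sketch. Only the correlation-matrix setup is concrete NumPy; the import path, argument order, and the fifth input (omitted by the truncated README) are assumptions, so the actual call is left commented out.

```python
# Hypothetical sketch of preparing inputs for the multipole search described above.
import numpy as np

# Toy multivariate time series: 1000 time steps of 50 variables.
data = np.random.randn(1000, 50)
CorrMat = np.corrcoef(data, rowvar=False)   # 50 x 50 correlation matrix

sigma = 0.5      # minimum linear dependence of reported multipoles (assumed value)
delta = 0.1      # minimum linear gain of reported multipoles (assumed value)
edge_filt = 0.2  # the \mu parameter: higher recovers more multipoles, at higher cost

# The call below is an assumption -- consult COMET.py / COMET_ADVANCED.py for the
# real signature and the fifth input, which the truncated README does not show:
# from COMET_ADVANCED import COMET_EXT
# multipoles = COMET_EXT(CorrMat, sigma, delta, edge_filt, ...)
```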
1980x/ABAW2020DMACS
['facial expression recognition']
['Affect Expression Behaviour Analysis in the Wild using Spatio-Channel Attention and Complementary Context Information']
dataset/affectwild2_dataset.py models/losses.py test_affwild2.py train_affwild2_expw_affectnet.py util.py dataset/sampler.py train_affwild2.py dataset/affectwild2_expw_affectnet.py models/attentionnet.py models/resnet.py validate statistic AverageMeter accuracy save_checkpoint adjust_learning_rate main train val_accuracy load_state_dict _read_path_label PIL_loader change_emotion_label_same_as_affectnet default_reader ImageList switch_expression _read_path_label default_reader_expw PIL_loader change_emotion_label_same_as_affectnet default_reader_affectnet change_emotion_label_of_expw_same_as_affectnet default_reader ImageList switch_expression ImbalancedDatasetSampler RegionBranch eca_layer Attentiomasknet count_parameters extract_patches_attentivefeatures SharedAttentionBranch AttentionBranch main FocalLoss LDAMLoss focal_loss norm_angle count_parameters ResNet resnet50 Bottleneck resnet152 sigmoid conv3x3 resnet34 resnet18 main BasicBlock resnet101 validate batch_size count_parameters warn SGD pretrained DataLoader adjust_learning_rate save_checkpoint dataset max str ImbalancedDatasetSampler load_state_dict AttentionBranch parse_args to ImageList sum range format TestList resnet50 Compose test start_epoch add_param_group resume item power get_cls_num_list load RegionBranch print imagesize predict_test isfile train epochs array len zero_grad region_model to range cat update format size mean item basemodel enumerate time attention_model criterion backward print AverageMeter accuracy step len f1_score recall_score precision_score sum topk size statistic confusion_matrix t eq mul_ expand_as append numpy max eval AverageMeter time join str copyfile model_dir save print str param_groups topk size t eq mul_ expand_as append sum max items list print copy_ from_numpy state_dict load open _read_path_label list print change_emotion_label_same_as_affectnet shuffle dict append keys range len int print readlines close change_emotion_label_of_expw_same_as_affectnet open append split int print close dict append range open size unfold unsqueeze permute append regionnet size rand extract_patches_attentivefeatures device net exp sigmoid abs load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict model
# ABAW2020 DMACS SSSIHL <strong>This is the code for our submission to the expression track of the ABAW 2020 competition.</strong> <strong> Results: https://ibug.doc.ic.ac.uk/resources/fg-2020-competition-affective-behavior-analysis/ </strong> Link to the presentation at FG-2020: https://drive.google.com/file/d/1loxCTklHu5hhkA_3pq7qIGrmZZ7HoP_y/view?usp=sharing <strong>Title</strong>: Affect Expression Behaviour Analysis in the Wild using Spatio-Channel Attention and Complementary Context Information Paper: http://arxiv.org/abs/2009.14440 <strong> Our proposed FER framework:</strong> ![Proposed framework](Images/graphicalabstract03.png) <strong>Spatio-Channel Attention Network (SCAN):</strong>
5
1980x/SCAN-CCI-FER
['facial expression recognition']
['Landmark Guidance Independent Spatio-channel Attention and Complementary Context Information based Facial Expression Recognition', 'Affect Expression Behaviour Analysis in the Wild using Spatio-Channel Attention and Complementary Context Information']
GACNN/main_sfew.py dataset/ckplus_dataset_cv.py dataset/affectnet_dataset.py main_sfew.py main_oulucasia.py dataset/sampler.py GACNN/main_fplus.py utils/util.py OADN/models/resnet.py dataset/ferplus_dataset.py GACNN/ferplus_dataset.py OADN/train_ferplus.py OADN/dataset/rafdb_dataset_attentionmaps.py OADN/train_sfew.py OADN/dataset/sfew_dataset_attentionmaps.py GACNN/affectnet_dataset.py OADN/train_affectnet.py GACNN/main_jaffe.py GACNN/model.py OADN/train_rafdb.py OADN/dataset/affectnet_dataset_attentionmaps.py OADN/util.py OADN/models/attentionnet.py main_ferplus.py main_ckplus.py OADN/dataset/sampler.py main_affectnet_rafdb_test_fedro.py main_affectnet.py dataset/fedro_dataset.py models/attentionnet.py dataset/rafdb_dataset.py GACNN/jaffe_dataset.py main_rafdb.py GACNN/main_affectnet.py dataset/oulucasia_dataset_cv.py dataset/sfew_dataset.py OADN/dataset/ferplus_dataset_attentionmaps.py GACNN/sfew_dataset.py dataset/affectnet_rafdb_dataset.py models/resnet.py validate AverageMeter accuracy save_checkpoint adjust_learning_rate main train validate AverageMeter accuracy save_checkpoint adjust_learning_rate main train validate AverageMeter accuracy adjust_learning_rate save_checkpoint main train validate AverageMeter accuracy save_checkpoint adjust_learning_rate main train validate AverageMeter accuracy adjust_learning_rate save_checkpoint main train validate AverageMeter accuracy save_checkpoint adjust_learning_rate main train validate AverageMeter accuracy save_checkpoint adjust_learning_rate main train switch_expression ImageList default_reader PIL_loader get_class PIL_loader change_emotion_label_same_as_affectnet default_reader_affectnet ImageList default_reader_rafdb switch_expression get_class PIL_loader loademotionlabels change_emotion_label_same_as_affectnet default_reader ImageList ImageList map_emotion_label default_reader PIL_loader PIL_loader _process_data default_reader ImageList make_emotion_compatible_to_affectnet get_class PIL_loader change_emotion_label_same_as_affectnet default_reader ImageList get_class PIL_loader change_emotion_label_same_as_affectnet default_reader ImageList ImbalancedDatasetSampler ImageList default_reader PIL_loader get_class PIL_loader average_point convert68to24 _process_data default_reader ImageList make_emotion_compatible_to_affectnet get_class PIL_loader convert68to24 change_emotion_label_same_as_affectnet default_reader ImageList validate AverageMeter accuracy ImbalancedDatasetSampler save_checkpoint adjust_learning_rate main train validate AverageMeter accuracy save_checkpoint adjust_learning_rate main train validate AverageMeter accuracy save_checkpoint adjust_learning_rate main train Net View get_class PIL_loader convert68to24 change_emotion_label_same_as_affectnet default_reader ImageList RegionBranch eca_layer Attentiomasknet count_parameters extract_patches_attentivefeatures SharedAttentionBranch AttentionBranch main norm_angle count_parameters ResNet resnet50 Bottleneck resnet152 sigmoid conv3x3 resnet34 resnet18 main BasicBlock resnet101 validate AverageMeter accuracy ImbalancedDatasetSampler save_checkpoint adjust_learning_rate main train validate AverageMeter accuracy save_checkpoint adjust_learning_rate main train validate AverageMeter accuracy save_checkpoint adjust_learning_rate main train validate AverageMeter accuracy save_checkpoint adjust_learning_rate main train accuracy AverageMeter load_state_dict get_class PIL_loader convert68to24 _process_data _gaussian draw_gaussian default_reader ImageList 
make_emotion_compatible_to_affectnet get_class PIL_loader convert68to24 change_emotion_label_same_as_affectnet _gaussian draw_gaussian default_reader ImageList ImbalancedDatasetSampler get_class PIL_loader convert68to24 change_emotion_label_same_as_affectnet _gaussian draw_gaussian default_reader ImageList LandmarksAttentionBranch RegionBranch count_parameters main norm_angle count_parameters ResNet resnet50 Bottleneck resnet152 sigmoid conv3x3 resnet34 resnet18 main BasicBlock resnet101 load_state_dict validate batch_size count_parameters warn SGD pretrained DataLoader save_checkpoint dataset max str ImbalancedDatasetSampler load_state_dict AttentionBranch parse_args to ImageList sum range format resnet50 Compose start_epoch add_param_group resume item power get_cls_num_list load RegionBranch print imagesize isfile epochs array len zero_grad region_model to range cat update format size mean item basemodel enumerate time attention_model criterion backward print AverageMeter accuracy step len eval AverageMeter time join model_dir save param_groups topk size t eq mul_ expand_as append sum max adjust_learning_rate TrainList TestList train folds replace str int readlines close shuffle strip dict split append range open int format get_class print change_emotion_label_same_as_affectnet readlines close open append sum range split int print close append open dict loademotionlabels map_emotion_label sum max range len replace list set change_emotion_label_same_as_affectnet asarray average_point astype float32 int32 append zeros len items enumerate int reshape transpose array model copyfile print str state_dict print convert68to24 size unfold unsqueeze permute append regionnet size rand extract_patches_attentivefeatures device net sigmoid abs load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict model num_classes LandmarksAttentionBranch max items list print copy_ from_numpy state_dict exp empty range sum _gaussian format get_class sum
Our work has been published in Pattern Recognition Letters. https://authors.elsevier.com/a/1ca-7cAmylz0f https://www.sciencedirect.com/science/article/abs/pii/S0167865521000489#absh0002 <strong>Title:</strong> Landmark Guidance Independent Spatio-channel Attention and Complementary Context Information based Facial Expression Recognition https://arxiv.org/pdf/2007.10298v1.pdf This work was presented at the FG-2020 workshop "Affect Recognition in-the-wild: Uni/Multi-Modal Analysis & VA-AU-Expression Challenges", IEEE Face and Gesture Recognition (FG-2020) Competition: Affective Behavior Analysis in-the-wild (ABAW). Leaderboard: https://ibug.doc.ic.ac.uk/resources/fg-2020-competition-affect Darshan Gera and S. Balasubramanian. "Affect Expression Behaviour Analysis in the Wild (ABAW) using Spatio-channel Attention and Complementary Context Information", arXiv preprint: http://arxiv.org/abs/2009.14440 Source Code: https://github.com/1980x/ABAW2020DMACS
6
198808xc/OrganSegC2F
['pancreas segmentation']
['A Fixed-Point Model for Pancreas Segmentation in Abdominal CT Scans']
OrganSegC2F/init.py OrganSegC2F/DataF.py OrganSegC2F/utils.py OrganSegC2F/fine_surgery.py OrganSegC2F/coarse_training.py OrganSegC2F/coarse_surgery.py OrganSegC2F/oracle_testing.py OrganSegC2F/coarse_fusion.py DATA2NPY/nii2npy.py OrganSegC2F/coarse_testing.py DATA2NPY/dicom2npy.py OrganSegC2F/fine_training.py OrganSegC2F/oracle_fusion.py OrganSegC2F/DataC.py OrganSegC2F/coarse2fine_testing.py transplant upsample_filt expand_score interp DataLayer DataLayer transplant upsample_filt expand_score interp snapshot_name_from_timestamp training_set_filename volume_filename_testing result_name_from_timestamp result_name_from_timestamp_s is_organ snapshot_name_from_timestamp_s testing_set_filename volume_filename_coarse2fine volume_filename_fusion post_processing snapshot_filename log_filename DSC_computation in_training_set print params range flat len data num upsample_filt print min shape range max join str str join sort len reversed listdir range snapshot_filename str join volume_filename_testing sort reversed listdir range len sum zeros_like where nonzero zeros sum array range
# OrganSegC2F: a coarse-to-fine organ segmentation framework version 1.11 - Dec 3 2017 - by Yuyin Zhou and Lingxi Xie ### Please note: an improved version of OrganSegC2F named OrganSegRSTN is available: https://github.com/198808xc/OrganSegRSTN It outperforms OrganSegC2F (84.50% vs. 82.37%) on the NIH pancreas segmentation dataset. #### Also NOTE: some functions have been optimized in the OrganSegRSTN repository but have not yet been transferred here. I will do this in the near future - the changes do not impact performance, but will make the testing process MUCH faster. #### **Yuyin Zhou is the main contributor to this repository.** Yuyin Zhou proposed the algorithm, created the framework and implemented the main functions. Lingxi Xie later wrapped up the code for release. #### If you use our code, please cite our paper accordingly:
7
198808xc/OrganSegRSTN
['pancreas segmentation']
['Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation', 'A Fixed-Point Model for Pancreas Segmentation in Abdominal CT Scans']
OrganSegRSTN/surgery.py OrganSegRSTN/Crop.py OrganSegRSTN/coarse2fine_testing.py OrganSegRSTN/oracle_fusion.py OrganSegRSTN/Uncrop.py OrganSegRSTN/coarse_fusion.py OrganSegRSTN/Crop_old.py OrganSegRSTN/fast_functions.py OrganSegRSTN/indiv_training.py OrganSegRSTN/init.py OrganSegRSTN/coarse_testing.py OrganSegRSTN/Data.py OrganSegRSTN/joint_training.py OrganSegRSTN/oracle_testing.py OrganSegRSTN/utils.py DATA2NPY/dicom2npy.py DATA2NPY/nii2npy.py CropLayer CropLayer DataLayer _swig_repr swig_import_helper _swig_setattr_nondynamic _swig_getattr post_processing _swig_setattr DSC_computation transplant upsample_filt expand_score interp UncropLayer snapshot_name_from_timestamp volume_filename_testing snapshot_name_from_timestamp_2_s result_name_from_timestamp_s is_organ snapshot_filename in_training_set snapshot_name_from_timestamp_2 testing_set_filename result_name_from_timestamp_2_s log_filename result_name_from_timestamp valid_loss training_set_filename result_name_from_timestamp_2 snapshot_name_from_timestamp_s volume_filename_coarse2fine volume_filename_fusion post_processing DSC_computation find_module load_module get get __repr__ print params range flat len data num print shape upsample_filt max join str int min len splitlines float range find str join sort len reversed listdir range snapshot_filename str join sort len reversed listdir range snapshot_filename str join volume_filename_testing sort reversed listdir range len str join volume_filename_testing sort reversed listdir range len zeros
# OrganSegRSTN: an end-to-end coarse-to-fine organ segmentation framework version 2.0 - Jul 31 2018 - by Qihang Yu, Yuyin Zhou and Lingxi Xie ### NOTEs: #### 1. v2.0 is a MAJOR update to v1.0, in which we: (1) slightly changed the network architecture (the score layers are removed and replaced by a saliency layer), so that network training is more robust (especially on some tiny targets such as pancreatic cysts); (2) carefully optimized the code so that the testing stage is much more efficient, especially when you use multiple processes to run different folds or datasets; (3) re-implemented the two functions "post-processing" and "DSC_computation" in C, which is much faster. Note that our pre-trained models are also updated.
8
1989Ryan/Semantic_SLAM
['semantic slam', 'semantic segmentation', 'autonomous driving']
['Visual Semantic SLAM with Landmarks for Large-Scale Outdoor Environment']
Third_Part/PSPNet_Keras_tensorflow/train.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/convert.py Third_Part/PSPNet_Keras_tensorflow/ade20k_labels.py Third_Part/PSPNet_Keras_tensorflow/python_utils/callbacks.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/examples/imagenet/models/caffenet.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/kaffe/__init__.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/examples/imagenet/dataset.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/examples/imagenet/models/__init__.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/kaffe/transformers.py Third_Part/PSPNet_Keras_tensorflow/Semantic_Information_Publisher.py Third_Part/PSPNet_Keras_tensorflow/drawImage/drawModule.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/kaffe/tensorflow/network.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/examples/mnist/finetune_mnist.py Third_Part/PSPNet_Keras_tensorflow/python_utils/utils.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/examples/imagenet/models/helper.py Third_Part/PSPNet_Keras_tensorflow/layers_builder.py Third_Part/PSPNet_Keras_tensorflow/pascal_voc_labels.py Third_Part/PSPNet_Keras_tensorflow/weight_converter.py Third_Part/PSPNet_Keras_tensorflow/cityscapes_labels.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/kaffe/tensorflow/transformer.py Third_Part/PSPNet_Keras_tensorflow/python_utils/preprocessing.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/examples/imagenet/models/vgg.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/examples/imagenet/models/resnet.py src/cluster.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/examples/imagenet/models/alexnet.py Third_Part/PSPNet_Keras_tensorflow/Semantic_Information_Publisher_C.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/kaffe/layers.py src/DBSCAN.py src/map_engine.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/kaffe/caffe/__init__.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/kaffe/shapes.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/kaffe/errors.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/kaffe/caffe/caffepb.py Third_Part/PSPNet_Keras_tensorflow/drawImage/__init__.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/kaffe/graph.py src/nearbyGPS.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/kaffe/tensorflow/__init__.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/examples/imagenet/classify.py Third_Part/PSPNet_Keras_tensorflow/pspnet.py catkin_ws/.ycm_extra_conf.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/kaffe/caffe/resolver.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/examples/imagenet/models/googlenet.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/examples/imagenet/validate.py Third_Part/PSPNet_Keras_tensorflow/Image_converter.py Third_Part/PSPNet_Keras_tensorflow/caffe-tensorflow/examples/imagenet/models/nin.py GetCompilationInfoForFile IsHeaderFile MakeRelativePathsInFlagsAbsolute FlagsForFile DirectoryOfThisScript cluster expand map_engine temp_read NearbySearch Transform GoogleMaps import_labels_from_mat assureSingleInstanceName Image_converter residual_conv short_convolution_branch BN empty_branch ResNet residual_short interp_block residual_empty build_pyramid_pooling_module Interp build_pspnet generate_color_map generate_voc_labels PSPNet PSPNet50 PSPNet101 Semantic_Imformation_Publisher Semantic_Imformation_Publisher PSPNet train set_npy_weights rot90 main validate_arguments convert fatal_error main classify 
display_results process_image ImageProducer ImageNetProducer main validate load_model AlexNet CaffeNet GoogleNet alexnet_spec DataSpec std_spec get_models get_data_spec NiN ResNet50 ResNet101 ResNet152 VGG16 gen_data_batch gen_data print_stderr KaffeError Graph Node GraphBuilder NodeMapper NodeKind NodeDispatchError NodeDispatch LayerAdapter shape_data shape_not_implemented get_filter_output_shape shape_concat shape_convolution shape_inner_product shape_scalar shape_pool get_strided_kernel_output_shape shape_mem_data shape_identity SubNodeFuser BatchNormScaleBiasFuser DataReshaper BatchNormPreprocessor ReLUFuser DataInjector ParameterNamer NodeRenamer ReductionParameter HingeLossParameter BlobProto BlobProtoVector NetStateRule LayerParameter PowerParameter FillerParameter ArgMaxParameter V0LayerParameter InnerProductParameter ConvolutionParameter SolverState EltwiseParameter LossParameter SliceParameter BatchNormParameter WindowDataParameter DummyDataParameter HDF5OutputParameter TanHParameter TransformationParameter SoftmaxParameter ConcatParameter DataParameter SPPParameter ParamSpec EmbedParameter SolverParameter InputParameter MVNParameter ContrastiveLossParameter NetState NetParameter BiasParameter CropParameter DropoutParameter PoolingParameter Datum SigmoidParameter BlobShape ExpParameter AccuracyParameter LogParameter ThresholdParameter TileParameter MemoryDataParameter LRNParameter ReLUParameter ImageDataParameter ELUParameter ReshapeParameter InfogainLossParameter ScaleParameter V1LayerParameter HDF5DataParameter PReLUParameter FlattenParameter PythonParameter show_fallback_warning CaffeResolver has_pycaffe get_caffe_resolver layer Network MaybeActivated TensorFlowNode get_padding_type TensorFlowEmitter TensorFlowTransformer TensorFlowMapper BaseDraw LrReducer callbacks data_generator_s31 update_inputs generate preprocess_img to_color debug print_activation class_image_to_image color_class_image add_color array_to_str append join startswith IsHeaderFile compiler_flags_ exists compiler_flags_ GetCompilationInfoForFile compiler_working_dir_ MakeRelativePathsInFlagsAbsolute DirectoryOfThisScript remove strip readlines close open append split Label print append loadmat range str str residual_conv short_convolution_branch residual_conv empty_branch residual_empty residual_short print range print exit print tuple interp_block print ResNet SGD Model build_pyramid_pooling_module Input compile zeros bitget array range generate_color_map print enumerate join layers print name reshape item set_weights join load_model weights fit_generator set_npy_weights data_generator_s31 listdir build_pspnet len range print_stderr exit fatal_error print_stderr TensorFlowTransformer transform_data def_path add_argument convert code_output_path validate_arguments caffemodel ArgumentParser parse_args data_output_path phase basename format print round argmax enumerate get_data_spec float32 placeholder ImageProducer GoogleNet model_path image_paths classify to_float pack resize_images minimum slice to_int32 format print float32 placeholder get_models get_data_spec __name__ get_output format print placeholder int32 get_data_spec float in_top_k len validate load_model model exit ImageNetProducer get_data_spec list reshape shuffle images range len append next range gen_data write pad_h stride_w pad_w kernel_h kernel_w float stride_h height hasattr get_filter_output_shape kernel_parameters output_shape parameters width output_shape parameters list parents axis output_shape output_shape CaffeResolver write ceil 
float height width ReduceLROnPlateau ModelCheckpoint TensorBoard astype imresize join list defaultdict print shuffle listdir values join imresize update_inputs zoom astype shuffle imread enumerate color zeros uint8 range class_image_to_image add_color shape range zeros to_color print_activation print Model predict
# Semantic SLAM ![license](https://img.shields.io/bower/l/bootstrap.svg?color=blue) This ongoing project implements Semantic SLAM using ROS, ORB-SLAM and PSPNet101. It will be used in autonomous robotics for semantic understanding and navigation. A visualized semantic map with topological information is now available, where yellow represents buildings and constructions, green represents vegetation, blue represents vehicles, and red represents roads and sidewalks. The cubes mark ambiguous building locations and the green line is the trajectory. You can visualize this information using Rviz. ![semantic SLAM](real-time.gif) You can also get the semantic topological map, which contains only the ambiguous building locations and the trajectory. The overall ROS communication structure of the project is shown below. ![structure](graph.png) ## Bibliography If you use our work in your research, please use the citation below.
9
1Reinier/Reservoir
['time series']
['Efficient Optimization of Echo State Networks for Time Series Datasets']
docs/conf.py reservoir/detail/robustgpmodel.py reservoir/esn.py setup.py reservoir/esn_cv.py reservoir/scr.py tests/test_esn.py reservoir/__init__.py reservoir/clustering.py reservoir/detail/esn_bo.py setup_package ClusteringBO EchoStateNetwork EchoStateNetworkCV generate_states_inner_loop SimpleCycleReservoir EchoStateBO FixedInverseGamma RobustGPModel Suppressor test_esn argv setup intersection zeros hstack range tanh show optimize EchoStateNetworkCV plot print loadtxt reshape train test title legend EchoStateNetwork predict
Reservoir ========= A Python 3 toolset for creating and optimizing Echo State Networks. >Authors: Jacob Reinier Maat, Nikos Gianniotis >License: MIT >2016-2019 Contains: - Vanilla ESN and Simple Cyclic Reservoir architectures. - Bayesian Optimization with optimized routines for Echo State Nets through `GPy`. - Clustering routines to cluster time series by optimized model.
10
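The 1Reinier/Reservoir row above concerns Echo State Networks; for readers unfamiliar with them, here is a minimal NumPy sketch of the generic leaky reservoir state update (textbook form only, not this package's `EchoStateNetwork` API; the hyperparameter values are assumptions).

```python
# Generic echo-state reservoir update: collect states driven by an input signal.
import numpy as np

rng = np.random.default_rng(0)
n_reservoir, n_input = 200, 1
spectral_radius, leak = 0.9, 1.0        # assumed hyperparameter values

W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_input))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))   # scale to target spectral radius

def step(x, u):
    # Leaky-integrated update: x' = (1 - a) x + a * tanh(W_in u + W x)
    return (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)

x = np.zeros(n_reservoir)
states = []
for u_t in np.sin(np.linspace(0, 8 * np.pi, 500)).reshape(-1, 1):
    x = step(x, u_t)
    states.append(x)        # collected states are then fit to targets, e.g. by ridge regression
```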
201518018629031/HGATRD
['graph attention']
['Heterogeneous Graph Attention Networks for Early Detection of Rumors on Twitter']
train.py models.py gat.py layers.py utils.py gcn.py SpGAT GCN GraphConvolution SpGraphAttentionLayer SpecialSpmm SpecialSpmmFunction Model evaluate pass_data_iteratively test adjust_learning_rate train evaluation_4class normalize_adj sparse_mx_to_torch_sparse_tensor load_user_tweet_graph accuracy build_symmetric_adjacency_matrix load_data normalize convert_to_one_hot load_vocab_len sum format arange LongTensor batch_size model backward nll_loss evaluate zero_grad print numpy item step range len arange len eval append range detach param_groups elapsed_time format tweets_count print pass_data_iteratively tolist dataset classification_report save accuracy_score state_dict evaluation_4class print classification_report pass_data_iteratively tolist cuda format normalize_adj LongTensor print tuple multiply sparse_mx_to_torch_sparse_tensor shape eye info range len diags flatten dot sum array sum type_as double data Size astype float32 from_numpy shape int64 format normalize_adj LongTensor print tuple multiply sparse_mx_to_torch_sparse_tensor eye range len flatten sum array diags multiply coo_matrix normalize_adj eye str index round float numpy max range len
# HGATRD The implementation of our IJCNN 2020 paper "Heterogeneous Graph Attention Networks for Early Detection of Rumors on Twitter" # Requirements python 3.6.6 numpy==1.17.2 scipy==1.3.1 pytorch==1.1.0 scikit-learn==0.21.3 # How to use ## Dataset
11
21lva/EEN_Tensorflow
['video prediction']
['Prediction Under Uncertainty with Error-Encoding Networks']
visualize.py new_train_d.py new_models.py new_train_s.py utils.py dataloaders/data_atari.py LatentNetwork DeterministicNetwork validation_epoch train train_epoch validation_epoch train train_epoch read_data log load_model ImageLoader get_batch print epoch_size apply_gradients compute_gradients range compute_gradients epoch_size get_batch range format model nEpoch system train_epoch datapath log save_weights filename append range validation_epoch open str print write system now close dirname open get eval cpu
# EEN_Tensorflow An Error-Encoding Network implemented in TensorFlow, for Breakout only. The EEN paper: https://arxiv.org/abs/1711.04994
12
21lva/EEN_acrobot
['video prediction']
['Prediction Under Uncertainty with Error-Encoding Networks']
train_d.py expert_policies/utils/tools.py pa.py models.py predict.py utilsf.py train_s.py changeJson.py LatentNetwork DeterministicNetwork DataMaker load_model DrawGraph invScaling BaseLineModelPredictor mape get_cond LatentModelPredictor validation_epoch train train_epoch validation_epoch train train_epoch read_data log json_to_par make_dir_if_none parse_input_str str2bool LatentNetwork DeterministicNetwork AdamOptimizer load_weights lr ri get_batch range model nState clf open str title ylim savefig append range plot npred close tfilename nAction ncond reader nTransporter print system DrawGraph model compute_loss get_cond save_dir datapath open writer tolist filename range predict npred close mean tfilename ncond nTransporter print writerow system decode DrawGraph model mae ri compute_loss get_cond save_dir datapath open writer invScaling tolist mape filename append range convert_to_tensor format npred close tfilename ncond print writerow system float32 mean_squared_error norm get_batch epoch_size mean apply_gradients compute_gradients range get_batch epoch_size mean compute_gradients range model nEpoch save_weights clf validation_epoch log datapath writer open title savefig filename append range format plot close writerow system train_epoch figure loss open __setattr__ list read_data keys str print write system now close dirname open parse_args add_argument ArgumentParser print exists makedirs
### EEN_acrobot An Error-Encoding Network for Acrobot, implemented with tensorflow.keras. The EEN paper: https://arxiv.org/abs/1711.04994 Requirements: python>=3.5, tensorflow (any version that supports tf.keras), numpy, matplotlib, etc.
13
3778/Ward2ICU
['time series']
['Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs', 'Ward2ICU: A Vital Signs Dataset of Inpatients from the General Ward']
tests/test_samplers.py run-experiment.py tests/test_models.py ward2icu/models/classifiers.py ward2icu/metrics.py ward2icu/models/cnngan.py ward2icu/models/__init__.py ward2icu/trainers.py ward2icu/samplers.py tests/test_data.py ward2icu/models/rgan.py ward2icu/utils.py ward2icu/models/rcgan.py tests/test_utils.py tests/test_trainers.py ward2icu/layers.py ward2icu/__init__.py main cli test_TimeSeriesVitalSigns_transform_minmax_signals test_TimeSeriesVitalSigns_transform_minmax test_TimeSeriesVitalSigns_inv_transforms test_RCGAN test_BinaryRNNClassifier test_RGAN_forward test_FCCMLPGAN RGAN_setup test_BinaryCNNClassifier test_BinaryBalancedSampler test_BinaryBalancedSampler_batch test_IdentitySampler test_MinMaxGANTrainer_calc_gen_loss test_BinaryClassificationTrainer_train binaryclass_tiled_trainer test_MinMaxGANTrainer_calc_dsc_loss test_MinMaxGANTrainer_train minmaxgan_trainer binaryclass_trainer test_MinMaxGANTrainer_calculate_metrics test_BinaryClassificationTrainer_calculate_metrics_tiled test_BinaryClassificationTrainer_calculate_metrics test_tile Conv1dLayers AppendEmbedding rnn_layer Permutation View mean_feature_error classify tstr IdentitySampler BinaryBalancedSampler BinaryClassificationTrainer numpy_to_cuda flatten tile calc_conv_output_length n_ BinaryRNNClassifier BinaryCNNClassifier CNNCGANGenerator CNNCGANDiscriminator RCGANGenerator RCGANDiscriminator RGANDiscriminator RGANGenerator main cli test_TimeSeriesVitalSigns_transform_minmax_signals test_TimeSeriesVitalSigns_transform_minmax test_TimeSeriesVitalSigns_inv_transforms test_RCGAN test_BinaryRNNClassifier test_RGAN_forward test_FCCMLPGAN RGAN_setup test_BinaryCNNClassifier test_BinaryBalancedSampler test_BinaryBalancedSampler_batch test_IdentitySampler test_MinMaxGANTrainer_calc_gen_loss test_BinaryClassificationTrainer_train binaryclass_tiled_trainer test_MinMaxGANTrainer_calc_dsc_loss test_MinMaxGANTrainer_train minmaxgan_trainer binaryclass_trainer test_MinMaxGANTrainer_calculate_metrics test_BinaryClassificationTrainer_calculate_metrics_tiled test_BinaryClassificationTrainer_calculate_metrics test_tile Conv1dLayers AppendEmbedding rnn_layer Permutation View mean_feature_error classify tstr IdentitySampler BinaryBalancedSampler BinaryClassificationTrainer numpy_to_cuda flatten tile calc_conv_output_length n_ BinaryRNNClassifier BinaryCNNClassifier CNNCGANGenerator CNNCGANDiscriminator RCGANGenerator RCGANDiscriminator RGANDiscriminator RGANGenerator main generator model div abs cuda SequenceTrainer BinaryBalancedSampler tstr discriminator classify synthesis_df log_model trainer size info sample log_to_mlflow log_metric set_tag log_df TimeSeriesVitalSigns mean_feature_error numpy float128 synthesis_df y astype array array dict RGANDiscriminator RGANGenerator forward sampler RCGANGenerator RCGANDiscriminator sampler dict forward len BinaryRNNClassifier forward randn forward BinaryCNNClassifier randn sampler dict FCCMLPGANDiscriminator forward FCCMLPGANGenerator len BinaryBalancedSampler eye sample Tensor range BinaryBalancedSampler eye sample Tensor range ones Tensor IdentitySampler sample IdentitySampler SGD parameters cpu float BinaryClassificationTrainer IdentitySampler SGD parameters cpu float BinaryClassificationTrainer MinMaxBinaryCGANTrainer RCGANGenerator randn IdentitySampler RCGANDiscriminator SGD parameters randint train mean Tensor calc_dsc_real_loss calc_dsc_fake_loss calc_gen_loss Tensor Tensor calculate_metrics train sigmoid Tensor sum calculate_metrics sigmoid T sum calculate_metrics Tensor tile dict 
numpy_to_cuda BinaryBalancedSampler IdentitySampler Adam parameters info train cuda BinaryClassificationTrainer numpy_to_cuda BinaryBalancedSampler IdentitySampler train_test_split_tensor Adam parameters info train cuda BinaryClassificationTrainer _maybe_slice padding kernel_size stride dilation
--- <div align="center"> # Ward2ICU [![Paper](http://img.shields.io/badge/paper-arxiv.1910.00752-B31B1B.svg)](https://arxiv.org/abs/1910.00752) [![3778 Research](http://img.shields.io/badge/3778-Research-4b44ce.svg)](https://research.3778.care/projects/privacy/) [![3778 Research](http://img.shields.io/badge/3778-Survey-4b44ce.svg)](https://forms.gle/e2asYSVaiuPUUCKu8) </div> <!--ts--> * [Description](#description)
14
3lis/rnn_vae
['autonomous driving']
['On the Road with 16 Neurons: Mental Imagery with Bio-inspired Deep Neural Networks']
src/tester.py src/exec_eval.py src/gener.py src/h5lib.py src/exec_main.py src/losses.py src/cnfg.py src/exec_lata.py src/arch.py src/exec_feat.py src/exec_dset.py src/sample_sel.py src/trainer.py src/mesg.py src/pred.py RecMTimepred save_model Rec2Timepred RecTimepred RecMultiAE RecMultiVAE load_model load_vgg16 RecAE VAE AE get_unb_loss jaccard_loss MultipleVAE get_loss create_model stanh model_summary Autoencoder Multiple get_optim pre_unb_ones RecVAE Timepred MultipleAE Recursive get_config load_config get_args get_args_eval sel_test dset_class get_files_noclass dset_seq convert_gt dset_sel get_pos dset_time plain_link_synthia get_odom resize_img set_mseq_tree dset_seq_multi next_file save_dset_time get_files_class dict_odom next_timeframe check_valid dset_noclass frac_dataset resize_synthia make_symlink read_pose get_pitch load_dset_time check_timeseq pred_models frmt_result write_header eval_model_set eval_model pred_set recreate_model get_multi_decoders pred_model do_inspect_model pred_time get_decoder recall_img inspect_model eval_models get_encoder gen_latent dset_latent base_stat plot_seq_stat get_args seq_stat plot_code_stat code_stat read_latent test_model init_config archive create_model dset_class len_dataset iter_simple dset_multiple iter_multi_seq iter_double iter_seq iter_multiple dset_sequence dset_same gen_dataset dset_multi_seq load_h5_layer load_h5_group valid_layer load_h5 invalid_layer h5_multi_groups h5_single_group load_h5_vgg16 xentropy unb_lane array_to_image mse save_loss get_unb_pars unb_car get_unb_par image_to_array print_err print_wrn print_line print_msg TimeInterPred eval_latent in_rgb pred_tset eval_img create_test_vae accur_img model_dead accur_mutliseq get_targ eval_tset img_overlays pred_folder save_image interpolate_pred code_to_latent get_metric img_overlay eval_multitset file_to_array pred_image accur_sel_time model_outputs hallucinate pred_rec_tset model_weights eval_latent_folder create_test_model pred_time_sample swap_latent accur_tset_time create_test_mvae layer_outputs array_to_image pred_multi_image pred_multirec_image eval_multimg pred_time latent_to_image pred_multi_tset eval_raw pred_multirec_tset accur_class_latent code_to_path save_collage make_collage imgsize accur_tset save_gif layer_weights accur_class_multiseq out_rgb accur_latent rearrange_rtime ChangeKlWeight plot_history set_callback generate_dset_time train_ae_model train_time_model print_err format join lower load_img img_to_array pre_unb_ones abs sum print_err format join plot_model print_summary makedirs endswith join get_file join getcwd makedirs join save_weights RecVAE RecMTimepred format Timepred Rec2Timepred MultipleAE print_err RecAE VAE RecTimepred RecMultiAE RecMultiVAE AE MultipleVAE compile add_argument ArgumentParser add_argument ArgumentParser format print_msg print_err system dict list keys append format append sorted next_timeframe range str dirname len reshape array swapaxes split norm format print exit next_file read_pose get_pos get_pitch join get_odom format print len int join next_timeframe sorted format print reshape next_timeframe File close set array append keys range seed join list format print save frac_dataset sample dset_time range len load join slice int format print join isinstance relpath symlink isfile range len close array open ANTIALIAS size min convert save resize crop open fromarray read format list uint8 print exit Reader save array join resize_img format print rmtree convert_gt exists makedirs join format replace print rmtree func exists 
makedirs sorted format isinstance print append listdir sorted format isinstance print append listdir range len seed join get_files_noclass make_symlink rmtree frac_dataset sample exists makedirs seed list len make_symlink rmtree frac_dataset sample range array exists get_files_class makedirs seed list format get_files_noclass next_timeframe makedirs make_symlink sample rmtree array frac_dataset append range check_timeseq exists enumerate len format rmtree append exists makedirs seed set_mseq_tree list next_timeframe make_symlink sample array frac_dataset append range check_timeseq enumerate get_files_class len center format write expandtabs len max list std mean array values join list format create_model dset_time_neat print get_config exit dset_time keys compile print name format exit print name format exit format layers print name exit join eval_multitset ndarray isinstance eval_tset mean lower join recreate_model eval_model_set clear_session join write_header eval_model_set recreate_model write close open model_outputs model_weights create_test_model clear_session join recreate_model isfile inspect_model makedirs recreate_model get_decoder join lower pred_tset clear_session join pred_set recreate_model pred_time isfile makedirs clear_session join recreate_model pred_model clear_session join format print pred_image recreate_model exit isfile out_rgb save_image print join pred_image walk join list format print File close gen_latent create_dataset keys get_encoder makedirs join sorted format list print reshape File exit close shape array keys mean std base_stat clip append len base_stat format append clip len set_aspect arange len close pcolormesh savefig yticks set_aspect arange len close pcolormesh savefig yticks join format set_session isinstance print_err makedirs len strftime set_random_seed eval ConfigProto Session open join save_model model plot_history train_ae_model train_time_model clear_session pred_tset model eval_tset generate_dset_time interpolate_pred pred_folder eval_multitset time_test_set print_wrn recreate_model accur_sel_time hallucinate pred_rec_tset eval_latent_folder range get_encoder nn_best pred_time_sample format create_model accur_tset_time lower pred_time pred_multi_tset TimeInterPred pred_multirec_tset enumerate join accur_class_latent isinstance print makedirs accur_tset time_test_str get_decoder accur_class_multiseq len join format system makedirs ImageDataGenerator flow_from_directory format iter_simple save_image next range format iter_simple append save_image next range len format iter_simple copy append save_image next range enumerate len format iter_simple append save_image range enumerate len join iter_simple join iter_double join iter_multiple join iter_seq iter_multi_seq len walk len print_err format format print_wrn name set_weights shape get_weights array len list format print_wrn get_layer keys enumerate invalid_layer print_wrn format enumerate list File valid_layer keys list format print_wrn name File search valid_layer keys shape get_weights array enumerate set_weights join close float array open get_unb_par join zeros_like clip log where get_unb_pars xentropy get_unb_pars xentropy fromarray uint8 ptp min convert mean expand_dims close array open array_to_image save f_back basename format write exit getfile f_lineno f_back basename format write getfile flush f_lineno write isinstance input_shape isinstance isinstance output_shape expand_dims img_to_array load_img split_latent ndarray isinstance variable array_to_image eval img_overlays out_rgb 
expand_dims predict load join int array_to_image ndarray isinstance save new paste resize range len save make_collage save size new convert array_to_image where filter FIND_EDGES composite img_overlay ndarray isinstance array_to_image save array print_err format replace file_to_array in_rgb file_to_array isinstance out_rgb save_image predict in_rgb file_to_array isinstance img_overlays save_image predict in_rgb format isinstance makedirs enumerate img_overlays append save_image range predict len join replace pred_image pred_multi_image img_overlays save open makedirs join list sorted save_collage imgsize linspace out_rgb array range len join sorted list save_collage imgsize open linspace append array range len join sorted list save_collage size array_to_image linspace zip append array range len join sorted list save_collage size img_overlays linspace zip append array range len linspace list map append range predict format glob latent_to_image swapaxes zip join save_collage isinstance print_err imgsize out_rgb array len int join format list replace save_collage makedirs len imgsize save_gif save zip append expand_dims array range predict open save open list len append expand_dims range predict format replace update_frame latent_to_image zip int join save_collage imgsize save_gif array makedirs in_rgb variable img_overlays linspace save open split_latent file_to_array append predict format replace insert eval enumerate join save_collage imgsize save_gif makedirs join in_rgb split_latent file_to_array list save_collage concatenate variable map imgsize eval img_overlays save open predict makedirs logical_or sum logical_and logical_not append accur_img predict zip accur_img latent_to_image zip append expand_dims array predict items sorted file_to_array format list accur_img pred_image write close nanmean zip out_rgb array values open items sorted format list values print print_err write close nanmean open append accur_latent array range flush len sel_test values open sorted list header check_incl append range format next_timeframe close flush items print write nanmean accur_latent array sel_test accur_mutliseq values open sorted list header check_incl append range format next_timeframe close flush items print write nanmean array sel_test code_to_path format print accur_img write get_targ close nanmean latent_to_image check_incl append code_to_latent flush open latent_to_image code_to_latent get_targ eval_latent join save makedirs out_rgb in_rgb file_to_array evaluate file_to_array evaluate file_to_array pred_image variable metric eval out_rgb items sorted format list eval_img values write close nanmean zip array get_metric compile open items sorted format list values tuple write close nanmean zip eval_multimg array compile open Model get_layer lower get_layer Model fromarray join format save_collage print_msg ptp min imgsize append range max join format save_collage print_msg ptp reshape min isnan append sum array range join format isinstance layer_outputs print_msg pred_image print_err imgsize zip predict layer_weights isinstance zip list format print_msg ptp map linspace zeros sum range len array swapaxes rearrange_rtime load_dset_time join TensorBoard EarlyStopping ChangeKlWeight ceil ModelCheckpoint append makedirs len_dataset multi_gpu_model model now fit_generator gen_dataset ceil multi_gpu_model model fit now generate_dset_time len list format plot xlabel close ylabel mean ylim savefig legend std range len
# On the Road with 16 Neurons: Mental Imagery with Bio-inspired Deep Neural Networks ###### *Alice Plebe, Mauro Da Lio (2020).* --- This repository contains the source code related to the paper [*"On the Road with 16 Neurons: Mental Imagery with Bio-inspired Deep Neural Networks"*](http://arxiv.org/abs/2003.08745). The code is written in Keras using TensorFlow backend. <!-- The code is written in Keras 2.2.4 using TensorFlow 1.12.0 backend. The scripts are executed with Python 3.6.8. The networks are trained on multiple GPUs with CUDA 10.1. The neural models obtained from Keras are exported to __Wolfram Mathematica 11.3__ for visualization.
15
4kubo/bacf_python
['visual tracking']
['Learning Background-Aware Correlation Filters for Visual Tracking']
background_aware_correlation_filter.py special_operation/convertor.py utils/arg_parse.py special_operation/resp_newton.py utils/report.py utils/get_sequence.py image_process/feature.py utils/functions.py demo_on_otb.py BackgroundAwareCorrelationFilter show_image_with_bbox get_pixel get_pyhog get_pixels resize_DFT2 resp_newton parse_args get_subwindow_no_window get_sequence_info _get_path_to_seq _exclude load_image _get_gt_bbox _numerical_sort _get_frame_names LogManger imshow rectangle waitKey minimum astype maximum stack array minimum tuple astype maximum pad resize array float64 process astype copy squeeze min zeroes shape any array floor ceil prod int exp mod nanargmax reshape pi flatten prod floor tile real argmax max einsum add_argument ArgumentParser arange floor isscalar meshgrid array _get_path_to_seq format print _exclude _get_gt_bbox zip enumerate int format glob strip split append _get_frame_names exists enumerate len glob sorted format list map compile split array exists enumerate len append enumerate
# A port of [BACF in Matlab](http://www.hamedkiani.com/bacf.html) to Python 2 A Python 2 implementation of Background-Aware Correlation Filters for visual tracking. For more detail, please refer to the [arXiv paper](https://arxiv.org/abs/1703.04590) and [the author's website](http://www.hamedkiani.com/bacf.html) ![tracking_result_with_response](https://user-images.githubusercontent.com/12559799/30769981-61fa6fea-a060-11e7-9c0a-854131931867.png) # Requirements - python 2 - numpy - scipy - opencv - PIL
16
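For the `4kubo/bacf_python` entry above, the sketch below illustrates the frequency-domain correlation step that background-aware correlation filter trackers build on. It is a minimal, assumption-laden illustration only; the function name `correlation_response` and its signature are invented here and are not the repository's API (see `background_aware_correlation_filter.py` for the real implementation).

```python
# Illustrative sketch: single-channel correlation-filter response via the FFT.
import numpy as np

def correlation_response(patch, filt):
    """Correlate a feature patch with a learned filter in the Fourier domain.

    patch, filt: 2-D float arrays of the same shape (one feature channel).
    Returns the spatial response map; its argmax gives the predicted target shift.
    """
    patch_f = np.fft.fft2(patch)             # patch in the frequency domain
    filt_f = np.fft.fft2(filt)               # filter in the frequency domain
    response_f = np.conj(filt_f) * patch_f   # correlation = conjugate product
    return np.real(np.fft.ifft2(response_f))

# Toy usage with random data.
rng = np.random.default_rng(0)
resp = correlation_response(rng.standard_normal((64, 64)), rng.standard_normal((64, 64)))
dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
print("estimated shift:", dy, dx)
```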
50kawa/mimick_chainer
['word embeddings']
['Mimicking Word Embeddings using Subword RNNs']
make_wordlist.py data_make.py util.py eval_wordsimilarity.py main.py model.py ngram zijougosa cos_sim cos_sim load_wordvector mse main calc_batch_loss cos_sim EncoderCNN load_model Interpreter Encoder EncoderGRU EncoderSumFF Model EncoderSum SplitWord Dataread PWIMdata load_word2vec_format load_fasttext_format random_cw_list extend sample cos_sim append range len data disable_update Interpreter ArgumentParser train_data GradientClipping cleargrads list use load_model setup hasattr exit Adam interpreter add_hook append parse_args range update save_hdf5 shuffle zip calc_batch_loss float enumerate check_cuda_available backward print load_wordvector add_argument test_data to_gpu len load max_input_len Model Dataread char_dict len
# mimick_chainer My attempt at implementing https://arxiv.org/abs/1707.06961 in Chainer. ## How to run
17
590shun/vsum_dsf
['video summarization']
['Video Summarization using Deep Semantic Features']
script/summarize.py func/sampling/vsum.py func/dataset/summe.py script/evaluate.py script/uniform_smpl.py func/nets/vid_enc.py func/nets/vid_enc_vgg19.py SUMME Model Model representativeness encodeSeg uniformity VSUM eval_human_summary eval_summary eval_f1 get_flabel uniform_sampling model reduce from_numpy feat sampleFrame mean getChrDistances mean getDistances int list astype load get T format eval_f1 loadmat open load get T format sum len min delete eval_f1 append loadmat max range open zeros list map zip print zeros float floor
# Video Summarization using Deep Semantic Features This is a rewrite of "Video Summarization using Deep Semantic Features" in ACCV'16 [[arXiv](arxiv.org/abs/1609.08758)]. [Link to the original implementation](http://github.com/mayu-ot/vsum_dsf), [notes](https://github.com/590shun/paper_challenge/issues/7) ## Experimental procedure git clone https://github.com/590shun/vsum_dsf.git ### Options This code builds on M. Gygli *et al.* [1] below. Set up the environment as follows: cd vsum_dsf git clone https://github.com/gyglim/gm_submodular.git
18
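The `590shun/vsum_dsf` entry above lists a `script/uniform_smpl.py` with a `uniform_sampling` helper, a common uniform-keyframe baseline for video summarization. The sketch below shows the idea only; its signature and the `summary_ratio` parameter are assumptions for illustration, not taken from the repository.

```python
# Minimal sketch of a uniform keyframe-sampling baseline for video summarization.
import numpy as np

def uniform_sampling(n_frames, summary_ratio=0.15):
    """Return a 0/1 selection vector keeping roughly `summary_ratio` of the frames."""
    n_keep = max(1, int(round(n_frames * summary_ratio)))
    keep_idx = np.linspace(0, n_frames - 1, n_keep).round().astype(int)
    selection = np.zeros(n_frames, dtype=int)
    selection[keep_idx] = 1        # mark evenly spaced frames as the summary
    return selection

print(uniform_sampling(20, 0.25))  # e.g. keeps 5 evenly spaced frames out of 20
```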
5gon12eder/msc-graphstudy
['graph generation', 'data augmentation']
['Aesthetic Discrimination of Graph Layouts']
driver/quarry.py benchmarks/lib/history.py driver/www/property.py driver/www/graph.py utils/prepare-local-diffent-regression.py benchmarks/lib/__init__.py eval/progress.py driver/__init__.py eval/make-config-puncture.py utils/prepare-entropy-regression.py benchmarks/micro-driver.py driver/model.py driver/www/nn.py driver/www/perfstats.py test/driver-check.py utils/format-puncture.py driver/www/overview.py utils/format-confusion.py utils/format-huang-weights.py benchmarks/lib/manifest.py codegen/enums.py driver/cli.py driver/constants.py eval/cross-validate.py benchmarks/lib/fancy.py driver/layinter.py maintainer/maintainer.py driver/errors.py driver/crash.py driver/www/select.py report/pics/get-graph-info.py utils/download.py driver/doctests.py driver/layworse.py driver/graphs.py driver/www/property_plots.py driver/alternatives.py driver/www/__init__.py driver/manager.py driver/resources/__init__.py driver/integrity.py driver/www/badlog.py driver/compare.py benchmarks/lib/cli.py driver/www/layout.py eval/get-huang-parameters.py utils/format-competing-metrics.py utils/typeset.py driver/httpd.py driver/nn.py driver/deploy.py driver/features.py benchmarks/macro-driver.py driver/configuration.py driver/archidx.py benchmarks/history.py driver/layouts.py benchmarks/lib/runner.py driver/properties.py driver/utility.py driver/xjson.py driver/metrics.py utils/format-nn-info.py utils/picture-wrapper.py utils/format-disco.py driver/www/impl_common.py driver/imports.py driver/tools.py _action_export _action_list median_rel_stdev main is_outlier count _mean_stdev Generator MacroManifestLoader MacroCollectionRunner main _is_list_of_str MacroBenchmarkRunner MicroBenchmarkRunner main MicroCollectionRunner MicroManifestLoader _ArgInput use_color add_constraints add_argument_groups add_info add_manifest add_history_optional _ArgMaybe add_color add_help TerminalSizeHack add_alert _ArgRealFile add_benchmarks add_verbose add_history_mandatory _ArgNumber Reporter _fmttime NoAnsi _shorten Ansi _as_singleton HistoricResult _count ResultAggregation History InvalidManifestError ManifestLoader Result BenchmarkRunner CollectionRunner Constraints Failure _get_trend TimeoutFailure print_source dashy log print_header print_test main make_text_wrapper get_alternative_value get_alternative_context _get_graph_id _get_alternative_value_huang AlternativeContext _get_alternative_value_stress AlternativeContextHuang _Indexer parse_generator Main _patch_logging_module _get_log_level_environment _adjust_log_level _check_working_directory AbstractMain read_id_pairs process smart_open Main parse_dmspec make_id_output_projection get_id_pairs ValueErrorInDisguise _CfgGraphs ConfigReader _Config _CfgLayInter _CfgLayWorse _ConfigBySize _CfgImports _CfgLayouts _CfgPuncture Configuration _CfgMetrics _ConfigByRate BadLog _CfgProperties Binnings Id Layouts Tests Metrics sqlrepr LogLevels translate_enum_name Kernels enum_to_json TestStatus LayWorse _GraphSizeAttributes enum_from_json LayInter Generators Properties Actions GraphSizes dump_current_exception_trace _make_badlog parse_positive_float _perform_cleaning Main _perform_action _recursively_run_doctests _Status Error SanityError ConfigError RecoverableError FatalError _convert_result get_graph_features get_layout_features _ilog _emit_layout_features _emit_graph_features _gen_generic _quick_archive_import_eh _call_generic_tool do_graphs _BucketList _insert_layout _insert_graph _gen_import _generate_graphs _update_bucket_list RequestHandler _filename _validate_vicinity 
_validate_property _run_httpd_forever _ChunkedIO canonical_path Main _close_log_file _make_html_tree _open_log_file Httpd _convenient_name UrlImportSource NullImportSource DirectoryImportSource ImportSource _log_usage_notice get_well_known_import_sources TarImportSource _log_types_notice get_import_source_from_json _get_mandatory_and_optional_parameters _get_cache_file_name _check_attribute_type _isbetween _load_indexed_files Main _store_indexed_files _get_file_info Checker _interpolate_generic do_lay_inter _add_inter_layout _lay_generic do_layouts _worsen_generic _add_worse_layout do_lay_worse _log_data _has_functional_fileno_attribute _register_sqlite_types Manager _handle_popen_stdin _handle_popen_stdout _compute_metric do_metrics _get_graph_size _compute_all_metrics _insert_metric _test_alternatives _restore_features _save_test_score _do_compile_model _get_layout _check_for_non_finite_feature_vectors _DebugMain _build_model _restore_testscore _train_alternative_huang _import_3d_party_libraries _gather_worse _test_model _get_layout_features_curs _save_features _rank_layouts _gather_proper _restore_weighted_model _get_debugdir _get_normalizers _make_half_model _cache_put _dump_model_infos _train_model _log_discard_too_old _normalize_data _train_alternatives _gather_inter _cache_get _load_training_and_testing_data _DataSet _rank_layout_id _get_graph_id _graph_features _save_weighted_model _get_graph_id_curs _dump_corpus_infos _get_same_graph_id _layout_features _rank_layout_ids _get_same_graph_id_curs _load_huang_context_with_matrix_and_weights_vector _setup do_model _get_sql_view_columns _rank_layout _write_test_summary _get_worse_max _get_graph_features_curs Oracle NNTestResult NNFeature do_properties _insert_data _get_directory _get_executable _sql_insert_curs _temporary_to_persistent_name _maybe_insert_pca _rename_files_batch _compute_prop_outer _compute_prop_inner _roundabout pickle_objects unpickle_objects get_cache_directory get_gnuplot_options _find_tool _get_safe_command _get_cache_directory get_gnuplot_environment find_tool_lazily find_all_tools_eagerly GetNth Real rfc5322 value_or Power singleton get_one_or prepare_fingerprint attribute_projection count get_one encoded _format_number get_first_or index_projection change_file_name_ext replace_list is_sorted get_first Override Static make_ld_library_path fmtnum Integer GetNthOr _preprocess_line XJsonError load_xjson_file load_xjson_string _load_xjson_pp get_resource_as_stream serve _fmtkey serve append_common_property get_informal_layout_name get_bool_from_query get_int_from_query append_common_all_constants get_informal_layout_name_curs append_common_graph _offer_picture_to_cache _caching_policy _NotFound _get_primary_axes_curs _get_cache_file_name _append_pca _xmlkey _check_graphics_format _serve_for_graph _serve_picture _serve_by_id serve _serve_generic _append_one_property_inner _get_primary_axis_curs _get_picture_from_cache _get_primary_axes _sqlkey _get_worse_info _append_property _get_conversion_command_from_svg _append_one_property_outer _get_inter_info _get_layout_ids_from_query _404 _500 _load_feature_rows _serve_testscore _NotFound _serve_demo _get_random_layout_pairs _serve_features _get_random_count_from_query _get_special_test_case_selection_from_query serve serve _get_perf_stats_raw _get_perf_stats_cooked serve _append_scaled_histogram_stuff _check_graphics_format _UnknownHandler _serve_sliding_average _Handler _get_sigma_from_query _GN _serve_plot _serve_histogram _get_bincount_from_query _gpltin 
_get_vicinities_from_query _serve_entropy serve _get_type _add_row_value link_xslt Child validate_layout_id validate_graph_id validate_id _NoXslt parse_bool HttpError format_bool append_child get_one make_statistic combine_confusions get_confusion cross_validate main combine_huang_weights Confusion main print_punctures main canonical _property store_info print_message _file _warning Info _float_nn main update_progress_report load_info Error ForEachCfg get_vpath_and_vardeps_root BuildError _BuildStatus find_build_tool pretend _print_summary Main PipeGuard report _rfc5322 looks_like_build_directory get_top_level_source_directory main format_integer write_texdef check_graph_files check_graphs b2s main check_layouts check_database_exists parse_checksum_option extract_files_tar add_to_cache invoke_argument_parser guess_archive_format get_human_rate try_download extract_files serve_from_download info_ext copy_iostreams extract_files_zip info main serve_from_cache parse_extract_option hexhash bemoan load_rename_table _test analysis_dict2tuple dummy_analysis get_lazy_okay main write_output check_lazy_okay get bemoan _test get_lazy_okay main check_lazy_okay bemoan get_lazy_okay main parse_disco_file check_lazy_okay parse_dmnamespec bemoan load_rename_table get_lazy_okay main check_lazy_okay main Parser get_layer_dim_def bemoan load_rename_table _test info_2_mean_stdev get_puncture_properties get_lazy_okay main check_lazy_okay print_output main get_princomp main main Error get_aux_checksum quote get_tool_flags save_html_report find_dependencies find_dependency run_command ap_directory prepare_dependencies get_last_mtime run_toolchain main find_executable looks_like_error_line add_argument_group add_argument add_help add_parser ArgumentParser add_history_mandatory add_subparsers sorted format get_descriptions print keys localtime description max count sorted get_benchmark_results strftime stdev printhdr format results median_rel_stdev mean n print min timestamp filter len print_info use_color load add_argument_groups Reporter print manifest Constraints color MacroManifestLoader info compile mean stdev MicroManifestLoader add_constraints add_info add_manifest add_history_optional add_argument_group add_color add_help add_alert add_benchmarks add_verbose add_argument add_argument add_argument add_argument add_argument add_argument add_argument add_argument add_argument add_argument compile sqrt join items format strip loads infile startswith parse_args log makedirs sorted format print len map keys upper fill max make_text_wrapper enumerate sorted format print dashy keys len sorted format print dashy keys enumerate print defaultdict abs _get_graph_id get_cached_mean_and_stdev append sum add set get_one_or sql_select_curs abs enum_from_json join imported join getcwd debug format get list reverse addLevelName format print Lambda map lower setattr is_alternative zip oracle read_id_pairs combinations list stdin all extend map set any append error format map enumerate WARNING ERROR CRITICAL DEBUG INFO Enum isinstance mkstemp exc_info format critical known_bad do_properties format name do_metrics now rfc5322 do_model do_layouts do_graphs do_lay_inter do_lay_worse notice clean_worse format clean_graphs name now rfc5322 clean_metrics clean_inter clean_properties clean_layouts notice clean_model float update join format print testmod write extend iter_modules perf_counter stderr timedelta import_module print_report _Status list keys frozenset puncture get_one_or sql_select_curs join sql_select_curs list items 
extend dict FIXED_COUNT get_one_or get_wrapper items list defaultdict desired_graphs import_sources request _generate_graphs _update_bucket_list all_items list format _quick_archive_import_eh discard_unbounded_requests name info sum max change _gen_generic format name get_well_known_import_sources imported info _gen_import len pick join format extend target abs_bindir enum_to_json append total Id make_graph_filename getenv int int format configdir Configuration isinstance int close append_child isinstance Element list filter split dict get_import_source_from_json get getfullargspec join format throw_u list items _get_mandatory_and_optional_parameters throw _check_attribute_type type notice join notice format args dict defaults reversed args get_cache_directory format startswith notice expanduser isabs split notice format notice format S_ISSOCK st_mode S_ISDIR S_ISWHT S_ISBLK S_ISDOOR stat S_ISIFO S_ISCHR S_ISLNK S_ISREG S_ISPORT _interpolate_generic list format combinations sorted name map set info LAY_INTER get_bad keys len format debug rename dirname makedirs sorted sql_select format classify name LAYOUTS _lay_generic notice get_bad enumerate len name format info _worsen_generic format list sql_select name LAY_WORSE map set info get_bad keys len format debug rename dirname makedirs format rstrip len map splitlines notice enumerate _has_functional_fileno_attribute read BytesIO isinstance DEVNULL getbuffer _has_functional_fileno_attribute PIPE DEVNULL fileno getkey register_converter register_adapter list items _compute_all_metrics sql_select format _compute_metric name info notice get_bad METRICS append call_graphstudy_tool extend _insert_metric format rstrip map info enumerate len notice format format nn_weights nn_model save_weights info format nn_weights nn_model model_from_yaml load_weights info list format zip dict nn_features NNFeature info append float format reshape nn_features info _log_discard_too_old list executemany iter append float enumerate compile tuple format sql_exec get_one_or sql_select_curs notice _get_graph_id_curs format getenv format warning nndir _import_3d_party_libraries makedirs _train_alternatives _test_alternatives _load_training_and_testing_data _setup _build_model _train_model _write_test_summary info _save_weighted_model _test_model time list format _get_normalizers reshape _DataSet _check_for_non_finite_feature_vectors set dict info _dump_corpus_infos _save_features array _normalize_data len int format info fill zeros sum combinations list sql_select_curs _rank_layouts keys sql_select_curs combinations defaultdict _get_graph_id dict keys _rank_layout_ids sql_select_curs combinations defaultdict items _rank_layout_id _get_graph_id dict _get_worse_max keys time format _get_debugdir _make_half_model _do_compile_model Model _dump_model_infos notice info nn Input list Input warn sqrt value_or append round time format info out fit time format std size mean info sum out predict _rank_layout isnan zeros sum range nan_to_num range join format name plot_model info percentage join format bias info len time format name _train_alternative_huang info format scipy_opt_minimize metrics _load_huang_context_with_matrix_and_weights_vector name getattr nan info float sum enumerate format name filter info zeros is_alternative metrics _check_for_non_finite_feature_vectors nan_to_num get_cached_mean_and_stdev nan AlternativeContextHuang zeros get_cached_value array enumerate dict sql_select defaultdict items get_one set _compute_prop_outer moderately_useful_helper BOXED 
GAUSSIAN get format _get_directory name PROPERTIES _compute_prop_inner info notice get_bad get list _sql_insert_curs _temporary_to_persistent_name _maybe_insert_pca _rename_files_batch append enum_from_json _roundabout get format sql_exec_curs propsdir str join enum_to_json makedirs relpath debug format rename sql_insert_curs round dump now flush load list format __name__ read debug append fail enumerate _find_tool items list _get_cache_directory _find_tool format _get_safe_command getenv warning info which split join quote map split append get_one _get_cache_directory getenv format warning info join list format info run randint append split dict environ devnull isnan max splitext list list le enumerate clear extend isnan isinstance isinf isinstance isinstance getattr isinstance join isspace all map partition append_common_all_constants send_tree_xml ElementTree Element Enum isinstance str sql_select get_one_or append_common_graph name classify fmtnum localized int sql_select_curs sql_select warning str Element append_common_all_constants send_tree_xml append_common_graph ElementTree _get_primary_axes_curs fmtnum sql_select_curs sql_select_curs _offer_picture_to_cache get_one_or end_headers _check_graphics_format str sql_select append send_response get send_header format _get_picture_from_cache _get_primary_axes flush write _get_conversion_command_from_svg call_graphstudy_tool graphstudy_manager call_image_magick OK len lower format _get_primary_axis_curs get_one_or sql_select_curs sqrt format token_hex debug rename dirname _get_cache_file_name makedirs debug format _get_cache_file_name items list Element _get_layout_ids_from_query _load_feature_rows rfc5322 set query parse_qs send_tree_xml graphstudy_manager attribute_projection ElementTree list query parse_qs send_tree_xml ElementTree set list all map get items list dict parse_bool startswith enum_from_json sorted _get_perf_stats_cooked items list min _get_perf_stats_raw mean append median sum max len append sql_select defaultdict frozenset query parse_qs _check_graphics_format get_bool_from_query sql_select format name _serve_plot query FIXED_COUNT parse_qs _get_bincount_from_query find_tool_lazily get_first_or _check_graphics_format list get_bool_from_query format find_tool_lazily _get_sigma_from_query _serve_plot query parse_qs median keys query parse_qs values _check_graphics_format list get_bool_from_query sorted name _serve_plot append format frozenset set lower items join dict filter graphstudy_manager send_header str flush write call_gnuplot encode send_response OK end_headers len int float get list all map lstrip list get_bool_from_query get_int_from_query tuple map sql_exec join str isinstance IntEnum name fmtnum upper append_child next isinstance IntEnum items list replace SubElement set join addprevious ProcessingInstruction int strip Id format runs stdout writeout cmd cross_validate print defaultdict format replace make_statistic dict keys len list stdev dict mean keys mean stdev dict exclude set include print_punctures print name sorted time message print_message store_info file getenv start Info update_progress_report load_info FIELDS join format fmttabs fmtmsg fmttdiff out _warning float print report format getenv report abspath expanduser getenv isabs which print format append join report quote prettytime sorted format all PASSED SKIPPED map FAILED report keys values stdin write_texdef print format_integer function invoke_argument_parser guess_archive_format cache serve_from_download abspath serve_from_cache urls extract 
format isfile output extract_files copyfile filter info upper format hexhash info format copyfile dirname info link enumerate makedirs format info callback read write getenv columns str lower fromhex partition strip partition list map print Tests rename values defaultdict load_rename_table name repr enum_to_json append pattern sum glob analysis_dict2tuple enum_from_json write_output bemoan output check_lazy_okay join sorted basename format_rfc5322 replace format items print stdev mean sqrt filter strip map split bemoan format getenv print basename basename format_rfc5322 test update sorted print_names keys dmnames list Id Tests map enum_from_json float enumerate split enum_from_json Tests partition replace corpus get_layer_dim_def fmtint round compile get_puncture_properties print_output escape add properties directory join sorted basename format_rfc5322 replace format items print set get_princomp input minor major run sqrt get log2 nan find_dependencies find_executable str columns tex get_tool_flags bib prep_deps find_tools deps idx prepare_dependencies run_toolchain find_deps isdir format quote getenv info which join format info split list format quote debug find_dependency dict append join isdir endswith isfile sep join items rstrip format quote relpath symlink dirname info sep makedirs format format_rfc5322 debug run_command splitext info range join str format time quote debug map returncode dict floor info environ run update sorted format quote md5 debug glob encode join sorted format quote glob escape info formatdate startswith split
<!-- -*- coding:utf-8; mode:markdown; -*- --> <!-- Copyright (C) 2018 Moritz Klammler <moritz.klammler@alumni.kit.edu> --> <!-- --> <!-- Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free --> <!-- Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no --> <!-- Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. --> <!-- --> <!-- A copy of the license is included in the section entitled "GNU Free Documentation License". --> # Aesthetic Discrimination of Graph Layouts This package contains all the code for [Moritz Klammler's master's thesis](http://klammler.eu/msc/)
19
7-B/yoco
['style transfer']
['Deep Photo Style Transfer']
server.py app.py wsgi.py segmentation/torch_neural_style_transfer.py segmentation/merge_image.py convert.py segmentation/evaluate.py main add_header coloring github sketch image_resize convert_to_line png2svg simplify sobel main get_palette get_arguments image_resize image_loader StyleLoss Normalization gram_matrix imshow main ContentLoss get_style_model_and_losses run_style_transfer get_input_optimizer get join convert_to_line save filename join float resize CV_8U Sobel addWeighted image_resize GaussianBlur sobel imread system fromarray data time evaluate model print load_lua gpu_usage mean unsqueeze png2svg save_image forward std sketch join simplify mkdir add_argument ArgumentParser range load SCHPDataset Compose output DataParallel restore_weight eval load_state_dict DataLoader get_palette cuda get_arguments makedirs ceil unsqueeze open squeeze pause clone unloader title t size mm view deepcopy children format isinstance Sequential StyleLoss MaxPool2d add_module Conv2d len ReLU ContentLoss BatchNorm2d to range append detach LBFGS print clamp_ get_style_model_and_losses step get_input_optimizer ioff image_loader ToPILImage clone ion save_image resize to run_style_transfer open
<h1 align="center"> <br> <a href="https://141.223.140.22"><img src="img/YOCO-logo.png" alt="YOCO" width="200"></a> <br> </h1> ## Contents - [**Weekly Record**](https://github.com/7-B/yoco/wiki/Development-Record) - [**Reference**](https://github.com/7-B/yoco/wiki/%EC%B0%B8%EA%B3%A0-%EC%9E%90%EB%A3%8C) - [**Demo Video**](https://www.youtube.com/watch?v=Zw67sh-4jSI) <h4 align="center"> Turns an image chosen by the user into a coloring book.</h4>
20
9ruddls3/CRNN_Pytorch
['optical character recognition', 'scene text recognition']
['An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition']
prepare.py train.py parms.py model/CRNN.py Preparing reset model_train BiDireRNN ToRNN CNN_block CRNN_model str list sort convert Compose tqdm abspath device append to listdir enumerate len hasattr isinstance xavier_normal fill_ Conv2d modules reset_parameters zero_ BatchNorm2d weight Linear train_model join backward print Variable train zero_grad range type loss_function IntTensor append dataset step init_hidden len
# CRNN_Pytorch This CRNN package was built around my own dataset and runs on Google Colab. The overall hyperparameters are declared in parms.py and can be tuned by editing that file. If you have string-image data whose filenames follow 'label.extension', you can convert the folder into a DataLoader with prepare.py. This package is for my custom dataset and is based on Google Colaboratory. If you use this for your custom dataset, you can convert your image folder to a torchvision DataLoader by using prepare.py * * * ## What is CRNN? A technique that extracts image features with convolutional layers and then treats the extracted features as sequential (time-series) data to predict the output. A clearer explanation of the concept is available at the link below
21
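To make the CRNN idea in the `9ruddls3/CRNN_Pytorch` entry concrete (convolutional features reshaped into a width-wise sequence and fed to a recurrent layer), here is a generic PyTorch sketch. It is a simplified illustration, not the exact model defined in `model/CRNN.py`; the class name `TinyCRNN` and all layer sizes are assumptions.

```python
# Generic CRNN sketch: CNN features -> width-wise sequence -> BiLSTM -> per-step logits.
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    def __init__(self, n_classes, img_h=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
        )
        feat_h = img_h // 4                      # height after two 2x2 poolings
        self.rnn = nn.LSTM(128 * feat_h, 256, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, n_classes)      # per-timestep class scores (e.g. for CTC)

    def forward(self, x):                        # x: (batch, 1, H, W)
        f = self.cnn(x)                          # (batch, C, H/4, W/4)
        b, c, h, w = f.shape
        seq = f.permute(0, 3, 1, 2).reshape(b, w, c * h)  # width becomes the time axis
        out, _ = self.rnn(seq)                   # (batch, W/4, 512)
        return self.fc(out)                      # logits per time step

logits = TinyCRNN(n_classes=37)(torch.randn(2, 1, 32, 128))
print(logits.shape)                              # torch.Size([2, 32, 37])
```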
A-Jacobson/Depth_in_The_Wild
['depth estimation']
['Single-Image Depth Perception in the Wild']
train.py datasets.py models.py train_utils.py criterion.py RelativeDepthLoss NYUDepth HourGlass ConvReluBN InceptionS InceptionL main validate AverageMeter save_checkpoint _fit_epoch prep_img fit NYUDepth load show state_dict plot xlabel ylabel RMSprop parameters save_checkpoint load_state_dict RelativeDepthLoss legend HourGlass cuda fit update format criterion model Variable backward AverageMeter zero_grad set_description avg tqdm_notebook train step cuda list format print DataLoader tqdm_notebook _fit_epoch range len update criterion model Variable AverageMeter eval DataLoader cuda dict copyfile save
# Depth in The Wild PyTorch implementation of Single-Image Depth Perception in the Wild https://arxiv.org/pdf/1604.03901.pdf - [x] Data Loader for NYUDepth - [x] Architecture - [x] Custom Criterion - [x] Train on Small sample - [x] Tweak Architecture - [ ] Fully Train Model (310/~500 epochs) - [ ] Validate Results ## Network Architecture:
22
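The "Custom Criterion" checklist item in the `A-Jacobson/Depth_in_The_Wild` entry refers to the relative-depth ranking loss of Chen et al. (2016). The standalone sketch below is written from the paper's loss (ranking term for ordered pairs, squared difference for equal pairs); the function name, sign convention, and tensor layout are assumptions and may differ from the repository's `RelativeDepthLoss` in `criterion.py`.

```python
# Sketch of the relative-depth ranking loss over sampled point pairs.
import torch

def relative_depth_loss(z_a, z_b, relation):
    """z_a, z_b: predicted depths at two sampled points (1-D tensors, same length).
    relation: +1 / -1 for ordered pairs (assumed here: +1 means A is deeper than B),
    0 for pairs labelled as equally deep."""
    diff = z_a - z_b
    ordered = relation != 0
    # log(1 + exp(-r * (z_A - z_B))) for ordered pairs
    rank_term = torch.log1p(torch.exp(-relation[ordered].float() * diff[ordered]))
    # (z_A - z_B)^2 for pairs judged equal
    equal_term = diff[~ordered] ** 2
    return torch.cat([rank_term, equal_term]).mean()

# Toy usage.
z_a = torch.tensor([2.0, 1.0, 0.5])
z_b = torch.tensor([1.0, 1.2, 0.5])
rel = torch.tensor([1, -1, 0])
print(relative_depth_loss(z_a, z_b, rel))
```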
A-ZHANG1/PSENet
['optical character recognition', 'scene text detection', 'curved text detection']
['Shape Robust Text Detection with Progressive Scale Expansion Network', 'Shape Robust Text Detection with Progressive Scale Expansion Network']
util/statistic.py util/tf.py pypse.py util/event.py train_ic15.py models/pvanet.py metrics.py util/feature.py util/proc.py test_ctw1500.py util/rand.py util/t.py util/neighbour.py util/caffe_.py models/__init__.py util/dtype.py dataset/icdar2015_loader.py util/ml.py util/str_.py utils.py util/test.py util/log.py test_ic15.py pse/.ycm_extra_conf.py pse/__init__.py util/misc.py eval/ic15/rrc_evaluation_funcs_v2.py pse/__main__.py util/__init__.py util/logger.py dataset/ctw1500_test_loader.py models/pvanet_1.py util/mask.py eval/ic15/rrc_evaluation_funcs_v1.py util/url.py dataset/ctw1500_loader.py eval/ctw1500/eval_ctw1500.py eval/ctw1500/file_util.py util/thread_.py models/fpn_resnet.py util/np.py util/dec.py util/io_.py util/mod.py train_ctw1500.py dataset/icdar2015_test_loader.py eval/ic15/rrc_evaluation_funcs.py eval/ic15/script.py util/cmd.py util/img.py dataset/__init__.py util/plt.py eval/ic15/file_util.py runningScore pse debug test write_result_as_txt extend_3c polygon_from_points debug test write_result_as_txt extend_3c polygon_from_points cal_kernel_score dice_loss ohem_batch adjust_learning_rate save_checkpoint ohem_single main cal_text_score train cal_kernel_score dice_loss ohem_batch adjust_learning_rate save_checkpoint ohem_single main cal_text_score train MultiCropEnsemble initvars get_img random_crop shrink get_bboxes random_rotate random_scale dist random_horizontal_flip scale CTW1500Loader perimeter get_img CTW1500TestLoader scale IC15Loader get_img random_crop shrink get_bboxes random_rotate random_scale dist random_horizontal_flip scale perimeter get_img IC15TestLoader scale get_union get_gt get_pred get_intersection write_file_not_cover write_file read_file read_dir write_file_not_cover write_file read_file read_dir validate_point_inside_bounds load_zip_file_keys validate_clockwise_points validate_lines_in_file decode_utf8 print_help main_validation get_tl_line_values load_zip_file get_tl_line_values_from_file_contents validate_tl_line main_evaluation validate_point_inside_bounds load_zip_file_keys validate_clockwise_points validate_lines_in_file decode_utf8 print_help main_validation get_tl_line_values load_zip_file get_tl_line_values_from_file_contents validate_tl_line main_evaluation validate_point_inside_bounds load_zip_file_keys validate_clockwise_points validate_lines_in_file decode_utf8 print_help main_validation get_tl_line_values load_zip_file get_tl_line_values_from_file_contents validate_tl_line main_evaluation evaluation_imports evaluate_method default_evaluation_params validate_data ResNet resnet50 Bottleneck resnet152 conv3x3 resnet34 resnet18 BasicBlock resnet101 ConvBnAct pvanet mCReLU_residual Inception PVANetFeat CReLU mCReLU_base PVANet ConvBnAct pvanet mCReLU_residual Inception PVANetFeat CReLU mCReLU_base PVANet GetCompilationInfoForFile IsHeaderFile MakeRelativePathsInFlagsAbsolute FlagsForFile DirectoryOfThisScript pse get_data get_params draw_log cmd print_calling_in_short_for_tf timeit print_calling print_test print_calling_in_short is_tuple int is_number is_str cast is_list double wait_key hog get_contour_min_area_box blur imwrite get_rect_iou black get_value put_text bgr2rgb get_roi render_points bgr2gray get_contour_region_iou resize convex_hull draw_contours get_contour_rect_box get_shape set_value is_in_contour is_valid_jpg move_win get_contour_region_in_rect fill_bbox imshow apply_mask random_color_3 imread bilateral_blur find_contours points_to_contours maximize_win rect_area rgb2bgr contour_to_points ds_size points_to_contour eq_color 
find_two_level_contours get_wh average_blur rgb2gray get_contour_region_in_min_area_rect rotate_point_by_90 white filter2D is_white translate get_contour_area rectangle min_area_rect rect_perimeter get_rect_points rotate_about_center gaussian_blur circle get_dir search is_dir dump_mat read_h5_attrs exists get_filename cd join_path load_mat get_file_size get_absolute_path cat dir_mat dump_json create_h5 dump pwd copy is_path mkdir ls open_h5 load remove write_lines read_h5 make_parent_dir find_files read_lines get_date_str init_logger plot_overlap savefig Logger LoggerMonitor find_black_components find_white_components init_params AverageMeter mkdir_p get_mean_and_std kmeans try_import_by_name add_ancester_dir_to_path import_by_name is_main get_mod_by_name add_to_path load_mod_from_path n2 _in_image count_neighbours get_neighbours n1 n1_count n8 n2_count n4 norm2_squared smooth flatten empty_list norm2 sum_all angle_with_x has_infty sin arcsin norm1 eu_dist iterable is_2D shuffle cos_dist chi_squared_dist clone is_empty has_nan has_nan_or_infty show plot_solver_data line show_images get_random_line_style to_ROI draw imshow rectangle hist maximize_figure set_subtitle save_image get_pid kill wait_for_pool get_pool cpu_count set_proc_name ps_aux_grep shuffle normal randint sample D E join index_of find_all is_none_or_empty ends_with remove_all remove_invisible to_lowercase starts_with is_str int_array_to_str contains to_uppercase split replace_all add_noise crop_into get_latest_ckpt get_init_fn gpu_config Print is_gpu_available min_area_rect focal_loss_layer_initializer get_variable_names_in_checkpoint sum_gradients get_all_ckpts get_update_op get_variables_to_train focal_loss get_iter get_available_gpus wait_for_checkpoint get_current_thread ThreadPool create_and_start ProcessPool is_alive get_current_thread_name download argv cit get_count exit sit connectedComponents get transpose copy put shape Queue zeros range len reshape concatenate imwrite concatenate print makedirs append range len join_path makedirs write_lines append range enumerate len int T empty model pse sign DataLoader resize resnet152 cuda max RETR_TREE list OrderedDict shape load_state_dict resnet101 append range format min_kernel_area resnet50 synchronize findContours astype debug copy write_result_as_txt mean eval img_paths resume scale flush enumerate load items time uint8 drawContours binary_th CHAIN_APPROX_SIMPLE print Variable reshape float32 sigmoid parameters CTW1500TestLoader isfile zeros tuple pypse minAreaRect boxPoints IC15TestLoader numpy int sort min astype sum concatenate ohem_single append float numpy range sigmoid sum view mean update astype int32 get_scores numpy update astype int32 get_scores numpy model zero_grad runningScore cuda cal_kernel_score step append cal_text_score sum range update format size astype ohem_batch item float flush enumerate time criterion backward Variable print AverageMeter numpy len param_groups lr join save SGD DataLoader adjust_learning_rate save_checkpoint Logger resnet152 cuda hasattr load_state_dict resnet101 append module range pretrain resnet50 close lr resume CTW1500Loader checkpoint optimizer flush join load print parameters n_epoch train set_names makedirs summary_path IC15Loader SummaryWriter pvanet add_scalar fill_ isinstance out_channels Conv2d normal_ sqrt zero_ BatchNorm2d imread int asarray remove_all append read_lines split range copy len warpAffine random getRotationMatrix2D range len max resize max min choice resize array min where randint max range len range 
PyclipperOffset int JT_ROUND area min append AddPath array perimeter ET_CLOSEDPOLYGON Execute print append split append int asarray split area append sort walk replace read close open join makedirs close write open join makedirs close write open write exit group match namelist append ZipFile group match namelist append ZipFile decode BOM_UTF8 replace startswith encode validate_tl_line decode_utf8 replace split get_tl_line_values validate_point_inside_bounds int replace group match replace argsort append get_tl_line_values split update default_evaluation_params_fn validate_data_fn writestr list items write dumps close dict print_help evaluate_method_fn ZipFile makedirs update default_evaluation_params_fn validate_data_fn print exit dict validate_clockwise_points float load_zip_file validate_lines_in_file compute_ap area list decode_utf8 append polygon_from_points range import_module get_intersection_over_union load_zip_file empty get_pred get_tl_line_values_from_file_contents float items namedtuple int8 rectangle_to_polygon Rectangle get_intersection zeros len load_url ResNet load_state_dict load_url ResNet load_state_dict list ResNet load_url load_state_dict keys state_dict list ResNet load_url load_state_dict keys state_dict list ResNet load_url load_state_dict keys state_dict PVANet append join startswith IsHeaderFile compiler_flags_ exists compiler_flags_ GetCompilationInfoForFile compiler_working_dir_ MakeRelativePathsInFlagsAbsolute DirectoryOfThisScript cpse array net Solver isinstance append net Solver isinstance show int get_random_line_style plot print readlines smooth len contains eval get_absolute_path save_image plt legend append float open isinstance debug waitKey ord bgr2rgb get_absolute_path wait_key namedWindow isinstance destroyAllWindows move_win rgb2bgr WINDOW_NORMAL imread maximize_win get_absolute_path rgb2bgr make_parent_dir moveWindow setWindowProperty WND_PROP_FULLSCREEN enumerate get_shape get_shape min max drawContours boundingRect get_contour_rect_box minAreaRect BoxPoints int0 get_shape get_contour_rect_box warpAffine int BoxPoints transpose hstack getRotationMatrix2D dot get_roi minAreaRect points_to_contour black draw_contours shape to_contours draw_contours assert_equal asarray range GaussianBlur bilateralFilter putText int32 FONT_HERSHEY_SIMPLEX get_shape int tuple warpAffine get_wh float32 cos deg2rad getRotationMatrix2D dot sin abs array _get_area transpose _get_inter zeros range len findContours asarray copy findContours copy pointPolygonTest convexHull randint list asarray range zip minAreaRect empty points_to_contour get_absolute_path makedirs get_dir mkdir get_absolute_path get_dir mkdir get_absolute_path get_absolute_path get_absolute_path is_dir get_absolute_path expanduser startswith chdir get_absolute_path append listdir get_absolute_path ends_with get_absolute_path open get_absolute_path make_parent_dir get_absolute_path get_absolute_path get_absolute_path savemat make_parent_dir get_absolute_path getsize get_absolute_path get_absolute_path make_parent_dir get_absolute_path get_absolute_path get_absolute_path join_path extend ls is_dir get_absolute_path find_files append get_absolute_path make_parent_dir now setFormatter basicConfig print join_path addHandler make_parent_dir StreamHandler get_date_str Formatter setLevel asarray arange plot numbers enumerate len pop black set_root insert copy get_neighbours N4 shape get_new_root get_root append set_visited range is_visited print DataLoader div_ zeros range len normal constant isinstance 
kaiming_normal Conv2d bias modules BatchNorm2d weight Linear makedirs asarray warn flatten append enumerate insert join_path add_to_path get_dir __import__ import_by_name get_absolute_path get_filename append _in_image append _in_image append _in_image append _in_image norm2 zip shape asarray extend len pi asarray asarray reshape flatten sqrt shape range shape asarray has_infty has_nan enumerate len show asarray join_path flatten linspace figure save_image load show val_accuracies list plot training_losses val_losses training_accuracies figure legend range len Rectangle add_patch full_screen_toggle get_current_fig_manager add_line Line2D linspace show_images show set_title set_subtitle axis colorbar bgr2rgb imshow maximize_figure subplot2grid append save_image enumerate get_absolute_path savefig imsave make_parent_dir set_xlim set_ylim suptitle maximize_figure randint len Pool join close setproctitle print get_pid cmd append int cmd split flatten flatten append pop list tuple to_lowercase is_str enumerate list tuple to_lowercase is_str enumerate to_lowercase findall replace replace_all binomial list_local_devices get_checkpoint_state is_dir get_absolute_path model_checkpoint_path get_checkpoint_state all_model_checkpoint_paths int get_latest_ckpt is_none_or_empty latest_checkpoint print get_model_variables startswith info append extend get_collection TRAINABLE_VARIABLES get_latest_ckpt NewCheckpointReader get_variable_to_shape_map dtype set_shape py_func ConfigProto ones_like zeros_like sigmoid_cross_entropy_with_logits float32 where reduce_sum sigmoid pow cast stop_gradient name reduce_mean add_n histogram zip append scalar UPDATE_OPS get_collection start Thread setName print urlretrieve stat show_images get_count imwrite get_count imwrite asarray
# Shape Robust Text Detection with Progressive Scale Expansion Network ## Requirements * Python 2.7 * PyTorch v0.4.1+ * pyclipper * Polygon2 * OpenCV 3.4 (for the C++ version of pse) * opencv-python 3.4 ## Introduction Progressive Scale Expansion Network (PSENet) is a text detector that can robustly detect arbitrary-shaped text in natural scenes.
23
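The progressive scale expansion step behind the `A-ZHANG1/PSENet` entry (cf. `pypse.py` in its file list) grows the connected components of the smallest kernel outward, one kernel map at a time, with a breadth-first search that never lets two text instances merge. The version below is a simplified sketch of that idea; the function name and argument layout are assumptions, not the repository's exact `pse` interface.

```python
# Simplified progressive scale expansion: BFS label growth through successive kernels.
import numpy as np
from collections import deque

def progressive_scale_expansion(labels, kernels):
    """labels: int map of connected components from the smallest kernel (0 = background).
    kernels: binary masks ordered from the next-smallest kernel up to the full text map."""
    h, w = labels.shape
    for kernel in kernels:
        queue = deque(zip(*np.nonzero(labels)))          # seed BFS from labelled pixels
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and kernel[nr, nc] and labels[nr, nc] == 0:
                    labels[nr, nc] = labels[r, c]        # first claim wins, so instances never merge
                    queue.append((nr, nc))
    return labels

# Toy usage: two seeds expanding into one full-text mask without merging.
lab = np.zeros((5, 5), dtype=int); lab[2, 1] = 1; lab[2, 3] = 2
print(progressive_scale_expansion(lab, [np.ones((5, 5), dtype=bool)]))
```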
AADeLucia/gpt2-narrative-decoding
['response generation']
['Decoding Methods for Neural Narrative Generation']
data/count_length.py mturk_human_evaluation/fleiss.py mturk_human_evaluation/cohen.py evaluation.py generate_responses.py mturk_human_evaluation/format_generated_narratives_csv.py data/preproc.py mturk_human_evaluation/create_html_survey.py sentBERT distinct_2 distinct_1 clean_response parse_args parse_args batchify print_kappas create_option_list create_evaluation_display create_examples_display create_narrative_display get_idxs fleiss_kappa format_for_html add_argument ArgumentParser encode pdist set len split list extend zip split debug strip lower sub findall append enumerate device print format items list sub list keys enumerate shape float sum multiply update split enumerate set strip sub
# Decoding Methods for Neural Narrative Generation Alexandra DeLucia\*, Aaron Mueller\*, Xiang "Lisa" Li, João Sedoc This repository contains code for replicating the approach and results of the paper [Decoding Strategies for Neural Narrative Generation](https://arxiv.org/abs/2010.07375). ## Models Our GPT-2 Medium models fine-tuned on the medium and large datasets are available through the library. If you would like the GPT-2 Small model, please reach out to Alexandra DeLucia. ``` model = AutoModel.from_pretrained("aadelucia/GPT2_medium_narrative_finetuned_medium") ``` ## Preprocessing Our preprocessing script may be found in `data/preproc.py`. To replicate our setup, download the [writingPrompts dataset](https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz) and extract the .tar into the `data` folder. Then run `preproc.py`.
24
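Building on the `AADeLucia/gpt2-narrative-decoding` README's loading snippet, the sketch below generates a story continuation with nucleus (top-p) sampling, one of the decoding methods the paper studies. `AutoModelForCausalLM`, `AutoTokenizer`, and `generate` are standard `transformers` calls; the prompt format and the sampling settings are illustrative assumptions rather than the authors' reference configuration (see `generate_responses.py` for that).

```python
# Hedged usage sketch: top-p decoding with the fine-tuned checkpoint named in the README.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aadelucia/GPT2_medium_narrative_finetuned_medium"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumed writingPrompts-style prompt; the exact prompt formatting is not specified here.
prompt = "[WP] The last lighthouse keeper on Earth hears a knock at the door."
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    do_sample=True,                      # stochastic decoding
    top_p=0.9,                           # nucleus sampling threshold
    max_length=200,
    pad_token_id=tokenizer.eos_token_id, # GPT-2 has no pad token by default
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```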
AI-HPC-Research-Team/AIPerf
['automl']
['AIPerf: Automated machine learning as an AI-HPC benchmark']
examples/trials/network_morphism/cifar10/distributed_utils.py src/sdk/pynni/tests/models/pytorch_models/__init__.py tools/nni_cmd/command_utils.py tools/nni_cmd/url_utils.py tools/nni_annotation/testcase/usercode/nas.py src/nni_manager/training_service/test/mockedTrial.py src/sdk/pynni/tests/test_compressor.py src/sdk/pynni/setup.py src/sdk/pynni/tests/test_trial.py scripts/reports/save_log.py src/sdk/pynni/tests/test_protocol.py src/sdk/pynni/nni/networkmorphism_tuner/bayesian.py src/sdk/pynni/nni/networkmorphism_tuner/test_networkmorphism_tuner.py src/sdk/pynni/nni/nas_utils.py src/sdk/pynni/nni/hyperopt_tuner/hyperopt_tuner.py src/sdk/pynni/nni/trial.py src/sdk/pynni/tests/test_pruners.py src/sdk/pynni/nni/networkmorphism_tuner/networkmorphism_tuner.py src/sdk/pynni/nni/protocol.py examples/trials/network_morphism/imagenet/dataset.py src/sdk/pynni/nni/msg_dispatcher.py src/sdk/pynni/nni/networkmorphism_tuner/layer_transformer.py tools/nni_annotation/testcase/usercode/non_annotation/bar.py src/sdk/pynni/nni/utils.py setup.py tools/nni_cmd/updater.py tools/nni_cmd/config_schema.py examples/trials/network_morphism/cifar10/cifar10_keras_multigpu_ps.py tools/nni_gpu_tool/gpu_metrics_collector.py tools/nni_annotation/testcase/usercode/dir/simple.py src/sdk/pynni/nni/platform/local.py src/sdk/pynni/nni/common.py src/sdk/pynni/nni/tuner.py src/sdk/pynni/tests/__init__.py src/nni_manager/core/test/assessor.py src/sdk/pynni/nni/networkmorphism_tuner/layers.py tools/nni_annotation/testcase/annotated/dir/simple.py scripts/reports/score.py tools/nni_cmd/constants.py tools/nni_cmd/launcher.py src/sdk/pycli/setup.py tools/nni_annotation/testcase/usercode/non_annotation/foo.py scripts/reports/gen_report.py src/sdk/pynni/tests/models/pytorch_models/naive.py scripts/deploy/data_transmission/sshRemoteCmd.py tools/nni_cmd/launcher_utils.py examples/trials/network_morphism/cifar10/cifar10_pytorch.py src/sdk/pynni/nni/platform/test.py tools/nni_annotation/examples/mnist_generated.py tools/nni_cmd/nnictl_utils.py tools/nni_annotation/testcase/usercode/mnist.py scripts/build_data/preprocess_imagenet_validation_data.py src/sdk/pynni/tests/test_utils.py scripts/reports/report.py examples/trials/network_morphism/imagenet/imagenet_preprocessing.py src/sdk/pynni/tests/test_smartparam.py tools/nni_annotation/examples/mnist_without_annotation.py tools/nni_trial_tool/url_utils.py tools/nni_annotation/__init__.py tools/nni_trial_tool/constants.py examples/trials/network_morphism/imagenet/resource_monitor.py tools/nni_trial_tool/hdfsClientUtility.py examples/trials/network_morphism/imagenet/utils.py src/sdk/pynni/tests/test_assessor.py src/sdk/pynni/tests/test_msg_dispatcher.py tools/nni_trial_tool/test/test_hdfsClientUtility.py src/sdk/pynni/tests/models/pytorch_models/nested.py src/nni_manager/core/test/dummy_tuner.py src/sdk/pynni/nni/constants.py src/sdk/pynni/nni/networkmorphism_tuner/nn.py tools/nni_annotation/search_space_generator.py tools/nni_annotation/examples/mnist_with_annotation.py src/sdk/pynni/nni/platform/standalone.py src/sdk/pynni/nni/parameter_expressions.py src/sdk/pynni/tests/models/pytorch_models/mutable_scope.py src/sdk/pynni/nni/msg_dispatcher_base.py examples/trials/network_morphism/cifar10/cifar10_pytorch_allreduce.py src/sdk/pycli/nnicli/__init__.py src/nni_manager/core/test/dummy_assessor.py src/sdk/pynni/nni/__init__.py tools/nni_cmd/rest_utils.py src/sdk/pynni/nni/platform/__init__.py tools/setup.py src/sdk/pynni/nni/recoverable.py 
examples/trials/network_morphism/cifar10/resource_monitor.py scripts/build_data/build_imagenet_data.py tools/nni_cmd/nnictl.py tools/nni_trial_tool/log_utils.py tools/nni_cmd/ssh_utils.py tools/nni_cmd/common_utils.py tools/nni_annotation/testcase/annotated/non_annotation/bar.py examples/trials/network_morphism/imagenet/demo.py src/sdk/pynni/nni/assessor.py scripts/reports/profiler.py tools/nni_annotation/testcase/annotated/non_annotation/foo.py tools/nni_cmd/package_management.py tools/nni_cmd/tensorboard_utils.py tools/nni_annotation/testcase/annotated/handwrite.py src/sdk/pycli/nnicli/nni_client.py src/sdk/pynni/nni/networkmorphism_tuner/graph.py src/sdk/pynni/nni/networkmorphism_tuner/utils.py src/sdk/pynni/nni/__main__.py src/sdk/pynni/nni/smartparam.py tools/nni_trial_tool/rest_utils.py src/sdk/pynni/tests/test_builtin_tuners.py src/sdk/pynni/nni/networkmorphism_tuner/graph_transformer.py tools/nni_annotation/specific_code_generator.py tools/nni_annotation/testcase/annotated/mnist.py examples/trials/network_morphism/imagenet/imagenet_train.py tools/nni_cmd/config_utils.py src/sdk/pynni/nni/env_vars.py tools/nni_annotation/code_generator.py tools/nni_trial_tool/trial_keeper.py tools/nni_annotation/test_annotation.py src/sdk/pynni/nni/hyperopt_tuner/test_hyperopt_tuner.py src/sdk/pynni/tests/test_nas.py examples/trials/network_morphism/cifar10/utils.py tools/nni_annotation/testcase/annotated/nas.py read build_graph_from_json get_args train_eval parse_rev_args SendMetrics Epoch_num_record build_graph_from_json get_args parse_rev_args test train average_gradients broadcast_params DistModule dist_init record_device_info get_args test_record_device_info socket_client record_net_info write_file Cutout get_mean_and_std EarlyStopping data_transforms_mnist data_transforms_cifar10 init_params _parse_example_proto parse_record process_record_dataset get_filenames parse_rev_args build_graph_from_json get_args train_eval _aspect_preserving_resize _decode_crop_and_flip _central_crop _smallest_size_at_least _mean_image_subtraction _resize_image preprocess_image build_graph_from_json get_args train_eval parse_rev_args SendMetrics record_device_info get_args test_record_device_info record_net_info write_file MinGpuMem EarlyStopping predict_acc trial_activity ImageCoder _convert_to_example _process_image_files _process_image _int64_feature _process_dataset _is_cmyk _find_image_files _build_bounding_box_lookup _find_human_readable_labels _build_synset_lookup _find_image_bounding_boxes _bytes_feature _float_feature main _process_image_files_batch _is_png ssh_cmd read_file get_args GenPerfdata GenReport cal_add_flops cal_conv_flops cal_trial_flops_per_image cal_bn_flops cal_relu_flops cal_avgpool_flops profiler cal_dense_flops cal_maxpool_flops cal_softmax_flops main get_args formatnum main_grid display_log save_log cal_report_results find_max_acc find_all_trials find_startime conversion_time process_log receive send DummyAssessor DummyTuner user_code sdk_send_data get_trial_job list_trial_jobs export_data get_job_metrics get_experiment_profile get_experiment_status stop_nni set_endpoint _check_endpoint start_nni _http_succeed version get_job_statistics _nni_rest_get _create_process read Assessor AssessResult enable_multi_phase multi_thread_enabled enable_multi_thread init_standalone_logger init_logger multi_phase_enabled _LoggerFileWrapper _load_env_vars _create_parameter_id _pack_parameter _sort_history MsgDispatcher MsgDispatcherBase enas_mode darts_mode reload_tensorflow_variables training_update 
_decompose_general_key _get_layer_and_inputs_from_tuner oneshot_mode convert_nas_search_space rewrite_nas_space darts_training classic_mode _construct_general_key qloguniform normal choice qlognormal uniform qnormal lognormal randint loguniform quniform CommandType receive send Recoverable qloguniform normal mutable_layer choice qlognormal uniform qnormal lognormal function_choice randint loguniform quniform report_intermediate_result get_sequence_id get_experiment_id get_next_parameter report_final_result get_current_parameter get_trial_id Tuner json2space split_index OptimizeMode extract_scalar_reward MetricType json2parameter extract_scalar_history init_dispatcher_logger NodeType convert_dict2tuple NoMoreTrialError _create_tuner augment_classargs _create_assessor create_customized_class_instance _run_advisor create_builtin_class_instance main json2space json2parameter _add_index HyperoptTuner json2vals HyperoptTunerTestCase skip_connections_distance ReverseElem layers_distance vector_distance job SearchTree skip_connection_distance Elem layer_distance edit_distance bourgain_embedding_matrix contain_old attribute_difference BayesianOptimizer IncrementalGaussianProcess edit_distance_matrix contain NetworkDescriptor graph_to_onnx json_to_graph JSONModel Graph TorchModel TfModel ONNXModel onnx_to_graph Node KerasModel graph_to_json to_skip_connection_graph to_deeper_graph to_wider_graph to_skip_connection_graph2 create_new_layer legal_graph to_deeper_graph2 transform StubFlatten is_layer get_conv_class get_global_avg_pooling_class StubLayer get_dropout_class set_stub_weight_to_torch StubConv2d StubGlobalPooling1d StubConv3d StubBatchNormalization1d StubBatchNormalization2d get_batch_norm_class TorchConcatenate StubWeightBiasLayer get_pooling_class GlobalAvgPool1d set_keras_weight_to_stub StubBatchNormalization3d StubGlobalPooling set_stub_weight_to_keras StubConv StubDense StubDropout2d tf_dropout layer_width set_torch_weight_to_stub StubPooling3d StubConcatenate StubDropout StubGlobalPooling2d StubDropout1d StubDropout3d TorchAdd StubBatchNormalization TorchFlatten GlobalAvgPool2d StubAdd StubSoftmax StubPooling1d keras_dropout to_real_tf_layer StubReLU StubInput layer_description_builder layer_description_extractor StubGlobalPooling3d StubPooling2d get_n_dim StubAggregateLayer AvgPool to_real_keras_layer StubPooling GlobalAvgPool3d StubConv1d wider_pre_conv add_noise deeper_conv_block dense_to_deeper_block init_bn_weight wider_next_conv init_dense_weight wider_pre_dense wider_bn init_conv_weight wider_next_dense NetworkMorphismTuner NetworkGenerator MlpGenerator CnnGenerator NetworkMorphismTestCase Constant send_metric get_sequence_id get_experiment_id get_next_parameter request_next_parameter get_trial_id send_metric get_sequence_id get_experiment_id get_next_parameter get_trial_id send_metric get_sequence_id get_experiment_id get_next_parameter get_last_metric get_trial_id init_params AssessorTestCase _restore_io NaiveAssessor _reverse_io BuiltinTunersTestCase get_tf_model TorchModel CompressorTestCase tf2 _restore_io MsgDispatcherTestCase _reverse_io NaiveTuner NasTestCase ProtocolTestCase _prepare_receive _prepare_send PrunerTestCase Model validate_sparsity pruners_test foo bar SmartParamTestCase conv2D TrialTestCase UtilsTestCase Node Cell SpaceWithMutableScope Layer NaiveSearchSpace NestedSpace MutableOp Transformer test_variable_equal parse FuncReplacer parse_annotation replace_variable_node parse_annotation_function parse_nni_variable convert_args_to_dict parse_nni_function 
parse_annotation_mutable_layers replace_function_node make_lambda SearchSpaceGenerator generate Transformer test_variable_equal parse FuncReplacer parse_annotation replace_variable_node parse_annotation_function parse_nni_variable convert_args_to_dict parse_nni_function parse_annotation_mutable_layers replace_function_node make_lambda AnnotationTestCase generate_search_space _generate_file_search_space expand_annotations _generate_specific_file _expand_file_annotations bias_variable max_pool download_mnist_retry conv2d generate_defualt_params main avg_pool MnistNetwork weight_variable bias_variable max_pool download_mnist_retry conv2d generate_defualt_params main avg_pool MnistNetwork weight_variable bias_variable max_pool download_mnist_retry conv2d generate_defualt_params main avg_pool MnistNetwork weight_variable max_pool bias_variable generate_default_params max_pool conv2d main avg_pool MnistNetwork weight_variable add_one add_three add_four main add_two max_pool bar bias_variable generate_default_params max_pool conv2d main avg_pool MnistNetwork weight_variable add_one add_three add_four main add_two max_pool bar check_output_command install_requirements_command _get_pip_install kill_command install_package_command get_nni_installation_path print_normal check_tensorboard_version print_error get_python_dir get_json_content get_yml_content detect_port print_warning detect_process get_user setChoice setType setNumberRange setPathCheck Config Experiments set_platform_config set_remote_config set_frameworkcontroller_config set_dlts_config set_pai_config manage_stopped_experiment set_kubeflow_config get_log_path start_rest_server set_pai_yarn_config launch_experiment print_log_content start_monitor resume_experiment setNNIManagerIp create_experiment set_trial_config set_local_config set_experiment view_experiment validate_search_space_content validate_machine_list parse_tuner_content validate_kubeflow_operators parse_path validate_common_content parse_advisor_content parse_assessor_content parse_time validate_pai_config_path validate_annotation_content validate_customized_file validate_pai_trial_conifg parse_relative_path validate_all_content expand_path parse_args nni_info update_experiment convert_time_stamp_to_date trial_ls search_space_auto_gen hdfs_clean trial_kill log_trial stop_experiment experiment_list webui_url get_experiment_time remote_clean local_clean get_experiment_status get_config_filename experiment_clean list_experiment get_time_interval get_experiment_port get_config webui_nas export_trials_data check_rest check_experiment_id log_internal experiment_status trial_codegen parse_ids show_experiment_info log_stdout monitor_experiment platform_clean get_platform_dir set_monitor log_stderr package_install package_show process_install check_rest_server rest_get rest_post rest_put check_rest_server_quick rest_delete check_response remove_remote_directory create_ssh_sftp_client copy_remote_directory_to_local check_environment format_tensorboard_log_path start_tensorboard get_path_list parse_log_path copy_data_from_remote start_tensorboard_process stop_tensorboard import_data_to_restful_server load_search_space update_concurrency import_data get_query_type validate_file validate_dispatcher update_searchspace validate_digit update_duration update_trialnum update_experiment_profile trial_job_id_url import_data_url experiment_url cluster_metadata_url export_data_url tensorboard_url check_status_url get_local_urls trial_jobs_url parse_nvidia_smi_result main check_ready_to_run 
gen_empty_gpu_metric copyHdfsFileToLocal copyFileToHdfs copyHdfsDirectoryToLocal copyDirectoryToHdfs StdOutputType nni_log RemoteLogger LogType PipeLogReader NNIRestLogHanlder rest_post rest_put rest_delete rest_get download_parameter fetch_parameter_file check_version get_hdfs_client is_multi_phase main_loop trial_keeper_help_info gen_send_version_url gen_parameter_meta_url gen_send_stdout_url HDFSClientUtilityTest add_argument ArgumentParser debug produce_keras_model json_to_graph operation_history build_graph_from_json str multi_gpu_model debug to_categorical astype write close load_data split open len SGD open str Adam RMSprop append Adagrad debug Adamax close get_trial_id compile int Adadelta evaluate fit write split len produce_torch_model Adagrad Adadelta Adamax Adam SGD RMSprop parameters DataLoader data_transforms_cifar10 DataParallel CIFAR10 to CrossEntropyLoss criterion backward debug zero_grad step max net enumerate debug eval all_reduce parameters requires_grad data list values broadcast int str init_process_group set_device write close get_world_size set_start_method device_count get_rank open sleep str get_recv bytes_recv get_send write close sleep bytes_sent open close write open str join cpu_percent strip readlines sleep write_file used virtual_memory print sendto decode sleep Cutout cutout Compose cutout_length append Cutout cutout Compose cutout_length append print DataLoader div_ zeros range len normal constant isinstance kaiming_normal Conv2d bias modules BatchNorm2d weight Linear append join listdir str map_and_batch shuffle apply repeat prefetch parse_single_example one_hot cast int32 _parse_example_proto preprocess_image cast load close dumps produce_tf_model open MirroredStrategy batch_size TFRecordDataset warmup_1 rand train_data_dir warmup_3 parse_rev_args flat_map from_tensor_slices val_data_dir LearningRateScheduler get_filenames warmup_2 process_record_dataset min slave epochs sample_distorted_bounding_box constant random_flip_left_right decode_and_crop_jpeg extract_jpeg_shape stack unstack shape expand_dims cast float32 minimum int32 cast float32 shape _smallest_size_at_least _aspect_preserving_resize _decode_crop_and_flip multiply cast _central_crop _resize_image decode_jpeg int list get_experiment_id range float makedirs predict_acc ModelCheckpoint int enumerate str ppid_gpu_info system get_md5 sleep ppid_cpu_info max list str print subtract square curve_fit sqrt append sum array logfun len list readlines round min Example read print _is_cmyk cmyk_to_rgb png_to_jpeg _is_png decode_jpeg int join _convert_to_example arange _process_image TFRecordWriter print astype write SerializeToString close output_directory range flush len int ImageCoder Thread join print astype Coordinator start append range flush len seed list print Glob extend shuffle range len append append basename print _find_image_files _process_image_files _find_human_readable_labels labels_file readlines split print readlines append float split _process_dataset imagenet_metadata_file print train_directory validation_directory _build_synset_lookup train_shards output_directory validation_shards print exit str AutoAddPolicy connect close set_missing_host_key_policy SSHClient exec_command load int cal_conv_flops cal_bn_flops cal_add_flops cal_relu_flops close cal_avgpool_flops len cal_dense_flops cal_maxpool_flops cal_softmax_flops range open append join cal_trial_flops_per_image listdir subplot join plot xlabel grid close ylabel set_visible savefig figure set_major_formatter tick_params array 
cal_report_results display_log id main_grid join system mkdir range len join readlines close extend len append range open fromtimestamp join mktime strptime timestamp timedelta fromtimestamp min timestamp timedelta isfile range len int format arange find_max_acc e find_startime profiler log append float abs range len join list sorted isfile find_all_trials min dict process_log zip listdir keys range values len encode write flush int read decode getenv join len format range sdk_send_data sleep get format status_code _check_endpoint _http_succeed print readline strip Popen split split get setFormatter getLogger addHandler StreamHandler localtime Formatter _LoggerFileWrapper setLevel INFO open stdout setFormatter basicConfig getLogger addHandler StreamHandler Formatter setLevel INFO append enumerate acquire release list keys _get_layer_and_inputs_from_tuner get_next_parameter list format case len boolean_mask dict range get_variable get_shape list dropout get_next_parameter reduce_sum values len list format softmax add_n append values get_variable load sorted list format _get_layer_and_inputs_from_tuner _decompose_general_key get_next_parameter add set index warning split MomentumOptimizer compute_gradients apply_gradients run darts_training reload_tensorflow_variables get_current_parameter info get update list items isinstance error len dict warning info _construct_general_key debug acquire debug CommandType Process time int recv_pyobj strip start history acquire popen send_pyobj release to_json send_metric send_pyobj recv_pyobj isinstance isinstance isinstance join NNI_LOG_DIRECTORY NNI_LOG_LEVEL init_logger list isinstance append keys enumerate deepcopy list str isinstance len dict append randint keys enumerate items list class_constructor import_module getattr augment_classargs get class_constructor import_module getattr append get decode _create_tuner enable_multi_phase debug add_argument dumps enable_multi_thread parse_known_args _create_assessor MsgDispatcher loads ArgumentParser _run_advisor _on_exit run create_customized_class_instance get create_builtin_class_instance run create_customized_class_instance get create_builtin_class_instance create_customized_class_instance get create_builtin_class_instance deepcopy str choice dict randint log clip str list isinstance keys enumerate items list isinstance dict enumerate is_layer min layer_distance zeros range len abs zeros skip_connection_distance enumerate layers_distance layers edit_distance zeros enumerate array seed int list min log choice ceil array range append len join partial close map Pool range len produce_onnx_model save Graph parsing_onnx_model produce_json_model dumps items list int Graph tuple layer_description_builder dict Node loads append list is_layer to_wider_model units wide_layer_ids sample filters sorted sample skip_connection_layer_ids to_add_skip_model append to_concat_skip_model range len is_layer sample skip_connection_layer_ids to_add_skip_model append to_concat_skip_model range len layer_class is_layer shape DENSE_DROPOUT_RATE StubDense n_dim create_new_layer to_deeper_model deep_layer_ids sample layer_class get_conv_class StubReLU n_dim to_deeper_model shape deep_layer_ids2 get_layers_id sample get_batch_norm_class skip_connections extract_descriptor deepcopy N_NEIGHBOURS to_skip_connection_graph2 to_wider_graph to_deeper_graph2 randrange append range shape len is_layer shape len is_layer input list output isinstance startswith list isinstance is_layer import_weights import_weights_keras export_weights 
export_weights_keras isinstance tuple get_n_dim zeros filters range set_weights units eye zeros StubDense set_weights add_noise concatenate units copy input_units append randint get_weights StubDense range set_weights concatenate set_weights kernel_size filters copy get_n_dim stride append randint get_weights input_channel enumerate list concatenate tuple filters get_n_dim shape zeros get_weights input_channel set_weights num_features concatenate tuple copy get_n_dim zip get_weights set_weights int concatenate units copy input_units zeros get_weights StubDense set_weights ptp flatten uniform shape units zeros eye set_weights tuple get_n_dim zeros filters range set_weights num_features set_weights to_json send_metric load join request_next_parameter sleep open name print write close run encode flush open warning error loads info deepcopy loads seek seek Sequential compile BytesIO BytesIO format bias_mask numel item append remove format compress randn print model backward zero_grad export_model SGD parameters Model v step long cross_entropy strip elts Attribute values str Name Num append Dict parse value Str List Assign Call keywords func zip keys args parse parse_annotation value strip parse_annotation_function convert_args_to_dict append keyword convert_args_to_dict strip Str parse_annotation_function str Str strip append Dict args arguments AST list items isinstance parse_nni_variable visit parse_nni_function Transformer Import insert visit body enumerate visit parse SearchSpaceGenerator index update join replace endswith _generate_file_search_space walk len join replace endswith makedirs copyfile walk len truncated_normal constant range add_graph mkdtemp FileWriter download_mnist_retry get_default_graph build_network MnistNetwork read_data_sets mutable_layer report_intermediate_result report_final_result sleep format exit exists print_error call Process CTRL_BREAK_EVENT send_signal call _get_pip_install call _get_pip_install append print print print Process socket close connect AF_INET SOCK_STREAM join getusersitepackages try_installation_path_sequentially print_error exit getenv isfile join print check_output_command print_normal get_log_path strip system popen get_nni_installation_path join print_normal print_error exit detect_port get_log_path format print text rest_put dumps get_log_path dict cluster_metadata_url check_response get str text rest_put dumps dict cluster_metadata_url get_log_path get str isinstance text rest_put dumps get_log_path dict setNNIManagerIp cluster_metadata_url range len text rest_put dumps dict cluster_metadata_url get_log_path text rest_put dumps dict setNNIManagerIp cluster_metadata_url get_log_path text rest_put dumps dict setNNIManagerIp cluster_metadata_url get_log_path text rest_put dumps dict setNNIManagerIp cluster_metadata_url get_log_path text rest_put dumps dict setNNIManagerIp cluster_metadata_url get_log_path text rest_put dumps dict setNNIManagerIp cluster_metadata_url get_log_path get str format print_error rest_post text dumps get_log_path dict experiment_url append check_response pid print_normal set_frameworkcontroller_config format set_pai_yarn_config set_remote_config set_local_config print_error exit kill_command set_dlts_config set_pai_config set_kubeflow_config Config decode pid set_platform_config print_error port get_local_urls get_log_path start_rest_server add_experiment check_rest_server generate_search_space print_normal print_log_content mkdtemp exit start_monitor foreground get debug expand_annotations gettempdir 
get_json_content kill_command set_config join Experiments print dumps set_experiment get_user makedirs Config join config launch_experiment print_error exit get_yml_content port abspath sample validate_all_content ascii_letters digits set_config update_experiment Experiments Config format print_normal join launch_experiment print_error get_config exit id port get_all_experiments sample ascii_letters digits set_config manage_stopped_experiment manage_stopped_experiment get expanduser get join print_normal exit print_error get dirname parse_relative_path expand_path range len load get list print_error exit values open get exit print_error get validate list print_error __contains__ exit keys range len exit print_error validate_customized_file get validate_customized_file validate_customized_file get validate_search_space_content exit print_error exit print_error get format print_error exit get_yml_content get format print_error exit validate_pai_config_path print_warning get parse_tuner_content parse_path print_error parse_advisor_content exit parse_assessor_content validate_pai_trial_conifg parse_time validate_annotation_content validate_common_content print version add_argument add_parser func ArgumentParser set_defaults add_subparsers convert_time_stamp_to_date rest_get text experiment_url loads check_rest_server_quick Config Experiments list isinstance get_experiment_time get_config get_experiment_status get_all_experiments keys update_experiment Experiments list print_normal get isinstance print_error print exit id get_all_experiments append remove_experiment keys update_experiment Experiments list print_normal all join isinstance print_error endswith print exit id startswith get_all_experiments append remove_experiment keys Experiments print_error exit check_experiment_id get_all_experiments Experiments print_error exit check_experiment_id get_all_experiments get str strftime Config print_normal get_config check_rest_server_quick get_config_filename Config Experiments update_experiment print_normal time str parse_ids get_config exit system strftime kill_command localtime get_all_experiments print_warning set_config Config convert_time_stamp_to_date rest_get print_error print get_config text dumps check_rest_server_quick loads get_config_filename trial_jobs_url enumerate Config trial_job_id_url print_error print get_config text trial_id check_rest_server_quick get_config_filename rest_delete Config print_error expand_annotations exit trial_id get_config_filename check_experiment_id print_warning Config convert_time_stamp_to_date rest_get print_error print get_config text dumps check_rest_server_quick experiment_url loads get_config_filename Config print_normal print get_config text dumps check_rest_server_quick loads get_config_filename print join check_output_command get_config_filename log_internal log_internal Config get print_normal format rest_get print_error get_config text exit trial_id check_rest_server_quick loads get_config_filename append trial_jobs_url Config print get_config_filename get_all_config Config join print_normal format get_config get_config_filename get_nni_installation_path join print_normal run rmtree print_normal format get join str print_normal format create_ssh_sftp_client remove_remote_directory join print_normal format HdfsClient group delete match print_warning compile Config print_error id hdfs_clean remove_experiment str list print_normal all remote_clean local_clean exit get_all_experiments input append home get format get_config eval print_warning keys 
Experiments join print get str format append get config update_experiment print_normal format input print_error print remote_clean exit get_yml_content eval hdfs_clean abspath get_platform_dir print_warning update_experiment Experiments list print_normal all print exit get_all_experiments append print_warning keys seconds strptime mktime update_experiment Experiments list convert_time_stamp_to_date rest_get print text exit check_rest_server_quick loads get_all_experiments append print_warning trial_jobs_url keys enumerate update_experiment print_normal format show_experiment_info system exit get_experiment_status sleep kill_command time exit set_monitor print_error Config rest_get print_error get_config text exit check_rest_server_quick type loads get_config_filename append export_data_url join print_normal format trial_dir getcwd wait file print_warning expanduser exists join install_requirements_command print_error name process_install print join list keys put post get delete check_status_url range rest_get sleep check_status_url rest_get join listdir makedirs Transport connect from_private_key_file from_transport check_environment join remove listdir rmdir print_error search group trial_id exit append get join print_normal copy_remote_directory_to_local create_ssh_sftp_client append enumerate get join print_normal print_error parse_log_path copy_data_from_remote exit append enumerate str join print_normal pid print_error get_config exit port detect_port append get_local_urls set_config Config Experiments print_normal print_error get_config call check_experiment_id get_all_experiments set_config Config Experiments join get_path_list rest_get print_error get_config text exit gettempdir check_rest_server_quick loads check_experiment_id get_all_experiments trial_jobs_url start_tensorboard_process makedirs print_warning get_config exit print_error get_json_content dumps Config rest_get print_error get_config text rest_put dumps get_query_type check_rest_server_quick experiment_url loads get_config_filename print_normal load_search_space print_error get_experiment_port validate_file filename update_experiment_profile int value print_normal print_error get_experiment_port validate_digit update_experiment_profile int value print_normal print_error get_experiment_port parse_time update_experiment_profile int value print_normal print_error validate_digit update_experiment_profile load_search_space print_error get_experiment_port validate_file validate_dispatcher filename import_data_to_restful_server Config print_error rest_post get_config import_data_url check_rest_server_quick get_config_filename items list format address append pop int remove list check_output map getpid splitlines append split check_output exit parse_nvidia_smi_result split getElementsByTagName umask parseString umask join pathSuffix list_status copyHdfsFileToLocal makedirs remove format nni_log length copy_to_local get_file_status Info join mkdirs listdir isdir copy_from_local delete exists isdir print format now value hdfs_host webhdfs_path HdfsClient pai_hdfs_host pid nnimanager_port Stdout copyHdfsDirectoryToLocal Popen copyDirectoryToHdfs getcwd exit nnimanager_ip get_hdfs_client sleep Info nni_hdfs_exp_dir format hdfs_output_dir trial_command pai_hdfs_output_dir poll nni_log RemoteLogger log_collection makedirs split get_pipelog_reader print Error format nni_log rest_post gen_send_version_url group nnimanager_port nnimanager_ip dumps Warning version Info _exit str join format nni_log basename get_hdfs_client Debug 
copyHdfsFileToLocal listdir start FetchThread
![](https://github.com/AI-HPC-Research-Team/AIPerf/blob/master/logo.JPG) ![](https://github.com/AI-HPC-Research-Team/AIPerf/blob/master/logo_PCL.jpg) ![](https://github.com/AI-HPC-Research-Team/AIPerf/blob/master/logo_THU.jpg)
**<font size=4>Developed by: Peng Cheng Laboratory (PCL) and Tsinghua University (THU)</font>**
**<font size=4>Special thanks to Prof. Yong Dou and his team at the National University of Defense Technology for their valuable feedback and support</font>**
# <span id="head1">AIPerf Benchmark v1.0</span>
## <span id="head2">Benchmark design</span>
**For the design rationale, technical details, and measured results of AIPerf, please refer to the paper: https://arxiv.org/abs/2008.07141**
The AIPerf benchmark is built on Microsoft's open-source NNI framework. It uses automated machine learning (AutoML) as its workload, performing neural architecture search with network morphism and hyperparameter search with TPE.
## <span id="head3">Benchmark installation</span>
**This guide covers running the benchmark in a container environment.**
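The dependency listing for this entry includes NNI's trial-side SDK calls (`get_next_parameter`, `report_intermediate_result`, `report_final_result`). As a rough, hedged sketch of the NNI convention AIPerf builds on (not AIPerf's actual trial code, and with training replaced by a placeholder loop), a trial script typically reports metrics like this:

```python
import nni

def main():
    # The tuner (e.g. NNI's network-morphism or TPE tuner) supplies this trial's configuration;
    # a real trial would build and train a model from it.
    params = nni.get_next_parameter()
    print("trial parameters:", params)

    # Placeholder training loop so the sketch stays self-contained.
    accuracy = 0.0
    for epoch in range(3):
        accuracy += 0.1
        # Intermediate metrics let NNI assessors stop unpromising trials early.
        nni.report_intermediate_result(accuracy)

    # The final metric is what the tuner optimizes across trials.
    nni.report_final_result(accuracy)

if __name__ == "__main__":
    main()
```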
25
AI-secure/VeriGauge
['autonomous driving']
['SoK: Certified Robustness for Deep Neural Networks']
convex_adversarial/examples/cifar.py recurjac/task_lipschitz.py convex_adversarial/examples/fashion_mnist.py cnn_cert/setup_mnist.py cnn_cert/train_resnet.py cnn_cert/CLEVER/setup_mnist.py recurjac/bound_fastlin_fastlip.py basic/core.py cnn_cert/CLEVER/nlayer_model.py cnn_cert/CLEVER/randsphere.py models/cnn_cert_model.py basic/stats.py recurjac/activation_functions.py recurjac/activation_functions_quad.py crown_ibp/datasets.py convex_adversarial/convex_adversarial/dual_inputs.py experiments/params.py basic/milp.py crown_ibp/converter/setup_mnist.py crown_ibp/eps_scheduler.py recurjac/bound_crown_quad.py adaptor/crown_adaptor.py experiments/evaluate.py cnn_cert/CLEVER/collect_gradients.py cnn_cert/cnn_to_mlp.py crown_ibp/train.py crown_ibp/convex_adversarial/dual_network.py recurjac/main.py eran/tf_verify/eran.py recurjac/bound_spectral.py cnn_cert/ReluplexCav2017/setup_mnist.py experiments/data_analyzer.py models/test_model.py cnn_cert/CLEVER/setup_cifar.py crown_ibp/convex_adversarial/dual.py eran/testing/check_models.py crown_ibp/converter/utils.py adaptor/cnncert_adaptor.py crown_ibp/config.py convex_adversarial/convex_adversarial/dual.py models/exp_model.py cnn_cert/utils.py models/crown_ibp_model.py cnn_cert/fastlin/setup_tinyimagenet.py convex_adversarial/examples/svhn.py convex_adversarial/examples/mnist_epsilon.py recurjac/task_landscape.py recurjac/utils.py eran/tf_verify/read_net_file.py cnn_cert/fastlin/setup_cifar.py convex_adversarial/examples/trainer.py cnn_cert/pymain.py convex_adversarial/examples/primal.py eran/tf_verify/tensorflow_translator.py eran/tf_verify/deeppoly_nodes.py adaptor/basic_adaptor.py convex_adversarial/convex_adversarial/__init__.py crown_ibp/convex_adversarial/utils.py cnn_cert/fastlin/get_bounds_ours.py convex_adversarial/examples/har.py crown_ibp/pgd.py recurjac/cachefy.py eran/tf_verify/optimizer.py main.py basic/utils.py model.py basic/models.py adaptor/lpdual_adaptor.py crown_ibp/converter/setup_cifar.py recurjac/converter/setup_mnist.py cnn_cert/Attacks/l1_attack.py recurjac/converter/mnist_cifar_models.py eran/tf_verify/onnx_translator.py recurjac/converter/keras2torch.py cnn_cert/CLEVER/clever.py cnn_cert/CLEVER/estimate_gradient_norm.py recurjac/train_nlayer.py convex_adversarial/convex_adversarial/dual_network.py crown_ibp/convex_adversarial/__init__.py convex_adversarial/examples/attacks.py cnn_cert/setup_tinyimagenet.py eran/tf_verify/ai_milp.py recurjac/mnist_cifar_models.py cnn_cert/Attacks/l2_attack.py experiments/trainer.py recurjac/setup_mnist.py convex_adversarial/examples/problems.py eran/tf_verify/read_zonotope_file.py crown_ibp/model_defs_gowal.py convex_adversarial/examples/mnist.py recurjac/task_robustness.py cnn_cert/fastlin/save_nlayer_weights.py cnn_cert/CLEVER/shmemarray.py recurjac/converter/utils.py crown_ibp/converter/keras2torch.py cnn_cert/train_cnn.py cnn_cert/CLEVER/CNNModel.py cnn_cert/fastlin/get_bounds_ours_sparse.py cnn_cert/activations.py models/zoo.py recurjac/parse_landscape.py constants.py cnn_cert/Attacks/li_attack.py crown_ibp/argparser.py cnn_cert/train_lenet.py models/recurjac_model.py crown_ibp/eval.py recurjac/setup_cifar.py crown_ibp/converter/mnist_cifar_models.py cnn_cert/cnn_bounds_full_core.py convex_adversarial/convex_adversarial/utils.py eran/tf_verify/analyzer.py convex_adversarial/examples/cifar_evaluate.py basic/intervalbound.py cnn_cert/setup_cifar.py basic/model_prepare.py recurjac/bound_crown.py basic/percysdp.py cnn_cert/cnn_bounds_full.py crown_ibp/converter.py 
crown_ibp/convex_adversarial/dual_inputs.py convex_adversarial/convex_adversarial/dual_layers.py recurjac/bound_interval.py adaptor/recurjac_adaptor.py cnn_cert/Attacks/cw_attack.py recurjac/bound_recurjac.py cnn_cert/fastlin/main_sparse.py cnn_cert/fastlin/utils.py cnn_cert/train_nlayer.py eran/tf_verify/config.py cnn_cert/ReluplexCav2017/to_nnet.py basic/lowerbound.py cnn_cert/fastlin/get_bounds_others.py eran/tf_verify/__main__.py recurjac/parse_lipschitz.py recurjac/converter/torch2keras.py eran/tf_verify/constraints.py adaptor/eran_adaptor.py recurjac/bound_base.py crown_ibp/converter/torch2keras.py recurjac/converter/setup_cifar.py adaptor/adaptor.py basic/components.py cnn_cert/fastlin/main.py convex_adversarial/examples/runtime.py cnn_cert/CLEVER/process_log.py crown_ibp/attack.py crown_ibp/bound_layers.py eran/data/create_zonotope.py experiments/model_pretransform.py eran/tf_verify/eranlayers.py eran/tf_verify/deepzono_nodes.py cnn_cert/CLEVER/defense.py cnn_cert/fastlin/setup_mnist.py eran/tf_verify/krelu.py experiments/model_summary.py datasets.py crown_ibp/convex_adversarial/dual_layers.py crown_ibp/model_defs.py basic/fastmilp.py _mnist _imagenet get_num_classes get_dataset _cifar10 NormalizeLayer get_input_shape pr load_model Adaptor BasicAdaptor RealAdaptorBase CWAdaptor FastLinIBPAdaptor PGDAdaptor VerifierAdaptor FastMILPAdaptor PercySDPAdaptor MILPAdaptor IBPAdaptor CleanAdaptor FazlybSDPAdaptor CNNCertBase fn FastLinSparseAdaptor LPAllAdaptor CNNCertAdaptor check_consistency sequential_torch2keras FullCrownAdaptor CrownAdaptorBase CrownIBPAdaptor IBPAdaptor AI2Adaptor RefineZonoAdaptor KReluAdaptor DeepZonoAdaptor ERANBase DeepPolyAdaptor init_domain RefinePolyAdaptor ZicoDualAdaptor FastLipAdaptor RecurJacAdaptor FastLinAdaptor SpectralAdaptor torch2keras RecurJacBase RecurBaseModel FastMILPVerifier FastIntervalBound IntervalFastLinBound BoundCalculator IntervalBound FastLinBound MILPVerifier to_var load_model LinfPGDAttack PercySDP data_loaders atand ada_linear_bounds atan_linear_bounds sigmoidut tanhlt tanhd tanhut sigmoidd tanh_linear_bounds tanhid sigmoidid sigmoidlt relu_linear_bounds atanut tanh atanid sigmoid atanlt atan sigmoid_linear_bounds UL_basic_block_2_bound UL_conv_bound UL_basic_block_bound UL_pool_bound Model UL_relu_bound compute_bounds find_output_bounds loss warmup run pool conv_full lower_bound_conv pool_linear_bounds upper_bound_conv fn conv_bound lower_bound_pool conv compute_bounds run upper_bound_pool find_output_bounds CNNModel warmup conv_bound_full fn get_weights printlog convert printlog run_all_general run_LP run_cnn command run_CLEVER run_global run run_attack run_all_relu TwoLayerCIFARModel CIFARModel CIFAR load_batch MNIST extract_data TwoLayerMNISTModel extract_labels MNISTModel MadryMNISTModel load_images tinyImagenet train train train_cnn_7layer Residual ResidualStart2 train ResidualStart Residual2 show l0_dist generate_data l2_dist l1_dist linf_dist loss cw_attack EADL1 CarliniL2 CarliniLi get_lipschitz_estimate fmin_with_reg parse_filename get_best_weibull_fit fit_and_test plot_weibull CNNModel collect_gradients defend_png defend_jpeg defend_tv bregman defend_crop make_defend_quilt defend_reduce defend_none fn EstimateLipschitz NLayerModel readDebugLog2array gen_table l1_samples randsphere randsign l2_samples linf_samples TwoLayerCIFARModel CIFARModel CIFAR load_batch MNIST extract_data TwoLayerMNISTModel extract_labels MNISTModel address_of_buffer NpShmemArray ShmemRawArray ShmemBufferWrapper get_layer_bound_LP spectral_bound 
fast_compute_max_grad_norm_zfw compute_max_grad_norm fast_compute_max_grad_norm_2layer_zfw fast_compute_max_grad_norm_2layer_z fast_compute_max_grad_norm_z compute_worst_bound ReLU fast_compute_max_grad_norm_2layer fast_compute_max_grad_norm_2layer_next fast_compute_max_grad_norm_2layer_fw inc_counter get_weights_list get_layer_bound fast_compute_max_grad_norm fast_compute_max_grad_norm_2layer_next_fw fast_compute_max_grad_norm_2layer_next_zfw init_layer_bound_relax_matrix_huan fast_compute_max_grad_norm_fw fast_compute_max_grad_norm_2layer_next_linear fast_compute_max_grad_norm_linear fast_compute_max_grad_norm_2layer_next_z get_layer_bound_relax_matrix_huan_optimized compute_worst_bound_multi get_layer_bound_relax_matrix_huan_optimized compute_worst_bound init_layer_bound_relax_matrix_huan ReLU get_weights_list get_layer_bound NLayerModel loss CNNModel TwoLayerCIFARModel CIFARModel CIFAR load_batch MNIST extract_data TwoLayerMNISTModel extract_labels MNISTModel MadryMNISTModel load_images tinyImagenet show l0_dist generate_data l2_dist l1_dist linf_dist MNIST extract_data TwoLayerMNISTModel extract_labels MNISTModel MadryMNISTModel fn get_weights_biases nnet DualObject DualLayer InfBallProjBounded InfBall select_input InfBallBounded L2Ball InfBallProj L2BallProj DualLinear conv_transpose2d Identity DualDense DualBatchNorm2d select_layer conv2d DualReshape DualConv2d unbatch DualReLU batch DualReLUProj robust_loss_parallel RobustBounds DualNetBounds robust_loss InputSequential DualNetwork get_epsilon epsilon_from_model DenseSequential GR Dense GL p_lower full_bias p_upper _fgs pgd mean _pgd fgs attack Flatten select_model select_model select_model f har_500_250_model cifar_model_resnet cifar_model mnist_loaders mnist_model_large Flatten mnist_model_wide mnist_500 har_500_model har_resnet_model replace_10_with_0 svhn_model cifar_loaders cifar_model_large har_500_250_100_model fashion_mnist_loaders args2kwargs argparser_evaluate svhn_loaders model_wide mnist_model_deep argparser mnist_model model_deep har_loaders Meter gpu_mem train_baseline evaluate_baseline sampler_robust_cascade AverageMeter evaluate_robust_cascade evaluate_robust evaluate_madry train_robust robust_loss_cascade train_madry isfloat isint argparser BoundSequential BoundDataParallel BoundFlatten BoundReLU BoundConv2d BoundLinear config_dataloader config_modelloader_and_convert2mlp get_file_close get_path config_modelloader update_dict load_config get_model_config main get_stats mnist_loaders svhn_loaders cifar_loaders EpsilonScheduler main model_cnn_4layer convert_conv2d_dense DenseConv2d model_cnn_1layer model_mlp_any model_mlp_uniform load_checkpoint_to_mlpany save_checkpoint model_cnn_10layer model_cnn_3layer_fixed model_cnn_2layer IBP_large IBP_debug pgd output mean _pgd attack main AverageMeter Logger Train Flatten NLayerModel get_model_meta_real get_model_meta CIFAR load_batch MNIST extract_data extract_labels model_mlp_any model_mlp_uniform Flatten show l0_dist generate_data l2_dist l1_dist linf_dist binary_search DualObject DualLayer InfBallProjBounded InfBall select_input InfBallBounded L2Ball InfBallProj L2BallProj DualLinear conv_transpose2d Identity DualFeatureMask2D DualDense DualBatchNorm2d select_layer conv2d DualReshape DualConv2d unbatch DualReLU batch DualReLUProj robust_loss_parallel RobustBounds DualNetBounds robust_loss InputSequential DualNetwork get_epsilon epsilon_from_model DenseSequential GR Dense GL p_lower full_bias p_upper get_tests normalize get_out_tensors handle_affine 
get_bounds_for_layer_with_milp handle_conv create_model verify_network_with_milp solver_call handle_residual Cache handle_relu handle_maxpool Analyzer layers config Device get_constraints_for_dominant_label label_index clean_string get_constraints_from_file DeeppolyMaxpool DeeppolyGather calc_bounds DeeppolyNode DeeppolyReluNodeFirst DeeppolyTanhNodeIntermediate DeeppolyInput DeeppolySigmoidNodeIntermediate DeeppolySubNodeIntermediate DeeppolyMulNodeIntermediate DeeppolyTanhNodeLast DeeppolyConv2dNodeFirst DeeppolyMulNodeFirst DeeppolyConv2dNodeIntermediate DeeppolyReluNodeIntermediate add_input_output_information_deeppoly DeeppolySubNodeFirst DeeppolyReluNodeLast DeeppolySigmoidNodeFirst DeeppolyTanhNodeFirst DeeppolyResadd DeeppolySigmoidNodeLast DeepzonoAffine DeepzonoMatmul DeepzonoNonlinearity DeepzonoTanh add_bounds DeepzonoSigmoid DeepzonoSub DeepzonoInputZonotope add_dimensions DeepzonoConvbias remove_dimensions DeepzonoResadd DeepzonoRelu add_input_output_information refine_relu_with_solver_bounds DeepzonoAdd DeepzonoMaxpool DeepzonoDuplicate DeepzonoConv DeepzonoMul DeepzonoGather get_xpp DeepzonoInput ERAN eran_input eran_dense eran_conv2d_without_activation eran_maxpool eran_activation eran_resnet_dense tensorshape_to_intlist eran_affine eran_resnet_conv2d eran_reshape eran_conv2d generate_linexpr0 make_krelu_obj grouping_heuristic encode_krelu_cons Krelu prepare_model ONNXTranslator reshape_nhwc nchw_to_nhwc onnxshape_to_intlist Optimizer permutation product numel runRepl parseVec read_tensorflow_net extract_mean myConst read_onnx_net extract_std read_zonotope calculate_padding tensorshape_to_intlist TFTranslator isnetworkfile get_tests denormalize parse_input_box normalize_poly normalize init_domain str2bool show_ascii_spec texify_percentage nice_print read_verify_data read_radius_data verify_texify texify_entire radius_texify texify_radius read_file texify_time _async_raise terminate verify_wrapper radius_wrapper try_load_weight nicenum struc_summary tex_output eliminate_redundent param_summary main calc_tot_torun load_from_tf load_cnn_cert_model sequential_keras2torch crown_ibp_mnist_cnn_3layer_fixed_kernel_5_width_16 load_model ibp_cifar_large ibp_mnist_large crown_ibp_mnist_cnn_2layer_width_2 crown_ibp_cifar_cnn_3layer_fixed_kernel_3_width_16 crown_ibp_cifar_cnn_2layer_width_2 mnist_conv_small in_cells two_layer_fc20 conv_super three_layer_fc100 mnist_conv_large mnist_conv_medium seven_layer_fc1024 cifar_conv_medium cifar_conv_large cifar_conv_small try_load_weight get_input_shape load_keras_model abstract_load_keras_model get_normalize_layer test_mnist_tiny test_mnist test_cifar10 test_cifar10_tiny model_cnn_4layer IBP_large model_cnn_1layer model_cnn_2layer model_mlp_any IBP_debug model_mlp_uniform cifar_model mnist_model model_cnn_10layer model_cnn_3layer_fixed mnist_model_tiny cifar_model_tiny Flatten relu_ub_p leaky_relu_lb_p act_sigmoid_d act_sigmoid relu_ub_n leaky_relu_ub_n general_ub_pn_scalar find_d_LB act_arctan plot_line leaky_relu_lb_n general_lb_n leaky_relu_ub_p act_tanh general_ub_n relu_ub_pn leaky_relu_ub_pn find_d_UB act_arctan_d general_ub_p relu_lb_n general_lb_pn general_lb_pn_scalar general_lb_p act_tanh_d relu_lb_pn leaky_relu_lb_pn relu_lb_p general_ub_pn get_lower_quad_parameterized_lgtu get_lower_quad_parameterized get_area plot_parameterized get_best_upper_quad get_upper_quad_parameterized get_best_lower_quad get_lower_quad_parameterized_lltu compute_bounds_integral compute_bounds ReLU get_weights_list myprint crown_adaptive_bound 
init_crown_bounds get_leaky_relu_bounds compile_crown_bounds get_general_bounds crown_general_bound get_relu_bounds proj_l2 proj_l1 proj_li crown_quad_bound get_layer_bound_quad_both fastlip_2layer_leaky fastlip_2layer fastlin_bound fastlip_bound fastlip_nplus1_layer init_fastlin_bounds fastlip_leaky_bound fastlip_nplus1_layer_leaky interval_bound recurjac_bound_2layer sigmoid_grad_bounds recurjac_bound_next_with_grad_bounds compile_recurjac_bounds recurjac_bound_backward_improved_1x1 recurjac_bound_wrapper recurjac_bound_forward_improved get_final_bound recurjac_bound_launcher recurjac_bound relu_grad_bounds last_layer_abs_sum construct_function form_grad_bounds recurjac_bound_next recurjac_bound_backward_improved recurjac_bound_forward_improved_1x1 spectral_bound add_cache remove_cache NLayerModel get_model_meta_real get_model_meta parse_result_line get_fig_info parse_single gen_title gen_legend gen_filename parse_single get_color parse_result_line get_linestyle gen_title get_fig_info CIFAR load_batch MNIST extract_data extract_labels task task task train show l0_dist generate_data l2_dist l1_dist linf_dist binary_search Flatten NLayerModel get_model_meta_real get_model_meta CIFAR load_batch MNIST extract_data extract_labels model_mlp_any model_mlp_uniform Flatten show l0_dist generate_data l2_dist l1_dist linf_dist binary_search ToTensor Compose list Compose ToTensor extend append join list ToTensor Compose extend Normalize append Graph Session norm print transpose random eval expand_dims numpy predict list set is_available cuda MNIST DataLoader Normalize CIFAR10 actd range act actd range act actd range act actd range act actd range act actd range act zeros range shape zeros range shape act actut shape actlt zeros actd range act actut shape actlt zeros actd range act actut shape actlt zeros actd range minimum maximum copy zeros range linear_bounds asarray UL_conv_bound shape UL_relu_bound linear_bounds asarray UL_conv_bound shape UL_relu_bound minimum zeros_like maximum copy range minimum maximum copy zeros range UL_basic_block_2_bound asarray linear_bounds ones reshape UL_pool_bound UL_basic_block_bound pool_linear_bounds copy UL_conv_bound shape UL_relu_bound zeros range conv_bound_full append compute_bounds range len list asarray print reshape sizes ascontiguousarray pads strides types func append clear_session pads set_random_seed flatten argmax log seed str list exp load_model squeeze Model types append range asarray inf CIFAR format generate_data sizes astype MNIST minimum time print reshape float32 strides maximum tinyImagenet find_output_bounds warmup len zeros range zeros max range abs sqrt shape conv zeros sum max range zeros range conv_full abs sqrt shape zeros sum max range minimum linear_bounds conv_full maximum shape zeros range minimum linear_bounds conv_full maximum shape zeros range inf min astype float32 where dot zeros sum max range len minimum conv_full pool_linear_bounds maximum shape zeros range minimum conv_full pool_linear_bounds maximum shape zeros range lower_bound_conv maximum upper_bound_conv lower_bound_pool upper_bound_pool tuple conv_bound fn CNNModel recompile print load_model print append zeros prod range MNIST printlog format CIFAR print compile Lambda Sequential len SGD add shape Dense save get_weights range Flatten set_weights command printlog split printlog format run_cnn run append len append run_cnn printlog format printlog format run append len collect_gradients printlog append format printlog format run append len append printlog cw_attack format 
read transpose fromstring append range seed join dtype permutation format print transpose float32 zeros listdir array open Sequential train_labels save train_data Flatten len Adam add format Lambda Dense load_weights zip compile print summary Conv2D BatchNormalization fit relu SGD shape MaxPooling2D Sequential SGD MaxPooling2D train_labels save train_data Flatten Adam add shape Dropout format Dense load_weights compile print fit summary Conv2D len Model Input fromarray join print squeeze flatten around save range flatten argmax str list predictor squeeze argmin len savetxt append range format set sample join print extend eye randint array makedirs seed MNIST time CIFAR load_model generate_data norm_fn set_random_seed tinyImagenet average attack kstest fit format subplots plot xlabel min close ylabel pdf title hist savefig linspace legend max nanmax format partial print map copy index fmin zip append amin amax print get_best_weibull_fit int basename splitext split get_lipschitz_estimate set_random_seed Pool seed sorted squeeze append format glob close ConfigProto flush join print parse_filename system EstimateLipschitz loadmat len uint8 astype float32 fromarray uint8 BytesIO astype float32 save fromarray uint8 BytesIO astype float32 save fromarray asarray inf shape sqrt zeros range uniform load reshape float32 placeholder matmul shape top_k to_float strip close open append split format write randint sum randn sort uniform empty reshape reshape l2_samples NpShmemArray float32 shape linf_samples transform_new range l1_samples clip get isinstance from_address get_address ShmemBufferWrapper __init__ sizeof len int itemsize ShmemRawArray c_char prod int norm format inf print ones shape zip append expand_dims range len addVar setParam MAXIMIZE str LinExpr Model objVal append range update format astype empty_like MINIMIZE time optimize print addConstr float32 setObjective reset len format print transpose U ascontiguousarray shape append get_weights enumerate sum norm empty_like dot abs max range ones empty range len sum norm zeros_like astype maximum float32 copy empty_like dot abs max range zeros_like astype float32 maximum dot zeros range zeros fast_compute_max_grad_norm_2layer range zeros_like maximum zeros abs fast_compute_max_grad_norm_2layer range fast_compute_max_grad_norm_2layer_next zeros_like astype float32 maximum dot zeros range astype float32 maximum dot zeros range list zeros_like fast_compute_max_grad_norm_2layer_fw maximum fast_compute_max_grad_norm_2layer_next_fw zeros abs range minimum zeros maximum zeros_like minimum zeros maximum range ones maximum fast_compute_max_grad_norm_2layer_next_z fast_compute_max_grad_norm_2layer_z zeros abs range minimum zeros maximum zeros_like minimum zeros maximum range list ones maximum fast_compute_max_grad_norm_2layer_zfw fast_compute_max_grad_norm_2layer_next_zfw zeros abs range zeros range fast_compute_max_grad_norm_2layer_next_linear zeros_like abs maximum dot sqrt zeros sum range len norm inc_counter zeros empty range format print compute_worst_bound any linspace range tuple compute_max_grad_norm ReLU ones append expand_dims range get_layer_bound format inf astype fast_compute_max_grad_norm empty_like init_layer_bound_relax_matrix_huan ord get_layer_bound_LP print min float32 get_layer_bound_relax_matrix_huan_optimized zeros len csr_matrix A shape zeros empty A csr_matrix multiply csr_matrix T load_model append range len append get_weights_biases len list detach isinstance zip print Conv2d ReLU BatchNorm2d Linear append append size item 
DataParallel set_start unsqueeze ReLU cuda is_cuda l append bounds size select_layer reversed InputSequential zip item enumerate T isinstance Variable any out_features int isinstance numel Conv2d unsqueeze Linear get_epsilon isinstance print numel unsqueeze append sum l time max p_upper p_lower data model backward Variable size zero_grad Adam sign sum data model Variable backward size clamp zero_grad Adam sign sum range format atk print squeeze mean parameters append robust_loss_batch enumerate cuda format print model_factor mnist_loaders unsqueeze DualNetBounds Sequential Conv2d ReLU Flatten Linear Sequential group ReLU Flatten Linear MNIST DataLoader MNIST DataLoader Sequential ReLU Flatten Linear Sequential Conv2d ReLU Flatten Linear Sequential Conv2d ReLU Flatten Linear DataLoader SVHN cuda TensorDataset float long DataLoader Sequential ReLU Linear Sequential ReLU Linear Sequential ReLU Linear Sequential DenseSequential Dense ReLU Linear DataLoader Normalize CIFAR10 isinstance Sequential out_channels Conv2d normal_ sqrt modules zero_ ReLU Flatten Linear isinstance Sequential out_channels Conv2d normal_ sqrt modules zero_ ReLU Flatten Linear isinstance block DenseSequential extend out_channels Conv2d normal_ sqrt modules zero_ range sorted format print add_argument prefix ArgumentParser vars parse_args epsilon cuda_ids format print add_argument ArgumentParser parse_args cuda_ids new_query clip_grad_norm_ zero_grad robust_loss squeeze update format size item flush enumerate time backward Variable print AverageMeter parameters empty_cache train step len update time format model Variable print squeeze AverageMeter set_grad_enabled size robust_loss eval item empty_cache sum enumerate flush len update time format model Variable backward size AverageMeter zero_grad step print train sum enumerate flush len update time format model Variable print size AverageMeter eval sum enumerate flush len data model zero_grad sign Adam sum range update format size item flush enumerate time backward Variable clamp print AverageMeter train step len update time format model Variable print size AverageMeter _pgd eval item sum enumerate flush len data model size type_as rl item append float sum enumerate format print Variable set_grad_enabled SubsetRandomSampler DataLoader cat robust_loss_cascade dataset cuda enumerate append len update time format Variable print squeeze AverageMeter set_grad_enabled size eval item robust_loss_cascade empty_cache enumerate flush len float int seed int isint isfloat overrides seterr manual_seed float split glob sorted format print get items list format isinstance print config str format deepcopy replace isinstance list print items overrides_dict model_subset update_dict append range path_prefix get_file_close join makedirs load list model_class isinstance print get_path keys import_module getattr load_state_dict append cuda load convert_conv2d_dense list model_class format isinstance print get_path load_checkpoint_to_mlpany dense_m import_module getattr load_state_dict save_checkpoint append zeros keys load_config config_modelloader_and_convert2mlp view size sqrt dataset len list Subset dataset range list Subset range get_stats list print Subset Normalize range config_dataloader Logger argmax max log cuda open config_modelloader argmin append update format get_path mean zip float deepcopy print convert min median array append Linear ReLU Sequential Conv2d ReLU Flatten Linear Sequential Conv2d ReLU Flatten Linear Sequential Conv2d ReLU Flatten Linear Sequential Conv2d ReLU Flatten 
Linear Sequential Conv2d ReLU Flatten Linear children list DenseConv2d isinstance kernel_size out_channels in_channels Conv2d bias __class__ append weight children list isinstance dense_w bias dense_bias OrderedDict save weight Linear join format replace endswith print model_mlp_any shape load_state_dict append state_dict Sequential Conv2d ReLU Flatten Linear Sequential Conv2d ReLU Flatten Linear enumerate view output get_eps batch_size model zero_grad scatter_ unsqueeze tensor abs max log is_cuda cuda view new f step expand scatter sum range detach get update format inf LongTensor size mean eval item DualNetwork enumerate int time backward reshape AverageMeter min named_parameters cpu zeros train numpy std get_eps BoundDataParallel batch_size SGD MultiStepLR save Train dataset str StepLR Adam ceil EpsilonScheduler get_lr range get inf int join time update_dict parameters step std len print get_model_meta_real model_loader add set loads append to_json enumerate print format cond str read format reader open range len addTerms str addConstant addVar LinExpr addConstr EQUAL append prod range len str GREATER_EQUAL LESS_EQUAL addVar LinExpr addConstr addGenConstrIndicator EQUAL append float range len addTerms str addConstant addVar LinExpr addConstr EQUAL append range len str addConstant addVar LinExpr addConstr EQUAL append range len addTerms str cons addConstant GREATER_EQUAL LESS_EQUAL addVar LinExpr addConstr addGenConstrIndicator varsid EQUAL append max range enumerate len addConstant maxpool_counter addVar setParam abs addTerms str handle_affine LinExpr FeasibilityTol Model append prod range handle_conv EQUAL conv_counter residual_counter ffn_counter addConstr handle_residual handle_relu handle_maxpool len max optimize LinExpr min copy setObjective reset MINIMIZE MAXIMIZE flush update sorted calc_layerno create_model zip print numlayer Threads reset TimeLimit numproc setParam append zeros range enumerate len optimize create_model LinExpr numlayer setObjective setParam MINIMIZE len CPU append range int clean_string readlines len label_index startswith append range split calc_layerno elina_interval_array_free get_num_neurons_in_layer encode_krelu_cons box_for_layer append reduce elina_dimchange_alloc elina_dimchange_free elina_abstract0_add_dimensions elina_dimchange_init range elina_dimchange_alloc elina_dimchange_free elina_dimchange_init range elina_abstract0_remove_dimensions reduce calc_layerno abstract_information get_bounds_for_layer_with_milp specUB numlayer relu_zono specLB encode_krelu_cons relu_zono_layerwise append relu_zono_refined range elina_interval_array_free realdim elina_abstract0_to_box append intdim elina_abstract0_dimension Variable reshape reduce matmul tensorshape_to_intlist shape Variable int conv2d sigmoid tanh relu append append eran_affine eran_conv2d_without_activation int eran_conv2d_without_activation add eran_conv2d range eran_dense reduce add tensorshape_to_intlist shape eran_affine eran_reshape range ElinaDim coeff cst pointer elina_scalar_set_double ELINA_LINEXPR_SPARSE elina_linexpr0_alloc zip enumerate scalar len sorted pop ElinaDim sorted asarray int combinations time str print debug remove_dimensions min grouping_heuristic sqrt append add_dimensions len list reshape transpose prod initializer floor to_array attribute list name multiply node add shape i input append expand_dims ceil prod range onnxshape_to_intlist concatenate ints insert subtract copy take float s int t zeros nchw_to_nhwc len replace float64 search group split zeros range len float64 
search group split zeros range len zeros range float64 concat myConst bias_add open transpose placeholder matmul add runRepl conv2d readline relu extract_mean extract_std tanh print reshape max_pool sigmoid parseVec load node check_model int read array split int max splitext product replace findall append split print range zeros range len zeros range len int list float dict append bool format print copy range enumerate print str format enumerate print str format enumerate list dict read_file append sum keys exists enumerate list dict read_file append sum keys exists enumerate print format enumerate verify time int verify time calc_radius PyThreadState_SetAsyncExc type c_long py_object _async_raise ident load print load_state_dict str remove sum isinstance model print apply OrderedDict abs prod len isinstance print refresh ReLU Linear range len enumerate print join data CallableModelWrapper set_grad_enabled zero_grad sign getpass DataLoader ReLU tf_model try_load_weight get_input_shape Session list run convert_pytorch_model_to_tf placeholder get_dataset load_state_dict generate sum save_with_configs size close eval item flush Linear load enumerate isinstance backward Variable clamp makedirs float32 ProjectedGradientDescent empty_cache m Tensor train load_model layers rate kernel_size MaxPool2D Sigmoid ReLU tensor Flatten Parameter list transpose MaxPool2d append get_weights BatchNormalization Lambda Tanh Dense __name__ Linear T input_shape isinstance pool_size strides output_shape Conv2d Conv2D BatchNorm2d Tensor Dropout join print load_from_tf cuda summary sequential_keras2torch load list isinstance print load_state_dict keys cuda model_cnn_2layer load_model model_cnn_3layer_fixed cuda load_model get_normalize_layer load_model Sequential cuda model_cnn_2layer get_normalize_layer load_model Sequential model_cnn_3layer_fixed cuda IBP_large cuda load_model IBP_large get_normalize_layer load_model Sequential cuda get_input_shape get_normalize_layer Sequential in_cells Sequential ReLU Flatten Linear in_cells Sequential ReLU Flatten Linear Sequential Conv2d ReLU Flatten Linear isinstance Sequential out_channels Conv2d normal_ sqrt modules zero_ ReLU Flatten Linear Sequential Conv2d ReLU Flatten Linear isinstance Sequential out_channels Conv2d normal_ sqrt modules zero_ ReLU Flatten Linear Sequential Conv2d ReLU Flatten Linear isinstance Sequential out_channels Conv2d normal_ sqrt modules zero_ ReLU Flatten Linear Sequential Linear Conv2d ReLU Flatten get_input_shape in_cells Sequential ReLU Flatten Linear layers rate Sequential copy_ ReLU Activation cuda Flatten list load_model append get_weights LeakyReLU range copy Tanh Dense alpha Linear isinstance print Tensor Dropout get_input_shape load get_normalize_layer Sequential load_state_dict cuda load get_normalize_layer Sequential load_state_dict cuda load get_normalize_layer Sequential load_state_dict cuda load get_normalize_layer Sequential load_state_dict cuda Sequential Conv2d ReLU Flatten Linear Sequential Conv2d ReLU Flatten Linear zeros_like full zeros_like len cosh dfunc logical_not func empty_like dfunc func dfunc func dfunc logical_not func empty_like empty_like func find_d_UB range len find_d_LB empty_like func range len func find_d_UB func find_d_LB range diff range diff plot print get_area format print get_area format get_upper_quad_parameterized func linspace plot norm get format print any compute_bounds linspace range minimum print set_printoptions eval input sum init_crown_bounds compile_recurjac_bounds tuple recurjac_bound_wrapper 
ReLU crown_quad_bound max crown_adaptive_bound compile_crown_bounds crown_general_bound append expand_dims sum format inf astype float flush pop minimum interval_bound print fastlin_bound fastlip_bound min float32 init_fastlin_bounds fastlip_leaky_bound myprint len ub_pn lb_pn lb_p lb_n ub_n ub_p ub_pn lb_pn lb_p lb_n ub_n ub_p ub_pn lb_pn lb_p lb_n ub_n ub_p ones empty range len format print exec globals compile interval_bound zeros_like empty_like copy dot get_bounds range interval_bound zeros_like astype maximum float32 copy empty_like dot range norm arange cumsum float32 empty_like maximum abs len zeros_like proj_li grad_ub f_qp_lb range inf astype empty_like grad_lb f_qp_ub T norm print maximum float32 dot proj_l2 proj_l1 get_best_lower_quad len zeros_like get_best_lower_quad get_best_upper_quad grad_ub f_qp_lb range inf astype empty_like grad_lb f_qp_ub T norm print float32 maximum dot proj len ones empty range len interval_bound zeros_like astype maximum float32 copy empty_like dot range zeros_like astype float32 maximum dot zeros range zeros range zeros_like sum fastlip_2layer logical_and fastlip_nplus1_layer maximum zeros abs range zeros range zeros_like zeros_like print astype float32 dot zeros range fastlip_2layer_leaky print abs logical_and maximum fastlip_nplus1_layer_leaky zeros sum range dot norm range empty_like zeros prange range zeros_like zeros_like prange get_grad_bounds zeros range zeros prange range zeros_like recurjac_bound_forward_improved_1x1_LB zeros_like dot recurjac_bound_forward_improved_1x1_UB expand_dims range zeros_like recurjac_bound_backward_improved_1x1_UB reshape recurjac_bound_backward_improved_1x1_LB dot expand_dims range recurjac_bound_forward_improved_1x1_LB zeros_like prange dot recurjac_bound_forward_improved_1x1_UB zeros expand_dims range zeros_like recurjac_bound_backward_improved_1x1_UB reshape prange recurjac_bound_backward_improved_1x1_LB dot zeros expand_dims range list zeros_like abs tuple recurjac_bound_backward_improved_1x1_UB maximum logical_not recurjac_bound logical_or empty recurjac_bound_next append zeros expand_dims sum range len join getsource replace endswith exec lstrip splitlines range globals __name__ compile len exec format globals compile recurjac_bound_next_with_grad_bounds print prange logical_and recurjac_bound_launcher construct_function get_grad_bounds append sum get_grad_bounds range len zeros_like tuple empty_like recurjac_bound form_grad_bounds append zeros range len recurjac_bound_2layer list recurjac_bound_forward_improved recurjac_bound_backward_improved range len norm abs maximum last_layer_abs_sum empty max range strip replace append float strip split parse_result_line startswith enumerate int split upper ModelCheckpoint model NLayerModel
AI-secure/VeriGauge
26
AIEMMU/MRI_Prostate
['medical image segmentation', 'medical diagnosis']
['Deep learning in magnetic resonance prostate segmentation: A review and a new perspective']
app/contour.py app/predictor.py app/viewer.py app/hooks/hook-fastprogress.py app/hooks/hook-PySide2.QtWebEngineWidgets.py app/hooks/hook-torchvision.py app/ui.py app/displayView.py app/utils.py app/displayViewModel.py app/dicom.py Contour dicom MainWindow DisplayViewModel predictor get_y enableButtons listify loadContour setify getPixmap saveContour distance SubScene find_nearest Viewer get_relative_path_if_possible isinstance uint8 astype shape stack QPixmap Format_RGB888 QImage listify setEnabled savetxt abs sorted asarray pyqtSignal QPoint Path
# Automatic Prostate segmentation from MR images
This provides deep learning code for training a model that segments the prostate from Magnetic Resonance (MR) images. There is also a PyQt application available that lets users explore the deep learning model trained as part of this project. More details on the model can be found [here](https://arxiv.org/abs/2011.07795).
![results of segmentation of the prostate mri](imgs/figure_1.png)
## Dependencies
Before running the Python software, you need to install its dependencies, which include PyTorch, fastai, and pydicom. You can install them from the command line using the following command:
```
pip install -r requirements.txt
```
## Usage
The segmentation model file can be downloaded from [here](https://www.dropbox.com/s/a2rwhy29wx9s448/export_orig.pkl?dl=0)
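The repository lists fastai among its dependencies and ships the trained network as an exported learner (`export_orig.pkl`). The following is only a minimal sketch of loading such an export, assuming a fastai v2-style API, that the pickle's custom helpers (for example the repo's `get_y`) are importable, and a hypothetical already-converted slice image; the app's real DICOM preprocessing pipeline may differ:

```python
# Hedged sketch: load the downloaded fastai export and segment one slice image.
from fastai.vision.all import load_learner, PILImage

learn = load_learner("export_orig.pkl")        # exported segmentation learner
img = PILImage.create("prostate_slice.png")    # hypothetical input slice (already converted from DICOM)
pred_mask, _, probs = learn.predict(img)       # decoded mask plus per-class activations
print(type(pred_mask), probs.shape)
```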
27
AIS-Bonn/FreqNet
['video prediction']
['Frequency Domain Transformer Networks for Video Prediction']
BBall/data_dynamic.py MMNIST/data_dynamic.py show_single_V bounce_n ar bounce_vec bounce_mat sigmoid new_speeds unsigmoid show_A matricize show_V show_single_V bounce_n ar bounce_vec bounce_mat sigmoid new_speeds unsigmoid show_A matricize show_V int norm randn transpose rand dot new_speeds zeros abs array range ar meshgrid zeros array range matricize bounce_n array matricize bounce_n array show int sqrt reshape show int print reshape sqrt range len show range len
# Frequency Domain Transformer Networks for Video Prediction
If you use the code for your research paper, please cite the following paper:
<p>
Hafez Farazi, and Sven Behnke:<br>
<a href="https://arxiv.org/pdf/1903.00271.pdf"><u>Frequency Domain Transformer Networks for Video Prediction</u></a>&nbsp;<a href="https://arxiv.org/pdf/1903.00271.pdf">[PDF]</a> <a href="http://www.ais.uni-bonn.de/~hfarazi/papers/FreqNet.bib">[BIB]</a><br>
In Proceedings of 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Bruges, Belgium, April 2019.<br>
</p>
28
AIS-Bonn/LocDepVideoPrediction
['video prediction']
['Location Dependency in Video Prediction']
data_dynamic.py show_single_V bounce_n ar bounce_vec bounce_mat sigmoid new_speeds unsigmoid show_A matricize show_V int norm randn transpose rand dot new_speeds zeros abs array range ar meshgrid zeros array range matricize bounce_n array matricize bounce_n array show int sqrt reshape show int print reshape sqrt range len show range len
# LocDepVideoPrediction
If you use the code for your research paper, please cite the following paper:
<p>
Niloofar Azizi<b>*</b>, Hafez Farazi<b>*</b>, and Sven Behnke:<br>
<a href="http://www.ais.uni-bonn.de/~hfarazi/papers/LocDep.pdf"><u>Location Dependency in Video Prediction</u></a>&nbsp;<a href="http://www.ais.uni-bonn.de/~hfarazi/papers/LocDep.pdf">[PDF]</a>&nbsp;<a href="http://www.ais.uni-bonn.de/~hfarazi/papers/LocDep.bib">[BIB]</a>&nbsp;<a href="https://github.com/AIS-Bonn/LocDepVideoPrediction">[CODE]</a><br>
Accepted for 27th International Conference on Artificial Neural Networks (ICANN), Rhodes, Greece, to appear October 2018.<br>
<b>*: Both authors contributed equally to this work.</b><br>
</p>
29
ALBERT-Inc/blog_ssap
['instance segmentation', 'semantic segmentation']
['SSAP: Single-Shot Instance Segmentation With Affinity Pyramid']
src/loss.py src/graph_partition.py src/SSAP.py src/mydatasets.py greedy_additive calc_js_div Edge make_ins_seg Partition calc_loss l2_loss focal_loss preprocess Mydatasets Up DoubleConv Down OutConv SSAP list heapify contract heappop removed heappush exp clip sum log greedy_additive sorted calc_js_div list ones nodes where log randint Edge repeat array append zeros argmax Partition range len float ones_like where l2_loss focal_loss tile
# SSAP reproduction repository (public release) This is the code used in [Introduction and experiments with SSAP, a proposal-free instance segmentation method](https://blog.albert2005.co.jp/2020/08/18/ssap/). ## File structure ``` ├─ README.md # Project description ├─ requirements.txt # Python packages that all the Python programs depend on ├─ data/ # Data folder │ ├─ train2014/ # COCO2014 training data │ ├─ val2014/ # COCO2014 validation data │ ├─ annotations/ # COCO2014 annotation data
30
AMLab-Amsterdam/CEVAE
['causal inference']
['Causal Effect Inference with Deep Latent-Variable Models']
cevae_ihdp.py utils.py datasets.py evaluation.py IHDP Evaluator get_y0_y1 fc_net format print write mean range flush
# CEVAE This repository contains the code for the Causal Effect Variational Autoencoder (CEVAE) model as developed at [1]. This code is provided as is and will not be updated / maintained. Sample experiment --- To perform a sample run of CEVAE on 10 replications of the Infant Health and Development Program (IHDP) dataset just type: `python cevae_ihdp.py` Other datasets
31
AMLab-Amsterdam/SEVDL_MGP
['gaussian processes']
['Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors']
mnist_classification.py optimizer.py nn_utils.py layers.py yacht_regression.py matrix_layers.py VMGNet.py sample_mgaus sample_gauss sample_mult_noise add_bias kldiv_gamma sample_mgaus2 Layer MatrixGaussDiagLayerLearnP MatrixGaussDiagLayerFF randmat randmat2 randvector tscalar multvector log_f Polygamma change_random_seed Psi BaseOptimizer Adam VMGNet loglike RandomStreams RandomState normal uniform normal concatenate sqrt uniform zeros astype floatX concatenate sum astype float32 zeros ravel range
Example implementation of the Bayesian neural network in: ***"Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors"***, Christos Louizos & Max Welling, ICML 2016, ([https://arxiv.org/abs/1603.04733]()) This code is provided as is and will not be maintained / updated.
32
ANLGBOY/SoftFlow
['point cloud generation']
['SoftFlow: Probabilistic Framework for Normalizing Flow on Manifolds']
toy_example/lib/layers/diffeq_layers/__init__.py toy_example/generate2.py toy_example/lib/layers/diffeq_layers/basic.py pointclouds/metrics/pytorch_structural_losses/nn_distance.py toy_example/lib/layers/container.py toy_example/lib/layers/odefunc.py pointclouds/utils.py pointclouds/metrics/pytorch_structural_losses/setup.py pointclouds/args.py pointclouds/models/glow.py pointclouds/models/networks.py toy_example/lib/visualize_flow.py toy_example/generate1.py toy_example/lib/layers/__init__.py pointclouds/generate2.py pointclouds/evaluate.py toy_example/lib/layers/cnf.py toy_example/lib/layers/wrappers/cnf_regularization.py pointclouds/train.py toy_example/lib/layers/squeeze.py pointclouds/models/AF.py toy_example/train.py toy_example/lib/toy_data.py toy_example/lib/utils.py pointclouds/metrics/pytorch_structural_losses/__init__.py pointclouds/metrics/evaluation_metrics.py pointclouds/generate1.py pointclouds/metrics/pytorch_structural_losses/match_cost.py pointclouds/datasets.py get_parser get_args add_args ShapeNet15kPointClouds get_testset get_data_loaders get_trainset Uniform15KPC init_np_seed main get_test_loader evaluate_gen viz_save_recon get_viz_config viz_save_sample gen_samples main gen_samples viz_save_sample main get_viz_config main_worker get_init_data main standard_normal_logprob AverageValueMeter visualize_point_clouds validate_sample validate bernoulli_log_likelihood truncated_normal apply_random_rotation kl_diagnormal_diagnormal set_random_seed gaussian_log_likelihood validate_conditioned resume kl_diagnormal_stdnormal save reduce_tensor _pairwise_EMD_CD_ jensen_shannon_divergence _jsdiv EMD_CD lgan_mmd_cov knn compute_all_metrics jsd_between_point_cloud_sets distChamferCUDA unit_cube_grid_point_cloud entropy_of_occupancy_grid emd_approx distChamfer MatchCostFunction NNDistanceFunction remove ActNorm Flow Invertible1x1Conv AF ConcatSquashLinear remove fused_add_tanh_sigmoid_multiply ActNorm Invertible1x1Conv WN Glow SoftPointFlow Encoder get_transforms save_fig get_transforms save_fig generate get_transforms compute_loss inf_train_gen standard_normal_logprob build_model_tabular count_parameters RunningAverageMeter AverageMeter count_nfe inf_generator logsumexp isnan save_checkpoint count_total_time get_logger makedirs plt_flow plt_samples visualize_transform plt_potential_func plt_flow_samples plt_flow_density _flip CNF SequentialFlow Swish ODEfunc sample_rademacher_like Lambda ODEnet divergence_approx divergence_bf sample_gaussian_like SqueezeLayer squeeze unsqueeze ConcatSquashLinear RegularizedODEfunc add_argument ArgumentParser add_args parse_args join get_parser initial_seed seed print ShapeNet15kPointClouds print ShapeNet15kPointClouds DataLoader get_datasets DataLoader get_testset batch_size get_test_loader std_max list std_scale compute_all_metrics pprint test_std_n append cat item sample JSD flush items print write dumps numpy load join print SoftPointFlow multi_gpu_wrapper makedirs load_checkpoint eval load_state_dict set_initialized cuda open set_title set_xlim add_subplot axis draw close shape scatter set_zlim figure get_xlim get_ylim savefig range get_zlim set_ylim set_xlim add_subplot axis draw close shape scatter set_zlim figure savefig range set_ylim tr_max_sample_points std_max join format get_testset reconstruct viz_save_recon print std_scale get_trainset viz_save_sample DataLoader test_std_n sample range cuda gpu enumerate get_viz_config cates str set_title enumerate len squeeze stack append batch_size model SoftPointFlow multi_gpu_wrapper get_trainset 
DataLoader save cuda resume_checkpoint make_optimizer str all_points_mean std_max StepLR all_points_std get_testset view set_device DistributedSampler len random_rotate strftime std_scale epochs shape rank range SummaryWriter format init_process_group randn_like distributed resume set_initialized enumerate join int time collect apply_random_rotation print set_epoch std_min shuffle_idx train step array gpu add_scalar std_max view std_scale randn_like get_trainset shape DataLoader std_min iter next seed world_size get_args spawn warn set_random_seed device_count distributed main_worker get_init_data save_dir gpu clamp exp log clamp exp exp expand_as squeeze add_ copy_ shape normal_ all_reduce clone get_world_size size log seed manual_seed_all manual_seed set_title transpose set_xlim add_subplot axis draw close scatter get_ylim figure get_xlim set_zlim _renderer array get_zlim set_ylim bmm ones reshape print rand cos pi sin zeros to join reconstruct print EMD_CD batch_size iter save append numpy gpu cat update join append batch_size print size compute_all_metrics pprint iter save sample numpy JSD gpu cat load load_state_dict join list items makedirs eval item use_latent_flow add_scalar float match_cost bmm size transpose expand_as long list min mean distChamferCUDA distChamfer append emd_approx range cat list view size min contiguous expand distChamfer distChamferCUDA append emd_approx range cat update topk float size sqrt index_select to range cat to size min mean float update _pairwise_EMD_CD_ lgan_mmd_cov knn t ndarray reshape float32 float range reshape squeeze fit float warn unit_cube_grid_point_cloud unique zeros kneighbors len _jsdiv sum entropy warn sum ModuleList append remove_weight_norm sigmoid tanh subplot join invert_yaxis set_xlim close set_ticks scatter set_facecolor figure set_alpha dirname savefig set_ylim makedirs to numpy int64 save_fig inf_train_gen std_max data std_weight view batch_size model randn_like std_min to sum RandomState ones rand cos astype pi sqrt binomial vstack linspace sin append randint range getLogger addHandler StreamHandler info DEBUG setLevel INFO FileHandler __iter__ join save makedirs exp isinstance squeeze sum max pi AccNumEvals apply Accumulator apply tuple SequentialFlow map split set_title reshape invert_yaxis hstack pcolormesh set_ticks linspace meshgrid Tensor numpy prior_logdensity transform cmap set_title reshape invert_yaxis hstack set_xlim pcolormesh set_ticks set_facecolor linspace get_cmap meshgrid to set_ylim int set_title reshape hstack set_ticks imshow int64 cat linspace meshgrid inverse_transform to sum append split int transform set_title invert_yaxis set_xlim set_ticks int64 scatter append to numpy set_ylim split set_title invert_yaxis set_xlim set_ticks scatter set_ylim plt_flow subplot plt_samples plt_potential_func plt_flow_samples clf plt_flow_density size dim arange range sum size view contiguous size view contiguous
# SoftFlow: Probabilistic Framework for Normalizing Flow on Manifolds This repository provides the implementation of SoftFlow on toy dataset and point clouds. Move to each folder, follow the instructions and enjoy the results! ## Overview <p align="center"> <img src="assets/training_technique.png" height=256/> </p> Flow-based generative models are composed of invertible transformations between two random variables of the same dimension. Therefore, flow-based models cannot be adequately trained if the dimension of the data distribution does not match that of the underlying target distribution. In this paper, we propose SoftFlow, a probabilistic framework for training normalizing flows on manifolds. To sidestep the dimension mismatch problem, SoftFlow estimates a conditional distribution of the perturbed input data instead of learning the data distribution directly. We experimentally show that SoftFlow can capture the innate structure of the manifold data and generate high-quality samples unlike the conventional flow-based models. Furthermore, we apply the proposed framework to 3D point clouds to alleviate the difficulty of forming thin structures for flow-based models. The proposed model for 3D point clouds, namely SoftPointFlow, can estimate the distribution of various shapes more accurately and achieves state-of-the-art performance in point cloud generation. ## Results - Toy datasets <p align="center">
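The training trick described above (perturbing each sample with noise of a randomly drawn magnitude and conditioning the flow on that magnitude) can be sketched roughly as follows; the function, the `c_max` bound, and the `flow.log_prob` call in the usage comment are assumptions rather than the repository's API.

```python
import torch

def softflow_batch(x, c_max=0.1):
    """Perturb a batch x with a per-sample noise std c and return (x_noisy, c).

    A conditional flow is then trained on p(x_noisy | c); at sampling time c
    is set to 0 so the model generates on the original data manifold.
    """
    # one noise scale per sample, broadcastable over the remaining dimensions
    c = torch.rand(x.size(0), *([1] * (x.dim() - 1))) * c_max
    x_noisy = x + torch.randn_like(x) * c
    return x_noisy, c

# usage sketch (assumed API):
#   x_noisy, c = softflow_batch(x)
#   loss = -flow.log_prob(x_noisy, cond=c).mean()
```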
33
ANLGBOY/WaveNODE
['speech synthesis']
['WaveNODE: A Continuous Normalizing Flow for Speech Synthesis']
preprocessing.py train.py data.py odefunc.py hps.py args.py layers.py synthesize.py test_speed.py test_cll.py utils.py model.py parse_args _pad collate_fn_synthesize _pad_2d collate_fn LJspeechDataset Hyperparameters fused_add_tanh_sigmoid_multiply ActNorm CNF WaveNetPrior logabs MovingBatchNorm1d MBNLayer remove NODEBlock WaveNODE DELayer SqueezeLayer ActnormLayer ODEfunc fused_add_tanh_sigmoid_multiply_with_t WaveNet ODEnet divergence_approx sample_rademacher_like _process_utterance preprocess build_from_path write_metadata load_checkpoint build_model synthesize load_dataset load_checkpoint build_model evaluate load_dataset load_checkpoint build_model synthesize load_dataset synthesize evaluate build_model load_checkpoint save_checkpoint load_dataset train actnorm_init get_logger count_nfe mkdir add_argument ArgumentParser pad seed contiguous microsecond array append randint max range len max contiguous append array range len sigmoid tanh ModuleList append remove_weight_norm sum sigmoid tanh view ProcessPoolExecutor load join T astype float32 maximum pad log10 save max clip len build_from_path write_metadata makedirs print sum max data_path DataLoader LJspeechDataset print WaveNODE time format write_wav print synchronize size new_zeros Normal new_ones sample to numpy enumerate load join format print load_state_dict sum format model print write dumps eval flush enumerate write dumps flush write dumps flush model clip_grad_norm_ zero_grad format flush enumerate synth_interval log_interval time collect backward print write dumps parameters step array len time len array eval join format save state_dict print next iter AccNumEvals apply join format open n_layer_wvn str T norm temp batch_size n_channel_wvn split_period tol n_block tol_synth model_name scale scale_init d_i makedirs
# Pytorch Implementation of WaveNODE PyTorch implementation of WaveNODE ## Abstract In recent years, various flow-based generative models have been proposed to generate high-fidelity waveforms in real time. However, these models require either a well-trained teacher network or a number of flow steps, making them memory-inefficient. In this paper, we propose a novel generative model called WaveNODE, which exploits a continuous normalizing flow for speech synthesis. Unlike the conventional models,
34
APooladian/ProxLogBarrierAttack
['adversarial attack']
['A principled approach for generating adversarial images under non-smooth dissimilarity metrics']
mnist-example/model/utils.py proxlogbarrier_Top1.py mnist-example/model/__init__.py proxlogbarrier_Top5.py mnist-example/model/mnist.py InitMethods.py prox.py mnist-example/run_attack.py mnist-example/model/blocks.py simplex.py GaussianInitialize UniformInitialize BlurInitialize HeatSmoothing SafetyInitialize L2NormProx_Batch LinfNormProx L0NormProx_Batch L1NormProx Attack Top1Criterion Attack Top5Criterion L1BallProj _proj simplexproj indices SimplexProj prod Conv Linear LeNet View arange expand shape div_ zeros argmax list ones tolist expand append enumerate len
# ProxLogBarrierAttack Public repository for the ProxLogBarrier attack, described in [A principled approach for generating adversarial images under non-smooth dissimilarity metrics](https://arxiv.org/abs/1908.01667). Abstract: Deep neural networks perform well on real world data, but are prone to adversarial perturbations: small changes in the input easily lead to misclassification. In this work, we propose an attack methodology catered not only for cases where the perturbations are measured by Lp norms, but in fact any adversarial dissimilarity metric with a closed proximal form. This includes, but is not limited to, L1,L2,L_inf perturbations, and the L0 counting "norm", i.e. true sparseness. Our approach is a natural extension of a recent adversarial attack method, and eliminates the differentiability requirement of the metric. We demonstrate our algorithm, ProxLogBarrier, on the MNIST, CIFAR10, and ImageNet-1k datasets. We consider undefended and defended models, and show that our algorithm transfers to various datasets with little parameter tuning. Furthermore, we observe that ProxLogBarrier obtains the best results with respect to a host of modern adversarial attacks specialized for the L0 case. ## Implementation details Code is written in Python 3 and PyTorch 1.0. The implementation takes advantage of the GPU: a batch of images can be attacked at a given time. Hyperparameters are the defaults for the MNIST, CIFAR10, and ImageNet-1k datasets. We provide a pre-trained MNIST model (LeNet), with an example attack provided in `mnist-example/run_attack.py`. The attack is implemented as a class in `proxlogbarrier_Top1.py`. The Top5 version is included as well. ### Citation If you find the ProxLogBarrier attack useful in your scientific work, please cite as
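To make the phrase "closed proximal form" concrete, the following minimal sketch (an illustration, not the repository's `prox.py`) shows the proximal operator of the L1 norm, i.e. element-wise soft-thresholding, which is the kind of closed-form update a prox-gradient attack can apply at each step.

```python
import torch

def prox_l1(v, lam):
    """Closed-form proximal operator of lam * ||.||_1 (element-wise soft-thresholding):
    prox(v) = sign(v) * max(|v| - lam, 0)."""
    return torch.sign(v) * torch.clamp(v.abs() - lam, min=0.0)

# sketch of one prox-gradient step on a perturbation delta:
# grad would come from the (log-barrier) attack objective; here it is a placeholder
delta = torch.zeros(3, 32, 32)
grad = torch.randn_like(delta)
delta = prox_l1(delta - 0.01 * grad, lam=0.005)
```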
35
ARMargolis/melanoma-pytorch
['network pruning']
['The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks']
model/data.py scripts/train_model.py model/model.py MelanomaDataset MelanomaNet BasicBlock BottleneckBlock
Goal: A PyTorch model for https://www.kaggle.com/c/siim-isic-melanoma-classification/data Technique: Pruning https://pytorch.org/tutorials/intermediate/pruning_tutorial.html#iterative-pruning Theory: Lottery Ticket Hypothesis https://arxiv.org/pdf/1803.03635.pdf # Setup ## Colab stuff To use the kaggle API in CoLab, follow the instructions here: [ https://medium.com/@move37timm/using-kaggle-api-for-google-colaboratory-d18645f93648#:~:text=To%20use%20the%20Kaggle%20API,of%20a%20file%20called%20'kaggle ] (some of the steps are present in the `setting_up_kaggle_credentials.ipynb` notebook, so you can copy them over) ## Dependencies We will use poetry to manage dependencies, so all you have to do is: 1. Clone the repo
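Since the stated technique is iterative pruning in PyTorch, below is a minimal sketch of that loop using `torch.nn.utils.prune`; the toy model, the 20% rate, and the number of rounds are placeholders, and the retraining/rewinding step of the Lottery Ticket procedure is only indicated by a comment.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# toy stand-in for the melanoma classifier
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(), nn.Linear(8 * 30 * 30, 2))

for round_idx in range(3):                          # a few prune/retrain rounds
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            # zero out 20% of the smallest-magnitude weights in this layer
            prune.l1_unstructured(module, name="weight", amount=0.2)
    # ... retrain the sparser model here (omitted); the Lottery Ticket procedure
    # would also rewind surviving weights to their original initialization ...

for module in model.modules():                      # make the pruning masks permanent
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.remove(module, "weight")
```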
36
ASMIftekhar/VSGNet
['human object interaction detection']
['VSGNet: Spatial Attention Network for Detecting Human Object Interactions Using Graph Convolutions']
scripts_hico/HICO_eval/sample_complexity_analysis.py scripts/calculate_ap_classwise.py scripts_hico/train_test.py scripts/dataloader_vcoco.py scripts/prior_vcoco.py scripts_hico/helpers_preprocess.py scripts_hico/calculate_map_vcoco.py scripts_hico/model.py scripts_hico/HICO_eval/bbox_utils.py scripts/pred_vis.py scripts_hico/main.py scripts/model.py scripts_hico/pred_vis.py scripts/pool_pairing.py scripts/train_test.py scripts_hico/proper_inferance_file.py scripts/calculate_map_vcoco.py scripts_hico/HICO_eval/compute_map.py scripts/helpers_preprocess.py scripts/proper_inferance_file.py scripts_hico/calculate_ap_classwise.py scripts/main.py scripts_hico/pool_pairing.py setup.py scripts_hico/HICO_eval/load.py scripts_hico/HICO_eval/hico_constants.py scripts_hico/dataloader_hico.py vis_sub_obj_bboxes compute_area_batch join_bboxes_by_line add_bbox compute_iou compute_area compute_iou_batch vis_human_keypts vis_bboxes vis_bbox compute_ap eval_hoi compute_pr compute_normalized_pr load_gt_dets match_hoi ForkedPdb main HicoConstants load_pickle_object load_json_object read dump_json_object load_mat_object dump_pickle_object deserialize_object write load_yaml_object mkdir_if_not_exists dumps_json_object JsonSerializableClass serialize_object WritableToFile NumpyAwareJSONEncoder main compute_mAP polygon polygon_perimeter set_color min max compute_area zeros logical_and minimum compute_area_batch stack maximum min copy set_color polygon polygon_perimeter max range copy vis_bbox min copy line_aa max range circle vis_bboxes join_bboxes_by_line zip line_aa circle range copy compute_iou enumerate isnan any max arange cumsum array nan cumsum sum array nan int compute_ap format join compute_pr print match_hoi save append print join load_json_object append join load_json_object format starmap dump_json_object proc_dir print load_gt_dets num_processes mkdir_if_not_exists close set mkdir out_dir append parse_args Pool len decompress read loads compress write dumps encode compress write dumps dumps mkdir exists makedirs list items sorted compute_mAP HicoConstants bin_to_hoi_ids_json keys
# VSGNet ### [**VSGNet: Spatial Attention Network for Detecting Human Object Interactions Using Graph Convolutions**](http://openaccess.thecvf.com/content_CVPR_2020/papers/Ulutan_VSGNet_Spatial_Attention_Network_for_Detecting_Human_Object_Interactions_Using_CVPR_2020_paper.pdf) [Oytun Ulutan](https://sites.google.com/view/oytun-ulutan), [A S M Iftekhar*](https://sites.google.com/view/asmiftekhar/home), [B S Manjunath](https://vision.ece.ucsb.edu/people/bs-manjunath). Official repository of our [**CVPR 2020**](http://cvpr2020.thecvf.com/) paper. ![Overview of VSGNET](https://github.com/ASMIftekhar/VSGNet/blob/master/7850-teaser.gif?raw=true) ## Citing If you find this work useful, please consider citing our paper: @InProceedings{Ulutan_2020_CVPR, author = {Ulutan, Oytun and Iftekhar, A S M and Manjunath, B. S.}, title = {VSGNet: Spatial Attention Network for Detecting Human Object Interactions Using Graph Convolutions},
37
AWLyrics/SeqGAN_Poem
['text generation']
['SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient']
lyric_preprocessing.py preprocessing.py bleu_calc.py dataloader.py mydis.py myG_beta.py rhyme.py seq_gan.py data/preprocess.py mygen.py id2poem.py session_save.py translate.py calc_bleu Input_Data_loader Gen_Data_loader Dis_dataloader load_data linear highway Discriminator Generator G_beta rhyme2 rhyme1 generate_samples pre_train_epoch_v2 generate_samples_v2 pre_train_epoch main translate_file translate write2file preprocess sentence_bleu SmoothingFunction zip pad_sequences fit_on_texts Tokenizer texts_to_sequences as_list Dis_dataloader num_batch create_batches Session open seed run str calc_bleu Generator generate_v2 G_beta g_update Discriminator sum range load_train_data close get_reward mean update_params ConfigProto Input_Data_loader time train_op print write pre_train_epoch_v2 generate_samples_v2 global_variables_initializer next_batch reset_pointer range extend generate generate_v2 extend next_batch reset_pointer range pretrain_step num_batch append next_batch reset_pointer range pretrain_step_v2 num_batch append next_batch reset_pointer range get int append
# SeqGAN Poem SeqGAN modified for poem generation (the Oracle LSTM is removed) ## Pipeline - Collect and preprocess Tang poems (preprocessing.py); 5,000 five-character regulated verses (20 tokens each) are selected from the Tang poetry [dataset](https://github.com/chinese-poetry/chinese-poetry) - Tokenize: convert Chinese characters to indices (train.txt), build the vocabulary (dict.pkl), and record the vocab size - Pretrain Generation - Adversarial Training ## Reference Paper: https://arxiv.org/abs/1609.05473 Original Repo: [SeqGAN](https://github.com/LantaoYu/SeqGAN)
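A rough sketch of the tokenization step listed in the pipeline above, i.e. converting characters to indices, writing `train.txt`, and dumping the vocabulary to `dict.pkl`; the two example lines of verse are placeholders and the code is an illustration rather than the repository's `preprocessing.py`.

```python
import pickle

poems = ["床前明月光", "疑是地上霜"]            # placeholder poems, one per line in practice

# character-level vocabulary: each distinct character gets an integer index
vocab = {ch: idx for idx, ch in enumerate(sorted({ch for p in poems for ch in p}))}
encoded = [[vocab[ch] for ch in p] for p in poems]

with open("dict.pkl", "wb") as f:              # vocabulary file, as mentioned in the pipeline
    pickle.dump(vocab, f)
with open("train.txt", "w", encoding="utf-8") as f:
    for ids in encoded:
        f.write(" ".join(map(str, ids)) + "\n")
print("vocab size:", len(vocab))
```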
38
AWehenkel/DAG-NF
['density estimation']
['Graphical Normalizing Flows']
UCIExperiments.py ImageExperimentsTest.py models/Conditionners/DAGConditioner.py UCIdatasets/miniboone.py ToyExperiments.py models/Normalizers/Normalizer.py UCIdatasets/__init__.py models/NormalizingFlowFactories.py models/__init__.py UCIdatasets/gas.py lib/toy_data.py models/Conditionners/CouplingConditioner.py UCIdatasets/bsds300.py models/Normalizers/MonotonicNormalizer.py UCIdatasets/download_dataset.py models/NormalizingFlow.py models/Normalizers/__init__.py lib/visualize_flow.py UCIdatasets/digits.py lib/dataloader.py models/Conditionners/Conditioner.py UCIdatasets/power.py models/Conditionners/__init__.py UCIdatasets/proteins.py lib/transform.py lib/utils.py models/Normalizers/AffineNormalizer.py ImageExperiments.py models/Conditionners/AutoregressiveConditioner.py UCIdatasets/hepmass.py models/MLP.py add_noise compute_bpp train load_data add_noise compute_bpp load_data test train_toy batch_iter train load_data dataloader inf_train_gen logit logit_back ZeroPadding Crop Transpose ToTensor Resize HorizontalFlip AddUniformNoise RunningAverageMeter AverageMeter inf_generator logsumexp isnan save_checkpoint get_logger makedirs plt_flow plt_samples visualize_transform plt_potential_func plt_flow_samples plt_flow_density plt_stream CIFAR10CNN IdentityNN MNISTCNN MLP FCNormalizingFlow NormalizingFlowStep NormalizingFlow CNNormalizingFlow buildCIFAR10NormalizingFlow NormalLogDensity buildMNISTNormalizingFlow MNIST_A_prior buildFCNormalizingFlow ConditionnalMADE AutoregressiveConditioner MADE MaskedLinear Conditioner CouplingConditioner CouplingMLP DAGMLP DAGConditioner AffineNormalizer IntegrandNet ELUPlus MonotonicNormalizer _flatten Normalizer BSDS300 load_data_normalised load_data_split_with_noise load_data DIGITS get_file load_mnist_images_np load_cifar10 ParanoidURLopener Progbar load_batch GAS get_correlation_numbers load_data load_data_and_clean load_data_and_clean_and_split load_data_no_discrete_normalised_as_array HEPMASS load_data_no_discrete load_data_no_discrete_normalised load_data load_data_normalised load_data MINIBOONE POWER load_data_split_with_noise load_data load_data_normalised PROTEINS load_data get_shd get_adj_matrix uniform_ log2 sum log MNIST int random_split DataLoader CIFAR10 model buildCIFAR10NormalizingFlow zero_grad timer cuda getAlpha values str list getNormalizers exit Adam getConditioners load_state_dict to sum get_logger range inf buildMNISTNormalizingFlow info item enumerate load items isinstance backward print isnan parameters empty_cache load_data Tensor step loss buildFCNormalizingFlow nb_steps subplots set_bad set_clim buildCIFAR10NormalizingFlow topological_sort save cuda max values str list getNormalizers matshow set_xlabel exit Adam colorbar getConditioners Writer load_state_dict savefig from_numpy_matrix to sum get_logger range state_dict format inf DAGness close buildMNISTNormalizingFlow info get_cmap float enumerate load items int isinstance print reshape clone parameters load_data Tensor train numpy buildFCNormalizingFlow set_detect_anomaly model zero_grad timer z_log_density steps max str exit Adam load_state_dict compute_ll to get_logger range format DAGness item info load backward print rc isnan parameters train step loss buildFCNormalizingFlow arange randperm cuda is_cuda split batch_iter x save state_dict isfile MNIST int random_split ConcatDataset print exit DataLoader CIFAR10 arange randn rand pi floor vstack linspace exp sin append range RandomState concatenate astype sqrt stack sample zeros T reshape repeat randint array sigmoid getLogger 
addHandler StreamHandler info DEBUG setLevel INFO FileHandler __iter__ join save makedirs exp isinstance squeeze sum max set_title reshape invert_yaxis hstack pcolormesh set_ticks linspace meshgrid Tensor numpy cmap reshape hstack set_edgecolor set_xlim pcolormesh set_ticks set_facecolor linspace meshgrid get_cmap sum set_ylim meshgrid streamplot hstack linspace int set_title reshape hstack set_ticks imshow int64 cat linspace meshgrid inverse_transform to sum append split int transform set_title set_ticks int64 hist2d append to numpy split invert_yaxis hist2d set_title set_ticks plt_flow subplot plt_samples plt_potential_func plt_flow_samples clf plt_flow_density NormalizingFlowStep conditioner_type normalizer_type append range zeros range view list NormalizingFlowStep zip normalizer_type NormalLogDensity MNISTCNN FCNormalizingFlow append range DAGConditioner len list NormalizingFlowStep zip CIFAR10CNN normalizer_type NormalLogDensity FCNormalizingFlow append range DAGConditioner len shuffle int RandomState load_data mean vstack load_data_split_with_noise std join print extractall retrieve close open expanduser makedirs load items list reshape close open get_file join str print reshape zeros range load_batch read_pickle drop corr sum get_correlation_numbers mean any load_data std drop int as_matrix read_csv load_data drop mean std load_data_no_discrete int T Counter load_data_no_discrete_normalised append load load_data rand hstack delete zeros get_adj_matrix zeros
# Graphical Normalizing Flows Official code and experiments for the paper: > Graphical Normalizing Flows, Antoine Wehenkel and Gilles Louppe. (May 2020). > [[arxiv]](https://arxiv.org/abs/2006.02548) # Dependencies The list of dependencies can be found in the requirements.txt file and installed with the following command: ```bash pip install -r requirements.txt ``` # Code architecture
39
AaltoVision/MaskMVS
['depth estimation']
['Unstructured Multi-View Depth Estimation Using Mask-Based Multiplane Representation']
models/__init__.py models/MaskNet.py models/DispNet.py generate_volume.py generate_volume gen_mask_gt where warping_neighbor DispNet upconv predict_disp conv crop_like mask_layer down_conv_layer up_conv_layer MaskNet crop_like conv_layer append size enumerate warping_neighbor bmm view grid_sample Variable clamp size expand from_numpy matrix cuda detach append type where FloatTensor
# MaskMVS Yuxin Hou · [Arno Solin](http://arno.solin.fi) · [Juho Kannala](https://users.aalto.fi/~kannalj1/) Code for the paper: * Yuxin Hou, Arno Solin, and Juho Kannala (2019). **Unstructured multi-view depth estimation using mask-based multiplane representation**. *Scandinavian Conference on Image Analysis (SCIA)*. [[preprint on arXiv](https://arxiv.org/abs/1902.02166)] ## Summary MaskMVS is a method for depth estimation from unstructured multi-view image-pose pairs. In the plane-sweep procedure, the depth planes are sampled by histogram matching, which ensures that the depth range of interest is covered. Unlike other plane-sweep methods, we do not rely on a cost metric to explicitly build the cost volume, but instead infer a multiplane mask representation which regularizes the learning. Compared to many previous approaches, we show that our method is lightweight and generalizes well without requiring excessive training. See the paper for further details. ## Requirements Tested with: * Python3 * Numpy
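One way to read "depth planes are sampled by histogram matching" is to place plane depths at quantiles of a depth distribution so that the depth range of interest is covered; the sketch below is an assumed illustration of that idea, not the paper's exact procedure.

```python
import numpy as np

def sample_depth_planes(depth_values, n_planes=32):
    """Place plane depths at evenly spaced quantiles of a depth distribution,
    so dense depth ranges receive proportionally more planes."""
    qs = np.linspace(0.0, 1.0, n_planes)
    return np.quantile(depth_values, qs)

# usage sketch with a synthetic depth histogram
depths = np.random.gamma(shape=2.0, scale=1.5, size=10000) + 0.5
print(sample_depth_planes(depths, n_planes=8))
```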
40
AaltoVision/Object-Retrieval
['image retrieval']
['Context Aware Query Image Representation for Particular Object Retrieval']
SA.py
# Object-Retrieval Particular object retrieval using a CNN. Code for the arXiv submission https://arxiv.org/pdf/1703.01226.pdf. The source code is a modification of the code available at http://www.xrce.xerox.com/Our-Research/Computer-Vision/Learning-Visual-Representations/Deep-Image-Retrieval. Follow the instructions in the above link to set up the installation. Then replace the file "test.py" with "SA.py", and the same set of commands provided in their README should work fine. However, to reproduce the results in our paper, the command associated with multi-resolution should be used (explained in detail in the README from the Xerox link).
41
Abe404/segmentation_of_roots_in_soil_with_unet
['semantic segmentation']
['Segmentation of Roots in Soil with U-Net']
src/frangi/test.py src/data_utils.py src/frangi/segment.py src/unet/train.py src/unet/sys_utils.py src/unet/im_utils.py src/unet/elastic.py src/unet/log.py src/unet/unet.py src/metrics.py src/unet/checkpointer.py src/unet/loss.py src/unet/test.py src/frangi/train.py src/frangi/cmaes_utils.py src/unet/datasets.py src/frangi/frangi_filter.py get_files_split load_annotations_pool load_train_data zero_one load_images_pool get_paths get_metrics format_num evaluate_segmentations_from_dirs print_metrics_from_dirs evaluate_segmentations get_metrics_str assign_round get_best_params rescale_cmaes_params assign_proba_round get_cmaes_params_warp get_proba_rounded load_from_cache frangi ensure_cache_dir_exists _frangi_hessian_common_filter get_thresholded get_frangi is_in_lambda_cache save_to_cache get_sigmas segment_image segment_im_wrapper produce_segmentations produce_segmentations_pool save_segmentations_pool ensure_dir_exists seg_frangi_cca_test print_frangi_metrics load_best_params_from_log find_cmaes_settings get_f1_fast get_start_params cmaes_image_f1_score cmaes_metric_from_pool CheckPointer delete_old_checkpoints delete_all_except latest_pkl_file UNetTransformer UNetValDataset UNetTrainDataset transform_annotation get_elastic_map transform_photo get_indices add_salt_pepper get_tiles_and_masks_with_roots reconstruct_from_tiles get_val_tiles_and_masks tiles_from_coords add_gaussian_noise pad scale_zero_one get_train_tiles_and_masks get_random_tiles get_tiles Logger dice_loss combined_loss multi_process process_test_set skel_im csv_parts_list process_2016_grid_counted save_skels_csv segment_dir_with_unet join_data pixel_count print_correlations unet_segment train_unet kaiming_conv_init get_data_loaders evaluate DownBlock crop_tensor UNetGN UpBlock print shape print array join sorted load_annotations_pool realpath load_images_pool dirname get_paths join sorted realpath dirname get_paths items list float sum logical_and len print evaluate_segmentations_from_dirs get_metrics sorted reshape listdir evaluate_segmentations get_metrics print reshape zip append enumerate append argmin get_cmaes_params_warp warper get_proba_rounded int round array BoundTransform ceil abs floor ones shape join save isfile print makedirs load_from_cache str exp ensure_cache_dir_exists ones hessian_matrix_eigvals shape hessian_matrix any zeros array save_to_cache enumerate _frangi_hessian_common_filter array frangi rgb2grey rescale_intensity get_sigmas print makedirs astype get_thresholded zero_one get_frangi remove_small_objects join time print img_as_float segment_image zip array imsave makedirs append enumerate join print img_as_float zip append imsave enumerate load argmin print get_best_params print_metrics_from_dirs astype float32 produce_segmentations enumerate f1_score reshape astype append array zip enumerate segment_image get_f1_fast get_cmaes_params_warp load_train_data get_start_params partial join remove listdir glob max latest_pkl_file delete_all_except uniform gaussian_filter pad map_coordinates array reshape astype shape pad int64 array append int rand astype floor array normal reshape shape get_tiles_and_masks_with_roots get_random_tiles zip tiles_from_coords zip get_tiles ceil pad tiles_from_coords append zeros zip append int list astype tiles_from_coords pad zip append zip mul view softmax float sum join time print apply_async close append Pool shape eval get_tiles moveaxis enumerate load join print UNetGN load_state_dict device is_available imread listdir cuda enumerate unet_segment makedirs join int 
img_as_float astype skeletonize imread join imread print sorted multi_process append strip readlines split list csv_parts_list zip print linregress join save_skels_csv segment_dir_with_unet join_data multi_process listdir print_correlations makedirs print_metrics_from_dirs segment_dir_with_unet get_files_split UNetValDataset UNetTrainDataset DataLoader data isinstance Conv2d kaiming_normal_ ConvTranspose2d time concatenate print eval len get_metrics batch_size zero_grad SGD combined_loss MultiStepLR numpy Logger dataset cuda maybe_save assign_new_tiles UNetGN apply append range concatenate get_data_loaders softmax log_metrics CheckPointer flush enumerate time evaluate backward print write parameters cnn train step len size
Abe404/segmentation_of_roots_in_soil_with_unet
42
AbertayMachineLearningGroup/machine-learning-SIEM-water-infrastructure
['anomaly detection', 'cyber attack detection']
['Improving SIEM for Critical SCADA Water Infrastructures Using Machine Learning']
preprocessing.py classification-with-confidence.py classification.py ResultData calculate_accuracy write_data_to_file main main main read_file_and_write_rows DataSetFile count_nonzero index argsort ResultData predict_proba classes_ unique zeros range len StratifiedKFold SVC KNeighborsClassifier values LogisticRegression fit_transform format write_data_to_file GaussianNB mkdir RandomForestClassifier calculate_accuracy fit DecisionTreeClassifier transform dropna StandardScaler read_csv split score crosstab list transpose range predict to_csv writer is_anomaly situation print writerow fileName operational_scenario combined_affected_component combined_situation DataSetFile close append read_file_and_write_rows affected_component open
# Improving SIEM for Critical SCADA Water Infrastructures Using Machine Learning This work aims to use different machine learning techniques to detect anomalies (including hardware failures, sabotage and cyber-attacks) in SCADA water infrastructure. ## Dataset Used The dataset used is published [here](https://www.sciencedirect.com/science/article/pii/S2352340917303402) ## Citation If you want to cite the paper, please use the following format: ```` @InProceedings{10.1007/978-3-030-12786-2_1, author="Hindy, Hanan and Brosset, David and Bayne, Ethan and Seeam, Amar and Bellekens, Xavier", editor="Katsikas, Sokratis K. and Cuppens, Fr{\'e}d{\'e}ric and Cuppens, Nora and Lambrinoudakis, Costas and Ant{\'o}n, Annie and Gritzalis, Stefanos and Mylopoulos, John and Kalloniatis, Christos",
43
AbnerHqC/GaitSet
['gait recognition']
['GaitSet: Regarding Gait as a Set for Cross-View Gait Recognition', 'GaitSet: Cross-view Gait Recognition through Utilizing Gait as a Deep Set']
train.py model/network/basic_blocks.py model/utils/sampler.py model/network/__init__.py model/network/gaitset.py model/model.py model/utils/__init__.py work/OUMVLP_network/basic_blocks.py test.py model/utils/data_loader.py work/OUMVLP_network/gaitset.py model/initialization.py config.py model/network/triplet.py model/__init__.py model/utils/data_set.py pretreatment.py model/utils/evaluator.py boolean_string log2str cut_pickle log_print cut_img de_diag boolean_string boolean_string initialize_model initialize_data initialization Model SetBlock BasicConv2d SetNet TripletLoss load_data DataSet cuda_dist evaluation TripletSampler SetBlock BasicConv2d HPM SetNet log2str print int sum concatenate cumsum size warn resize zeros argmax log_print range join sort warn listdir log_print imsave cut_img mean sum diag print load_all_data load_data deepcopy join int print map Model prod print chdir initialize_data load join list sorted format DataSet shuffle set save append listdir makedirs relu transpose matmul sqrt unsqueeze cuda sum list sort set cuda_dist round zeros isin numpy array enumerate len
# GaitSet [![LICENSE](https://img.shields.io/badge/license-NPL%20(The%20996%20Prohibited%20License)-blue.svg)](https://github.com/996icu/996.ICU/blob/master/LICENSE) [![996.icu](https://img.shields.io/badge/link-996.icu-red.svg)](https://996.icu) GaitSet is a **flexible**, **effective** and **fast** network for cross-view gait recognition. The [paper](https://ieeexplore.ieee.org/document/9351667) has been published in IEEE TPAMI. #### Flexible The input of GaitSet is a set of silhouettes. - There are **NOT ANY constraints** on an input, which means it can contain **any number** of **non-consecutive** silhouettes filmed under **different viewpoints** with **different walking conditions**. - As the input is a set, the **permutation** of the elements in the input
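The set property described above (any number of frames, in any order) is typically obtained by pooling frame-level features with a permutation-invariant operation; the sketch below uses max-over-frames pooling as a simplified, assumed stand-in for GaitSet's actual set pooling.

```python
import torch
import torch.nn as nn

# tiny frame-level encoder (stand-in for the real backbone)
frame_cnn = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())

silhouettes = torch.rand(4, 7, 1, 64, 44)   # (batch, n_frames, C, H, W); frame count can vary
b, t, c, h, w = silhouettes.shape
frame_feats = frame_cnn(silhouettes.view(b * t, c, h, w)).view(b, t, -1)
set_feats, _ = frame_feats.max(dim=1)       # max over frames: invariant to order and count
print(set_feats.shape)                      # (batch, feature_dim)
```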
44
ActiveVisionLab/ANCNet
['semantic correspondence']
['Correspondence Networks with Adaptive Neighbourhood Consensus']
lib/model.py lib/tools.py lib/pf_dataset.py lib/normalization.py lib/point_tnf.py lib/visualisation.py lib/pf_pascal_dataset.py lib/constant.py lib/conv4d.py lib/im_pair_dataset.py lib/torch_util.py lib/interpolator.py lib/plot.py lib/transformation.py lib/dataloader.py lib/eval_util.py eval_pf_pascal.py main Conv4d conv4d DataLoaderIter default_collate DataLoader _worker_loop ExceptionWrapper pin_memory_batch _pin_memory_loop pck_metric pck ImagePairDataset Interpolator LocationInterpolator InverInterpolator CreateCon4D FeatureExtraction SpatialContextNet Pairwise featureL2Norm NeighConsensus NonIsotropicNCB MutualMatching ImMatchNet maxpool4d FeatureCorrelation NonIsotropicNCA NonIsotropicNCC normalize_image NormalizeImageDict PFPascalDataset ImagePairDataset ImagePairDatasetKeyPoint plot_image save_plot unnormalize_axis PointsToUnitCoords bilinearInterpPointTnf corr_to_matches nearestNeighPointTnf normalize_axis PointsToPixelCoords calc_accuracy seed_torch ExtractFeatureMap validate calc_distance calc_pck NormalisationPerRow visualise_feature distance save_checkpoint calc_gt_indices graph_matching calc_pck0 corr_to_matches pure_pck calc_mto str_to_bool save_checkpoint BatchTensorToVars collate_custom Softmax1D expand_dim AffineTnf AffineGridGen displaySingle2 displaySingle displayPair validate model pck_metric DataLoader ArgumentParser PFPascalDataset str exit OrderedDict visualise_feature corr_to_matches parse_args a flatnonzero size vis eval batch_tnf is_available checkpoint enumerate load print num_examples add_argument ImMatchNet ImagePairDataset tqdm BatchTensorToVars zeros len size contiguous conv3d half shape get_device HalfTensor zeros range cuda is_cuda cat seed get set_num_threads put collate_fn get pin_memory_batch isinstance put is_tensor list sum isinstance Sequence new zip _new_shared Mapping Mapping is_tensor isinstance Sequence ne float size mean pow expand_as zeros le sum range data list PointsToUnitCoords size pck bilinearInterpPointTnf numpy range PointsToPixelCoords expand_as list Sequential ReLU append Conv4d range len size max view tuple div fmod unsqueeze append max range cat isinstance Variable size add expand div unsqueeze cuda is_cuda show uint8 view Variable astype add imshow cuda is_cuda set_major_locator NullLocator set_axis_off margins subplots_adjust savefig list view size softmax linspace expand_as meshgrid max range view min pow sqrt unsqueeze cat int view isinstance Variable toidx size multrows abs sqrt unsqueeze long topoint sum cuda is_cuda clone normalize_axis expand_as unnormalize_axis clone expand_as seed str manual_seed_all manual_seed join str basename dirname save makedirs float long max div float max float norm div bmm pure_pck div float sum norm bmm distance bmm max div cpu tensor long sum range _eps cat sum expand_as ExtractFeatureMap model calc_gt_indices permute calc_pck0 range format keycorr_to_matches displayPair batch_preprocessing_fn enumerate int join min makedirs tqdm extract_featuremap cpu zeros train len ExtractFeatureMap subplots model axis clf calc_gt_indices normalise_image fromarray len cm_hot shape imshow scatter savefig permute range format _colors keycorr_to_matches get_cmap batch_preprocessing_fn enumerate int uint8 join min tqdm extract_featuremap cpu train numpy makedirs is_tensor Mapping isinstance exp unsqueeze copyfile size list axis imshow scatter savefig clf get_cmap range plot axis imshow scatter savefig clf get_cmap range zeros displaySingle2 displaySingle zip
ActiveVisionLab/ANCNet
45
AdamByerly/micro-pcb-analysis
['data augmentation']
['On the Importance of Capturing a Sufficient Diversity of Perspective for the Classification of micro-PCBs']
train.py constructs/loggable.py etc/create_tf_records.py constructs/learning_rate.py input/simple_input_pipeline.py models/model_base.py models/nn_ops.py models/variable.py constructs/metrics.py input/micro_pcb_input_pipeline.py input/micro_pcb_input_pipeline2.py constructs/optimizer.py models/SimpleMonolithic.py input/input_pipeline_base.py constructs/output.py input/micro_pcb_input_pipeline_base.py constructs/loops.py constructs/ema_weights.py go EMAWeights ExponentialDecay CappedExponentialDecay Loggable Loops Metrics RMSProp Adam Output _convert_to_example _process_image_files process_dataset _int64_feature _find_image_files _bytes_feature _float_feature _process_image_files_batch _process_image InputPipelineBase MicroPCB MicroPCBUseAllAndAugmentAll MicroPCBBase SimpleInputPipeline ModelBase hvc batch_norm caps_from_conv_xyz fc conv_3x3 caps_from_conv_zxy max_pool flatten SimpleMonolithic get_conv_3x3_variable get_conv_variable Variable get_hvc_variable get_variable MicroPCBUseAllAndAugmentAll Output MicroPCB MirroredStrategy binary_type Example int uint8 numpy cast resize round decode_jpeg int join basename arange _process_image ord _convert_to_example TFRecordWriter print astype write SerializeToString close range flush len int Thread join print astype Coordinator start append range flush len seed list format glob print shuffle range len ceil _find_image_files _process_image_files len
# Documentation is forthcoming...
46
AdamCobb/GP-LAPLACE
['gaussian processes']
['Identifying Sources and Sinks in the Presence of Multiple Agents with Gaussian Process Vector Calculus']
code/GP_deriv.py code/utils.py code/GP_vec.py notebooks/nb_utils.py code/MAS_exp.py code/utils_synthetic.py Pred_var_deriv Pred_mean Loglik optNLLfun SE_covariance_derivative_x1x2 DerivGP var_bounds SE_covariance_derivative_x1 SE_covariance_derivative_xx Pred_var DerivGP_2nd SE_covariance_derivative_x2 SE_covariance Pred_var_deriv_xx SE_covariance_mult_dim Pred_vector_field Multiple_traj SE_covariance_mult_dim_derivative_x1 Agent ContinuousEnvironment KL_div subplot_format plot_laplacian power exp outer power exp outer power exp outer power exp outer exp power outer zeros range dot cho_factor eye cho_solve dot T cho_factor cho_solve dot cho_factor eye cho_solve dot cho_factor eye cho_solve transpose len pi dot cho_solve log cho_factor slogdet print identity SE_covariance len sqrt vstack diagonal zeros exp range zeros exp range reshape log ylim tick_params xlim subplot_format set_yticklabels linspace tick_params show subplot argmin colorbar scatter contourf range tight_layout mu1 add_axes set_tick_params enumerate get_yticklabels text rc min mu2 figure len
# GP-LAPLACE A Gaussian process based technique for locating attractors from trajectories in time-varying fields. This repository contains code used in the experiments of our paper: "Identifying Sources and Sinks in the Presence of Multiple Agents with Gaussian Process Vector Calculus" by Adam D. Cobb, Richard Everett, Andrew Markham, and Stephen J. Roberts. ## Abstract In systems of multiple agents, identifying the cause of observed agent dynamics is challenging. Often, these agents operate in diverse, non-stationary environments, where models rely on handcrafted environment-specific features to infer influential regions in the system’s surroundings. To overcome the limitations of these inflexible models, we present *GP-LAPLACE*, a technique for locating sources and sinks from trajectories in time-varying fields. Using Gaussian processes, we jointly infer a spatio-temporal vector field, as well as canonical vector calculus operations on that field. Notably, we do this from only agent trajectories without requiring knowledge of the environment, and also obtain a metric for denoting the significance of inferred causal features in the environment by exploiting our probabilistic method. To evaluate our approach, we apply it to both synthetic and real-world GPS data, demonstrating the applicability of our technique in the presence of multiple agents, as well as its superiority over existing methods. ## Example GP-LAPLACE applied to pelagic seabirds flying over the Mediterranean sea: ![Alt Text](https://github.com/AdamCobb/GP-LAPLACE/blob/master/skl_velocity_Z5.gif) ## Reproducing Results We have created a number of Jupyter notebooks to reproduce our results:
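Sources and sinks correspond to regions of positive and negative divergence of the inferred vector field; the short numerical sketch below illustrates that vector-calculus step on a toy field, and is not the GP-based estimator used in the paper.

```python
import numpy as np

# A toy 2-D vector field on a grid: F(x, y) = (x, y) has divergence 2 everywhere (a source).
xs = np.linspace(-1, 1, 50)
ys = np.linspace(-1, 1, 50)
X, Y = np.meshgrid(xs, ys, indexing="ij")
Fx, Fy = X, Y

dFx_dx = np.gradient(Fx, xs, axis=0)
dFy_dy = np.gradient(Fy, ys, axis=1)
divergence = dFx_dx + dFy_dy        # > 0: source, < 0: sink
print(divergence.mean())            # ~2.0 for this toy field
```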
47
Adamdad/Filter-Gradient-Decent
['stochastic optimization']
['Stochastic Gradient Variance Reduction by Solving a Filtering Problem']
optmizor/sgd_WT.py model/MLP.py model/CIFAR10/Resnet.py model/MNIST/Resnet.py optmizor/sgd.py utils/ploter.py optmizor/sgd_MA.py MNIST_exp.py plot.py numberical_plot.py CIFAR10_exp.py optmizor/sgd_ARMA.py NUMBERICAL_exp.py optmizor/Kalman_opt.py optmizor/__init__.py model/NonConvex.py main train accuracy test main train accuracy test main z_func plot_variance plot_value plot_loss plot_figure print_acc MLP Function1 ResNet ResNet18 ResNet34 Bottleneck ResNet101 test ResNet50 BasicBlock ResNet152 conv1x1 resnext50_32x4d wide_resnet50_2 ResNet resnet50 resnext101_32x8d Bottleneck resnet152 wide_resnet101_2 conv3x3 _resnet resnet34 resnet18 BasicBlock resnet101 KGD SGD ARMAGD MASGD WTGD RecordWriter AverageMeter update model backward AverageMeter zero_grad size write tqdm item LOSS_FUNC step enumerate format print AverageMeter eval avg checkpoint_path save_model batch_size SGD DataLoader save device memory seed RecordWriter StepLR exit Adam epochs append to MASGD range state_dict update val checkpoint_name format KGD test lr avg manual_seed CIFAR10 is_available optimizer join print write parameters model_name ARMAGD WTGD train step makedirs momentum MNIST model zero_grad backward show max list plot print record tight_layout add_patch Rectangle figure legend gca array range enumerate len show subplot list plot yticks record tight_layout ylim figure legend array range enumerate len show subplot list plot record tight_layout ylim figure legend array range enumerate len subplot list plot yticks record name tight_layout ylim savefig figure legend sample array range enumerate len print max record enumerate randn ResNet18 size net ResNet
# Filter-Gradient-Decent Update: This project also includes the code for the paper **Kalman Optimizer for Consistent Gradient Descent**, *Xingyi Yang (ICASSP 2021)* [paper](https://ieeexplore.ieee.org/document/9414588) Course project for ECE 251C, UCSD. Code for the paper **Stochastic Gradient Variance Reduction by Solving a Filtering Problem**. In this paper, we propose Filter Gradient Decent (FGD), an efficient stochastic optimization algorithm that makes consistent estimates of the local gradient by solving an adaptive filtering problem with different filter designs.
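As an illustration of filtering the stochastic gradient before the update, here is a minimal exponential-moving-average variant; it is one simple choice of filter and does not reproduce the ARMA, wavelet, or Kalman filters implemented in the repository's `optmizor` modules.

```python
import torch

def filtered_sgd_step(params, filtered_grads, lr=0.01, beta=0.9):
    """One SGD step that descends along an exponential moving average of the
    stochastic gradients, i.e. a simple low-pass filter on the gradient noise."""
    for p, g_hat in zip(params, filtered_grads):
        if p.grad is None:
            continue
        g_hat.mul_(beta).add_(p.grad, alpha=1.0 - beta)   # filter update
        p.data.add_(g_hat, alpha=-lr)                     # parameter update

# usage sketch:
#   params = list(model.parameters())
#   filtered_grads = [torch.zeros_like(p) for p in params]
#   loss.backward(); filtered_sgd_step(params, filtered_grads)
```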
48
AdeDZY/DeepCT
['passage retrieval']
['Context-Aware Sentence/Passage Term Importance Estimation For First Stage Retrieval']
scripts/get_training_query_term_recall_1to1.py modeling.py optimization.py optimization_test.py tokenization_test.py run_deepct.py scripts/bert_term_sample_to_json.py scripts/bert_term_sample_to_json_car.py to_sentences.py modeling_test.py tokenization.py HDCT/passage2doc_bert_term_sample_to_json.py scripts/get_training_query_term_recall.py __init__.py extract_features.py read_examples InputFeatures input_fn_builder InputExample _truncate_seq_pair convert_examples_to_features main model_fn_builder embedding_lookup reshape_from_matrix dropout assert_rank reshape_to_matrix layer_norm_and_dropout get_shape_list gelu create_initializer BertConfig attention_layer get_activation layer_norm embedding_postprocessor transformer_model create_attention_mask_from_input_mask get_assignment_map_from_checkpoint BertModel BertModelTest create_optimizer AdamWeightDecayOptimizer OptimizationTest QueryProcessor MarcoQueryProcessor CarJsonDocProcessor InputExample gen_target_token_weights model_fn_builder file_based_convert_examples_to_features MarcoTsvDocProcessor convert_examples_to_features MarcoDocProcessor PaddingInputExample DataProcessor create_model IdContentsJsonDocProcessor InputFeatures input_fn_builder main convert_single_example file_based_input_fn_builder CarDocProcessor validate_case_matches_checkpoint convert_by_vocab FullTokenizer BasicTokenizer convert_ids_to_tokens WordpieceTokenizer printable_text convert_tokens_to_ids load_vocab whitespace_tokenize convert_to_unicode _is_whitespace _is_control _is_punctuation TokenizationTest subword_weight_to_word_weight flat_max flat_sum json_to_trec flat_avg position_decay_sum position_decay_avg subword_weight_to_word_weight tsv_to_weighted_doc subword_weight_to_word_weight json_to_trec text_clean text_clean input_type_ids input_mask append unique_id input_ids join text_b InputFeatures convert_tokens_to_ids _truncate_seq_pair tokenize info append unique_id text_a enumerate len pop len FullTokenizer read_examples input_fn_builder bert_config_file TPUEstimator set_verbosity PER_HOST_V2 convert_examples_to_features input_file from_json_file model_fn_builder INFO RunConfig sqrt erf lower name group OrderedDict match list_variables layer_norm dropout one_hot reshape get_shape_list matmul expand_dims get_variable one_hot reshape get_shape_list layer_norm_and_dropout matmul assert_less_equal get_variable ones reshape get_shape_list float32 cast dense dropout multiply get_shape_list reshape transpose float32 matmul transpose_for_scores expand_dims sqrt cast softmax float reshape_to_matrix int get_shape_list append reshape_from_matrix range reshape_to_matrix as_list assert_rank name shape append enumerate reshape ndims get_shape_list name integer_types ndims isinstance trainable_variables list constant get_or_create_global_step gradients clip_by_global_norm group float32 apply_gradients cast int32 zip polynomial_decay CrossShardOptimizer AdamWeightDecayOptimizer get startswith join isinstance text convert_tokens_to_ids len InputFeatures term_recall_dict gen_target_token_weights guid info append tokenize enumerate segment_ids target_mask create_int_feature TFRecordWriter write SerializeToString close OrderedDict Example input_mask info create_float_feature input_ids target_weights enumerate convert_single_example isinstance split value info reshape concat get_sequence_output shape get_all_encoder_layers get_variable BertModel segment_ids target_mask target_weights convert_single_example get_train_examples TPUClusterResolver init_checkpoint output_dir do_train 
do_predict file_based_convert_examples_to_features validate_case_matches_checkpoint tpu_name data_dir max_seq_length len do_lower_case append PaddingInputExample use_tpu predict predict_batch_size lower MakeDirs num_train_epochs info int warmup_proportion join file_based_input_fn_builder get_test_examples train train_batch_size match group isinstance PY3 PY2 isinstance PY3 PY2 OrderedDict append strip split category category startswith startswith category ord get int strip sqrt startswith zip append float round max split append float sum append float sum float sum sum max word_tokenize sub replace
# DeepCT and HDCT: Context-Aware Term Importance Estimation For First Stage Retrieval This repository contains code for two of our papers: - arXiv paper "Context-Aware Sentence/Passage Term Importance Estimation For First Stage Retrieval" [arXiv](https://arxiv.org/abs/1910.10687), 2019 - The WebConf2020 paper "Context-Aware Document Term Weighting for Ad-Hoc Search" [pdf](http://www.cs.cmu.edu/~zhuyund/papers/TheWebConf_2020_Dai.pdf), 2020 *Feb 19, 2019*: Check out our new WebConf2020 paper ["Context-Aware Document Term Weighting for Ad-Hoc Search" ](http://www.cs.cmu.edu/~zhuyund/papers/TheWebConf_2020_Dai.pdf)! It presents HDCT, which extends DeepCT to support long documents and weakly-supervised training! *Feb 19, 2019*: Data and instructions for HDCT will come soon. *May 21, 2020*: Rankings generated by HDCT for MS-MARCO-Doc: [here](http://boston.lti.cs.cmu.edu/appendices/TheWebConf2020-Zhuyun-Dai/rankings/) Term frequency is a common method for identifying the importance of a term in a query or document. But it is a weak signal. This work proposes a Deep Contextualized Term Weighting framework that learns to map BERT's contextualized text representations to context-aware term weights for sentences and passages. - DeepCT is a framework for sentence/passage term weighting. When applied to **passages**, DeepCT-Index produces term weights that can be stored in an ordinary inverted index for passage retrieval. When applied to **query** text, DeepCT-Query generates a weighted bag-of-words query that emphasizes essential terms in the query. - HDCT extends DeepCT to support long documents. It indexes **documents** into an ordinary inverted index for retrieval.
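At its core, the framework maps each contextualized token vector to a scalar importance with a small regression head; the sketch below illustrates only that idea with hypothetical layer names, whereas the repository's TensorFlow code also handles subword-to-word aggregation and converting weights into index-ready term frequencies.

```python
import torch
import torch.nn as nn

hidden = 768
token_reprs = torch.randn(2, 128, hidden)    # (batch, seq_len, hidden), e.g. BERT outputs

weight_head = nn.Linear(hidden, 1)           # per-token regression to a scalar importance
term_weights = weight_head(token_reprs).squeeze(-1)

# clip to [0, 1] and rescale to small integers so they can stand in for tf in an inverted index
indexable_tf = (term_weights.clamp(0, 1) * 100).round().int()
print(indexable_tf.shape)                    # (batch, seq_len)
```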
49
Adelaide-AI-Group/MCVL
['visual localization', 'visual place recognition']
['Visual Localization Under Appearance Change: Filtering Approaches']
libs/vlfeat-0.9.21/docsrc/doxytag.py libs/vlfeat-0.9.21/docsrc/mdoc.py libs/vlfeat-0.9.21/docsrc/wikidoc.py libs/vlfeat-0.9.21/docsrc/webdoc.py libs/vlfeat-0.9.21/docsrc/formatter.py Doxytag Terminal Lexer B PL L lex Formatter DL BL E extract towiki depth_first breadCrumb MFile Node runcmd xscan wikidoc usage bullet indent inner_content PL group match DL BL len pid Popen waitpid children group lstrip match startswith append open join addMFile addChildNode print sort MFile Node match listdir __next__ prev runcmd join wikidoc print print insert print readlines close len writelines append range exists open
# MCVL Visual Localization Under Appearance Change: A Filtering Approach(DICTA 2019 Best paper) https://arxiv.org/abs/1811.08063 About ============ MATLAB code of our DICTA 2019 paper: "Visual Localization Under Appearance Change: A Filtering Approach" - DICTA 2019 **(Best paper award)**. [Anh-Dzung Doan](https://sites.google.com/view/dzungdoan/home), [Yasir Latif](http://ylatif.github.io/), [Thanh-Toan Do](https://sites.google.com/view/thanhtoando/home), [Yu Liu](https://sites.google.com/site/yuliuunilau/home), Shin-Fang Ch’ng, [Tat-Jun Chin](https://cs.adelaide.edu.au/~tjchin/doku.php), and [Ian Reid](https://cs.adelaide.edu.au/~ianr/). [[pdf]](https://arxiv.org/abs/1811.08063) If you use/adapt our code, please kindly cite our paper. ![Result on Oxford RobotCar](result.png) Dependencies ============
50
Adelaide-AI-Group/ST-CLSTM
['depth estimation', 'monocular depth estimation']
['Exploiting temporal consistency for real-time video depth estimation']
CLSTM_Depth_Estimation-master/prediction/utils_for_2DCNN_prediction/functions_for_prediction.py CLSTM_Depth_Estimation-master/models_CLTSM/R_NLCRNN_modules.py CLSTM_Depth_Estimation-master/models_2D/backbone_dict.py CLSTM_Depth_Estimation-master/models_discriminator/resnet_models.py CLSTM_Depth_Estimation-master/demo/data/depth_2_heat_map.py CLSTM_Depth_Estimation-master/data/create_list_nyu_v2_2D.py CLSTM_Depth_Estimation-master/models_discriminator/short_resnet_models.py CLSTM_Depth_Estimation-master/data/test.py CLSTM_Depth_Estimation-master/models_discriminator/customize_models.py CLSTM_Depth_Estimation-master/prediction/utils_for_CLSTM_prediction/functions_for_prediction.py CLSTM_Depth_Estimation-master/models_CLTSM/net.py raw_nyu_v2_build/utils.py CLSTM_Depth_Estimation-master/models_discriminator/C2D_models.py CLSTM_Depth_Estimation-master/prediction/prediction_2D_main.py CLSTM_Depth_Estimation-master/models_CLTSM/R_NLCLSTM_modules.py CLSTM_Depth_Estimation-master/prediction/utils_for_CLSTM_prediction/metrics.py CLSTM_Depth_Estimation-master/models_discriminator/discriminator_dict.py CLSTM_Depth_Estimation-master/models_2D/modules.py raw_nyu_v2_build/test_scenes_extraction.py CLSTM_Depth_Estimation-master/models_CLTSM/R_CLSTM_modules_2.py CLSTM_Depth_Estimation-master/models_2D/senet.py raw_nyu_v2_build/main_frame.py CLSTM_Depth_Estimation-master/prediction/utils_for_2DCNN_prediction/metrics.py CLSTM_Depth_Estimation-master/prediction/utils_for_CLSTM_prediction/loaddata.py CLSTM_Depth_Estimation-master/models_CLTSM/resnet.py CLSTM_Depth_Estimation-master/prediction/prediction_CLSTM_main.py CLSTM_Depth_Estimation-master/prediction/utils_for_2DCNN_prediction/loaddata.py CLSTM_Depth_Estimation-master/models_2D/net.py raw_nyu_v2_build/main_clips.py raw_nyu_v2_build/main_train_scenes_extraction.py CLSTM_Depth_Estimation-master/models_CLTSM/modules.py CLSTM_Depth_Estimation-master/models_CLTSM/densenet.py CLSTM_Depth_Estimation-master/models_CLTSM/non_local.py CLSTM_Depth_Estimation-master/data/create_list_nyu_v2_3D.py CLSTM_Depth_Estimation-master/models_2D/densenet.py CLSTM_Depth_Estimation-master/models_2D/resnet.py CLSTM_Depth_Estimation-master/models_CLTSM/backbone_dict.py raw_nyu_v2_build/test.py CLSTM_Depth_Estimation-master/models_CLTSM/refinenet_dict.py make_if_not_exist create_dict make_if_not_exist video_split create_dict tensor_2_img densenet161 DenseNet _DenseLayer _DenseBlock _Transition MFF R E_densenet D E_senet _UpProjection E_resnet model ResNet resnet50 Bottleneck conv3x3 resnet34 resnet18 BasicBlock se_resnext50_32x4d senet154 SENet SEResNetBottleneck SEBottleneck SEResNeXtBottleneck initialize_pretrained_model Bottleneck se_resnet152 se_resnet50 se_resnext101_32x4d SEModule se_resnet101 densenet161 DenseNet _DenseLayer _DenseBlock _Transition MFF E_densenet D E_senet _UpProjection E_resnet cubes_2_maps model Non_local_d Non_local_h ResNet resnet50 Bottleneck resnet152 conv3x3 resnet34 resnet18 BasicBlock resnet101 R_CLSTM_2 R_CLSTM_4 R_CLSTM_1 maps_2_cubes R_CLSTM_6 R maps_2_maps R_CLSTM_3 R_3 R_CLSTM_8 R_CLSTM_10 R_CLSTM_5 R_10 R_cell R_2 R_CLSTM_7 R_d R_CLSTM_9 ConvLSTMCell R_CLSTM_1 ConvLSTM ConvRNNCell ConvRNN R_NLCRNN_1 C_C2D_1 C_C3D_2 C_C3D_1 conv3x3x3 ResNet Bottleneck resnet18_b resnet18 BasicBlock conv3x3x3 ResNet_1 Bottleneck BasicBlock short_resnet9 inference make_if_not_exist delete_if_exist _is_numpy_image Crop load_annotation_data ReScale getTestingData depthDataset Compose ToTensor Normalize _is_pil_image deta REL RMS log10 metric_list make_if_not_exist 
maps_2_cubes cubes_2_maps inference delete_if_exist _is_numpy_image Crop load_annotation_data ReScale video_loader getTestingData depthDataset Compose ToTensor Normalize _is_pil_image pil_loader deta REL RMS log10 metric_list make_if_not_exits pil_loader json_loader get_indices makedirs join make_if_not_exist format glob sort zip append listdir enumerate append list range int test_loc video_split read fl overlap fps len get_cmap cmap delete load_url load_state_dict DenseNet load_url ResNet load_state_dict load_url ResNet load_state_dict load_url ResNet load_state_dict load_url load_state_dict initialize_pretrained_model SENet initialize_pretrained_model SENet initialize_pretrained_model SENet initialize_pretrained_model SENet initialize_pretrained_model SENet initialize_pretrained_model SENet shape permute load_url ResNet load_state_dict load_url ResNet load_state_dict shape view shape view ResNet ResNet_1 remove exists eval reset DataLoader depthDataset append join exists pil_loader makedirs ceil list array range
# ST-CLSTM Exploiting temporal consistency for real-time video depth estimation (ICCV 2019) https://arxiv.org/abs/1908.03706 By Haokui Zhang, [Chunhua Shen](https://cs.adelaide.edu.au/~chhshen/), Ying Li, Yuanzhouhan Cao, [Yu Liu](https://sites.google.com/site/yuliuunilau/home), Youliang Yan Some video results can be found at https://youtu.be/B705k8nunLU Requirements: Pytorch>=0.4.0 Python=3.6 matlab (for converting raw data, dump and pgm files to jpg and png files) Data preprocessing: Data preprocessing consists of three steps, including
51
Adirlou/OptML_Project
['stochastic optimization']
['Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication']
decentralized_SGD_logistic.py decentralized_SGD_classifier.py print_util.py quantizer.py decentralized_SGD_least_squares.py communicator.py helpers.py Communicator DecentralizedSGDClassifier DecentralizedSGDLogistic plot_losses standardize plot_losses_with_std run_logistic load_data load_csv_data run_logistic_n_times clean log_acc_loss_header log_acc_loss Color Quantizer yscale show plot xlabel squeeze ylabel ylim title savefig figure legend range yscale show plot xlabel ylabel mean ylim title savefig figure legend fill_between std range len genfromtxt ones len mean argmax range unique mean std clean standardize load_csv_data format print score DecentralizedSGDLogistic fit print run_logistic range append print ljust END print ljust END
# Optimization for Machine Learning: Mini-Project ## Convergence of Decentralized SGD under Various Topologies *by Paul Griesser, Adrien Vandenbroucque, Robin Zbinden* In this project, we propose Python code to compute a decentralized version of the SGD algorithm presented initially in the Github repository [here](https://github.com/epfml/ChocoSGD). The contribution is twofold: 1) The original code from https://github.com/epfml/ChocoSGD has been partially rewritten in order to allow simulations with more machines and to offer more already-implemented topologies. The user can also provide their own custom topology. In some cases, the execution speed is improved by computing the gradient in matrix form for all machines. 2) We created various experiments in order to try to answer the following questions: - We reproduce well-known results that answer the question: How does the network topology and number of nodes affect the convergence rate of decentralized SGD? What happens when one tries to optimize using a real-world network? - In order for decentralized SGD to converge nicely in a few iterations, the nodes in the network must be well connected. If one takes a graph such as the barbell graph or the path graph, does one notice an "information bottleneck"? That is, if there are very sparse cuts, does the information flow well in the network and does it affect the convergence rate of decentralized SGD? - One of the assumptions of the paper about Choco-SGD (see https://arxiv.org/abs/1902.00340) is that the transition matrix is symmetric, which thus guarantees that the limiting distribution of the induced Markov chain is uniform among the nodes. What happens when one allows for more general matrices? Are convergence results still obtained?
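The spectral gap of the gossip (mixing) matrix is the quantity that links the topology questions above to the convergence rate of decentralized SGD. The following sketch is not part of the repository; the `ring_mixing_matrix` helper is hypothetical and only illustrates how a symmetric, doubly stochastic Metropolis-Hastings matrix is built for a ring topology and how its spectral gap can be inspected.

```python
# Illustrative sketch (not from this repository): the gossip matrix of the
# topology determines how quickly decentralized SGD mixes information.
import numpy as np

def ring_mixing_matrix(n):
    """Symmetric, doubly stochastic Metropolis-Hastings matrix for an n-node ring."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):
            W[i, j] = 1.0 / 3.0               # 1 / (max degree + 1); degree is 2 on a ring
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))  # self-weights so each row sums to 1
    return W

W = ring_mixing_matrix(16)
eigvals = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
print("spectral gap of a 16-node ring:", 1.0 - eigvals[1])  # larger gap -> faster consensus
```

A sparsely connected graph such as a path or barbell graph has a much smaller spectral gap than a well-connected one, which is one way to quantify the "information bottleneck" mentioned above.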
52
Aditi138/LASE-Agreement
['cross lingual transfer']
['Automatic Extraction of Rules Governing Morphological Agreement']
annotation_site/serve.py create_trees_website.py dataloader.py create_triples.py utils.py computeAverage.py baseline.py automated_metric addValuesEval printTreeWithExamplesPDF getLeafInfo printPOSInfomation train getTree FrequretrivePossibleTuples getUnique retrivePossibleTuples DataLoader wasserstein_distance colorRetrival assert_path retrivePossibleTuples print_examples isAgreement example_web_print pruneTree printTreeForBinaryFeatures getTestData parseLeafInformation convertProb plot_histogram getAggreeingExamples distributional_metric constructTree automated_metric printMultipleLines convertStringToset debug collateTree printDataStat find_agreement get_vocab_from_set el_tense_server el_numer_server el_mood_server el_person_server server el_gender_server el_case_server el_server items list strip retrivePossibleTuples getTestData load_from_file printPOSInfomation export_graphviz hard best_estimator_ lower mkdir zip append StringIO split int join str graph_from_dot_data append split items sorted defaultdict list join add train_random_samples getAggreeingExamples append keys enumerate graph_from_dot_data colorRetrival isAgreement pruneTree str list prune append format replace printMultipleLines hard set lower collateTree enumerate join int split find items list keys getHistogram pos_dictionary pos_id2tag round printTreeForBinaryFeatures count_nonzero sorted printTreeWithExamplesPDF relation_id2tag used_head_pos PredefinedSplit apply append range constructTree relation_dictionary concatenate set export_text enumerate GridSearchCV fit used_relations best_estimator_ used_child_pos DecisionTreeClassifier len set add lower append head lemma defaultdict id set feats find_agreement deprel enumerate upos id upos find_agreement append deprel enumerate feats split add set print int head conll pop join list rstrip format set add split append count items list sqrt chisquare append int count split int join conll write head split items parseLeafInformation defaultdict sorted list id assert_path feats lower find_agreement append enumerate upos print sum format len append join list len print format isAgreement int list items defaultdict remove str format colorRetrival set add lower zip append union isAgreement enumerate split int remove rstrip defaultdict str add set append enumerate split len items list close ylabel bar title savefig legend zip append xticks items list parseLeafInformation strip id assert_path feats lower find_agreement round isAgreement load_from_file upos join parseLeafInformation defaultdict list set assert_path add defaultdict id lower upos find_agreement feats isAgreement mean abs append sum array
# LASE: Automated Extraction of Agreement Rules ## Requirements A python version >=3.5 is required. Additional requirements are present in the **requirements.txt** 1. In the **decision_tree_files.txt**, enter the path to the treebanks for which you want to extract the rules. This code can work with UD/SUD dependency (v2.5) treebanks. To download all the SUD treebanks: https://surfacesyntacticud.github.io/data/ 2. Run the following command to create the rules: ``` python create_trees_website.py --folder_name website | tee output.log ``` Running this will create the html files with the decision trees, examples and other relevant information. Simply open the `website/index.html` in any browser to navigate the trees.
53
AdityaGolatkar/Sparse-Kernel-PCA-for-outlier-detection
['outlier detection']
['Sparse Kernel PCA for Outlier Detection']
skpca_codes/kspca_fashion_g2.py skpca_codes/kspca_cancer_g2.py skpca_codes/kspca_fashion_g.py skpca_codes/kspca_fruits2_var.py skpca_codes/kspca_digit_g.py skpca_codes/kspca_digit_g2.py skpca_codes/kspca_cancer_g.py skpca_codes/kspca_fruits.py skpca_codes/kspca_fruits2.py skpca_codes/kspca_satimage2_g.py skpca_codes/kspca_spiral_g2.py skpca_codes/kspca_digit_var.py skpca_codes/kspca_spiral_g.py skpca_codes/kspca_fruits_var.py gram_generate recon_error kernel gram_eigen_vectors naive_spca main gram_generate recon_error kernel gram_eigen_vectors naive_spca main gram_generate recon_error kernel gram_eigen_vectors naive_spca main gram_generate recon_error kernel gram_eigen_vectors naive_spca main gram_generate recon_error kernel gram_eigen_vectors naive_spca main gram_generate recon_error kernel gram_eigen_vectors naive_spca main gram_generate recon_error kernel gram_eigen_vectors naive_spca main gram_generate recon_error kernel gram_eigen_vectors naive_spca main gram_generate recon_error kernel gram_eigen_vectors naive_spca main gram_generate recon_error kernel gram_eigen_vectors naive_spca main gram_generate recon_error kernel gram_eigen_vectors naive_spca main gram_generate recon_error kernel gram_eigen_vectors naive_spca main gram_generate recon_error kernel gram_eigen_vectors naive_spca main gram_generate recon_error kernel gram_eigen_vectors naive_spca main norm exp norm print kernel zeros sum range eigh sqrt range svd T sum coef_ print dot sqrt ElasticNet append zeros range diag fit kernel dot zeros sum range load gram_generate print recon_error mean shape gram_eigen_vectors dot savemat naive_spca append sum std range int zeros abs norm choice save loadmat
# Sparse-Kernel-PCA-for-outlier-detection Codes and results for our work on SKPCA for outlier detection. Paper link : https://arxiv.org/abs/1809.02497.
54
AdivarekarBhumit/ID-Card-Segmentation
['face detection']
['MIDV-500: A Dataset for Identity Documents Analysis and Recognition on Mobile Devices in Video Stream']
test_model.py model/iou_loss.py model/train.py model/unet_model.py dataset/download_dataset.py dataset/stack_npy.py main read_image main IoU v_generator t_generator get_model load COLOR_BGR2GRAY fillPoly reshape shape resize zeros imread array cvtColor open str remove sorted replace print glob system rmtree save zip append expand_dims listdir array read_image load vstack range sum zeros range zeros range clear_session concatenate print Model summary Input compile
# ID-Card-Segmentation Segmentation of ID Cards using U-Net ### U-Net Architecture <img src="http://deeplearning.net/tutorial/_images/unet.jpg" width="500" height="400" alt="U-net"> ### Our Results ![Test_Image](https://github.com/AdivarekarBhumit/ID-Card-Segmentation/blob/master/images/test.jpg) ![Output_Image](https://github.com/AdivarekarBhumit/ID-Card-Segmentation/blob/master/images/output.jpg) ### Requirements - Tensorflow-GPU 1.12 - Keras 2.1
55
AhmedImtiazPrio/heartnet
['anomaly detection']
['Learning Front-end Filter-bank Parameters using Convolutional Neural Networks for Abnormal Heart Sound Detection']
codes/learnableFilterbanks.py codes/utils.py codes/legacyUtils.py codes/train.py codes/AudioDataGenerator.py codes/custom_layers.py codes/modules.py BalancedAudioDataGenerator AudioDataGenerator NumpyArrayIterator _NumpyArrayIterator random_brightness _Iterator DCT1D Conv1D_gammatone Conv1D_linearphase Conv1D_linearphaseType_legacy Conv1D_linearphaseType Conv1D_zerophase Conv1D_zerophase_linear DCT1D Conv1D_gammatone Conv1D_linearphase Conv1D_linearphaseType_legacy Conv1D_linearphaseType Conv1D_zerophase Conv1D_zerophase_linear load_data_LEGACY log_macc_LEGACY reshape_folds_LEGACY branch heartnetTop smooth LRdecayScheduler get_activations loadFIRparams plot_freq calc_metrics load_model eerPred parts2rec plot_metric display_activations smooth_win get_weights rec2parts plot_coeff reshape_folds cc2rec_labels model_confidence load_dataa log_metrics sessionLog idx_rec2parts grad_cam plot_log_metrics cc2rec log_fusion load_data predict_parts uniform join open_file chr print to_categorical append range reshape_folds_LEGACY len reshape transpose join int branch loadFIRparams Model Input join list zip DataFrame tail concat print to_csv dict shape isfile idxmax read_csv reshape hstack open_file print input range append len show int format print reshape transpose hstack shape sqrt imshow floor expand_dims enumerate len append list join format print dict idxmax read_csv print reshape transpose shape join open_file chr print reshape_folds to_categorical append range len join asarray print reshape_folds to_categorical loadmat join isdir model_from_json print summary isfile append int mean zip int transpose parts2rec append argmax predict abs roc_curve print sum format asarray print float64 Series astype dict unique ravel array roc_auc_score len join load_model ones load_weights zip zeros get_weights len argmax predict subplots run show load_model FIR_type name append get_weights plot concatenate tight_layout mean load_weights impulse_gammatone enumerate join set_style std len subplots pi abs max unwrap run freqz load_model FIR_type name angle log10 twinx append get_weights plot concatenate tight_layout load_weights impulse_gammatone enumerate join set_style len join asarray subplots plot set_ylim set_xlim smooth set_xlabel set_ylabel legend read_csv values enumerate join asarray subplots plot set_ylim set_xlim smooth set_xlabel read_csv values enumerate int list sum range ones sum eval convolve function print interp1d maximum mean get_layer linspace iterate f1 expand_dims range append ones append enumerate
# heartnet
56
Ahmedest61/CNN-Region-VLAD-VPR
['visual place recognition']
['A Holistic Visual Place Recognition Approach using Lightweight CNNs for Significant ViewPoint and Appearance Changes']
Region-VLAD.py produceResults.py groundTruth showAUCPR showResults load_obj getROIs getVLAD binaryProto2npy load_obj endswith grid str list ylabel precision_recall_curve ylim title savefig append close xlim listdir load_obj auc enumerate join items print xlabel fill_between step clear join list defaultdict save_obj sorted items sum endswith print OrderedDict sqrt append listdir load_obj enumerate subplots endswith str list set_title imshow savefig append imread close listdir load_obj enumerate join items print set_yticks dict set_xticks data read blobproto_to_array BlobProto ParseFromString array clear list sorted range from_iterable append label sum zeros regionprops len n_clusters abs reshape cluster_centers_ sign flatten shape sqrt dot labels_ zeros sum range predict
# A Holistic Visual Place Recognition Approach using Lightweight CNNs for Significant ViewPoint and Appearance Changes ![result_image](frontpage.jpg) - There are five benchmark datasets tested on the proposed methodology: 1) Berlin Halenseestrasse 2) Berlin Kudamm 3) Berlin A100 4) Garden Point 5) Synthesized Nordland If you use these datasets and code, please cite the following publication: ```
57
AidanRocke/vertex_prediction
['semantic segmentation']
['Annotating Object Instances with a Polygon-RNN']
vertex_pred/image_encoder.py vertex_pred/training_image_encoder.py vertex_pred/group_norm.py vertex_pred/test.py GroupNormalization image_encoder MyTest encoder_inference files_to_array training_batch reset_default_graph image_encoder zeros load append files_to_array arange choice
AidanRocke/vertex_prediction
58
Aiqz/bayes-by-hypernet
['normalising flows']
['Implicit Weight Uncertainty in Neural Networks']
run_map_exp.py run_bbb_cifar_resnet_exp.py run_bbb_exp_kernels.py run_bbb_exp.py run_dropout_cifar_resnet_exp.py layers.py utils.py run_mnf_cifar_resnet_exp.py run_mnf_exp.py run_ensemble_cifar_resnet_exp.py base_layers.py experiments.py run_dropout_exp.py networks.py run_bbh_cifar_resnet_exp.py run_bbh_exp.py experiments_cifar.py utils_cifar.py run_map_cifar_resnet_exp.py run_ensemble_exp.py MaskedNVPFlow PlanarFlow outer BBHLayer BBBLayer BBHDynLayer BBHDiscriminator Layer run_klapprox_experiment run_disc_experiment run_l2_experiment run_ensemble_experiment weight_summaries run_analytical_experiment analysis run_klapprox_experiment run_disc_experiment run_l2_experiment run_ensemble_experiment weight_summaries run_analytical_experiment analysis run_vanilla_experiment VanillaDenseLayer MNFDenseLayer BBHDenseLayer BBHNormDenseLayer BBHDynDenseLayer MNFConvLayer BBBConvLayer VanillaConvLayer BBHConvLayer BBHNormConvLayer BBHDynConvLayer BBBDenseLayer get_bbh_cifar_resnet get_mnf_cifar_resnet get_ensemble_mnist get_bbb_cifar_resnet get_ensemble_cifar_resnet get_bbb_mnist get_dropout_cifar_resnet get_dropout_mnist get_cifar_image get_bbh_mnist get_mnf_mnist get_vanilla_mnist get_vanilla_cifar_resnet get_pred_df build_result_dict rotate calc_entropy build_adv_examples calc_ent_auc get_probs get_pred_df build_result_dict rotate calc_entropy build_adv_examples calc_ent_auc get_probs reduce_max reduce_mean histogram scalar reduce_min moments subplots concat Saver save corr DataFrame heatmap xticks run yticks list build_result_dict savefig range format tight_layout mean swapaxes zip distplot join print set_yticks figure zeros std len concat TRAINABLE_VARIABLES set_random_seed weight_summaries Normal RMSPropOptimizer log seed transpose len get_collection reduce_min reduce_sum merge_all shape cast expand_dims range get format RandomState square sqrt sample float join add_check_numerics_ops minimize labels float32 AdamOptimizer reduce_mean histogram eye global_variables_initializer placeholder_with_default scalar makedirs concat TRAINABLE_VARIABLES set_random_seed weight_summaries Normal RMSPropOptimizer gather seed transpose len get_collection merge_all reduce_sum shape cast discriminator range random_shuffle BBHDiscriminator get format RandomState sample join add_check_numerics_ops minimize reshape labels float32 sigmoid AdamOptimizer reduce_mean histogram global_variables_initializer placeholder_with_default scalar makedirs concat TRAINABLE_VARIABLES set_random_seed weight_summaries add_n RMSPropOptimizer gather seed transpose len get_collection merge_all range random_shuffle get format RandomState join add_check_numerics_ops minimize labels AdamOptimizer histogram global_variables_initializer placeholder_with_default scalar makedirs TRAINABLE_VARIABLES set_random_seed get_regularization_loss RMSPropOptimizer seed len get_collection merge_all get RandomState join add_check_numerics_ops minimize labels AdamOptimizer global_variables_initializer placeholder_with_default scalar makedirs get join seed RandomState makedirs get_collection labels TRAINABLE_VARIABLES set_random_seed AdamOptimizer get_regularization_loss RMSPropOptimizer global_variables_initializer placeholder_with_default len UPDATE_OPS group UPDATE_OPS group UPDATE_OPS get join seed RandomState minimize makedirs get_collection TRAINABLE_VARIABLES group set_random_seed merge_all AdamOptimizer UPDATE_OPS RMSPropOptimizer global_variables_initializer placeholder_with_default scalar len group UPDATE_OPS sign flatten get_regularization_losses 
add_n argmax random_normal log c2 BBHDenseLayer fc2 get_collection placeholder cast BBHConvLayer append c1 range format relu sparse_softmax_cross_entropy_with_logits stack softmax float moments equal reshape float32 max_pool add_to_collection reduce_mean int32 eye fc1 histogram placeholder_with_default bool cond placeholder sign get_regularization_losses add_n argmax random_normal log BBHDenseLayer placeholder cast BBHConvLayer append prod range call_resnet format sparse_softmax_cross_entropy_with_logits stack softmax get_cifar_image float equal enumerate print reshape float32 add_to_collection reduce_mean int32 eye placeholder_with_default BatchNormalization sign flatten get_regularization_losses BBBDenseLayer argmax c2 fc2 placeholder cast c1 relu BBBConvLayer sparse_softmax_cross_entropy_with_logits softmax equal reshape float32 max_pool reduce_mean int32 fc1 placeholder_with_default sign get_regularization_losses BBBDenseLayer argmax placeholder cast append range call_resnet format BBBConvLayer sparse_softmax_cross_entropy_with_logits softmax get_cifar_image equal enumerate print float32 reduce_mean int32 placeholder_with_default BatchNormalization sign flatten get_regularization_losses argmax c2 MNFDenseLayer fc2 placeholder cast c1 relu sparse_softmax_cross_entropy_with_logits softmax equal MNFConvLayer reshape float32 max_pool reduce_mean int32 fc1 placeholder_with_default sign argmax MNFDenseLayer placeholder cast append range call_resnet format sparse_softmax_cross_entropy_with_logits softmax get_cifar_image equal enumerate print MNFConvLayer float32 reduce_mean int32 placeholder_with_default BatchNormalization dense argmax l2_regularizer reshape float32 placeholder max_pool flatten conv2d reduce_mean sparse_softmax_cross_entropy_with_logits cast int32 softmax sign placeholder_with_default equal sign argmax l2_regularizer placeholder conv2d batch_normalization cast range relu sparse_softmax_cross_entropy_with_logits softmax get_cifar_image equal dense float32 reduce_mean int32 placeholder_with_default len dense argmax l2_regularizer dropout reshape float32 placeholder max_pool flatten conv2d reduce_mean sparse_softmax_cross_entropy_with_logits cast int32 softmax sign placeholder_with_default equal sign argmax l2_regularizer placeholder conv2d batch_normalization cast range dropout relu sparse_softmax_cross_entropy_with_logits softmax get_cifar_image equal dense float32 reduce_mean int32 placeholder_with_default len reshape float32 placeholder int32 placeholder_with_default range float32 placeholder int32 get_cifar_image placeholder_with_default range reshape list concat range zip DataFrame get_probs len len stack zeros range run mean range run cumsum histogram diff calibration_curve get_pred_df ones DataFrame mean images calc_entropy build_adv_examples linspace calc_ent_auc get_probs len stack pad
Aiqz/bayes-by-hypernet
59
AirBernard/Scene-Text-Detection-with-SPCNET
['scene text detection', 'instance segmentation', 'semantic segmentation']
['Scene Text Detection with Supervised Pyramid Context Network']
train.py data/__init__.py demo.py nets/config.py nets/utils.py nets/resnet_v1.py nets/model.py nets/resnet_utils.py data/icdar.py data/data_util.py get_model_list get_image_list InferenceConfig read_image get_result main GeneratorEnqueuer get_image build_rpn_targets generator get_batch polygon_area resize_image_and_annotation compute_backbone_shapes get_set_list get_annotation read_image Config Block conv2d_same subsample resnet_arg_scope stack_blocks_dense resnet_v1_152 resnet_v1_101 bottleneck resnet_v1_200 resnet_v1_50 resnet_v1 astype float32 mimread imread array get_checkpoint_state all_model_checkpoint_paths latest_checkpoint append join listdir isfile reset_default_graph gpu_list trainable_variables checkpoint_path pretrained_model_path MkDir moving_average_decay Saver GPUOptions global_variables get_collection gpu_list apply build_input_graph apply_gradients polynomial_decay build_SPC get_or_create_global_step group compute_gradients assign_from_checkpoint_fn learning_rate AdamOptimizer ExponentialMovingAverage MODEL_VARIABLES global_variables_initializer join join read_image exists astype float32 join extract_bboxes astype randint resize_mask int32 resize_image fliplr BACKBONE callable sum zip ones compute_overlaps choice RPN_TRAIN_ANCHORS_PER_IMAGE zeros argmax amax len arange IMAGE_SHAPE compute_backbone_shapes RPN_ANCHOR_RATIOS BACKBONE_STRIDES get_set_list generate_pyramid_anchors BATCH_SIZE MAX_GT_INSTANCES shape resize_image_and_annotation MINI_MASK_SHAPE get_image build_rpn_targets format minimize_mask astype shuffle choice USE_MINI_MASK get_annotation join uint8 RPN_ANCHOR_SCALES print RPN_ANCHOR_STRIDE zeros len generator get is_running print start sleep GeneratorEnqueuer array pad
# Scene-Text-Detection-with-SPCNET Unofficial repository for [Scene Text Detection with Supervised Pyramid Context Network](https://arxiv.org/abs/1811.08605) with tensorflow. ## Reference Code The network implementation mainly borrows from the Keras version of [Mask-RCNN](https://github.com/matterport/Mask_RCNN.git), and the training-data interface follows [argman/EAST](https://github.com/argman/EAST). The paper's authors introduce [SPCNet](https://zhuanlan.zhihu.com/p/51397423) in an article on Zhihu. ## Training ### 1. Training data preparation Place the training data under data/; training data preparation is handled in data/icdar.py: >data >>icdar2017 >>>Annotaions //image_1.txt
60
Aitslab/BioNLP
['multi label classification']
['Macro F1 and Macro F1']
jennie_jesper/evaluation.py hannes/keras_model/train-silver-standard.py formatconversion/format_conversion_scripts/BioXML_IOB2_conversion_tool.py olof_vilhelm/keywords.py antton/formatting/gold_to_text.py jennie_jesper/tagger.py emil_petter/evalCombined.py Adam_Ola/Format_Input/formatInputFile.py jennie_jesper/random_subset.py antton/formatting/json_to_txt.py lykke_klara/scripts/build_art_corpus.py antton/formatting/gold_to_test.py anna_eric/tagger.py hannes/keras_model/neural_model.py hannes/keras_model/train-BioInfer.py antton/utils/pubannotationevaluator.py lykke_klara/scripts/evaluation.py antton/utils/use_evaluator.py lykke_klara/scripts/add_custom_labels.py hannes/bioInferTrainingParser.py nicolas/conllparser.py olof_vilhelm/gui.py anna_eric/xml_to_json.py marcus/dictionarytagger/mentionindex/index.py emil_petter/buildDict.py olof_vilhelm/spacynlp.py anna_eric/getID.py olof_vilhelm/final_test_predict_bioinfer.py emil_petter/mentionindex/index.py carl/tests/test_basic.py olof_vilhelm/scrape_abstracts.py antton/formatting/rename_gold.py emil_petter/mentionindex/__init__.py olof_vilhelm/read_bioinfer.py lykke_klara/scripts/plot.py lykke_klara/scripts/bert_finetune.py antton/formatting/text_to_tsv.py antton/formatting/pubannot_to_tsv.py emil_petter/evalBert.py emil_petter/buildHGNC.py carl/app/clinicalParser_SE/parser/journalparser.py antton/utils/split_traindev.py emil_petter/corpus.py antton/formatting/eval_to_pubannot.py marcus/dictionarytagger/mentionindex/__init__.py emil_petter/protein.py jennie_jesper/make_meta_gold.py jennie_jesper/make_meta_subset_100.py carl/app/sentenceGen_SE/replacer.py carl/tests/context.py oskar/Util/merge_chs.py hannes/svm_model.py artificial_corpus/build_corpus.py carl/app/clinicalParser_SE/parser/spacyloader.py emil_petter/buildUniprot.py emil_petter/evalDict.py lykke_klara/main.py carl/app/clinicalParser_SE/parser/__init__.py olof_vilhelm/entity_relations_model.py oskar/Util/merge.py antton/utils/fuse_tsvs.py hannes/train-SVM.py hannes/text_tools.py convert_iob list_to_tsv convert_to_list getID write_json_file fuse_multiword_entities relative_to_real build_denot_array sentence_to_tokens removeall_replace sentence_to_tokens removeall_replace sentence_to_tokens removeall_replace print_progress_bar print_progress PubannotationEvaluator load_nlp triplereplace singlereplace quadreplace doublereplace BasicTestSuite main altVersion createID combineDicts getCorpus getMatches main getUnion U getIntersect I main Protein MentionIndex extract_words_from_passage tokenize_text write_to_IOB2_format convert_words_tags_to_IOB2 read_and_parse_input_xml main parse_training_set RelationExtractorModel clean_tokens replace_tokens remove_stoppers normalize tokenize pos_tag build_token join_tokens main build_gram build_features clean_list transform_results entity_dist build_negatives build_sentences_and_targets join_tokens main fix_sentence clean_list transform_results entity_dist build_sentences_and_targets join_tokens build_negatives build_all_sentences main fix_sentence update_scoredict get_recall get_precision main get_dicts tag_article path_to_paper_id read_meta main generate_jsons read_article setup_dicts run_eval run_add_custom_labels run_build_art_corpus run_plot run_bert_finetune write_files map_chemprot_labels_to_custom_labels make_data_dict run run write_corpus build_corpus run evaluate run read_data plot_loss_acc plot_metrics run MentionIndex ConllCorpus mention_words_idx check_numpy_array get_feats clean_token read_file set_feats ConllDoc load_file gather_feats 
Entity Source RelationalSet Relation find_prep gen_indices main Gui Dynamic BioInfer get_abstracts normalize_text get_encoding handle_options print_help save_abstracts main find_prep gen_indices chunks sub list split rstrip enumerate append pop append build_denot_array extend removeall_replace print_progress_bar print int float format load list replace print set append len list replace print set append len list replace print set append len list replace print set append len sub replace int species_id append update uniprot_id hgnc_id load items list combineDicts connect_jvm build_keyed_index altVersion strip write close MentionIndex names createID values open len strip set add split open append strip split open str dump print set getMatches getCorpus len sort U append range len append range U len getUnion getIntersect BeautifulSoup open text replace split append find_all append split print close open read_and_parse_input_xml extract_words_from_passage tokenize_text write_to_IOB2_format convert_words_tags_to_IOB2 find_all get int list parse dict getroot split findall append len append append append sklearn_test split_data decision_function parse_training_set REM append randint train range build_features deepcopy join list tag_ text pos_ index join_tokens nlp append build_gram tokenize dep_ append list range nlp set_verbosity round translate_results build_model ERROR print_accuracy plot_history pred build_sentences_and_targets make_vectorizer print_fscores transform_results build_negatives print_confusion transform zeros deepcopy join join_tokens append fix_sentence clean_list choice sample fix_sentence append append glob join fit count_predictions append fix_sentence permutations list keys sorted update_scoredict get_precision get_recall dict abspath zip get_dicts split open sort list replace lower append read_article keys path_to_paper_id read_meta range len tag_article setup_dicts print run print run print run print run print run append append list len enumerate write_files map_chemprot_labels_to_custom_labels make_data_dict from_pretrained model tuple get_linear_schedule_with_warmup clip_grad_norm_ zero_grad DataLoader numpy device format_time max seed str list read_data get_encodings device_count TensorDataset encode append to range manual_seed_all format size eval save_pretrained manual_seed is_available enumerate time backward print AdamW makedirs named_parameters parameters empty_cache train step len append strip range choice write_corpus build_corpus from_pretrained model_path score print classification_report cm len device open to append split listdir loads evaluate show list plot xlabel tight_layout savefig legend xticks show list format plot xlabel ylabel savefig legend xticks print plot_metrics plot_loss_acc replace append items sorted print set_mentions_features text list len append children extend Entity name __relation issubset add Gui Source to_ RelationalSet save_abstracts get_abstracts get_encoding print text exit tag dict getroot iter str int print normalize_text makedirs close write open sent_tokenize print exit int print exit close print_help open range len
BioNLP ======= Repository for student projects within biomedical text mining from Lund University. All code comes with a GPLv3 licence. # Resources ## NLP libraries AllenNLP NLP library built on PyTorch https://allennlp.org/ CoreNLP contains many of Stanford’s NLP tools
61
Akanni96/feng-hirst-rst-parser
['discourse segmentation']
['Two-pass Discourse Segmentation with Pairing and Global Features']
src/parse2.py src/parser_wrapper2.py src/test_feng.py src/imdb_script.py src/trees/lexicalized_tree.py src/classifiers/crf_classifier.py tools/crfsuite/crfsuite-0.12/swig/python/setup.py src/utils/serialize.py src/utils/treebank_parser.py tools/crfsuite/crfsuite-0.12/example/chunking.py src/document/dependency.py src/document/constituent.py tools/crfsuite/crfsuite-0.12/example/ner.py src/utils/utils.py tools/crfsuite/crfsuite-0.12/example/pos.py src/parsers/base_parser.py src/paths.py src/logs/log_writer.py src/features/tree_feature_writer.py src/imdb_preprocess.py src/sanity_check.py src/utils/cue_phrases.py src/document/token.py tools/crfsuite/crfsuite-0.12/swig/python/crfsuite.py src/features/segmenter_feature_writer.py src/utils/rst_lib.py src/parsers/multi_sentential_parser.py src/document/base_representation.py src/utils/RST_Classes.py src/prep/prep_utils.py src/segmenters/crf_segmenter.py src/parsers/intra_sentential_parser.py src/utils/yappsrt.py tools/crfsuite/crfsuite-0.12/example/crfutils.py tools/crfsuite/crfsuite-0.12/swig/python/sample_train.py src/document/sentence.py tools/crfsuite/crfsuite-0.12/example/template.py src/parser_wrapper3.py src/treebuilder/build_tree_CRF.py src/prep/syntax_parser.py tools/crfsuite/crfsuite-0.12/swig/python/sample_tag.py src/utils/Stanford_Deps.py src/prep/preprocesser2.py src/document/doc.py src/trees/parse_tree.py replace_br preprocess_imdb add_space_after_sentence parse_imdb get_parser_stdout main ParserException get_parser_stdout main ParserException check_CRFSuite check_ssplit check_syntax_parser test_feng_fail parse_file test_feng_short test_feng_long CRFClassifier BaseRepresentation Constituent Dependency Document Sentence Token SegmenterFeatureWriter CRFTreeFeatureWriter LogWriter BaseParser IntraSententialParser MultiSententialParser Preprocesser create_lexicalized_tree get_parsed_trees_from_string replace_words SyntaxParser CRFSegmenter CRFTreeBuilder LexicalizedTree ParseTree loadData saveData parse Treebank TreebankScanner print_error scan wrap_error_reporter __repr__ Parser Scanner SyntaxError token NoMoreTokens feature_extractor readiter to_crfsuite apply_templates escape main output_features get_shape get_all_other observation contains_digit get_4d get_capperiod disjunctive contains_symbol get_2d contains_alpha feature_extractor contains_upper degenerate contains_lower b get_dand get_da get_type feature_extractor _swig_repr swig_import_helper _swig_setattr_nondynamic _swig_getattr StringList Attribute Tagger Trainer SwigPyIterator version _swig_setattr Item ItemSequence instances instances Trainer get_librarydir get_rootdir get_includedir join time format print add_space_after_sentence replace_br listdir makedirs sub startswith split append range len join remove format time replace flush communicate print write close split listdir exists Popen open read close open stdout write dumps loads feng_main open devnull append communicate print extend split Popen enumerate unload SyntaxParser parse_sentence enumerate join print unload CRFSUITE_PATH CRFClassifier classify split parse_file parse_file join map compile escape append fromstring strip lexicalize copy join dump close open load join close open Treebank TreebankScanner scan pos join group patterns match input append len pos print rfind repr msg find max count append apply_templates append join range len strip split append range len isinstance write escape range len isinstance escape Attribute append Item ItemSequence stdin model Tagger to_crfsuite add_option separator parse_args 
range readiter OptionParser tag output_features join feature_extractor split len islower isdigit isupper discard len set islower range isupper isalpha isdigit isdigit isalnum get_capperiod get_shape get_2d islower get_all_other isupper isdigit contains_digit lower degenerate b get_dand get_da contains_upper get_4d contains_symbol contains_alpha contains_lower get_type append range observation disjunctive range len find_module load_module get get __repr__ delete_SwigPyIterator delete_Item delete_ItemSequence delete_StringList Attribute_value_get _swig_property Attribute_value_set delete_Attribute Attribute_attr_get Attribute_attr_set delete_Trainer delete_Tagger rfind float strip ItemSequence Attribute append Item split StringList
## DEVELOPERS * Original author: [Vanessa Wei Feng](mailto:weifeng@cs.toronto.edu), Department of Computer Science, University of Toronto, Canada * [Arne Neumann](mailto:github+spam.or.ham@arne.cl) updated it to use nltk 3.4 on [this github repo](https://github.com/arne-cl/feng-hirst-rst-parser), and created a Dockerfile. * [Zining Zhu](mailto:zining@cs.toronto.edu) updated the scripts to use Python 3. ## TODO - [ ] update Dockerfile to use Python 3. ## REFERENCES * Vanessa Wei Feng and Graeme Hirst, 2014. Two-pass Discourse Segmentation with Pairing and Global Features. arXiv:1407.8215v1. http://arxiv.org/abs/1407.8215 * Vanessa Wei Feng and Graeme Hirst, 2014. A Linear-Time Bottom-Up Discourse Parser with Constraints and Post-Editing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-2014), Baltimore, USA. http://aclweb.org/anthology/P14-1048 ## GENERAL INFORMATION
62
AkiraTOSEI/Funnel-Activation-for-Visual-Recognition
['scene generation', 'semantic segmentation']
['Funnel Activation for Visual Recognition']
train_test.py resnet.py FReLU.py FReLU Identity_basic_block Conv_bottleneck_block define_Pooling define_GlobalPooling Fin_layer Identity_bottleneck_block define_ConvLayer define_activation ResnetBuilder define_NormLayers Conv_basic_block Conv_stage1_block image_shift flip_image bn1 pool1 define_Pooling ConvLayer define_ConvLayer define_activation MaxPooling TimeDistributed act1 define_NormLayers conv1 NormLayer bn1 conv2 relu1 bn2 conv3 ConvLayer define_ConvLayer define_activation TimeDistributed define_NormLayers relu2 relu_m conv1 bn3 NormLayer bn1 conv2 relu1 bn2 conv3 s_bn ConvLayer define_ConvLayer define_activation s_conv TimeDistributed define_NormLayers relu2 relu_m conv1 bn3 NormLayer bn1 conv2 relu1 bn2 ConvLayer define_ConvLayer define_activation TimeDistributed define_NormLayers relu_m conv1 NormLayer bn1 conv2 relu1 bn2 s_bn ConvLayer define_ConvLayer define_activation s_conv TimeDistributed define_NormLayers relu_m conv1 NormLayer dense gp define_GlobalPooling GlobalPooling Dense flat TimeDistributed Flatten random_crop resize_with_crop_or_pad
# Funnel-Activation-for-Visual-Recognition This repository checks FReLU ([arXiv:2007.11824](https://arxiv.org/abs/2007.11824)) on CIFAR10. I have tested ReLU, Swish and FReLU 3 times each using ResNet18. The results are shown below. ![frelu](https://github.com/AkiraTOSEI/Funnel-Activation-for-Visual-Recognition/blob/master/frelu.png) |Activation Function|minimum validation loss| |---|---| |ReLU|0.764 ± 0.009| |Swish|0.763 ± 0.008| |__FReLU__|__0.743 ± 0.006__|
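For reference, the funnel activation replaces ReLU's scalar condition with a spatial one, y = max(x, T(x)), where T(x) is a depthwise convolution followed by batch normalization. The repository ships its own implementation in `FReLU.py`; the `frelu` helper below is only a minimal Keras sketch of the same idea, not that exact layer.

```python
# Minimal sketch of the funnel activation: y = max(x, T(x)).
# Illustrative only; the layer in FReLU.py may be defined differently.
import tensorflow as tf
from tensorflow.keras import layers

def frelu(x, kernel_size=3):
    # Spatial funnel condition T(x): depthwise conv followed by batch norm.
    tx = layers.DepthwiseConv2D(kernel_size, padding="same", use_bias=False)(x)
    tx = layers.BatchNormalization()(tx)
    # Element-wise maximum of the identity branch and the funnel condition.
    return layers.Maximum()([x, tx])

inputs = tf.keras.Input(shape=(32, 32, 64))
model = tf.keras.Model(inputs, frelu(inputs))
model.summary()
```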
63
AlbanSeurat/keras-style-transfer
['style transfer']
['Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization']
style.py debug.py models.py layers.py utils.py main.py loss.py _print_tensor display_layer dump_model AdaIN ReflectionPadding2D LossFunction main ReEncoderModel LossModel Vgg19TruncatedModel DecoderModel EncoderModel StyleTransfer mirror_model image_postprocess list_images clone_model list_batch_images NBatchLogger preload_img show subplot imshow figure range set_session get_session LocalCLIDebugWrapperSession has_inf_or_nan StyleTransfer train add_tensor_filter layers range len get_weights layers name new_layer append deserialize Input set_weights img_to_array load_img imresize astype copy name join preload_img scandir asarray list_images append next range
A simple attempt to implement style transfer learning using the AdaIN paper: https://arxiv.org/abs/1703.06868 Some results without color code transfer: ![alt text](https://raw.githubusercontent.com/AlbanSeurat/keras-style-transfer/master/seurat-2.jpeg) ![alt text](https://raw.githubusercontent.com/AlbanSeurat/keras-style-transfer/master/paccots.jpg) ![alt text](https://raw.githubusercontent.com/AlbanSeurat/keras-style-transfer/master/paccots-seurat.png) Still very much in progress
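The core of AdaIN is a single re-normalization step: shift and scale the content features so that their per-channel mean and standard deviation match those of the style features. The repository defines its own `AdaIN` layer (see `layers.py` in the dependency list); the `adain` function below is an independent, minimal sketch of the operation rather than that layer.

```python
# Hedged sketch of adaptive instance normalization (AdaIN).
import tensorflow as tf

def adain(content, style, eps=1e-5):
    # content, style: feature maps of shape (batch, height, width, channels).
    c_mean, c_var = tf.nn.moments(content, axes=[1, 2], keepdims=True)
    s_mean, s_var = tf.nn.moments(style, axes=[1, 2], keepdims=True)
    normalized = (content - c_mean) / tf.sqrt(c_var + eps)
    return normalized * tf.sqrt(s_var + eps) + s_mean

content = tf.random.normal((1, 32, 32, 512))  # e.g. VGG relu4_1 features
style = tf.random.normal((1, 32, 32, 512))
print(adain(content, style).shape)            # (1, 32, 32, 512)
```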
64
AlbertChen1991/nEM
['relation extraction', 'denoising']
['Uncover the Ground-Truth Relations in Distant Supervision: A Neural Expectation-Maximization Framework']
main.py data_loader.py model.py DataLoader RE_Dataset train_epoch_RE get_pre_recall em_step pred_RE test_model train_EM MAP e_step PR_curve save_checkpoint resume pr_with_threshold train_RE precision_recall main write_list_to_file use_optimizer PCNNEncoder CNNEncoder Embedding Extractor BagEncoder Y_S GRUEncoder RE Z_Y save_dir save load save_dir isfile list asarray mean shape append array range len append float sum range len precision_recall save_dir save shape range plot xlabel grid ylabel ylim clf write_list_to_file get_texts legend setp savefig xlim get_legend range len concatenate RE_Dataset average_precision_score ravel pred eval DataLoader pr_with_threshold append numpy enumerate E_step concatenate RE_Dataset eval DataLoader append numpy enumerate len zero_grad e_step DataLoader save_checkpoint save save_dir clip M_step append item clip_grad_norm enumerate backward print RE_Dataset parameters train step len backward print zero_grad RE_Dataset parameters DataLoader save_checkpoint enumerate item append clip_grad_norm train step clip baseModel len pos_dim n_relation DataLoader n_pos save_dir RE cuda train_epoch_RE reg_weight data_dir max_bags load_state_dict dim range use_optimizer resume n_word print max_len hdim epochs makedirs pos_dim n_relation e_step DataLoader n_pos save_dir RE cuda reg_weight data_dir max_bags load_state_dict dim range use_optimizer resume n_word load print max_len hdim epochs em_step makedirs pos_dim n_relation DataLoader n_pos save_dir RE cuda reg_weight pred_RE data_dir max_bags load_state_dict dim use_optimizer resume n_word get_pre_recall print max_len hdim makedirs print test_model train_EM train_RE em
# nEM Code and data for EMNLP2019 Paper ["Uncover the Ground-Truth Relations in Distant Supervision: A Neural Expectation-Maximization Framework"](https://arxiv.org/pdf/1909.05448.pdf). ## Description This work focuses on the noisy-label problem in distant supervision, whereas most previous works in this setting assume that the labels of a bag are clean and that the noise in distant-supervised data comes only from the sentences in the bag. Through a careful investigation of the distant-supervised data sets, we argue that both sentences and labels can be noisy in a bag. The following figure uncovers the cause of noise in distant supervision. <p align="center"> <img src="https://github.com/AlbertChen1991/nEM/blob/master/fig/noise.png"> </p> In distant supervision, one attempts to connect labels of a bag (left side in the figure) with relations of the knowledge graphs (right side in the figure). The ground-truth labels (center in the figure), however, cannot be seen. The noise occurs due to the gaps between the bags and the knowledge graphs. For example, the bag in the above figure will be assigned three labels (r2, r3, r4). However, r4 is an obviously wrong label for this bag since all the sentences (s1, s2, s3) don't support r4. And r1 is a missing label for this bag because s1 supports relation r1. These two cases constitute the so-called noisy-label problem. The extensively studied noisy-sentence problem is also reflected in the figure. For example, s1 and s3 are noisy sentences for bag label r2, since s1 and s3 don't support relation r2. We propose the nEM framework to deal with the noisy-label problem. We manually labeled a subset of the test set of the Riedel dataset (NYT). The following figure shows the evaluation result on this clean test set. The baselines are PCNN+MEAN (Lin et al., 2016), PCNN+MAX (Jiang et al., 2016) and PCNN+ATT (Lin et al., 2016) <p align="center">
65
AlbertUW807/DLNN-Algo
['stochastic optimization']
['Adam: A Method for Stochastic Optimization']
Optimization/test_cases.py Logistic Regression/Logistic_Regression.py Model Initialization/init_utils.py Regularization Methods/reg_utils.py Deep Learning Model/helper_functions.py Model Initialization/initialization.py Optimization/opt_utils.py Gradient Check/gc_utils.py Gradient Check/test_cases.py Optimization/optimization.py Regularization Methods/regularization.py Deep Learning Model/DNN.py Gradient Check/gradient_check.py gradients_to_vector relu sigmoid vector_to_dictionary dictionary_to_vector backward_propagation_n forward_propagation_n backward_propagation gradient_check_n gradient_check forward_propagation gradient_check_n_test_case plot_decision_boundary relu backward_propagation sigmoid update_parameters forward_propagation compute_loss load_dataset predict_dec load_cat_dataset predict plot_decision_boundary relu load_params_and_grads load_2D_dataset backward_propagation compute_cost sigmoid forward_propagation initialize_parameters load_dataset predict_dec predict initialize_adam_test_case random_mini_batches_test_case update_parameters_with_momentum_test_case update_parameters_with_adam_test_case update_parameters_with_gd_test_case initialize_velocity_test_case plot_decision_boundary relu load_2D_dataset backward_propagation compute_cost sigmoid update_parameters forward_propagation load_planar_dataset initialize_parameters load_dataset predict_dec predict exp maximum reshape concatenate reshape reshape concatenate print norm forward_propagation backward_propagation relu multiply dot sigmoid sum T multiply dot int64 sum str norm forward_propagation_n gradients_to_vector print copy vector_to_dictionary dictionary_to_vector zeros range seed array randn dot sigmoid relu T multiply dot int64 sum range len multiply nansum reshape T File array str print mean forward_propagation zeros range show arange model xlabel reshape ylabel shape scatter contourf meshgrid forward_propagation seed T reshape make_circles scatter seed randn seed randn sqrt zeros range len multiply sum scatter T loadmat make_moons seed randn seed randn seed randn seed randn seed randn seed randn seed int list T randn square linspace zeros range nansum File array
# DLNN-Algo 〽️ Deep Learning & Neural Networks Projects 〽️ ### Install Numpy ``` $ pip install numpy ``` ### Projects #### [Logistic Regression](https://github.com/AlbertUW807/DLNN/tree/master/Logistic%20Regression) - Implemented an Image Recognition Algorithm that recognizes cats with 67% accuracy! - Used a logistic regression model.
66
AleT-Cig/DependencySyntax_DeepIrony
['word embeddings']
['Multilingual Irony Detection with Dependency Syntax and Neural Models']
Tweet.py Features_manager.py Model_udpipe.py Main_Generate_Output.py Database_manager.py Database_manager make_database_manager make_feature_manager Features_manager Model_udpipe Tweet strip_accents make_tweet Database_manager Features_manager Tweet encode normalize decode
# **Multilingual Irony Detection with Dependency Syntax and Neural Models** Code repository for COLING 2020 submission. Further details will be added upon acceptance, once the anonymity period is over.
67
AleksandarHaber/Subspace-Identification-State-Space-System-Identification-of-Dynamical-Systems-and-Time-Series-
['time series analysis', 'time series']
['Subspace Identification of Temperature Dynamics']
functionsSID.py discretization_test.py test_subspace.py simulate whiteTest modelError systemSimulate_Kopen estimateInitial estimateInitial_K systemSimulate_Kclosed portmanteau estimateMarkovParameters estimateModel systemSimulate zeros range pinv zeros range matmul svd concatenate matmul sqrt pinv zeros range diag zeros range matmul matrix_power reshape matmul flatten pinv zeros range det norm T reshape maximum matmul flatten log T arange diag inv matmul sqrt append zeros range T arange inv matmul trace append zeros cdf range matrix_power block reshape asmatrix matmul flatten pinv zeros range zeros range matmul zeros range matmul
AleksandarHaber/Subspace-Identification-State-Space-System-Identification-of-Dynamical-Systems-and-Time-Series-
68
AlessandroSaviolo/HBPSegmentation
['semantic segmentation']
['Learning to Segment Human Body Parts with Synthetically Trained Deep Convolutional Networks']
utils.py predict.py preprocessing_module.py main getArgs predict save HEDnetwork detect save run getDataLoader parseHistory getAugmentation toTensor normalize getPreprocessing Dataset visualize getDataLoader visualize glob makedirs len astype ValidEpoch DiceLoss save info range run axis close add_axes shape imshow savefig figure add_argument ArgumentParser load data info model UnetPlusPlus load_state_dict device image_scale to use_preprocessing_module predict visualize glob resize createCLAHE merge apply eval detect save info imread array enumerate len FloatTensor astype float32 ascontiguousarray cpu NORM_MINMAX DataLoader Dataset expand_dims show subplot items len imshow title figure xticks enumerate yticks glob rename max read_csv
# Human Body Part Segmentation This repository contains the code associated with our paper: *Learning to Segment Human Body Parts with Synthetically Trained Deep Convolutional Networks*. <p align="center"> <img src="https://github.com/AlessandroSaviolo/HBPSegmentation/blob/main/paper/framework.png" width="800"> </p> **Abstract**. This paper presents a new framework for human body part segmentation based on Deep Convolutional Neural Networks trained using only synthetic data. The proposed approach achieves cutting-edge results without the need to train the models on real annotated data of human body parts. Our contributions include a data generation pipeline that exploits a game engine to create the synthetic data used for training the network, and a novel pre-processing module that combines an edge response map and adaptive histogram equalization to guide the network to learn the shape of the human body parts, ensuring robustness to changes in the illumination conditions. For selecting the best candidate architecture, we performed exhaustive tests on manually-annotated images of real human body limbs. We further present an ablation study to validate our pre-processing module. The results show that our method outperforms several state-of-the-art semantic segmentation networks by a large margin. If you use this code in an academic context, please cite our [paper](https://arxiv.org/abs/2102.01460): ```
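As a rough illustration of the pre-processing idea described in the abstract, the sketch below combines adaptive histogram equalization (CLAHE) with an edge-response map. It is an assumption about the approach rather than the repository's actual module: the `preprocess` helper is hypothetical, and a Canny detector stands in for the learned HED-style edge network that the code base references.

```python
# Hedged sketch: CLAHE plus an edge-response map as network input channels.
# Not the repository's implementation; Canny replaces the learned edge detector.
import cv2
import numpy as np

def preprocess(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Contrast-limited adaptive histogram equalization for illumination robustness.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(gray)
    # Edge-response map emphasizing body-part contours.
    edges = cv2.Canny(equalized, 50, 150)
    # Stack intensity and edge response so the network sees shape cues explicitly.
    return np.stack([equalized, edges], axis=-1)

image = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)
print(preprocess(image).shape)  # (256, 256, 2)
```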
69
Alex-Fabbri/lang2logic-PyTorch
['semantic parsing']
['Language to Logical Form with Neural Attention']
seq2seq/atis/lstm/tree.py seq2tree/atis/lstm/data.py seq2seq/jobqueries/attention/util.py seq2tree/geoqueries/attention/util.py seq2tree/geoqueries/lstm/tree.py seq2seq/geoqueries/lstm/sample.py seq2seq/jobqueries/lstm/util.py seq2tree/atis/lstm/tree.py seq2tree/jobqueries/lstm/main.py seq2seq/jobqueries/attention/main.py seq2seq/atis/attention/tree.py seq2tree/atis/attention/sample.py seq2tree/atis/attention/tree.py seq2seq/atis/lstm/sample.py seq2seq/geoqueries/attention/data.py seq2tree/atis/attention/main.py seq2tree/geoqueries/lstm/sample.py seq2tree/jobqueries/attention/data.py seq2tree/jobqueries/attention/sample.py seq2tree/atis/lstm/main.py seq2tree/geoqueries/attention/sample.py seq2tree/jobqueries/attention/tree.py seq2seq/atis/lstm/main.py seq2tree/geoqueries/attention/data.py pull_data.py seq2tree/geoqueries/lstm/util.py seq2seq/geoqueries/lstm/tree.py seq2seq/atis/attention/util.py seq2seq/atis/lstm/data.py seq2seq/geoqueries/attention/main.py seq2seq/jobqueries/attention/tree.py seq2tree/geoqueries/lstm/main.py seq2tree/jobqueries/lstm/data.py seq2seq/geoqueries/lstm/util.py seq2seq/jobqueries/lstm/tree.py seq2tree/atis/lstm/sample.py seq2tree/atis/lstm/util.py seq2seq/geoqueries/lstm/data.py seq2tree/jobqueries/lstm/sample.py seq2tree/jobqueries/lstm/tree.py seq2tree/atis/attention/data.py seq2tree/geoqueries/attention/main.py seq2tree/geoqueries/attention/tree.py seq2tree/jobqueries/attention/main.py seq2seq/atis/attention/main.py seq2seq/atis/lstm/util.py seq2seq/jobqueries/attention/data.py seq2seq/geoqueries/attention/sample.py seq2seq/jobqueries/lstm/main.py seq2seq/atis/attention/sample.py seq2seq/jobqueries/lstm/data.py seq2seq/jobqueries/attention/sample.py seq2seq/geoqueries/attention/tree.py seq2seq/geoqueries/attention/util.py seq2seq/atis/attention/data.py seq2tree/geoqueries/lstm/data.py seq2tree/atis/attention/util.py seq2tree/jobqueries/attention/util.py seq2seq/geoqueries/lstm/SymbolsManager.py seq2seq/geoqueries/lstm/main.py seq2tree/jobqueries/lstm/util.py seq2seq/jobqueries/lstm/sample.py serialize_data process_train_data RNN eval_training LSTM AttnUnit main convert_to_string do_generate Tree convert_to_tree norm_tree compute_tree_accuracy MinibatchLoader is_all_same SymbolsManager compute_accuracy serialize_data process_train_data eval_training main EncoderRNN DecoderRNN LSTM convert_to_string do_generate Tree convert_to_tree norm_tree compute_tree_accuracy MinibatchLoader is_all_same SymbolsManager compute_accuracy serialize_data process_train_data RNN eval_training LSTM AttnUnit main convert_to_string do_generate Tree convert_to_tree norm_tree compute_tree_accuracy MinibatchLoader is_all_same SymbolsManager compute_accuracy serialize_data process_train_data eval_training main EncoderRNN DecoderRNN LSTM convert_to_string do_generate SymbolsManager Tree convert_to_tree norm_tree compute_tree_accuracy MinibatchLoader is_all_same SymbolsManager compute_accuracy serialize_data process_train_data RNN eval_training LSTM AttnUnit main convert_to_string do_generate Tree convert_to_tree norm_tree compute_tree_accuracy MinibatchLoader is_all_same SymbolsManager compute_accuracy serialize_data process_train_data eval_training main EncoderRNN DecoderRNN LSTM convert_to_string do_generate Tree convert_to_tree norm_tree compute_tree_accuracy MinibatchLoader is_all_same SymbolsManager compute_accuracy serialize_data process_train_data eval_training AttnUnit Dec_LSTM main EncoderRNN DecoderRNN LSTM convert_to_string do_generate Tree convert_to_tree norm_tree 
compute_tree_accuracy MinibatchLoader is_all_same SymbolsManager compute_accuracy serialize_data process_train_data eval_training main EncoderRNN DecoderRNN LSTM convert_to_string do_generate Tree convert_to_tree norm_tree compute_tree_accuracy MinibatchLoader is_all_same SymbolsManager compute_accuracy serialize_data process_train_data eval_training AttnUnit Dec_LSTM main EncoderRNN DecoderRNN LSTM convert_to_string do_generate Tree convert_to_tree norm_tree compute_tree_accuracy MinibatchLoader is_all_same SymbolsManager compute_accuracy serialize_data process_train_data eval_training main EncoderRNN DecoderRNN LSTM convert_to_string do_generate Tree convert_to_tree norm_tree compute_tree_accuracy MinibatchLoader is_all_same SymbolsManager compute_accuracy serialize_data process_train_data eval_training AttnUnit Dec_LSTM main EncoderRNN DecoderRNN LSTM convert_to_string do_generate Tree convert_to_tree norm_tree compute_tree_accuracy MinibatchLoader is_all_same SymbolsManager compute_accuracy serialize_data process_train_data eval_training main EncoderRNN DecoderRNN LSTM convert_to_string do_generate Tree convert_to_tree norm_tree compute_tree_accuracy MinibatchLoader is_all_same SymbolsManager compute_accuracy init_from_file max_vocab_size time format print data_dir min_freq vocab_size SymbolsManager load data_dir format open random_batch enc_seq_length decoder dec_seq_length batch_size backward size zero_grad clip_grad_value_ encoder parameters attention_decoder zeros grad_clip step cuda range checkpoint_dir RNN num_batch vocab_size save cuda open seed data_dir AttnUnit init_weight RMSprop range format max_epochs eval_training param_groups manual_seed uniform_ learning_rate_decay load NLLLoss requires_grad time MinibatchLoader print named_parameters parameters train makedirs int get_idx_symbol append range len max decoder insert get_symbol_idx encoder attention_decoder array resize append zeros tensor cuda range len add_child range Tree children isinstance sort Tree num_children append range len range len format print is_all_same min range len print append to_list range len EncoderRNN DecoderRNN append isinstance get_symbol_idx num_children len learning_rate int add_child
This repo contains a PyTorch port of the lua code [here](https://github.com/donglixp/lang2logic#setup) for the paper ["Language to Logical Form with Neural Attention."](https://arxiv.org/pdf/1601.01280.pdf) This code was written last year as part of a project with [Jack Koch](https://jbkjr.com) and is not being actively worked on or maintained. Nevertheless, I am putting the code here in case it is useful for anyone. The code runs on PyTorch 0.4.1 although it was written for an earlier version. Let me know if you encounter any errors. For more recent PyTorch code by Li Dong, check out the GitHub [repo](https://github.com/donglixp/coarse2fine) for the paper ["Coarse-to-Fine Decoding for Neural Semantic Parsing."](http://homepages.inf.ed.ac.uk/s1478528/acl18-coarse2fine.pdf)
70
AlexMeinke/certified-certain-uncertainty
['out of distribution detection']
["Towards neural networks that provably know when they don't know"]
utils/traintest.py utils/models.py utils/eval.py utils/confident_classifier.py run_training.py utils/mc_dropout.py utils/preproc.py utils/deep_ensemble.py utils/dataloaders.py utils/adversarial.py gen_attack_stats.py utils/odin.py model_paths.py utils/single_maha.py utils/gmm_helpers.py utils/hendrycks.py load_pretrained.py gen_eval.py utils/resnet_orig.py gen_gmm.py utils/edl.py utils/plotting.py model_params.py get_auroc SVHN_params CIFAR100_params MNIST_params FMNIST_params CIFAR10_params MNIST_models SVHN_models CIFAR10_models FMNIST_models CIFAR100_models create_adv_sample_loader gen_pca_noise gen_adv_sample create_adv_noise_loader gen_adv_noise vgg13 make_layers LeNet VGG MNIST TinyImages LSUN_CR UniformNoise TinyImagesDataset SVHN PrecomputeLoader ImageNetMinusCifar10 GrayCIFAR10 CIFAR10 Noise EMNIST TinyImagesTestSampler FMNIST CIFAR100 DeepEnsemble ListModule EDL ResNet ResNet18 LeNet BasicBlock train_edl aggregate_adv_stats_out test_metrics evaluate aupr_score write_log StatsContainer aggregate_adv_stats evaluate_model rescale find_lam get_b get_b_out ResNet BasicBlock MNIST_ConvNet ResNet18 LeNet VGG MC_Model_logit MC_dropout vgg13 make_layers MC_Model PerceptualPCA GMM RobustModel DoublyRobustModel MixtureModel MyPCA LeNet LpMetric Metric PerceptualMetric SumMetric LeNetMadry ScaleMetric PCAMetric aggregate_stats grid_search_variables get_auroc ModelODIN LeNetTemp ResNetTemp plot_samples Gray GaussianFilter Transpose AdversarialNoise ContrastRescaling PermutationNoise ResNet18_100 ResNet WideResNetBasicBlock ResNet18 WideResNet BasicBlock test_metrics ResNet ResNet18 Mahalanobis ModelODIN LeNetMadry BasicBlock get_median train_CEDA train_CEDA_gmm_out test get_conf train_plain get_mean train_ACET train_CEDA_gmm exp cat item append numpy roc_auc_score enumerate ones clone eval to range ones clone eval range eval range cpu len TensorDataset append zeros cat enumerate cpu len TensorDataset append zeros cat enumerate Conv2d make_layers VGG DataLoader Compose RandomChoice DataLoader Compose RandomChoice FashionMNIST Compose RandomChoice DataLoader DataLoader Compose RandomChoice CIFAR10 MNIST CIFAR10 FashionMNIST Compose PrecomputeLoader DataLoader SVHN CIFAR100 DataLoader rand TensorDataset zeros_like DataLoader Compose RandomChoice Compose RandomChoice DataLoader DataLoader Compose RandomChoice print LSUN Compose DataLoader print ToTensor Subset ImageFolder DataLoader append TensorDataset cat TinyImagesDataset TinyImagesTestSampler DataLoader format model backward print train dataset zero_grad mean item to step max enumerate len insert sorted array trapz append test_metrics DataFrame index add_scalar create_adv_sample_loader print create_adv_noise_loader write_log evaluate_model singular_values get_b tuple cpu rand MyPCA gen_pca_noise clone t eval stack brentq append tensor range cat enumerate tuple MyPCA rand get_b_out tensor iter TinyImages append to range cat singular_values gen_pca_noise eval stack enumerate clone t brentq cpu percentile detach append numpy cat enumerate percentile detach numpy append sum cat enumerate norm_const exp view squeeze D item logvar log norm_const exp view squeeze D item logvar log aggregate_stats get_auroc shape ModelODIN linspace item append meshgrid tuple rand clone eval stack iter TinyImages append to range cat enumerate show subplot print imshow title cpu xticks range yticks NLLLoss format criterion model backward print dataset zero_grad item tensor train step max enumerate len NLLLoss max format criterion model to backward print len zero_grad 
dataset item tensor train step rand_like cat enumerate model zero_grad tensor dataset max log view mm logsumexp to rand_like cat format item enumerate NLLLoss criterion backward print train step len model zero_grad tensor dataset max log view mm logsumexp to cat format item enumerate NLLLoss criterion backward print mm_out train step len model zero_grad gen_adv_noise tensor dataset max to rand_like cat format mean item enumerate NLLLoss criterion backward print clone train step len eval eval cat eval cat eval cat
# Towards neural networks that provably know when they don't know This repository contains the code that was used to obtain the results reported in https://arxiv.org/abs/1909.12180. In it we propose a *Certified Certain Uncertainty* (CCU) model with which one can train deep neural networks that provably make low-confidence predictions far away from the training data. <p align="center"><img src="res/two_moons.png" width="600"></p> ## Training the models Before training a CCU model, one first has to initialize a Gaussian mixture model on the datasets from the in- and out-distribution [80 Million Tiny Images](http://horatio.cs.nyu.edu/mit/tiny/data/tiny_images.bin). ``` python gen_gmm.py --dataset MNIST --PCA 1 --augm_flag 1 ``` The PCA option refers to the fact that we use a modified distance metric. Most models in the paper are trained via the script in **run_training.py**. Hyperparameters can be passed as options, but defaults are stored in **model_params.py**. For example, the following lines train a plain model, an [ACET](https://arxiv.org/abs/1812.05720) model, and a [CCU](https://arxiv.org/abs/1909.12180) model on augmented data on MNIST.
71
AlexMoreo/TweetSentQuant
['sentiment analysis']
['Tweet Sentiment Quantification: An Experimental Re-Evaluation']
src/quapy/util.py src/quapy/__init__.py src/tables.py src/quapy/classification/svmperf.py src/quapy/method/aggregative.py src/settings.py src/quapy/functional.py src/quapy/method/base.py src/quapy/method/non_aggregative.py src/main.py src/quapy/dataset/text.py repair_semeval15_test.py src/app_helper.py src/quapy/error.py src/quapy/method/__init__.py src/quapy/optimization.py src/evaluate.py src/plot_drift.py evaluate_method_point_test produce_predictions_ELM instantiate_error evaluate_experiment load_dataset_model_selection produce_predictions set_random_seed save_arrays optimization instantiate_learner run_name instantiate_quantifier load_dataset_model_evaluation model_selection produce_predictions_general resample_training_prevalence __clean_name statistical_significance get_ranks_from_Gao_Sebastiani load_Gao_Sebastiani_previous_results evaluate_directory main nice load_trdataset load_dataset_prevalences save_table color_from_abs_rank color_from_rel_rank nicerm acce mae smoothed mrae rae ae f1e adjusted_quantification classifier_tpr_fpr classifier_tpr_fpr_from_predictions prevalence_from_labels prevalence_from_probabilities artificial_prevalence_sampling normalize_prevalence strprev optimize_for_classification optimize_for_quantification_ELM optimize_for_quantification get_parallel_slices plot_diagonal plot_error_by_drift plot_error_histogram_ plot_error_histogram parallelize SVMperf TQDataset filter_by_occurrences LabelledCollection ProbabilisticAdjustedClassifyAndCount SVMQ binary_quant_task train_task SVMAE ExplicitLossMinimisation SVMRAE AdjustedClassifyAndCount AggregativeProbabilisticQuantifier SVMKLD OneVsAllELM AggregativeQuantifier ExpectationMaximizationQuantifier ProbabilisticClassifyAndCount training_helper SVMNKLD ClassifyAndCount BaseQuantifier MaximumLikelihoodPrevalenceEstimation seed from_sparse dataset print info print from_sparse info undersampling trainp info lower info lower info info instantiate_error optimization info optimize_for_classification optim_for_quantification isinstance optimize_for_quantification training test optimize_for_quantification_ELM lower sample_size info isinstance list asarray info TEST_REPETITIONS zip TEST_PREVALENCES list asarray zip TEST_REPETITIONS info preclassify_collection TEST_PREVALENCES join results print TEST_REPETITIONS mean save_arrays eval_measure zip std print labels n_classes save_arrays results_point prevalence_from_labels eval_measure quantify documents asarray info makedirs lower replace __clean_name defaultdict glob eval append __clean_name items sorted defaultdict asarray ttest_ind_from_stats list glob print extend mean eval_measure item std len sorted asarray list set argsort load_Gao_Sebastiani_previous_results enumerate evaluate_method_point_test evaluate_experiment load_dataset_model_selection produce_predictions training set_random_seed test instantiate_learner instantiate_quantifier load_dataset_model_evaluation model_selection fit load dump HIGHEST_PROTOCOL open print mean smoothed reshape repeat linspace len list defaultdict asarray dict unique zip mean argmax classfier_fn documents fpr tpr clip sum list set_params asarray product isinstance print error argmin array split_stratified zip append keys values fit list set_params asarray product isinstance print error argmin array split_stratified zip append preclassify_collection keys values fit learner list set_params product isinstance print error predict labels tqdm set_description split_stratified documents keys values fit int cpu_count get_parallel_slices len 
subplots grid set_aspect show list scatter savefig legend get_position errorbar plot set_position set_xlim set unique zip enumerate sort fill_between array set_ylim subplots grid linspace max log digitize list sorted savefig legend append get_position errorbar set_position set_xlim set mean upper keys enumerate items print min extend std items list defaultdict plot_error_by_drift append subplots grid linspace max log digitize list sorted savefig legend append get_position errorbar set_position set_xlim set mean keys enumerate parent print min std makedirs flatten print print labels split_stratified CalibratedClassifierCV documents fit documents fit predict
# Tweet Sentiment Quantification: An Experimental Re-Evaluation ## ECIR2021: Reproducibility track This repo contains the code to reproduce all experiments discussed in the paper entitled _Tweet Sentiment Quantification: An Experimental Re-Evaluation_, which is submitted for consideration to the _ECIR2021's track on Reproducibility_ ## Requirements * scikit-learn, numpy, scipy * svmperf patched for quantification (see below) * absl-py * tqdm
72
AlexandreAbraham/frontiers2013
['time series']
['Machine Learning for Neuroimaging with Scikit-Learn']
scripts/miyawaki_decoding.py scripts/visualization_101.py scripts/adhd_ica.py scripts/miyawaki_encoding.py scripts/utils/datasets.py scripts/haxby_decoding.py scripts/utils/masking.py scripts/utils/resampling.py scripts/utils/searchlight.py scripts/adhd_clustering.py scripts/utils/signal.py plot_labels plot_ica_map plot_haxby plot_lines plot_lines _chunk_read_ fetch_haxby fetch_adhd _fetch_file _tree fetch_nyu_rest fetch_haxby_simple _read_md5_sum_file _get_dataset_dir _chunk_report_ ResumeURLOpener fetch_craddock_2011_atlas _fetch_files fetch_msdl_atlas fetch_icbm152_2009 fetch_yeo_2011_atlas load_harvard_oxford _format_time fetch_miyawaki2008 _uncompress_file _md5_sum_file apply_mask unmask _smooth_array resample_img get_bounds to_matrix_vector from_matrix_vector GroupIterator SearchLight _group_iter_search_light search_light _standardize high_variance_confounds butterworth _detrend _mean_of_squares clean seed axes random astype axis imshow figure max axes astype axis logical_not masked_array imshow figure rot90 T axis subplots_adjust get_data imshow title figure Rectangle legend contour ndindex add_line Line2D shape md5 read update open readline split open time write float round max int time read write _chunk_report_ get_all getenv join getcwd makedirs copyfileobj remove print extractall close dirname splitext ZipFile open join remove basename time _chunk_read_ move print addheader close ResumeURLOpener urlopen open getsize exists makedirs get join move _get_dataset_dir _uncompress_file append _fetch_file join sorted isdir append listdir dict list _fetch_files zip dict list _fetch_files zip dict list _fetch_files zip _fetch_files warn _tree _fetch_files dirname append _read_md5_sum_file len join _fetch_files warn len genfromtxt asarray warn _fetch_files len _fetch_files load join list asarray find_objects append text get_data shape unique zip findall zeros max values _fetch_files load asarray isinstance astype get_affine get_data _smooth_array gaussian_filter1d copy sqrt sum log enumerate load T isinstance astype shape zeros shape dtype zeros T get_data to_matrix_vector ndindex list all ndarray from_matrix_vector shape asarray affine_transform get_bounds load isinstance inv get_affine dot eye array diag GroupIterator time min write warn dict mean zeros float round max enumerate cross_val_score len sqrt sum copy _detrend mean empty gen_even_slices copy gen_even_slices arange copy lfilter butter T svd scoreatpercentile copy _detrend _mean_of_squares genfromtxt T _standardize ndarray isinstance hstack butterworth isnan append
Machine Learning for Neuroimaging with Scikit-Learn ======================================================= Paper on using scikit-learn for NeuroImaging, for the special issue "Python in Neurosciences II" of Frontiers in NeuroInformatics. The scripts that generate the figures, and underlie the examples of the paper, can be found in the 'scripts' directory. **Note**: The [nilearn](http://nilearn.github.io) package makes all the patterns exposed here easier to write. It is maintained, unlike this repository.
73
AlexeySorokin/NeuralMorphemeSegmentation
['morphological analysis']
['Convolutional neural networks for low-resource morpheme segmentation: baseline or state-of-the-art?']
neural_morph_segm.py data/morphochallenge_to_morphemes.py tabled_trie.py read.py Partitioner is_correct_morpheme_sequence get_next_morpheme_types load_cls get_next_morpheme generate_data make_model_file read_config _make_vocabulary measure_quality to_one_hot collect_buckets make_bucket_lengths generate_BMES partition_to_BMES extract_morpheme_type read_splitted read_BMES read_input test_precomputing_symbols Trie test_performance test_basic precompute_future_symbols load_trie test_encoding make_trie TrieMinimizer read_pairs read_words extract_pairs_for_words eye rfind sorted append sorted range len list bisect_left from_iterable append make_bucket_lengths enumerate Partitioner items list hasattr models_ build _make_morpheme_tries load_weights zip setattr enumerate append get_next_morpheme_types split any enumerate split shuffle to_one_hot int list zip len append extend zip len shuffle list range len shuffle list range len zip startswith split append range len shuffle list range len sorted minimize Trie print fit TrieMinimizer len data _get_letters set add final zip _get_children range enumerate find_partitions minimize Trie print words load_trie save TrieMinimizer fit time format minimize Trie print load_trie save TrieMinimizer fit time format list Trie print add make_cashed range TrieMinimizer data join minimize Trie print add zip sum TrieMinimizer fit append print
AlexeySorokin/NeuralMorphemeSegmentation
74
AliLotfi92/Deep-Variational-Information-Bottlenck
['adversarial attack']
['Deep Variational Information Bottleneck']
VIBV4.py weights evaluate_test mulitlayer_perceptron bias truncated_normal constant NormalWithSoftplusScale relu weights matmul add bias histogram sample run
# Deep Variational Information Bottleneck This repository provides the implementation of Deep Variational Information Bottleneck. The main idea of DVIB is to impose a bottleneck (here in the dimensionality) through which only the information necessary for the reconstruction of $X$ can pass. I tried to implement this in the simplest form so that the _Information Bottleneck_ can be easily leveraged as a regularizer or metric for other projects. ### Requirements - $X$ is the input, - $Y$ is the label, - We look for a latent variable $Z$ that maximizes the mutual information $I(Z;Y)$ while minimizing $I(Z;X)$. - For more details and theoretical proofs, please check https://arxiv.org/abs/1612.00410 ### How to run ```bash python VIBV4.py
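For reference, the objective sketched in the bullets above can be written compactly. This restates the standard (variational) Information Bottleneck formulation from the cited paper (arXiv:1612.00410), with $\beta$ as the trade-off coefficient; it is a summary of the paper, not an excerpt of this repository's code:

```latex
% Information Bottleneck objective: keep what predicts Y, discard the rest of X
\max_{p(z \mid x)} \; I(Z;Y) \;-\; \beta\, I(Z;X)

% Variational bound optimized in practice (Deep VIB, arXiv:1612.00410)
\mathcal{L} \approx \frac{1}{N} \sum_{n=1}^{N}
  \mathbb{E}_{z \sim p(z \mid x_n)}\!\left[\log q(y_n \mid z)\right]
  \;-\; \beta\, \mathrm{KL}\!\left(p(z \mid x_n)\,\|\,r(z)\right)
```

A larger $\beta$ enforces a tighter bottleneck (stronger compression of $X$), while a smaller $\beta$ favors predictive accuracy.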
75
AliLotfi92/Deep_Variational_Information_Bottlenck
['adversarial attack']
['Deep Variational Information Bottleneck']
VIBV4.py weights evaluate_test mulitlayer_perceptron bias truncated_normal constant NormalWithSoftplusScale relu weights matmul add bias histogram sample run
# Deep Variational Information Bottleneck This repository provides the implementation of Deep Variational Information Bottleneck. The main idea of DVIB is to impose a bottleneck (here in the dimensionality) through which only the information necessary for the reconstruction of $X$ can pass. I tried to implement this in the simplest form so that the _Information Bottleneck_ can be easily leveraged as a regularizer or metric for other projects. ### Requirements - $X$ is the input, - $Y$ is the label, - We look for a latent variable $Z$ that maximizes the mutual information $I(Z;Y)$ while minimizing $I(Z;X)$. - For more details and theoretical proofs, please check https://arxiv.org/abs/1612.00410 ### How to run ```bash python VIBV4.py
76
AliLotfi92/SNNLSTM
['time series']
['Long Short-Term Memory Spiking Networks and Their Applications']
WordTOVec.py SpikingFSDD.py SpikingMNIST.py WordLevelSpikingLSTM.py CharLevelSpikingLSTM.py SpikingEMNIST.py LSTM_Cell deriv_Tanhspike LSTM_Sample softmax deriv_spike spike cross_entropy LSTM_Cell deriv_Tanhspike predict Test softmax deriv_spike spike cross_entropy LSTM_Cell predict deriv_spike2 Test softmax deriv_spike spike cross_entropy LSTM_Cell deriv_Tanhspike predict Test softmax deriv_spike spike cross_entropy tanh deriv_sigmoid LSTM_Cell load_doc sigmoid deriv_spike2 LSTM_Sample softmax deriv_spike deriv_tanh spike cross_entropy clean_doc save_doc load_doc clean_doc exp clip log reshape dot softmax append spike range column_stack len randint dot append zeros argmax spike range column_stack LSTM_Cell dot mean softmax argmax cross_entropy dot LSTM_Cell argmax softmax shape uniform normal read close open punctuation replace maketrans split clip clip tanh sigmoid tanh most_similar reshape sigmoid print join close write open
# Spiking LSTM [Long Short-Term Memory Spiking Networks and Their Applications](https://dl.acm.org/doi/abs/10.1145/3407197.3407211) in Python ![alt text](https://github.com/AliLotfi92/SNNLSTM/blob/master/assets/Framework.png) ### Requirements - Python 3.7 ### How to run ```bash python SpikingEMNIST.py ``` # Results
77
AliOsm/semantic-question-similarity
['question similarity', 'data augmentation']
['Tha3aroon at NSURL-2019 Task 8: Semantic Question Similarity in Arabic']
extract_sentences_embeddings.py 1_preprocess.py plot_sequence_weighted_attention.py 4_train.py data_generator.py plot_sentences_embeddings.py plot_examples_per_data_augmentation_type.py 5_infer.py vote.py 3_build_embeddings_dict.py 2_enlarge.py helpers.py add_item dfs build_model DataGenerator f1 process map_sentence load_embeddings_dict tsne_plot append sorted tuple list add Bidirectional ONLSTM word_attention SeqWeightedAttention Model summary word_lstm2 word_lstm1 Input compile join replace split recall precision show get_display TSNE reshape tight_layout scatter figure annotate append fit_transform range len
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/tha3aroon-at-nsurl-2019-task-8-semantic/question-similarity-on-q2q-arabic-benchmark)](https://paperswithcode.com/sota/question-similarity-on-q2q-arabic-benchmark?p=tha3aroon-at-nsurl-2019-task-8-semantic) # Semantic-Question-Similarity The official implementation of our paper: [Tha3aroon at NSURL-2019 Task 8: Semantic Question Similarity in Arabic](https://arxiv.org/abs/1912.12514), which was part of [NSURL-2019](http://nsurl.org/tasks/task8-semantic-question-similarity-in-arabic/) workshop on [Task 8](https://www.kaggle.com/c/nsurl-2019-task8) for Arabic Semantic Question Similarity. ## 0. Prerequisites - Python >= 3.6 - Install required packages listed in `requirements.txt` file - `pip install -r requirements.txt` - To use ELMo embeddings: - Clone ELMoForManyLangs repository - `git clone https://github.com/HIT-SCIR/ELMoForManyLangs.git`
78
Alibaba-NLP/DAAT-CWS
['chinese word segmentation']
['Coupling Distant Annotation and Adversarial Training for Cross-Domain Chinese Word Segmentation']
train.py tagger.py cws.py gcnn.py utils.py layer.py process_train_sentence read_train_file evaluator create_output data_iterator Embedding_layer GAN CRF_Layer gcnn GCNN_layer CRF_layer TextCNN_layer Embedding_layer create_input data_iterator Model create_dic create_mapping data_to_ids create_input data_iterator create_dic create_mapping fresh_dir data_to_ids list strip extend split append len list min shuffle zip append max len append process_train_sentence append join zip join remove create_output print makedirs close system enumerate open list items sorted append array max zip append map_seq zip Exists MakeDirs DeleteRecursively
DAAT-CWS Coupling Distant Annotation and Adversarial Training for Cross-Domain Chinese Word Segmentation Paper accepted by ACL 2020 ### Prerequisites - python == 2.7 - tensorflow == 1.8.0 ### Dataset The source-domain dataset PKU and the five distantly-annotated target datasets are placed in the `data/datasets` directory ### Usage Run `python train.py --tgt_train_path <tgt_train_path> --tgt_test_path <tgt_test_path>`
79
Allen--Wu/Symmetry-Detection-of-Occluded-Point-Cloud-Using-Deep-Learning
['symmetry detection']
['Symmetry Detection of Occluded Point Cloud Using Deep Learning']
tools/_init_paths.py lib/transformations.py tools/pr-curve.py tools/train.py tools/pr-curve-occ.py lib/utils.py lib/knn/knn_pytorch/__init__.py lib/loss.py tools/eval_linemod.py lib/loss_refiner.py tools/train-gen.py tools/eval_ycb.py lib/network.py tools/train-gen_backup.py lib/extractors.py lib/knn/__init__.py lib/knn/build_ffi.py lib/pspnet.py ResNet resnet50 Bottleneck resnet152 conv3x3 resnet34 resnet18 load_weights_sequential BasicBlock resnet101 Loss loss_calculation loss_calculation Loss_refine PoseRefineNetFeat PoseRefineNet PoseNet PoseNetFeat ModifiedResnet PSPModule PSPUpsample Upsample PSPNet orthogonalization_matrix vector_product inverse_matrix euler_matrix translation_matrix shear_matrix vector_norm quaternion_from_matrix quaternion_inverse Arcball projection_matrix unit_vector rotation_from_matrix random_rotation_matrix quaternion_from_euler affine_matrix_from_points decompose_matrix clip_matrix quaternion_conjugate quaternion_slerp quaternion_about_axis arcball_map_to_sphere scale_from_matrix euler_from_quaternion angle_between_vectors scale_matrix random_quaternion quaternion_matrix quaternion_imag superimposition_matrix arcball_nearest_axis projection_from_matrix translation_from_matrix is_same_quaternion shear_from_matrix euler_from_matrix rotation_matrix random_vector compose_matrix identity_matrix reflection_matrix concatenate_matrices is_same_transform arcball_constrain_to_axis reflection_from_matrix quaternion_multiply quaternion_real _import_module setup_logger TestKNearestNeighbor KNearestNeighbor _import_symbols get_bbox printCurve main printCurve main generate_obj_file_norm_pred generate_obj_file_norm generate_obj_file_norm_pred generate_obj_file_norm generate_obj_file_sym_pred generate_obj_file main generate_obj_file_norm_pred generate_obj_file_norm generate_obj_file_sym_pred generate_obj_file main main generate_obj_file generate_obj_file_norm_pred generate_obj_file_sym_pred items list OrderedDict load_state_dict zip ResNet ResNet ResNet ResNet ResNet bmm view size min repeat permute abs norm KNearestNeighbor contiguous add knn index_select mean unsqueeze len identity dot unit_vector identity squeeze eig array cos identity dot sin unit_vector array diag T squeeze eig atan2 trace array dot unit_vector identity diag squeeze eig array trace dot unit_vector array identity T squeeze eig dot array len dot unit_vector tan identity T vector_norm squeeze eig identity cross dot atan array T vector_norm asin inv cos copy atan2 dot any negative zeros array dot euler_matrix identity radians cos sin svd T concatenate inv identity quaternion_matrix roll dot eigh pinv vstack sum array identity sqrt atan2 empty cos sin array cos vector_norm dot array outer eigh trace negative empty array negative array negative array pi dot sin negative unit_vector acos sqrt rand pi sqrt negative array vector_norm dot arcball_constrain_to_axis array atleast_1d sqrt sum array atleast_1d sqrt expand_dims sum array sum array clip dot identity array array import_module setFormatter getLogger addHandler StreamHandler Formatter setLevel FileHandler dir _wrap_function getattr append callable int range len print append array range len axis Loss clf Pool symmetry ylabel set_printoptions title savefig load_state_dict legend range get format plot outf Normalize load print xlabel num_points PoseRefineNet PoseNet get_bbox randint flatten numpy dataset_root fmin abs argmax max open view masked_not_equal matmul add pad sum readline format setdiff1d concatenate getmaskarray size close astype shuffle estimator 
mean copy sqrt square join time T masked_equal float32 dot int32 zeros loadmat split printCurve squeeze astype float32 from_numpy tensor batch_size zero_grad nepoch DataLoader dataset_root save cuda max enumerate noise_trans PoseDataset_ycb repeat_epoch PoseDataset_linemod Adam strftime get_sym_list Inf state_dict size setup_logger estimator w mean start_epoch gmtime eval info listdir resume_refinenet refine_start int remove join log_dir time criterion num_points_mesh resume_posenet get_num_points_mesh backward parameters iteration train step sym_list fmin abs cam_cx_1 root loadmat randint cam_fx_1 list view add matmul append getmaskarray shuffle copy sqrt masked_equal float32 int32 array arange flatten argmax sum astype dot numpy get_bbox cam_fy_1 open masked_not_equal pad setdiff1d concatenate square T zeros cam_cy_1 len logical_and generate_obj_file_norm_pred
# Symmetry Detection of Occluded Point Cloud Using Deep Learning > Zhelun Wu, Hongyan Jiang, Siyun He > https://arxiv.org/pdf/2003.06520 ## Overview Symmetry detection is a classical problem in computer graphics, and most prior approaches rely on traditional geometric methods. In recent years, however, the rise of deep learning has changed the landscape of computer graphics. In this paper, we aim to solve symmetry detection for occluded point clouds in a deep-learning fashion. To the best of our knowledge, we are the first to use deep learning to tackle this problem. In this framework, two supervision signals, points on the symmetry plane and normal vectors, are employed to help pinpoint the symmetry plane. We conducted experiments on the YCB-Video dataset and demonstrate the efficacy of our method. ## Requirements Python 3.6, PyTorch 0.4.1 ## Running `sh experiments/scripts/train_ycb.sh` ## Symmetries
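The overview above names two supervision signals, points on the symmetry plane and normal vectors. The snippet below is only a minimal sketch of how such a dual loss could be combined; the function name, the specific distance terms, and the weighting are assumptions for illustration, not the repository's implementation (which lives in lib/loss.py).

```python
import torch
import torch.nn.functional as F

def dual_supervision_loss(pred_points, gt_points, pred_normals, gt_normals, w_normal=1.0):
    """Hypothetical combination of the two supervision signals:
    an L2 term on predicted symmetry-plane points plus a cosine term
    that penalizes misaligned plane normals (weighting is an assumption)."""
    point_loss = F.mse_loss(pred_points, gt_points)
    normal_loss = (1.0 - F.cosine_similarity(pred_normals, gt_normals, dim=-1)).mean()
    return point_loss + w_normal * normal_loss
```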
80
Allen--Wu/dense_symmetry
['symmetry detection']
['Symmetry Detection of Occluded Point Cloud Using Deep Learning']
tools/_init_paths.py lib/transformations.py tools/pr-curve.py tools/train.py tools/pr-curve-occ.py lib/utils.py lib/knn/knn_pytorch/__init__.py lib/loss.py tools/eval_linemod.py lib/loss_refiner.py tools/train-gen.py tools/eval_ycb.py lib/network.py tools/train-gen_backup.py lib/extractors.py lib/knn/__init__.py lib/knn/build_ffi.py lib/pspnet.py ResNet resnet50 Bottleneck resnet152 conv3x3 resnet34 resnet18 load_weights_sequential BasicBlock resnet101 Loss loss_calculation loss_calculation Loss_refine PoseRefineNetFeat PoseRefineNet PoseNet PoseNetFeat ModifiedResnet PSPModule PSPUpsample Upsample PSPNet orthogonalization_matrix vector_product inverse_matrix euler_matrix translation_matrix shear_matrix vector_norm quaternion_from_matrix quaternion_inverse Arcball projection_matrix unit_vector rotation_from_matrix random_rotation_matrix quaternion_from_euler affine_matrix_from_points decompose_matrix clip_matrix quaternion_conjugate quaternion_slerp quaternion_about_axis arcball_map_to_sphere scale_from_matrix euler_from_quaternion angle_between_vectors scale_matrix random_quaternion quaternion_matrix quaternion_imag superimposition_matrix arcball_nearest_axis projection_from_matrix translation_from_matrix is_same_quaternion shear_from_matrix euler_from_matrix rotation_matrix random_vector compose_matrix identity_matrix reflection_matrix concatenate_matrices is_same_transform arcball_constrain_to_axis reflection_from_matrix quaternion_multiply quaternion_real _import_module setup_logger TestKNearestNeighbor KNearestNeighbor _import_symbols get_bbox printCurve main printCurve main generate_obj_file_norm_pred generate_obj_file_norm generate_obj_file_norm_pred generate_obj_file_norm generate_obj_file_sym_pred generate_obj_file main generate_obj_file_norm_pred generate_obj_file_norm generate_obj_file_sym_pred generate_obj_file main main generate_obj_file generate_obj_file_norm_pred generate_obj_file_sym_pred items list OrderedDict load_state_dict zip ResNet ResNet ResNet ResNet ResNet bmm view size min repeat permute abs norm KNearestNeighbor contiguous add knn index_select mean unsqueeze len identity dot unit_vector identity squeeze eig array cos identity dot sin unit_vector array diag T squeeze eig atan2 trace array dot unit_vector identity diag squeeze eig array trace dot unit_vector array identity T squeeze eig dot array len dot unit_vector tan identity T vector_norm squeeze eig identity cross dot atan array T vector_norm asin inv cos copy atan2 dot any negative zeros array dot euler_matrix identity radians cos sin svd T concatenate inv identity quaternion_matrix roll dot eigh pinv vstack sum array identity sqrt atan2 empty cos sin array cos vector_norm dot array outer eigh trace negative empty array negative array negative array pi dot sin negative unit_vector acos sqrt rand pi sqrt negative array vector_norm dot arcball_constrain_to_axis array atleast_1d sqrt sum array atleast_1d sqrt expand_dims sum array sum array clip dot identity array array import_module setFormatter getLogger addHandler StreamHandler Formatter setLevel FileHandler dir _wrap_function getattr append callable int range len print append array range len axis Loss clf Pool symmetry ylabel set_printoptions title savefig load_state_dict legend range get format plot outf Normalize load print xlabel num_points PoseRefineNet PoseNet get_bbox randint flatten numpy dataset_root fmin abs argmax max open view masked_not_equal matmul add pad sum readline format setdiff1d concatenate getmaskarray size close astype shuffle estimator 
mean copy sqrt square join time T masked_equal float32 dot int32 zeros loadmat split printCurve squeeze astype float32 from_numpy tensor batch_size zero_grad nepoch DataLoader dataset_root save cuda max enumerate noise_trans PoseDataset_ycb repeat_epoch PoseDataset_linemod Adam strftime get_sym_list Inf state_dict size setup_logger estimator w mean start_epoch gmtime eval info listdir resume_refinenet refine_start int remove join log_dir time criterion num_points_mesh resume_posenet get_num_points_mesh backward parameters iteration train step sym_list fmin abs cam_cx_1 root loadmat randint cam_fx_1 list view add matmul append getmaskarray shuffle copy sqrt masked_equal float32 int32 array arange flatten argmax sum astype dot numpy get_bbox cam_fy_1 open masked_not_equal pad setdiff1d concatenate square T zeros cam_cy_1 len logical_and generate_obj_file_norm_pred
# Symmetry Detection of Occluded Point Cloud Using Deep Learning > Zhelun Wu, Hongyan Jiang, Siyun He > https://arxiv.org/pdf/2003.06520 ## Overview Symmetry detection is a classical problem in computer graphics, and most prior approaches rely on traditional geometric methods. In recent years, however, the rise of deep learning has changed the landscape of computer graphics. In this paper, we aim to solve symmetry detection for occluded point clouds in a deep-learning fashion. To the best of our knowledge, we are the first to use deep learning to tackle this problem. In this framework, two supervision signals, points on the symmetry plane and normal vectors, are employed to help pinpoint the symmetry plane. We conducted experiments on the YCB-Video dataset and demonstrate the efficacy of our method. ## Requirements Python 3.6, PyTorch 0.4.1 ## Running `sh experiments/scripts/train_ycb.sh` ## Symmetries
81
AllenChen1998/AIEF
['denoising', 'face verification']
['HRFA: High-Resolution Feature-based Attack']
nets.py dnnlib/networks_stylegan.py dnnlib/perceptual_model.py dnnlib/tflib/tfutil.py dnnlib/util.py utils.py main.py dnnlib/__init__.py dnnlib/tflib/__init__.py dnnlib/tflib/network.py main create_variable_for_generator inception_resnet_v1 reduction_b Generator block8 block35 reduction_a create_stub inference block17 initialize_uninitialized Facenet output build get_time PerceptualModel unpack_bz2 load_images tf_custom_logcosh_loss tf_custom_l1_loss EasyDict get_top_level_function_name get_dtype_and_ctype open_url get_obj_by_name list_dir_recursively_with_ignore get_module_dir_by_obj_name ask_yes_no is_pickleable get_module_from_obj_name Logger is_top_level_function get_obj_from_module format_time copy_files_and_create_dirs tuple_product is_url call_func_by_name variables_initializer clip format print makedirs astype output float32 build mean get_time close save open reset_default_graph walk range run concat concat get_variable load initialize_uninitialized as_default get_default_session print reshape relu minimize float32 reduce_sum placeholder open tile append abs range get_variable variables_initializer len global_variables run print items round isinstance list LANCZOS vstack resize append expand_dims array read int rint print format dtype hasattr isinstance name __name__ import_module sub get_obj_from_module split getattr split get_module_from_obj_name get_obj_by_name get_module_from_obj_name remove basename walk normpath dirname copyfile makedirs urljoin urlparse join replace glob hex sub hexdigest makedirs
AllenChen1998/AIEF
82
Alpaca07/dtr
['scene text recognition']
['Focus-Enhanced Scene Text Recognition with Deformable Convolutions']
train.py torch_deform_conv/layers.py eval.py dataset.py models/deformable_crnn.py utils.py torch_deform_conv/deform_conv.py TestDataset LMDBDataset val weights_init train_batch averager AlignCollate loadData ResizeNormalize strLabelConverter DeformableCRNN ResidualBlock BidirectionalLSTM th_flatten th_batch_map_coordinates th_map_coordinates np_repeat_2d sp_batch_map_offsets th_batch_map_offsets sp_batch_map_coordinates th_generate_grid th_repeat th_gather_2d ConvOffset2D normal_ __name__ fill_ data decode batch_size DataLoader IntTensor max crnn view add iter encode next range averager loadData size eval zip float criterion print Variable min parameters len criterion model backward loadData size step zero_grad IntTensor encode next copy_ expand_dims tile th_flatten index_select size detach clamp size stack type long th_gather_2d array clip concatenate Variable _get_vals_by_coords size stack cuda type long is_cuda cat detach reshape repeat sp_batch_map_coordinates list reshape np_repeat_2d stack meshgrid type cuda range view th_batch_map_coordinates size th_generate_grid type is_cuda
# Deformable Text Recognition This software implements the Deformable Convolutional Recurrent Neural Network, a combination of Convolutional Recurrent Neural Networks, Deformable Convolutional Networks, and Residual Blocks. Some of the code is from [crnn.pytorch](https://github.com/meijieru/crnn.pytorch) and [Deformable-ConvNets](https://github.com/msracver/Deformable-ConvNets). For details, please refer to [our paper](https://ieeexplore.ieee.org/abstract/document/9064428). ## Requirements * [Python 3.6](https://www.python.org/) * [PyTorch 1.0](https://pytorch.org/) * [TorchVision](https://pypi.org/project/torchvision/) * [Numpy](https://pypi.org/project/numpy/) * [Six](https://pypi.org/project/six/) * [Scipy](https://pypi.org/project/scipy/) * [LMDB](https://pypi.org/project/lmdb/)
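The repository ships its own deformable-convolution layer (torch_deform_conv/deform_conv.py). Purely as an illustration of the building block the readme names, here is a minimal sketch using torchvision's DeformConv2d; this is not the code the repo actually uses, and the channel sizes and shapes are arbitrary.

```python
import torch
from torch import nn
from torchvision.ops import DeformConv2d

# A 3x3 deformable convolution consumes a 2*3*3 = 18-channel offset map,
# which is itself predicted by an ordinary convolution on the same features.
offset_conv = nn.Conv2d(64, 18, kernel_size=3, padding=1)
deform_conv = DeformConv2d(64, 128, kernel_size=3, padding=1)

x = torch.randn(1, 64, 32, 100)   # feature map of a text-line image (illustrative shape)
offset = offset_conv(x)           # learned sampling offsets
y = deform_conv(x, offset)        # -> (1, 128, 32, 100)
```

The learned offsets let the kernel sample off the regular grid, which is what allows the recognizer to follow curved or distorted text.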
83
Alpha-Video/AlphaVideo
['action detection']
['Asynchronous Interaction Aggregation for Action Detection']
alphavideo/mot/TubeTK/head.py alphavideo/mot/TubeTK/config.py alphavideo/model/TubeTK.py alphavideo/model/__init__.py alphavideo/model/AlphAction.py alphavideo/action/AlphAction/utils/misc.py alphavideo/action/AlphAction/utils/structures.py alphavideo/mot/TubeTK/TubeTK.py alphavideo/__init__.py alphavideo/loss/__init__.py alphavideo/action/AlphAction/IA_structure.py alphavideo/action/AlphAction/ActionDetector.py alphavideo/utils/tube_nms.py alphavideo/base/fpn.py alphavideo/base/resnet3d.py alphavideo/action/AlphAction/SlowFast.py alphavideo/loss/focal_loss.py alphavideo/action/AlphAction/config.py alphavideo/csrc/tube_nms_c/setup.py alphavideo/utils/load_url.py alphavideo/action/AlphAction/utils/registry.py alphavideo/utils/roi_align_3d.py alphavideo/mot/TubeTK/utils.py setup.py alphavideo/action/AlphAction/head.py alphavideo/action/AlphAction/utils/IA_helper.py action_detector_res50 ActionDetector action_detector_res101 Pooler3d ROIActionHead PostProcessor FCPredictor MLPFeatureExtractor InteractionBlock separate_batch_per_person ParallelIAStructure unfuse_batch_num make_ia_structure fuse_batch_num DenseSerialIAStructure IAStructure init_layer separate_roi_per_person SerialIAStructure slowfast_res50 SlowFast slowfast_res101 has_memory has_person has_object _block_set pad_sequence prepare_pooled_feature cat _register_generic Registry BoxList MemoryPool FPN resnet101 ResNet resnet50 Bottleneck one_hot_embedding focal_loss alphaction_res101 alphaction_res50 tubeTK TrackHead multi_apply Scale TubeTK bbox_enclose align_bbox_on_frame iou_loss area volume bbox_iou distance2bbox tube_giou bbox_iou_loss get3bboxes_from_tube bbox_overlaps giou_loss tube_iou download_file_from_google_drive save_response_content get_confirm_token load_url_google ProgressBar ROIAlign3d _ROIAlign3d multiclass_tube_nms tube_nms WEIGHT_50 WEIGHT_101 zip size cat device append zeros enumerate len size device append zeros enumerate len size size normal_ weight constant_ bias resnet50 resnet101 list from_iterable I_BLOCK_LIST _block_set I_BLOCK_LIST _block_set I_BLOCK_LIST _block_set new_full size enumerate add_field detach BoxList zip append split ResNet ResNet eye one_hot_embedding pow float cuda list map clamp tube_iou clamp tube_giou bbox_enclose align_bbox_on_frame area volume get3bboxes_from_tube bbox_overlaps align_bbox_on_frame clamp area volume get3bboxes_from_tube bbox_overlaps ndarray isinstance min shape repeat zeros max minimum ndarray isinstance clamp min maximum max clip minimum ndarray isinstance clamp min maximum max clip bbox_iou area bbox_overlaps join str download_file_from_google_drive print expanduser makedirs get get_confirm_token save_response_content Session items startswith get basename ProgressBar new_full sort nms_op new_zeros append range cat Tensor new_zeros isinstance nms
## Introduction AlphaVideo is an open-source video understanding toolbox based on [PyTorch](https://pytorch.org/) covering multi-object tracking and action detection. In AlphaVideo, we released the first one-stage multi-object tracking (MOT) system **TubeTK**, which achieves 66.9 MOTA on the [MOT-16](https://motchallenge.net/results/MOT16) dataset and 63 MOTA on the [MOT-17](https://motchallenge.net/results/MOT17) dataset. For action detection, we released an efficient model **AlphAction**, which is the first open-source project that achieves 30+ mAP (32.4 mAP) with a single model on the [AVA](https://research.google.com/ava/) dataset. ## Quick Start ### pip Run this command: ```shell pip install alphavideo ```
84
AmingWu/CCN
['visual commonsense reasoning']
['Connective Cognition Network for Directional Visual Commonsense Reasoning']
utils/smalldetector.py utils/newdetector.py dataloaders/vcr.py utils/pytorch_misc.py utils/detector.py dataloaders/box_utils.py dataloaders/mask_utils.py config.py utils/testdetector.py dataloaders/bert_field.py train/train.py BertField load_image to_tensor_and_normalize resize_image make_mask _spaced_points VCR _fix_tokenization collate_fn VCRLoader AttentionQA _to_gpu SimpleDetector _load_resnet_imagenet _load_resnet SimpleDetector _load_resnet_imagenet _load_resnet extra_leading_dim_in_sequence detokenize find_latest_checkpoint Flattener pad_sequence restore_best_checkpoint batch_iterator print_para restore_checkpoint save_checkpoint time_batch clip_grad_norm batch_index_iterator SimpleDetector _load_resnet_imagenet _load_resnet SimpleDetector _load_resnet_imagenet _load_resnet pad size min resize _spaced_points reshape Path meshgrid zeros append BertField isinstance SequenceLabelField as_tensor_dict zip Batch stack get_text_field_mask long load_url resnet50 range load_state_dict resnet50 range time enumerate len max enumerate new_zeros items norm format sorted print tuple size isnan mul_ item float prod str join format append listdir split join format print copyfile save state_dict load join load_state_dict isinstance load int isinstance find_latest_checkpoint print move_optimizer_to_cuda load_state_dict to_string requires_grad format set_index set_option print named_parameters range batch_index_iterator len
# CCN ## Connective Cognition Network for Directional Visual Commonsense Reasoning (NeurIPS 2019) ![Method](https://github.com/AmingWu/CCN/blob/master/pic/fig1.png?raw=true "Illustration of our method") Visual commonsense reasoning (VCR) has been introduced to boost research on cognition-level visual understanding, i.e., a thorough understanding of the correlated details of a scene plus inference with related commonsense knowledge. We propose a connective cognition network (CCN) to dynamically reorganize the visual neuron connectivity, contextualized by the meaning of questions and answers. Our method mainly includes visual neuron connectivity, contextualized connectivity, and directional connectivity. ![Framework](https://github.com/AmingWu/CCN/blob/master/pic/fig2.png?raw=true "Illustration of our framework") The goal of visual neuron connectivity is to obtain a global representation of an image, which is helpful for a thorough understanding of visual content. It mainly includes visual element connectivity and the computation of both conditional centers and GraphVLAD. ![Visual Neuron Connectivity](https://github.com/AmingWu/CCN/blob/master/pic/fig3.png?raw=true "Illustration of Visual Neuron Connectivity") ## Setting Up and Data Preparation We used pytorch 1.1.0, python 3.6, and CUDA 9.0 for this project. Before using this code, you should download the VCR dataset from https://visualcommonsense.com/. Follow the steps given at https://github.com/rowanz/r2c/ to set up the running environment. ## Training and Validation
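The description above aggregates visual-element features against a set of (conditional) centers via GraphVLAD. The sketch below shows a generic NetVLAD-style soft-assignment aggregation, which that description resembles; the conditional-center computation and the graph connectivity are omitted, and the function is an illustration rather than the repository's GraphVLAD.

```python
import torch
import torch.nn.functional as F

def vlad_aggregate(features, centers):
    """features: (N, D) visual-element features; centers: (K, D) centers.
    Returns a (K*D,) global descriptor built from soft-assigned residuals."""
    assign = F.softmax(features @ centers.t(), dim=1)        # (N, K) soft assignment
    resid = features.unsqueeze(1) - centers.unsqueeze(0)     # (N, K, D) residuals
    vlad = (assign.unsqueeze(-1) * resid).sum(dim=0)         # (K, D)
    vlad = F.normalize(vlad, dim=1)                          # intra-normalization
    return F.normalize(vlad.flatten(), dim=0)
```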
85
Ananaskelly/TPE
['face verification']
['Triplet Probabilistic Embedding for Face Verification and Clustering']
core/tpe_batcher.py utils/tf_utils.py core/metrics.py core/tpe_model.py experiments/main_cos_dist.py utils/data_processing.py calc_eer cos_similarity euclid_similarity TPEBatcher TPEModel generate_tuples cross_arrays load_data txt_to_numpy split_set bias_variable weight_variable_xavier weight_variable minimum min maximum linspace zeros float abs max enumerate len glob join load join int format basename glob lstrip append shape full combinations uniquie squeeze shape argwhere listdir array range truncated_normal truncated_normal xavier_initializer
# TPE https://arxiv.org/pdf/1604.05417.pdf
86
AnastasisKratsios/NeurIPS2020_Non_Euclidean_Universal_Approximation_Example_DNN_Layer_Comparisons
['gaussian processes']
['Non-Euclidean Universal Approximation']
Init_Dump.py Helper_Functions.py Hyperparameter_Grid.py Optimal_Deep_Feature_and_Readout_Util.py Example.py Prepare_Data_California_Housing.py Helper_Utility.py def_trainable_layers_Nice_Input_Output build_and_predict_bad_model build_and_predict_Vanilla_model build_and_predict_nice_model def_trainable_layers_Randomized_Feature def_trainable_layers_Bad_Input_Output build_and_predict_nice_modelII def_trainable_layers_Vanilla def_trainable_layers_Nice_Input_Output reporter build_simple_deep_classifier def_simple_deep_classifer mean_absolute_percentage_error fullyConnected_Dense_Invertible fullyConnected_Dense build_ffNN write_results Depth_Selector_sparse_trainable Modulator_Readout check_file feature_map check_path evaluate_structure Rescaled_Leaky_ReLU calculate_results evaluate_branching_structure build_grid prepare_manual_clusters prepare_columntransformer reporter data_to_spherical_to_euclidean prepare_data fit_structure mean_absolute_percentage_error fullyConnected_Dense add_is_train def_model is_in_random_ball init_delta check_path compositer_univariate fullyConnected_Dense_Invertible activ_univariate fullyConnected_Dense fullyConnected_Dense_Desctructor activ_univariate_inv fullyConnected_Dense_Random data_to_spherical_to_euclidean prepare_columntransformer prepare_data feature_map leaky_relu relu Adam Model Input range compile KerasRegressor RandomizedSearchCV predict fit relu Adam Model Input compile KerasRegressor RandomizedSearchCV predict fit relu Adam Model Input range compile KerasRegressor RandomizedSearchCV predict fit relu Adam Model Input compile KerasRegressor RandomizedSearchCV predict fit swish RandomizedSearchCV KerasRegressor best_estimator_ sum predict fit Sequential add Dense range compile RandomizedSearchCV KerasClassifier best_estimator_ sum predict fit DataFrame array mkdir Path norm longitude cos latitude mean sin DataFrame data_to_spherical_to_euclidean concat reset_index train_test_split assign concat join concat to_csv feature_map get_dummies print_save_messsage add_is_train read_csv drop ColumnTransformer columns train_test_split DataFrame array read_csv drop Adam Model Input range compile clear_session time collect best_params_ predict fit mean mean_squared_error mean_absolute_percentage_error mean_absolute_error prepare_columntransformer KerasRegressor RandomizedSearchCV Pipeline build_grid list columns fit_structure evaluate_structure keys print join check_path str join concatenate maximum evaluate_structure float range values len expanduser bool mean pdist exp pow sign sqrt sign
# NeurIPS - 2020: [Non Euclidean Universal Approximation](https://arxiv.org/abs/2006.02341) Coauthored by: - [Anastasis Kratsios](https://people.math.ethz.ch/~kratsioa/) - [Ievgen Bilokopytov](https://apps.ualberta.ca/directory/person/bilokopy) # Cite As: @inproceedings{NEURIPS2020_786ab8c4, author = {Kratsios, Anastasis and Bilokopytov, Ievgen}, booktitle = {Advances in Neural Information Processing Systems}, editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin}, pages = {10635--10646},
87
AndMu/Market-Wisdom
['sentiment analysis']
['Market Trend Prediction using Sentiment Analysis: Lessons Learned and Paths Forward']
src/PortfolioBasic/stockstats.py src/learning/__init__.py src/__init__.py src/utilities/Constants.py src/MarketData.py src/utilities/FileIterators.py src/DataLoader.py src/utilities/Utilities.py src/learning/BasicLearning.py src/utilities/TextHelper.py src/utilities/NumpyHelper.py src/utilities/DocumentExtractors.py src/Experiment.py src/PortfolioBasic/Technical/Indicators.py src/utilities/LoggingFileHandler.py src/utilities/DataLoaders.py src/PortfolioBasic/Technical/Analysis.py src/utilities/__init__.py src/PortfolioBasic/Definitions.py DataLoader build_model get_data lstm_prediction processing svm_prediction RedditMarketDataSource QuandlMarketDataSource MarketData BloombergMarketDataSource RbfClassifier LinerClassifier BaseClassifier CalibratedClassifier HeaderFactory Utilities StockDataFrame TechnicalPerformance RsiIndicator Williams BollingerIndicator MACDIndicator CommodityChannelIndex Indicator MomentumIndicator AverageTrueRange AverageDirectionalIndex CombinedIndicator TripleExponentialMovingAverage SemEvalDataLoader DataLoader ImdbDataLoader MultiFileLineSentence MultiFileLineDocument SingleFileLineSentence FileIterator SemEvalFileReader ClassDataIterator SemEvalDataIterator SingeDataIterator DataIterator LoggingFileHandler NumpyDynamic TextHelper Utilities CollectionUtilities ClassConvertor Sequential add Dense Conv1D LSTM MaxPooling1D Activation compile Dropout QuandlMarketDataSource BloombergMarketDataSource RedditMarketDataSource DataLoader load_data transform StandardScaler fit build_model reshape make_dual make_single_dimension summary predict fit predict_proba Pipeline predict fit measure_performance_auc get_data lstm_prediction svm_prediction train_test_split measure_performance PorterStemmer
# Market Trend Prediction using Sentiment Analysis: Lessons Learned and Paths Forward [Published Paper](wisdom_paper.pdf) The Twitter bot runs as [MarketPredGuy](https://twitter.com/MarketPredGuy); its [code](https://github.com/AndMu/Wikiled.Market) is available. ## *pSenti* Lexicon system * Download the [*pSenti*](https://github.com/AndMu/Wikiled.Sentiment/releases/tag/2.6.55) lexicon-based utility
88
AndersonPeng/imitation-learning-seq-conv
['imitation learning']
['Imitation Learning for Sentence Generation with Dilated Convolutions Using Adversarial Training']
distribs.py train_pg.py seq_conv_model.py test.py runner.py word2vec_model.py preprocess.py train_word2vec.py ops.py train_mle.py CategoricalDistrib DiagGaussianDistrib lstmCell sample_top single_layer_lstmCell static_lstm fc conv2d deconv2d conv1d layer_norm sample dynamic_lstm multi_layer_lstmCell tensor_array_lstm embed tensor_array_lstm_decoder normalize_text create_skipgram create_dict discounted_returns Runner SeqConvModel res_block pad_sentence_batch exp_decay sample_state_action_batch pad_sentence_batch gen_state_action_batch Word2vecModel as_list sqrt as_list sqrt as_list sqrt as_list sqrt sqrt sqrt dynamic_rnn zero_state multi_layer_lstmCell float32 transpose stack unstack append range as_list while_loop transpose write TensorArray stack range while_loop transpose write TensorArray sqrt stack sample shape random_uniform sum most_common Counter enumerate min append max range len int list zeros_like reversed range conv1d relu append max len zeros int32 enumerate zeros range int32 randint zeros range int32 randint
# Imitation Learning for Sentence Generation with Dilated Convolutions Using Adversarial Training The source code for the paper "Imitation Learning for Sentence Generation with Dilated Convolutions Using Adversarial Training". <br> ## Prerequisites - [python >= 3.5.0](https://www.python.org/) - [tensorflow >= 1.8.0](https://www.tensorflow.org/) <br> ## Execution **1.** For data preprocessing : ```
89
AndlollipopFU/PCB
['person retrieval', 'person re identification', 'data augmentation']
['Camera Style Adaptation for Person Re-identification', 'Beyond Part Models: Person Retrieval with Refined Part Pooling (and a Strong Convolutional Baseline)']
model/PCB/model.py model/ft_net_dense/model.py model/ft_ResNet50/train.py model/PCB/train.py model/ft_ResNet50/model.py random_erasing.py prepare.py train.py evaluate.py prepare_static.py test.py evaluate_rerank.py re_ranking.py model/fp16/train.py model.py demo.py model/ft_net_dense/train.py evaluate_gpu.py model/fp16/model.py imshow sort_img compute_mAP evaluate compute_mAP evaluate compute_mAP evaluate ft_net_dense ClassBlock ft_net ft_net_NAS PCB_test weights_init_classifier ft_net_middle PCB weights_init_kaiming prepare_model RandomErasing k_reciprocal_neigh re_ranking load_network get_id extract_feature fliplr train_model save_network draw_curve ft_net_dense ClassBlock ft_net PCB_test weights_init_classifier ft_net_middle PCB weights_init_kaiming train_model save_network draw_curve ft_net_dense ClassBlock ft_net PCB_test weights_init_classifier ft_net_middle PCB weights_init_kaiming train_model save_network draw_curve ft_net_dense ClassBlock ft_net ft_net_NAS PCB_test weights_init_classifier ft_net_middle PCB weights_init_kaiming train_model save_network draw_curve ft_net_dense ClassBlock ft_net PCB_test weights_init_classifier ft_net_middle PCB weights_init_kaiming train_model save_network draw_curve title imread pause view argsort intersect1d numpy argwhere in1d cpu mm append setdiff1d compute_mAP dot argsort intersect1d argwhere append flatten argwhere in1d zero_ range len view numpy cpu mm data normal_ kaiming_normal_ __name__ constant_ data normal_ __name__ constant_ time format print shape zeros range zeros_like around max list exp transpose append sum range concatenate astype mean unique minimum int float32 argpartition k_reciprocal_neigh zeros len load join which_epoch load_state_dict index_select long norm view FloatTensor print Variable size model div sqrt cuda zero_ expand_as float PCB fliplr range cat append int basename data draw_curve Softmax model zero_grad max sm shape load_state_dict append range state_dict detach format save_network time criterion backward print Variable train step join plot savefig legend append join save is_available cuda state_dict half
<h1 align="center"> Person_reID_baseline_pytorch </h1> [![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/layumi/Person_reID_baseline_pytorch.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/layumi/Person_reID_baseline_pytorch/context:python) [![Build Status](https://travis-ci.org/layumi/Person_reID_baseline_pytorch.svg?branch=master)](https://travis-ci.org/layumi/Person_reID_baseline_pytorch) [![Total alerts](https://img.shields.io/lgtm/alerts/g/layumi/Person_reID_baseline_pytorch.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/layumi/Person_reID_baseline_pytorch/alerts/) [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT) A tiny, friendly, strong baseline code for Person-reID (based on [pytorch](https://pytorch.org)). - **Strong.** It is consistent with the new baseline result in several top-conference works, e.g., [Beyond Part Models: Person Retrieval with Refined Part Pooling(ECCV18)](https://arxiv.org/abs/1711.09349) and [Camera Style Adaptation for Person Re-identification(CVPR18)](https://arxiv.org/abs/1711.10295). We arrived Rank@1=88.24%, mAP=70.68% only with softmax loss. - **Small.** With fp16, our baseline could be trained with only 2GB GPU memory. - **Friendly.** You may use the off-the-shelf options to apply many state-of-the-art tricks in one line. Besides, if you are new to person re-ID, you may check out our **[Tutorial](https://github.com/layumi/Person_reID_baseline_pytorch/tree/master/tutorial)** first (8 min read) :+1: .
90
AndreaBorghesi/anomaly_detection_HPC
['anomaly detection']
['Anomaly Detection using Autoencoders in High Performance Computing Systems']
util.py detect_anomalies.py main correlation_autoencoder semi_supervised_ae_based retrieve_data millis_unix_time encode_category count_idle_periods find_gaps_timeseries evaluate_predictions is_node_idle pairwise compute_error_threshold plot_error_distribution_singleNode find_idle_periods unix_time_millis split_dataset get_labels plot_errors_DL is_in_timeseries add_freq_govs_to_plot drop_stuff check_df error_distribution_2_class_varyThreshold preprocess_noScaling create_df preprocess prepare_dataframe load_data plot_errors_DL_fixIdle replace split_dataset compile fit predict shape Model plot_errors_DL evaluate_predictions Input error_distribution_2_class_varyThreshold values len columns correlation_autoencoder find_gaps_timeseries int retrieve_data create_df print prepare_dataframe transpose semi_supervised_ae_based DataFrame check_df isfile dropna fillna encode_category MinMaxScaler fit_transform encode_category concat get_dummies list asarray mean shape nanmean Decimal sqrt r2_score append abs keys range len add_subplot AutoDateFormatter set_major_formatter show list set_major_locator axvline title append add_freq_govs_to_plot range plot keys print add_patch AutoDateLocator date2num figure plot_errors_DL_fixIdle Rectangle len next tee pairwise append pairwise total_seconds append is_node_idle iterrows iterrows print append is_node_idle float date2num Rectangle add_patch print format load_data update items list sorted print OrderedDict timedelta keys show list plot add_freq_govs_to_plot yticks add_subplot title figure append xticks keys range len isfile nunique preprocess_noScaling columns find_idle_periods count_idle_periods index apply preprocess drop_stuff list concatenate set append range len range len abs percentile show list axvline ylabel shape legend precision_recall_fscore_support append range asarray format plot keys print xlabel figure ravel len percentile list asarray shape append abs keys range len show list xlabel yticks ylabel shape hist figure legend append xticks abs keys range len
# Semi-supervised Autoencoder-based anomaly detection on HPC Systems This repository contains the set of scripts capable of replicating parts of the work described in: 1) "Anomaly Detection using Autoencoders in High Performance Computing Systems", Andrea Borghesi, Andrea Bartolini, Michele Lombardi, Michela Milano, Luca Benini, IAAI19 (proceedings in process) -- https://arxiv.org/abs/1902.08447 2) "Online Anomaly Detection in HPC Systems", Andrea Borghesi, Antonio Libri, Luca Benini, Andrea Bartolini, AICAS19 (proceedings in process) -- https://arxiv.org/abs/1811.05269 Fine-grained data was collected on the D.A.V.I.D.E. HPC system (developed in
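The papers listed above follow the usual semi-supervised scheme: train an autoencoder on data from normal operation, then flag a time window as anomalous when its reconstruction error exceeds a threshold learned from the normal data. Below is a minimal sketch of that scheme; the layer sizes, optimizer, and the 95th-percentile threshold are assumptions, not the repository's settings (its actual models live in detect_anomalies.py).

```python
import numpy as np
from tensorflow.keras import layers, Model

def build_autoencoder(n_features):
    inp = layers.Input(shape=(n_features,))
    code = layers.Dense(n_features // 4, activation="relu")(inp)   # bottleneck
    out = layers.Dense(n_features, activation="linear")(code)
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# X_normal: node metrics collected during healthy operation (placeholder data here)
X_normal = np.random.rand(1000, 64).astype("float32")
ae = build_autoencoder(X_normal.shape[1])
ae.fit(X_normal, X_normal, epochs=10, batch_size=32, verbose=0)

errors = np.mean((X_normal - ae.predict(X_normal)) ** 2, axis=1)
threshold = np.percentile(errors, 95)   # error threshold separating normal from anomalous

def is_anomaly(x):
    return np.mean((x - ae.predict(x)) ** 2, axis=1) > threshold
```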
91
AndreaXu0401/ALIDRC
['active learning', 'relation classification']
['Using active learning to expand training data for implicit discourse relation recognition']
data_helpers.py active_learning_2.py train.py text_cnn.py dev_step train_step batch_iter load_data_and_labels build_input_data build_word_vocab_embd build_pos_vocab_embd load_pos_pkl clean_str load_word_pkl load_data_and_labels_discourse _variable_on_cpu TextCNN dev_step train_step print isoformat format run list batch_iter batch_size print classification_report extend zip f1_score accuracy_score argmax run sub list readlines concatenate int permutation arange min array range len list replace readlines append split array load close append open load close append open load insert tolist close len append range open load insert tolist close load_word2vec_format append open add_summary str write tolist close open
Using Active Learning to Expand Training Data for Implicit Discourse Relation Recognition ===== It is a slightly simplified implementation of our paper Using Active Learning to Expand Training Data for Implicit Discourse Relation Recognition in TensorFlow. Requirements ----- Python 3.5 Tensorflow 1.4 Numpy sklearn
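The title describes expanding training data with pool-based active learning. The loop below is a generic uncertainty-sampling sketch using a scikit-learn classifier as a stand-in for the repository's TextCNN; the function name and the least-confident selection criterion are assumptions for illustration, not this repository's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling_round(X_labeled, y_labeled, X_pool, k=100):
    """Train on the current labeled set, then return the indices of the k
    pool examples the model is least confident about (to be annotated and
    added to the training data for the next round)."""
    clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
    confidence = clf.predict_proba(X_pool).max(axis=1)
    query_idx = np.argsort(confidence)[:k]
    return clf, query_idx
```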
92
AndresPMD/StacMR
['cross modal retrieval']
['StacMR: Scene-Text Aware Cross-Modal Retrieval']
coco-caption/pycocoevalcap/__init__.py GCN_lib/Rs_GCN.py cocoapi-master/PythonAPI/pycocotools/coco.py models/S2VTModel.py models/DecoderRNN.py cocoapi-master/PythonAPI/build/lib.linux-x86_64-2.7/pycocotools/mask.py cocoapi-master/PythonAPI/pycocotools/mask.py coco-caption/pyciderevalcap/ciderD/ciderD.py opts.py CTC_img_downloader.py coco-caption/pyciderevalcap/ciderD/__init__.py cocoapi-master/PythonAPI/pycocotools/__init__.py train.py coco-caption/pycocoevalcap/rouge/__init__.py coco-caption/pycocoevalcap/bleu/bleu_scorer.py cocoapi-master/PythonAPI/build/lib.linux-x86_64-2.7/pycocotools/cocoeval.py coco-caption/pyciderevalcap/cider/cider_scorer.py coco-caption/pycocoevalcap/meteor/__init__.py data.py evaluation.py evaluation_models.py coco-caption/pycocoevalcap/tokenizer/ptbtokenizer.py coco-caption/pyciderevalcap/eval.py misc/utils.py coco-caption/pyciderevalcap/tokenizer/ptbtokenizer.py models/EncoderRNN.py models/__init__.py coco-caption/pycocoevalcap/rouge/rouge.py cocoapi-master/PythonAPI/setup.py misc/rewards.py coco-caption/pyciderevalcap/cider/__init__.py coco-caption/pycocoevalcap/meteor/meteor.py cocoapi-master/PythonAPI/build/lib.linux-x86_64-2.7/pycocotools/__init__.py cocoapi-master/PythonAPI/build/lib.linux-x86_64-2.7/pycocotools/coco.py coco-caption/pyciderevalcap/tokenizer/__init__.py misc/cocoeval.py coco-caption/pycocotools/cocoeval.py models/S2VTAttModel.py coco-caption/pyciderevalcap/__init__.py models/Attention.py coco-caption/pycocoevalcap/eval.py coco-caption/pyciderevalcap/ciderD/ciderD_scorer.py model.py coco-caption/pycocoevalcap/cider/cider.py coco-caption/pycocoevalcap/bleu/bleu.py evaluate_models.py coco-caption/pycocoevalcap/bleu/__init__.py coco-caption/pyciderevalcap/cider/cider.py cocoapi-master/PythonAPI/pycocotools/cocoeval.py coco-caption/pycocoevalcap/tokenizer/__init__.py coco-caption/pycocotools/__init__.py coco-caption/pycocotools/mask.py vocab.py coco-caption/pycocotools/coco.py extract_feats.py get_loaders CocoDataset get_precomp_loader get_transform FlickrDataset get_loader_single get_test_loader collate_fn PrecompDataset get_paths i2t encode_data AverageMeter LogCollector t2i evalrank extract_feats i2t encode_data AverageMeter LogCollector t2i evalrank parse_opt validate accuracy save_checkpoint adjust_learning_rate main train Vocabulary from_txt main from_flickr_json build_vocab CIDErEvalCap Cider precook CiderScorer cook_test cook_refs CiderD precook CiderScorer cook_test cook_refs PTBTokenizer COCOEvalCap Bleu precook BleuScorer cook_test cook_refs Cider Meteor my_lcs Rouge PTBTokenizer COCO _isArrayLike Params COCOeval encode decode area toBbox COCO _isArrayLike Params COCOeval encode decode area toBbox COCO _isArrayLike Params COCOeval encode decode area toBbox Rs_GCN suppress_stdout_stderr COCOScorer score test_cocoscorer array_to_str init_cider_scorer get_self_critical_reward LanguageModelCriterion decode_sequence RewardCriterion Attention DecoderRNN EncoderRNN S2VTAttModel S2VTModel load join list sort stack zip long enumerate FlickrDataset DataLoader DataLoader PrecompDataset Compose Normalize join use_restval endswith get_precomp_loader get_transform data_path get_loader_single data_name get_paths join use_restval endswith get_precomp_loader get_transform data_path get_loader_single data_name get_paths time AverageMeter copy forward_emb LogCollector zeros val_start enumerate load workers encode_data i2t batch_size print tuple crop_size flatten get_test_loader t2i load_state_dict VSRN data_name save range len median reshape min 
order_sim flatten mean floor append zeros numpy cuda range len median T min order_sim dot shape mean floor cuda zeros numpy array range len load workers encode_data batch_size print crop_size get_test_loader load_state_dict VSRN data_name save len parse_args add_argument ArgumentParser workers vocab_path validate batch_size adjust_learning_rate save_checkpoint VSRN ArgumentParser data_name max open basicConfig crop_size load_state_dict logger_name parse_args range configure get_loaders format resume num_epochs optimizer load join print add_argument isfile train len update val time format validate train_start AverageMeter train_emb LogCollector log_value save_checkpoint tb_log info max enumerate len i2t encode_data log_step log_value t2i info copyfile save lr_update param_groups learning_rate topk size t eq mul_ expand_as append sum max enumerate update join word_tokenize decode Vocabulary print add_word from_txt Counter from_flickr_json enumerate build_vocab defaultdict tuple split range len get items precook list min append float sum max len list items precook max range len shape print compute_score method zip score COCOScorer range len model print size OrderedDict repeat compute_score numpy range size item cpu range append
# StacMR (Scene Text Aware Cross Modal Retrieval)
Dataset and code for our WACV 2021 accepted paper: https://arxiv.org/abs/2012.04329
The official website is online: https://europe.naverlabs.com/research/computer-vision/stacmr-scene-text-aware-cross-modal-retrieval/
The project is built on top of [VSRN](https://github.com/KunpengLi1994/VSRN) in PyTorch.
## Introduction
Recent models for cross-modal retrieval have benefited from an increasingly rich understanding of visual scenes, afforded by scene graphs and object interactions, to mention a few. This has resulted in an improved matching between the visual representation of an image and the textual representation of its caption. Yet, current visual representations overlook a key aspect: the text appearing in images, which may contain crucial information for retrieval. In this paper, we first propose a new dataset that allows exploration of cross-modal retrieval where images contain scene-text instances. Then, armed with this dataset, we describe several approaches which leverage scene text, including a better scene-text aware cross-modal retrieval method which uses specialized representations for text from the captions and text from the visual scene, and reconciles them in a common embedding space. Extensive experiments confirm that cross-modal retrieval approaches benefit from scene text and highlight interesting research questions worth exploring further. Dataset and code are available at https://europe.naverlabs.com/research/computer-vision/stacmr
Task:
<a href="url"><img src="paper_images/Figure1.png" align="center" height="430" width="430"></a>
## Install Environment
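Since this entry centers on ranking images and captions in a common embedding space, the following is a minimal, illustrative sketch of recall@K for image-to-text retrieval by cosine similarity. It is not the repository's evaluation code (which, per the function list, uses i2t/t2i routines and an order_sim measure) and it assumes exactly one matching caption per image.

```python
# Illustrative sketch only (not this repository's code): image-to-text retrieval
# by cosine similarity in a shared embedding space, one matching caption per image.
import torch
import torch.nn.functional as F

def recall_at_k(img_emb: torch.Tensor, cap_emb: torch.Tensor, k: int = 5) -> float:
    """img_emb, cap_emb: (N, D) tensors where row i of each forms a matching pair."""
    img = F.normalize(img_emb, dim=1)
    cap = F.normalize(cap_emb, dim=1)
    sims = img @ cap.t()                          # (N, N) cosine similarities
    ranks = sims.argsort(dim=1, descending=True)  # best-scoring caption first
    target = torch.arange(img.size(0)).unsqueeze(1)
    hits = (ranks[:, :k] == target).any(dim=1)    # is the ground truth in the top k?
    return hits.float().mean().item()
```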
93
Andrew-Qibin/SPNet
['scene parsing', 'semantic segmentation']
['Strip Pooling: Rethinking Spatial Pooling for Scene Parsing']
util/cityscapes.py lib/psa/modules/__init__.py lib/psa/functions/psamask.py lib/sync_bn/modules/sync_bn.py models/model_store.py lib/sync_bn/src/__init__.py models/spnet.py util/loss.py lib/psa/src/__init__.py lib/psa/functions/__init__.py lib/sync_bn/functions/__init__.py util/dataset.py lib/psa/modules/psamask.py models/fcn.py tool/test.py models/base.py tool/demo.py tool/train.py lib/psa/functional.py util/config.py models/model_zoo.py util/util.py models/customize.py lib/sync_bn/functions/sync_bn.py lib/sync_bn/modules/__init__.py util/transform.py models/resnet.py psa_mask PSAMask PSAMask _act_forward _act_backward inp_syncbatchnorm_ syncbatchnorm_ moments BatchNorm1d BatchNorm2d BatchNorm3d SyncBatchNorm BaseNet View StripPooling PyramidPooling Normalize ConcurrentModule Sum Mean GramMatrix GlobalAvgPool2d FCNHead FCN GlobalPooling Identity get_model_file short_hash purge pretrained_model_list get_model ResNet resnet50 Bottleneck conv3x3 SPBlock resnet101 BasicBlock SPHead SPNet check test net_process scale_process get_parser main get_logger check test net_process cal_acc scale_process get_parser main get_logger validate main_process get_parser main_worker main train get_logger worker_init_fn get_city_pairs CitySegmentation load_cfg_from_cfg_file merge_cfg_from_list CfgNode _assert_with_logging _decode_cfg_value _check_and_coerce_cfg_value_type is_image_file Ade20kData make_dataset CityscapesData OhemCrossEntropyLoss BGR2RGB Crop RandRotate ToTensor Compose RandomVerticalFlip Resize RandomGaussianBlur Normalize RandomHorizontalFlip RandScale RGB2BGR intersectionAndUnionGPU check_mkdir colorize poly_learning_rate AverageMeter check_makedirs intersectionAndUnion step_learning_rate init_weights group_weight slope leaky_relu_forward is_cuda slope is_cuda leaky_relu_backward get join remove format print check_sha1 expanduser download exists makedirs join remove endswith expanduser listdir lower load ResNet load_state_dict load ResNet load_state_dict config load_cfg_from_cfg_file merge_cfg_from_list add_argument image ArgumentParser opts parse_args setFormatter getLogger addHandler StreamHandler Formatter setLevel INFO compact train_w train_h shrink_factor test_w image test_h get_parser cuda PSANet base_size PSPNet load_state_dict get_logger format astype test eval classes info model_path load join check scales isfile sub_ transpose shape flip div_ interpolate softmax zip float numpy cuda cat int copyMakeBorder float min copy shape resize ceil zeros BORDER_CONSTANT max range argmax join uint8 format imwrite info COLOR_BGR2RGB float colorize IMREAD_COLOR shape scale_process save resize zeros imread round cvtColor save_folder DataLoader SPNet cal_acc HTNet CityscapesData index_step Compose Ade20kData index_start data_list min len squeeze transpose update check_makedirs eval enumerate time data_list AverageMeter numpy len update join sum val format data_list AverageMeter intersectionAndUnion mean IMREAD_GRAYSCALE info imread _class_to_index array range enumerate len seed manual_seed seed manual_seed_all int train_gpu world_size spawn multiprocessing_distributed ngpus_per_node manual_seed main_worker workers validate batch_size multiprocessing_distributed SGD DataParallel use_apex DistributedDataParallel DataLoader save cuda SPNet str initialize set_device DistributedSampler rank save_freq load_state_dict append get_logger CrossEntropyLoss CityscapesData range SummaryWriter format main_process init_process_group sync_bn save_path Compose Ade20kData start_epoch distributed resume classes info 
load int batch_size_val remove evaluate print set_epoch dict isfile BatchNorm2d train epochs SyncBatchNorm add_scalar model multiprocessing_distributed index_split zero_grad aux_weight ignore_label base_lr cuda epochs new_tensor sum range update val format main_process param_groups size mean avg classes item info enumerate int time intersectionAndUnionGPU backward add_scalar poly_learning_rate AverageMeter divmod step len model multiprocessing_distributed ignore_label interpolate cuda new_tensor sum range update val format main_process size mean eval classes item info enumerate time intersectionAndUnionGPU criterion AverageMeter len print join get_path_pairs items list CfgNode deepcopy zip _decode_cfg_value setattr _check_and_coerce_cfg_value_type literal_eval append type conditional_cast debug lower join format print strip readlines split append len float reshape size histogram copy cpu histc view mkdir makedirs _ConvNd isinstance named_parameters bias normal_ _BatchNorm weight modules xavier_normal_ LSTM kaiming_normal_ constant_ Linear _ConvNd isinstance bias dict _BatchNorm modules append weight Linear convert putpalette
# Strip Pooling: Rethinking Spatial Pooling for Scene Parsing
This repository is a PyTorch implementation of our [CVPR2020 paper](https://arxiv.org/pdf/2003.13328.pdf) (non-commercial use only). The results reported in our paper were originally produced with [PyTorch-Encoding](https://github.com/zhanghang1989/PyTorch-Encoding), but its environment setup is somewhat involved. For ease of use, we reimplement our work on top of [semseg](https://github.com/hszhao/semseg).
### Strip Pooling
![An efficient way to use strip pooling](strip.png)
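The core idea named in the title, strip pooling, averages features along long, narrow strips (1xW and Hx1) rather than square windows, capturing long-range context along one spatial dimension at a time. The block below is a minimal PyTorch sketch of that idea; it is illustrative only and deliberately simpler than the repository's actual StripPooling module.

```python
# Minimal strip-pooling sketch (illustrative, not the repo's StripPooling module):
# pool along each spatial axis, refine, expand back, fuse, and gate the input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleStripPooling(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv_h = nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0))
        self.conv_w = nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1))
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        # Horizontal strips: average over width -> (n, c, h, 1), refine, expand back.
        sp_h = self.conv_h(F.adaptive_avg_pool2d(x, (h, 1))).expand(n, c, h, w)
        # Vertical strips: average over height -> (n, c, 1, w), refine, expand back.
        sp_w = self.conv_w(F.adaptive_avg_pool2d(x, (1, w))).expand(n, c, h, w)
        gate = torch.sigmoid(self.fuse(F.relu(sp_h + sp_w)))
        return x * gate
```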
94
AndrewJGaut/Towards-Understanding-Gender-Bias-in-Neural-Relation-Extraction
['word embeddings', 'relation extraction', 'data augmentation']
['Towards Understanding Gender Bias in Relation Extraction']
ModelResultParsing/Utility.py Models/OpenNRE/nrekit/nrekit/network/encoder.py Models/RESIDE/online/base_model.py WordEmbeddings/Word2VecTraining.py Models/OpenNRE/nrekit/nrekit/__init__.py Models/RESIDE/online/online_reside.py Models/OpenNRE/nrekit/nrekit/rl.py Models/RESIDE/online/pcnn.py ModelResultParsing/AttentionResultParsing.py Models/OpenNRE/test_debiasingoptions.py WordEmbeddings/web/datasets/__init__.py WordEmbeddings/web/utils.py WordEmbeddings/web/evaluate.py Models/RESIDE/preproc/make_bags.py WikigenderJsonParsing/CreateBootstrappedDatasets.py Models/RESIDE/online/server.py WordEmbeddings/web/tests/test_embedding.py WordEmbeddings/debiaswe/learn_gender_specific.py WikigenderJsonParsing/test.py WordEmbeddings/web/tests/test_transform_words.py WordEmbeddings/DebiasEmbeddings.py WordEmbeddings/web/datasets/categorization.py Models/OpenNRE/Utility.py WikigenderJsonParsing/genderSwap.py WordEmbeddings/web/datasets/analogy.py GlobalFiles/Utility.py WordEmbeddings/web/datasets/utils.py WordEmbeddings/debiaswe/data.py ModelResultParsing/GetPPSScores.py ModelResultParsing/GetAbsoluteScores.py WikigenderJsonParsing/NameProbs.py ModelResultParsing/AttentionResultParsing2.py ModelResultParsing/ParseResults.py Models/OpenNRE/nrekit/nrekit/network/embedding.py Models/OpenNRE/nrekit/nrekit/data_loader.py ModelResultParsing/GetBootstrappedResults.py Models/RESIDE/helper.py WordEmbeddings/debiaswe/debias.py WordEmbeddings/web/tests/test_vocabulary.py Models/OpenNRE/nrekit/nrekit/network/selector.py WikigenderJsonParsing/DatasetStatistics.py WikigenderJsonParsing/Utility.py Models/RESIDE/reside.py WordEmbeddings/web/tests/test_analogy.py Models/OpenNRE/nrekit/nrekit/network/__init__.py Models/RESIDE/preproc/generate_pickle.py Models/RESIDE/online/bgwa.py Models/RESIDE/relation_alias.py ModelResultParsing/CreateTableCode.py WordEmbeddings/Utility.py WordEmbeddings/web/analogy.py WikigenderJsonParsing/get_dataset_results.py Models/OpenNRE/train4.py WordEmbeddings/debiaswe/we.py WordEmbeddings/web/version.py WikigenderJsonParsing/DebiasJsonDataset.py WordEmbeddings/web/tests/test_similarity.py WordEmbeddings/web/embedding.py WordEmbeddings/web/datasets/similarity.py Models/OpenNRE/nrekit/nrekit/framework.py Models/RESIDE/plot_pr.py WikigenderJsonParsing/ConvertWikigenderToOpenNREFormat.py WordEmbeddings/web/embeddings.py Models/OpenNRE/nrekit/nrekit/network/classifier.py WordEmbeddings/web/vocabulary.py WordEmbeddings/web/tests/test_categorization.py WordEmbeddings/web/_utils/compat.py WordEmbeddings/web/tests/test_fetchers.py Models/OpenNRE/train_debiasingoptions.py Models/OpenNRE/draw_plot.py getModelName getModelNameSimulateArgs getWordEmbeddingFileName getNameSuffix getNameSuffixSimulateArgs getAttScoresPerEpoch plotDataForSentence getBags plotGroupedBarChart getEntity2ID getAttScoreStringsPerEpoch aggregateDiff getBagsMapping compareBags loadDataWithPickle getID2Entity getRealEntityPairNames format_num getTableCodeForEncoderSelectorPairs getTrueFalseCombos format_nums getPPSScoreTable checkmark calculate_f1_score getAbsoluteResults updateVariances getUpperAndLowerEstimatesForInterval getStandardDeviations writeAbsAndGenderDiffsToSheet updateSumsAndRanges getMeansAndRanges getBootstrappedMetricScores writeAbsAndGenderDiffsToFile aggregate_objects squareRootVariances calculate_scores get_pps_scores_for_model calc_fbeta_score get_raw_results getRawMetricAbsoluteScores get_pps_scores createGenderDifferencesResultsFile getGenderDifferencesResults getTestFiles getModelName getNumForModelName 
getTrueFalseCombos readFromJsonFile getModelNameSimulateArgs getBootstrappedTestFiles getWordEmbeddingFileName writeToJsonFile getNameSuffix getNameSuffixSimulateArgs main model model model getModelName getDataNames getModelNameSimulateArgs getWordEmbeddingFileName getNameSuffix getNameSuffixSimulateArgs npy_data_loader file_data_loader json_file_data_loader calculate_auc average_gradients calculate_f1_score re_framework re_model rl_re_framework policy_agent soft_label_softmax_cross_entropy output sigmoid_cross_entropy softmax_cross_entropy word_position_embedding pos_embedding word_embedding birnn __dropout__ __piecewise_pooling__ __pooling__ __rnn_cell__ cnn rnn __cnn_cell__ pcnn bag_average bag_cross_max __dropout__ __attention_train_logit__ bag_one __attention_test_logit__ bag_attention __logit__ instance debug_nn partition getPhr2vec getChunks make_dir set_gpu checkFile mergeList get_logger getEmbeddings loadData plotPR get_probable_rel RESIDE Base BGWA RESIDE PCNN resideMain procData get_prob_rels get_alias2rel getId getIdMap read_file get_index posMap genIdMapping getSuffixNameFromFileName convertEntryToOpenNREFormat createOpenNREFilesFromFolder createOpenNREFile createOpenNREFilesWithArgs convertEntriesToOpenNREFormat genId createBootstrappedDataset getAverageSentencesPerArticle getProportionsOfSentencesPerRelationPerGender getMaleFemaleAverageInstancePerArticle countInstances printOutMaleFemaleDatasetNumbers removeDuplicates initRelationDict countEntityPairs oldcountInstances getMaleFemaleAverageInstancePerArticle2 createNameAnonymizedJsonDatasetEntries createEqualizedJsonDatasetEntries createNameAnonymizationDict getEqualizedEntriesThroughDownSampling nameAnonymizeJson createAllDebiasedDatasets getLenEntries genderSwapSubsetOfDataset join_punctuation createGenderSwappedDatasetEntries createEqualizedJsonDataset nameAnonymizeSubsetOfDataset getRandomArrayIndex addNameToAnonymizationDict nameAnonymizeStr createDebiasedDataset createNameAnonymizedJsonDataset createDebiasedDatasetWithArgs clean getTextfile createSwapDict join_punctuation genderSwap createGenderedSetsAndLists binarySearch createGenderedSets clean getName replaceInStr getInstancesForSentences filterResults prettify bidirectionalRelations NameProb getNamesFromFileToDict getSpecificEntries getAllEntries getMaleAndFemaleNames addPunctuationToTags findCharacterPosInSent findWordPosInSent getWordEmbeddingFileName writeToJsonFile startNERServer getNamesFromFileToDict getTrueFalseCombos computePositioning getNameSuffixSimulateArgs getEntriesForDataType findListSubset readFromJsonFile getModelNameSimulateArgs getModelName getCommandLineArgs getNameSuffix debiasEmbeddingsWithArgs debiasEmbedding debiasEmbeddings getEntriesForDataType getSpecificEntries getAllEntries getMaleAndFemaleNames getNamesFromFileToDict getTrueFalseCombos readFromJsonFile addPunctuationToTags getWordEmbeddingFileName getCommandLineArgs writeToJsonFile getNameSuffix getNameSuffixSimulateArgs trainWord2VecEmbedding formatWordVectors getAllSentences trainWord2VecEmbeddingsWithArgs createArgsString convertHardDebiasedEmbeddingFileToOpenNREFormat preprocessDebiasedFile formatWordVectorFile trainWord2VecEmbeddings load_professions debias safe_word dedup viz doPCA text_plot_words WordEmbedding to_utf8 drop SimpleAnalogySolver Embedding fetch_GloVe fetch_HDC fetch_PDC fetch_LexVec fetch_morphoRNNLM fetch_FastText fetch_NMT fetch_HPCA load_embedding fetch_harddebiased fetch_conceptnet_numberbatch evaluate_categorization evaluate_on_WordRep evaluate_on_all 
evaluate_analogy calculate_purity evaluate_on_semeval_2012_2 evaluate_similarity batched standardize_string _open any2utf8 Vocabulary CountedVocabulary OrderedVocabulary count fetch_msr_analogy fetch_google_analogy fetch_wordrep fetch_semeval_2012_2 fetch_battig fetch_ESSLI_2c fetch_ESSLI_1a fetch_BLESS fetch_AP fetch_ESSLI_2b fetch_SimLex999 fetch_TR9856 fetch_RW fetch_multilingual_SimLex999 fetch_WS353 fetch_MEN fetch_RG65 fetch_MTurk _chunk_read_ _makedirs _get_cluster_assignments _filter_column _fetch_file readlinkabs _chunk_report_ _format_time _uncompress_file _tree _get_dataset_descr movetree _md5_sum_file _read_md5_sum_file _filter_columns _change_list_to_np _get_dataset_dir test_analogy_solver test_wordrep_solver test_semeval_solver test_categorization test_purity test_save_2 test_save test_standardize test_standardize_preserve_identity test_RW_fetcher test_MEN_fetcher test_simlex999_fetchers test_analogy_fetchers test_MTurk_fetcher test_ws353_fetcher test_categorization_fetchers test_RG65_fetcher test_similarity_norm test_similarity test_noinplace_transform_word_OrderedVocabulary test_inplace_transform_word_prefer_occurences_CountedVocabulary test_noinplace_transform_word_CountedVocabulary test_inplace_transform_word_prefer_shortestword_OrderedVocabulary test_noinplace_transform_word_prefer_occurences_CountedVocabulary test_noinplace_transform_word_prefer_shortest_ord1_Vocabulary test_inplace_transform_word_CountedVocabulary test_noinplace_transform_word_prefer_occurences_OrderedVocabulary test_inplace_transform_word_prefer_shortestword_CountedVocabulary test_noinplace_transform_word_prefer_shortestword_OrderedVocabulary test_inplace_transform_word_prefer_occurences_OrderedVocabulary test_noinplace_transform_word_prefer_shortestword2_Vocabulary test_noinplace_transform_word_prefer_shortestword_CountedVocabulary test_inplace_transform_word_OrderedVocabulary test_noinplace_transform_word_Vocabulary md5_hash name_anonymize equalized_gender_mentions gender_swap swap_names bootstrapped neutralize parse_args add_argument ArgumentParser getNameSuffix debiased_embeddings parse_args add_argument ArgumentParser debiased_embeddings list getAttScoreStringsPerEpoch append float split show list plot print dict append range len show subplots arange set_title set_xticklabels tight_layout bar set_ylabel set_xticks legend autolabel len dict readFromJsonFile load dict readFromJsonFile load join loadDataWithPickle dict getRealEntityPairNames range len range aggregateDiff float format format_num print getTrueFalseCombos getModelNameSimulateArgs dict get_raw_results get_pps_scores range checkmark print getModelNameSimulateArgs getBootstrappedTestFiles get_raw_results get_pps_scores dict calculate_f1_score percentile min max dict list extend dict Workbook join writeNewResultsSheet save writeToJsonFile join makedirs dict list extend sqrt list extend updateVariances getAbsoluteResults dict getGenderDifferencesResults getTestFiles squareRootVariances updateSumsAndRanges getAbsoluteResults dict getGenderDifferencesResults getTestFiles aggregate_objects getMeansAndRanges getStandardDeviations dict getTestFiles getRawMetricAbsoluteScores getBootstrappedTestFiles print calc_fbeta_score append abs len list calculate_scores dict append range len get_raw_results get_pps_scores getModelNameSimulateArgs dict join getModelName add_argument encoder getGenderDifferencesResults selector ArgumentParser writeToJsonFile parse_args getTestFiles join readFromJsonFile makeResultsCompatible join readFromJsonFile load join 
str format plot print xlabel grid ylabel ylim title savefig legend xlim max auc print getWordEmbeddingFileName getNameSuffix male_test_files concat reduce_mean zip append expand_dims precision_recall_curve pos_embedding word_embedding reduce_max embedding_lookup expand_dims constant conv1d embedding_lookup reduce_sum transpose matmul __dropout__ __logit__ append word_vec randn word_tokenize float32 word_vec append zeros len makedirs graph FileWriter global_variables_initializer ConfigProto Session run load stdout setFormatter replace getLogger addHandler make_dir StreamHandler dictConfig Formatter open float len load reshape array open load show format plot name loadData print xlabel grid ylabel tight_layout average_precision_score precision_recall_curve ylim savefig legend xlim enumerate getPhr2vec cdist argmin set embed_dim enumerate read_data create_feed_dict choice logits run getPhr2vec cdist argmin set embed_dim enumerate append get_alias2rel enumerate print append range dict add genId set list dict append range len join strip genIdMapping format getAllEntries print readFromJsonFile system writeToJsonFile convertEntriesToOpenNREFormat join createOpenNREFile getCommandLineArgs dataset getNameSuffix print createOpenNREFile join listdir int join format print len readFromJsonFile bootstrapped writeToJsonFile sample split deepcopy list readFromJsonFile add dict set writeToJsonFile append add set add set format getSpecificEntries countInstances getMaleAndFemaleNames print readFromJsonFile countEntityPairs countInstances len dict list deepcopy list getSpecificEntries countInstances getMaleAndFemaleNames print readFromJsonFile extend dict initRelationDict append getMaleAndFemaleNames print readFromJsonFile dict getAverageSentencesPerArticle list format getSpecificEntries getMaleAndFemaleNames print readFromJsonFile extend dict initRelationDict deepcopy list format print genderSwap computePositioning append range len genderSwapSubsetOfDataset extend getRandomArrayIndex extend pop getSpecificEntries getEqualizedEntriesThroughDownSampling getNamesFromFileToDict readFromJsonFile writeToJsonFile getEqualizedEntriesThroughDownSampling getNamesFromFileToDict isalpha isspace clean word_tokenize str word_tokenize addNameToAnonymizationDict print get_entities addPunctuationToTags dict clean startNERServer range len nameAnonymizeStr computePositioning clean range len next iter set word_tokenize len clean range split createNameAnonymizationDict nameAnonymizeJson getAllEntries readFromJsonFile createNameAnonymizationDict writeToJsonFile createNameAnonymizationDict nameAnonymizeJson getAllEntries format createNameAnonymizedJsonDatasetEntries print createEqualizedJsonDatasetEntries createGenderSwappedDatasetEntries readFromJsonFile writeToJsonFile exists split getTrueFalseCombos readFromJsonFile createDebiasedDataset createNameAnonymizedJsonDatasetEntries name_anonymize format equalized_gender_mentions createNameAnonymizedJsonDatasetEntries print gender_swap createEqualizedJsonDatasetEntries createGenderSwappedDatasetEntries readFromJsonFile swap_names neutralize writeToJsonFile dataset getNameSuffix exists int getTextfile readlines add set lower str list int getTextfile print strip readlines close dict lower NameProb append clean range split join getTextfile len readlines dict range split replace join word_tokenize get_entities createSwapDict len createGenderedSetsAndLists addPunctuationToTags clean getName range split writeToJsonFile readFromJsonFile list word_tokenize sorted print readFromJsonFile set 
dict add append print readFromJsonFile add dict set list print readFromJsonFile genderSwap add dict set writeToJsonFile append range len readlines strip add set open findCharacterPosInSent getPosRangeForNames punctuation whitespace len set difference cleanWord union range split combinations word_tokenize findListSubset len range split range len Ner getNamesFromFileToDict append list append list append list range len dict deepcopy list append parse_args add_argument ArgumentParser WordEmbedding debias save debiasEmbedding getWordEmbeddingFileName getTrueFalseCombos getNameSuffixSimulateArgs bootstrapped getWordEmbeddingFileName debiasEmbedding getCommandLineArgs getNameSuffix getCommandLineArgs getEntriesForDataType list punctuation word_tokenize translate maketrans append vocab list dict writeToJsonFile item append list readlines dict open writeToJsonFile append split join replace write split open preprocessDebiasedFile read formatWordVectorFile name_anonymize equalized_gender_mentions gender_swap swap_names bootstrapped neutralize join getAllSentences chdir save_word2vec_format readFromJsonFile system getWordEmbeddingFileName Word2Vec formatWordVectorFile print getTrueFalseCombos trainWord2VecEmbedding getNameSuffixSimulateArgs trainWord2VecEmbedding bootstrapped createArgsString getCommandLineArgs getNameSuffix print join norm words v set sqrt normalize enumerate drop set isinstance print join int list join print rescale any zip range len fit PCA append v array len load from_dict from_glove from_word2vec standardize_words normalize_words open _fetch_file _fetch_file _fetch_file _fetch_file _fetch_file _fetch_file _fetch_file _fetch_file _fetch_file format _fetch_file T zeros_like astype set dot zeros enumerate vectors from_dict list format isinstance debug choice mean vstack calculate_purity max range fit_predict len vectors from_dict list defaultdict T isinstance fetch_semeval_2012_2 mean dot OrderedDict vstack correlation append sum keys len from_dict isinstance SimpleAnalogySolver set OrderedDict mean sum predict from_dict fetch_wordrep format product isinstance SimpleAnalogySolver category set info zeros float sum predict vectors from_dict format isinstance mean warning vstack array word_id from_dict iteritems y format evaluate_categorization isinstance join evaluate_analogy info DataFrame X evaluate_similarity isinstance text_type islice iter splitext isinstance join list int check_random_state glob len choice range _fetch_file append set startswith split append set split join defaultdict glob set split append union _fetch_file _get_cluster_assignments values apply _get_as_pd flatten float astype _get_as_pd values values astype mean _get_as_pd float std values _get_as_pd join read_csv values _fetch_file makedirs join glob _get_dataset_dir enumerate _fetch_file isabs readlink write float max time update int read strip write close len join _makedirs print readlinkabs extend getenv islink append copyfileobj remove print extractall close is_zipfile is_tarfile dirname splitext ZipFile open zeros logical_or isinstance _filter_column ones fcomb logical_and logical_or zeros print dirname abspath join _makedirs move rmdir listdir _makedirs move warn exists urlparse basename dirname _get_dataset_dir close mkdir splitext join remove print _fetch_helper dumps _uncompress_file path rmtree movetree hexdigest join sorted isdir append listdir from_word2vec evaluate_on_semeval_2012_2 _fetch_file from_word2vec evaluate_on_WordRep _fetch_file list from_word2vec fetch_google_analogy evaluate_analogy 
choice range _fetch_file array from_word2vec fetch_ESSLI_2c _fetch_file from_word2vec standardize_string words standardize_words _fetch_file from_dict standardize_words join Vocabulary Embedding to_word2vec from_word2vec mkdtemp array join from_word2vec to_word2vec mkdtemp _fetch_file fetch_battig fetch_ESSLI_2c fetch_ESSLI_1a fetch_BLESS fetch_AP fetch_ESSLI_2b fetch_MTurk fetch_RW fetch_RG65 fetch_MEN product set fetch_WS353 set fetch_SimLex999 fetch_multilingual_SimLex999 fetch_msr_analogy iteritems fetch_wordrep X_prot fetch_semeval_2012_2 fetch_google_analogy vectors list y fetch_SimLex999 from_word2vec _fetch_file words dict zip X evaluate_similarity y fetch_SimLex999 from_word2vec _fetch_file X normalize_words evaluate_similarity basicConfig getstate Embedding CountedVocabulary transform_words basicConfig getstate Embedding CountedVocabulary transform_words basicConfig getstate Embedding CountedVocabulary transform_words Embedding basicConfig OrderedVocabulary transform_words Embedding basicConfig OrderedVocabulary transform_words Embedding basicConfig OrderedVocabulary transform_words Embedding basicConfig transform_words Vocabulary Embedding basicConfig transform_words Vocabulary Embedding basicConfig transform_words Vocabulary basicConfig getstate Embedding CountedVocabulary transform_words basicConfig getstate Embedding CountedVocabulary transform_words basicConfig getstate Embedding CountedVocabulary transform_words Embedding basicConfig OrderedVocabulary transform_words Embedding basicConfig OrderedVocabulary transform_words Embedding basicConfig OrderedVocabulary transform_words md5 encode update
# Source code for the paper "Towards Understanding Gender Bias in Neural Relation Extraction" by Tony Sun and Andrew Gaut et al.
This code contains several modules used for the experiments described in the paper.
## General Usage
* Running full experiments
  * To run all experiments varying the encoder/selector, use ./run_bootstrapping_modelcombos.sh
  * To run all experiments varying the debiasing method, use ./run_bootstrapping.sh
* All sub-modules use the same command-line arguments (a hedged argparse sketch follows this list):
  * -gs : indicates that you want to use counterfactual data augmentation
  * -egm : indicates that you want to use balanced gender mentions
  * -de : indicates that you want to use debiased word embeddings
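As an illustration of the shared command-line interface described above, the sketch below shows how the three flags might be declared with argparse. Only the flag names and their meanings come from the README; the parser layout, help strings, function name, and the example script name are assumptions, not the project's actual code.

```python
# Hypothetical sketch of the shared debiasing flags described in the README above;
# flag names are from the repo, everything else here is an assumption.
import argparse

def parse_debias_flags(argv=None):
    parser = argparse.ArgumentParser(description="Gender-bias relation-extraction experiments")
    parser.add_argument("-gs", action="store_true",
                        help="use counterfactual data augmentation")
    parser.add_argument("-egm", action="store_true",
                        help="use balanced gender mentions")
    parser.add_argument("-de", action="store_true",
                        help="use debiased word embeddings")
    return parser.parse_args(argv)

# Example (hypothetical script name): python experiment.py -gs -de
# -> args.gs == True, args.egm == False, args.de == True
```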
95
Andrewsher/X-Net
['lesion segmentation', 'semantic segmentation']
['X-Net: Brain Stroke Lesion Segmentation Based on Depthwise Separable Convolution and Long-range Dependencies']
loss.py FSM.py data.py utils.py main.py model.py train_data_generator val_data_generator create_val_date_generator create_train_date_generator fsm conv2d_bn_relu create_fsm_model dice get_loss dice_loss main train depth_conv_bn_relu x_block conv2d_bn_relu create_xception_unet_n get_score_for_one_patient get_score_from_all_slices print File shuffle append range len append File range len print add dot int_shape conv2d_bn_relu Model Input fsm summary sum clear_session __next__ create_xception_unet_n save_weights ReduceLROnPlateau get_score_from_all_slices DataFrame str list TensorBoard create_val_date_generator append range predict fit_generator mean mkdir keys compile join print EarlyStopping to_csv create_train_date_generator CSVLogger ModelCheckpoint array len list print to_csv keys mean KFold split append train DataFrame array range enumerate depth_conv_bn_relu conv2d_bn_relu add format print shape Model load_weights x_block summary Input fsm conv2d_bn_relu print logical_and count_nonzero print get_score_for_one_patient range append
# X-Net
[X-Net: Brain Stroke Lesion Segmentation Based on Depthwise Separable Convolution and Long-range Dependencies (MICCAI 2019)](https://arxiv.org/abs/1907.07000)
# Authors
Kehan Qi, Hao Yang, Cheng Li, Zaiyi Liu, Meiyun Wang, Qiegen Liu, and Shanshan Wang
# Project Overview
## 1. Function
Uses X-Net to perform image segmentation on the ATLAS dataset.
## 2. Performance
|Dice|IoU|Precision|Recall|Number of Parameters|
|-----|-----|-----|-----|-----|
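Because the paper title highlights depthwise separable convolution, the block below is a minimal tf.keras sketch of such a block: a depthwise 3x3 convolution followed by a pointwise 1x1 convolution, each with batch normalization and ReLU. It is illustrative only and may differ from the repository's actual depth_conv_bn_relu implementation.

```python
# Illustrative depthwise separable convolution block (an assumption, not the repo's
# exact depth_conv_bn_relu): depthwise 3x3 conv + pointwise 1x1 conv, each with BN + ReLU.
from tensorflow.keras import layers

def depthwise_separable_block(x, filters):
    x = layers.DepthwiseConv2D(kernel_size=3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(filters, kernel_size=1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)
```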
96
Andy-CSKim/fashion-mnist
['data augmentation']
['DENSER: Deep Evolutionary Network Structured Representation']
utils/helper.py configs.py benchmark/convnet.py app.py benchmark/runner.py utils/argparser.py visualization/project_zalando.py utils/mnist_reader.py app_GD.py myNeuralNet myNeuralNet get_json_logger touch touch_dir _get_logger main cnn_model_fn PredictJob JobWorker JobManager get_args_request parse_arg get_args_cli now_int upload_result_s3 get_sprite_image invert_grayscale create_sprite_image vector_to_matrix_mnist UploadS3Thread load_mnist dirname makedirs makedirs setFormatter touch_dir DEBUG getLogger addHandler StreamHandler Formatter touch setLevel INFO FileHandler setFormatter getLogger addHandler Formatter touch setLevel INFO FileHandler dense max_pooling2d dropout one_hot minimize reshape GradientDescentOptimizer conv2d softmax_cross_entropy asarray evaluate print Estimator shuffle labels images numpy_input_fn train range read_data_sets int append items list defaultdict utcfromtimestamp info int isinstance ones sqrt ceil array range vector_to_matrix_mnist invert_grayscale join
Andy-CSKim/fashion-mnist
97
AndyShih12/SSDC
['density estimation']
['Smoothing Structured Decomposable Circuits']
CollapsedCompilation/order/order.py CollapsedCompilation/order/pybn/engines.py CollapsedCompilation/order/pybn/util.py CollapsedCompilation/order/pybn/learn.py CollapsedCompilation/order/pybn/networks/grid/grid.py CollapsedCompilation/order/pybn/net.py InverseAckermannCalculation/a.py CollapsedCompilation/order/pybn/network.py CollapsedCompilation/order/pybn/__init__.py SemigroupRangesum/semigroup_rangesum.py CollapsedCompilation/order/pybn/learn-old.py CollapsedCompilation/order/pybn/net_io.py SimulateIntervals/simulate_intervals.py CollapsedCompilation/order/pybn/core.py mkdir_p ent Learn DataSet project_onto Network has_valid_cpts tuples is_binary_network is_bayesian_network file_tokenizer open_uai topological_order parse_order parse_uai reorder_network is_topologically_ordered Index get_hostname get_username gen_grid_edges_orig binary_grid_to_wcnf gen_grid gen_grid_edges binary_grid_to_uai parse_options Grid Pots binary_grid_to_fastinf main get_inv_ack_params R main RangeSum Naive RangeSum test makedirs enumerate len pop remove pot_domains size add reverse iter append next range enumerate pop remove pot_domains size add range enumerate pots pot_domains abs tuples var_sizes zip sum enumerate print str has_valid_cpts type var_sizes pots pot_domains sort len var_sizes topological_order zip type Index enumerate strip close split open int file_tokenizer append next range parse_uai is_bayesian_network split close open iter expanduser basename list range int gen_grid_edges m ids do_toroid log10 Grid n s append range extend append range extend join argv zip len write close edge_pots open edges s m n len write close edge_pots open edges m n len write close enumerate open edges m range append n add_option OptionParser p error add_option_group OptionGroup s parse_args set_defaults type Pots destroy q n min R int get_inv_ack_params R setrecursionlimit print min eval input append list print_gates debug RangeSum strip map gates retrieve split process len time print Naive RangeSum range query append randint round log enumerate
AndyShih12/SSDC
98
Anery/RSAN
['joint entity and relation extraction', 'relation extraction']
['A Relation-Specific Attention Network for Joint Entity and Relation Extraction']
train.py data/webnlg/util.py data_prepare.py data/multiNYT/util.py misc/utils.py networks/__init__.py misc/LossWrapper.py eval_utils.py DataLoader.py config.py networks/embedding.py networks/encoder.py data/webnlg/config.py model/__init__.py model/Rel_based_labeling.py networks/decoder.py Test.py parse_opt Data Loader train_collate dev_test_collate Train_data pickle_load pickle_dump DataPrepare eval evaluate multiple_test is_normal_triple is_EPO overlapping_test Test load_label is_SEO train save_checkpoint load_label write_data build_labels build_word_dict save_relation_freq build_tags read_json build_char build_rel2id run_split split parse_opt write_data build_labels build_word_dict save_relation_freq build_tags read_json build_char run_split split LossWrapper pickle_load set_lr attn_mapping clip_gradient pickle_dump build_optimizer get_chunk_type get_chunks CrossEntropy tag_mapping Rel_based_labeling setup AttentionDot Decoder AttentionNet charEmbedding Embedding Encoder parse_args add_argument ArgumentParser append tensor stack append tensor cat PY3 PY3 test_len dev_len eval reset train load open add set is_normal_triple is_EPO add set is_normal_triple print is_EPO eval zip append is_SEO print eval zip append len load eval_batch_size Loader multiple_test evaluate overlapping_test input_rel2id load_label input_label2id rel_num cuda open join format checkpoint_path print makedirs save state_dict Loader batch_size zero_grad save_checkpoint load_label input_label2id cuda eval_batch_size set_lr learning_rate_decay_every clip_gradient learning_rate_decay_start build_optimizer load_state_dict grad_clip range get format synchronize size start_from get_batch_train learning_rate_decay item float rel_num load join time learning_rate evaluate backward print LossWrapper parameters current_lr isfile step LW_model close write update word_tokenize list insert set save list set enumerate sample range append len write_data print read_json split open list dump insert add set open append dump print open print len dict dump get open print param_groups param_groups clamp_ split append get_chunk_type enumerate append list tuple get_chunks zip append range load join load_from Rel_based_labeling load_state_dict
Source code for IJCAI 2020 paper "[A Relation-Specific Attention Network for Joint Entity and Relation Extraction](https://www.ijcai.org/Proceedings/2020/0561.pdf)"
## Prerequisites
- Pytorch (1.0.1)
- nltk
- numpy
- six
## Code
├── config.py
├── **data**
├── DataLoader.py
99