
Lending Dataset

This dataset was built using the same methods and filters that generate the dataset for the INKMAN model. It rests on two pillars: the SQL filter used to select the merchants that were eligible for loans at some point in the past, and the features selected as meaningful by the lending team. Access to these repositories requires special permission; talk to your leader about it.

We will do our best to keep this dataset updated with the latest changes from the lending team. If by any chance this dataset does not reflect the latest versions of the lending models, please let any of the authors know (contact in the sections below).

Methodology

As stated above, the main objective of this dataset is to stay as faithful as possible to the original data source, with only minor pre-processing.

Preprocessing

Data types were normalized to a small set of common types to minimize variance: only Int64, float64, datetime (UTC), string and bool are used. Every column was cast/coalesced to the best-suited of these types.
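For illustration, a minimal sketch of this kind of casting with pandas (the file name is hypothetical; the column names follow the feature table below):

import pandas as pd

# Normalize a batch file to the five types used in the dataset.
df = pd.read_parquet('some_batch.parquet')  # hypothetical file name
df['user_loan_count'] = df['user_loan_count'].astype('Int64')  # nullable integer
df['borrowed_amount'] = df['borrowed_amount'].astype('float64')
df['event_timestamp'] = pd.to_datetime(df['event_timestamp'], utc=True)
df['market_id'] = df['market_id'].astype('string')
df['label'] = df['label'].astype('bool')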

Features were reordered so that the label is the first field (for easier visualization) and batch the second. The batch column is somewhat redundant, since each data subset is named after it, but we chose to keep it: it is easier to drop a column than to add one based on a filename regex.

Split labeling was kept as in the original dataset, even though it has serious distribution issues regarding splits and labels. More on that in the Biases section below.

Redundant columns were dropped, since the original status, first_loan and first_loan_or_not_repaid provided direct information about the label: status was equal to the label (except it was categorical), and first_loan_or_not_repaid allowed the label to be inferred through a boolean and/or operation. Keeping them would lead to strongly biased training data.

Other columns that are not meaningful for model training were kept for the sake of staying as close as possible to the original data. To be clear, by not meaningful we mean that they either provide no inference information (for example merchant_id) or are not ethical to use (for example geographical locations).

Data organization

This dataset is organized along two dimensions. The first is the split: train if the data was originally labeled as training data and test if it was originally labeled as test. The second is the batch, the experiment name as executed by the lending team. You can therefore fetch the whole dataset or just a given experiment (or set of experiments) based on the batch name, and you can also select the training or test data.

In the repository, each batch/split pair has its own file. The number of entries (rows) is not uniform across experiments, nor is the split distribution. Be warned. More about this in the Biases section below.

Features in the dataset

| Feature | Description |
|---------|-------------|
| label | The target label: True (1) when we should lend to this user and False (0) when we should not |
| batch | The experiment name from which this data was obtained |
| merchant_id | The merchant_id identifier in the original database |
| loan_id | The loan_id identifier in the original database |
| market_id | The market segment related to this user |
| user_loan_count | How many loans this user has already taken |
| borrowed_amount | The amount of money borrowed |
| paid_amount | The amount of money paid |
| cumulative_repayments | The number of repayments regarding the lending operation |
| percentage_paid | The percentage of the loan paid |
| event_timestamp | When the loan was taken |
| loan_duration | How long the loan has lasted |
| allowlist_date | When the loan allowance was given (not when the loan was taken) |
| geohash_7 | Geohash 7 information |
| geohash_6 | Geohash 6 information |
| geohash_5 | Geohash 5 information |
| geohash_4 | Geohash 4 information |
| geohash_3 | Geohash 3 information |
| code_tract_id | IBGE code tract georeference |
| active_cnp_days_since_aquisition_from_merchant | The 25th percentile of transaction amounts at merchant (approved + denied) |
| active_cp_days_from_merchant_last_30d | The maximum transaction amount at merchant (approved + denied) |
| active_cp_days_since_aquisition_from_merchant | The 90th percentile of transaction amounts at merchant (approved + denied) |
| active_days_high_amount_since_acquisition_from_merchant | Count of unique customers at merchant (card_number + card_holder_name) |
| active_over_50_brl_cp_days_from_merchant_last_30d | Ratio of recurring customers to unique customers at merchant |
| active_over_50_brl_pix_days_from_merchant_last_30d | Max approved transaction amount from merchant |
| address_updated_at | Number of days in which the merchant has had at least one approved card-present transaction of any amount in the past 30d |
| afternoon_transaction_count_and_transaction_count_ratio_from_merchant_last_90d_v2 | Number of days in which the merchant has had at least one approved card-present transaction with an amount larger than 50 BRL in the past 30d |
| amount_approved_contactless_sum_from_merchant | Number of days in which the merchant has had at least one approved Pix transaction with an amount larger than 50 BRL in the past 30d |
| amount_approved_pl_avg_from_merchant | (median device lifespan (= date of latest session start - date of earliest session start with the device) in days) / min(user lifespan (= date of latest session start - date of earliest session start) in days, 360 days) |
| amount_contactless_avg_from_merchant | Distinct number of days on which any type of non-automatic event was registered in the last 30 days |
| amount_contactless_sum_from_merchant | Number of sessions divided by the number of active days in the last 30 days |
| amount_not_pl_avg_from_merchant | Sum of the time spent, in seconds, per session in the last 30 days |
| amount_not_pl_sum_from_merchant | CNPJ merchants: time between company opening and account opening at CW. CPF merchants: 0, since company opening and account opening at CW are the same. |
| amount_pl_avg_from_merchant | Date on which the merchant account was created at CW |
| amount_sum_from_merchant_last_7d | CNPJ merchants: date the company was opened at Receita Federal. CPF merchants: date the merchant created the account at CW. |
| amount_sum_from_merchant_last_30d | Cumulative transactions in card present |
| amount_sum_from_merchant_last_90d | CNAE average related to transactions |
| amount_transaction_max_from_merchant | Approved amount from merchant in the last 5 days |
| amount_transaction_p25_from_merchant | Historical approved amount from merchant in the last 5 days |
| amount_transaction_p90_from_merchant | Average time to first reply, in seconds (last 90 days) |
| app_active_days_from_merchant_last_30d | Sum amount of transactions from merchant in the last 30d (approved and denied included) |
| approved_amount_transaction_max_from_merchant | Sum amount of approved transactions from merchant in the last 30d |
| approved_contactless_transaction_amount_from_merchant_last_30d | Sum amount of denied transactions from merchant in the last 30d |
| approved_pix_amount_from_merchant | Transaction count between 6 and 12 BRT from merchant in the last 90 days |
| approved_transaction_amount_from_merchant_last_12h | Ratio of transactions done between 12 and 18 from merchant in the last 90 days |
| approved_transaction_amount_from_merchant_last_30d | Ratio of transactions done between 6 and 12 from merchant in the last 90 days |
| app_sessions_per_active_day_from_merchant_last_30d | Sum of the approved amount of contactless transactions at merchant in the last 30 days |
| app_sum_session_time_from_merchant_last_30d | Transaction count after 18 BRT from merchant in the last 30 days |
| avg_cp_installment_from_merchant_last_30d | Ratio of transactions done after 18 BRT from merchant in the last 30 days |
| avg_time_first_reply_seconds_from_merchant_last_90d | Total amount sum at merchant in the last 90 days |
| cnp_top1_cid_app_tpv_from_merchant | Sum amount of transactions from merchant in the last 7d (approved and denied included) |
| company_opening_and_account_opening_time_diff_from_merchant | Ratio of transactions done between 6 and 12 from merchant in the last 7 days |
| cp_top1_cid_app_tpv_from_merchant | Total amount sum of all transactions with the authorization code for insufficient funds ('51') from merchant in the last 30d |
| cp_top1_cid_count_trx_from_merchant | Average installments of CP transactions from merchant in the last 30 days |
| cp_top1_cid_tpv_ratio_from_merchant | Sum amount of transactions with the most frequent BIN from merchant in the last 30 days |
| created_at_from_merchant | Sum amount of approved CP transactions with the most frequent BIN from merchant in the last 30 days |
| cumulative_daily_count_cp_trx | Total approved transaction sum at merchant in the last 12h |
| customer_recurring_count_and_customer_unique_count_ratio_from_merchant | Day of last CP transaction (independent of status) |
| customer_unique_count_from_merchant | Day of last denied CP transaction |
| day_last_cp_transaction | Day of last Payment Link Web transaction (independent of status) |
| day_last_ctls_transaction | Day of last Contactless transaction (independent of status) |
| day_last_denied_cp_transaction | Day of last denied Contactless transaction |
| day_last_denied_ctls_transaction | Day of last Payment Link transaction (independent of status) |
| day_last_pl_transaction | Date of the last address update |
| day_last_plw_transaction | Maximum days with a sustained relationship with the financial system, including the legal representative and the CNPJ of the merchant |
| days_financial_system_relationship_from_merchant | Ratio of active credit to the total credit available to the merchant, including the legal representative and the CNPJ of the merchant |
| denied_bin_count_from_merchant | Total credit amount available to the merchant, including the legal representative and the CNPJ of the merchant |
| denied_bin_sum_from_merchant | Percentile of time since the last transaction from the merchant |
| denied_transaction_amount_from_merchant_last_30d | Percentile of time since the last transaction from the merchant within the last 15 days |
| evening_transaction_count_and_transaction_count_ratio_from_merchant_last_30d_v2 | Ratio of the approved transaction amount from the top one card_token_id from merchant |
| evening_transaction_count_from_merchant_last_30d_v2 | Amount of approved transactions from the top one card_token_id from merchant |
| inactivity_ratio_from_merchant | Ratio between days without approved transactions above 50 BRL and all days since acquisition |
| median_device_lifespan_over_user_lifespan | Days since acquisition (until blocked, if blocked) on which the merchant had more than 50 BRL in CNP transactions |
| morning_transaction_count_and_transaction_count_ratio_from_merchant_last_7d | Days since acquisition (until blocked, if blocked) on which the merchant had more than 50 BRL in CP transactions |
| morning_transaction_count_and_transaction_count_ratio_from_merchant_last_90d_v2 | Days since acquisition (until blocked, if blocked) on which the merchant had at least one approved transaction with an amount of 2000 or higher |
| morning_transaction_count_from_merchant_last_90d_v2 | Max denied sum in a single BIN from merchant |
| opening_date_from_merchant | Max denied count in a single BIN from merchant |
| percentile_time_since_last_txn_from_merchant | Total sum of transaction amounts not in payment link |
| percentile_time_since_last_txn_from_merchant_last_15d | Average transaction amount not in payment link |
| ratio_active_credit_total_credit_from_merchant | Total sum of contactless amount at merchant |
| rolling_cnae_avg | Total sum of contactless amount at merchant |
| sum_amount_transaction_with_highest_freq_bin_from_merchant_last_30d | Total sum of approved contactless amount at merchant |
| sum_approved_amount_cp_transaction_with_highest_freq_bin_from_merchant_last_30d | Average amount of payment link transactions from merchant |
| sum_credit_from_merchant | Average approved amount of payment link transactions from merchant |
| top_one_card_token_id_transaction_amount_ratio_from_merchant_last_180d | Sum of all approved Pix transactions from merchant overall |
| top_one_card_token_id_transactions_amount_from_merchant_last_180d | Approved TPV of the non-null Card Token Id that transacted the most (in amount) with the merchant in card-not-present transactions |
| transaction_amount_authorization_code_insufficient_funds_from_merchant_last_30d | How much of the merchant's card-present TPV (approved + denied) comes from the non-null Card Token Id that transacted the most (in amount) in card-present transactions |
| transaction_approved_amount_5d | Approved TPV of the non-null Card Token Id that transacted the most (in amount) with the merchant in card-present transactions |
| transaction_approved_count_5d | Count of transactions (approved + denied) of the non-null Card Token Id that transacted the most with the merchant in card-present transactions |

Biases and other issues

To start, be warned that the dataset is imbalanced on many levels. Below we detail the imbalances and possible strategies to mitigate them.

The first imbalance concerns the distribution of labels: around 80% of the labels are positive and 20% negative. The positive/negative ratio is also very different between training and test data. This leads to a model more inclined to positive answers than negative ones. Moreover, because of the label imbalance, during testing a model inclined to positive responses will achieve very high values in positive metrics such as precision, recall and F1. Some strategies can mitigate this: one is to resample in order to create a new dataset with a more uniform distribution; another is, during training, to draw a sample of the positive labels whose size matches the number of negative samples. The same approach can be used in testing to remove the skew from the metrics.
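As an illustration, a minimal downsampling sketch with pandas; this is one possible mitigation, not the procedure used by the lending team:

import datasets
import pandas as pd

# Load the training split and balance it by downsampling the positive
# (majority) class to the size of the negative class.
train = datasets.load_dataset('igormorgado-cw/datadrafts', split='train')
df = train.to_pandas()

positives = df[df['label']]
negatives = df[~df['label']]
balanced = pd.concat([
    positives.sample(n=len(negatives), random_state=42),
    negatives,
]).sample(frac=1, random_state=42)  # shuffle the rows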

The next balance issue is related to the distribution of splits among the batch experiments. Different batches have different training/test split ratios. If, hypothetically, the type of user, and consequently the data related to the user, changes between batches, training will be influenced more by the batches with a larger proportion of training data, and the test set may fail to correctly capture the characteristics of the users. To avoid this, assuming that hypothesis holds, we should use the same proportions (or at least approximately the same) across all batches. Unfortunately this is not possible, since some batches are so unbalanced that they lack test samples entirely while others lack training samples.
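A quick sketch to inspect the split proportions per batch (it simply counts rows per config; splits missing from a batch are reported as 0):

import datasets

builder = datasets.load_dataset_builder('igormorgado-cw/datadrafts')
for name in builder.builder_configs:
    ds = datasets.load_dataset('igormorgado-cw/datadrafts', name=name)
    n_train = ds['train'].num_rows if 'train' in ds else 0
    n_test = ds['test'].num_rows if 'test' in ds else 0
    print(f'{name}: train={n_train} test={n_test}')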

Some of the columns do not provide any meaningful information for inference: features like batch, merchant_id and loan_id are only identifiers into the original database, and we strongly recommend dropping them for any model training. We kept them just to keep the data as raw as possible. More refined datasets will not contain this information.

Timestamp-based columns such as event_timestamp and allowlist_date do not contain meaningful information for regular training. They could in principle be used for time-series modeling, but we conjecture that there are not enough data points in this dataset for any time-series training, so we also recommend not using them.

Geographical data such as geohash_* and code_tract_id can, as is, lead to social-status biases; we recommend not using them either. A sketch of dropping all the columns mentioned above follows.
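Putting the last three recommendations together, a sketch of removing these columns before training (the list follows the feature table above):

import datasets

train = datasets.load_dataset('igormorgado-cw/datadrafts', split='train')
drop_cols = [
    'batch', 'merchant_id', 'loan_id',      # identifiers only
    'event_timestamp', 'allowlist_date',    # timestamps
    'geohash_7', 'geohash_6', 'geohash_5',  # geographical data
    'geohash_4', 'geohash_3', 'code_tract_id',
]
train = train.remove_columns(drop_cols)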

Revenue-based features such as borrowed_amount, paid_amount and percentage_paid can be used to build an alternative fitting target if we want to maximize profit. Using "percentage_paid > 1" as an alternative label can lead to interesting results. We advise using these features only as a substitute for the label, not alongside it.
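A minimal sketch of deriving such a profit-oriented label with pandas (profit_label is a hypothetical name; df is a DataFrame as in the earlier sketches):

# Treat a loan as positive when more than the borrowed amount was paid back.
df['profit_label'] = df['percentage_paid'] > 1.0
# Drop the revenue-based columns afterwards so they do not leak into the features.
df = df.drop(columns=['borrowed_amount', 'paid_amount', 'percentage_paid'])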

Loan behavior features such as user_loan_count and loan_duration may carry very little meaning; for example, a small loan_duration also reflects a small percentage_paid, since the faster the payment, the smaller the interest on it.

Possible enhancements

We should analyze how the positive and negative labels are reflected in different batches: does the behavior of the data change?

Some batches have a very low number of samples. We need to investigate this and add more samples to these batches.

How to use this dataset

Simple reading

This dataset was built for ease of use. The simplest code to use it is:

import datasets

lending_train, lending_test = datasets.load_dataset(
    'igormorgado-cw/datadrafts',
    split=['train', 'test'],
    token='YOUR_HUGGING_FACE_TOKEN',  # replace with your Hugging Face token
)
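When split is a list, load_dataset returns the datasets in the same order, so lending_train and lending_test above are plain datasets.Dataset objects.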

Accessing one batch

To retrieve a single batch, in this example the bias batch, you can use the following snippet:

import datasets

lending_bias_train, lending_bias_test = datasets.load_dataset(
    'igormorgado-cw/datadrafts',
    split=['train', 'test'],
    name='bias',
    token='YOUR_HUGGING_FACE_TOKEN',  # replace with your Hugging Face token
)

To obtain the experiment names, you can query the Hugging Face API:

import datasets

builder = datasets.load_dataset_builder('igormorgado-cw/datadrafts')
batches = builder.builder_configs.keys()
print(list(batches))

And more...

Other datasets derived from this one will be created, either to achieve better computing performance or tailored to specific tasks. Stay tuned...

Authors

Igor Morgado igor.morgado@cloudwalk.io

License

Proprietary. This dataset is owned by Cloudwalk Inc. Copying or use without prior permission is strictly forbidden. If you have, by any means, gained access to this data, please contact the authors, informing them of the full contents of the data you have and where you found it.
