{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "2951d7a5",
   "metadata": {},
   "source": [
    "# <center><font face=\"楷体\" color=\"lightblue\">Data Preprocessing</font></center>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1b687d5f",
   "metadata": {},
   "source": [
    "#### <center><font face=\"宋体\" color=\"orange\">Tianjin University, Ma Zihang</font></center>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "cad178da",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "from tqdm import tqdm  # progress bar\n",
    "from dateutil.relativedelta import relativedelta  # date arithmetic\n",
    "import datetime\n",
    "import h5py\n",
    "import matplotlib.pyplot as plt\n",
    "from pandas.tseries.offsets import BDay\n",
    "from multiprocessing import Pool"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e287db77",
   "metadata": {},
   "source": [
    "##### **1. Raw File Processing**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a884a4f1",
   "metadata": {},
   "source": [
    "> Logic: read the data files from the given folder and run a simplified mock of the preprocessing, e.g. excluding suspended stocks, sorting, and date conversion (the real pipeline is far more involved)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "f208b2a5",
   "metadata": {},
   "outputs": [],
   "source": [
    "def merge_data():\n",
    "    rawdata_dirname=r\"B:\\desktop\\python\\quant\\master\\复现_基于多任务学习的低频量价模型\\rawdata\"\n",
    "\n",
    "    all_filename=os.listdir(rawdata_dirname)\n",
    "    df_all=pd.DataFrame()\n",
    "    for filename in all_filename:\n",
    "        name=os.path.join(rawdata_dirname,filename)\n",
    "        temp_df=pd.read_csv(filepath_or_buffer=name,\n",
    "            skiprows=0,\n",
    "            parse_dates=[\"time\"])\n",
    "        df_all=pd.concat((df_all,temp_df))\n",
    "    df_all.columns=['date', 'code', 'open', 'close', 'low', 'high', 'vwap', 'volume']\n",
    "    # vwap = Volume Weighted Average Price\n",
    "    df_all.sort_values(by=[\"date\",\"code\"],ascending=True,inplace=True)\n",
    "    df_all=df_all[df_all[\"volume\"]>0]  # drop suspended stocks (zero volume)\n",
    "    df_all.reset_index(inplace=True,drop=True)\n",
    "    return df_all"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "509bbf01",
   "metadata": {},
   "source": [
    "##### **2. Computing Labels (used later as the network's prediction targets)**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "01d6932d",
   "metadata": {},
   "source": [
    "> Logic: first sort ascending by code (primary key) and date (secondary key); then group by code and compute each stock's return over the next ten days as the label; finally drop the first (group) level of the result's index so it aligns back with the sorted data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "f3c38e5b",
   "metadata": {},
   "outputs": [],
   "source": [
    "def cal_future_rev(df,days=10):\n",
    "    df=df.sort_values([\"code\",\"date\"],ascending=True)\n",
    "\n",
    "    # forward return: close[t+days] / close[t] - 1; the final `days` rows per code are NaN\n",
    "    df[\"return_10d\"]=df.groupby([\"code\"]).apply(\n",
    "        lambda x:x[\"close\"].pct_change(days).shift(-days)\n",
    "    ).reset_index(level=0,drop=True)\n",
    "    return df"
   ]
  },
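  {
   "cell_type": "markdown",
   "id": "a1b2c3d4",
   "metadata": {},
   "source": [
    "> A quick sanity check on toy data (the numbers below are made up for illustration): with closes 1, 2, ..., 12, the 10-day forward return on the first day should be 11/1 - 1 = 10."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2c3d4e5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# toy illustration (made-up numbers): 12 days of closes 1..12 for one code\n",
    "toy = pd.DataFrame({\n",
    "    \"code\": [\"A\"] * 12,\n",
    "    \"date\": pd.date_range(\"2025-01-01\", periods=12),\n",
    "    \"close\": np.arange(1.0, 13.0),\n",
    "})\n",
    "cal_future_rev(toy, days=10)[\"return_10d\"].iloc[0]  # close[10]/close[0] - 1 = 10.0"
   ]
  },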
  {
   "cell_type": "markdown",
   "id": "9466eb49",
   "metadata": {},
   "source": [
    "##### **3. Weekly Data Processing**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1c61a672",
   "metadata": {},
   "source": [
    "> Logic: map each date back to the nearest Monday on or before it ('2025-06-15', a Sunday --> '2025-06-09', that week's Monday)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "19791c6d",
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_monday_from_date(df_date):\n",
    "    date = pd.to_datetime(df_date)\n",
    "    # dayofweek: Monday=0 ... Sunday=6, so subtracting it lands on that week's Monday\n",
    "    monday = date - pd.to_timedelta(date.dayofweek, unit='D')\n",
    "    return monday.date() + BDay(0)  # BDay(0) normalizes the result to a business-day Timestamp"
   ]
  },
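  {
   "cell_type": "markdown",
   "id": "c3d4e5f6",
   "metadata": {},
   "source": [
    "> A quick sanity check of the mapping (the date is only an example): Sunday 2025-06-15 should come back as Monday 2025-06-09."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e5f6a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# sanity check: Sunday 2025-06-15 should map back to Monday 2025-06-09\n",
    "get_monday_from_date(\"2025-06-15\")"
   ]
  },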
  {
   "cell_type": "markdown",
   "id": "0be0fa9e",
   "metadata": {},
   "source": [
    "> Logic: map each date to the first of its month ('2025-06-15' --> '2025-06')."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "4eae4ea5",
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_month_from_date(target_date):\n",
    "    date = pd.to_datetime(target_date)\n",
    "    return datetime.datetime.strftime(date, \"%Y-%m\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e441b277",
   "metadata": {},
   "source": [
    "> Logic: check whether weekly or monthly frequency is requested, then dispatch to the matching transform."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "c646b2aa",
   "metadata": {},
   "outputs": [],
   "source": [
    "def gen_agg_time(df, date_col='date', agg_dim=\"W\"):\n",
    "    df_ms = df.copy()\n",
    "    if agg_dim == \"W\":\n",
    "        df_ms['new_date'] = pd.to_datetime(df[date_col]).apply(get_monday_from_date)\n",
    "        # use apply to run a function on every element of the Series\n",
    "    elif agg_dim == \"M\":\n",
    "        df_ms['new_date'] = pd.to_datetime(df[date_col]).apply(get_month_from_date)\n",
    "    return df_ms['new_date']"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "72a947a2",
   "metadata": {},
   "source": [
    "> Logic: aggregate the data and return it sorted by code (primary key) and date (secondary key)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "07b00573",
   "metadata": {},
   "outputs": [],
   "source": [
    "def agg_col(df_ms, agg_col_name, seq_len):\n",
    "    weekly_data = df_ms.groupby([\"code\", agg_col_name]).agg({\n",
    "        'date': 'last',  # last trading day of the period\n",
    "        'open': 'first',  # open of the period's first trading day\n",
    "        'high': 'max',  # period high\n",
    "        'low': 'min',  # period low\n",
    "        'close': 'last',  # close of the period's last trading day\n",
    "        'vwap': lambda x: np.average(x, weights=df_ms.loc[x.index, 'volume']),  # volume-weighted average\n",
    "        'volume': 'sum'  # total period volume\n",
    "    })\n",
    "    weekly_data = weekly_data.reset_index()\n",
    "\n",
    "    all_agg_date = weekly_data[agg_col_name].unique()\n",
    "    # all_agg_date = np.sort(all_agg_date)\n",
    "    beg_time = all_agg_date[-seq_len:][0]  # keep only the last seq_len periods\n",
    "\n",
    "    # compare directly: works for both Timestamp keys (weekly) and 'YYYY-MM' string keys (monthly)\n",
    "    weekly_data = weekly_data[weekly_data[agg_col_name] >= beg_time]\n",
    "\n",
    "    weekly_data = weekly_data.reset_index(drop=True)\n",
    "    weekly_data = weekly_data.sort_values([\"code\", agg_col_name], ascending=True)\n",
    "    return weekly_data"
   ]
  },
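  {
   "cell_type": "markdown",
   "id": "e5f6a7b8",
   "metadata": {},
   "source": [
    "> A small toy illustration (all numbers made up): two weeks of daily bars for one code collapse into one OHLCV row per week, with vwap re-weighted by volume, e.g. week 1 vwap = (10.2*100 + 11.2*300) / 400 = 10.95."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6a7b8c9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# toy illustration (made-up numbers): two weeks of daily bars, two trading days each\n",
    "toy = pd.DataFrame({\n",
    "    \"code\": [\"A\"] * 4,\n",
    "    \"date\": pd.to_datetime([\"2025-06-02\", \"2025-06-03\", \"2025-06-09\", \"2025-06-10\"]),\n",
    "    \"monday_date\": pd.to_datetime([\"2025-06-02\", \"2025-06-02\", \"2025-06-09\", \"2025-06-09\"]),\n",
    "    \"open\": [10.0, 11.0, 12.0, 13.0],\n",
    "    \"high\": [11.0, 12.0, 13.0, 14.0],\n",
    "    \"low\": [9.0, 10.0, 11.0, 12.0],\n",
    "    \"close\": [10.5, 11.5, 12.5, 13.5],\n",
    "    \"vwap\": [10.2, 11.2, 12.2, 13.2],\n",
    "    \"volume\": [100.0, 300.0, 200.0, 200.0],\n",
    "})\n",
    "agg_col(toy, \"monday_date\", seq_len=2)  # week 1 row: open 10.0, high 12.0, close 11.5, vwap 10.95, volume 400"
   ]
  },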
  {
   "cell_type": "markdown",
   "id": "5d55f69a",
   "metadata": {},
   "source": [
    "> Logic: aggregate the requested number of weeks of weekly data and return it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "2d2d3ca4",
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_weekly_data(df_outer, end_date, weeks=20):\n",
    "    df=df_outer.copy()\n",
    "    df[\"date\"]=pd.to_datetime(df[\"date\"])  # just in case\n",
    "    df=df.where(df[\"date\"]<=end_date).dropna()\n",
    "    begin_time=end_date+relativedelta(weeks=-weeks-2)  # two extra weeks of buffer for holidays\n",
    "    df=df.where(df[\"date\"]>begin_time).dropna()\n",
    "\n",
    "    new_date=gen_agg_time(df,agg_dim=\"W\")\n",
    "    df[\"monday_date\"]=new_date\n",
    "    data=agg_col(df,\"monday_date\",seq_len=weeks)\n",
    "    return data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "017193fc",
   "metadata": {},
   "source": [
    "##### **4. Monthly Data Processing**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cdf5ff23",
   "metadata": {},
   "source": [
    "> Logic: aggregate the requested number of months of monthly data and return it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "a14cf58d",
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_monthly_data(daily_data, end_date, months=20):\n",
    "    df_ms = daily_data.copy()\n",
    "    df_ms['date'] = pd.to_datetime(df_ms['date'])\n",
    "    df_ms = df_ms.where(df_ms['date'] <= end_date).dropna()\n",
    "    beg_date = pd.to_datetime(end_date) + relativedelta(months=-months - 1)  # one extra month of buffer in case counts fall short\n",
    "    df_ms = df_ms.where(df_ms['date'] >= beg_date).dropna()\n",
    "\n",
    "    new_date = gen_agg_time(df_ms, agg_dim=\"M\")  # monthly, not weekly, aggregation\n",
    "    df_ms[\"month_date\"] = new_date\n",
    "    data = agg_col(df_ms, agg_col_name='month_date', seq_len=months)\n",
    "    return data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "238a2d56",
   "metadata": {},
   "source": [
    "##### **5. Splitting into Samples**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e5101cc5",
   "metadata": {},
   "source": [
    "> Logic: extract the rows for a given code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "487906a1",
   "metadata": {},
   "outputs": [],
   "source": [
    "def split_by_code(code, df):\n",
    "    data = df[df[\"code\"] == code]\n",
    "    data = data.sort_values(by=[\"date\"], ascending=True)  # avoid inplace sort on a slice (SettingWithCopyWarning)\n",
    "    data = data.loc[:, \"open\":\"volume\"]\n",
    "    return data.values"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "012310a8",
   "metadata": {},
   "source": [
    "> Logic: extract the given code's data at each frequency and apply time-series normalization; return None when a code lacks the required amount of history."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "cdbcd249",
   "metadata": {},
   "outputs": [],
   "source": [
    "def main_fun(code, daily_data, weekly_data, monthly_data, daily, weeks, months, end_time):\n",
    "    tmp = daily_data[daily_data[\"code\"] == code]\n",
    "    if len(tmp) < daily:\n",
    "        return None\n",
    "    x1_daily_tmp = split_by_code(code, daily_data)\n",
    "    x2_weekly_tmp = split_by_code(code, weekly_data)\n",
    "    x3_monthly_tmp = split_by_code(code, monthly_data)\n",
    "    x1_daily_tmp, x2_weekly_tmp, x3_monthly_tmp = preprocess_features(x1_daily_tmp, x2_weekly_tmp,\n",
    "                                                                      x3_monthly_tmp)\n",
    "    if len(x2_weekly_tmp) < weeks or len(x3_monthly_tmp) < months:\n",
    "        return None\n",
    "    # single combined mask (chained boolean indexing would mis-align the second mask)\n",
    "    y_row = daily_data[(daily_data[\"date\"] == end_time) & (daily_data[\"code\"] == code)]\n",
    "    if len(y_row) == 0:\n",
    "        return None\n",
    "    y_tmp = y_row[\"return_10d\"].values[0]\n",
    "    return x1_daily_tmp, x2_weekly_tmp, x3_monthly_tmp, y_tmp, [code, end_time]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "17151ccb",
   "metadata": {},
   "source": [
    "##### **6. Data Preprocessing**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d3c3cf69",
   "metadata": {},
   "source": [
    "> Logic: apply time-series normalization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6c125cbf",
   "metadata": {},
   "outputs": [],
   "source": [
    "def preprocess_features(X1, X2, X3):\n",
    "    # time-series normalization: divide each column by its last value\n",
    "    # mainly to tame volume, whose raw scale is huge\n",
    "    X1 = X1 / X1[-1:, :]\n",
    "    X2 = X2 / X2[-1:, :]\n",
    "    X3 = X3 / X3[-1:, :]\n",
    "    return X1, X2, X3"
   ]
  },
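  {
   "cell_type": "markdown",
   "id": "aa11bb22",
   "metadata": {},
   "source": [
    "> A toy illustration (made-up numbers): each column is divided by its final value, so every series ends at 1 and raw scales become comparable across stocks."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bb22cc33",
   "metadata": {},
   "outputs": [],
   "source": [
    "# toy illustration (made-up numbers): dividing by the last row makes every series end at 1\n",
    "X = np.array([[1.0, 10.0], [2.0, 20.0], [4.0, 50.0]])\n",
    "X_scaled, _, _ = preprocess_features(X, X.copy(), X.copy())\n",
    "X_scaled  # last row is [1., 1.]"
   ]
  },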
  {
   "cell_type": "markdown",
   "id": "26ad6eae",
   "metadata": {},
   "source": [
    "##### **7. Main Program**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "548fba13",
   "metadata": {},
   "outputs": [],
   "source": [
    "weeks=20\n",
    "months=12\n",
    "daily=40\n",
    "\n",
    "df_total=merge_data()\n",
    "all_date=df_total[\"date\"].unique()  # trading-day calendar\n",
    "# all_date=np.sort(all_date)\n",
    "df_total=cal_future_rev(df_total)  # compute labels\n",
    "\n",
    "results=[]\n",
    "\n",
    "for i in tqdm(range(len(all_date)-1)):\n",
    "    end_time=all_date[i]\n",
    "    if end_time<pd.Timestamp(\"2015-12-1\"):\n",
    "        continue  # avoid running off the front of the history when building monthly data\n",
    "    if str(all_date[i])[5:7]==str(all_date[i+1])[5:7]:\n",
    "        continue  # only process the last trading day of each month\n",
    "    \n",
    "    end_index=np.where(all_date==end_time)[0]\n",
    "    begin_time=all_date[end_index[0]-daily]\n",
    "\n",
    "    # where() turns rows failing the condition into NaN; dropna then removes them\n",
    "    daily_data=df_total.where((df_total[\"date\"]>begin_time)&(df_total[\"date\"]<=end_time)).dropna()\n",
    "\n",
    "    weekly_data=generate_weekly_data(df_total,end_time,weeks)\n",
    "    monthly_data=generate_monthly_data(df_total,end_time,months)\n",
    "    \n",
    "    daily_data.sort_values(by=[\"code\",\"date\"],ascending=True,inplace=True)\n",
    "\n",
    "    all_code=daily_data[\"code\"].unique()\n",
    "    all_code=np.sort(all_code)\n",
    "\n",
    "    # the block below is adapted from reference code\n",
    "    # note: worker functions defined in a notebook may fail to pickle under the Windows spawn start method\n",
    "    pool = Pool(10)  # start the worker pool\n",
    "    for code in all_code:\n",
    "        result = pool.apply_async(main_fun, (\n",
    "            code, daily_data, weekly_data, monthly_data, daily, weeks, months, end_time))\n",
    "        results.append(result)\n",
    "    pool.close()\n",
    "    pool.join()\n",
    "\n",
    "x1_daily = []\n",
    "x2_weekly = []\n",
    "x3_monthly = []\n",
    "y = []\n",
    "codes_dates = []\n",
    "for item in tqdm(results):\n",
    "    if item.get():  # keep only non-None results (AsyncResult.get() caches the value)\n",
    "        (x1_daily_tmp, x2_weekly_tmp, x3_monthly_tmp, y_tmp, codes_dates_tmp) = item.get()\n",
    "        x1_daily.append(x1_daily_tmp)\n",
    "        x2_weekly.append(x2_weekly_tmp)\n",
    "        x3_monthly.append(x3_monthly_tmp)\n",
    "        y.append(y_tmp)\n",
    "        codes_dates.append(codes_dates_tmp)\n",
    "\n",
    "# train/test split\n",
    "split_ratio = 0.9\n",
    "n_train = int(len(y) * split_ratio)\n",
    "x1_daily_train = x1_daily[:n_train]\n",
    "x2_weekly_train = x2_weekly[:n_train]\n",
    "x3_monthly_train = x3_monthly[:n_train]\n",
    "y_train = y[:n_train]\n",
    "codes_dates_train = codes_dates[:n_train]\n",
    "\n",
    "# write to file\n",
    "with h5py.File('data/stock_data_train_month.h5', 'w') as f:\n",
    "    f.create_dataset('x1_daily', data=x1_daily_train)\n",
    "    f.create_dataset('x2_weekly', data=x2_weekly_train)\n",
    "    f.create_dataset('x3_monthly', data=x3_monthly_train)\n",
    "    f.create_dataset('y', data=y_train)\n",
    "    f.create_dataset('codes_dates', data=np.array(codes_dates_train, dtype='S'))\n",
    "\n",
    "x1_daily_test = x1_daily[int(len(y) * split_ratio):]\n",
    "x2_weekly_test = x2_weekly[int(len(y) * split_ratio):]\n",
    "x3_monthly_test = x3_monthly[int(len(y) * split_ratio):]\n",
    "y_test = y[int(len(y) * split_ratio):]\n",
    "codes_dates_test = codes_dates[int(len(y) * split_ratio):]\n",
    "with h5py.File('data/stock_data_test_month.h5', 'w') as f:\n",
    "    f.create_dataset('x1_daily', data=x1_daily_test)\n",
    "    f.create_dataset('x2_weekly', data=x2_weekly_test)\n",
    "    f.create_dataset('x3_monthly', data=x3_monthly_test)\n",
    "    f.create_dataset('y', data=y_test)\n",
    "    f.create_dataset('codes_dates', data=np.array(codes_dates_test, dtype='S'))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bc6ddab8",
   "metadata": {},
   "source": [
    "> My machine can't handle the full run..."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "qt11",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
