<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Siyuan Li (黎思源)</title>
    <link rel="stylesheet" href="styles.css">
    <link rel="preconnect" href="https://fonts.googleapis.com">
    <link href="https://fonts.googleapis.com/css2?family=Roboto:wght@300;400&family=Merriweather:wght@700&display=swap" rel="stylesheet">
    <link rel="apple-touch-icon" sizes="180x180" href="images/favicon/apple-touch-icon.png">
    <link rel="icon" type="image/png" sizes="32x32" href="images/favicon/favicon-32x32.png">
    <link rel="icon" type="image/png" sizes="16x16" href="images/favicon/favicon-16x16.png">
    <link rel="manifest" href="images/favicon/site.webmanifest">
</head>
<body>
    <header>
        <div class="container header-container">
            <button id="theme-toggle" class="theme-toggle">🌛</button>
            <div class="profile-info">
                <h1>Siyuan Li <span class="chinese-name">(黎思源)</span></h1>
                <p class="subtitle"><b>🤖 Artificial Intelligence Student</b></p>
                <p><span class="icon">📞</span> Tel: (+86) 13815397394</p>
                <p><span class="icon">✉️</span> Email: <a href="mailto:1793706453@qq.com">1793706453@qq.com</a> &amp; <a href="mailto:lyq@shu.edu.cn">lyq@shu.edu.cn</a></p>
                <p><span class="icon">💬</span> WeChat: abcdefghi314159</p>
                <div class="links">
                    <a href="https://github.com/liaoyanqing666" target="_blank" class="button">🐱 GitHub</a>
                    <a href="files/resume.pdf" target="_blank" class="button">📄 Resume</a>
                    <a href="files/undergraduate transcript.pdf" target="_blank" class="button">📝 Undergraduate Transcript</a>
                    <a href="files/undergraduate grade and ranking.pdf" target="_blank" class="button">🎓 Certificate of Grade Point and Rank</a>
                </div>
            </div>
            <div class="carousel-container">
                <img src="images/photo2.jpg" class="carousel-image active" alt="Siyuan Li Photo 1">
                <img src="images/photo3.jpg" class="carousel-image" alt="Siyuan Li Photo 2">
                <img src="images/photo4.jpg" class="carousel-image" alt="Siyuan Li Photo 3">
                <img src="images/photo1.jpg" class="carousel-image" alt="Siyuan Li Photo 4">
                <a class="prev" onclick="changeSlide(-1)">&#10094;</a>
                <a class="next" onclick="changeSlide(1)">&#10095;</a>
            </div>
        </div>
    </header>

    <script>
        // Index of the currently visible carousel image.
        let currentSlide = 0;

        // Advance the carousel by `direction` (+1 = next, -1 = previous),
        // wrapping around at either end.
        function changeSlide(direction) {
            const slides = document.querySelectorAll('.carousel-image');
            slides[currentSlide].classList.remove('active');
            currentSlide += direction;
            if (currentSlide >= slides.length) currentSlide = 0;
            if (currentSlide < 0) currentSlide = slides.length - 1;
            slides[currentSlide].classList.add('active');
        }

        // Toggle between light and dark themes and update the button icon.
        document.getElementById('theme-toggle').addEventListener('click', () => {
            document.documentElement.classList.toggle('dark-theme');
            const isDark = document.documentElement.classList.contains('dark-theme');
            document.getElementById('theme-toggle').textContent = isDark ? '🌞' : '🌛';
        });
    </script>
    
    <section class="intro">
        <div class="container">
            <p>
                I am Siyuan Li, an incoming Ph.D. student in Computer Science at the <a href="https://uga.edu" target="_blank">University of Georgia</a>. I earned my Bachelor's degree in Artificial Intelligence from <a href="https://www.shu.edu.cn/" target="_blank">Shanghai University</a>, graduating first in my major. I prefer to explore, learn, and build things from scratch rather than simply relying on others' pre-made solutions. For instance, I built this webpage myself (with an Easter egg 🐣), and it has received <a href="https://github.com/liaoyanqing666/liaoyanqing666.github.io" target="_blank"><img src="https://img.shields.io/github/stars/liaoyanqing666/liaoyanqing666.github.io?style=social" alt="Stars"/></a> <a href="https://github.com/liaoyanqing666/liaoyanqing666.github.io" target="_blank"><img src="https://img.shields.io/github/forks/liaoyanqing666/liaoyanqing666.github.io?style=social" alt="Forks"/></a>. As an intern at the <a href="https://ai4s.lab.westlake.edu.cn/" target="_blank">AI for Scientific Simulation and Discovery Lab</a> at <a href="https://en.westlake.edu.cn/" target="_blank">Westlake University</a>, I led an exploratory project on AI-driven physical law discovery under the mentorship of <a href="https://tailin.org/" target="_blank">Prof. Tailin Wu</a>. At my university, I have participated in research projects under <a href="https://yuhang.netlify.app/" target="_blank">Prof. Hang Yu</a> and in the Brain-like Computing Center Lab led by <a href="https://www.dlcloud.info/" target="_blank">Prof. Huiran Zhang</a>, and have authored a paper as first author. I have worked on numerous projects, all of which can be viewed on my <a href="https://github.com/liaoyanqing666/" target="_blank">GitHub</a>.
            </p>
            <p>
                My research interests center on innovative modeling across several domains, including multi-modality and natural language processing (NLP). I am also strongly interested in applying AI to scientific inquiry, particularly in developing AI models tailored to specific scientific questions. In my view, scientific data can be regarded as another modality, to which methods from multi-modal learning, CV, and NLP can be adapted. To summarize, my fundamental research interest is: <b>Innovating and Applying Artificial Intelligence Models (Designing Algorithms)</b>.
            </p>
        </div>
    </section>

    <main>
        <div class="container">
            <section>
                <h2>🎓 Education</h2>
                <div class="education-item">
                    <div class="education-header">
                        <div class="details">
                            <h3>University of Georgia (UGA)</h3>
                            <p>Ph.D. Student in Computer Science (<a href="https://computing.uga.edu/">School of Computing</a>)</p>
                            <p class="date">08/2025 (defer to 01/2026) - 06/2030</p>
                        </div>
                    </div>
                </div>
                
                <div class="education-item">
                    <div class="education-header">
                        <div class="details">
                            <h3>Shanghai University (SHU)</h3>
                            <p>B.E. in Artificial Intelligence (<a href="https://cs.shu.edu.cn/">School of Computer Engineering and Science</a>)</p>
                            <p class="date">09/2021 - 06/2025</p>
                        </div>
                        <div class="education-info">
                            <p><strong>GPA:</strong> 3.81/4.0 (93.60/100)</p>
                            <p><strong>Ranking:</strong> 1/52 (1/31 in class)</p>
                        </div>
                    </div>
                    <div class="education-introduction">
                        <p><b>Key courses</b>: Calculus (94), Linear Algebra (100), Object-Oriented Programming (94), Probability and Statistics (95), Data Structures (97), Pattern Recognition (90), Computer Vision (91), Operations and Optimization (88), Data Mining and Knowledge Processing (94), Mathematical Logic (95), Principles and Techniques of LLMs (95), Principle and Algorithm of AI (93)</p>
                    </div>
                </div>
            </section>

            <section>
                <h2>🔬 Research & Internship</h2>
                <!-- <div class="experience-item">
                    <div class="details-with-logo">
                        <h3>
                            <img src="images/Stanford.png" alt="Stanford" class="logo">
                            Remote Research Intern in Stanford University
                        </h3>
                        <p>VLM for medicine (with RAG)</p>
                        <p>Advisor: Dr. Wei Peng (Stanford University)</p>
                        <p class="date">02/2025 - now</p>
                    </div>
                    <div class="content">
                        <ul>
                            <li>Writing a review paper about RAG and VLM.</li>
                        </ul>
                    </div>
                </div> -->

                <div class="experience-item">
                    <div class="details-with-logo">
                        <h3>
                            <img src="images/Huawei.png" alt="Huawei" class="logo">
                            AI Intern (NLP focus) at Huawei
                        </h3>
                        <p>Location: Huawei Lianqiu Lake R&D Center, Shanghai</p>
                        <p>Department: Ascend Computing Inference Development</p>
                        <p><a href="https://github.com/vllm-project/vllm-ascend">Project website</a></p>
                        <p class="date">12/2024 - 03/2025</p>
                    </div>

                    <div class="content">
                        <ul>
                            <li>I am proud to have reported a bug (<a href="https://github.com/vllm-project/vllm/issues/11978">issue #11978</a>) to <a href="https://github.com/vllm-project/vllm">vLLM</a>; my fix has been merged into the main branch (<a href="https://github.com/vllm-project/vllm/pull/11979">PR #11979</a>), making me a small contributor to vLLM. 😎</li>
                            <li>Migrated vLLM to the Ascend NPU platform (<a href="https://github.com/vllm-project/vllm-ascend">vllm-ascend</a>), with responsibility for unit testing and adapting several operators.</li>
                            <li>Adapted speculative decoding for vllm-ascend.</li>
                        </ul>
                    </div>
                </div>

                <div class="experience-item">
                    <div class="details-with-logo">
                        <h3>
                            <img src="images/SHU_BLCC.png" alt="SHU_BLCC" class="logo">
                            SHU Brain-like Computing Center Lab
                        </h3>
                        <p>AI for Recognizing Preference</p>
                        <p>Advisor: Prof. Huiran Zhang (Shanghai University)</p>
                        <p>Project undisclosed</p>
                        <p class="date">04/2024 - 09/2024</p>
                    </div>

                    <div class="content">
                        <ul>
                            <li>Proposed a novel ERP composite formula for analyzing human preferences.</li>
                            <li>Achieved effective classification of preferences using AI methods combined with the developed formula.</li>
                            <li>Authored a paper as the first author, available on <a href="https://arxiv.org/abs/2505.19879">arXiv</a>.</li>
                        </ul>
                    </div>
                </div>

                <div class="experience-item">
                    <div class="details-with-logo">
                        <h3>
                            <img src="images/WestlakeU.png" alt="WestlakeU" class="logo">
                            Research Intern at Westlake University
                        </h3>
                        <p>Rule Discovery in Physical Data/Video</p>
                        <p>Advisor: Prof. Tailin Wu (Westlake University), Prof. Sebastian Musslick (Brown University)</p>
                        <p>Project undisclosed</p>
                        <p class="date">07/2023 - 06/2024</p>
                    </div>
                    <div class="content">
                        <ul>
                            <li>Developed a Transformer-based model and ran experiments on symbolic regression tasks.</li>
                            <li>Extended symbolic regression from mathematical expressions to the video domain by building a multimodal model.</li>
                            <li>Explored the discovery of physical system patterns from videos to empower scientific discovery tasks.</li>
                        </ul>
                    </div>
                </div>

                <div class="experience-item">
                    <div class="details-with-logo">
                        <h3>
                            <img src="images/SHU.png" alt="Shanghai University" class="logo">
                            Research at Shanghai University
                        </h3>
                        <p>Video Frame Interpolation with PVT</p>
                        <p>Advisor: Prof. Hang Yu (Shanghai University)</p>
                        <p><a href="https://github.com/liaoyanqing666/PVT_v2_video_frame_interpolation">Project website</a></p>
                        <p class="date">04/2023 - 06/2023</p>
                    </div>
                    <div class="content">
                        <ul>
                            <li>Proposed a novel encoder-decoder video frame interpolation model leveraging PVT v2 as the encoder and a UNet-like decoder with deconvolution and residual concatenation.</li>
                            <li>Achieved an SSIM of 0.9879 on the Vimeo90K dataset, surpassing state-of-the-art methods.</li>
                        </ul>
                    </div>
                </div>
            </section>

            <section>
                <h2>💻 Projects (Selected by Learning Path)</h2>
                <div class="project-item">
                    <div class="details">
                        <h3>Fine-Tuning of Multimodal Medical Large Models Integrating the RAG Mechanism</h3>
                        <p>Graduation Project</p>
                        <p class="date">05/2025</p>
                    </div>
                    <div class="content">
                        <ul>
                            <li>Designed and implemented a medical content generation system combining Retrieval-Augmented Generation (RAG) and multimodal large language model (MLLM) fine-tuning.</li>
                            <li>Developed a multimodal RAG framework supporting joint image-text input, featuring multiple retrieval paradigms such as joint embedding, label-guided retrieval, and image-text pair binding.</li>
                            <li>Fine-tuned the Qwen2.5-VL model in two stages using Chinese medical QA and image-text datasets, yielding the Qwen2.5-VL-Med model with domain-specific reasoning capabilities.</li>
                            <li>Built a modular web-based interactive system supporting local/cloud API deployment, multimodal input, streaming response, and history tracking.</li>
                        </ul>
                    </div>
                </div>

                <div class="project-item">
                    <div class="details">
                        <h3>Cross-Modal Pretrained Model Alignment</h3>
                        <p><a href="https://github.com/liaoyanqing666/Pretrained_Model_Alignment">Project website</a></p>
                        <p class="date">07/2024</p>
                    </div>
                    <div class="content">
                        <ul>
                            <li>Proposed and implemented a method to quickly align pre-trained models from different modalities.</li>
                            <li>Designed a twin neural network similarity module to align pretrained models with varying embedding dimensions.</li>
                            <li>Achieved rapid model alignment between text and image modalities with minimal training on a standard image classification dataset, rather than requiring a large "image-description" dataset typical for models like CLIP.</li>
                            <li>Experimentally demonstrated the project's ability to align quickly with minimal GPU requirements and satisfactory performance.</li>
                        </ul>
                    </div>
                </div>
                
                <div class="project-item">
                    <div class="details">
                        <h3>Reproduction of and Experiments with the TextCNN Paper</h3>
                        <p><a href="https://github.com/liaoyanqing666/TextCNN-Chinese-text-data-mining-">Project website</a></p>
                        <p class="date">03/2023</p>
                    </div>
                    <div class="content">
                        <ul>
                            <li>Reproduced and experimented with the TextCNN model.</li>
                            <li>Performed tokenization and encoding of sentence content, followed by padding or truncating sentence lengths.</li>
                            <li>Implemented word embedding and utilized multiple convolutional kernels of varying sizes for feature extraction, pooling, and final classification through fully connected layers.</li>
                        </ul>
                    </div>
                </div>

                <div class="project-item">
                    <div class="details">
                        <h3>Force Video Classification Based on CNN-LSTM</h3>
                        <p><a href="https://github.com/liaoyanqing666/CNN_LSTM_video_classification">Project website</a></p>
                        <p class="date">02/2023</p>
                    </div>
                    <div class="content">
                        <ul>
                            <li>Developed a network model based on CNN for video frame feature extraction and LSTM for sequential frame feature computation.</li>
                            <li>Compared the classification performance of KNN and ANN after freezing the feature extraction model parameters.</li>
                            <li>Achieved 92% accuracy on a public dataset, comparable to results from another study using a non-public dataset.</li>
                        </ul>
                    </div>
                </div>

                <div class="project-item">
                    <div class="details">
                        <h3>Handwriting Recognition System Based on Siamese Neural Networks</h3>
                        <p><a href="https://github.com/liaoyanqing666/siamese_neural_network_OfHSV">Project website</a></p>
                        <p class="date">11/2022</p>
                    </div>
                    <div class="content">
                        <ul>
                            <li>Independently designed and coded a system utilizing VGG16 for signature feature extraction.</li>
                            <li>Achieved 100% accuracy on the CEDAR dataset using Siamese neural networks for classification.</li>
                            <li>Developed frontend-backend interaction programs enabling the utilization of training results on web platforms.</li>
                        </ul>
                    </div>
                </div>
            </section>

            <section>
                <h2>🏆 Awards</h2>
                <ul class="awards-list">
                    <li><a href="https://icpc.global/" target="_blank">ICPC (International Collegiate Programming Contest) Asia Regional Contest (Hefei)</a> Bronze Medal 🥉 <span class="date">11/2023</span></li>
                    <li><a href="https://icpc.global/" target="_blank">ICPC (International Collegiate Programming Contest) Asia Regional Contest (Nanjing)</a> Bronze Medal 🥉 <span class="date">11/2022</span></li>
                    <li><a href="http://www.asc-events.org/" target="_blank">ASC Student Supercomputer Challenge</a> National Second Prize 🥈 <span class="date">02/2024</span></li>
                    <li><a href="https://dasai.lanqiao.cn/" target="_blank">Group A of C/C++ Division of Blue Bridge Programming Cup</a> National Third Prize 🥉 <span class="date">06/2023</span></li>
                    <li><a href="https://dasai.lanqiao.cn/" target="_blank">Group A of C/C++ Division in Shanghai Division of Blue Bridge Programming Cup</a> First Prize 🥇 <span class="date">04/2023</span></li>
                    <li><a href="https://ccpc.io/" target="_blank">CCPC (China Collegiate Programming Contest) Shanghai Programming Contest</a> Silver Medal 🥈 <span class="date">10/2022</span></li>
                </ul>
            </section>

            <section>
                <h2>📚 Papers</h2>
                <div class="papers-item">
                    <div class="details">
                        <h3>The Study of Human Preference Based on Integrated Analysis of N1 and LPP Components.</h3>
                        <p><a href="https://arxiv.org/abs/2505.19879">Paper link.</a> arXiv:2505.19879.</p>
                    </div>
                    <div class="content">
                        <ul>
                            <li>Authors: Siyuan Li, Xiangze Meng, Yijian Yang, Yiwen Xu, Yunfei Wang, Chenghu Qiu, Hanyi Jiang, Pin Wu, Shengbo Chen, Xiao Wei, Hao Wang, Lan Ni, Huiran Zhang.</li>
                        </ul>
                    </div>
                </div>

                <div class="papers-item">
                    <div class="details">
                        <h3>Research advanced in offline handwritten signature verification.</h3>
                        <p><a href="https://www.ewadirect.com/proceedings/ace/article/view/4365">Paper link.</a> Applied and Computational Engineering, 6(1), 1244-1252. DOI: 10.54254/2755-2721/6/20230653.</p>
                    </div>
                    <div class="content">
                        <ul>
                            <li>Co-authors: Yuhang Guo, Siyuan Li (Co-first author), Jinxuan Wu</li>
                        </ul>
                    </div>
                </div>
            </section>

            <section>
                <h2>🛠️ Skills</h2>
                <ul class="skills-list">
                    <li><strong>Programming languages:</strong> Python (Advanced), C++ (Proficient), HTML (Proficient), MATLAB (Familiar), CSS (Familiar), JavaScript (Familiar)</li>
                    <li><strong>Tools:</strong> Git, Microsoft Word, LaTeX (Overleaf), Markdown, VS Code Remote SSH</li>
                    <li><strong>AI-related skills:</strong> PyTorch (Advanced), Transformers (Proficient), LLM (General), vLLM (General)</li>
                </ul>
            </section>

            <section>
                <h2>🤝 Extracurricular & Volunteer Activities</h2>
                <div class="activity-item">
                    <div class="details">
                        <h3>New Media Center, School of Computer Engineering and Science</h3>
                        <p>Chairman</p>
                        <p class="date">01/2022 - 01/2023</p>
                    </div>
                    <div class="content">
                        <ul>
                            <li>Managed content publication on the school's official WeChat account and coordinated daily tasks.</li>
                            <li>Organized and managed recruitment presentations, student representative meetings, and other related affairs.</li>
                        </ul>
                    </div>
                </div>

                <div class="activity-item">
                    <div class="details">
                        <h3>ByteDance</h3>
                        <p>Campus Ambassador</p>
                        <p class="date">03/2022 - 06/2022</p>
                    </div>
                    <div class="content">
                        <ul>
                            <li>Assisted in promoting spring recruitment and summer internships.</li>
                            <li>Distributed campus recruitment handbooks and internal referral codes.</li>
                        </ul>
                    </div>
                </div>
                <div class="activity-item">
                    <b>Volunteer time: 100h+ ⏳</b>
                </div>
            </section>
        </div>
    </main>

    <footer>
        <div class="container">
            <p>&copy; 2025 Siyuan Li. All rights reserved. All original content can be found in the <a href="https://github.com/liaoyanqing666/liaoyanqing666.github.io" target="_blank">website repository</a>.</p>
            <p>And here is a fun game I wrote: <a href="game.html">Click the Ball Game 🎮</a>.</p>
        </div>
    </footer>

    <script src="https://www.gstatic.com/firebasejs/8.10.0/firebase-app.js"></script>
    <script src="https://www.gstatic.com/firebasejs/8.10.0/firebase-firestore.js"></script>
    <script src="track-visitors.js"></script>
</body>
</html>
