<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Pan Zhang</title>
    <link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
    <link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.4/css/all.min.css" rel="stylesheet">
</head>
<body class="bg-white font-sans leading-normal tracking-normal">
    <header class="bg-white border-b border-gray-200">
        <nav class="container mx-auto px-4 sm:px-6 lg:px-8">
            <div class="flex justify-between h-16">
                <div class="flex">
                    <div class="flex-shrink-0 flex items-center">
                        <a href="#" class="text-gray-800 hover:text-gray-900">Homepage</a>
                    </div>
                    <div class="hidden sm:ml-6 sm:flex sm:space-x-8">
                        <a href="#" class="inline-flex items-center px-1 pt-1 text-base font-normal text-gray-600 hover:text-gray-900">About Me</a>
                        <a href="#" class="inline-flex items-center px-1 pt-1 text-base font-normal text-gray-600 hover:text-gray-900">News</a>
                        <a href="#" class="inline-flex items-center px-1 pt-1 text-base font-normal text-gray-600 hover:text-gray-900">Publications</a>
                        <a href="#" class="inline-flex items-center px-1 pt-1 text-base font-normal text-gray-600 hover:text-gray-900">Honors and Awards</a>
                        <a href="#" class="inline-flex items-center px-1 pt-1 text-base font-normal text-gray-600 hover:text-gray-900">Education</a>
                    </div>
                </div>
            </div>
        </nav>
    </header>

    <main class="container mx-auto px-4 sm:px-6 lg:px-8 py-8">
        <div class="flex flex-col md:flex-row">
            <div class="md:w-1/4 mb-8 md:mb-0">
                <img src="https://picsum.photos/300/300" alt="Profile Picture" class="rounded-full mx-auto mb-4">
                <h2 class="text-xl font-bold mb-2">Pan Zhang</h2>
                <p class="text-gray-600 mb-4">Researcher, Shanghai AI Laboratory</p>
                <ul class="list-disc pl-4 mb-4">
                    <li>Shanghai, China</li>
                    <li>Email</li>
                    <li>Google Scholar</li>
                </ul>
            </div>
            <div class="md:w-3/4">
                <h2 class="text-xl font-bold mb-2">Short Bio</h2>
                <p class="text-gray-600 mb-4">I am currently a researcher at Shanghai AI Laboratory. I received my Ph.D. in 2022 through the joint Ph.D. program between Microsoft Research Asia (MSRA) and the University of Science and Technology of China (USTC). Prior to that, I received my Bachelor of Engineering degree from USTC in 2017. I joined Shanghai AI Laboratory in July 2022.</p>
                <p class="text-gray-600 mb-4">My research interests include Multimodal Large Language Models and image/video generation and editing.</p>
                <p class="text-gray-600 mb-4">We are seeking long-term internship candidates and are open to research collaborations. Please email me if you are interested in joining us.</p>
            </div>
        </div>

        <div class="mt-8">
            <h2 class="text-xl font-bold mb-2">News</h2>
            <ul class="list-disc pl-4 mb-4">
                <li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2024.03: The InternLM-XComposer series has received 1,300+ GitHub stars. XComposer2 has been commercially adopted by ByteDance.</li>
                <li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2024.02: The model and dataset of ShareGPT4V have been downloaded 100,000+ times within one month.</li>
                <li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2024.02: Three papers accepted by CVPR 2024. Alpha-CLIP received strong accepts from all reviewers.</li>
                <li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2024.01: We release InternLM-XComposer2, the first 7B model that matches or even surpasses GPT-4V and Gemini Pro in certain assessments.</li>
                <li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2023.09: We release InternLM-XComposer, a vision-language large model for advanced text-image comprehension and composition.</li>
                <li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2023.07: V3Det, the first ten-thousand-class object detection dataset, is accepted by ICCV 2023 as an oral paper.</li>
                <li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2023.03: Two papers accepted by CVPR 2023.</li>
                <li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2023.02: One paper accepted by SIGGRAPH Asia 2023.</li>
                <li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2022.07: One paper accepted by ECCV 2022.</li>
                <li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2022.06: One paper accepted by TPAMI.</li>
                <li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2021.06: CoCosNet v2 is selected as a CVPR 2021 best paper candidate.</li>
                <li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2021.02: CoCosNet v2 and ProDA are accepted by CVPR 2021. CoCosNet v2 is an oral paper.</li>
                <li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2020.10: Bring-Old-Photos-Back-to-Life has received 14,000+ GitHub stars.</li>
                <li><span class="text-red-500"><i class="fab fa-twitter"></i></span> 2020.03: CoCosNet and Bring-Old-Photos-Back-to-Life are accepted by CVPR 2020 as oral papers.</li>
            </ul>
        </div>
    </main>
</body>
</html>
