<!DOCTYPE html>
<html>
<head>
<title>Evaluating Robustness of Reward Models for Mathematical Reasoning</title>
<style>
.hidden {
display: none;
}
</style>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script src="https://kit.fontawesome.com/f8ddf9854a.js" crossorigin="anonymous"></script>
<meta charset="utf-8">
<meta name="description"
content="Evaluating Robustness of Reward Models for Mathematical Reasoning">
<meta name="keywords" content="Mathematical Reasoning, Reward Model, Benchmark, RLHF, Reward Hacking, Reward Overoptimization">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro" rel="stylesheet">
<link rel="stylesheet" href="./static/css/bulma.min.css">
<link rel="stylesheet" href="./static/css/bulma-carousel.min.css">
<link rel="stylesheet" href="./static/css/bulma-slider.min.css">
<link rel="stylesheet" href="./static/css/fontawesome.all.min.css">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
<link rel="stylesheet" href="./static/css/index.css">
<link rel="stylesheet" href="./static/css/leaderboard.css">
<script type="text/javascript" src="static/js/sort-table.js" defer></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script defer src="./static/js/fontawesome.all.min.js"></script>
<script src="./static/js/bulma-carousel.min.js"></script>
<script src="./static/js/bulma-slider.min.js"></script>
<script src="./static/js/index.js"></script>
<script src="./static/js/question_card.js"></script>
<script src="./data/results/data_setting.js" defer></script>
<script src="./data/results/model_scores.js" defer></script>
<script src="./visualizer/data/data_public.js" defer></script>
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-PBF77LE136"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-PBF77LE136');
</script>
</head>
<body>
<section class="hero">
<div class="hero-body">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column has-text-centered">
<h1 class="title is-1 publication-title is-bold">
<span class="opencodeinterpreter" style="vertical-align: middle">Evaluating Robustness of Reward Models for Mathematical Reasoning</span>
</h1>
<br>
<h3>
Note that this project page is fully anonymized. Some links might not be available due to anonymization.
</h3>
<br>
<!-- <div class="column has-text-centered" style="overflow-x: auto;"> -->
<div class="column has-text-centered">
<div class="publication-links" style="justify-content: center;">
<!-- PDF Link. -->
<span class="link-block">
<a href="https://huggingface.co/spaces/RewardMATH/RewardMATH_project/blob/main/ICLR2025_RewardMATH.pdf"
class="external-link button is-normal is-rounded is-dark" target="_blank">
<!-- <span class="icon">
<i class="fas fa-file-pdf"></i>
</span> -->
<span class="icon">
<p style="font-size:18px">📝</p>
</span>
<span>Paper</span>
</a>
</span>
<span class="link-block">
<a href="https://huggingface.co/datasets/RewardMATH/RewardMATH"
class="external-link button is-normal is-rounded is-dark" target="_blank">
<span class="icon">
<p style="font-size:18px">🤗</p>
</span>
<span>Datasets</span>
</a>
</span>
<span class="link-block">
<a href="https://anonymous.4open.science/r/RewardMATH-5BA4/README.md"
class="external-link button is-normal is-rounded is-dark" target="_blank">
<span class="icon">
<i class="fab fa-github"></i>
</span>
<span>Code</span>
</a>
</span>
</div>
</div>
</div>
</div>
</div>
</div>
</section>
<style>
.center {
display: block;
margin-left: auto;
margin-right: auto;
width: 80%;
}
</style>
<section>
<div class="container is-max-desktop">
<!-- Abstract. -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">Abstract</h2>
<div class="content has-text-justified">
<p>
Reward models are key components of reinforcement learning from human feedback (RLHF) systems, aligning model behavior with human preferences.
In the math domain in particular, many studies have used reward models to align policies and improve reasoning capabilities.
Recently, as the importance of reward models has been increasingly emphasized, RewardBench was proposed to understand their behavior.
However, we find that the math subset of RewardBench uses different representations for chosen and rejected completions and relies on a single comparison, which may lead to unreliable results since it only considers an isolated case.
As a result, it fails to accurately reflect the robustness of reward models, leading to a misunderstanding of their performance and potentially resulting in reward hacking.
</p>
<p>
In this work, we introduce a new design for reliable evaluation of reward models, and to validate this, we construct <span class="dnerf">RewardMATH</span>, a benchmark that effectively represents the robustness of reward models in mathematical reasoning tasks.
We demonstrate that scores on <span class="dnerf">RewardMATH</span> strongly correlate with the results of the optimized policy and effectively estimate reward overoptimization, whereas the existing benchmark shows almost no correlation.
These results underscore the potential of our design to enhance the reliability of evaluation and to represent the robustness of reward models.
</p>
</div>
</div>
</div>
<!--/ Abstract. -->
</div>
</section>
<section class="hero is-light is-small">
<div class="hero-body has-text-centered">
<h1 class="title is-1 mmmu">
<span class="mmmu" style="vertical-align: middle">Preliminaries</span>
</h1>
</div>
</section>
<section class="section">
<div class="container">
<div class="columns is-centered has-text-centered">
<!-- <div class="column is-full-width has-text-centered"> -->
<div class="column is-four-fifths">
<h2 class="title is-3">Robustness of reward model</h2>
<div class="content has-text-justified">
<p>
<i>Reward hacking</i> represents a significant challenge in the development and implementation of reward models for RLHF.
This phenomenon occurs when policies exploit loopholes in reward models to achieve higher scores, stemming from discrepancies between human preferences (the true reward function) and proxy reward models.
Such issues underscore the importance of evaluating reward models themselves, not just policy models (post-RLHF models).
Reward hacking can lead to <b>reward overoptimization</b>, where optimizing against a proxy reward model may initially improve the true reward but gradually degrades it, ultimately resulting in optimization failure.
</p>
<p>
In this work, we argue that the <i>robustness of a reward model should be evaluated based on how effectively it provides signals from which a policy can learn.</i>
</p>
</div>
</div>
</div>
</div>
</section>
<section class="hero is-light is-small">
<div class="hero-body has-text-centered">
<h1 class="title is-1 mmmu">
<span class="mmmu" style="vertical-align: middle">Designing a Reliable Benchmark</span>
</h1>
</div>
</section>
<section class="section">
<div class="container">
<div class="columns is-centered has-text-centered">
<!-- <div class="column is-full-width has-text-centered"> -->
<div class="column is-four-fifths">
<h2 class="title is-3">On the road to the Evaluation of Robustness of Reward Model</h2>
<div class="content has-text-centered">
<img src="static/images/motivation.png" alt="Motivation" class="center" style="width:80%">
<p> A motivating example from the math subset of RewardBench and the drawbacks of the existing evaluation method.</p>
</div>
<div class="content has-text-justified">
<p>
RewardBench, a widely used benchmark for reward models, does not fully address the robustness of models in the math domain: recent findings show that about 20% of the annotations in its underlying PRM800K dataset are incorrect.
Its evaluation compares the reward of a chosen solution against that of a rejected solution annotated by an unaligned GPT-4, which is problematic because humans often skip steps in their solutions, creating representational discrepancies with machine-generated solutions.
These discrepancies undermine the evaluation's reliability, and comparing against a single incorrect solution does not sufficiently assess the robustness of reward models.
</p>
</div>
<br/>
<h2 class="title is-3">RewardMATH</h2>
<div class="content has-text-centered">
<img src="static/images/RewardMATH.png" alt="statstics of RewardMATH" class="center" style="width:90%">
<p> A histogram showing the distribution of samples by the number of steps on RewardBench and <span class="dnerf">RewardMATH</span>, and the contribution of each model to the rejected solutions.</p>
</div>
<div class="content has-text-justified">
<p>
The design philosophy of <span class="dnerf">RewardMATH</span> is to caution against hasty generalization, which occurs when conclusions are drawn from a sample that is too small or consists of too few cases.
To design a reliable benchmark, we aim to mitigate the risk of reward hacking and employ comparisons with a variety of incorrect (i.e., rejected) solutions.
Therefore, we introduce <span class="dnerf">RewardMATH</span>, a reliable benchmark crafted for evaluating the robustness of reward models in mathematical reasoning.
</p>
</div>
<br/>
<h2 class="title is-3">Evaluation metric</h2>
<div class="content has-text-justified">
<p>
For each problem, we infer rewards for 10 solutions in total<span>&#8212;</span>1 correct solution and 9 incorrect solutions<span>&#8212;</span>and assign a true classification label when the reward of the chosen solution is higher than the rewards of all rejected solutions.
Furthermore, since considering only whether the reward of the chosen solution is the highest can be fairly strict, we also use Mean Reciprocal Rank (MRR), where a higher rank for the chosen solution yields a higher score.
</p>
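<p>
As an illustration, the Python sketch below computes these two metrics from precomputed rewards. It is a minimal example assuming one chosen reward and nine rejected rewards per problem; the data layout and the tie-handling convention are ours for illustration, not the official evaluation script.
</p>
<pre><code>
# Minimal sketch of the metrics described above (illustrative, not the released code).
def accuracy_and_mrr(problems):
    """problems: list of {'chosen': float, 'rejected': [float, ...]} reward entries."""
    n_correct, mrr_sum = 0, 0.0
    for p in problems:
        chosen, rejected = p['chosen'], p['rejected']
        # Accuracy: the chosen solution must outrank every rejected solution.
        if all(chosen > r for r in rejected):
            n_correct += 1
        # MRR: reciprocal rank of the chosen solution among all 10 candidates
        # (ties are counted against the chosen solution here; the paper's exact
        # convention may differ).
        rank = 1 + sum(1 for r in rejected if r >= chosen)
        mrr_sum += 1.0 / rank
    n = len(problems)
    return n_correct / n, mrr_sum / n
</code></pre>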
</div>
</div>
</div>
</div>
</div>
</section>
<section class="hero is-light is-small">
<div class="hero-body has-text-centered">
<h1 class="title is-1 mmmu">
<span class="mmmu" style="vertical-align: middle">Evaluating Reward Models</span>
</h1>
</div>
</section>
<section class="section">
<div class="container">
<div class="columns is-centered has-text-centered" style="flex-direction: column; align-items: center;">
<!-- <div class="column is-full-width has-text-centered"> -->
<div class="column is-four-fifths">
<br/>
<div class="content has-text-centered">
<img src="static/images/main_results_1.png" alt="Results of generative RMs" width="60%"/>
<p>
The results of generative reward models on RewardBench and <span class="dnerf">RewardMATH</span>.
</p>
</div>
<div class="content has-text-justified">
<p>
The results from RewardBench suggest that LLMs, such as GPT-4 or Prometheus-2-7B, could potentially serve as effective reward models.
However, more thorough evaluations on <span class="dnerf">RewardMATH</span> indicate that LLMs generally do not perform well as reward models, with most achieving scores close to zero, except for those in the GPT-4 family.
Through a direct assessment that accounts for ties, we find that most LLMs fail to distinguish between correct and incorrect solutions, simply assigning the same score to all of them.
</p>
</div>
</div>
<div class="column is-four-fifths">
<br/>
<div class="content has-text-centered">
<img src="static/images/main_results_2.png" alt="Results of classifier-based RMs and PRMs" width="40%"/>
<p>
The results of classifier-based RMs and PRMs on RewardBench and <span class="dnerf">RewardMATH</span>.
</p>
</div>
<div class="content has-text-justified">
<p>
Rankings on RewardBench do not consistently predict performance on <span class="dnerf">RewardMATH</span>.
Specifically, Oasst-rm-2.1-pythia-1.4b, one of the top-ranked models on RewardBench, struggles on <span class="dnerf">RewardMATH</span>, scoring lower than Beaver-7b-v2.0-reward, the lowest-ranked model on RewardBench.
Meanwhile, Internlm2-7b-reward exhibits the highest performance on <span class="dnerf">RewardMATH</span>, suggesting that it is a genuinely robust reward model for mathematical reasoning.
</p>
</div>
</div>
</div>
</div>
</section>
<!-- RESULTS SECTION -->
<section class="hero is-light is-small">
<div class="hero-body has-text-centered">
<h1 class="title is-1 mmmu">
<span class="mmmu" style="vertical-align: middle">Validating Our Design for a Reliable Benchmark</span>
</h1>
</div>
</section>
<section class="section">
<div class="container">
<div class="columns is-centered has-text-centered" style="flex-direction: column; align-items: center;">
<!-- <div class="column is-full-width has-text-centered"> -->
<div class="column is-four-fifths">
<h2 class="title is-3">Reliability of Benchmark</h2>
<div class="content has-text-centered">
<img src="static/images/correlation.svg" alt="Results of generative RMs" width="80%"/>
<p>
The relationship between the difference in accuracy on math test sets and the performance based on the benchmark design.
</p>
</div>
<br/>
<div class="content has-text-justified">
<p>
<span class="dnerf">RewardMATH</span> shows a strong positive correlation between the benchmark scores and the results of optimized policy, indicating its reliability, whereas RewardBench shows only a weak correlation.
Additionally, the analysis explores the design of evaluation sets that prevent reward hacking by comparing chosen and rejected solutions from the two benchmarks.
The results of heatmap highlight that the importance of minimizing the representation differences between chosen and rejected solutions to mitigate vulnerability to reward hacking, as well as employing one-to-many comparisons for more reliable evaluations.
</p>
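<p>
As a rough sketch of this reliability check, one can correlate per-reward-model benchmark scores with the downstream accuracy of the policies optimized against those reward models. The snippet below uses SciPy's Pearson and Spearman correlations; the function and variable names are illustrative, not taken from the released analysis code.
</p>
<pre><code>
# Hedged sketch: correlate benchmark scores with optimized-policy accuracy.
from scipy.stats import pearsonr, spearmanr

def benchmark_reliability(benchmark_scores, policy_accuracies):
    """Both arguments are lists aligned per reward model."""
    pearson = pearsonr(benchmark_scores, policy_accuracies)[0]
    spearman = spearmanr(benchmark_scores, policy_accuracies)[0]
    return {'pearson': pearson, 'spearman': spearman}
</code></pre>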
</div>
</div>
<div class="column is-four-fifths">
<h2 class="title is-3">Through the Lens of Reward Overoptimization</h2>
<div class="content has-text-centered">
<img src="static/images/data_size_reward.png" alt="Reward overoptimization" width="100%"/>
<p>
Gold rewards and oracle rewards (pass@1) in BoN and PPO experiments with proxy reward models across different amounts of data in a synthetic setup.
</p>
</div>
<br/>
<div class="content has-text-justified">
<p>
Typically, a robust proxy reward model trained to capture human preferences should exhibit increasing gold rewards as KL divergence increases.
Conversely, a collapse in gold rewards at a certain point as KL divergence increases indicates a lack of robustness in the proxy reward model.
The figure illustrates how dataset size affects the behavior of the reward model within a synthetic setup.
We find that proxy reward models trained on smaller datasets reach peak rewards at lower KL divergences, indicating faster overoptimization.
This finding suggests that larger datasets can help mitigate reward overoptimization.
Furthermore, we confirm that reward overoptimization can also be observed through oracle rewards (i.e., pass@1) in tasks with well-defined human preferences, such as mathematics.
</p>
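<p>
For reference, the best-of-n (BoN) setup these experiments refer to can be summarized as: sample n candidate solutions from the policy, keep the one the proxy reward model scores highest, and measure the distance from the base policy with the standard analytic estimate KL = log n - (n - 1)/n. The sketch below assumes placeholder callables for the policy samples and the proxy reward model; it is a minimal illustration, not our experimental code.
</p>
<pre><code>
# Hedged sketch of best-of-n selection against a proxy reward model.
import math

def best_of_n(prompt, candidates, proxy_reward):
    """Pick the candidate solution the proxy reward model scores highest."""
    return max(candidates, key=lambda sol: proxy_reward(prompt, sol))

def bon_kl(n):
    """Analytic KL divergence between the BoN policy and the base policy."""
    return math.log(n) - (n - 1) / n
</code></pre>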
</div>
<br/>
<div class="content has-text-centered">
<img src="static/images/reward_overoptimization.svg" alt="Results of classifier-based RMs and PRMs" width="70%"/>
<p>
Gold and oracle rewards (pass@1) for BoN experiments with MetaMATH-Mistral-7B.
</p>
</div>
<br/>
<div class="content has-text-justified">
<p>
The figure shows how gold and oracle rewards change with increasing KL divergence and reveals varying degrees of overoptimization across models.
Notably, models that perform well on RewardBench, such as Oasst-rm-2.1-pythia-1.4b, often exhibit rapid overoptimization, with no consistent correlation between benchmark performance and the extent of overoptimization.
In contrast, performance on <span class="dnerf">RewardMATH</span> shows a clear trend in which higher scores correlate with less reward collapse, highlighting its reliability in identifying reward models that provide accurate rewards and effectively mitigate overoptimization.
</p>
</div>
</div>
</div>
</div>
</section>
<section class="hero is-light is-small">
<div class="hero-body has-text-centered">
<h1 class="title is-1 mmmu">
<span class="mmmu" style="vertical-align: middle">Discussion</span>
</h1>
</div>
</section>
<section class="section">
<div class="container">
<div class="columns is-centered has-text-centered">
<!-- <div class="column is-full-width has-text-centered"> -->
<div class="column is-four-fifths">
<h2 class="title is-3">Developing effective RLHF systems</h2>
<div class="content has-text-justified">
<p>
Benchmarks serve as critical milestones in advancing artificial intelligence.
In this work, we argue that a benchmark for reward models should reliably assess their robustness, where a robust RM is one that provides useful signals to enable effective policy learning.
Through extensive experiments, we confirm that our reliable benchmark design, which mitigates the risk of reward hacking and employs one-to-many comparisons, accurately reflects the robustness of reward models.
While this work marks a significant step forward, there is still room for improvement.
We validate our design in mathematical reasoning tasks, where human preferences can be clearly defined by correctness, making it easier to gather multiple rejected completions.
Since reward models can be applied to a wide range of tasks, a crucial next step is to extend our design to cover them.
We hope that advancing this line of research will provide a promising path toward developing more trustworthy and effective RLHF systems.
</p>
</div>
<h2 class="title is-3">Conclusion</h2>
<div class="content has-text-justified">
<p>
In this work, we suggest a new design for reliable evaluation of reward models: (1) mitigating the risk of reward hacking and (2) employing a one-to-many comparison.
To validate our design, we propose <span class="dnerf">RewardMATH</span>, a benchmark that effectively represents the robustness of reward models in mathematical reasoning tasks.
Our extensive experiments demonstrate that the performance on <span class="dnerf">RewardMATH</span> has a strong correlation with the performance of the optimized policy, whereas the existing benchmark shows no correlation.
Furthermore, we confirm that <span class="dnerf">RewardMATH</span> can effectively estimate reward overoptimization, a critical concern in RLHF systems.
</p>
</div>
</div>
</div>
</div>
</div>
</section>
<section class="section" id="BibTeX">
<div class="container is-max-desktop content">
<h2 class="title is-3 has-text-centered">BibTeX</h2>
<pre><code>
@article{Anonymized,
title={Evaluating Robustness of Reward Models for Mathematical Reasoning},
author={Anonymized},
journal={Anonymized},
year={2024}
}
</code></pre>
</div>
</section>
<footer class="footer">
<!-- <div class="container"> -->
<div class="content has-text-centered">
</div>
<div class="columns is-centered">
<div class="column is-8">
<div class="content">
<p>
This website is adapted from <a href="https://nerfies.github.io/">Nerfies</a> and <a href="https://mmmu.github.io/">MMMU</a>, licensed under a <a rel="license"
href="http://creativecommons.org/licenses/by-sa/4.0/">Creative
Commons Attribution-ShareAlike 4.0 International License</a>.
</p>
</div>
</div>
</div>
<!-- </div> -->
</footer>
<script>
function sortTable(table, column, type, asc) {
var tbody = table.tBodies[0];
var rows = Array.from(tbody.rows);
rows.sort(function(a, b) {
var valA = a.cells[column].textContent.trim();
var valB = b.cells[column].textContent.trim();
if (type === 'number') {
valA = parseFloat(valA);
valB = parseFloat(valB);
return asc ? valA - valB : valB - valA;
}
// Fall back to lexicographic comparison for non-numeric columns.
return asc ? valA.localeCompare(valB) : valB.localeCompare(valA);
});
rows.forEach(row => tbody.appendChild(row));
}
// Toggles visibility between the two result tables
function toggleTables () {
var table1 = document.getElementById('table1');
var table2 = document.getElementById('table2');
table1.classList.toggle('hidden');
table2.classList.toggle('hidden');
}
// Guard against pages where these optional elements are not present,
// so a missing node does not abort the rest of the script.
var toggleButton = document.getElementById('toggleButton');
if (toggleButton) {
toggleButton.addEventListener('click', toggleTables);
}
const canvas = document.getElementById('difficulty_level_chart');
if (canvas) {
canvas.style.width = '500px';
canvas.style.height = '120px';
const ctx = canvas.getContext('2d');
const difficulty_level_chart = new Chart(ctx, {
type: 'bar',
data: {
labels: ['Easy', 'Medium', 'Hard', 'Overall'],
datasets: [{
label: 'Fuyu-8B',
data: [28.9, 27, 26.4, 27.4],
backgroundColor: 'rgba(196, 123, 160, 0.6)',
borderColor: 'rgba(196, 123, 160, 1)',
borderWidth: 1,
hoverBackgroundColor: 'rgba(196, 123, 160, 1)'
},
{
label: 'Qwen-VL-7B',
data: [39.4, 31.9, 27.6, 32.9],
backgroundColor: 'rgba(245, 123, 113, 0.6)',
borderColor: 'rgba(245, 123, 113, 1)',
borderWidth: 1,
hoverBackgroundColor: 'rgba(245, 123, 113, 1)'
},
{
label: 'LLaVA-1.5-13B',
data: [41.3, 32.7, 26.7, 33.6],
backgroundColor: 'rgba(255, 208, 80, 0.6)',
borderColor: 'rgba(255, 208, 80, 1)',
borderWidth: 1,
hoverBackgroundColor: 'rgba(255, 208, 80, 1)'
},
{
label: 'InstructBLIP-T5-XXL',
data: [40.3, 32.3, 29.4, 33.8],
backgroundColor: 'rgba(110, 194, 134, 0.6)',
borderColor: 'rgba(110, 194, 134, 1)',
borderWidth: 1,
hoverBackgroundColor: 'rgba(110, 194, 134, 1)'
},
{
label: 'BLIP-2 FLAN-T5-XXL',
data: [41, 32.7, 28.5, 34],
backgroundColor: 'rgba(255, 153, 78, 0.6)',
borderColor: 'rgba(255, 153, 78, 1)',
borderWidth: 1,
hoverBackgroundColor: 'rgba(255, 153, 78, 1)'
},
{
label: 'GPT-4V',
data: [76.1, 55.6, 31.2, 55.7],
backgroundColor: 'rgba(117, 209, 215, 0.6)',
borderColor: 'rgba(117, 209, 215, 1)',
borderWidth: 1,
hoverBackgroundColor: 'rgba(117, 209, 215, 1)'
}]
},
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20,
font: {
size: 16
}
}
},
x: {
ticks: {
font: {
size: 16 // X-axis tick font size
}
}
}
},
plugins: {
legend: {
labels: {
font: {
size: 16 // legend label font size
}
}
},
tooltip: {
callbacks: {
label: function(context) {
return context.dataset.label + ': ' + context.parsed.y;
}
}
}
},
onHover: (event, chartElement) => {
event.native.target.style.cursor = chartElement[0] ? 'pointer' : 'default';
}
}
});
}
document.addEventListener('DOMContentLoaded', function() {
// Data for the "Diagrams" chart
const data_Diagrams = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [27.6, 30.1, 31.8, 30.0, 32.0, 46.8],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
// "data_Diagrams" chart
new Chart(document.getElementById('chart_Diagrams'), {
type: 'bar',
data: data_Diagrams,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_Tables" chart
const data_Tables = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [26.6, 29.0, 29.8, 27.8, 27.8, 61.8],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_Tables'), {
type: 'bar',
data: data_Tables,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_PlotsAndCharts " chart
const data_PlotsAndCharts = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [24.8, 31.8, 36.2, 30.4, 35.8, 55.6],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_PlotsAndCharts'), {
type: 'bar',
data: data_PlotsAndCharts ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_ChemicalStructures " chart
const data_ChemicalStructures = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [25.0, 27.2, 27.1, 26.7, 25.5, 50.6],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_ChemicalStructures'), {
type: 'bar',
data: data_ChemicalStructures ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_Photographs " chart
const data_Photographs = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [27.6, 40.5, 41.4, 44.4, 42.0, 64.2],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_Photographs'), {
type: 'bar',
data: data_Photographs ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_Paintings " chart
const data_Paintings = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [28.7, 57.2, 53.6, 56.3, 52.1, 75.9],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_Paintings'), {
type: 'bar',
data: data_Paintings ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_GeometricShapes " chart
const data_GeometricShapes = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [21.1, 25.3, 21.4, 25.6, 28.3, 40.2],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_GeometricShapes'), {
type: 'bar',
data: data_GeometricShapes ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_SheetMusic " chart
const data_SheetMusic = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [35.2, 33.4, 34.6, 35.8, 34.9, 38.8],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_SheetMusic'), {
type: 'bar',
data: data_SheetMusic ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_MedicalImages " chart
const data_MedicalImages = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [25.4, 29.8, 31.6, 36.4, 29.8, 59.6],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_MedicalImages'), {
type: 'bar',
data: data_MedicalImages ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_PathologicalImages " chart
const data_PathologicalImages = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [26.5, 27.7, 31.2, 35.2, 35.6, 63.6],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_PathologicalImages'), {
type: 'bar',
data: data_PathologicalImages ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_MicroscopicImages " chart
const data_MicroscopicImages = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [27.0, 37.6, 29.2, 36.3, 32.7, 58.0],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_MicroscopicImages'), {
type: 'bar',
data: data_MicroscopicImages ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_MRIsCTScansXrays " chart
const data_MRIsCTScansXrays = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [21.7, 36.9, 33.3, 39.4, 29.8, 50.0],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_MRIsCTScansXrays'), {
type: 'bar',
data: data_MRIsCTScansXrays ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_SketchesAndDrafts " chart
const data_SketchesAndDrafts = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [37.0, 32.1, 29.9, 38.0, 33.7, 55.4],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_SketchesAndDrafts'), {
type: 'bar',
data: data_SketchesAndDrafts ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_Maps " chart
const data_Maps = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [38.2, 36.5, 45.9, 47.6, 43.5, 61.8],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_Maps'), {
type: 'bar',
data: data_Maps ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_TechnicalBlueprints " chart
const data_TechnicalBlueprints = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [24.7, 25.9, 28.4, 25.3, 27.8, 38.9],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_TechnicalBlueprints'), {
type: 'bar',
data: data_TechnicalBlueprints ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_TreesAndGraphs " chart
const data_TreesAndGraphs = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [30.1, 28.1, 28.8, 28.8, 34.9, 50.0],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_TreesAndGraphs'), {
type: 'bar',
data: data_TreesAndGraphs ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_MathematicalNotations " chart
const data_MathematicalNotations = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [15.8, 27.1, 22.6, 21.8, 21.1, 45.9],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_MathematicalNotations'), {
type: 'bar',
data: data_MathematicalNotations ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_ComicsAndCartoons " chart
const data_ComicsAndCartoons = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [29.0, 51.9, 49.6, 54.2, 51.1, 68.7],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_ComicsAndCartoons'), {
type: 'bar',
data: data_ComicsAndCartoons ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_Sculpture " chart
const data_Sculpture = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [30.8, 46.2, 49.6, 51.3, 53.0, 76.1],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_Sculpture'), {
type: 'bar',
data: data_Sculpture ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_Portraits " chart
const data_Portraits = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [20.9, 52.7, 46.2, 54.9, 47.3, 70.3],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_Portraits'), {
type: 'bar',
data: data_Portraits ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_Screenshots " chart
const data_Screenshots = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [38.6, 35.7, 38.6, 34.3, 47.1, 65.7],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_Screenshots'), {
type: 'bar',
data: data_Screenshots ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_Other " chart
const data_Other = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [28.3, 38.3, 50.0, 51.7, 58.3, 68.3],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_Other'), {
type: 'bar',
data: data_Other ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_Poster " chart
const data_Poster = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [38.6, 50.9, 52.6, 61.4, 64.9, 80.7],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_Poster'), {
type: 'bar',
data: data_Poster ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_IconsAndSymbols " chart
const data_IconsAndSymbols = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [23.8, 66.7, 57.1, 59.5, 59.5, 78.6],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_IconsAndSymbols'), {
type: 'bar',
data: data_IconsAndSymbols ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_HistoricalTimelines " chart
const data_HistoricalTimelines = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [30.0, 36.7, 40.0, 43.3, 43.3, 63.3],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_HistoricalTimelines'), {
type: 'bar',
data: data_HistoricalTimelines ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_3DRenderings " chart
const data_3DRenderings = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [33.3, 28.6, 57.1, 38.1, 47.6, 47.6],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_3DRenderings'), {
type: 'bar',
data: data_3DRenderings ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_DNASequences " chart
const data_DNASequences = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [20.0, 45.0, 25.0, 25.0, 45.0, 55.0],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_DNASequences'), {
type: 'bar',
data: data_DNASequences ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_Landscapes " chart
const data_Landscapes = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [43.8, 43.8, 50.0, 31.2, 62.5, 68.8],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_Landscapes'), {
type: 'bar',
data: data_Landscapes ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_LogosAndBranding " chart
const data_LogosAndBranding = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [21.4, 57.1, 64.3, 35.7, 50.0, 85.7],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_LogosAndBranding'), {
type: 'bar',
data: data_LogosAndBranding ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
// "data_Advertisements " chart
const data_Advertisements = {
labels: ['Fuyu-8B', 'Qwen-VL-7B', 'InstructBLIP-T5-XXL', 'LLaVA-1.5-13B', 'BLIP-2 FLAN-T5-XXL', 'GPT-4V'],
datasets: [{
data: [30.0, 60.0, 50.0, 60.0, 70.0, 100.0],
backgroundColor: ['rgba(196, 123, 160, 0.6)', 'rgba(245, 123, 113, 0.6)', 'rgba(255, 208, 80, 0.6)', 'rgba(110, 194, 134, 0.6)', 'rgba(255, 153, 78, 0.6)', 'rgba(117, 209, 215, 0.6)'],
borderColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,0.4)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)'],
hoverBackgroundColor: ['rgba(196, 123, 160, 1)', 'rgba(245, 123, 113,1)', 'rgba(255, 208, 80, 1)', 'rgba(110, 194, 134, 1)', 'rgba(255, 153, 78, 1)', 'rgba(117, 209, 215, 1)']
}]
};
new Chart(document.getElementById('chart_Advertisements'), {
type: 'bar',
data: data_Advertisements ,
options: {
scales: {
y: {
beginAtZero: true,
min: 0,
max: 100,
ticks: {
stepSize: 20
}
},
x: {
display: false
}
},
plugins: {
legend: {
display: false
},
tooltip: {
}
}
}
});
});
</script>
<style>
.publication-links {
/* By default, lay the links out horizontally */
display: flex;
}
/* Stack the links vertically on screens narrower than 600px */
@media only screen and (max-width: 600px) {
.publication-links {
display: flex;
flex-direction: column; /* stack items vertically */
}
}
.hidden {
display: none;
}
.sortable:hover {
cursor: pointer;
}
.asc::after {
content: ' ↑';
}
.desc::after {
content: ' ↓';
}
#toggleButton {
background-color: #ffffff;
border: 1px solid #dddddd;
color: #555555;
padding: 10px 20px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 14px;
margin: 4px 2px;
cursor: pointer;
border-radius: 25px;
box-shadow: 0 4px 8px 0 rgba(0,0,0,0.2);
transition-duration: 0.4s;
}
#toggleButton:hover {
box-shadow: 0 12px 16px 0 rgba(0,0,0,0.24), 0 17px 50px 0 rgba(0,0,0,0.19); /* shadow effect on hover */
}
table {
border-collapse: collapse;
width: 100%;
margin-top: 5px;
border: 1px solid #ddd;
font-size: 14px;
border-left: none;
border-right: none;
overflow-x: auto; /* allow horizontal scrolling */
}
th, td {
text-align: left;
padding: 8px;
border-left: none;
border-right: none;
}
th {
background-color: #f2f2f2;
border-bottom: 2px solid #ddd;
border-left: none;
border-right: none;
}
td:hover {background-color: #ffffff;}
/* Remove the horizontal lines between grouped table rows */
tr:nth-child(1) td,
tr:nth-child(2) td,
tr:nth-child(3) td,
tr:nth-child(4) td {
border-bottom: none;
}
.dashed-border {
border-top: 2px dashed #ccc; /* adjust gap width and color */
/* border-image: linear-gradient(to right, #ccc 25%, transparent 25%) 1 1; */
}
.centered-span {
display: flex;
align-items: center;
justify-content: center; /* center horizontally */
height: 100%; /* make the span fill the cell height */
}
tr:nth-child(7) td,
tr:nth-child(8) td,
tr:nth-child(9) td,
tr:nth-child(10) td,
tr:nth-child(11) td,
tr:nth-child(12) td,
tr:nth-child(13) td,
tr:nth-child(14) td,
tr:nth-child(15) td,
tr:nth-child(16) td,
tr:nth-child(17) td,
tr:nth-child(18) td,
tr:nth-child(19) td,
tr:nth-child(20) td,
tr:nth-child(21) td,
tr:nth-child(22) td,
tr:nth-child(23) td,
tr:nth-child(24) td,
tr:nth-child(25) td,
tr:nth-child(26) td,
tr:nth-child(29) td,
tr:nth-child(30) td,
tr:nth-child(31) td,
tr:nth-child(32) td,
tr:nth-child(33) td,
tr:nth-child(36) td,
tr:nth-child(37) td,
tr:nth-child(38) td,
tr:nth-child(39) td,
tr:nth-child(40) td,
tr:nth-child(41) td,
tr:nth-child(42) td,
tr:nth-child(43) td,
tr:nth-child(44) td,
tr:nth-child(45) td,
tr:nth-child(46) td,
tr:nth-child(47) td,
tr:nth-child(48) td,
tr:nth-child(49) td,
tr:nth-child(50) td,
tr:nth-child(53) td,
tr:nth-child(54) td,
tr:nth-child(55) td {
border-bottom: none;
}
</style>
</body>
</html>