<!doctype html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width">
<title>CamelGPT 🐪</title>
<script src="https://unpkg.com/compromise"></script>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@unocss/reset/tailwind.min.css">
</head>
<body>
<script type="module">
import wind from 'https://esm.sh/@unocss/preset-wind@0.52.4'
window.__unocss = {
presets: [wind]
}
</script>
<script src="https://cdn.jsdelivr.net/npm/@unocss/runtime"></script>
<div class="p-16">
<h1 class="text-center font-bold text-4xl">CamelGPT 🐪</h1>
<h2 class="text-lg text-center">A proof of concept for high-performance generative text with a minimal footprint and compute budget.</h2>
<h3 class="text-gray-600 text-center">Brought to you by the GPT-Research team</h3>
<br>
<div class="max-w-xl rounded overflow-hidden shadow-lg border bg-white mx-auto">
<div class="px-6 py-4">
<div class="font-bold text-xl mb-2">CamelGPT Demo</div>
<p class="text-gray-700 text-base">
This demo showcases the power of CamelGPT, enabled by the Converters library. All computations and loading are handled client-side, reducing server load and maximizing efficiency.
</p>
</div>
<div class="px-6 py-4">
<div class="flex items-center">
<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-check" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
<path stroke="none" d="M0 0h24v24H0z" fill="none"></path>
<path d="M5 12l5 5l10 -10"></path>
</svg>
<span class="font-semibold text-gray-700">Efficient Model Retrieval</span>
</div>
<div class="flex items-center mt-2">
<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-check" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
<path stroke="none" d="M0 0h24v24H0z" fill="none"></path>
<path d="M5 12l5 5l10 -10"></path>
</svg>
<span class="font-semibold text-gray-700">Customization at Your Fingertips</span>
</div>
<div class="flex items-center mt-2">
<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-check" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
<path stroke="none" d="M0 0h24v24H0z" fill="none"></path>
<path d="M5 12l5 5l10 -10"></path>
</svg>
<span class="font-semibold text-gray-700">Wide Range of Model Support</span>
</div>
<div class="flex items-center mt-2">
<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-check" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
<path stroke="none" d="M0 0h24v24H0z" fill="none"></path>
<path d="M5 12l5 5l10 -10"></path>
</svg>
<span class="font-semibold text-gray-700">Optimized for Efficiency</span>
</div>
</div>
</div>
<br>
<h1 class="text-xl font-bold">Abstract</h1>
<p>CamelGPT generates text at blazing-fast speeds on low-end CPUs while outperforming prior models such as GPT-3 and BERT on instruction-based tasks by a significant margin. Unlike models that are limited to single-digit token rates despite expensive compute, CamelGPT achieves these speeds with orders of magnitude less compute.</p>
<br>
<br>
<br>
<h1 class="text-xl font-bold">Approach</h1>
<p>The CamelGPT architecture represents a significant breakthrough in the field, using both neural and linguistic features to generate coherent, grammatical text. The model combines recurrent and convolutional neural networks and uses a new technique called “Eager Precached Dynamic Pruning” to reduce its compute requirements. We evaluate the model’s performance on the question-answering benchmark Open LLM Leaderboard (our private version), but believe the results are indicative of its performance on other text-generation tasks.</p>
<br>
<br>
<br>
<h1 class="font-bold text-xl">Model Details</h1>
<p>In this paper, we trained an instruction-tuned model, <code>CamelGPT-mini</code>, as a proof of concept of CamelGPT's new architecture. <br> Model Date: May 2023 <br> Model Type: Language <br> Dataset: Private <br> Parameters: 50k <br> Training: Training was performed in three phases. The first phase was an initial training run with only the convolutional component. In the second phase, the same network was trained with a linear embedding as in BERT. In the third phase, the final architecture was assembled from the models produced in the first two phases. <br> Limitations: Our current model only supports English, and it may contain biases and produce inaccurate responses.</p>
<br>
<br>
<br>
<h1 class="font-bold text-xl">Demo</h1>
<p>⚡ CamelGPT is so performant it can run in the browser via Converters</p>
<form class="mt-3 px-4 py-2">
<div class="grid grid-cols-1 gap-2">
<label for="inputText" class="block mb-2 font-medium text-gray-800">Input Text</label>
<textarea class="p-2 rounded bg-slate-100" id="inputText" required name="text">Create a poem about camels and rainbows</textarea>
<p class="answer"></p>
<button disabled type="submit" class="bg-orange-600 rounded-md shadow p-4">Loading Model...</button>
</div>
</form>
<br>
<br>
<br>
<h1 class="font-bold text-xl">Inference</h1>
<p>CamelGPT model weights are designed to be loaded and run with our Converters package, as shown below:</p>
<code class="whitespace-pre-line">
import { TextGenerationPipeline } from 'https://esm.sh/gh/gpt-research/converters/src/converters.js';

const main = async () => {
  // Initialize the pipeline with the desired model
  const pipeline = await TextGenerationPipeline("@gpt-research/CamelGPT-mini");
  // Generate text using the pipeline (pipeline calls are asynchronous)
  const generatedText = await pipeline("Write a poem about camels.");
  // Log or use the generated text
  console.log(generatedText);
};
main();
</code>
</div>
<script type="module">
// Load the model, enable the form, and stream answers into the page.
import { TextGenerationPipeline } from 'https://esm.sh/gh/gpt-research/converters/src/converters.js';

const main = async () => {
  const pipeline = await TextGenerationPipeline("@gpt-research/CamelGPT-mini");

  // Model is ready: swap the button label and enable submission.
  const button = document.querySelector('button');
  button.innerText = 'Generate';
  button.disabled = false;

  document.querySelector('form').addEventListener('submit', async (e) => {
    e.preventDefault();
    document.querySelector('.answer').textContent = '';
    const text = document.querySelector('#inputText').value;
    const answer = await pipeline(text);

    // Reveal the answer word by word for a typewriter effect.
    answer.split(' ').forEach((word, i) => {
      setTimeout(() => {
        document.querySelector('.answer').textContent += `${word} `;
      }, i * 3);
    });
  });
};
main();
</script>
</body>
</html>