This demo showcases the power of CamelGPT, enabled by the Converters library. All computation and model loading are handled client-side, which reduces server load.
In this paper, we trained an instruction-tuned model, CamelGPT-mini, as a proof of concept of CamelGPT's new architecture.
Model Date: May 2023
Model Type: Language
Dataset: Private
Parameters: 50k
Training: Training was performed in three phases. In the first phase, an initial run trained only the convolutional component. In the second phase, the same initial network was retrained with a linear embedding, as in BERT. In the third phase, the final architecture was assembled from the models produced in the first two phases (a sketch of this schedule follows the model card).
Limitations: Our current model only supports English, and it may contain biases and produce inaccurate responses.
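The three-phase schedule is easiest to see as data flow. The sketch below is purely illustrative: every function name is a hypothetical placeholder (the paper does not publish training code), and the returned objects merely stand in for trained models.

// Hypothetical sketch of the three-phase training schedule described above.
// None of these functions exist in the Converters library; they are
// placeholders showing how each phase feeds into the next.

// Phase 1: train only the convolutional component.
const trainConvolutional = async (dataset) => ({ phase: 1, component: 'convolutional', dataset });

// Phase 2: retrain the same initial network with a BERT-style linear embedding.
const trainWithLinearEmbedding = async (initialNetwork, dataset) => ({
  phase: 2,
  component: 'linear-embedding',
  base: initialNetwork,
  dataset,
});

// Phase 3: assemble the final architecture from the phase-1 and phase-2 models.
const assembleFinalArchitecture = (phase1Model, phase2Model) => ({
  phase: 3,
  architecture: 'CamelGPT-mini',
  components: [phase1Model, phase2Model],
});

const trainCamelGPT = async (dataset) => {
  const convModel = await trainConvolutional(dataset);
  const embeddedModel = await trainWithLinearEmbedding(convModel, dataset);
  return assembleFinalArchitecture(convModel, embeddedModel);
};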
CamelGPT is performant enough to run directly in the browser via Converters:
import { TextGenerationPipeline } from 'https://esm.sh/gh/gpt-research/converters/src/converters.js';

const main = async () => {
  // Initialize the pipeline with the desired model
  const pipeline = await TextGenerationPipeline("@gpt-research/CamelGPT-mini");

  // Generate text using the pipeline; generation is asynchronous, so await the result
  const generatedText = await pipeline("Write a poem about camels.");

  // Log or use the generated text
  console.log(generatedText);
};

main();
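Because Converters is distributed as an ES module, the same demo can be dropped into a plain HTML page with no build step. The snippet below is a minimal sketch, assuming the esm.sh URL above resolves in the browser; it uses top-level await, which module scripts support in modern browsers.

<!-- Minimal browser embedding of the demo above.
     The module URL and calls are the same as in the script;
     everything runs client-side. -->
<script type="module">
  import { TextGenerationPipeline } from 'https://esm.sh/gh/gpt-research/converters/src/converters.js';

  // Load the model, generate text, and render it on the page.
  const pipeline = await TextGenerationPipeline("@gpt-research/CamelGPT-mini");
  document.body.textContent = await pipeline("Write a poem about camels.");
</script>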