---
datasets:
  - anon8231489123/ShareGPT_Vicuna_unfiltered
  - ehartford/wizard_vicuna_70k_unfiltered
  - ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
  - QingyiSi/Alpaca-CoT
  - teknium/GPT4-LLM-Cleaned
  - teknium/GPTeacher-General-Instruct
  - metaeval/ScienceQA_text_only
  - hellaswag
  - tasksource/mmlu
  - openai/summarize_from_feedback
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

# Manticore 13B - Preview Release (previously Wizard Mega)

Manticore 13B is a Llama 13B model fine-tuned on the following datasets:

- anon8231489123/ShareGPT_Vicuna_unfiltered
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPT4-LLM-Cleaned
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- tasksource/mmlu
- openai/summarize_from_feedback

## Demo

Try out the model in HF Spaces. The demo uses a quantized GGML version of the model so it can return predictions quickly on smaller GPUs (and even CPUs). The quantized GGML version may have a small loss of model quality.
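
If you would rather run the full-precision weights locally instead of the GGML demo, the snippet below is a minimal sketch using the Transformers library. The repository id, dtype, and generation settings are illustrative assumptions, not part of the original card; adjust them to match where the weights are actually hosted and the hardware you have.

```python
# Minimal sketch (illustrative; repo id and settings are assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openaccess-ai-collective/manticore-13b"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 13B in fp16 needs roughly 26 GB of GPU memory
    device_map="auto",
)

# Prompt format used in the Examples section below: "### Instruction:" / "### Assistant:"
prompt = (
    "### Instruction: write Python code that returns the first n numbers "
    "of the Fibonacci sequence using memoization.\n\n### Assistant:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```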

## Release Notes

## Build

Manticore was built with Axolotl on 8x A100 80GB GPUs.

- Preview Release: 1 epoch, taking 8 hours.
- The configuration to duplicate this build is provided in this repo's /config folder.

## Bias, Risks, and Limitations

Manticore has not been aligned to human preferences with techniques like RLHF, nor deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Manticore was fine-tuned from the base model LLaMA 13B; please refer to that model card's Limitations section for relevant information.

## Examples

```
### Instruction: write Python code that returns the first n numbers of the Fibonacci sequence using memoization.

### Assistant:
```

```
### Instruction: Finish the joke, a mechanic and a car salesman walk into a bar...

### Assistant:
```
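
For reference, the first example asks for memoized Fibonacci code. A minimal sketch of what such an implementation can look like is shown below; it is purely illustrative and is not a captured model response.

```python
# Illustrative only -- not an actual Manticore output.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(i: int) -> int:
    """Return the i-th Fibonacci number, memoized via lru_cache."""
    if i < 2:
        return i
    return fib(i - 1) + fib(i - 2)

def first_n_fibonacci(n: int) -> list[int]:
    """Return the first n Fibonacci numbers."""
    return [fib(i) for i in range(n)]

print(first_n_fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```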