| Column | Type | Range |
| --- | --- | --- |
| modelId | string | length 5–122 |
| author | string | length 2–42 |
| last_modified | unknown | — |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string (245 classes) | — |
| tags | sequence | length 1–4.05k |
| pipeline_tag | string (48 classes) | — |
| createdAt | unknown | — |
| card | string | length 1–901k |
superkaiba1/mnist_denoising_frequency_1topt5
superkaiba1
"2024-06-28T09:04:18Z"
0
0
diffusers
[ "diffusers", "safetensors", "region:us" ]
null
"2024-06-24T07:09:19Z"
Entry not found
morturr/flan-t5-base-headlines-text-classification-split-0-2024-06-24
morturr
"2024-06-24T07:17:23Z"
0
0
transformers
[ "transformers", "safetensors", "t5", "text-classification", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-classification
"2024-06-24T07:09:53Z"
Entry not found
haniehkH/quantize-Dorna-llm
haniehkH
"2024-06-24T07:10:58Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T07:10:58Z"
Entry not found
Bighost/Charpoet_8bit
Bighost
"2024-06-24T07:29:24Z"
0
0
transformers
[ "transformers", "safetensors", "qwen", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-06-24T07:12:44Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
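The calculator referenced above essentially multiplies hardware power draw, runtime, and the grid's carbon intensity. A hedged back-of-the-envelope sketch of that calculation (the function name and example numbers are illustrative assumptions, not values measured for this model):

```python
def co2_kg(power_watts, hours, grid_kg_per_kwh, pue=1.0):
    """Rough CO2eq estimate: energy used (kWh, optionally scaled by the
    datacenter's PUE) multiplied by the grid's carbon intensity."""
    kwh = power_watts / 1000.0 * hours * pue
    return kwh * grid_kg_per_kwh

# e.g. a 300 W accelerator running 100 hours on a 0.4 kgCO2eq/kWh grid:
print(co2_kg(300, 100, 0.4))  # 12.0 kg CO2eq
```

For a real estimate, fill in the hardware type, hours used, and compute region requested in the card above.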
Kowshikpeddireddy/whisper-medium-en
Kowshikpeddireddy
"2024-06-24T07:13:08Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T07:13:08Z"
Entry not found
casque/Swimming_Lesson_B_v1
casque
"2024-06-24T07:14:34Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-06-24T07:13:09Z"
--- license: creativeml-openrail-m ---
OpenT2S/StreamHiFiGAN
OpenT2S
"2024-06-24T08:45:59Z"
0
0
null
[ "onnx", "HiFiGAN", "Streaming Vocoder", "Stream HiFiGAN", "Stream Vocoder", "Realtime TTS", "TTS", "Text-to-speech", "text-to-speech", "en", "license:apache-2.0", "region:us" ]
text-to-speech
"2024-06-24T07:14:02Z"
---
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
tags:
- HiFiGAN
- Streaming Vocoder
- Stream HiFiGAN
- Stream Vocoder
- Realtime TTS
- TTS
- Text-to-speech
---

# StreamHiFiGAN

StreamHiFiGAN offers a HiFiGAN vocoder model optimized for streaming inference, providing real-time audio synthesis capabilities.

![Stream HiFiGAN](stream_hifigan.png)

## Features

StreamHiFiGAN provides several benefits for audio synthesis, optimizing both performance and efficiency:

1. **No Requirement for Causal Convolutions**: The released model is designed to support streaming inference without the need for retraining, facilitating seamless adaptation.
2. **Latency Reduction**: By leveraging streaming inference, it significantly minimizes delays, thereby boosting real-time audio processing capabilities.
3. **Computational Efficiency**: Incorporates caching strategies to eliminate unnecessary recalculations during the streaming process.
4. **Seamless Speech Clip Concatenation**: Enables direct, seamless stitching of speech clips without the need for overlapping, recalculating, or interpolating, ensuring lossless audio synthesis.

These models are adapted from the work available at [ParallelWaveGAN](https://github.com/kan-bayashi/ParallelWaveGAN). The converted models in ONNX format (conversion process not disclosed) are available under `dump/onnx/`.
The following models are included:

- `csmsc_hifigan.v1`
- `jsut_hifigan.v1`
- `libritts_hifigan.v1`
- `ljspeech_hifigan.v1`
- `vctk_hifigan.v1`

## Streaming Inference

Use the following script to perform streaming inference:

```bash
for tag in ljspeech_hifigan.v1 jsut_hifigan.v1 csmsc_hifigan.v1 vctk_hifigan.v1 libritts_hifigan.v1; do
  if [[ "$tag" == "ljspeech_hifigan.v1" ]]; then
    sr=22050
    cd=3258
  else
    sr=24000
    cd=5687
  fi
  python stream_infer.py --dumpdir dump/sample/norm/$tag \
    --outdir dump/stream_synthesis/$tag/ \
    --onnx dump/onnx/$tag/stream_hifigan.cd${cd}.onnx \
    --cumulative-delay ${cd} --chunk-size 32 --sampling-rate ${sr}
done
```

## Usage of `stream_infer.py`

For more details on the parameters of `stream_infer.py`, use:

```bash
python stream_infer.py -h
```

This will display information on all available arguments, including directories for input features and output wav files, the model file, and configuration details for streaming inference.

## Feature Extraction

Features under `dump/sample/norm/` are pre-extracted mel-spectrogram parameters.
For the feature extraction method, refer to the `ParallelWaveGAN` project:

```bash
# To view all available pretrained models:
python << EOF
from parallel_wavegan.utils import PRETRAINED_MODEL_LIST
print(PRETRAINED_MODEL_LIST.keys())
EOF

# To download pretrained models (note the quotes: ${tag} must expand to a Python string):
for tag in ljspeech_hifigan.v1 jsut_hifigan.v1 csmsc_hifigan.v1 vctk_hifigan.v1 libritts_hifigan.v1; do
python << EOF
from parallel_wavegan.utils import download_pretrained_model
download_pretrained_model("${tag}", "pretrained_model")
EOF
done

# Process for feature extraction -> normalization -> synthesis:
for tag in ljspeech_hifigan.v1 jsut_hifigan.v1 csmsc_hifigan.v1 vctk_hifigan.v1 libritts_hifigan.v1; do
  if [[ "$tag" == "ljspeech_hifigan.v1" ]]; then
    sub="22k"
  else
    sub="24k"
  fi
  parallel-wavegan-preprocess \
    --config pretrained_model/${tag}/config.yml \
    --rootdir sample/${sub} \
    --dumpdir dump/sample/raw/$tag
  parallel-wavegan-normalize \
    --config pretrained_model/${tag}/config.yml \
    --rootdir dump/sample/raw/$tag \
    --dumpdir dump/sample/norm/$tag \
    --stats pretrained_model/${tag}/stats.h5
  parallel-wavegan-decode \
    --checkpoint pretrained_model/${tag}/checkpoint-2500000steps.pkl \
    --dumpdir dump/sample/norm/$tag \
    --outdir dump/synthesis/$tag
done
```
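The caching idea behind features 3 and 4 can be illustrated with a toy 1-D convolution: carrying the last `kernel_size - 1` input samples between chunks lets each chunk be processed independently while the concatenated output matches offline synthesis exactly, with no overlap or recomputation. A minimal NumPy sketch of the principle (not the actual StreamHiFiGAN code):

```python
import numpy as np

def conv_valid(x, kernel):
    """Offline "valid" correlation over the whole signal at once."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

class StreamingConv:
    """Process a signal chunk by chunk, caching the last k-1 input samples
    so that concatenated chunk outputs equal the offline result exactly."""

    def __init__(self, kernel):
        self.kernel = np.asarray(kernel, dtype=float)
        self.cache = np.zeros(0)

    def process(self, chunk):
        # prepend the cached tail of earlier input, then convolve this chunk
        x = np.concatenate([self.cache, chunk])
        k = len(self.kernel)
        if k > 1:
            self.cache = x[-(k - 1):]
        if len(x) < k:
            return np.zeros(0)
        return conv_valid(x, self.kernel)

rng = np.random.default_rng(0)
signal = rng.standard_normal(64)
kernel = rng.standard_normal(5)

stream = StreamingConv(kernel)
chunked = np.concatenate([stream.process(c) for c in np.split(signal, 8)])
assert np.allclose(chunked, conv_valid(signal, kernel))  # seamless concatenation
```

A real vocoder applies the same trick per convolutional layer (plus bookkeeping for strides and upsampling, which is where the cumulative delay above comes from).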
PrunaAI/codellama-CodeLlama-7b-Instruct-hf-AWQ-4bit-smashed
PrunaAI
"2024-06-24T07:16:02Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:codellama/CodeLlama-7b-Instruct-hf", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T07:14:04Z"
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: codellama/CodeLlama-7b-Instruct-hf
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
  <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
    <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions, or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with AWQ.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** Where the compression method requires it, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo codellama/CodeLlama-7b-Instruct-hf are installed. In particular, check the Python, CUDA, and transformers versions.
1.
Make sure that you have installed the quantization-related packages.
```bash
pip install autoawq
```
2. Load and run the model.
```python
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM

model = AutoAWQForCausalLM.from_quantized("PrunaAI/codellama-CodeLlama-7b-Instruct-hf-AWQ-4bit-smashed",
                                          trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-Instruct-hf")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, codellama/CodeLlama-7b-Instruct-hf, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
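The naming rule described in the FAQ above ("turbo", "tiny", or "green" when measured speed, memory, or energy falls below 90% of the base model's) can be sketched as a small helper. The function name, metric keys, and numbers here are illustrative, not Pruna's actual tooling:

```python
def smashed_suffixes(base, smashed, threshold=0.9):
    """Return name suffixes per the FAQ rule: a suffix is earned when the
    smashed model's metric is below 90% of the base model's (lower is better).
    `base` and `smashed` map metric names to measured values."""
    rules = {"latency": "turbo", "memory": "tiny", "energy": "green"}
    return [suffix for metric, suffix in rules.items()
            if smashed[metric] < threshold * base[metric]]

base = {"latency": 100.0, "memory": 16.0, "energy": 50.0}
smashed = {"latency": 40.0, "memory": 15.5, "energy": 30.0}
print(smashed_suffixes(base, smashed))  # ['turbo', 'green']
```

Here memory earns no suffix because 15.5 is not below 90% of 16.0, while latency and energy clearly qualify.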
movefix-gia-bao-nhieu/hanoxol
movefix-gia-bao-nhieu
"2024-06-24T07:19:14Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T07:14:17Z"
[Review] Is Hanoxol any good? What effects does Hanoxol have? How much does Hanoxol cost? Where can genuine Hanoxol be bought? <h2>Is Hanoxol any good?</h2> <img class="size-full wp-image-3563 aligncenter" src="https://tepkosalkhmer.com/wp-content/uploads/2024/06/hanoxol-is-good.jpg" alt="hanoxol is good" width="595" height="559" /> <em><strong>Hanoxol</strong></em> is a product marketed as an effective treatment support for people with internal and external hemorrhoids, helping to reduce the pain and burning they cause. <em><strong>Hanoxol</strong></em> is made from natural herbal ingredients, has no side effects, and is safe for users. The product is licensed. <img class="size-full wp-image-3564 aligncenter" src="https://tepkosalkhmer.com/wp-content/uploads/2024/06/hanoxol-liscensed.jpg" alt="the product is licensed" width="764" height="451" /> <h2>What are the ingredients of Hanoxol?</h2> <img class="size-full wp-image-3562 aligncenter" src="https://tepkosalkhmer.com/wp-content/uploads/2024/06/hanoxol-ingredient.jpg" alt="what are the ingredients of Hanoxol" width="415" height="419" /> Some of the main ingredients in <em><strong>Hanoxol</strong></em> capsules: <strong><a href="https://tepkosalkhmer.com/go/hanoxol">Order Hanoxol on the official website</a></strong>
<ul>
<li aria-level="1"><b>Citrus Bioflavonoid (citrus extract):</b> contains hesperidin, a polyphenol of the bioflavonoid group. Hesperidin plays an important role in developing and strengthening the veins, in particular reducing venous permeability and fragility, especially in conditions associated with increased vascular permeability such as hemorrhoids, scurvy, festering sores, and abrasions. A lack of hesperidin is associated with capillary abnormalities, fatigue, and leg cramps at night.</li>
</ul>
<ul>
<li aria-level="1"><b>Mangosteen extract (mangosteen peel extract):</b> its key compound is tannin, which helps relieve flatulence, ease hemorrhoid symptoms, and stop bleeding. It has a local numbing effect, so it relieves pain and inflammation well, making it suitable for people whose hemorrhoids stem from constipation.</li>
</ul>
<ul>
<li aria-level="1"><b>Triphala Extract:</b> a blend of three herbs, namely Emblic Extract (Indian gooseberry), Terminalia Extract (chebulic myrobalan), and Beleric Extract (belleric myrobalan). It acts as a laxative, cleanses the intestines, stimulates blood circulation, helps regulate blood pressure and blood sugar, is an antioxidant, removes toxins, protects the body's cells from degeneration, slows aging, stimulates bodily functions and immunity, and helps balance the body.</li>
<li aria-level="1"><b>Broccoli extract powder:</b> contains the antioxidant sulforaphane, which is said to help fight and eliminate cancer cells, protect against heart disease, protect the nervous system, help with diabetes, and prevent anemia.</li>
</ul>
<strong><a href="https://tepkosalkhmer.com/go/hanoxol">Order Hanoxol on the official website</a></strong>
<h2>What are the benefits of Hanoxol?</h2> <img class="size-full wp-image-3567 aligncenter" src="https://tepkosalkhmer.com/wp-content/uploads/2024/06/hanoxol-uses.jpg" alt="what are the benefits of Hanoxol" width="410" height="428" />
<ul>
<li aria-level="1">Helps relieve hemorrhoid symptoms</li>
<li aria-level="1">Helps relieve flatulence and balance the body</li>
<li aria-level="1">Helps stop bleeding</li>
<li aria-level="1">Helps strengthen the veins and capillaries</li>
<li aria-level="1">Works well for pain and swelling caused by phlebitis</li>
<li aria-level="1">Relieves pain and inflammation</li>
<li aria-level="1">Acts as a laxative and cleanses the intestines</li>
<li aria-level="1">Said to help fight and eliminate cancer cells</li>
</ul>
<h2>How to use Hanoxol effectively</h2> <img class="size-full wp-image-3566 aligncenter" src="https://tepkosalkhmer.com/wp-content/uploads/2024/06/hanoxol-review.jpg" alt="hanoxol review" width="515" height="602" /> Directions: take 1 capsule per day. Pack size: 20 capsules. <h2>How much does Hanoxol cost?</h2> <a href="https://tepkosalkhmer.com/go/hanoxol" target="_blank" rel="noopener"><img class="aligncenter wp-image-3561 size-full" src="https://tepkosalkhmer.com/wp-content/uploads/2024/06/buy-hanoxol.jpg" alt="buy hanoxol" width="423" height="562" /></a> <em><strong>Hanoxol</strong></em> is listed on the official website at 1,980 baht; however, the manufacturer is offering a <strong>50%</strong> discount, bringing the selling price to only <strong>990 baht</strong>. The offer may end at any time, so order quickly. <strong><a href="https://tepkosalkhmer.com/go/hanoxol">Order Hanoxol on the official website</a></strong> <h2>Where can genuine Hanoxol be bought?</h2> <img class="size-full wp-image-3565 aligncenter" src="https://tepkosalkhmer.com/wp-content/uploads/2024/06/hanoxol-price.jpg" alt="hanoxol price" width="410" height="436" /> Click the link to go to the purchase page for the genuine product. Ordering there ensures you receive the genuine product at the best price. <strong><a href="https://tepkosalkhmer.com/go/hanoxol">Order Hanoxol on the official website</a></strong>
Alex-Libryo/colbertv2.0
Alex-Libryo
"2024-06-24T07:54:18Z"
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
"2024-06-24T07:14:19Z"
--- license: apache-2.0 ---
Noodle-bg/code-llama-7b-Text-classification
Noodle-bg
"2024-06-26T21:34:34Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:codellama/CodeLlama-7b-hf", "license:llama2", "region:us" ]
null
"2024-06-24T07:14:25Z"
--- license: llama2 library_name: peft tags: - trl - sft - generated_from_trainer base_model: codellama/CodeLlama-7b-hf datasets: - generator model-index: - name: code-llama-7b-Text-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # code-llama-7b-Text-classification This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
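The hyperparameters above relate as total_train_batch_size = train_batch_size × gradient_accumulation_steps (2 × 3 = 6). The equivalence that makes gradient accumulation work, summing suitably weighted micro-batch gradients before a single optimizer step, can be checked on a toy mean-squared-error loss (a stand-in for illustration, not the Trainer's internals):

```python
import numpy as np

def grad_mse(w, X, y):
    # gradient of 0.5 * mean((X @ w - y)**2) with respect to w
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(42)
X = rng.standard_normal((6, 4))   # total batch of 6, as in the card
y = rng.standard_normal(6)
w = rng.standard_normal(4)

full = grad_mse(w, X, y)          # one step on the full batch of 6

# accumulate over 3 micro-batches of size 2, weighting each by its share
acc = np.zeros(4)
for Xi, yi in zip(np.split(X, 3), np.split(y, 3)):
    acc += grad_mse(w, Xi, yi) * (len(yi) / len(y))

assert np.allclose(acc, full)     # identical update direction
```

This is why the effective batch size is the product of the two settings: three accumulated micro-gradients of size 2 yield the same update as one batch of 6.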
PrunaAI/tokyotech-llm-Swallow-7b-plus-hf-AWQ-4bit-smashed
PrunaAI
"2024-06-24T07:16:29Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:tokyotech-llm/Swallow-7b-plus-hf", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T07:14:34Z"
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: tokyotech-llm/Swallow-7b-plus-hf
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
  <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
    <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions, or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with AWQ.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** Where the compression method requires it, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo tokyotech-llm/Swallow-7b-plus-hf are installed. In particular, check the Python, CUDA, and transformers versions.
1.
Make sure that you have installed the quantization-related packages.
```bash
pip install autoawq
```
2. Load and run the model.
```python
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM

model = AutoAWQForCausalLM.from_quantized("PrunaAI/tokyotech-llm-Swallow-7b-plus-hf-AWQ-4bit-smashed",
                                          trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("tokyotech-llm/Swallow-7b-plus-hf")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, tokyotech-llm/Swallow-7b-plus-hf, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
superkaiba1/cifar10_denoising_frequency_pt5to1
superkaiba1
"2024-06-28T20:27:30Z"
0
0
diffusers
[ "diffusers", "safetensors", "region:us" ]
null
"2024-06-24T07:15:21Z"
Entry not found
triplee/new_changed_unchanged_small_model
triplee
"2024-06-24T08:08:27Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-24T07:16:13Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
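This repository is tagged as an 8-bit bitsandbytes quantization of its base model. The absmax round-trip behind that kind of 8-bit weight storage can be sketched without any libraries (a generic illustration of the idea, not bitsandbytes' actual LLM.int8() implementation):

```python
# Minimal sketch of absmax 8-bit quantization: scale the tensor so its
# largest magnitude maps to 127, round to integers, and keep the scale
# for dequantization. Generic demo only, not bitsandbytes' kernel.

def quantize_int8(weights):
    """Map floats to int8 codes in [-127, 127] with a per-tensor absmax scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize_int8(codes, scale):
    return [c * scale for c in codes]

weights = [0.1, -0.5, 0.25, 1.27]
codes, scale = quantize_int8(weights)
restored = dequantize_int8(codes, scale)
# Every restored value lies within one quantization step of the original.
assert all(-127 <= c <= 127 for c in codes)
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

Real 8-bit loaders refine this with per-row or mixed-precision handling of outlier columns; the storage principle is the same.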
PrunaAI/llm-jp-llm-jp-13b-v2.0-AWQ-4bit-smashed
PrunaAI
"2024-06-24T07:25:37Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:llm-jp/llm-jp-13b-v2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T07:21:43Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: llm-jp/llm-jp-13b-v2.0 metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. 
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo llm-jp/llm-jp-13b-v2.0 are installed. In particular, check the python, cuda, and transformers versions. 
1. Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/llm-jp-llm-jp-13b-v2.0-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-v2.0") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model llm-jp/llm-jp-13b-v2.0, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
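The AWQ compression used by this card stores weights in 4 bits. A minimal, library-free sketch of generic group-wise round-to-nearest 4-bit quantization (AWQ's activation-aware scaling is deliberately omitted, so this shows only the 4-bit storage idea, not the algorithm itself):

```python
# Group-wise 4-bit quantization: one (scale, zero-point) pair per group,
# codes in 0..15. Generic round-to-nearest demo, not AWQ's actual method.

def quantize_4bit(weights, group_size=4):
    codes, params = [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / 15 or 1.0   # avoid a zero scale for constant groups
        codes.append([round((w - lo) / scale) for w in group])
        params.append((scale, lo))
    return codes, params

def dequantize_4bit(codes, params):
    out = []
    for group, (scale, lo) in zip(codes, params):
        out.extend(c * scale + lo for c in group)
    return out

w = [0.02, -0.11, 0.34, 0.07, 1.5, 1.2, 0.9, 1.1]
codes, params = quantize_4bit(w)
restored = dequantize_4bit(codes, params)
assert all(0 <= c <= 15 for group in codes for c in group)
assert all(abs(a - b) <= 0.05 for a, b in zip(w, restored))
```

Real 4-bit kernels pack two codes per byte and use group sizes such as 128; the round-trip error bound (about half a scale step per weight) follows the same logic.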
Jhonfx123/Fcjhon
Jhonfx123
"2024-06-24T07:24:21Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-24T07:22:21Z"
--- license: apache-2.0 ---
bezzam/diffusercam-mirflickr-mmcn-unet4M
bezzam
"2024-06-24T07:28:13Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-24T07:22:48Z"
--- license: mit ---
bezzam/diffusercam-mirflickr-unrolled-admm5-unet8M
bezzam
"2024-06-24T07:27:39Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-24T07:24:02Z"
--- license: mit ---
bezzam/diffusercam-mirflickr-unet2M-mmcn-unet2M
bezzam
"2024-06-24T07:27:07Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-24T07:24:49Z"
--- license: mit ---
PrunaAI/Undi95-Llama-3-LewdPlay-8B-AWQ-4bit-smashed
PrunaAI
"2024-06-24T07:29:41Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:Undi95/Llama-3-LewdPlay-8B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T07:26:59Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: Undi95/Llama-3-LewdPlay-8B metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. 
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo Undi95/Llama-3-LewdPlay-8B are installed. In particular, check the python, cuda, and transformers versions. 
1. Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/Undi95-Llama-3-LewdPlay-8B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("Undi95/Llama-3-LewdPlay-8B") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model Undi95/Llama-3-LewdPlay-8B, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
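The FAQ's naming rule ("turbo", "tiny", or "green" when a measured metric falls below 90% of the base model's) can be sketched as a toy helper; the function name and ratio inputs are hypothetical, not part of Pruna's actual tooling:

```python
# Toy illustration of the stated naming convention. Each ratio is the
# smashed model's measured metric divided by the base model's metric.

def pruna_suffixes(latency_ratio, memory_ratio, energy_ratio):
    """Return the name suffixes earned by dropping below 90% of the base."""
    rules = [("turbo", latency_ratio), ("tiny", memory_ratio), ("green", energy_ratio)]
    return [name for name, ratio in rules if ratio < 0.9]

# Latency dropped to 60% of the base model; memory and energy unchanged:
assert pruna_suffixes(0.6, 1.0, 1.0) == ["turbo"]
```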
PrunaAI/ajibawa-2023-WikiHow-Mistral-Instruct-7B-AWQ-4bit-smashed
PrunaAI
"2024-06-24T07:29:09Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "pruna-ai", "conversational", "base_model:ajibawa-2023/WikiHow-Mistral-Instruct-7B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T07:27:12Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: ajibawa-2023/WikiHow-Mistral-Instruct-7B metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. 
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo ajibawa-2023/WikiHow-Mistral-Instruct-7B are installed. In particular, check the python, cuda, and transformers versions. 
1. Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/ajibawa-2023-WikiHow-Mistral-Instruct-7B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("ajibawa-2023/WikiHow-Mistral-Instruct-7B") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model ajibawa-2023/WikiHow-Mistral-Instruct-7B, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
superkaiba1/cifar10_denoising_frequency_1topt5
superkaiba1
"2024-06-28T20:47:14Z"
0
0
diffusers
[ "diffusers", "safetensors", "region:us" ]
null
"2024-06-24T07:27:25Z"
Entry not found
superkaiba1/CelebA_denoising_frequency_pt5to1
superkaiba1
"2024-06-27T22:51:26Z"
0
0
diffusers
[ "diffusers", "safetensors", "region:us" ]
null
"2024-06-24T07:27:25Z"
Entry not found
Siddhanna/lora-bloomz
Siddhanna
"2024-06-24T07:28:51Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-24T07:28:47Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
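The Machine Learning Impact calculator cited in the template estimates emissions as the energy drawn (power × hours, adjusted for datacenter overhead) times the grid's carbon intensity. A simplified sketch of that arithmetic (the example coefficients are illustrative placeholders, not the calculator's published values):

```python
# Rough CO2eq estimate in the spirit of the ML CO2 Impact calculator:
# energy in kWh multiplied by the regional grid's carbon intensity.

def estimate_co2_kg(power_watts, hours, kg_co2_per_kwh, pue=1.0):
    """PUE (power usage effectiveness) scales energy up for datacenter overhead."""
    energy_kwh = power_watts / 1000.0 * hours * pue
    return energy_kwh * kg_co2_per_kwh

# One 300 W GPU for 10 hours on a 0.4 kg CO2eq/kWh grid -> ~1.2 kg CO2eq.
co2 = estimate_co2_kg(300, 10, 0.4)
assert abs(co2 - 1.2) < 1e-6
```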
Mineblox/sword
Mineblox
"2024-06-24T07:28:54Z"
0
0
null
[ "license:unlicense", "region:us" ]
null
"2024-06-24T07:28:54Z"
--- license: unlicense ---
PrunaAI/failspy-Llama-3-8B-Instruct-MopeyMule-AWQ-4bit-smashed
PrunaAI
"2024-06-24T07:32:00Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:failspy/Llama-3-8B-Instruct-MopeyMule", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T07:29:18Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: failspy/Llama-3-8B-Instruct-MopeyMule metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. 
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo failspy/Llama-3-8B-Instruct-MopeyMule are installed. In particular, check the python, cuda, and transformers versions. 
1. Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/failspy-Llama-3-8B-Instruct-MopeyMule-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("failspy/Llama-3-8B-Instruct-MopeyMule") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model failspy/Llama-3-8B-Instruct-MopeyMule, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
AN1609/autotrain-pbcst-o4o6d
AN1609
"2024-06-24T07:30:55Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain", "base_model:google-bert/bert-base-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-24T07:30:29Z"
--- tags: - autotrain - text-classification base_model: google-bert/bert-base-uncased widget: - text: "I love AutoTrain" --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.6690925359725952 f1: 0.0 precision: 0.0 recall: 0.0 auc: 0.0 accuracy: 0.5
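The validation metrics in this card (accuracy 0.5 with precision, recall, and F1 all 0.0) are exactly what a binary classifier produces when it predicts only the negative class on a balanced split; a quick check of that arithmetic:

```python
# Binary classification metrics from scratch, using the convention that an
# undefined precision/recall/F1 (zero denominator) is reported as 0.0.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    return precision, recall, f1, accuracy

# Balanced labels, model always predicts class 0:
assert binary_metrics([0, 1, 0, 1], [0, 0, 0, 0]) == (0.0, 0.0, 0.0, 0.5)
```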
PrunaAI/tokyotech-llm-Swallow-7b-hf-AWQ-4bit-smashed
PrunaAI
"2024-06-24T07:34:19Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:tokyotech-llm/Swallow-7b-hf", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T07:32:19Z"
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: tokyotech-llm/Swallow-7b-hf
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
  <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
    <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with AWQ.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the measurements directly under your use-case conditions to find out whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** Where required by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model's measured inference latency, inference memory, or inference energy consumption is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results marked "first" are obtained on the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping the measurement as soon as the model output can be used by the CPU. We provide both metrics since either can be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements of the original repo tokyotech-llm/Swallow-7b-hf are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
    ```bash
    pip install autoawq
    ```
2. Load & run the model.
    ```python
    from transformers import AutoTokenizer
    from awq import AutoAWQForCausalLM

    model = AutoAWQForCausalLM.from_quantized("PrunaAI/tokyotech-llm-Swallow-7b-hf-AWQ-4bit-smashed",
                                              trust_remote_code=True, device_map='auto')
    tokenizer = AutoTokenizer.from_pretrained("tokyotech-llm/Swallow-7b-hf")

    input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
    outputs = model.generate(input_ids, max_new_tokens=216)
    tokenizer.decode(outputs[0])
    ```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model tokyotech-llm/Swallow-7b-hf, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
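The naming convention described in the card's FAQ (append "turbo", "tiny", or "green" when a metric falls below 90% of the base model's) can be sketched as a small helper. This only illustrates the stated rule; the function name and the ratio-based interface are assumptions, not part of any Pruna API.

```python
def smashed_model_suffixes(latency_ratio: float,
                           memory_ratio: float,
                           energy_ratio: float) -> list:
    """Suffixes implied by the Pruna naming convention.

    Each argument is a smashed-to-base ratio (inference latency,
    inference memory, inference energy); below 0.9 earns a suffix.
    """
    suffixes = []
    if latency_ratio < 0.9:  # runs in under 90% of the base model's time
        suffixes.append("turbo")
    if memory_ratio < 0.9:   # uses under 90% of the base model's memory
        suffixes.append("tiny")
    if energy_ratio < 0.9:   # uses under 90% of the base model's energy
        suffixes.append("green")
    return suffixes

print(smashed_model_suffixes(0.8, 0.95, 0.85))  # ['turbo', 'green']
```

Under this reading, a smashed model that halves latency and energy but not memory would be published with both "turbo" and "green" appended.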
superkaiba1/CelebA_denoising_frequency_1topt5
superkaiba1
"2024-06-27T22:51:28Z"
0
0
diffusers
[ "diffusers", "safetensors", "region:us" ]
null
"2024-06-24T07:33:22Z"
Entry not found
balramsingh99/gpt2-reuters-tokenizer
balramsingh99
"2024-06-24T07:34:14Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-24T07:34:11Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PrunaAI/sambanovasystems-SambaLingo-Hungarian-Base-AWQ-4bit-smashed
PrunaAI
"2024-06-24T07:37:06Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:sambanovasystems/SambaLingo-Hungarian-Base", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T07:34:51Z"
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: sambanovasystems/SambaLingo-Hungarian-Base
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
  <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
    <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with AWQ.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the measurements directly under your use-case conditions to find out whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** Where required by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model's measured inference latency, inference memory, or inference energy consumption is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results marked "first" are obtained on the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping the measurement as soon as the model output can be used by the CPU. We provide both metrics since either can be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements of the original repo sambanovasystems/SambaLingo-Hungarian-Base are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
    ```bash
    pip install autoawq
    ```
2. Load & run the model.
    ```python
    from transformers import AutoTokenizer
    from awq import AutoAWQForCausalLM

    model = AutoAWQForCausalLM.from_quantized("PrunaAI/sambanovasystems-SambaLingo-Hungarian-Base-AWQ-4bit-smashed",
                                              trust_remote_code=True, device_map='auto')
    tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Hungarian-Base")

    input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
    outputs = model.generate(input_ids, max_new_tokens=216)
    tokenizer.decode(outputs[0])
    ```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model sambanovasystems/SambaLingo-Hungarian-Base, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
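The "Sync"/"Async" distinction in the card's FAQ can be illustrated without a GPU: a CUDA kernel launch returns immediately while the work continues in the background, so a synced measurement waits for completion and an async one does not. The sketch below simulates this with a worker thread; `fake_kernel` and the 50 ms sleep are stand-ins, not Pruna's benchmark code.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_kernel():
    # Stand-in for GPU work that keeps running after the launch call returns.
    time.sleep(0.05)
    return "logits"

def measure_latency(sync: bool) -> float:
    """Time one 'kernel launch', optionally waiting for it to finish."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        start = time.perf_counter()
        handle = pool.submit(fake_kernel)  # returns at once, like a CUDA launch
        if sync:
            handle.result()                # analogue of torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
        handle.result()                    # drain the worker before pool shutdown
    return elapsed

print(f"sync  ~{measure_latency(True):.3f}s")   # includes the kernel's runtime
print(f"async ~{measure_latency(False):.3f}s")  # only the launch overhead
```

The async number looks better but only tells you when the output handle becomes available; either can be the relevant figure, which is why the cards report both.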
F-Fries/first
F-Fries
"2024-06-24T07:36:17Z"
0
0
null
[ "license:afl-3.0", "region:us" ]
null
"2024-06-24T07:36:17Z"
--- license: afl-3.0 ---
xxlrd/goyoonjung2
xxlrd
"2024-06-24T07:41:56Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T07:37:28Z"
Entry not found
circulus/on-yolov10n-ov
circulus
"2024-06-24T07:38:24Z"
0
0
null
[ "license:gpl-3.0", "region:us" ]
null
"2024-06-24T07:38:06Z"
--- license: gpl-3.0 ---
Dichitha/mistral7b
Dichitha
"2024-06-24T07:39:27Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T07:39:24Z"
Entry not found
PrunaAI/Rakuten-RakutenAI-7B-chat-AWQ-4bit-smashed
PrunaAI
"2024-06-24T07:41:57Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "pruna-ai", "base_model:Rakuten/RakutenAI-7B-chat", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T07:39:46Z"
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: Rakuten/RakutenAI-7B-chat
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
  <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
    <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with AWQ.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the measurements directly under your use-case conditions to find out whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** Where required by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model's measured inference latency, inference memory, or inference energy consumption is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results marked "first" are obtained on the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping the measurement as soon as the model output can be used by the CPU. We provide both metrics since either can be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements of the original repo Rakuten/RakutenAI-7B-chat are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
    ```bash
    pip install autoawq
    ```
2. Load & run the model.
    ```python
    from transformers import AutoTokenizer
    from awq import AutoAWQForCausalLM

    model = AutoAWQForCausalLM.from_quantized("PrunaAI/Rakuten-RakutenAI-7B-chat-AWQ-4bit-smashed",
                                              trust_remote_code=True, device_map='auto')
    tokenizer = AutoTokenizer.from_pretrained("Rakuten/RakutenAI-7B-chat")

    input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
    outputs = model.generate(input_ids, max_new_tokens=216)
    tokenizer.decode(outputs[0])
    ```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model Rakuten/RakutenAI-7B-chat, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
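One of the metrics listed in the card's front matter, `memory_disk`, is essentially the checkpoint's on-disk footprint. A minimal sketch of how it could be computed for a safetensors checkpoint directory follows; the function name, the shard-extension filter, and the MiB unit are assumptions, not Pruna's implementation.

```python
import os

def disk_footprint_mib(checkpoint_dir: str) -> float:
    """Sum the sizes of all weight shards in a checkpoint directory, in MiB."""
    total = 0
    for name in os.listdir(checkpoint_dir):
        # Count only weight files, not tokenizer/config JSON.
        if name.endswith((".safetensors", ".bin")):
            total += os.path.getsize(os.path.join(checkpoint_dir, name))
    return total / (1024 ** 2)
```

Comparing this number for the smashed repo against the base repo gives the disk-memory saving directly.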
ShapeKapseln33/NexaSlim355
ShapeKapseln33
"2024-06-24T07:46:55Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T07:41:01Z"
NexaSlim Norway Experiences: In a world where vitality and performance are often synonymous with success, maintaining peak physical form is important. For men, this often extends beyond mere fitness into the realms of vitality, virility, and overall well-being. But as age catches up with us, various factors can hamper our ability to maintain optimal performance and satisfaction in these areas.

**[Click here to buy now from the official NexaSlim website](https://slim-gummies-deutschland.de/nexaslim-no)**

NexaSlim Ketosis: Everyone wants to stay healthy and fit so that they do not suffer from any illness. But not everyone manages to stay perfectly fit and healthy. A third of the world's population already suffers from obesity, and it is now a global problem. Because of poor lifestyles, the problem is growing exponentially. People spend much of their time sitting in the office or simply using their phone at home; with the advanced technology we have, people no longer engage much in physical activity. If you want to end the overweight problem in your life permanently, we have a solution for you. You may already know of several weight-loss techniques; here we will tell you about a unique keto product that can easily solve weight-loss problems.

NexaSlim Ketosis Pills is a revolutionary ketogenic supplement that can help you burn fat to generate energy. It is composed of essential weight-loss ingredients that can trim the body without side effects. NexaSlim Ketosis Pill is one of the best-known ketogenic products on the market today. The product is based on a keto diet plan in which you begin to burn body fat as fuel. Following a ketogenic diet plan over a long period without external supplementation can be difficult. It is the only product that can deliver remarkably good results in a very short time. It burns fat at a very good rate while keeping the body in ketosis for a long time. It will raise vitality levels and metabolism. The product can also improve overall bodily health with the help of the extra vitamins and minerals added to it. It regulates blood sugar and blood pressure levels, and it also looks after your mood. It contains naturally extracted ingredients that can deliver the desired benefits without any extra effort. Read the full review and decide whether or not you want to take it.

##What is NexaSlim Ketosis?

Before consuming a product, it is important to know what it is. So what exactly is NexaSlim Ketosis? NexaSlim Ketosis is a dietary supplement that can potentially help customers with weight loss and fat reduction. The supplement contains several elements and ingredients that can help burn body fat. NexaSlim Ketosis diet pills are made from natural, pure ingredients. It also contains a certain amount of caffeine. All of these components can help consumers burn fat. The brand was established in 2005 and has built a recognized position. They are proud of this product, as it provides an efficient and effective weight-loss solution. NexaSlim Ketosis has satisfied most customers with its high-quality formulas and valuable results.

**[Click here to buy now from the official NexaSlim website](https://slim-gummies-deutschland.de/nexaslim-no)**

NexaSlim Ketosis contains a compound called a-Lacys Reset, which can effectively help consumers with fat burning and weight loss. A great deal of research goes into the making of this product, which speaks to its overall credibility. It has served thousands of customers over the years, and we can observe a pattern of significant customer satisfaction.

##How does NexaSlim Ketosis work?

NexaSlim Ketosis works effectively because of its remarkable ingredients. Doctors have spent years of research creating this effective formula, which centers on the keto-diet process. The main purpose of this weight-loss product is to put the body into keto mode, where fat is easily lost. The product makes the activation process easy for you. It contains high-quality BHB ketones, which are responsible for triggering the ketone production in the body that is needed to activate the ketosis process. When you start consuming fewer carbohydrates and more fat in your daily diet, your body enters ketosis. The product helps here by suppressing the appetite.

Carbohydrates are the body's primary fuel for generating energy because they are present in large quantities. The carbohydrates not used in this process are stored in the body as fat. In ketosis, the body uses only fat to produce energy. Fat will be present in greater quantities than carbohydrates, so fat is what the body burns. In this way your overall stamina improves, and you can also shake off lazy habits. You can remain in the ketosis stage for as long as you like. NexaSlim Ketosis has special ingredients that help prevent fat-cell production in the body. It will also improve overall health because it contains all the essential vitamins and minerals. Your digestive system will improve, and all harmful toxins will be removed from the body.

##NexaSlim Ketosis ingredients and their proven weight-loss benefits

NexaSlim Ketosis diet pills are formulated with six alpine ingredients and other plant-based compounds. These ingredients not only help raise body temperature but also support metabolic health, thermogenesis, appetite suppression, blood sugar levels, and overall well-being.

Apple cider vinegar: With 700 mg of apple cider vinegar per serving, apple cider vinegar is a key ingredient in NexaSlim Ketosis. Several studies have linked apple cider vinegar with weight loss. Some studies show that apple cider vinegar helps with weight loss by suppressing appetite. For example, taking a shot of apple cider vinegar 30 minutes before a meal encourages you to eat less. Various studies have found that other active components of apple cider vinegar can help you lose weight by activating fat burning, forcing the body to release its fat stores, and in other ways.

Ginger: Each serving of NexaSlim Ketosis contains 100 mg of ginger, making it the second most potent ingredient in the supplement. Ginger has been shown to support physical and cognitive stress responses and has been used in traditional Asian medicine for centuries (including standard Chinese and Korean medicine). It is hard for the body to lose weight while you are stressed; your body clings to fat when you are worried. NexaSlim Ketosis contains ginger, which can help you lose weight by strengthening the stress response.

**[Click here to buy now from the official NexaSlim website](https://slim-gummies-deutschland.de/nexaslim-no)**
bezzam/digicam-mirflickr-single-25k-unet2M-mmcn-unet2M-wave
bezzam
"2024-07-02T12:20:17Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-06-24T07:41:52Z"
--- license: mit ---
xxlrd/detailtweaker
xxlrd
"2024-06-25T03:02:26Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T07:43:25Z"
https://civitai.com/models/58390/detail-tweaker-lora-lora
kohankhaki/Llama-3-8B_SST5-Grouped_IDX-0
kohankhaki
"2024-06-24T08:06:40Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-classification
"2024-06-24T07:43:34Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LarryAIDraw/hosekiLustrousmixPony_v10
LarryAIDraw
"2024-06-24T08:09:15Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-06-24T07:44:04Z"
--- license: creativeml-openrail-m --- https://civitai.com/models/534425/hoseki-lustrousmix-pony-xl
PrunaAI/medalpaca-medalpaca-7b-AWQ-4bit-smashed
PrunaAI
"2024-06-24T07:47:11Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:medalpaca/medalpaca-7b", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T07:45:16Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: medalpaca/medalpaca-7b metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with AWQ. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to find out whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo medalpaca/medalpaca-7b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/medalpaca-medalpaca-7b-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("medalpaca/medalpaca-7b") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model medalpaca/medalpaca-7b, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/sambanovasystems-SambaLingo-Thai-Base-AWQ-4bit-smashed
PrunaAI
"2024-06-24T07:49:19Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:sambanovasystems/SambaLingo-Thai-Base", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T07:47:22Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: sambanovasystems/SambaLingo-Thai-Base metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with AWQ. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to find out whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo sambanovasystems/SambaLingo-Thai-Base are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/sambanovasystems-SambaLingo-Thai-Base-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Base") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model sambanovasystems/SambaLingo-Thai-Base, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
snob/eeve-2.8B_C_Ep5
snob
"2024-06-24T17:08:59Z"
0
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "custom_code", "base_model:yanolja/EEVE-Korean-Instruct-2.8B-v1.0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-24T07:48:42Z"
--- license: apache-2.0 base_model: yanolja/EEVE-Korean-Instruct-2.8B-v1.0 tags: - trl - sft - generated_from_trainer model-index: - name: eeve-2.8B_C_Ep5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eeve-2.8B_C_Ep5 This model is a fine-tuned version of [yanolja/EEVE-Korean-Instruct-2.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-Instruct-2.8B-v1.0) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.3834 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 3 - total_train_batch_size: 48 - total_eval_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2875 | 1.0 | 233 | 0.3668 | | 0.2072 | 2.0 | 466 | 0.3707 | | 0.207 | 3.0 | 699 | 0.3768 | | 0.2054 | 4.0 | 932 | 0.3812 | | 0.1805 | 5.0 | 1165 | 0.3834 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.20.0 - Tokenizers 0.19.1
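As a sanity check on the distributed-training arithmetic in the hyperparameters above, the effective batch sizes follow from the per-device sizes and the device count, and the cosine schedule with warmup ratio 0.1 can be sketched in plain Python (an illustrative sketch, not the actual Hugging Face Trainer scheduler; step counts are taken from the training-results table):

```python
import math

# Values reported in the card above.
per_device_train_batch = 16
per_device_eval_batch = 8
num_devices = 3
total_steps = 1165        # 5 epochs x 233 optimizer steps per epoch
warmup_ratio = 0.1
base_lr = 1e-05

# total_train_batch_size / total_eval_batch_size as listed in the card.
total_train_batch = per_device_train_batch * num_devices   # 48
total_eval_batch = per_device_eval_batch * num_devices     # 24

warmup_steps = int(total_steps * warmup_ratio)

def cosine_lr_with_warmup(step: int) -> float:
    """Linear warmup up to base_lr, then cosine decay toward zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(total_train_batch, total_eval_batch)   # 48 24
```

The learning rate peaks at exactly `base_lr` once warmup ends and decays toward zero by the final step.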
PrunaAI/saltlux-Ko-Llama3-Luxia-8B-AWQ-4bit-smashed
PrunaAI
"2024-06-24T07:52:26Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:saltlux/Ko-Llama3-Luxia-8B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T07:49:37Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: saltlux/Ko-Llama3-Luxia-8B metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with AWQ. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to find out whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo saltlux/Ko-Llama3-Luxia-8B are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/saltlux-Ko-Llama3-Luxia-8B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("saltlux/Ko-Llama3-Luxia-8B") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model saltlux/Ko-Llama3-Luxia-8B, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
Zoey0/Llama-2-7b-chat-finetune
Zoey0
"2024-06-24T07:49:51Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T07:49:51Z"
Entry not found
Karvina/fine-tuned_qwen_llm
Karvina
"2024-06-24T07:50:29Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T07:50:28Z"
Entry not found
PrunaAI/sbintuitions-sarashina2-7b-AWQ-4bit-smashed
PrunaAI
"2024-06-24T07:53:41Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:sbintuitions/sarashina2-7b", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T07:51:06Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: sbintuitions/sarashina2-7b metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with AWQ. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to find out whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo sbintuitions/sarashina2-7b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/sbintuitions-sarashina2-7b-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina2-7b") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model sbintuitions/sarashina2-7b, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
yamkesan/Reinforce-CartPole
yamkesan
"2024-06-24T07:51:54Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-06-24T07:51:45Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
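The "500.00 +/- 0.00" above is the standard evaluation summary: the mean and standard deviation of the total return over a batch of evaluation episodes. CartPole-v1 truncates episodes at 500 steps, so a policy that never fails scores exactly 500 every time. A minimal sketch of that computation, with illustrative episode returns:

```python
import statistics

def summarize_eval(episode_returns):
    """Mean and population std of per-episode total returns,
    formatted the way leaderboard results are reported."""
    mean = statistics.mean(episode_returns)
    std = statistics.pstdev(episode_returns)
    return f"{mean:.2f} +/- {std:.2f}"

# A perfect CartPole-v1 policy survives the full 500-step episode cap
# every time, giving zero variance across evaluation episodes.
returns = [500.0] * 10
print(summarize_eval(returns))  # 500.00 +/- 0.00
```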
RaiRachit/xlm-roberta-base-finetuned-panx-Indo
RaiRachit
"2024-06-25T07:14:46Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:id_nergrit_corpus", "base_model:xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-06-24T07:52:06Z"
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer datasets: - id_nergrit_corpus metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-Indo results: - task: name: Token Classification type: token-classification dataset: name: id_nergrit_corpus type: id_nergrit_corpus config: ner split: validation args: ner metrics: - name: F1 type: f1 value: 0.83694517516389 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-Indo This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the id_nergrit_corpus dataset. It achieves the following results on the evaluation set: - Loss: 0.1919 - F1: 0.8369 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3999 | 1.0 | 523 | 0.2013 | 0.8147 | | 0.1624 | 2.0 | 1046 | 0.1942 | 0.8249 | | 0.1097 | 3.0 | 1569 | 0.1919 | 0.8369 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
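The F1 reported above is an entity-level (span-level) score of the kind typically computed with seqeval-style matching, where a predicted entity counts as correct only if both its boundaries and its label match a gold span exactly. A minimal illustrative sketch of that metric (not the evaluation code used for this card):

```python
def span_f1(gold: set, pred: set) -> float:
    """Micro-averaged entity-span F1. Each span is a (start, end, label)
    tuple; a prediction is a true positive only on an exact match."""
    if not gold or not pred:
        return 0.0
    tp = len(gold & pred)
    precision = tp / len(pred)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {(0, 2, "PER"), (5, 7, "LOC")}
pred = {(0, 2, "PER"), (5, 6, "LOC")}   # second span has a wrong boundary
print(span_f1(gold, pred))  # 0.5
```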
hgissbkh/ALMA-13B-SFT-HW-CPO-Pref-Mono-XCOMET-Choose-High-Reject-Base
hgissbkh
"2024-06-24T08:42:16Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-24T07:52:32Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ofintech/FinGPT_0.1.2
ofintech
"2024-06-24T07:54:44Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T07:54:44Z"
Entry not found
PrunaAI/nomic-ai-gpt4all-j-AWQ-4bit-smashed
PrunaAI
"2024-06-24T07:57:28Z"
0
0
transformers
[ "transformers", "safetensors", "gptj", "text-generation", "pruna-ai", "base_model:nomic-ai/gpt4all-j", "autotrain_compatible", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T07:55:29Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: nomic-ai/gpt4all-j metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with AWQ. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, measured after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly under your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since both can be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements of the original repo nomic-ai/gpt4all-j are installed. In particular, check python, cuda, and transformers versions. 1.
Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/nomic-ai-gpt4all-j-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("nomic-ai/gpt4all-j") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model nomic-ai/gpt4all-j, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
Vanster/rl_course_vizdoom_health_gathering_supreme
Vanster
"2024-06-24T07:56:45Z"
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-06-24T07:56:36Z"
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 19.82 +/- 3.31 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r Vanster/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
maheshwariaryan/healthcareSLM
maheshwariaryan
"2024-06-24T07:57:14Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T07:57:14Z"
Entry not found
WilsonVoV/Dog-Classifier
WilsonVoV
"2024-06-24T07:59:50Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-24T07:59:50Z"
--- license: apache-2.0 ---
Pandita-IA/Reinforce-CartPole-v1
Pandita-IA
"2024-06-24T08:14:27Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-06-24T08:01:42Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 482.98 +/- 53.49 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
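The card above reports a mean reward of 482.98 on CartPole-v1 (whose per-episode cap is 500). At the heart of a REINFORCE agent is the backward accumulation of discounted returns over an episode; a minimal sketch of that step (an illustration of the standard computation, not this repository's exact training code):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1}, iterating backwards over an episode."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

# Example: three steps of reward 1 with gamma = 0.5
print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```

Each return `G_t` is then used to weight the log-probability of the action taken at step `t` in the policy-gradient loss.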
Nidhi-choudhary/q-FrozenLake-v1-4x4-noSlippery
Nidhi-choudhary
"2024-06-24T08:03:41Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-24T08:03:38Z"
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python # `load_from_hub` is the helper defined in the Deep RL Course notebook model = load_from_hub(repo_id="Nidhi-choudhary/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
rllover123/q-Taxi-v3
rllover123
"2024-06-24T08:04:30Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-24T08:04:00Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python # `load_from_hub` is the helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="rllover123/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
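Both Q-learning cards above ship a pickled Q-table trained with the tabular update Q(s,a) ← Q(s,a) + α·(r + γ·max Q(s',·) − Q(s,a)). A minimal, self-contained sketch of that update on a toy two-state problem (the hyperparameters α and γ here are illustrative, not the course's exact values):

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.7, gamma=0.95):
    """One tabular Q-learning step: move Q[s][a] toward the TD target."""
    td_target = r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (td_target - Q[s][a])

# Toy problem: taking action 1 in state 0 yields reward 1 and reaches
# state 1, whose Q-values stay 0 (it acts as a terminal state here).
Q = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(50):
    q_learning_update(Q, s=0, a=1, r=1.0, s_next=1)
print(round(Q[0][1], 6))  # converges to 1.0, the value of that transition
```

Repeating this update while exploring the environment (e.g. with an epsilon-greedy policy) is what produces the `q-learning.pkl` artifacts loaded in the usage snippets.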
google/gemma-2-9b-it-pytorch
google
"2024-06-27T13:40:40Z"
0
6
gemma_torch
[ "gemma_torch", "text-generation", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:2110.08193", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:1804.06876", "arxiv:2103.03874", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:2203.09509", "license:gemma", "region:us" ]
text-generation
"2024-06-24T08:04:12Z"
--- license: gemma library_name: gemma_torch pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- > [!IMPORTANT] > > This repository corresponds to the research [Gemma PyTorch repository](https://github.com/google/gemma_pytorch). If you're looking for the transformers implementation, visit [this page](https://huggingface.co/google/gemma-2-9b-it) # Gemma 2 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma] **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it-pytorch) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. 
* **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ### Citation ```none @article{gemma_2024, title={Gemma}, url={https://www.kaggle.com/m/3301}, DOI={10.34740/KAGGLE/M/3301}, publisher={Kaggle}, author={Gemma Team}, year={2024} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. 
### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably][sustainability]. ### Software Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is especially suitable for [foundation models][foundation-models], including large language models like these ones. 
Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | Gemma PT 9B | Gemma PT 27B | | ------------------------------ | ------------- | ----------- | ------------ | | [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 | | [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 | | [PIQA][piqa] | 0-shot | 81.7 | 83.2 | | [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 | | [BoolQ][boolq] | 0-shot | 84.2 | 84.8 | | [WinoGrande][winogrande] | partial score | 80.6 | 83.7 | | [ARC-e][arc] | 0-shot | 88.0 | 88.6 | | [ARC-c][arc] | 25-shot | 68.4 | 71.4 | | [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 | | [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 | | [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 | | [MBPP][mbpp] | 3-shot | 52.4 | 62.6 | | [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 | | [MATH][math] | 4-shot | 36.6 | 42.3 | | [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 | | [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 | | ------------------------------ | ------------- | ----------- | ------------ | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. 
These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq]. * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies][safety-policies] for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well-known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. #### Gemma 2.0 | Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B | | ------------------------ | ------------- | --------------- | ---------------- | | [RealToxicity][realtox] | average | 8.25 | 8.84 | | [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 | | [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 | | [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 | | [Winogender][winogender] | top-1 | 79.17 | 77.22 | | [TruthfulQA][truthfulqa] | | 50.27 | 51.60 | | [Winobias 1_2][winobias] | | 78.09 | 81.94 | | [Winobias 2_2][winobias] | | 95.32 | 97.22 | | [Toxigen][toxigen] | | 39.30 | 38.42 | | ------------------------ | ------------- | --------------- | ---------------- | ## Usage and Limitations These models have certain limitations that users should be aware of. 
### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. 
LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit][rai-toolkit]. * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. 
* Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use]. * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives. [rai-toolkit]: https://ai.google.dev/responsible [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2 [terms]: https://ai.google.dev/gemma/terms [vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335 [sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference [safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11 [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu [sustainability]: https://sustainability.google/operating-sustainably/ [jax]: https://github.com/google/jax [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ [sustainability]: https://sustainability.google/operating-sustainably/ [foundation-models]: https://ai.google/discover/foundation-models/ [gemini-2-paper]: https://goo.gle/gemma2report [mmlu]: https://arxiv.org/abs/2009.03300 [hellaswag]: 
https://arxiv.org/abs/1905.07830 [piqa]: https://arxiv.org/abs/1911.11641 [socialiqa]: https://arxiv.org/abs/1904.09728 [boolq]: https://arxiv.org/abs/1905.10044 [winogrande]: https://arxiv.org/abs/1907.10641 [commonsenseqa]: https://arxiv.org/abs/1811.00937 [openbookqa]: https://arxiv.org/abs/1809.02789 [arc]: https://arxiv.org/abs/1911.01547 [triviaqa]: https://arxiv.org/abs/1705.03551 [naturalq]: https://github.com/google-research-datasets/natural-questions [humaneval]: https://arxiv.org/abs/2107.03374 [mbpp]: https://arxiv.org/abs/2108.07732 [gsm8k]: https://arxiv.org/abs/2110.14168 [realtox]: https://arxiv.org/abs/2009.11462 [bold]: https://arxiv.org/abs/2101.11718 [crows]: https://aclanthology.org/2020.emnlp-main.154/ [bbq]: https://arxiv.org/abs/2110.08193v2 [winogender]: https://arxiv.org/abs/1804.09301 [truthfulqa]: https://arxiv.org/abs/2109.07958 [winobias]: https://arxiv.org/abs/1804.06876 [math]: https://arxiv.org/abs/2103.03874 [agieval]: https://arxiv.org/abs/2304.06364 [big-bench]: https://arxiv.org/abs/2206.04615 [toxigen]: https://arxiv.org/abs/2203.09509
snob/eeve-10.8B_A_Ep5
snob
"2024-06-24T19:47:41Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "base_model:yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-24T08:04:28Z"
--- license: apache-2.0 base_model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0 tags: - trl - sft - generated_from_trainer model-index: - name: eeve-10.8B_A_Ep5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eeve-10.8B_A_Ep5 This model is a fine-tuned version of [yanolja/EEVE-Korean-Instruct-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5871 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 3 - total_train_batch_size: 48 - total_eval_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2059 | 1.0 | 387 | 0.3973 | | 0.1265 | 2.0 | 774 | 0.4323 | | 0.0836 | 3.0 | 1161 | 0.4834 | | 0.0539 | 4.0 | 1548 | 0.5494 | | 0.0416 | 5.0 | 1935 | 0.5871 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.20.0 - Tokenizers 0.19.1
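The run above uses `lr_scheduler_type: cosine` with `lr_scheduler_warmup_ratio: 0.1` over 1935 total steps (5 epochs × 387 steps). As a rough sketch of that schedule's shape (mirroring the common linear-warmup plus half-cosine-decay pattern, not the exact `transformers` Trainer code):

```python
import math

def cosine_lr_with_warmup(step, total_steps, peak_lr=1e-5, warmup_ratio=0.1):
    """Linear warmup to peak_lr, then half-cosine decay toward 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 5 * 387  # epochs x steps-per-epoch, from the results table above
print(cosine_lr_with_warmup(0, total))      # 0.0 at the first step
print(cosine_lr_with_warmup(193, total))    # peak learning rate 1e-05
print(cosine_lr_with_warmup(total, total))  # decays to 0.0 at the end
```

With warmup_ratio 0.1 the learning rate peaks at 1e-05 after roughly 193 steps (half an epoch here) and then decays smoothly for the remainder of training.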
google/gemma-2-27b-it-pytorch
google
"2024-06-27T13:39:58Z"
0
6
gemma_torch
[ "gemma_torch", "text-generation", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:2110.08193", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:1804.06876", "arxiv:2103.03874", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:2203.09509", "license:gemma", "region:us" ]
text-generation
"2024-06-24T08:04:35Z"
--- license: gemma library_name: gemma_torch pipeline_tag: text-generation extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- > [!IMPORTANT] > > This repository corresponds to the research [Gemma PyTorch repository](https://github.com/google/gemma_pytorch). If you're looking for the transformers implementation, visit [this page](https://huggingface.co/google/gemma-2-27b-it) # Gemma 2 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma] **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-27b-it-pytorch) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. 
* **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ### Citation ```none @article{gemma_2024, title={Gemma}, url={https://www.kaggle.com/m/3301}, DOI={10.34740/KAGGLE/M/3301}, publisher={Kaggle}, author={Gemma Team}, year={2024} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. 
### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably][sustainability]. ### Software Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is especially suitable for [foundation models][foundation-models], including large language models like these.
Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow." ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation: | Benchmark | Metric | Gemma PT 9B | Gemma PT 27B | | ------------------------------ | ------------- | ----------- | ------------ | | [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 | | [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 | | [PIQA][piqa] | 0-shot | 81.7 | 83.2 | | [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 | | [BoolQ][boolq] | 0-shot | 84.2 | 84.8 | | [WinoGrande][winogrande] | partial score | 80.6 | 83.7 | | [ARC-e][arc] | 0-shot | 88.0 | 88.6 | | [ARC-c][arc] | 25-shot | 68.4 | 71.4 | | [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 | | [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 | | [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 | | [MBPP][mbpp] | 3-shot | 52.4 | 62.6 | | [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 | | [MATH][math] | 4-shot | 36.6 | 42.3 | | [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 | | [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 | | ------------------------------ | ------------- | ----------- | ------------ | ## Ethics and Safety Ethics and safety evaluation approach and results. ### Evaluation Approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. 
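The benchmark table above reports [HumanEval][humaneval] with the pass@1 metric. As an illustrative aside, not taken from the original evaluation code, the standard unbiased pass@k estimator from the HumanEval paper can be sketched as:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n generations of which c
    are correct, passes: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer incorrect samples than k draws: a pass is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k = 1 this reduces to the plain fraction of correct samples:
assert pass_at_k(n=10, c=5, k=1) == 0.5
```

In practice the estimator is averaged over tasks; for k = 1 it is simply the fraction of generations that pass the unit tests.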
These models were evaluated against a number of different categories relevant to ethics and safety, including: * Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech. * Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq]. * Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure. * Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks. ### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies][safety-policies] for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well-known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. #### Gemma 2.0 | Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B | | ------------------------ | ------------- | --------------- | ---------------- | | [RealToxicity][realtox] | average | 8.25 | 8.84 | | [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 | | [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 | | [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 | | [Winogender][winogender] | top-1 | 79.17 | 77.22 | | [TruthfulQA][truthfulqa] | | 50.27 | 51.60 | | [Winobias 1_2][winobias] | | 78.09 | 81.94 | | [Winobias 2_2][winobias] | | 95.32 | 97.22 | | [Toxigen][toxigen] | | 39.30 | 38.42 | | ------------------------ | ------------- | --------------- | ---------------- | ## Usage and Limitations These models have certain limitations that users should be aware of. 
### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). * Language Ambiguity and Nuance * Natural language is inherently complex. 
LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny; input data pre-processing is described and posterior evaluations are reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit][rai-toolkit]. * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use]. * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably-sized open model alternatives. [rai-toolkit]: https://ai.google.dev/responsible [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2 [terms]: https://ai.google.dev/gemma/terms [vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335 [sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference [safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11 [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu [sustainability]: https://sustainability.google/operating-sustainably/ [jax]: https://github.com/google/jax [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ [foundation-models]: https://ai.google/discover/foundation-models/ [gemini-2-paper]: https://goo.gle/gemma2report [mmlu]: https://arxiv.org/abs/2009.03300 [hellaswag]:
https://arxiv.org/abs/1905.07830 [piqa]: https://arxiv.org/abs/1911.11641 [socialiqa]: https://arxiv.org/abs/1904.09728 [boolq]: https://arxiv.org/abs/1905.10044 [winogrande]: https://arxiv.org/abs/1907.10641 [commonsenseqa]: https://arxiv.org/abs/1811.00937 [openbookqa]: https://arxiv.org/abs/1809.02789 [arc]: https://arxiv.org/abs/1911.01547 [triviaqa]: https://arxiv.org/abs/1705.03551 [naturalq]: https://github.com/google-research-datasets/natural-questions [humaneval]: https://arxiv.org/abs/2107.03374 [mbpp]: https://arxiv.org/abs/2108.07732 [gsm8k]: https://arxiv.org/abs/2110.14168 [realtox]: https://arxiv.org/abs/2009.11462 [bold]: https://arxiv.org/abs/2101.11718 [crows]: https://aclanthology.org/2020.emnlp-main.154/ [bbq]: https://arxiv.org/abs/2110.08193v2 [winogender]: https://arxiv.org/abs/1804.09301 [truthfulqa]: https://arxiv.org/abs/2109.07958 [winobias]: https://arxiv.org/abs/1804.06876 [math]: https://arxiv.org/abs/2103.03874 [agieval]: https://arxiv.org/abs/2304.06364 [big-bench]: https://arxiv.org/abs/2206.04615 [toxigen]: https://arxiv.org/abs/2203.09509
snob/eeve-10.8B_B_Ep5
snob
"2024-06-25T15:30:14Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "base_model:yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-24T08:04:55Z"
--- license: apache-2.0 base_model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0 tags: - trl - sft - generated_from_trainer model-index: - name: eeve-10.8B_B_Ep5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eeve-10.8B_B_Ep5 This model is a fine-tuned version of [yanolja/EEVE-Korean-Instruct-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5663 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 3 - total_train_batch_size: 48 - total_eval_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2153 | 1.0 | 291 | 0.3885 | | 0.1327 | 2.0 | 582 | 0.4051 | | 0.0892 | 3.0 | 873 | 0.4534 | | 0.0516 | 4.0 | 1164 | 0.5224 | | 0.0403 | 5.0 | 1455 | 0.5663 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.20.0 - Tokenizers 0.19.1
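As an illustrative note (not generated by the Trainer), the `lr_scheduler_type: cosine` and `lr_scheduler_warmup_ratio: 0.1` settings above combine into a linear-warmup, cosine-decay learning-rate curve. The sketch below mirrors the shape of `transformers`' cosine schedule with warmup for this run's 1455 total steps; treat it as an approximation of the schedule, not a verified reproduction of the trainer's internals:

```python
import math

def cosine_lr(step, total_steps=1455, warmup_ratio=0.1, base_lr=1e-05):
    """Approximate LR at a given optimizer step: linear warmup over the
    first warmup_ratio * total_steps steps, then cosine decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)  # 145 steps here
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# LR peaks at base_lr right after warmup and decays to ~0 by the last step.
```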
namrahrehman/dinov2-base-finetuned-adalora-rank8
namrahrehman
"2024-06-24T11:31:45Z"
0
0
null
[ "tensorboard", "safetensors", "region:us" ]
null
"2024-06-24T08:05:10Z"
Entry not found
snob/eeve-10.8B_C_Ep5
snob
"2024-06-24T08:05:22Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T08:05:22Z"
Entry not found
PrunaAI/beomi-Llama-3-Open-Ko-8B-Instruct-preview-AWQ-4bit-smashed
PrunaAI
"2024-06-24T08:08:15Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:beomi/Llama-3-Open-Ko-8B-Instruct-preview", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T08:05:27Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: beomi/Llama-3-Open-Ko-8B-Instruct-preview metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with AWQ.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes and stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0.
Check that the requirements from the original repo beomi/Llama-3-Open-Ko-8B-Instruct-preview are installed. In particular, check Python, CUDA, and transformers versions. 1. Make sure that you have installed quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/beomi-Llama-3-Open-Ko-8B-Instruct-preview-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("beomi/Llama-3-Open-Ko-8B-Instruct-preview") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model beomi/Llama-3-Open-Ko-8B-Instruct-preview, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
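The FAQ above notes that the model is compressed with AWQ, i.e. its weights are stored in 4 bits instead of 16. As a rough, didactic illustration of 4-bit weight quantization only — AWQ itself additionally selects scales from activation statistics, which is omitted here — a symmetric round-trip quantizer can be sketched as:

```python
def quantize_4bit(weights):
    """Map floats to signed 4-bit integers in [-8, 7] with a single
    per-tensor scale (toy version; real quantizers use per-group scales)."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0  # avoid a zero scale
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [v * scale for v in q]

q, scale = quantize_4bit([0.1, -0.35, 0.7])
approx = dequantize_4bit(q, scale)  # each value is within scale/2 of the original
```

Each stored value then needs only 4 bits plus a shared scale, roughly a 4x memory saving over 16-bit weights, which is the effect the memory metrics above measure.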
happyneishon/shareX
happyneishon
"2024-06-24T08:18:07Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T08:08:48Z"
Entry not found
Yuanshi/VideoCrafter2
Yuanshi
"2024-06-24T08:09:02Z"
0
1
null
[ "license:apache-2.0", "region:us" ]
null
"2024-06-24T08:09:02Z"
--- license: apache-2.0 ---
circulus/on_yolov10m_ov
circulus
"2024-06-24T08:09:27Z"
0
0
null
[ "license:gpl-3.0", "region:us" ]
null
"2024-06-24T08:09:07Z"
--- license: gpl-3.0 ---
Nidhi-choudhary/Taxi-v3
Nidhi-choudhary
"2024-06-24T08:13:46Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-24T08:10:40Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Nidhi-choudhary/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
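For readers new to tabular Q-learning, a minimal sketch of the update rule behind an agent like this one follows; the learning rate and discount factor here are illustrative assumptions, not the values used to train this checkpoint:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step:
    Q(s, a) <- Q(s, a) + alpha * (reward + gamma * max_a' Q(s', a') - Q(s, a))."""
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q

# Tiny 2-state, 2-action table with made-up values, for illustration only:
Q = [[0.0, 0.0], [0.0, 1.0]]
Q = q_update(Q, state=0, action=0, reward=1.0, next_state=1)
# Q[0][0] is now 0.1 * (1.0 + 0.99 * 1.0 - 0.0) ≈ 0.199
```

Repeating this update while acting (e.g. epsilon-greedily) over Taxi-v3's 500 states and 6 actions is what produces the Q-table stored in `q-learning.pkl`.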
krittapol/sandee
krittapol
"2024-07-01T08:06:43Z"
0
0
null
[ "safetensors", "license:llama3", "region:us" ]
null
"2024-06-24T08:11:08Z"
--- license: llama3 ---
F-Fries/dummy-model
F-Fries
"2024-06-24T08:15:37Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-24T08:12:48Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Amberle/GP
Amberle
"2024-06-24T08:13:07Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T08:13:07Z"
Entry not found
Advik007/output
Advik007
"2024-06-24T08:13:30Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:facebook/opt-125m", "license:other", "region:us" ]
null
"2024-06-24T08:13:09Z"
--- license: other library_name: peft tags: - generated_from_trainer base_model: facebook/opt-125m model-index: - name: output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.7277 | 0.0011 | 1 | 5.1617 | | 2.7116 | 0.0022 | 2 | 5.1617 | | 3.3353 | 0.0033 | 3 | 5.1617 | | 3.6362 | 0.0044 | 4 | 5.1604 | | 3.5728 | 0.0056 | 5 | 5.1578 | | 4.1179 | 0.0067 | 6 | 5.1534 | | 3.8586 | 0.0078 | 7 | 5.1471 | | 3.7692 | 0.0089 | 8 | 5.1381 | | 3.2691 | 0.0100 | 9 | 5.1286 | | 2.7563 | 0.0111 | 10 | 5.1158 | | 2.8201 | 0.0122 | 11 | 5.1017 | | 2.7152 | 0.0133 | 12 | 5.0859 | | 3.3258 | 0.0145 | 13 | 5.0673 | | 2.6642 | 0.0156 | 14 | 5.0478 | | 2.9017 | 0.0167 | 15 | 5.0267 | | 5.009 | 0.0178 | 16 | 5.0267 | | 4.6864 | 0.0189 | 17 | 5.0028 | | 7.29 | 0.0200 | 18 | 4.9743 | | 2.9371 | 0.0211 | 19 | 4.9427 | | 3.4501 | 0.0222 | 20 | 4.9074 | | 3.4496 | 0.0234 | 21 | 4.8706 | | 6.2758 | 0.0245 | 22 | 4.8296 | | 3.1046 | 0.0256 | 23 | 4.7856 | | 3.2246 | 0.0267 | 24 | 4.7389 | | 2.7564 | 0.0278 | 
| Per-step training and validation losses, steps 25–1167 (validation loss decreased from 4.69 at step 25 to roughly 0.84 by step 1167) |
1.2992 | 1168 | 0.8346 | | 1.1005 | 1.3003 | 1169 | 0.8329 | | 0.2105 | 1.3014 | 1170 | 0.8314 | | 1.0169 | 1.3026 | 1171 | 0.8296 | | 0.207 | 1.3037 | 1172 | 0.8282 | | 0.1782 | 1.3048 | 1173 | 0.8279 | | 0.7407 | 1.3059 | 1174 | 0.8288 | | 0.1024 | 1.3070 | 1175 | 0.8313 | | 0.7585 | 1.3081 | 1176 | 0.8347 | | 0.5667 | 1.3092 | 1177 | 0.8381 | | 1.2681 | 1.3103 | 1178 | 0.8418 | | 1.2501 | 1.3115 | 1179 | 0.8462 | | 0.1045 | 1.3126 | 1180 | 0.8516 | | 2.3382 | 1.3137 | 1181 | 0.8565 | | 0.2623 | 1.3148 | 1182 | 0.8606 | | 1.0773 | 1.3159 | 1183 | 0.8647 | | 1.161 | 1.3170 | 1184 | 0.8692 | | 0.937 | 1.3181 | 1185 | 0.8735 | | 1.4227 | 1.3192 | 1186 | 0.8776 | | 1.4132 | 1.3204 | 1187 | 0.8819 | | 1.2079 | 1.3215 | 1188 | 0.8853 | | 2.5083 | 1.3226 | 1189 | 0.8876 | | 1.7559 | 1.3237 | 1190 | 0.8897 | | 0.9501 | 1.3248 | 1191 | 0.8912 | | 2.0093 | 1.3259 | 1192 | 0.8920 | | 1.8049 | 1.3270 | 1193 | 0.8924 | | 2.143 | 1.3281 | 1194 | 0.8921 | | 1.3383 | 1.3293 | 1195 | 0.8915 | | 1.7049 | 1.3304 | 1196 | 0.8908 | | 2.1545 | 1.3315 | 1197 | 0.8902 | | 1.763 | 1.3326 | 1198 | 0.8900 | | 1.2131 | 1.3337 | 1199 | 0.8903 | | 0.1 | 1.3348 | 1200 | 0.8895 | | 0.5456 | 1.3359 | 1201 | 0.8874 | | 0.1176 | 1.3370 | 1202 | 0.8849 | | 0.0826 | 1.3382 | 1203 | 0.8841 | | 0.3229 | 1.3393 | 1204 | 0.8824 | | 0.4289 | 1.3404 | 1205 | 0.8813 | | 0.2114 | 1.3415 | 1206 | 0.8786 | | 0.1258 | 1.3426 | 1207 | 0.8771 | | 0.3507 | 1.3437 | 1208 | 0.8746 | | 0.0851 | 1.3448 | 1209 | 0.8739 | | 0.0612 | 1.3459 | 1210 | 0.8747 | | 0.9213 | 1.3471 | 1211 | 0.8739 | | 1.7706 | 1.3482 | 1212 | 0.8721 | | 1.0434 | 1.3493 | 1213 | 0.8692 | | 1.4623 | 1.3504 | 1214 | 0.8670 | | 0.2846 | 1.3515 | 1215 | 0.8638 | | 0.3331 | 1.3526 | 1216 | 0.8590 | | 0.8364 | 1.3537 | 1217 | 0.8539 | | 0.2742 | 1.3548 | 1218 | 0.8483 | | 0.1685 | 1.3560 | 1219 | 0.8427 | | 0.1728 | 1.3571 | 1220 | 0.8368 | | 0.1158 | 1.3582 | 1221 | 0.8335 | | 0.8481 | 1.3593 | 1222 | 0.8308 | | 0.152 | 1.3604 | 1223 | 0.8281 | | 
0.1965 | 1.3615 | 1224 | 0.8269 | | 1.4891 | 1.3626 | 1225 | 0.8256 | | 0.23 | 1.3637 | 1226 | 0.8250 | | 0.1552 | 1.3648 | 1227 | 0.8246 | | 1.0534 | 1.3660 | 1228 | 0.8242 | | 0.8665 | 1.3671 | 1229 | 0.8241 | | 0.2324 | 1.3682 | 1230 | 0.8245 | | 0.7599 | 1.3693 | 1231 | 0.8247 | | 1.4636 | 1.3704 | 1232 | 0.8244 | | 1.1979 | 1.3715 | 1233 | 0.8244 | | 1.6358 | 1.3726 | 1234 | 0.8246 | | 1.2016 | 1.3737 | 1235 | 0.8249 | | 1.6938 | 1.3749 | 1236 | 0.8255 | | 1.5856 | 1.3760 | 1237 | 0.8257 | | 1.3345 | 1.3771 | 1238 | 0.8255 | | 0.4809 | 1.3782 | 1239 | 0.8248 | | 1.6503 | 1.3793 | 1240 | 0.8248 | | 0.5154 | 1.3804 | 1241 | 0.8254 | | 1.609 | 1.3815 | 1242 | 0.8257 | | 1.2427 | 1.3826 | 1243 | 0.8264 | | 1.8158 | 1.3838 | 1244 | 0.8274 | | 1.2077 | 1.3849 | 1245 | 0.8286 | | 2.1999 | 1.3860 | 1246 | 0.8300 | | 1.0795 | 1.3871 | 1247 | 0.8317 | | 1.7249 | 1.3882 | 1248 | 0.8332 | | 1.617 | 1.3893 | 1249 | 0.8348 | | 0.6834 | 1.3904 | 1250 | 0.8363 | | 0.2791 | 1.3915 | 1251 | 0.8378 | | 0.6039 | 1.3927 | 1252 | 0.8399 | | 0.0995 | 1.3938 | 1253 | 0.8416 | | 0.3449 | 1.3949 | 1254 | 0.8433 | | 0.1153 | 1.3960 | 1255 | 0.8451 | | 0.0475 | 1.3971 | 1256 | 0.8472 | | 0.1998 | 1.3982 | 1257 | 0.8492 | | 0.1506 | 1.3993 | 1258 | 0.8512 | | 1.0664 | 1.4004 | 1259 | 0.8524 | | 0.2211 | 1.4016 | 1260 | 0.8534 | | 0.0649 | 1.4027 | 1261 | 0.8543 | | 0.2495 | 1.4038 | 1262 | 0.8561 | | 0.2044 | 1.4049 | 1263 | 0.8594 | | 0.9178 | 1.4060 | 1264 | 0.8621 | | 1.156 | 1.4071 | 1265 | 0.8639 | | 0.8862 | 1.4082 | 1266 | 0.8648 | | 1.3062 | 1.4093 | 1267 | 0.8653 | | 0.9078 | 1.4105 | 1268 | 0.8658 | | 0.1671 | 1.4116 | 1269 | 0.8667 | | 0.1901 | 1.4127 | 1270 | 0.8675 | | 0.6747 | 1.4138 | 1271 | 0.8668 | | 1.4042 | 1.4149 | 1272 | 0.8646 | | 0.1326 | 1.4160 | 1273 | 0.8643 | | 1.1563 | 1.4171 | 1274 | 0.8642 | | 0.9238 | 1.4182 | 1275 | 0.8645 | | 1.055 | 1.4194 | 1276 | 0.8637 | | 2.3584 | 1.4205 | 1277 | 0.8632 | | 1.0263 | 1.4216 | 1278 | 0.8618 | | 1.5701 | 1.4227 | 1279 | 
0.8611 | | 1.8728 | 1.4238 | 1280 | 0.8605 | | 1.8641 | 1.4249 | 1281 | 0.8598 | | 1.061 | 1.4260 | 1282 | 0.8598 | | 1.9154 | 1.4271 | 1283 | 0.8586 | | 1.0905 | 1.4283 | 1284 | 0.8580 | | 1.5755 | 1.4294 | 1285 | 0.8576 | | 1.2756 | 1.4305 | 1286 | 0.8565 | | 1.0232 | 1.4316 | 1287 | 0.8560 | | 3.4327 | 1.4327 | 1288 | 0.8533 | | 1.0983 | 1.4338 | 1289 | 0.8516 | | 0.8966 | 1.4349 | 1290 | 0.8509 | | 1.6758 | 1.4360 | 1291 | 0.8505 | | 0.3289 | 1.4372 | 1292 | 0.8519 | | 1.6851 | 1.4383 | 1293 | 0.8533 | | 2.1615 | 1.4394 | 1294 | 0.8549 | | 1.7409 | 1.4405 | 1295 | 0.8558 | | 1.9547 | 1.4416 | 1296 | 0.8568 | | 1.7571 | 1.4427 | 1297 | 0.8579 | | 0.9661 | 1.4438 | 1298 | 0.8592 | | 1.4391 | 1.4449 | 1299 | 0.8606 | | 0.4474 | 1.4461 | 1300 | 0.8619 | | 0.1469 | 1.4472 | 1301 | 0.8635 | | 0.0399 | 1.4483 | 1302 | 0.8646 | | 0.6192 | 1.4494 | 1303 | 0.8648 | | 0.1387 | 1.4505 | 1304 | 0.8652 | | 0.1507 | 1.4516 | 1305 | 0.8659 | | 0.1471 | 1.4527 | 1306 | 0.8667 | | 0.1568 | 1.4538 | 1307 | 0.8682 | | 0.1381 | 1.4549 | 1308 | 0.8694 | | 1.2158 | 1.4561 | 1309 | 0.8695 | | 0.197 | 1.4572 | 1310 | 0.8692 | | 1.4848 | 1.4583 | 1311 | 0.8676 | | 0.19 | 1.4594 | 1312 | 0.8665 | | 0.1028 | 1.4605 | 1313 | 0.8659 | | 1.1339 | 1.4616 | 1314 | 0.8641 | | 0.1344 | 1.4627 | 1315 | 0.8639 | | 0.1351 | 1.4638 | 1316 | 0.8629 | | 0.1853 | 1.4650 | 1317 | 0.8624 | | 0.7337 | 1.4661 | 1318 | 0.8620 | | 1.7401 | 1.4672 | 1319 | 0.8620 | | 0.894 | 1.4683 | 1320 | 0.8617 | | 0.1087 | 1.4694 | 1321 | 0.8626 | | 0.8847 | 1.4705 | 1322 | 0.8641 | | 0.8549 | 1.4716 | 1323 | 0.8650 | | 0.7825 | 1.4727 | 1324 | 0.8655 | | 1.1954 | 1.4739 | 1325 | 0.8670 | | 1.0807 | 1.4750 | 1326 | 0.8693 | | 0.712 | 1.4761 | 1327 | 0.8718 | | 0.1519 | 1.4772 | 1328 | 0.8756 | | 0.1117 | 1.4783 | 1329 | 0.8802 | | 0.2402 | 1.4794 | 1330 | 0.8847 | | 0.9789 | 1.4805 | 1331 | 0.8873 | | 0.0752 | 1.4816 | 1332 | 0.8910 | | 0.4983 | 1.4828 | 1333 | 0.8927 | | 1.371 | 1.4839 | 1334 | 0.8934 | | 0.7262 | 1.4850 
| 1335 | 0.8929 | | 2.2841 | 1.4861 | 1336 | 0.8915 | | 1.0938 | 1.4872 | 1337 | 0.8914 | | 1.3279 | 1.4883 | 1338 | 0.8898 | | 0.9464 | 1.4894 | 1339 | 0.8882 | | 1.5839 | 1.4905 | 1340 | 0.8865 | | 1.0345 | 1.4917 | 1341 | 0.8845 | | 2.2694 | 1.4928 | 1342 | 0.8821 | | 2.0455 | 1.4939 | 1343 | 0.8795 | | 2.3179 | 1.4950 | 1344 | 0.8764 | | 1.5415 | 1.4961 | 1345 | 0.8742 | | 0.7126 | 1.4972 | 1346 | 0.8731 | | 2.1948 | 1.4983 | 1347 | 0.8716 | | 0.9251 | 1.4994 | 1348 | 0.8704 | | 1.2534 | 1.5006 | 1349 | 0.8693 | | 0.3341 | 1.5017 | 1350 | 0.8687 | | 0.2565 | 1.5028 | 1351 | 0.8671 | | 0.1125 | 1.5039 | 1352 | 0.8658 | | 0.2147 | 1.5050 | 1353 | 0.8632 | | 0.8629 | 1.5061 | 1354 | 0.8607 | | 0.1721 | 1.5072 | 1355 | 0.8578 | | 0.1739 | 1.5083 | 1356 | 0.8541 | | 0.1841 | 1.5095 | 1357 | 0.8520 | | 0.1504 | 1.5106 | 1358 | 0.8498 | | 0.3525 | 1.5117 | 1359 | 0.8478 | | 0.2874 | 1.5128 | 1360 | 0.8454 | | 0.1059 | 1.5139 | 1361 | 0.8434 | | 0.1149 | 1.5150 | 1362 | 0.8421 | | 0.3111 | 1.5161 | 1363 | 0.8411 | | 0.1433 | 1.5172 | 1364 | 0.8403 | | 0.8971 | 1.5184 | 1365 | 0.8397 | | 0.7134 | 1.5195 | 1366 | 0.8391 | | 1.9402 | 1.5206 | 1367 | 0.8378 | | 0.0412 | 1.5217 | 1368 | 0.8372 | | 0.8237 | 1.5228 | 1369 | 0.8364 | | 0.1423 | 1.5239 | 1370 | 0.8358 | | 1.2363 | 1.5250 | 1371 | 0.8355 | | 0.3741 | 1.5261 | 1372 | 0.8357 | | 0.1142 | 1.5273 | 1373 | 0.8364 | | 0.4191 | 1.5284 | 1374 | 0.8368 | | 0.8951 | 1.5295 | 1375 | 0.8367 | | 1.8204 | 1.5306 | 1376 | 0.8368 | | 0.3309 | 1.5317 | 1377 | 0.8363 | | 1.7721 | 1.5328 | 1378 | 0.8358 | | 0.1599 | 1.5339 | 1379 | 0.8357 | | 2.4693 | 1.5350 | 1380 | 0.8342 | | 0.5709 | 1.5362 | 1381 | 0.8337 | | 0.9739 | 1.5373 | 1382 | 0.8327 | | 0.1538 | 1.5384 | 1383 | 0.8329 | | 1.5688 | 1.5395 | 1384 | 0.8332 | | 1.5836 | 1.5406 | 1385 | 0.8338 | | 1.4405 | 1.5417 | 1386 | 0.8345 | | 1.2315 | 1.5428 | 1387 | 0.8350 | | 0.7126 | 1.5439 | 1388 | 0.8351 | | 1.5053 | 1.5451 | 1389 | 0.8356 | | 0.918 | 1.5462 | 1390 | 0.8350 | | 
1.6593 | 1.5473 | 1391 | 0.8350 | | 1.5296 | 1.5484 | 1392 | 0.8347 | | 1.9705 | 1.5495 | 1393 | 0.8337 | | 1.6258 | 1.5506 | 1394 | 0.8326 | | 1.6787 | 1.5517 | 1395 | 0.8322 | | 1.5066 | 1.5528 | 1396 | 0.8316 | | 1.075 | 1.5539 | 1397 | 0.8312 | | 1.6869 | 1.5551 | 1398 | 0.8309 | | 1.8803 | 1.5562 | 1399 | 0.8307 | | 0.0373 | 1.5573 | 1400 | 0.8310 | | 0.3026 | 1.5584 | 1401 | 0.8312 | | 0.3107 | 1.5595 | 1402 | 0.8313 | | 0.1133 | 1.5606 | 1403 | 0.8318 | | 0.1124 | 1.5617 | 1404 | 0.8321 | | 0.1394 | 1.5628 | 1405 | 0.8329 | | 0.1101 | 1.5640 | 1406 | 0.8345 | | 0.6585 | 1.5651 | 1407 | 0.8346 | | 0.2084 | 1.5662 | 1408 | 0.8350 | | 0.1359 | 1.5673 | 1409 | 0.8346 | | 0.704 | 1.5684 | 1410 | 0.8346 | | 0.165 | 1.5695 | 1411 | 0.8343 | | 0.1483 | 1.5706 | 1412 | 0.8334 | | 0.0582 | 1.5717 | 1413 | 0.8335 | | 1.7093 | 1.5729 | 1414 | 0.8330 | | 0.2987 | 1.5740 | 1415 | 0.8330 | | 0.1072 | 1.5751 | 1416 | 0.8336 | | 0.0973 | 1.5762 | 1417 | 0.8350 | | 0.1756 | 1.5773 | 1418 | 0.8354 | | 0.1486 | 1.5784 | 1419 | 0.8351 | | 0.1881 | 1.5795 | 1420 | 0.8343 | | 0.5024 | 1.5806 | 1421 | 0.8338 | | 0.1008 | 1.5818 | 1422 | 0.8331 | | 0.2236 | 1.5829 | 1423 | 0.8325 | | 1.1301 | 1.5840 | 1424 | 0.8314 | | 1.2099 | 1.5851 | 1425 | 0.8305 | | 0.0872 | 1.5862 | 1426 | 0.8297 | | 0.2532 | 1.5873 | 1427 | 0.8290 | | 0.5131 | 1.5884 | 1428 | 0.8280 | | 0.2754 | 1.5895 | 1429 | 0.8271 | | 0.235 | 1.5907 | 1430 | 0.8260 | | 1.2399 | 1.5918 | 1431 | 0.8242 | | 0.2947 | 1.5929 | 1432 | 0.8234 | | 1.9521 | 1.5940 | 1433 | 0.8226 | | 0.79 | 1.5951 | 1434 | 0.8220 | | 0.6896 | 1.5962 | 1435 | 0.8204 | | 1.5095 | 1.5973 | 1436 | 0.8192 | | 1.2774 | 1.5984 | 1437 | 0.8182 | | 1.0384 | 1.5996 | 1438 | 0.8164 | | 1.1429 | 1.6007 | 1439 | 0.8147 | | 0.5559 | 1.6018 | 1440 | 0.8131 | | 1.6752 | 1.6029 | 1441 | 0.8124 | | 1.2089 | 1.6040 | 1442 | 0.8123 | | 1.2771 | 1.6051 | 1443 | 0.8124 | | 1.6964 | 1.6062 | 1444 | 0.8130 | | 1.7943 | 1.6073 | 1445 | 0.8135 | | 1.3573 | 1.6085 | 1446 | 
0.8141 | | 1.7025 | 1.6096 | 1447 | 0.8150 | | 1.1261 | 1.6107 | 1448 | 0.8159 | | 2.687 | 1.6118 | 1449 | 0.8157 | | 0.5996 | 1.6129 | 1450 | 0.8160 | | 0.3431 | 1.6140 | 1451 | 0.8161 | | 0.0495 | 1.6151 | 1452 | 0.8165 | | 0.1782 | 1.6162 | 1453 | 0.8168 | | 0.147 | 1.6174 | 1454 | 0.8170 | | 0.4936 | 1.6185 | 1455 | 0.8171 | | 0.1891 | 1.6196 | 1456 | 0.8174 | | 0.157 | 1.6207 | 1457 | 0.8180 | | 0.1494 | 1.6218 | 1458 | 0.8191 | | 1.1861 | 1.6229 | 1459 | 0.8192 | | 0.9716 | 1.6240 | 1460 | 0.8202 | | 0.1326 | 1.6251 | 1461 | 0.8208 | | 0.0688 | 1.6263 | 1462 | 0.8208 | | 0.1445 | 1.6274 | 1463 | 0.8201 | | 0.1322 | 1.6285 | 1464 | 0.8193 | | 0.1166 | 1.6296 | 1465 | 0.8186 | | 1.1675 | 1.6307 | 1466 | 0.8177 | | 0.0942 | 1.6318 | 1467 | 0.8176 | | 0.2866 | 1.6329 | 1468 | 0.8167 | | 0.1966 | 1.6340 | 1469 | 0.8161 | | 0.9997 | 1.6352 | 1470 | 0.8151 | | 1.1384 | 1.6363 | 1471 | 0.8145 | | 1.0952 | 1.6374 | 1472 | 0.8134 | | 1.2168 | 1.6385 | 1473 | 0.8128 | | 0.4955 | 1.6396 | 1474 | 0.8127 | | 1.5091 | 1.6407 | 1475 | 0.8128 | | 1.2753 | 1.6418 | 1476 | 0.8129 | | 0.4831 | 1.6429 | 1477 | 0.8136 | | 0.2462 | 1.6440 | 1478 | 0.8149 | | 1.7673 | 1.6452 | 1479 | 0.8170 | | 0.3723 | 1.6463 | 1480 | 0.8195 | | 1.4478 | 1.6474 | 1481 | 0.8217 | | 1.8741 | 1.6485 | 1482 | 0.8236 | | 0.8578 | 1.6496 | 1483 | 0.8265 | | 1.5605 | 1.6507 | 1484 | 0.8295 | | 0.5391 | 1.6518 | 1485 | 0.8313 | | 1.3922 | 1.6529 | 1486 | 0.8326 | | 1.8019 | 1.6541 | 1487 | 0.8332 | | 1.0494 | 1.6552 | 1488 | 0.8336 | | 1.2245 | 1.6563 | 1489 | 0.8344 | | 2.0011 | 1.6574 | 1490 | 0.8353 | | 2.1018 | 1.6585 | 1491 | 0.8353 | | 1.5163 | 1.6596 | 1492 | 0.8357 | | 1.7002 | 1.6607 | 1493 | 0.8360 | | 1.2659 | 1.6618 | 1494 | 0.8363 | | 1.5684 | 1.6630 | 1495 | 0.8369 | | 1.6578 | 1.6641 | 1496 | 0.8372 | | 1.3528 | 1.6652 | 1497 | 0.8380 | | 1.8636 | 1.6663 | 1498 | 0.8381 | | 1.1483 | 1.6674 | 1499 | 0.8383 | | 0.1161 | 1.6685 | 1500 | 0.8391 | | 0.2879 | 1.6696 | 1501 | 0.8398 | | 0.3367 | 
1.6707 | 1502 | 0.8402 | | 0.0558 | 1.6719 | 1503 | 0.8404 | | 0.1054 | 1.6730 | 1504 | 0.8401 | | 0.1296 | 1.6741 | 1505 | 0.8392 | | 0.2435 | 1.6752 | 1506 | 0.8383 | | 0.1471 | 1.6763 | 1507 | 0.8373 | | 0.1226 | 1.6774 | 1508 | 0.8373 | | 0.2343 | 1.6785 | 1509 | 0.8368 | | 0.3051 | 1.6796 | 1510 | 0.8365 | | 0.2131 | 1.6808 | 1511 | 0.8376 | | 1.5339 | 1.6819 | 1512 | 0.8375 | | 1.5275 | 1.6830 | 1513 | 0.8361 | | 1.3589 | 1.6841 | 1514 | 0.8334 | | 0.088 | 1.6852 | 1515 | 0.8313 | | 0.2555 | 1.6863 | 1516 | 0.8288 | | 0.1137 | 1.6874 | 1517 | 0.8265 | | 0.2211 | 1.6885 | 1518 | 0.8242 | | 0.068 | 1.6897 | 1519 | 0.8218 | | 0.1081 | 1.6908 | 1520 | 0.8203 | | 1.2629 | 1.6919 | 1521 | 0.8195 | | 0.5371 | 1.6930 | 1522 | 0.8186 | | 1.1922 | 1.6941 | 1523 | 0.8179 | | 0.1714 | 1.6952 | 1524 | 0.8171 | | 1.1176 | 1.6963 | 1525 | 0.8168 | | 0.6662 | 1.6974 | 1526 | 0.8158 | | 0.2302 | 1.6986 | 1527 | 0.8143 | | 0.521 | 1.6997 | 1528 | 0.8122 | | 0.0925 | 1.7008 | 1529 | 0.8107 | | 0.2853 | 1.7019 | 1530 | 0.8092 | | 0.8094 | 1.7030 | 1531 | 0.8075 | | 1.0225 | 1.7041 | 1532 | 0.8065 | | 0.2593 | 1.7052 | 1533 | 0.8057 | | 0.4902 | 1.7063 | 1534 | 0.8050 | | 1.0796 | 1.7075 | 1535 | 0.8047 | | 1.0653 | 1.7086 | 1536 | 0.8044 | | 0.0781 | 1.7097 | 1537 | 0.8044 | | 1.0342 | 1.7108 | 1538 | 0.8040 | | 2.1249 | 1.7119 | 1539 | 0.8036 | | 1.3463 | 1.7130 | 1540 | 0.8032 | | 1.4189 | 1.7141 | 1541 | 0.8032 | | 1.6162 | 1.7152 | 1542 | 0.8036 | | 1.0056 | 1.7164 | 1543 | 0.8039 | | 1.2708 | 1.7175 | 1544 | 0.8045 | | 1.4553 | 1.7186 | 1545 | 0.8054 | | 2.0447 | 1.7197 | 1546 | 0.8056 | | 2.1384 | 1.7208 | 1547 | 0.8054 | | 1.4155 | 1.7219 | 1548 | 0.8051 | | 1.7154 | 1.7230 | 1549 | 0.8048 | | 0.0273 | 1.7241 | 1550 | 0.8050 | | 0.3516 | 1.7253 | 1551 | 0.8049 | | 0.6258 | 1.7264 | 1552 | 0.8045 | | 0.2509 | 1.7275 | 1553 | 0.8040 | | 0.0515 | 1.7286 | 1554 | 0.8038 | | 0.2363 | 1.7297 | 1555 | 0.8034 | | 1.2139 | 1.7308 | 1556 | 0.8031 | | 0.1864 | 1.7319 | 1557 | 0.8027 
| | 0.2204 | 1.7330 | 1558 | 0.8022 | | 0.0592 | 1.7341 | 1559 | 0.8022 | | 0.6472 | 1.7353 | 1560 | 0.8015 | | 0.026 | 1.7364 | 1561 | 0.8010 | | 0.1572 | 1.7375 | 1562 | 0.8005 | | 0.1902 | 1.7386 | 1563 | 0.7998 | | 1.4421 | 1.7397 | 1564 | 0.7990 | | 0.6653 | 1.7408 | 1565 | 0.7984 | | 0.1626 | 1.7419 | 1566 | 0.7977 | | 0.2603 | 1.7430 | 1567 | 0.7974 | | 0.1649 | 1.7442 | 1568 | 0.7968 | | 0.3768 | 1.7453 | 1569 | 0.7965 | | 0.0322 | 1.7464 | 1570 | 0.7963 | | 0.1194 | 1.7475 | 1571 | 0.7960 | | 1.4423 | 1.7486 | 1572 | 0.7961 | | 0.91 | 1.7497 | 1573 | 0.7962 | | 1.0152 | 1.7508 | 1574 | 0.7960 | | 0.0507 | 1.7519 | 1575 | 0.7959 | | 0.9401 | 1.7531 | 1576 | 0.7958 | | 0.4271 | 1.7542 | 1577 | 0.7961 | | 0.1923 | 1.7553 | 1578 | 0.7974 | | 1.3395 | 1.7564 | 1579 | 0.7994 | | 0.1772 | 1.7575 | 1580 | 0.8019 | | 0.7208 | 1.7586 | 1581 | 0.8050 | | 0.4183 | 1.7597 | 1582 | 0.8097 | | 0.1822 | 1.7608 | 1583 | 0.8146 | | 0.9538 | 1.7620 | 1584 | 0.8185 | | 0.0984 | 1.7631 | 1585 | 0.8225 | | 0.0262 | 1.7642 | 1586 | 0.8263 | | 0.1873 | 1.7653 | 1587 | 0.8305 | | 0.1821 | 1.7664 | 1588 | 0.8345 | | 1.6246 | 1.7675 | 1589 | 0.8381 | | 1.2562 | 1.7686 | 1590 | 0.8393 | | 0.9315 | 1.7697 | 1591 | 0.8393 | | 1.6576 | 1.7709 | 1592 | 0.8394 | | 1.2396 | 1.7720 | 1593 | 0.8396 | | 1.2 | 1.7731 | 1594 | 0.8401 | | 1.8489 | 1.7742 | 1595 | 0.8404 | | 1.8223 | 1.7753 | 1596 | 0.8387 | | 1.8779 | 1.7764 | 1597 | 0.8372 | | 1.6186 | 1.7775 | 1598 | 0.8350 | | 1.7399 | 1.7786 | 1599 | 0.8335 | | 0.3663 | 1.7798 | 1600 | 0.8320 | | 0.048 | 1.7809 | 1601 | 0.8304 | | 0.2076 | 1.7820 | 1602 | 0.8294 | | 0.1963 | 1.7831 | 1603 | 0.8287 | | 0.0526 | 1.7842 | 1604 | 0.8284 | | 0.1969 | 1.7853 | 1605 | 0.8287 | | 0.305 | 1.7864 | 1606 | 0.8292 | | 0.2824 | 1.7875 | 1607 | 0.8298 | | 0.1252 | 1.7887 | 1608 | 0.8311 | | 0.1606 | 1.7898 | 1609 | 0.8315 | | 0.0918 | 1.7909 | 1610 | 0.8339 | | 0.1215 | 1.7920 | 1611 | 0.8353 | | 0.1895 | 1.7931 | 1612 | 0.8367 | | 0.9384 | 1.7942 | 1613 
| 0.8381 | | 0.2112 | 1.7953 | 1614 | 0.8395 | | 0.3986 | 1.7964 | 1615 | 0.8410 | | 0.8468 | 1.7976 | 1616 | 0.8413 | | 0.1422 | 1.7987 | 1617 | 0.8409 | | 0.1265 | 1.7998 | 1618 | 0.8402 | | 0.7873 | 1.8009 | 1619 | 0.8378 | | 0.1191 | 1.8020 | 1620 | 0.8356 | | 0.2206 | 1.8031 | 1621 | 0.8337 | | 1.1094 | 1.8042 | 1622 | 0.8309 | | 1.0523 | 1.8053 | 1623 | 0.8281 | | 0.1362 | 1.8065 | 1624 | 0.8278 | | 0.6761 | 1.8076 | 1625 | 0.8277 | | 1.8764 | 1.8087 | 1626 | 0.8277 | | 0.9629 | 1.8098 | 1627 | 0.8266 | | 0.9299 | 1.8109 | 1628 | 0.8247 | | 1.2794 | 1.8120 | 1629 | 0.8221 | | 1.2126 | 1.8131 | 1630 | 0.8199 | | 0.4792 | 1.8142 | 1631 | 0.8175 | | 1.5682 | 1.8154 | 1632 | 0.8153 | | 1.2662 | 1.8165 | 1633 | 0.8125 | | 1.7495 | 1.8176 | 1634 | 0.8099 | | 2.0485 | 1.8187 | 1635 | 0.8077 | | 1.4757 | 1.8198 | 1636 | 0.8059 | | 1.4922 | 1.8209 | 1637 | 0.8044 | | 1.602 | 1.8220 | 1638 | 0.8033 | | 1.532 | 1.8231 | 1639 | 0.8024 | | 2.3022 | 1.8242 | 1640 | 0.8017 | | 1.0309 | 1.8254 | 1641 | 0.8009 | | 1.6082 | 1.8265 | 1642 | 0.8004 | | 1.35 | 1.8276 | 1643 | 0.8001 | | 1.0686 | 1.8287 | 1644 | 0.7999 | | 1.3929 | 1.8298 | 1645 | 0.7997 | | 1.6213 | 1.8309 | 1646 | 0.7994 | | 1.2308 | 1.8320 | 1647 | 0.7993 | | 1.8324 | 1.8331 | 1648 | 0.7991 | | 1.4515 | 1.8343 | 1649 | 0.7986 | | 0.053 | 1.8354 | 1650 | 0.7984 | | 0.22 | 1.8365 | 1651 | 0.7984 | | 0.0344 | 1.8376 | 1652 | 0.7984 | | 0.1577 | 1.8387 | 1653 | 0.7984 | | 0.6136 | 1.8398 | 1654 | 0.7983 | | 0.1809 | 1.8409 | 1655 | 0.7984 | | 0.0376 | 1.8420 | 1656 | 0.7984 | | 0.7229 | 1.8432 | 1657 | 0.7983 | | 0.8568 | 1.8443 | 1658 | 0.7983 | | 0.1841 | 1.8454 | 1659 | 0.7982 | | 0.2058 | 1.8465 | 1660 | 0.7980 | | 0.1158 | 1.8476 | 1661 | 0.7978 | | 0.2314 | 1.8487 | 1662 | 0.7977 | | 0.1547 | 1.8498 | 1663 | 0.7974 | | 0.5078 | 1.8509 | 1664 | 0.7972 | | 0.1633 | 1.8521 | 1665 | 0.7970 | | 1.872 | 1.8532 | 1666 | 0.7973 | | 0.2299 | 1.8543 | 1667 | 0.7973 | | 0.1479 | 1.8554 | 1668 | 0.7975 | | 0.0963 | 
1.8565 | 1669 | 0.7974 | | 0.12 | 1.8576 | 1670 | 0.7971 | | 0.8169 | 1.8587 | 1671 | 0.7967 | | 0.4493 | 1.8598 | 1672 | 0.7965 | | 0.142 | 1.8610 | 1673 | 0.7966 | | 0.6782 | 1.8621 | 1674 | 0.7965 | | 0.6413 | 1.8632 | 1675 | 0.7968 | | 0.1142 | 1.8643 | 1676 | 0.7973 | | 1.2024 | 1.8654 | 1677 | 0.7980 | | 1.3174 | 1.8665 | 1678 | 0.7988 | | 1.6067 | 1.8676 | 1679 | 0.7998 | | 0.1293 | 1.8687 | 1680 | 0.8004 | | 1.5682 | 1.8699 | 1681 | 0.8010 | | 1.5205 | 1.8710 | 1682 | 0.8016 | | 0.4489 | 1.8721 | 1683 | 0.8022 | | 0.8175 | 1.8732 | 1684 | 0.8025 | | 1.4963 | 1.8743 | 1685 | 0.8026 | | 1.169 | 1.8754 | 1686 | 0.8031 | | 1.0243 | 1.8765 | 1687 | 0.8035 | | 1.1168 | 1.8776 | 1688 | 0.8043 | | 1.0448 | 1.8788 | 1689 | 0.8050 | | 0.6105 | 1.8799 | 1690 | 0.8060 | | 1.4621 | 1.8810 | 1691 | 0.8071 | | 1.7288 | 1.8821 | 1692 | 0.8083 | | 1.8469 | 1.8832 | 1693 | 0.8095 | | 1.1544 | 1.8843 | 1694 | 0.8103 | | 3.2608 | 1.8854 | 1695 | 0.8101 | | 1.0715 | 1.8865 | 1696 | 0.8107 | | 0.5967 | 1.8877 | 1697 | 0.8116 | | 1.7768 | 1.8888 | 1698 | 0.8122 | | 1.9667 | 1.8899 | 1699 | 0.8129 | | 0.5901 | 1.8910 | 1700 | 0.8133 | | 0.3236 | 1.8921 | 1701 | 0.8136 | | 0.1721 | 1.8932 | 1702 | 0.8142 | | 0.0519 | 1.8943 | 1703 | 0.8146 | | 0.5742 | 1.8954 | 1704 | 0.8155 | | 1.0073 | 1.8966 | 1705 | 0.8157 | | 0.158 | 1.8977 | 1706 | 0.8160 | | 0.462 | 1.8988 | 1707 | 0.8168 | | 0.1776 | 1.8999 | 1708 | 0.8179 | | 0.1624 | 1.9010 | 1709 | 0.8188 | | 1.1878 | 1.9021 | 1710 | 0.8203 | | 1.6591 | 1.9032 | 1711 | 0.8211 | | 0.4531 | 1.9043 | 1712 | 0.8223 | | 0.1843 | 1.9055 | 1713 | 0.8233 | | 0.1531 | 1.9066 | 1714 | 0.8242 | | 0.1279 | 1.9077 | 1715 | 0.8252 | | 0.1467 | 1.9088 | 1716 | 0.8264 | | 0.7342 | 1.9099 | 1717 | 0.8266 | | 0.0906 | 1.9110 | 1718 | 0.8265 | | 1.0331 | 1.9121 | 1719 | 0.8254 | | 0.1468 | 1.9132 | 1720 | 0.8244 | | 0.6936 | 1.9143 | 1721 | 0.8225 | | 1.2149 | 1.9155 | 1722 | 0.8206 | | 1.1369 | 1.9166 | 1723 | 0.8181 | | 0.2874 | 1.9177 | 1724 | 0.8170 | 
| 1.6901 | 1.9188 | 1725 | 0.8160 | | 0.4745 | 1.9199 | 1726 | 0.8152 | | 1.7521 | 1.9210 | 1727 | 0.8146 | | 0.2743 | 1.9221 | 1728 | 0.8153 | | 0.567 | 1.9232 | 1729 | 0.8158 | | 1.8166 | 1.9244 | 1730 | 0.8167 | | 1.1476 | 1.9255 | 1731 | 0.8175 | | 1.6616 | 1.9266 | 1732 | 0.8177 | | 0.7163 | 1.9277 | 1733 | 0.8175 | | 0.2111 | 1.9288 | 1734 | 0.8184 | | 1.3288 | 1.9299 | 1735 | 0.8192 | | 1.1528 | 1.9310 | 1736 | 0.8197 | | 1.5627 | 1.9321 | 1737 | 0.8206 | | 1.4838 | 1.9333 | 1738 | 0.8212 | | 1.1075 | 1.9344 | 1739 | 0.8223 | | 1.0118 | 1.9355 | 1740 | 0.8238 | | 1.8276 | 1.9366 | 1741 | 0.8246 | | 1.1128 | 1.9377 | 1742 | 0.8254 | | 1.5955 | 1.9388 | 1743 | 0.8254 | | 1.5487 | 1.9399 | 1744 | 0.8255 | | 1.5988 | 1.9410 | 1745 | 0.8258 | | 1.7693 | 1.9422 | 1746 | 0.8264 | | 1.8608 | 1.9433 | 1747 | 0.8265 | | 1.6035 | 1.9444 | 1748 | 0.8267 | | 1.0836 | 1.9455 | 1749 | 0.8266 | | 0.055 | 1.9466 | 1750 | 0.8263 | | 0.1085 | 1.9477 | 1751 | 0.8262 | | 0.2102 | 1.9488 | 1752 | 0.8259 | | 0.1962 | 1.9499 | 1753 | 0.8254 | | 0.2213 | 1.9511 | 1754 | 0.8252 | | 0.1103 | 1.9522 | 1755 | 0.8246 | | 0.8073 | 1.9533 | 1756 | 0.8235 | | 0.1643 | 1.9544 | 1757 | 0.8228 | | 0.1075 | 1.9555 | 1758 | 0.8225 | | 1.6566 | 1.9566 | 1759 | 0.8222 | | 0.1082 | 1.9577 | 1760 | 0.8218 | | 0.4689 | 1.9588 | 1761 | 0.8209 | | 1.0157 | 1.9600 | 1762 | 0.8196 | | 1.0878 | 1.9611 | 1763 | 0.8178 | | 0.1395 | 1.9622 | 1764 | 0.8163 | | 0.3113 | 1.9633 | 1765 | 0.8156 | | 0.1451 | 1.9644 | 1766 | 0.8154 | | 0.2029 | 1.9655 | 1767 | 0.8152 | | 0.2595 | 1.9666 | 1768 | 0.8152 | | 0.589 | 1.9677 | 1769 | 0.8150 | | 0.1361 | 1.9689 | 1770 | 0.8145 | | 0.4749 | 1.9700 | 1771 | 0.8146 | | 0.0851 | 1.9711 | 1772 | 0.8142 | | 0.5204 | 1.9722 | 1773 | 0.8141 | | 0.1485 | 1.9733 | 1774 | 0.8140 | | 0.111 | 1.9744 | 1775 | 0.8145 | | 0.8683 | 1.9755 | 1776 | 0.8144 | | 1.2599 | 1.9766 | 1777 | 0.8141 | | 0.961 | 1.9778 | 1778 | 0.8139 | | 1.4544 | 1.9789 | 1779 | 0.8138 | | 1.1352 | 1.9800 | 1780 
| 0.8139 | | 1.1649 | 1.9811 | 1781 | 0.8142 | | 1.8951 | 1.9822 | 1782 | 0.8144 | | 1.4142 | 1.9833 | 1783 | 0.8146 | | 1.6664 | 1.9844 | 1784 | 0.8146 | | 2.2541 | 1.9855 | 1785 | 0.8141 | | 0.985 | 1.9867 | 1786 | 0.8135 | | 1.3531 | 1.9878 | 1787 | 0.8131 | | 1.0716 | 1.9889 | 1788 | 0.8130 | | 2.4513 | 1.9900 | 1789 | 0.8128 | | 1.7843 | 1.9911 | 1790 | 0.8122 | | 1.4967 | 1.9922 | 1791 | 0.8119 | | 1.4837 | 1.9933 | 1792 | 0.8117 | | 1.76 | 1.9944 | 1793 | 0.8112 | | 1.3446 | 1.9956 | 1794 | 0.8109 | | 1.6875 | 1.9967 | 1795 | 0.8103 | | 1.2917 | 1.9978 | 1796 | 0.8098 | | 1.4644 | 1.9989 | 1797 | 0.8097 | | 1.4759 | 2.0 | 1798 | 0.8095 | | 0.0546 | 2.0011 | 1799 | 0.8092 | | 0.1746 | 2.0022 | 1800 | 0.8092 | | 0.953 | 2.0033 | 1801 | 0.8094 | | 0.0529 | 2.0044 | 1802 | 0.8092 | | 0.1944 | 2.0056 | 1803 | 0.8091 | | 0.1952 | 2.0067 | 1804 | 0.8092 | | 0.0591 | 2.0078 | 1805 | 0.8091 | | 0.09 | 2.0089 | 1806 | 0.8094 | | 1.2722 | 2.0100 | 1807 | 0.8098 | | 0.1269 | 2.0111 | 1808 | 0.8106 | | 0.1834 | 2.0122 | 1809 | 0.8116 | | 0.5972 | 2.0133 | 1810 | 0.8119 | | 0.2789 | 2.0145 | 1811 | 0.8121 | | 0.0883 | 2.0156 | 1812 | 0.8127 | | 0.1855 | 2.0167 | 1813 | 0.8129 | | 0.146 | 2.0178 | 1814 | 0.8129 | | 0.9184 | 2.0189 | 1815 | 0.8128 | | 0.5626 | 2.0200 | 1816 | 0.8129 | | 0.0999 | 2.0211 | 1817 | 0.8133 | | 0.6176 | 2.0222 | 1818 | 0.8140 | | 1.2818 | 2.0234 | 1819 | 0.8147 | | 0.7819 | 2.0245 | 1820 | 0.8153 | | 0.5774 | 2.0256 | 1821 | 0.8150 | | 0.8818 | 2.0267 | 1822 | 0.8153 | | 1.747 | 2.0278 | 1823 | 0.8155 | | 0.0793 | 2.0289 | 1824 | 0.8163 | | 0.1307 | 2.0300 | 1825 | 0.8171 | | 1.9592 | 2.0311 | 1826 | 0.8179 | | 1.023 | 2.0323 | 1827 | 0.8189 | | 0.4283 | 2.0334 | 1828 | 0.8195 | | 0.9415 | 2.0345 | 1829 | 0.8201 | | 0.2385 | 2.0356 | 1830 | 0.8207 | | 0.5877 | 2.0367 | 1831 | 0.8213 | | 1.2666 | 2.0378 | 1832 | 0.8224 | | 0.7215 | 2.0389 | 1833 | 0.8232 | | 1.7314 | 2.0400 | 1834 | 0.8240 | | 1.7125 | 2.0412 | 1835 | 0.8248 | | 1.0443 | 2.0423 | 
1836 | 0.8257 | | 1.6059 | 2.0434 | 1837 | 0.8269 | | 0.3492 | 2.0445 | 1838 | 0.8279 | | 1.702 | 2.0456 | 1839 | 0.8286 | | 2.8372 | 2.0467 | 1840 | 0.8288 | | 1.1872 | 2.0478 | 1841 | 0.8291 | | 1.9129 | 2.0489 | 1842 | 0.8292 | | 1.132 | 2.0501 | 1843 | 0.8294 | | 1.4119 | 2.0512 | 1844 | 0.8296 | | 1.5721 | 2.0523 | 1845 | 0.8300 | | 1.0254 | 2.0534 | 1846 | 0.8304 | | 1.8107 | 2.0545 | 1847 | 0.8305 | | 0.9845 | 2.0556 | 1848 | 0.8306 | | 0.3972 | 2.0567 | 1849 | 0.8307 | | 0.0791 | 2.0578 | 1850 | 0.8316 | | 0.0758 | 2.0590 | 1851 | 0.8321 | | 0.0459 | 2.0601 | 1852 | 0.8323 | | 0.1972 | 2.0612 | 1853 | 0.8323 | | 0.0752 | 2.0623 | 1854 | 0.8333 | | 0.1841 | 2.0634 | 1855 | 0.8335 | | 0.5516 | 2.0645 | 1856 | 0.8323 | | 0.2298 | 2.0656 | 1857 | 0.8308 | | 0.8784 | 2.0667 | 1858 | 0.8284 | | 1.8443 | 2.0679 | 1859 | 0.8256 | | 0.0832 | 2.0690 | 1860 | 0.8240 | | 0.143 | 2.0701 | 1861 | 0.8226 | | 0.0451 | 2.0712 | 1862 | 0.8220 | | 0.2045 | 2.0723 | 1863 | 0.8211 | | 0.1858 | 2.0734 | 1864 | 0.8195 | | 0.2827 | 2.0745 | 1865 | 0.8189 | | 0.1384 | 2.0756 | 1866 | 0.8183 | | 0.1762 | 2.0768 | 1867 | 0.8187 | | 0.2589 | 2.0779 | 1868 | 0.8188 | | 0.1649 | 2.0790 | 1869 | 0.8187 | | 0.0447 | 2.0801 | 1870 | 0.8195 | | 0.0695 | 2.0812 | 1871 | 0.8210 | | 1.5392 | 2.0823 | 1872 | 0.8213 | | 0.5572 | 2.0834 | 1873 | 0.8213 | | 0.606 | 2.0845 | 1874 | 0.8209 | | 0.8148 | 2.0857 | 1875 | 0.8204 | | 1.4537 | 2.0868 | 1876 | 0.8193 | | 0.0576 | 2.0879 | 1877 | 0.8196 | | 0.9662 | 2.0890 | 1878 | 0.8191 | | 1.3569 | 2.0901 | 1879 | 0.8189 | | 0.5703 | 2.0912 | 1880 | 0.8188 | | 0.2571 | 2.0923 | 1881 | 0.8182 | | 1.8157 | 2.0934 | 1882 | 0.8177 | | 1.6743 | 2.0945 | 1883 | 0.8174 | | 0.1911 | 2.0957 | 1884 | 0.8166 | | 1.6133 | 2.0968 | 1885 | 0.8158 | | 1.0532 | 2.0979 | 1886 | 0.8150 | | 1.7464 | 2.0990 | 1887 | 0.8144 | | 1.2309 | 2.1001 | 1888 | 0.8140 | | 2.0536 | 2.1012 | 1889 | 0.8136 | | 1.339 | 2.1023 | 1890 | 0.8129 | | 0.9884 | 2.1034 | 1891 | 0.8121 | | 1.1049 
| 2.1046 | 1892 | 0.8118 | | 2.1256 | 2.1057 | 1893 | 0.8109 | | 0.8976 | 2.1068 | 1894 | 0.8105 | | 2.0437 | 2.1079 | 1895 | 0.8099 | | 1.3358 | 2.1090 | 1896 | 0.8095 | | 1.1334 | 2.1101 | 1897 | 0.8095 | | 1.19 | 2.1112 | 1898 | 0.8093 | | 0.3832 | 2.1123 | 1899 | 0.8089 | | 0.042 | 2.1135 | 1900 | 0.8086 | | 0.2511 | 2.1146 | 1901 | 0.8085 | | 0.1439 | 2.1157 | 1902 | 0.8082 | | 0.2863 | 2.1168 | 1903 | 0.8083 | | 0.1875 | 2.1179 | 1904 | 0.8080 | | 0.104 | 2.1190 | 1905 | 0.8081 | | 1.4708 | 2.1201 | 1906 | 0.8082 | | 0.0983 | 2.1212 | 1907 | 0.8086 | | 0.2098 | 2.1224 | 1908 | 0.8085 | | 1.1265 | 2.1235 | 1909 | 0.8080 | | 0.0726 | 2.1246 | 1910 | 0.8071 | | 1.1949 | 2.1257 | 1911 | 0.8062 | | 0.1737 | 2.1268 | 1912 | 0.8053 | | 0.1556 | 2.1279 | 1913 | 0.8047 | | 0.1533 | 2.1290 | 1914 | 0.8036 | | 0.8561 | 2.1301 | 1915 | 0.8022 | | 1.5786 | 2.1313 | 1916 | 0.8007 | | 0.2714 | 2.1324 | 1917 | 0.7994 | | 0.116 | 2.1335 | 1918 | 0.7978 | | 0.9109 | 2.1346 | 1919 | 0.7960 | | 0.108 | 2.1357 | 1920 | 0.7942 | | 0.4198 | 2.1368 | 1921 | 0.7928 | | 1.1983 | 2.1379 | 1922 | 0.7919 | | 0.9793 | 2.1390 | 1923 | 0.7910 | | 1.0001 | 2.1402 | 1924 | 0.7902 | | 0.6234 | 2.1413 | 1925 | 0.7895 | | 0.1918 | 2.1424 | 1926 | 0.7891 | | 0.3359 | 2.1435 | 1927 | 0.7886 | | 0.7589 | 2.1446 | 1928 | 0.7880 | | 1.1253 | 2.1457 | 1929 | 0.7875 | | 0.7433 | 2.1468 | 1930 | 0.7869 | | 1.0589 | 2.1479 | 1931 | 0.7864 | | 1.1689 | 2.1491 | 1932 | 0.7861 | | 0.825 | 2.1502 | 1933 | 0.7859 | | 1.8352 | 2.1513 | 1934 | 0.7856 | | 1.2677 | 2.1524 | 1935 | 0.7859 | | 1.5465 | 2.1535 | 1936 | 0.7861 | | 1.8866 | 2.1546 | 1937 | 0.7862 | | 1.9481 | 2.1557 | 1938 | 0.7861 | | 1.0132 | 2.1568 | 1939 | 0.7864 | | 1.4202 | 2.1580 | 1940 | 0.7867 | | 0.4956 | 2.1591 | 1941 | 0.7871 | | 1.5 | 2.1602 | 1942 | 0.7877 | | 1.4022 | 2.1613 | 1943 | 0.7882 | | 1.4192 | 2.1624 | 1944 | 0.7886 | | 1.7985 | 2.1635 | 1945 | 0.7893 | | 1.9861 | 2.1646 | 1946 | 0.7896 | | 1.8836 | 2.1657 | 1947 | 0.7899 | | 
1.3428 | 2.1669 | 1948 | 0.7901 | | 0.3634 | 2.1680 | 1949 | 0.7903 | | 0.2863 | 2.1691 | 1950 | 0.7904 | | 0.2727 | 2.1702 | 1951 | 0.7906 | | 0.1933 | 2.1713 | 1952 | 0.7906 | | 0.1228 | 2.1724 | 1953 | 0.7909 | | 0.135 | 2.1735 | 1954 | 0.7909 | | 0.0977 | 2.1746 | 1955 | 0.7910 | | 0.1221 | 2.1758 | 1956 | 0.7909 | | 0.0345 | 2.1769 | 1957 | 0.7908 | | 0.1937 | 2.1780 | 1958 | 0.7910 | | 0.2213 | 2.1791 | 1959 | 0.7915 | | 0.1322 | 2.1802 | 1960 | 0.7919 | | 0.2655 | 2.1813 | 1961 | 0.7928 | | 0.1402 | 2.1824 | 1962 | 0.7936 | | 0.6266 | 2.1835 | 1963 | 0.7941 | | 0.0893 | 2.1846 | 1964 | 0.7945 | | 1.2012 | 2.1858 | 1965 | 0.7953 | | 1.0283 | 2.1869 | 1966 | 0.7959 | | 0.1171 | 2.1880 | 1967 | 0.7967 | | 0.3325 | 2.1891 | 1968 | 0.7979 | | 0.037 | 2.1902 | 1969 | 0.7994 | | 0.1933 | 2.1913 | 1970 | 0.8005 | | 1.7513 | 2.1924 | 1971 | 0.8009 | | 0.1514 | 2.1935 | 1972 | 0.8021 | | 0.2484 | 2.1947 | 1973 | 0.8027 | | 0.0267 | 2.1958 | 1974 | 0.8038 | | 0.6585 | 2.1969 | 1975 | 0.8047 | | 0.1021 | 2.1980 | 1976 | 0.8060 | | 0.6819 | 2.1991 | 1977 | 0.8073 | | 0.2105 | 2.2002 | 1978 | 0.8082 | | 1.1334 | 2.2013 | 1979 | 0.8091 | | 1.2161 | 2.2024 | 1980 | 0.8099 | | 1.0852 | 2.2036 | 1981 | 0.8103 | | 0.6283 | 2.2047 | 1982 | 0.8104 | | 1.5291 | 2.2058 | 1983 | 0.8104 | | 1.5804 | 2.2069 | 1984 | 0.8107 | | 0.7778 | 2.2080 | 1985 | 0.8100 | | 1.7425 | 2.2091 | 1986 | 0.8096 | | 1.094 | 2.2102 | 1987 | 0.8084 | | 1.1517 | 2.2113 | 1988 | 0.8076 | | 1.6564 | 2.2125 | 1989 | 0.8069 | | 0.2862 | 2.2136 | 1990 | 0.8067 | | 1.2113 | 2.2147 | 1991 | 0.8060 | | 1.1245 | 2.2158 | 1992 | 0.8056 | | 1.0473 | 2.2169 | 1993 | 0.8053 | | 1.1306 | 2.2180 | 1994 | 0.8051 | | 1.7533 | 2.2191 | 1995 | 0.8049 | | 1.7528 | 2.2202 | 1996 | 0.8050 | | 1.041 | 2.2214 | 1997 | 0.8053 | | 1.7567 | 2.2225 | 1998 | 0.8055 | | 0.6952 | 2.2236 | 1999 | 0.8056 | | 0.217 | 2.2247 | 2000 | 0.8054 | | 0.5035 | 2.2258 | 2001 | 0.8053 | | 0.2137 | 2.2269 | 2002 | 0.8049 | | 0.0366 | 2.2280 | 2003 | 
0.8047 | | 0.0387 | 2.2291 | 2004 | 0.8051 | | 1.2857 | 2.2303 | 2005 | 0.8056 | | 1.1044 | 2.2314 | 2006 | 0.8058 | | 0.221 | 2.2325 | 2007 | 0.8058 | | 0.159 | 2.2336 | 2008 | 0.8068 | | 0.3445 | 2.2347 | 2009 | 0.8073 | | 1.4003 | 2.2358 | 2010 | 0.8074 | | 0.0509 | 2.2369 | 2011 | 0.8078 | | 2.2133 | 2.2380 | 2012 | 0.8078 | | 0.1404 | 2.2392 | 2013 | 0.8072 | | 0.193 | 2.2403 | 2014 | 0.8066 | | 2.0478 | 2.2414 | 2015 | 0.8055 | | 0.1626 | 2.2425 | 2016 | 0.8050 | | 1.6571 | 2.2436 | 2017 | 0.8041 | | 0.1065 | 2.2447 | 2018 | 0.8032 | | 0.1885 | 2.2458 | 2019 | 0.8024 | | 0.5714 | 2.2469 | 2020 | 0.8015 | | 0.144 | 2.2481 | 2021 | 0.8009 | | 0.1963 | 2.2492 | 2022 | 0.8012 | | 0.1324 | 2.2503 | 2023 | 0.8011 | | 1.0246 | 2.2514 | 2024 | 0.8014 | | 1.4993 | 2.2525 | 2025 | 0.8016 | | 0.4914 | 2.2536 | 2026 | 0.8018 | | 0.6109 | 2.2547 | 2027 | 0.8017 | | 1.2895 | 2.2558 | 2028 | 0.8013 | | 1.709 | 2.2570 | 2029 | 0.8008 | | 0.1208 | 2.2581 | 2030 | 0.8007 | | 1.9025 | 2.2592 | 2031 | 0.8005 | | 1.4313 | 2.2603 | 2032 | 0.8005 | | 1.2721 | 2.2614 | 2033 | 0.8010 | | 1.3929 | 2.2625 | 2034 | 0.8017 | | 0.259 | 2.2636 | 2035 | 0.8026 | | 1.3017 | 2.2647 | 2036 | 0.8035 | | 1.9006 | 2.2659 | 2037 | 0.8040 | | 1.6986 | 2.2670 | 2038 | 0.8049 | | 1.44 | 2.2681 | 2039 | 0.8056 | | 1.5887 | 2.2692 | 2040 | 0.8060 | | 2.051 | 2.2703 | 2041 | 0.8061 | | 1.8645 | 2.2714 | 2042 | 0.8062 | | 1.1789 | 2.2725 | 2043 | 0.8064 | | 2.3965 | 2.2736 | 2044 | 0.8067 | | 1.7286 | 2.2747 | 2045 | 0.8070 | | 1.9504 | 2.2759 | 2046 | 0.8071 | | 1.0462 | 2.2770 | 2047 | 0.8070 | | 1.3281 | 2.2781 | 2048 | 0.8072 | | 0.3158 | 2.2792 | 2049 | 0.8071 | | 0.0743 | 2.2803 | 2050 | 0.8072 | | 0.1278 | 2.2814 | 2051 | 0.8074 | | 0.1686 | 2.2825 | 2052 | 0.8075 | | 0.1274 | 2.2836 | 2053 | 0.8077 | | 0.089 | 2.2848 | 2054 | 0.8079 | | 0.8628 | 2.2859 | 2055 | 0.8082 | | 0.7943 | 2.2870 | 2056 | 0.8077 | | 0.1301 | 2.2881 | 2057 | 0.8071 | | 0.2256 | 2.2892 | 2058 | 0.8065 | | 0.1182 | 2.2903 | 
2059 | 0.8059 | | 0.115 | 2.2914 | 2060 | 0.8055 | | 0.8352 | 2.2925 | 2061 | 0.8050 | | 0.1145 | 2.2937 | 2062 | 0.8043 | | 1.0578 | 2.2948 | 2063 | 0.8032 | | 0.194 | 2.2959 | 2064 | 0.8024 | | 0.061 | 2.2970 | 2065 | 0.8015 | | 0.9172 | 2.2981 | 2066 | 0.8008 | | 0.0979 | 2.2992 | 2067 | 0.8000 | | 0.0518 | 2.3003 | 2068 | 0.7994 | | 0.1185 | 2.3014 | 2069 | 0.7989 | | 0.1292 | 2.3026 | 2070 | 0.7994 | | 0.1247 | 2.3037 | 2071 | 0.8003 | | 0.6655 | 2.3048 | 2072 | 0.8011 | | 1.1948 | 2.3059 | 2073 | 0.8016 | | 1.1384 | 2.3070 | 2074 | 0.8020 | | 1.068 | 2.3081 | 2075 | 0.8019 | | 0.1339 | 2.3092 | 2076 | 0.8016 | | 1.055 | 2.3103 | 2077 | 0.8011 | | 0.0395 | 2.3115 | 2078 | 0.8004 | | 1.0997 | 2.3126 | 2079 | 0.8000 | | 1.2984 | 2.3137 | 2080 | 0.7995 | | 1.1363 | 2.3148 | 2081 | 0.7991 | | 0.1402 | 2.3159 | 2082 | 0.7989 | | 0.0136 | 2.3170 | 2083 | 0.7987 | | 0.1221 | 2.3181 | 2084 | 0.7991 | | 1.0947 | 2.3192 | 2085 | 0.7996 | | 1.049 | 2.3204 | 2086 | 0.8000 | | 1.1115 | 2.3215 | 2087 | 0.8004 | | 1.1633 | 2.3226 | 2088 | 0.8010 | | 1.3757 | 2.3237 | 2089 | 0.8015 | | 1.6552 | 2.3248 | 2090 | 0.8019 | | 1.9541 | 2.3259 | 2091 | 0.8023 | | 1.7233 | 2.3270 | 2092 | 0.8027 | | 1.5895 | 2.3281 | 2093 | 0.8027 | | 2.0405 | 2.3293 | 2094 | 0.8028 | | 1.2795 | 2.3304 | 2095 | 0.8029 | | 1.9669 | 2.3315 | 2096 | 0.8030 | | 1.8133 | 2.3326 | 2097 | 0.8029 | | 1.3531 | 2.3337 | 2098 | 0.8030 | | 0.0515 | 2.3348 | 2099 | 0.8031 | | 0.3664 | 2.3359 | 2100 | 0.8032 | | 0.146 | 2.3370 | 2101 | 0.8031 | | 0.1426 | 2.3382 | 2102 | 0.8030 | | 0.1639 | 2.3393 | 2103 | 0.8028 | | 0.1647 | 2.3404 | 2104 | 0.8025 | | 0.1306 | 2.3415 | 2105 | 0.8025 | | 0.1236 | 2.3426 | 2106 | 0.8024 | | 0.1211 | 2.3437 | 2107 | 0.8026 | | 0.1034 | 2.3448 | 2108 | 0.8028 | | 0.8095 | 2.3459 | 2109 | 0.8033 | | 1.5646 | 2.3471 | 2110 | 0.8034 | | 0.0385 | 2.3482 | 2111 | 0.8036 | | 0.1483 | 2.3493 | 2112 | 0.8039 | | 0.1971 | 2.3504 | 2113 | 0.8041 | | 1.6815 | 2.3515 | 2114 | 0.8042 | | 0.076 | 
2.3526 | 2115 | 0.8040 | | 0.0779 | 2.3537 | 2116 | 0.8040 | | 1.9029 | 2.3548 | 2117 | 0.8039 | | 0.1741 | 2.3560 | 2118 | 0.8038 | | 0.1991 | 2.3571 | 2119 | 0.8047 | | 0.8906 | 2.3582 | 2120 | 0.8049 | | 0.2563 | 2.3593 | 2121 | 0.8049 | | 0.7227 | 2.3604 | 2122 | 0.8046 | | 0.953 | 2.3615 | 2123 | 0.8037 | | 1.2686 | 2.3626 | 2124 | 0.8032 | | 0.1691 | 2.3637 | 2125 | 0.8033 | | 0.8522 | 2.3648 | 2126 | 0.8034 | | 0.0869 | 2.3660 | 2127 | 0.8037 | | 1.0961 | 2.3671 | 2128 | 0.8040 | | 1.7717 | 2.3682 | 2129 | 0.8044 | | 0.2091 | 2.3693 | 2130 | 0.8043 | | 1.0702 | 2.3704 | 2131 | 0.8044 | | 1.3843 | 2.3715 | 2132 | 0.8044 | | 1.0898 | 2.3726 | 2133 | 0.8044 | | 0.8934 | 2.3737 | 2134 | 0.8040 | | 1.1458 | 2.3749 | 2135 | 0.8033 | | 1.4524 | 2.3760 | 2136 | 0.8028 | | 1.7166 | 2.3771 | 2137 | 0.8022 | | 1.0326 | 2.3782 | 2138 | 0.8017 | | 0.8289 | 2.3793 | 2139 | 0.8012 | | 2.001 | 2.3804 | 2140 | 0.8006 | | 1.1511 | 2.3815 | 2141 | 0.7996 | | 1.0205 | 2.3826 | 2142 | 0.7987 | | 1.2943 | 2.3838 | 2143 | 0.7982 | | 1.6228 | 2.3849 | 2144 | 0.7978 | | 1.8443 | 2.3860 | 2145 | 0.7971 | | 3.4554 | 2.3871 | 2146 | 0.7960 | | 0.1935 | 2.3882 | 2147 | 0.7954 | | 1.4434 | 2.3893 | 2148 | 0.7954 | | 0.3715 | 2.3904 | 2149 | 0.7952 | | 0.4915 | 2.3915 | 2150 | 0.7952 | | 0.2191 | 2.3927 | 2151 | 0.7951 | | 0.1556 | 2.3938 | 2152 | 0.7947 | | 0.0381 | 2.3949 | 2153 | 0.7943 | | 0.2709 | 2.3960 | 2154 | 0.7939 | | 0.1002 | 2.3971 | 2155 | 0.7938 | | 0.1077 | 2.3982 | 2156 | 0.7940 | | 0.0948 | 2.3993 | 2157 | 0.7939 | | 0.1406 | 2.4004 | 2158 | 0.7936 | | 0.1135 | 2.4016 | 2159 | 0.7930 | | 0.1031 | 2.4027 | 2160 | 0.7928 | | 1.0916 | 2.4038 | 2161 | 0.7925 | | 0.2215 | 2.4049 | 2162 | 0.7924 | | 0.2577 | 2.4060 | 2163 | 0.7922 | | 0.0883 | 2.4071 | 2164 | 0.7921 | | 0.1997 | 2.4082 | 2165 | 0.7922 | | 0.7317 | 2.4093 | 2166 | 0.7922 | | 0.235 | 2.4105 | 2167 | 0.7919 | | 0.1766 | 2.4116 | 2168 | 0.7915 | | 0.1493 | 2.4127 | 2169 | 0.7917 | | 1.1924 | 2.4138 | 2170 | 0.7915 
| | 0.7985 | 2.4149 | 2171 | 0.7911 | | 0.2706 | 2.4160 | 2172 | 0.7908 | | 0.4076 | 2.4171 | 2173 | 0.7902 | | 0.1303 | 2.4182 | 2174 | 0.7897 | | 0.0788 | 2.4194 | 2175 | 0.7895 | | 0.4509 | 2.4205 | 2176 | 0.7895 | | 0.0827 | 2.4216 | 2177 | 0.7895 | | 1.8695 | 2.4227 | 2178 | 0.7895 | | 1.7029 | 2.4238 | 2179 | 0.7895 | | 0.1614 | 2.4249 | 2180 | 0.7896 | | 0.8564 | 2.4260 | 2181 | 0.7898 | | 0.3188 | 2.4271 | 2182 | 0.7903 | | 0.1542 | 2.4283 | 2183 | 0.7909 | | 1.3455 | 2.4294 | 2184 | 0.7913 | | 2.2018 | 2.4305 | 2185 | 0.7917 | | 1.5331 | 2.4316 | 2186 | 0.7920 | | 1.054 | 2.4327 | 2187 | 0.7923 | | 1.1311 | 2.4338 | 2188 | 0.7926 | | 1.1828 | 2.4349 | 2189 | 0.7932 | | 1.056 | 2.4360 | 2190 | 0.7931 | | 1.7048 | 2.4372 | 2191 | 0.7934 | | 2.1982 | 2.4383 | 2192 | 0.7936 | | 1.4793 | 2.4394 | 2193 | 0.7938 | | 1.6628 | 2.4405 | 2194 | 0.7941 | | 0.273 | 2.4416 | 2195 | 0.7947 | | 1.7106 | 2.4427 | 2196 | 0.7954 | | 1.7568 | 2.4438 | 2197 | 0.7961 | | 1.8051 | 2.4449 | 2198 | 0.7966 | | 0.3725 | 2.4461 | 2199 | 0.7971 | | 0.1714 | 2.4472 | 2200 | 0.7972 | | 0.1836 | 2.4483 | 2201 | 0.7975 | | 0.0633 | 2.4494 | 2202 | 0.7974 | | 0.1911 | 2.4505 | 2203 | 0.7971 | | 0.2365 | 2.4516 | 2204 | 0.7970 | | 0.1243 | 2.4527 | 2205 | 0.7964 | | 0.7851 | 2.4538 | 2206 | 0.7961 | | 0.1531 | 2.4549 | 2207 | 0.7959 | | 0.5499 | 2.4561 | 2208 | 0.7961 | | 0.8909 | 2.4572 | 2209 | 0.7959 | | 0.0723 | 2.4583 | 2210 | 0.7957 | | 0.078 | 2.4594 | 2211 | 0.7953 | | 0.0906 | 2.4605 | 2212 | 0.7948 | | 0.1474 | 2.4616 | 2213 | 0.7943 | | 0.1085 | 2.4627 | 2214 | 0.7942 | | 0.7282 | 2.4638 | 2215 | 0.7939 | | 1.8629 | 2.4650 | 2216 | 0.7938 | | 0.0529 | 2.4661 | 2217 | 0.7937 | | 0.1264 | 2.4672 | 2218 | 0.7937 | | 0.971 | 2.4683 | 2219 | 0.7934 | | 0.2309 | 2.4694 | 2220 | 0.7933 | | 0.5405 | 2.4705 | 2221 | 0.7929 | | 0.1155 | 2.4716 | 2222 | 0.7926 | | 0.1133 | 2.4727 | 2223 | 0.7925 | | 0.1298 | 2.4739 | 2224 | 0.7926 | | 0.3373 | 2.4750 | 2225 | 0.7926 | | 0.5555 | 2.4761 | 
2226 | 0.7925 | | 0.0452 | 2.4772 | 2227 | 0.7925 | | 0.1698 | 2.4783 | 2228 | 0.7925 | | 0.2415 | 2.4794 | 2229 | 0.7925 | | 0.8485 | 2.4805 | 2230 | 0.7922 | | 0.3121 | 2.4816 | 2231 | 0.7918 | | 1.8965 | 2.4828 | 2232 | 0.7915 | | 1.204 | 2.4839 | 2233 | 0.7912 | | 0.9506 | 2.4850 | 2234 | 0.7908 | | 1.1824 | 2.4861 | 2235 | 0.7904 | | 1.3457 | 2.4872 | 2236 | 0.7899 | | 0.741 | 2.4883 | 2237 | 0.7896 | | 0.7791 | 2.4894 | 2238 | 0.7891 | | 1.942 | 2.4905 | 2239 | 0.7886 | | 0.9225 | 2.4917 | 2240 | 0.7880 | | 1.5893 | 2.4928 | 2241 | 0.7876 | | 0.633 | 2.4939 | 2242 | 0.7871 | | 1.3893 | 2.4950 | 2243 | 0.7866 | | 1.1808 | 2.4961 | 2244 | 0.7863 | | 1.8594 | 2.4972 | 2245 | 0.7861 | | 1.637 | 2.4983 | 2246 | 0.7858 | | 1.2926 | 2.4994 | 2247 | 0.7858 | | 1.6588 | 2.5006 | 2248 | 0.7856 | | 0.0512 | 2.5017 | 2249 | 0.7855 | | 0.5213 | 2.5028 | 2250 | 0.7854 | | 0.4796 | 2.5039 | 2251 | 0.7852 | | 0.2537 | 2.5050 | 2252 | 0.7850 | | 0.1989 | 2.5061 | 2253 | 0.7847 | | 0.5736 | 2.5072 | 2254 | 0.7845 | | 0.0354 | 2.5083 | 2255 | 0.7843 | | 0.5865 | 2.5095 | 2256 | 0.7840 | | 0.0681 | 2.5106 | 2257 | 0.7840 | | 0.7895 | 2.5117 | 2258 | 0.7839 | | 0.5514 | 2.5128 | 2259 | 0.7841 | | 1.0829 | 2.5139 | 2260 | 0.7842 | | 1.5276 | 2.5150 | 2261 | 0.7842 | | 0.2225 | 2.5161 | 2262 | 0.7842 | | 1.5433 | 2.5172 | 2263 | 0.7842 | | 0.1753 | 2.5184 | 2264 | 0.7841 | | 0.5329 | 2.5195 | 2265 | 0.7841 | | 0.1841 | 2.5206 | 2266 | 0.7839 | | 0.9784 | 2.5217 | 2267 | 0.7838 | | 1.032 | 2.5228 | 2268 | 0.7835 | | 0.119 | 2.5239 | 2269 | 0.7836 | | 0.036 | 2.5250 | 2270 | 0.7835 | | 0.4803 | 2.5261 | 2271 | 0.7838 | | 0.2873 | 2.5273 | 2272 | 0.7839 | | 0.4196 | 2.5284 | 2273 | 0.7843 | | 0.034 | 2.5295 | 2274 | 0.7846 | | 0.3129 | 2.5306 | 2275 | 0.7852 | | 0.1275 | 2.5317 | 2276 | 0.7859 | | 0.1727 | 2.5328 | 2277 | 0.7865 | | 1.2871 | 2.5339 | 2278 | 0.7868 | | 0.3185 | 2.5350 | 2279 | 0.7873 | | 0.5941 | 2.5362 | 2280 | 0.7878 | | 0.5896 | 2.5373 | 2281 | 0.7879 | | 0.8328 | 
2.5384 | 2282 | 0.7883 | | 1.7686 | 2.5395 | 2283 | 0.7886 | | 2.353 | 2.5406 | 2284 | 0.7888 | | 0.5226 | 2.5417 | 2285 | 0.7894 | | 0.1483 | 2.5428 | 2286 | 0.7902 | | 0.8161 | 2.5439 | 2287 | 0.7910 | | 1.1574 | 2.5451 | 2288 | 0.7919 | | 1.082 | 2.5462 | 2289 | 0.7927 | | 1.0343 | 2.5473 | 2290 | 0.7932 | | 1.1727 | 2.5484 | 2291 | 0.7936 | | 1.8685 | 2.5495 | 2292 | 0.7941 | | 1.3118 | 2.5506 | 2293 | 0.7945 | | 1.4313 | 2.5517 | 2294 | 0.7948 | | 1.0455 | 2.5528 | 2295 | 0.7949 | | 1.4722 | 2.5539 | 2296 | 0.7953 | | 1.0583 | 2.5551 | 2297 | 0.7954 | | 1.1026 | 2.5562 | 2298 | 0.7957 | | 0.1997 | 2.5573 | 2299 | 0.7958 | | 0.028 | 2.5584 | 2300 | 0.7961 | | 0.988 | 2.5595 | 2301 | 0.7960 | | 0.0465 | 2.5606 | 2302 | 0.7959 | | 0.2166 | 2.5617 | 2303 | 0.7959 | | 1.1279 | 2.5628 | 2304 | 0.7954 | | 0.58 | 2.5640 | 2305 | 0.7948 | | 0.0918 | 2.5651 | 2306 | 0.7941 | | 0.0341 | 2.5662 | 2307 | 0.7935 | | 0.1504 | 2.5673 | 2308 | 0.7930 | | 0.1854 | 2.5684 | 2309 | 0.7928 | | 0.6975 | 2.5695 | 2310 | 0.7919 | | 1.0526 | 2.5706 | 2311 | 0.7910 | | 1.2668 | 2.5717 | 2312 | 0.7904 | | 0.1059 | 2.5729 | 2313 | 0.7898 | | 0.1463 | 2.5740 | 2314 | 0.7891 | | 0.329 | 2.5751 | 2315 | 0.7892 | | 0.3588 | 2.5762 | 2316 | 0.7895 | | 0.856 | 2.5773 | 2317 | 0.7896 | | 0.099 | 2.5784 | 2318 | 0.7897 | | 0.5204 | 2.5795 | 2319 | 0.7898 | | 0.0531 | 2.5806 | 2320 | 0.7900 | | 1.3945 | 2.5818 | 2321 | 0.7902 | | 0.2179 | 2.5829 | 2322 | 0.7908 | | 0.8451 | 2.5840 | 2323 | 0.7911 | | 0.6057 | 2.5851 | 2324 | 0.7915 | | 0.1309 | 2.5862 | 2325 | 0.7918 | | 0.3801 | 2.5873 | 2326 | 0.7920 | | 0.1257 | 2.5884 | 2327 | 0.7922 | | 0.1301 | 2.5895 | 2328 | 0.7922 | | 0.8882 | 2.5907 | 2329 | 0.7923 | | 1.2217 | 2.5918 | 2330 | 0.7920 | | 0.3079 | 2.5929 | 2331 | 0.7917 | | 0.6154 | 2.5940 | 2332 | 0.7912 | | 1.6202 | 2.5951 | 2333 | 0.7902 | | 1.8848 | 2.5962 | 2334 | 0.7896 | | 1.1328 | 2.5973 | 2335 | 0.7890 | | 1.4483 | 2.5984 | 2336 | 0.7884 | | 1.6775 | 2.5996 | 2337 | 0.7880 | | 
2.0952 | 2.6007 | 2338 | 0.7873 | | 0.6836 | 2.6018 | 2339 | 0.7866 | | 1.2126 | 2.6029 | 2340 | 0.7862 | | 1.2388 | 2.6040 | 2341 | 0.7857 | | 1.3935 | 2.6051 | 2342 | 0.7851 | | 1.6503 | 2.6062 | 2343 | 0.7846 | | 1.1154 | 2.6073 | 2344 | 0.7843 | | 2.1109 | 2.6085 | 2345 | 0.7839 | | 1.7597 | 2.6096 | 2346 | 0.7835 | | 1.136 | 2.6107 | 2347 | 0.7833 | | 1.6337 | 2.6118 | 2348 | 0.7831 | | 0.213 | 2.6129 | 2349 | 0.7829 | | 0.161 | 2.6140 | 2350 | 0.7826 | | 0.0617 | 2.6151 | 2351 | 0.7823 | | 0.0297 | 2.6162 | 2352 | 0.7822 | | 0.1882 | 2.6174 | 2353 | 0.7820 | | 0.2199 | 2.6185 | 2354 | 0.7819 | | 0.2319 | 2.6196 | 2355 | 0.7817 | | 0.2203 | 2.6207 | 2356 | 0.7816 | | 0.0857 | 2.6218 | 2357 | 0.7814 | | 0.1579 | 2.6229 | 2358 | 0.7814 | | 0.139 | 2.6240 | 2359 | 0.7812 | | 0.1163 | 2.6251 | 2360 | 0.7811 | | 0.3314 | 2.6263 | 2361 | 0.7813 | | 0.2009 | 2.6274 | 2362 | 0.7813 | | 0.3194 | 2.6285 | 2363 | 0.7812 | | 0.1205 | 2.6296 | 2364 | 0.7812 | | 0.191 | 2.6307 | 2365 | 0.7810 | | 0.4038 | 2.6318 | 2366 | 0.7809 | | 0.9379 | 2.6329 | 2367 | 0.7808 | | 0.9255 | 2.6340 | 2368 | 0.7805 | | 0.5874 | 2.6352 | 2369 | 0.7802 | | 0.1949 | 2.6363 | 2370 | 0.7801 | | 1.1643 | 2.6374 | 2371 | 0.7802 | | 0.7948 | 2.6385 | 2372 | 0.7801 | | 1.7571 | 2.6396 | 2373 | 0.7802 | | 0.8816 | 2.6407 | 2374 | 0.7800 | | 1.1944 | 2.6418 | 2375 | 0.7801 | | 0.1597 | 2.6429 | 2376 | 0.7802 | | 0.1738 | 2.6440 | 2377 | 0.7803 | | 0.3801 | 2.6452 | 2378 | 0.7804 | | 0.2019 | 2.6463 | 2379 | 0.7805 | | 1.113 | 2.6474 | 2380 | 0.7809 | | 1.1533 | 2.6485 | 2381 | 0.7811 | | 0.6726 | 2.6496 | 2382 | 0.7814 | | 0.8319 | 2.6507 | 2383 | 0.7813 | | 1.368 | 2.6518 | 2384 | 0.7814 | | 1.0146 | 2.6529 | 2385 | 0.7815 | | 1.244 | 2.6541 | 2386 | 0.7816 | | 1.0361 | 2.6552 | 2387 | 0.7819 | | 2.5308 | 2.6563 | 2388 | 0.7819 | | 2.0992 | 2.6574 | 2389 | 0.7818 | | 1.0893 | 2.6585 | 2390 | 0.7817 | | 1.4822 | 2.6596 | 2391 | 0.7817 | | 1.1222 | 2.6607 | 2392 | 0.7817 | | 1.53 | 2.6618 | 2393 | 
0.7817 | | 1.7553 | 2.6630 | 2394 | 0.7815 | | 1.4186 | 2.6641 | 2395 | 0.7817 | | 1.1509 | 2.6652 | 2396 | 0.7815 | | 1.4712 | 2.6663 | 2397 | 0.7816 | | 1.639 | 2.6674 | 2398 | 0.7816 | | 0.0425 | 2.6685 | 2399 | 0.7815 | | 0.0606 | 2.6696 | 2400 | 0.7816 | | 0.062 | 2.6707 | 2401 | 0.7817 | | 0.5606 | 2.6719 | 2402 | 0.7816 | | 0.2006 | 2.6730 | 2403 | 0.7815 | | 0.0346 | 2.6741 | 2404 | 0.7815 | | 0.1058 | 2.6752 | 2405 | 0.7815 | | 0.118 | 2.6763 | 2406 | 0.7817 | | 0.1927 | 2.6774 | 2407 | 0.7815 | | 0.0582 | 2.6785 | 2408 | 0.7815 | | 0.0448 | 2.6796 | 2409 | 0.7815 | | 1.3782 | 2.6808 | 2410 | 0.7815 | | 0.1338 | 2.6819 | 2411 | 0.7816 | | 0.1962 | 2.6830 | 2412 | 0.7816 | | 0.9595 | 2.6841 | 2413 | 0.7817 | | 0.7637 | 2.6852 | 2414 | 0.7819 | | 0.5361 | 2.6863 | 2415 | 0.7821 | | 0.1019 | 2.6874 | 2416 | 0.7823 | | 0.1018 | 2.6885 | 2417 | 0.7824 | | 0.1703 | 2.6897 | 2418 | 0.7826 | | 1.0466 | 2.6908 | 2419 | 0.7827 | | 0.041 | 2.6919 | 2420 | 0.7827 | | 0.1471 | 2.6930 | 2421 | 0.7827 | | 0.9106 | 2.6941 | 2422 | 0.7829 | | 0.2213 | 2.6952 | 2423 | 0.7828 | | 0.8011 | 2.6963 | 2424 | 0.7830 | | 0.0334 | 2.6974 | 2425 | 0.7832 | | 1.4244 | 2.6986 | 2426 | 0.7833 | | 0.4463 | 2.6997 | 2427 | 0.7836 | | 1.0023 | 2.7008 | 2428 | 0.7836 | | 0.1687 | 2.7019 | 2429 | 0.7838 | | 0.1197 | 2.7030 | 2430 | 0.7840 | | 0.6204 | 2.7041 | 2431 | 0.7840 | | 0.2263 | 2.7052 | 2432 | 0.7843 | | 0.5548 | 2.7063 | 2433 | 0.7845 | | 0.3764 | 2.7075 | 2434 | 0.7847 | | 1.2053 | 2.7086 | 2435 | 0.7850 | | 2.1112 | 2.7097 | 2436 | 0.7852 | | 2.3757 | 2.7108 | 2437 | 0.7852 | | 0.551 | 2.7119 | 2438 | 0.7852 | | 1.2656 | 2.7130 | 2439 | 0.7850 | | 1.8832 | 2.7141 | 2440 | 0.7848 | | 0.5566 | 2.7152 | 2441 | 0.7846 | | 1.6297 | 2.7164 | 2442 | 0.7844 | | 1.7238 | 2.7175 | 2443 | 0.7841 | | 1.6719 | 2.7186 | 2444 | 0.7839 | | 1.8143 | 2.7197 | 2445 | 0.7839 | | 1.0837 | 2.7208 | 2446 | 0.7838 | | 1.6855 | 2.7219 | 2447 | 0.7837 | | 1.2636 | 2.7230 | 2448 | 0.7833 | | 0.5373 | 
2.7241 | 2449 | 0.7831 | | 0.0573 | 2.7253 | 2450 | 0.7827 | | 0.1994 | 2.7264 | 2451 | 0.7825 | | 0.0326 | 2.7275 | 2452 | 0.7823 | | 0.5329 | 2.7286 | 2453 | 0.7821 | | 0.1119 | 2.7297 | 2454 | 0.7820 | | 0.158 | 2.7308 | 2455 | 0.7819 | | 0.0682 | 2.7319 | 2456 | 0.7816 | | 0.6797 | 2.7330 | 2457 | 0.7815 | | 0.0366 | 2.7341 | 2458 | 0.7813 | | 1.2802 | 2.7353 | 2459 | 0.7810 | | 0.1164 | 2.7364 | 2460 | 0.7809 | | 0.0749 | 2.7375 | 2461 | 0.7808 | | 0.1005 | 2.7386 | 2462 | 0.7807 | | 0.033 | 2.7397 | 2463 | 0.7806 | | 0.5699 | 2.7408 | 2464 | 0.7806 | | 0.1015 | 2.7419 | 2465 | 0.7805 | | 0.8625 | 2.7430 | 2466 | 0.7803 | | 0.9114 | 2.7442 | 2467 | 0.7802 | | 0.2633 | 2.7453 | 2468 | 0.7799 | | 0.4383 | 2.7464 | 2469 | 0.7798 | | 0.1643 | 2.7475 | 2470 | 0.7797 | | 1.9756 | 2.7486 | 2471 | 0.7796 | | 0.0519 | 2.7497 | 2472 | 0.7797 | | 0.738 | 2.7508 | 2473 | 0.7795 | | 0.2748 | 2.7519 | 2474 | 0.7794 | | 0.2094 | 2.7531 | 2475 | 0.7794 | | 1.4342 | 2.7542 | 2476 | 0.7794 | | 0.1247 | 2.7553 | 2477 | 0.7794 | | 0.1223 | 2.7564 | 2478 | 0.7795 | | 0.7833 | 2.7575 | 2479 | 0.7794 | | 0.5958 | 2.7586 | 2480 | 0.7794 | | 0.7997 | 2.7597 | 2481 | 0.7794 | | 1.9686 | 2.7608 | 2482 | 0.7795 | | 0.1216 | 2.7620 | 2483 | 0.7796 | | 0.3839 | 2.7631 | 2484 | 0.7798 | | 1.5027 | 2.7642 | 2485 | 0.7800 | | 1.8809 | 2.7653 | 2486 | 0.7802 | | 1.3146 | 2.7664 | 2487 | 0.7804 | | 0.7627 | 2.7675 | 2488 | 0.7805 | | 1.2137 | 2.7686 | 2489 | 0.7806 | | 1.2149 | 2.7697 | 2490 | 0.7808 | | 1.2062 | 2.7709 | 2491 | 0.7809 | | 1.5739 | 2.7720 | 2492 | 0.7811 | | 1.0703 | 2.7731 | 2493 | 0.7813 | | 2.2729 | 2.7742 | 2494 | 0.7815 | | 1.6585 | 2.7753 | 2495 | 0.7815 | | 1.8161 | 2.7764 | 2496 | 0.7816 | | 1.0195 | 2.7775 | 2497 | 0.7818 | | 1.517 | 2.7786 | 2498 | 0.7818 | | 0.3808 | 2.7798 | 2499 | 0.7820 | | 0.0443 | 2.7809 | 2500 | 0.7820 | | 0.3027 | 2.7820 | 2501 | 0.7822 | | 0.1424 | 2.7831 | 2502 | 0.7823 | | 0.0737 | 2.7842 | 2503 | 0.7824 | | 0.1465 | 2.7853 | 2504 | 0.7825 
| | 0.1334 | 2.7864 | 2505 | 0.7826 | | 0.466 | 2.7875 | 2506 | 0.7827 | | 1.0319 | 2.7887 | 2507 | 0.7828 | | 0.2032 | 2.7898 | 2508 | 0.7826 | | 0.1246 | 2.7909 | 2509 | 0.7827 | | 0.1851 | 2.7920 | 2510 | 0.7828 | | 0.1217 | 2.7931 | 2511 | 0.7829 | | 0.0312 | 2.7942 | 2512 | 0.7828 | | 0.0306 | 2.7953 | 2513 | 0.7829 | | 0.1378 | 2.7964 | 2514 | 0.7829 | | 0.9677 | 2.7976 | 2515 | 0.7831 | | 0.2434 | 2.7987 | 2516 | 0.7832 | | 0.7187 | 2.7998 | 2517 | 0.7832 | | 0.8449 | 2.8009 | 2518 | 0.7830 | | 0.2236 | 2.8020 | 2519 | 0.7831 | | 0.2576 | 2.8031 | 2520 | 0.7832 | | 0.7366 | 2.8042 | 2521 | 0.7830 | | 1.2055 | 2.8053 | 2522 | 0.7828 | | 0.547 | 2.8065 | 2523 | 0.7828 | | 0.2237 | 2.8076 | 2524 | 0.7826 | | 0.5488 | 2.8087 | 2525 | 0.7825 | | 0.2647 | 2.8098 | 2526 | 0.7824 | | 0.0485 | 2.8109 | 2527 | 0.7823 | | 1.3158 | 2.8120 | 2528 | 0.7823 | | 1.993 | 2.8131 | 2529 | 0.7820 | | 1.6819 | 2.8142 | 2530 | 0.7815 | | 0.3552 | 2.8154 | 2531 | 0.7813 | | 0.2748 | 2.8165 | 2532 | 0.7811 | | 1.0576 | 2.8176 | 2533 | 0.7809 | | 2.4066 | 2.8187 | 2534 | 0.7807 | | 0.2635 | 2.8198 | 2535 | 0.7804 | | 1.6457 | 2.8209 | 2536 | 0.7802 | | 1.1579 | 2.8220 | 2537 | 0.7800 | | 1.1736 | 2.8231 | 2538 | 0.7799 | | 1.2001 | 2.8242 | 2539 | 0.7798 | | 1.5518 | 2.8254 | 2540 | 0.7798 | | 1.1533 | 2.8265 | 2541 | 0.7798 | | 1.2986 | 2.8276 | 2542 | 0.7798 | | 1.1473 | 2.8287 | 2543 | 0.7797 | | 1.052 | 2.8298 | 2544 | 0.7797 | | 1.0916 | 2.8309 | 2545 | 0.7798 | | 1.3572 | 2.8320 | 2546 | 0.7799 | | 1.54 | 2.8331 | 2547 | 0.7799 | | 1.8142 | 2.8343 | 2548 | 0.7799 | | 0.4477 | 2.8354 | 2549 | 0.7799 | | 0.1857 | 2.8365 | 2550 | 0.7800 | | 0.0271 | 2.8376 | 2551 | 0.7799 | | 0.0779 | 2.8387 | 2552 | 0.7800 | | 0.0175 | 2.8398 | 2553 | 0.7800 | | 0.2412 | 2.8409 | 2554 | 0.7801 | | 0.4976 | 2.8420 | 2555 | 0.7801 | | 0.1672 | 2.8432 | 2556 | 0.7801 | | 0.2891 | 2.8443 | 2557 | 0.7801 | | 0.1048 | 2.8454 | 2558 | 0.7802 | | 0.055 | 2.8465 | 2559 | 0.7802 | | 0.1095 | 2.8476 | 2560 
| 0.7803 | | 1.3294 | 2.8487 | 2561 | 0.7803 | | 0.2075 | 2.8498 | 2562 | 0.7801 | | 0.5171 | 2.8509 | 2563 | 0.7801 | | 0.8973 | 2.8521 | 2564 | 0.7801 | | 0.658 | 2.8532 | 2565 | 0.7801 | | 0.0133 | 2.8543 | 2566 | 0.7802 | | 0.1192 | 2.8554 | 2567 | 0.7802 | | 0.1471 | 2.8565 | 2568 | 0.7803 | | 1.2929 | 2.8576 | 2569 | 0.7803 | | 0.5592 | 2.8587 | 2570 | 0.7802 | | 0.2509 | 2.8598 | 2571 | 0.7803 | | 0.2323 | 2.8610 | 2572 | 0.7804 | | 0.1592 | 2.8621 | 2573 | 0.7805 | | 0.3122 | 2.8632 | 2574 | 0.7808 | | 0.4491 | 2.8643 | 2575 | 0.7809 | | 1.2057 | 2.8654 | 2576 | 0.7812 | | 0.5468 | 2.8665 | 2577 | 0.7812 | | 0.4804 | 2.8676 | 2578 | 0.7813 | | 0.7049 | 2.8687 | 2579 | 0.7814 | | 1.0475 | 2.8699 | 2580 | 0.7817 | | 1.2297 | 2.8710 | 2581 | 0.7817 | | 1.6397 | 2.8721 | 2582 | 0.7819 | | 1.4252 | 2.8732 | 2583 | 0.7820 | | 0.9749 | 2.8743 | 2584 | 0.7822 | | 1.8922 | 2.8754 | 2585 | 0.7823 | | 1.8783 | 2.8765 | 2586 | 0.7823 | | 1.7692 | 2.8776 | 2587 | 0.7824 | | 1.2958 | 2.8788 | 2588 | 0.7824 | | 1.644 | 2.8799 | 2589 | 0.7825 | | 1.777 | 2.8810 | 2590 | 0.7826 | | 1.6529 | 2.8821 | 2591 | 0.7826 | | 1.5526 | 2.8832 | 2592 | 0.7828 | | 1.8935 | 2.8843 | 2593 | 0.7829 | | 0.8347 | 2.8854 | 2594 | 0.7828 | | 1.2232 | 2.8865 | 2595 | 0.7829 | | 1.5951 | 2.8877 | 2596 | 0.7828 | | 1.7273 | 2.8888 | 2597 | 0.7829 | | 1.7824 | 2.8899 | 2598 | 0.7829 | | 0.0344 | 2.8910 | 2599 | 0.7830 | | 0.1341 | 2.8921 | 2600 | 0.7832 | | 0.0644 | 2.8932 | 2601 | 0.7833 | | 0.0444 | 2.8943 | 2602 | 0.7833 | | 1.2803 | 2.8954 | 2603 | 0.7835 | | 0.0336 | 2.8966 | 2604 | 0.7836 | | 1.0865 | 2.8977 | 2605 | 0.7837 | | 0.0334 | 2.8988 | 2606 | 0.7838 | | 0.1752 | 2.8999 | 2607 | 0.7840 | | 0.5919 | 2.9010 | 2608 | 0.7840 | | 0.36 | 2.9021 | 2609 | 0.7841 | | 0.1179 | 2.9032 | 2610 | 0.7840 | | 0.9057 | 2.9043 | 2611 | 0.7842 | | 0.1438 | 2.9055 | 2612 | 0.7840 | | 1.2221 | 2.9066 | 2613 | 0.7839 | | 0.0156 | 2.9077 | 2614 | 0.7838 | | 0.0847 | 2.9088 | 2615 | 0.7839 | | 0.8671 | 
2.9099 | 2616 | 0.7839 | | 0.0198 | 2.9110 | 2617 | 0.7838 | | 0.6977 | 2.9121 | 2618 | 0.7839 | | 0.1272 | 2.9132 | 2619 | 0.7839 | | 0.9713 | 2.9143 | 2620 | 0.7840 | | 1.1521 | 2.9155 | 2621 | 0.7839 | | 0.718 | 2.9166 | 2622 | 0.7838 | | 0.1981 | 2.9177 | 2623 | 0.7837 | | 0.4061 | 2.9188 | 2624 | 0.7838 | | 0.4543 | 2.9199 | 2625 | 0.7837 | | 0.7475 | 2.9210 | 2626 | 0.7837 | | 0.878 | 2.9221 | 2627 | 0.7836 | | 1.2387 | 2.9232 | 2628 | 0.7836 | | 1.2712 | 2.9244 | 2629 | 0.7836 | | 0.4232 | 2.9255 | 2630 | 0.7837 | | 1.6084 | 2.9266 | 2631 | 0.7837 | | 0.9571 | 2.9277 | 2632 | 0.7837 | | 0.6519 | 2.9288 | 2633 | 0.7836 | | 1.1437 | 2.9299 | 2634 | 0.7835 | | 1.6637 | 2.9310 | 2635 | 0.7834 | | 1.1906 | 2.9321 | 2636 | 0.7835 | | 1.6574 | 2.9333 | 2637 | 0.7834 | | 1.6904 | 2.9344 | 2638 | 0.7834 | | 1.6933 | 2.9355 | 2639 | 0.7834 | | 1.0353 | 2.9366 | 2640 | 0.7834 | | 1.1522 | 2.9377 | 2641 | 0.7833 | | 1.0279 | 2.9388 | 2642 | 0.7832 | | 1.5069 | 2.9399 | 2643 | 0.7834 | | 1.7763 | 2.9410 | 2644 | 0.7834 | | 1.1176 | 2.9422 | 2645 | 0.7833 | | 1.5643 | 2.9433 | 2646 | 0.7833 | | 1.7622 | 2.9444 | 2647 | 0.7832 | | 1.6667 | 2.9455 | 2648 | 0.7832 | | 0.4148 | 2.9466 | 2649 | 0.7832 | | 0.4092 | 2.9477 | 2650 | 0.7832 | | 0.3592 | 2.9488 | 2651 | 0.7831 | | 1.9199 | 2.9499 | 2652 | 0.7832 | | 0.1868 | 2.9511 | 2653 | 0.7830 | | 0.1144 | 2.9522 | 2654 | 0.7831 | | 0.0213 | 2.9533 | 2655 | 0.7830 | | 0.1833 | 2.9544 | 2656 | 0.7830 | | 0.0725 | 2.9555 | 2657 | 0.7829 | | 0.4307 | 2.9566 | 2658 | 0.7829 | | 0.2273 | 2.9577 | 2659 | 0.7829 | | 0.0569 | 2.9588 | 2660 | 0.7829 | | 1.3159 | 2.9600 | 2661 | 0.7829 | | 1.0308 | 2.9611 | 2662 | 0.7829 | | 0.2523 | 2.9622 | 2663 | 0.7829 | | 0.0237 | 2.9633 | 2664 | 0.7830 | | 0.2497 | 2.9644 | 2665 | 0.7829 | | 0.1105 | 2.9655 | 2666 | 0.7828 | | 0.8522 | 2.9666 | 2667 | 0.7829 | | 0.1672 | 2.9677 | 2668 | 0.7829 | | 0.6849 | 2.9689 | 2669 | 0.7829 | | 0.4016 | 2.9700 | 2670 | 0.7828 | | 0.1031 | 2.9711 | 2671 | 
0.7829 | | 0.8324 | 2.9722 | 2672 | 0.7829 | | 0.339 | 2.9733 | 2673 | 0.7830 | | 0.0526 | 2.9744 | 2674 | 0.7829 | | 0.1469 | 2.9755 | 2675 | 0.7830 | | 0.5769 | 2.9766 | 2676 | 0.7830 | | 1.5399 | 2.9778 | 2677 | 0.7831 | | 1.6727 | 2.9789 | 2678 | 0.7832 | | 0.3733 | 2.9800 | 2679 | 0.7831 | | 0.8024 | 2.9811 | 2680 | 0.7831 | | 1.4253 | 2.9822 | 2681 | 0.7832 | | 1.418 | 2.9833 | 2682 | 0.7832 | | 1.2311 | 2.9844 | 2683 | 0.7831 | | 1.4175 | 2.9855 | 2684 | 0.7831 | | 1.5036 | 2.9867 | 2685 | 0.7831 | | 1.9013 | 2.9878 | 2686 | 0.7832 | | 1.176 | 2.9889 | 2687 | 0.7832 | | 1.6634 | 2.9900 | 2688 | 0.7832 | | 1.0697 | 2.9911 | 2689 | 0.7832 | | 1.831 | 2.9922 | 2690 | 0.7832 | | 1.1826 | 2.9933 | 2691 | 0.7832 | | 0.8398 | 2.9944 | 2692 | 0.7832 | | 0.9606 | 2.9956 | 2693 | 0.7833 | | 1.7665 | 2.9967 | 2694 | 0.7832 | | 1.6684 | 2.9978 | 2695 | 0.7832 | | 1.6311 | 2.9989 | 2696 | 0.7833 | | 1.0668 | 3.0 | 2697 | 0.7833 |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
ChristianLLM/llama3_test
ChristianLLM
"2024-06-24T08:14:02Z"
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
"2024-06-24T08:13:12Z"
---
license: apache-2.0
---
PrunaAI/allenai-tulu-2-dpo-7b-AWQ-4bit-smashed
PrunaAI
"2024-06-24T08:17:09Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:allenai/tulu-2-dpo-7b", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T08:15:19Z"
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: allenai/tulu-2-dpo-7b
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
  <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
    <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with AWQ.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** Where the compression method requires it, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo allenai/tulu-2-dpo-7b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
   ```bash
   pip install autoawq
   ```
2. Load & run the model.
   ```python
   from transformers import AutoTokenizer
   from awq import AutoAWQForCausalLM

   # Load the 4-bit AWQ-quantized weights and the original model's tokenizer.
   model = AutoAWQForCausalLM.from_quantized("PrunaAI/allenai-tulu-2-dpo-7b-AWQ-4bit-smashed",
                                             trust_remote_code=True, device_map='auto')
   tokenizer = AutoTokenizer.from_pretrained("allenai/tulu-2-dpo-7b")

   input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
   outputs = model.generate(input_ids, max_new_tokens=216)
   print(tokenizer.decode(outputs[0]))
   ```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model allenai/tulu-2-dpo-7b, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
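tulu-2-dpo-7b is an instruction-tuned chat model, and its original model card documents a Tulu v2 chat template built from `<|user|>` and `<|assistant|>` turn markers. A small helper for building that prompt — the function name is illustrative and not part of either repo:

```python
def build_tulu_prompt(user_message: str) -> str:
    """Wrap a user message in the Tulu v2 chat template.

    The model was trained with '<|user|>' / '<|assistant|>' turn markers,
    each followed by a newline; generation should continue right after
    the final '<|assistant|>\n'.
    """
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

prompt = build_tulu_prompt("What is the color of prunes?")
```

Passing a prompt formatted this way to the tokenizer, instead of the raw question, should keep generations closer to the model's chat training distribution.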
gdjinbo/oia
gdjinbo
"2024-06-30T11:34:29Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-06-24T08:16:16Z"
---
license: openrail
---
PrunaAI/chuxin-llm-Chuxin-1.6B-Base-AWQ-4bit-smashed
PrunaAI
"2024-06-24T08:17:03Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:chuxin-llm/Chuxin-1.6B-Base", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T08:16:18Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: chuxin-llm/Chuxin-1.6B-Base metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. 
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo chuxin-llm/Chuxin-1.6B-Base are installed. In particular, check python, cuda, and transformers versions. 1. 
Make sure that you have installed quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/chuxin-llm-Chuxin-1.6B-Base-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("chuxin-llm/Chuxin-1.6B-Base") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model chuxin-llm/Chuxin-1.6B-Base, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
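The naming convention described in the card's FAQ (append "turbo", "tiny", or "green" when a smashed model's measured inference speed, memory, or energy drops below 90% of the base model's) can be sketched as a small helper. This is an illustrative sketch only; the function name, metric keys, and example numbers are assumptions, not part of the Pruna tooling:

```python
def pruna_name_suffixes(base: dict, smashed: dict) -> list:
    """Return the suffixes a smashed model would earn under the 90% rule.

    `base` and `smashed` map metric names to measured values where lower is
    better (e.g. latency in ms, memory in MB, energy in J). A suffix is
    earned when the smashed value is less than 90% of the base value.
    """
    # Metric -> suffix mapping, per the card's FAQ: speed -> "turbo",
    # memory -> "tiny", energy -> "green".
    rules = {
        "inference_latency": "turbo",
        "memory_inference": "tiny",
        "inference_energy_consumption": "green",
    }
    suffixes = []
    for metric, suffix in rules.items():
        if metric in base and metric in smashed:
            if smashed[metric] < 0.9 * base[metric]:
                suffixes.append(suffix)
    return suffixes

# Hypothetical measurements: latency and energy improve past the threshold,
# memory does not.
base = {"inference_latency": 120.0, "memory_inference": 4000.0,
        "inference_energy_consumption": 50.0}
smashed = {"inference_latency": 70.0, "memory_inference": 3900.0,
           "inference_energy_consumption": 30.0}
print(pruna_name_suffixes(base, smashed))  # ['turbo', 'green']
```

Under these assumed numbers only "turbo" and "green" are earned, since 3900 MB is not below 90% of 4000 MB.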
Ariffiq99/KUCI_COPA_xlm_roberta_base_finetuned
Ariffiq99
"2024-06-24T10:47:11Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "multiple-choice", "generated_from_trainer", "base_model:Ariffiq99/COPA_xlm_roberta_base_finetuned", "license:mit", "endpoints_compatible", "region:us" ]
multiple-choice
"2024-06-24T08:19:16Z"
--- license: mit base_model: Ariffiq99/COPA_xlm_roberta_base_finetuned tags: - generated_from_trainer metrics: - f1 model-index: - name: KUCI_COPA_xlm_roberta_base_finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # KUCI_COPA_xlm_roberta_base_finetuned This model is a fine-tuned version of [Ariffiq99/COPA_xlm_roberta_base_finetuned](https://huggingface.co/Ariffiq99/COPA_xlm_roberta_base_finetuned) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7708 - F1: 0.7728 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.6908 | 1.0 | 5196 | 0.6480 | 0.7423 | | 0.5619 | 2.0 | 10392 | 0.6410 | 0.7630 | | 0.4629 | 3.0 | 15588 | 0.6289 | 0.7685 | | 0.3645 | 4.0 | 20784 | 0.7006 | 0.7729 | | 0.2872 | 5.0 | 25980 | 0.7708 | 0.7728 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
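The training-results table in the card above implies the size of the (otherwise "unknown") training set: epoch 1 ends at step 5196 with a train batch size of 16. A minimal sketch of that back-of-the-envelope calculation (the helper name is illustrative):

```python
def approx_train_size(steps_per_epoch: int, batch_size: int) -> int:
    """Upper bound on the number of training examples implied by a Trainer
    log: each epoch takes ceil(n / batch_size) optimizer steps, so n is at
    most steps_per_epoch * batch_size."""
    return steps_per_epoch * batch_size

# Values from the training-results table above (5196 steps per epoch,
# train_batch_size = 16):
print(approx_train_size(5196, 16))  # 83136
```

So the fine-tuning set holds at most roughly 83k examples, consistent with a KUCI-scale multiple-choice corpus.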
Amya47/MislamZep-2x7
Amya47
"2024-06-24T08:20:39Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T08:20:39Z"
Entry not found
wuqi001/minicpm_law
wuqi001
"2024-06-24T08:24:22Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T08:24:22Z"
Entry not found
PrunaAI/gorilla-llm-gorilla-openfunctions-v2-AWQ-4bit-smashed
PrunaAI
"2024-06-24T08:27:36Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:gorilla-llm/gorilla-openfunctions-v2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T08:25:27Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: gorilla-llm/gorilla-openfunctions-v2 metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. 
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo gorilla-llm/gorilla-openfunctions-v2 are installed. In particular, check python, cuda, and transformers versions. 
1. Make sure that you have installed quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/gorilla-llm-gorilla-openfunctions-v2-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("gorilla-llm/gorilla-openfunctions-v2") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model gorilla-llm/gorilla-openfunctions-v2, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
gensou07/distilbert-base-uncased-finetuned-imdb
gensou07
"2024-06-24T08:26:09Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T08:26:09Z"
Entry not found
djsull/gemma-2b-4bit-classifier
djsull
"2024-06-24T08:26:26Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T08:26:26Z"
Entry not found
djsull/gemma-classifier
djsull
"2024-06-25T01:15:09Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-24T08:26:44Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NitinGautam05/logs
NitinGautam05
"2024-06-24T08:45:30Z"
0
0
null
[ "tensorboard", "safetensors", "region:us" ]
null
"2024-06-24T08:28:13Z"
Entry not found
kwkwkwkwpark/xlm-roberta-base-finetuned-panx-de
kwkwkwkwpark
"2024-06-26T05:23:05Z"
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-06-24T08:32:26Z"
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1422 - F1: 0.8642 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2572 | 1.0 | 787 | 0.1598 | 0.8178 | | 0.1302 | 2.0 | 1574 | 0.1495 | 0.8524 | | 0.0783 | 3.0 | 2361 | 0.1422 | 0.8642 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
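The card above reports the final-epoch checkpoint (loss 0.1422, F1 0.8642). Checkpoint selection from such a log can be sketched with a few lines of plain Python; the tuple layout and helper name are illustrative, not part of the Trainer API:

```python
# (epoch, validation_loss, f1) triples from the training-results table above.
history = [
    (1, 0.1598, 0.8178),
    (2, 0.1495, 0.8524),
    (3, 0.1422, 0.8642),
]

def best_epoch(history):
    """Select the epoch with the highest validation F1, breaking ties by
    the lower validation loss."""
    return max(history, key=lambda row: (row[2], -row[1]))

epoch, loss, f1 = best_epoch(history)
print(epoch, f1)  # 3 0.8642
```

For this run the last epoch is also the best on both metrics, so the reported evaluation numbers coincide with the final checkpoint.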
DANI001/model
DANI001
"2024-06-26T04:25:36Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:Twitter/twhin-bert-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-06-24T08:32:44Z"
--- license: apache-2.0 base_model: Twitter/twhin-bert-large tags: - generated_from_trainer model-index: - name: model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model This model is a fine-tuned version of [Twitter/twhin-bert-large](https://huggingface.co/Twitter/twhin-bert-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0231 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3631 | 1.0 | 171 | 2.1675 | | 2.3383 | 2.0 | 342 | 2.0818 | | 2.1424 | 3.0 | 513 | 1.9961 | | 2.1355 | 4.0 | 684 | 2.0919 | | 2.1401 | 5.0 | 855 | 2.0231 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
priiiiiii/whisper-hindi-finetuned
priiiiiii
"2024-06-27T07:15:47Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-06-24T08:34:19Z"
Entry not found
PrunaAI/Jiayi-Pan-Tiny-Vicuna-1B-AWQ-4bit-smashed
PrunaAI
"2024-06-24T08:34:58Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:Jiayi-Pan/Tiny-Vicuna-1B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T08:34:30Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: Jiayi-Pan/Tiny-Vicuna-1B metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. 
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo Jiayi-Pan/Tiny-Vicuna-1B are installed. In particular, check python, cuda, and transformers versions. 1. 
Make sure that you have installed quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/Jiayi-Pan-Tiny-Vicuna-1B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("Jiayi-Pan/Tiny-Vicuna-1B") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model Jiayi-Pan/Tiny-Vicuna-1B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
TYKim/detr-resnet-50-dc5-finetuned-lora-coco
TYKim
"2024-06-24T08:35:56Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T08:35:56Z"
Entry not found
PrunaAI/meta-llama-Meta-Llama-Guard-2-8B-AWQ-4bit-smashed
PrunaAI
"2024-06-24T08:39:45Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:meta-llama/Meta-Llama-Guard-2-8B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T08:37:05Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: meta-llama/Meta-Llama-Guard-2-8B metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. 
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo meta-llama/Meta-Llama-Guard-2-8B are installed. In particular, check python, cuda, and transformers versions. 1. 
Make sure that you have installed quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/meta-llama-Meta-Llama-Guard-2-8B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-Guard-2-8B") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-Guard-2-8B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
itay-nakash/model_6e99ce7442_sweep_winter-salad-908
itay-nakash
"2024-06-24T08:37:31Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T08:37:31Z"
Entry not found
PrunaAI/Rakuten-RakutenAI-7B-AWQ-4bit-smashed
PrunaAI
"2024-06-24T08:40:01Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "pruna-ai", "base_model:Rakuten/RakutenAI-7B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T08:37:55Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: Rakuten/RakutenAI-7B metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. 
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to find out whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo Rakuten/RakutenAI-7B are installed. In particular, check the python, cuda, and transformers versions. 1. 
Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/Rakuten-RakutenAI-7B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("Rakuten/RakutenAI-7B") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model Rakuten/RakutenAI-7B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
kohankhaki/Llama-3-8B_SST5-Grouped_IDX-2
kohankhaki
"2024-06-24T09:01:49Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-classification
"2024-06-24T08:38:02Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PrunaAI/universitytehran-PersianMind-v1.0-AWQ-4bit-smashed
PrunaAI
"2024-06-24T08:40:34Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:universitytehran/PersianMind-v1.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T08:38:35Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: universitytehran/PersianMind-v1.0 metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. 
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to find out whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo universitytehran/PersianMind-v1.0 are installed. In particular, check the python, cuda, and transformers versions. 1. 
Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/universitytehran-PersianMind-v1.0-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("universitytehran/PersianMind-v1.0") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model universitytehran/PersianMind-v1.0, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
itay-nakash/model_6e99ce7442_sweep_flowing-firefly-909
itay-nakash
"2024-06-24T08:38:46Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T08:38:46Z"
Entry not found
PrunaAI/artificialguybr-llama3-8b-sql-create-context-AWQ-4bit-smashed
PrunaAI
"2024-06-24T08:43:47Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:artificialguybr/llama3-8b-sql-create-context", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T08:40:49Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: artificialguybr/llama3-8b-sql-create-context metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. 
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to find out whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. 
Check that the requirements from the original repo artificialguybr/llama3-8b-sql-create-context are installed. In particular, check the python, cuda, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/artificialguybr-llama3-8b-sql-create-context-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("artificialguybr/llama3-8b-sql-create-context") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model artificialguybr/llama3-8b-sql-create-context, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
Miykaelxxm/12123333
Miykaelxxm
"2024-06-24T08:41:16Z"
0
0
null
[ "region:us" ]
null
"2024-06-24T08:41:16Z"
Entry not found
PrunaAI/chuxin-llm-Chuxin-1.6B-1M-AWQ-4bit-smashed
PrunaAI
"2024-06-24T08:43:32Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "base_model:chuxin-llm/Chuxin-1.6B-1M", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-24T08:42:47Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: chuxin-llm/Chuxin-1.6B-1M metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. 
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to find out whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo chuxin-llm/Chuxin-1.6B-1M are installed. In particular, check the python, cuda, and transformers versions. 1. 
Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/chuxin-llm-Chuxin-1.6B-1M-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("chuxin-llm/Chuxin-1.6B-1M") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model chuxin-llm/Chuxin-1.6B-1M, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).