
Model Description

  • Developed by: Microsoft
  • Model type: Text Generation
  • Finetuned from model: microsoft/phi-1_5

How to Use

Phi-1.5 has been integrated into transformers as of version 4.30.0. Ensure that you do the following:

  • When loading the model, ensure that trust_remote_code=True is passed as an argument of the from_pretrained() function.
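As a sketch, the version requirement above can be checked programmatically before loading; the helper below is our own illustration, not part of transformers or the model repo:

```python
def supports_phi(tf_version: str) -> bool:
    # Phi-1.5 support landed in transformers 4.30.0; compare
    # (major, minor, patch) tuples so "4.9.0" doesn't beat "4.30.0".
    parts = tuple(int(p) for p in tf_version.split(".")[:3])
    return parts >= (4, 30, 0)
```

In practice you would pass `transformers.__version__` to this check, or simply pin `transformers>=4.30.0` in your requirements.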

Intended Uses

Given the nature of the training data, Fashion Phi-1.5 is best suited for prompts in the QA or chat format. Note that Fashion Phi-1.5 often produces irrelevant text after the main answer; in the following example, we've truncated the answer for illustrative purposes only.

QA

<human>: What can I wear with an olive green bomber jacket for a winter casual outing? Olive Male
<assistant>: Great choice! To wear with an olive green bomber jacket for a winter casual outing consider adding: 1. White or gray graphic T-shirt for a laid-back vibe. 2. Dark denim jeans or cargo pants for a rugged look. 3. High-top sneakers or combat boots for a street-style finish. 4. A plaid flannel shirt or denim jacket for added warmth. 5. A patterned scarf or beanie for a pop of color and style.
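The QA format above can be assembled with a small helper before tokenization. This is a sketch; `build_prompt` is our own name, not part of the model repo:

```python
def build_prompt(question: str) -> str:
    # Wrap a user question in the <human>/<assistant> tags the model
    # expects, ending at the assistant turn so generation continues there.
    return f"<human>: {question}\n<assistant>:"
```

The returned string is what you would pass to the tokenizer in the example below.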

How to Get Started with the Model

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'

model_id = 'SSTalha/Fashion_PHI_1-5'
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", trust_remote_code=True).to(torch_device)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer('''<human>: Have black dress pants and need advice on a shirt for a semi-formal gathering. Any recommendations on colors and styles? Black Male ''', return_tensors="pt", return_attention_mask=False)

# Move inputs to the same device as the model (works on CPU as well as CUDA).
inputs = {key: value.to(torch_device) for key, value in inputs.items()}

outputs = model.generate(**inputs, max_length=90)

text = tokenizer.batch_decode(outputs)[0]
print(text)
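Since the model often continues past the main answer, the decoded text can be trimmed back to just the assistant's reply. A minimal sketch, assuming the `<human>`/`<assistant>` tags from the prompt format (`extract_answer` is our own helper, not part of the repo):

```python
def extract_answer(generated: str) -> str:
    # Keep only the assistant's reply: drop the echoed prompt before
    # "<assistant>:" and cut off any hallucinated follow-up "<human>:" turn.
    _, _, reply = generated.partition("<assistant>:")
    reply = reply.split("<human>:")[0]
    return reply.strip()
```

Applied to the decoded `text` above, this discards the echoed question and any trailing irrelevant text.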