arxiv:2307.15780

LLM-Rec: Personalized Recommendation via Prompting Large Language Models

Published on Jul 24, 2023 · Featured in Daily Papers on Aug 1, 2023

Abstract

We investigate various prompting strategies for enhancing personalized content recommendation with large language models (LLMs) through input augmentation. Our proposed approach, termed LLM-Rec, encompasses four distinct prompting strategies: (1) basic prompting, (2) recommendation-driven prompting, (3) engagement-guided prompting, and (4) recommendation-driven + engagement-guided prompting. Our experiments show that combining the original content description with the augmented input text generated by the LLM under these prompting strategies improves recommendation performance. This finding highlights the value of diverse prompts and input-augmentation techniques for strengthening the personalized recommendation capabilities of LLMs.
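To make the four strategies concrete, here is a minimal Python sketch of the input-augmentation step, written under stated assumptions: the prompt wordings are illustrative paraphrases rather than the paper's exact templates, and build_prompts, augment, and the generate callable are hypothetical names standing in for any LLM completion call.

from typing import Callable, Sequence


def build_prompts(description: str, engaged_descriptions: Sequence[str]) -> dict:
    """Build the four LLM-Rec prompt variants for one item.

    The wordings are illustrative, not the paper's exact templates.
    """
    engaged = "\n".join(f"- {d}" for d in engaged_descriptions)
    return {
        # (1) basic prompting: rewrite the raw description
        "basic": f"Paraphrase the following item description:\n{description}",
        # (2) recommendation-driven prompting: steer the rewrite toward
        # recommendation use
        "rec_driven": (
            "Write a description of the following item that would help "
            f"recommend it to interested users:\n{description}"
        ),
        # (3) engagement-guided prompting: condition on descriptions of
        # items the target user has engaged with
        "engagement": (
            "Summarize what the target item has in common with the items "
            f"below.\nTarget: {description}\nEngaged items:\n{engaged}"
        ),
        # (4) recommendation-driven + engagement-guided prompting
        "rec_plus_engagement": (
            "Using the engaged items below as context, write a description "
            "of the target item that would help recommend it to interested "
            f"users.\nTarget: {description}\nEngaged items:\n{engaged}"
        ),
    }


def augment(
    description: str,
    engaged_descriptions: Sequence[str],
    generate: Callable[[str], str],
) -> str:
    """Concatenate the original description with the LLM-generated text,
    which is the combination the abstract reports as improving performance."""
    prompts = build_prompts(description, engaged_descriptions)
    generated = [generate(p) for p in prompts.values()]
    return "\n".join([description, *generated])

In this reading, the concatenated text would then be embedded and passed to a downstream recommendation module in place of the original description alone; the paper's experiments compare how each prompt variant affects performance.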

Community

Hi

I am looking to invest in the market. My personal profile is as follows:

Purpose of Investment: Retirement
Time Horizon of Investment: 5 to 10 years
Risk Tolerance: Aggressive
I wish to contribute to companies that engage in sustainable and green practices.
I wish to have some exposure to emerging markets.

Based on my personal profile, could you first recommend a portfolio from the following website, https://portfoliocharts.com/portfolios/, and then fill the selected portfolio with specific asset classes, based on its allocation, in a way that aligns with my profile? Please provide a detailed breakdown of the asset classes and their weighted proportions.
