OmniSafeAI (PKU-Alignment)
university
GitHub: https://github.com/PKU-Alignment
AI & ML interests: none defined yet.
Recent Activity
- jijiaming authored a paper 6 months ago: "ProgressGym: Alignment with a Millennium of Moral Progress"
- XuehaiPan authored a paper about 1 year ago: "BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset"
- calico-1226 authored a paper about 1 year ago: "Safe RLHF: Safe Reinforcement Learning from Human Feedback"
Team members: 4
Models: none public yet
Datasets: 1
- OmniSafeAI/hh-prompts (Viewer) • updated Apr 22, 2023 • 169k • 6 • 1