Jingcheng Hu (reign12)
AI & ML interests: Foundation models and alignment
reign12's activity
Add paper link
#3 opened 5 months ago by AdinaY
33B when?
2 replies · #8 opened about 1 year ago by nova434431
Question about evaluating this reward model on Anthropic/hh-rlhf
1 reply · #4 opened over 1 year ago by songff
More details on training data for reward model
#2 opened about 1 year ago by reign12
How is this dataset filtered?
#1 opened over 1 year ago by reign12
How did you manage to collect so much high-quality data?
3 replies · #1 opened over 1 year ago by leonall