[Paper Reproducibility] Objaverse-LVIS Metric Scale (and Placement) Metadata

#44
by skaramcheti - opened

Hey @mattdeitke - big fan of this work, and very excited for the Objaverse-XL upload! Quick question about reproducing some of the results/behaviors described in the original paper: is there code/metadata for getting the object metric scale (and placement/filtering parameters) following the same process detailed in the Open-Vocabulary ObjectNav experiments (Sec. 4.3)?

We're really interested in using this dataset for some of our ongoing work at Stanford, but not having a rough idea of how to correct for object scale (even roughly) and placement in scenes is a major stumbling block!

Allen Institute for AI org

Hey Sidd, thanks for the kind words :)

Should be able to share this next week - will get back to you when I confirm!

It's been a while, but if I recall correctly, we crowdsourced these annotations from AI2 employees at a category level (e.g., about how large is this object type typically, and where might you find it in a home). We then applied some random jitter to the size annotations for each instance when it is placed in a scene.

Doing annotations like these at an object type level makes it much more manageable.
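The category-level annotation plus per-instance jitter described above might look something like this minimal sketch (the category names, sizes, and the ±20% jitter range are illustrative assumptions, not the actual AI2 annotations or parameters):

```python
import random

# Hypothetical category-level annotations: typical largest side length
# of the object's bounding box, in meters (values are made up).
CATEGORY_SCALE_M = {
    "sofa": 2.0,
    "mug": 0.1,
}

def sample_instance_scale(category, jitter=0.2, rng=random):
    """Sample a per-instance metric size by jittering the category
    annotation multiplicatively (here, uniformly within +/- jitter)."""
    base = CATEGORY_SCALE_M[category]
    return base * rng.uniform(1.0 - jitter, 1.0 + jitter)
```

Annotating once per category and jittering per instance keeps the labeling effort proportional to the number of categories rather than the number of assets, while still giving size variety across scenes.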

Allen Institute for AI org

For placement, might help to check out the annotations I collected for ProcTHOR with the original AI2-THOR assets: https://vigorous-lamarr-a0a9c0.netlify.app/annotations

The ProcTHOR paper has lots of details on placing objects into the scenes once these annotations are available at a category level. (Check out the appendix for most of these types of details, which I followed in Section 4.3 here.)

Hey Matt - totally understood! Category-level annotations make a ton of sense - would love it if you could share the crowdsourced labels if possible (that would really help us out). Will definitely check out the ProcTHOR paper and annotations in the meantime.

Really appreciate it!

Allen Institute for AI org

Okay, check this out for the annotations!
https://docs.google.com/spreadsheets/d/1dRUKadvw5DyL44yh5nTo0Yc6cllFMSORihkEHq4QtYw/edit?usp=sharing

For the "scale", we annotate the largest side length of the bounding box (e.g., if a sofa is annotated as 2 meters and an instance of the sofa has a bounding box of 30m x 60m x 20m, it is scaled to 1m x 2m x 2/3m). This avoids depending on the yaw rotation of the objects, which may be inconsistent (and the scales are quite arbitrary on most of the 3D modeled objects).
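The rescaling rule above can be sketched as a uniform scaling that normalizes the longest bounding-box side to the annotated metric length (the function name is hypothetical; it just implements the sofa example from this thread):

```python
def rescale_to_annotated_size(bbox_extent, annotated_largest_side_m):
    """Uniformly rescale a bounding box so that its largest side
    matches the annotated metric size for the category.

    bbox_extent: (x, y, z) side lengths of the raw asset's bounding box
    annotated_largest_side_m: annotated largest side length, in meters
    """
    factor = annotated_largest_side_m / max(bbox_extent)
    return tuple(side * factor for side in bbox_extent)

# The sofa example: a raw 30 x 60 x 20 box annotated at 2 m
# becomes 1 m x 2 m x 2/3 m.
sofa = rescale_to_annotated_size((30.0, 60.0, 20.0), 2.0)
```

Because the scale factor is applied uniformly to all three axes, the result is the same regardless of how the object happens to be rotated about its up axis.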

Also note that in Objaverse 1.0, most objects that are supposed to sit on the floor are already oriented correctly, such that their bottom is on the ground. I haven't confirmed this for Objaverse-XL, but it'd surprise me if it weren't the case, as most 3D modeling programs record the up axis (i.e., either y or z) in the exported object's metadata.

Let me know if you have any questions on it!
