About

AstroVision is a first-of-a-kind, large-scale dataset of real small body images from both legacy and ongoing deep space missions. It currently features 115,970 densely annotated, real images of sixteen small bodies from eight missions. AstroVision was developed to facilitate the study of computer vision and deep learning for autonomous navigation in the vicinity of a small body, with special emphasis on the training and evaluation of deep learning-based keypoint detection and feature description methods.

If you find our datasets useful for your research, please cite the AstroVision paper:

@article{driver2023astrovision,
  title={{AstroVision}: Towards Autonomous Feature Detection and Description for Missions to Small Bodies Using Deep Learning},
  author={Driver, Travis and Skinner, Katherine and Dor, Mehregan and Tsiotras, Panagiotis},
  journal={Acta Astronautica: Special Issue on AI for Space},
  year={2023},
  volume={210},
  pages={393--410}
}

Please make sure to like the repository to show your support!

Data format

Following the popular COLMAP data format, each data segment contains the files images.bin, cameras.bin, and points3D.bin, which contain the camera extrinsics and keypoints, camera intrinsics, and 3D point cloud data, respectively.

  • cameras.bin encodes a dictionary of camera_id and Camera pairs. Camera objects are structured as follows:

    • Camera.id: defines the unique (and possibly noncontiguous) identifier for the Camera.
    • Camera.model: the camera model. We utilize the "PINHOLE" camera model, as AstroVision contains undistorted images.
    • Camera.width & Camera.height: the width and height of the sensor in pixels.
    • Camera.params: the list of camera parameters (intrinsics). For the "PINHOLE" camera model, params = [fx, fy, cx, cy], where fx and fy are the focal lengths in $x$ and $y$, respectively, and (cx, cy) is the principal point of the camera.
  • images.bin encodes a dictionary of image_id and Image pairs. Image objects are structured as follows:

    • Image.id: defines the unique (and possibly noncontiguous) identifier for the Image.
    • Image.tvec: $\mathbf{r}^{\mathcal{C}_i}_{\mathrm{BC}_i}$, i.e., the relative position of the origin of the camera frame $\mathcal{C}_i$ with respect to the origin of the body-fixed frame $\mathcal{B}$, expressed in the $\mathcal{C}_i$ frame.
    • Image.qvec: $\mathbf{q}_{\mathcal{C}_i\mathcal{B}}$, i.e., the relative orientation of the camera frame $\mathcal{C}_i$ with respect to the body-fixed frame $\mathcal{B}$. The user may call Image.qvec2rotmat() to get the corresponding rotation matrix $R_{\mathcal{C}_i\mathcal{B}}$.
    • Image.camera_id: the identifier for the camera that was used to capture the image.
    • Image.name: the name of the corresponding file, e.g., 00000000.png.
    • Image.xys: contains all of the keypoints $\mathbf{p}^{(i)}_k$ in image $i$, stored as an ($N$, 2) array. In our case, the keypoints are the forward-projected model vertices.
    • Image.point3D_ids: stores the point3D_id for each keypoint in Image.xys, which can be used to fetch the corresponding point3D from the points3D dictionary.
  • points3D.bin encodes a dictionary of point3D_id and Point3D pairs. Point3D objects are structured as follows:

    • Point3D.id: defines the unique (and possibly noncontiguous) identifier for the Point3D.
    • Point3D.xyz: the 3D coordinates of the landmark in the body-fixed frame, i.e., $\mathbf{\ell}_k^{\mathcal{B}}$.
    • Point3D.image_ids: the IDs of the images in which the landmark was observed.
    • Point3D.point2D_idxs: the indices into Image.xys that correspond to the landmark observations, i.e., xy = images[Point3D.image_ids[k]].xys[Point3D.point2D_idxs[k]] for some index k.

These three data containers, along with the ground truth shape model, completely describe the scene.
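For quick access, the segments can be parsed with COLMAP's standard Python I/O helpers. The following is a minimal sketch, assuming COLMAP's read_write_model.py script (from scripts/python in the COLMAP repository) is importable and that a segment has been extracted to a placeholder directory segment/:

# Minimal sketch: parse one AstroVision data segment using COLMAP's
# read_write_model.py helpers. "segment/" is a placeholder path.
from read_write_model import (
    read_cameras_binary,
    read_images_binary,
    read_points3D_binary,
)

cameras = read_cameras_binary("segment/cameras.bin")      # camera_id  -> Camera
images = read_images_binary("segment/images.bin")         # image_id   -> Image
points3D = read_points3D_binary("segment/points3D.bin")   # point3D_id -> Point3D

# Pose and intrinsics of an example image.
image = next(iter(images.values()))
camera = cameras[image.camera_id]
fx, fy, cx, cy = camera.params      # "PINHOLE" intrinsics
R_CB = image.qvec2rotmat()          # rotation matrix R_{C_i B}
tvec = image.tvec                   # translation vector, as defined above

# Pair each keypoint with its landmark (COLMAP marks keypoints without an
# associated 3D point with an ID of -1).
for k, point3D_id in enumerate(image.point3D_ids):
    if point3D_id == -1:
        continue
    xy = image.xys[k]                       # keypoint in pixels
    ell_B = points3D[int(point3D_id)].xyz   # landmark in the body-fixed frame

Equivalently, read_model("segment", ext=".bin") from the same script loads all three dictionaries in one call.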

In addition to the scene geometry, each image is annotated with a landmark map, a depth map, and a visibility mask.

  • The landmark map provides a consistent, discrete set of reference points for sparse correspondence computation and is derived by forward-projecting vertices from a medium-resolution (i.e., $\sim$ 800k facets) shape model onto the image plane; a short projection sketch is given after the note below. We classify visible landmarks by tracing rays (via the Trimesh library) from the landmarks toward the camera origin and recording the landmarks whose line-of-sight ray does not intersect the 3D model.
  • The depth map provides a dense representation of the imaged surface and is computed by backward-projecting rays at each pixel in the image and recording the depth of the intersection between the ray and a high-resolution (i.e., $\sim$ 3.2 million facets) shape model.
  • The visibility mask provides an estimate of the non-occluded portions of the imaged surface.

Note: Instead of the traditional $z$-depth parametrization used for depth maps, we use the absolute depth, similar to the inverse depth parametrization.
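To illustrate how the annotations relate to the scene geometry, the sketch below forward-projects a single landmark into an image with the "PINHOLE" model and contrasts the absolute depth with the conventional $z$-depth. It reuses the cameras, images, and points3D dictionaries from the previous sketch and assumes the standard COLMAP world-to-camera convention (camera-frame coordinates obtained as $R_{\mathcal{C}_i\mathcal{B}}\,\mathbf{\ell}_k^{\mathcal{B}}$ plus Image.tvec); the absolute depth is taken here to be the Euclidean range from the camera origin to the surface point.

import numpy as np

# Pick an arbitrary image and one of its keypoint/landmark pairs.
image = next(iter(images.values()))
camera = cameras[image.camera_id]
fx, fy, cx, cy = camera.params

k = int(np.flatnonzero(image.point3D_ids != -1)[0])   # first valid keypoint
pid = int(image.point3D_ids[k])
ell_B = points3D[pid].xyz                             # landmark, body-fixed frame

# Transform into the camera frame (assumed COLMAP convention) and project
# through the "PINHOLE" model.
x_C = image.qvec2rotmat() @ ell_B + image.tvec
u = fx * x_C[0] / x_C[2] + cx
v = fy * x_C[1] / x_C[2] + cy
# (u, v) should land near the stored keypoint image.xys[k].

# Depth conventions: z-depth vs. the absolute (range) depth stored in the
# AstroVision depth maps.
z_depth = x_C[2]
absolute_depth = np.linalg.norm(x_C)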
