# Open Source Dataset

Are you training Vision Foundation Models, World Models, or Reinforcement Learning policies for robotics? Download our open source dataset of 3D maps and start building! The dataset comprises 1,000 diverse scenes and 580k+ high-resolution images with rich metadata, for a total size of 3.9 TB. It is one of the largest open source datasets in computer vision, with 1k+ GPU hours spent on post-processing.

The dataset includes:

* Raw pictures
* EXIF, IMU, SLAM and GPS Data
* Camera poses and sparse point clouds computed with COLMAP and pixSfM
* Gaussian Splat representations
* YOLO11 object categories for each image
* Qwen3-VL generated semantic metadata, including scene type, lighting, weather, and crowd density
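The semantic metadata above lends itself to scene-level filtering. As a minimal sketch, the snippet below selects scenes by the Qwen3-VL fields named in the list (scene type, lighting, weather, crowd density); the on-disk layout, key names, and sample values are hypothetical assumptions, not the dataset's actual schema:

```python
# Hypothetical sketch: filter scene records by Qwen3-VL semantic metadata.
# Key names and values below are assumptions for illustration only.

def filter_scenes(records, **criteria):
    """Keep records whose metadata matches every key=value criterion."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

# Invented sample records standing in for the real per-scene metadata.
sample = [
    {"scene": "plaza_001", "scene_type": "urban", "lighting": "daylight",
     "weather": "clear", "crowd_density": "high"},
    {"scene": "trail_042", "scene_type": "natural", "lighting": "overcast",
     "weather": "rain", "crowd_density": "low"},
]

print(filter_scenes(sample, scene_type="urban", weather="clear"))
```

The same pattern extends to any of the metadata listed above once you have loaded it from the dataset's actual files.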

You can browse our [**interactive preview gallery**](https://jumpin.ovr.ai/datasetmaps) showcasing rendered videos of all 1,000 3D Gaussian Splatting reconstructions. This visual index lets you explore the diversity and quality of the scenes, from bustling urban streets to serene natural landscapes, and identify the most relevant samples for your research. Note: each Gaussian Splat reconstruction was generated using only 300 input images per scene.

[Access the OverMaps\_1k dataset on Hugging Face](https://huggingface.co/datasets/OverTheReality/OverMaps_1k)
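Before pulling the full 3.9 TB, you can inspect the dataset's file listing programmatically. A minimal stdlib-only sketch using the public Hugging Face Hub REST API is shown below (the endpoint shape follows the Hub's documented dataset API; the network call itself requires internet access, and tools like `huggingface_hub.snapshot_download` are the more common route for actual downloads):

```python
import json
import urllib.request

REPO_ID = "OverTheReality/OverMaps_1k"

def dataset_info_url(repo_id: str) -> str:
    # Hugging Face Hub REST endpoint for dataset metadata (tags, file list).
    return f"https://huggingface.co/api/datasets/{repo_id}"

def fetch_info(repo_id: str) -> dict:
    # Performs the actual HTTP call; requires network access.
    with urllib.request.urlopen(dataset_info_url(repo_id)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(dataset_info_url(REPO_ID))
```

From the returned metadata you can decide which files (e.g. only the poses or only the splats) are worth fetching for your use case.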

Need even more data? Get in touch on our social channels or write to us directly at <data@ovr.ai>.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.overthereality.ai/over-wiki/physical-ai/open-source-dataset.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question, along with relevant excerpts and sources from the documentation.
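As a sketch of the request described above, the snippet below builds the percent-encoded URL with the standard library and issues the GET; the example question is illustrative, and the live call naturally requires network access:

```python
import urllib.request
from urllib.parse import urlencode

DOC_URL = "https://docs.overthereality.ai/over-wiki/physical-ai/open-source-dataset.md"

def ask_url(question: str) -> str:
    # Percent-encode the natural-language question into the `ask` parameter.
    return f"{DOC_URL}?{urlencode({'ask': question})}"

def ask(question: str) -> str:
    # Issue the GET request and return the response body (needs network access).
    with urllib.request.urlopen(ask_url(question)) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    print(ask_url("How many images does each scene contain?"))
```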

Use this mechanism when the answer is not explicitly present in the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
