# Welcome to Xtreme1

{% hint style="info" %}
You can find our GitHub repo at <https://github.com/xtreme1-io/xtreme1> and our cloud version at <https://www.basic.ai/>.
{% endhint %}

## Introduction

Xtreme1 is the world's first open-source platform for **multisensory training data**.

Xtreme1 provides deep insight into data annotation, data curation, and ontology management to address the ML challenges of 2D image and 3D point cloud datasets.

The built-in AI-assisted tools take your annotation efforts to the next level of efficiency for your **2D/3D Object Detection**, **3D Instance Segmentation**, and **LiDAR-Camera Fusion** projects.

## Key Features

| Image Annotation (B-box, Segmentation) - [YOLOR](https://github.com/WongKinYiu/yolor) & [RITM](https://github.com/saic-vul/ritm_interactive_segmentation) |                                  Lidar-camera Fusion (Frame series) Annotation - [OpenPCDet](https://github.com/open-mmlab/OpenPCDet) & [AB3DMOT](https://github.com/xinshuoweng/AB3DMOT)                                  |
| :-------------------------------------------------------------------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
|                                                                                                                                                           | ![](https://2222059734-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FgZbaVXXtfTXMMcqdnKWV%2Fuploads%2FvxItLYh7miaow5Ht4t5o%2F2d-seg-model.gif?alt=media\&token=c984c09e-54e9-468a-8432-1e2e37f37e43) |

:one: Supports data labeling for images :camera:, 3D LiDAR and 2D/3D Sensor Fusion datasets :oncoming\_automobile: :vertical\_traffic\_light: :no\_pedestrians:

:two: Built-in pre-labeling and interactive models support 2D/3D object detection, segmentation and classification :rocket:

:three: Configurable Ontology Center for general classes (with hierarchies) and attributes for use in your model training :bookmark:

:four: Data management and quality monitoring :books:

:five: Find and fix labeling errors :microscope:

:six: Results visualization to help you to evaluate your model :chart\_with\_upwards\_trend:

|                                                                   3D Point Cloud Cuboid Annotation - [OpenPCDet](https://github.com/open-mmlab/OpenPCDet)                                                                   |                                                            2D & 3D Fusion Object Tracking Annotation - [AB3DMOT](https://github.com/xinshuoweng/AB3DMOT)                                                            |
| :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| ![](https://2222059734-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FgZbaVXXtfTXMMcqdnKWV%2Fuploads%2FphE8JtjiLXjKzMV8sFyq%2F3d-annotation.gif?alt=media\&token=4082ecce-1928-46bf-8d99-9315a4ed7aae) | ![](https://2222059734-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FgZbaVXXtfTXMMcqdnKWV%2Fuploads%2F4BhBaQw2GabvX79n8Vf1%2Fimage.png?alt=media\&token=1d11f920-d7ce-4f07-b663-51cecf0ef003) |

## Getting Started

You can install Xtreme1 on a Linux, Windows, or macOS machine.

[**Prerequisite details and built-in model installation are explained here**](https://docs.xtreme1.io/xtreme1-docs/broken-reference)**.**
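Since the quick start below launches Xtreme1 with Docker Compose, a minimal local check before you begin might look like this (a sketch only; the exact version requirements are listed on the prerequisites page):

```bash
# Verify Docker and the Compose plugin are installed
docker --version
docker compose version

# Confirm the Docker daemon is running
docker info > /dev/null && echo "Docker daemon is up"
```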

Get started with the [**Quick Start**](https://docs.xtreme1.io/xtreme1-docs/broken-reference):

```bash
# Download and unpack the release package
wget https://github.com/xtreme1-io/xtreme1/releases/download/v0.7.2/xtreme1-v0.7.2.zip
unzip -d xtreme1-v0.7.2 xtreme1-v0.7.2.zip
cd xtreme1-v0.7.2

# Start all services
docker compose up
```
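Once the containers are up, you can check their status from the same directory; the web UI is then served on a local port (the port below is an assumption, so confirm it against the port mappings in the bundled docker-compose.yml):

```bash
# List the Xtreme1 services started by Compose and their status
docker compose ps

# Probe the frontend; 8190 is assumed here, check docker-compose.yml for the actual mapping
curl -I http://localhost:8190
```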

{% hint style="info" %}
The Xtreme1 project is now hosted by the [LF AI & Data Foundation](https://lfaidata.foundation/) as a sandbox project.
{% endhint %}

<figure><img src="https://2222059734-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FgZbaVXXtfTXMMcqdnKWV%2Fuploads%2Fr14zvt25COCUK7emMJUP%2Flf_x1.png?alt=media&#x26;token=9b109f26-a7b0-4c0c-b82c-7106c53b95d7" alt=""><figcaption><p>Xtreme1, the First Open-Source Labeling &#x26; Annotation and Visualization Project, makes its debut in the Linux Foundation AI &#x26; Data Global Landscape</p></figcaption></figure>

## Support and Community

Join our community to chat with other members.

Issues: <https://github.com/xtreme1-io/xtreme1/issues>

Medium: <https://medium.com/multisensory-data-training>

GitHub: <https://github.com/xtreme1-io/xtreme1>

Twitter: <https://twitter.com/Xtreme1io>

Subscribe to the latest video tutorials on our [YouTube](https://www.youtube.com/@xtreme1ai) channel.

## Quick Links

{% content-ref url="product-guides/lidar-annotation-tool" %}
[lidar-annotation-tool](https://docs.xtreme1.io/xtreme1-docs/product-guides/lidar-annotation-tool)
{% endcontent-ref %}

## Learn More

Please refer to the Linux Foundation Trademark Usage page to learn about the usage policy and guidelines: <https://www.linuxfoundation.org/trademark-usage>.

