Welcome to Xtreme1
Open-source platform for multisensory training data.
You can find our GitHub repository at https://github.com/xtreme1-io/xtreme1 and our cloud version at https://www.basic.ai/.
Introduction
Xtreme1 is the world's first open-source platform for multisensory training data.
Xtreme1 provides tools for data annotation, data curation, and ontology management to address the ML challenges of 2D image and 3D point cloud datasets.
The built-in AI-assisted tools take your annotation efforts to the next level of efficiency for your 2D/3D Object Detection, 3D Instance Segmentation, and LiDAR-Camera Fusion projects.
Key Features
1️⃣ Supports data labeling for images 📷, 3D LiDAR and 2D/3D Sensor Fusion datasets 🚘 🚦 🚷
2️⃣ Built-in pre-labeling and interactive models support 2D/3D object detection, segmentation and classification 🚀
3️⃣ Configurable Ontology Center for general classes (with hierarchies) and attributes for use in your model training 🔖
4️⃣ Data management and quality monitoring 📚
5️⃣ Find and fix labeling errors 🔬
6️⃣ Results visualization to help you evaluate your model 📈
Getting Started
You can install Xtreme1 on a Linux, Windows, or macOS machine.
Prerequisite details and built-in model installation are explained here.
Get started with the Quick Start guide.
The Xtreme1 project is now hosted by the LF AI & Data Foundation as a sandbox project.
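To get a rough feel for a local deployment before reading the Quick Start guide, here is a minimal sketch assuming the Docker Compose based release packages; the release filename, version placeholder, and target directory are illustrative, so check the releases page for the actual artifact name.

```bash
# Minimal sketch of a local deployment; <version> is a placeholder,
# use the latest release tag from the GitHub releases page.
wget https://github.com/xtreme1-io/xtreme1/releases/download/v<version>/xtreme1-v<version>.zip
unzip -d xtreme1 xtreme1-v<version>.zip
cd xtreme1

# Start the bundled services in the background with Docker Compose.
docker compose up -d

# Once the containers are up, open the web UI on the localhost port
# listed in the Quick Start guide and create your first dataset.
```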
Support and Community
Join our community to chat with other members.
Issues: https://github.com/xtreme1-io/xtreme1/issues
Medium: https://medium.com/multisensory-data-training
GitHub: https://github.com/xtreme1-io/xtreme1
Twitter: https://twitter.com/Xtreme1io
Subscribe to our YouTube channel for the latest video tutorials.
Please refer to the Linux Foundation Trademark Usage page to learn about the usage policy and guidelines: https://www.linuxfoundation.org/trademark-usage.