Welcome to Xtreme1
Open-source platform for multisensory training data.
You can find our GitHub repository at https://github.com/xtreme1-io/xtreme1 and our cloud version at https://www.basic.ai/.
Xtreme1 is the world's first open-source platform for multisensory training data.
Xtreme1 provides deep insight into data annotation, data curation, and ontology management to help you solve machine learning challenges on 2D image and 3D point cloud datasets.
The built-in AI-assisted tools take your annotation efforts to the next level of efficiency for your 2D/3D Object Detection, 3D Instance Segmentation, and LiDAR-Camera Fusion projects.
Supports data labeling for image, 3D LiDAR, and 2D/3D sensor fusion datasets
Built-in pre-labeling and interactive models support 2D/3D object detection, segmentation and classification
Configurable Ontology Center for general classes (with hierarchies) and attributes for use in your model training
Data management and quality monitoring
Find and fix labeling errors
Results visualization to help you evaluate your model
You can install Xtreme1 on a Linux, Windows, or macOS machine.
Prerequisite details and built-in model installation are explained here.
Get started with the Quick Start guide.
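As a rough sketch only (not the authoritative instructions), a Docker Compose based setup typically looks like the commands below; the release version, file name, and port are placeholders, so follow the Quick Start and the GitHub releases page for the exact steps.

```bash
# Hypothetical example: download a release package and start the services
# with Docker Compose. The version, file name, and port are placeholders;
# check the Quick Start guide and the releases page for the real values.
wget https://github.com/xtreme1-io/xtreme1/releases/download/vX.Y.Z/xtreme1-vX.Y.Z.zip
unzip xtreme1-vX.Y.Z.zip -d xtreme1 && cd xtreme1

# Pull the images and start all services in the background.
docker compose up -d

# When the containers are healthy, open the web UI in your browser
# at the port documented in the Quick Start (e.g. http://localhost:8190).
```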
The Xtreme1 project is now hosted by the LF AI & Data Foundation as a sandbox project.
Join our community to chat with other members.
Issues: https://github.com/xtreme1-io/xtreme1/issues
Medium: https://medium.com/multisensory-data-training
GitHub: https://github.com/xtreme1-io/xtreme1
Twitter: https://twitter.com/Xtreme1io
Subscribe to the latest video tutorials on our YouTube channel
Please refer to the Linux Foundation Trademark Usage page to learn about the usage policy and guidelines: https://www.linuxfoundation.org/trademark-usage.
Featured video tutorials:
3D Point Cloud Cuboid Annotation - OpenPCDet
2D & 3D Fusion Object Tracking Annotation - AB3DMOT