[](https://pypi.org/project/CoralNet-Toolbox)
| 📝 Annotation | 🧠 AI-Powered | 📊 Complete Workflow |
|:--:|:--:|:--:|
| Create patches, rectangles, and polygons with AI assistance | Leverage SAM, YOLOE, and various foundation models | From data collection to model training and deployment |
📦 Quick Start
Running the following command will install the coralnet-toolbox, which you can then run from the command line:

```bash
# Install
pip install coralnet-toolbox

# Run
coralnet-toolbox
```
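If you want to confirm the install from a script, the distribution version can be queried with the Python standard library; this is a generic check, not part of the toolbox's own API:

```python
from importlib import metadata


def installed_version(dist_name):
    """Return the installed version string, or None if the package is absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None


print(installed_version("coralnet-toolbox"))
```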
📚 Guides
For further instructions, please see the following guides:
🎥 Watch the Video Demos
⏩ TL;DR
The CoralNet-Toolbox is an unofficial codebase that can be used to augment processes associated with those on CoralNet. It uses ✨ Ultralytics 🚀 as a base, which is an open-source library for computer vision and deep learning built in PyTorch. For more information on their AGPL-3.0 license, see here.
🚀 Supported Models
The toolbox integrates a variety of state-of-the-art models to help you create rectangle and polygon annotations efficiently. Below is a categorized overview of the supported models and frameworks:
| Category | Models |
|----------|--------|
| **Trainable** | 🦾 [YOLOv3](https://docs.ultralytics.com/models/) <br> 🦊 [YOLOv4](https://docs.ultralytics.com/models/) <br> 🦅 [YOLOv5](https://docs.ultralytics.com/models/) <br> 🔬 [YOLOv6](https://docs.ultralytics.com/models/) <br> 🐢 [YOLOv7](https://docs.ultralytics.com/models/) <br> 🚀 [YOLOv8](https://docs.ultralytics.com/models/) <br> 🌟 [YOLOv9](https://docs.ultralytics.com/models/) <br> 🦉 [YOLOv10](https://docs.ultralytics.com/models/) <br> 🐍 [YOLO11](https://docs.ultralytics.com/models/) <br> 🌳 [YOLO12](https://docs.ultralytics.com/models/) |
| **Segment Anything** | 🪸 [SAM](https://github.com/facebookresearch/segment-anything) <br> 🐠 [CoralSCOP](https://github.com/zhengziqiang/CoralSCOP) <br> ⚡ [FastSAM](https://github.com/CASIA-IVA-Lab/FastSAM) <br> 🚀 [RepViT-SAM](https://github.com/THU-MIG/RepViT) <br> ✂️ [EdgeSAM](https://github.com/chongzhou96/EdgeSAM) <br> 📱 [MobileSAM](https://github.com/ChaoningZhang/MobileSAM) |
| **Visual Prompting** | 👁️ [YOLOE](https://github.com/THU-MIG/yoloe) <br> 🤖 [AutoDistill](https://github.com/autodistill): <br> • 🦕 Grounding DINO <br> • 🦉 OWLViT <br> • ⚡ OmDetTurbo |
These models enable fast, accurate, and flexible annotation workflows for a wide range of use cases, including patch-based image classification, object detection, and instance segmentation.
| **Patch Annotation** | **Rectangle Annotation** | **(Multi) Polygon Annotation** |
|:--:|:--:|:--:|
| **Image Classification** | **Object Detection** | **Instance Segmentation** |
| **Segment Anything (SAM)** | **Polygon Classification** | **Region-based Detection** |
| **Cut** | **Combine** | **Simplify** |
| **See Anything (YOLOE)** | **Patch-based LAI Classification** | **Video Inference** |
</div>
Enhance your CoralNet experience with these tools:
- 📥 Download: Retrieve source data (images and annotations) from CoralNet
- 🎬 Rasters: Import images, or extract frames directly from video files
- ✏️ Annotate: Create annotations freely
- 👁️ Visualize: See CoralNet and CPCe annotations superimposed on images
- 🔬 Sample: Sample patches using various methods (Uniform, Random, Stratified)
- 🧩 Patches: Create patches (points)
- 🔳 Rectangles: Create rectangles (bounding boxes)
- 🔣 Polygons: Create polygons (instance masks)
- 👨‍👩‍👧‍👦 MultiPolygons: Combine multiple, non-overlapping polygons (i.e., genets)
- ✂️ Edit: Cut and Combine polygons and rectangles
- 🦾 SAM: Use FastSAM, CoralSCOP, RepViT-SAM, EdgeSAM, MobileSAM, and SAM to create polygons
- 👷‍♀️ Work Areas: Perform region-specific detections / segmentations with any model
- 👀 YOLOE (See Anything): Automatically detect similar-appearing objects using visual prompts
- 🧪 AutoDistill: Use AutoDistill to create rectangles and polygons with Grounding DINO, OWLViT, and OmDetTurbo
- 🎻 Tune: Tune hyperparameters to identify ideal training conditions
- 🧠 Train: Build local patch-based classifiers, object detection, and instance segmentation models
- 🔮 Deploy: Use trained models for predictions
- 📊 Evaluation: Evaluate model performance
- 🚀 Optimize: Productionize models for faster inferencing
- ⚙️ Batch Inference: Perform predictions on multiple images, automatically
- 🎞️ Video Inference: Perform predictions on a video in real time, and record the output and analytics
- ↔️ I/O: Import and Export annotations from / to CoralNet, Viscore, and TagLab
  - Export annotations as GeoJSONs or segmentation masks
- 📸 YOLO: Import and Export YOLO datasets for machine learning
- 🧱 Tile Dataset: Tile existing Detection / Segmentation datasets
- 🏗️ Tile Inference: Pre-compute multiple work areas for an entire image
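The three sampling methods listed above (Uniform, Random, Stratified) can be sketched in plain Python; the function names and grid parameters below are illustrative, not the toolbox's actual implementation:

```python
import random


def uniform_points(width, height, rows, cols):
    """Evenly spaced grid of sample points (uniform sampling)."""
    xs = [int((c + 0.5) * width / cols) for c in range(cols)]
    ys = [int((r + 0.5) * height / rows) for r in range(rows)]
    return [(x, y) for y in ys for x in xs]


def random_points(width, height, n, seed=0):
    """n points drawn uniformly at random over the image (random sampling)."""
    rng = random.Random(seed)
    return [(rng.randrange(width), rng.randrange(height)) for _ in range(n)]


def stratified_points(width, height, rows, cols, per_cell, seed=0):
    """Random points drawn separately within each grid cell (stratified sampling)."""
    rng = random.Random(seed)
    pts = []
    for r in range(rows):
        for c in range(cols):
            for _ in range(per_cell):
                x = rng.randrange(c * width // cols, (c + 1) * width // cols)
                y = rng.randrange(r * height // rows, (r + 1) * height // rows)
                pts.append((x, y))
    return pts
```

Stratified sampling keeps the per-region coverage of a grid while retaining the randomness of pure random sampling.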
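For the YOLO import/export step, the core conversion is between pixel-space corner boxes and the normalized center-x / center-y / width / height values that YOLO label files use. A minimal sketch of that conversion (class indices and file layout omitted):

```python
def to_yolo(x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space corner box to normalized YOLO xywh."""
    cx = (x1 + x2) / 2 / img_w
    cy = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return cx, cy, w, h


def from_yolo(cx, cy, w, h, img_w, img_h):
    """Invert the conversion back to pixel-space corners."""
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    return x1, y1, x1 + w * img_w, y1 + h * img_h
```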
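The GeoJSON export follows RFC 7946, which requires polygon rings to be closed (first point repeated last). A hedged sketch of wrapping one polygon annotation as a Feature; the `Label` property key is illustrative, not necessarily the key the toolbox writes:

```python
import json


def polygon_to_geojson(points, label):
    """Wrap a polygon annotation as a GeoJSON Feature.

    GeoJSON rings must be closed, so the first point is repeated at the
    end. The 'Label' property key is an illustrative choice.
    """
    ring = [list(p) for p in points] + [list(points[0])]
    return {
        "type": "Feature",
        "geometry": {"type": "Polygon", "coordinates": [ring]},
        "properties": {"Label": label},
    }


feature = polygon_to_geojson([(0, 0), (10, 0), (10, 10), (0, 10)], "coral")
print(json.dumps(feature))
```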
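Both tiling features reduce to computing overlapping windows over an image. A sketch under assumed tile-size and overlap parameters, where edge windows are shifted back inside the image rather than padded (not the toolbox's actual tiling code):

```python
def tile_windows(img_w, img_h, tile, overlap):
    """Return (x1, y1, x2, y2) windows covering the image.

    Stride = tile - overlap; the last row/column of windows is shifted
    back so every window stays inside the image bounds.
    """
    stride = tile - overlap
    xs = list(range(0, max(img_w - tile, 0) + 1, stride))
    ys = list(range(0, max(img_h - tile, 0) + 1, stride))
    if xs[-1] + tile < img_w:
        xs.append(img_w - tile)
    if ys[-1] + tile < img_h:
        ys.append(img_h - tile)
    return [(x, y, x + tile, y + tile) for y in ys for x in xs]
```

For example, a 100×100 image with 64-pixel tiles and 16 pixels of overlap yields four windows, with the right/bottom windows pulled back to end exactly at the image edge.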
📝 TODO