Segmentation Models
Founded 6 years ago

Tools for image segmentation and more

A set of tools for semantic image segmentation and classification. It includes Python packages with popular neural network architectures implemented in modern deep learning frameworks such as Keras, TensorFlow, and PyTorch. The projects have more than two years of history and over 500K downloads from PyPI in total.

1. Segmentation Models (Keras / TF) & Segmentation Models PyTorch (PyTorch)
A set of popular neural network architectures for semantic segmentation, such as Unet, Linknet, FPN, PSPNet, and DeepLabV3(+), with state-of-the-art encoders (ResNet, ResNeXt, EfficientNet, and others) pretrained on ImageNet. Easy to install and use with your favorite deep learning frameworks and "train-loop" libraries.

https://github.com/qubvel/segmentation_models.pytorch
https://github.com/qubvel/segmentation_models

2. Classification Models (Keras / TF)
An extension of the `keras-applications` package with newly ported classification architectures: ResNet(18, 34), ResNeXt(50, 101), SE-ResNeXt(50, 101), and others.

https://github.com/qubvel/classification_models

3. EfficientNet (Keras / TF)
A reimplementation of the EfficientNet architecture for the Keras and tf.keras framework APIs. Includes converted ImageNet and Noisy-Student weights.

https://github.com/qubvel/efficientnet
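Usage mirrors `keras-applications` (a minimal sketch assuming the `efficientnet` package and TensorFlow are installed; `weights=None` skips the weight download):

```python
import numpy as np
from efficientnet.tfkeras import EfficientNetB0, preprocess_input

# weights='imagenet' or 'noisystudent' would load the converted weights;
# None builds the bare architecture for a quick shape check.
model = EfficientNetB0(weights=None)

x = np.random.randint(0, 255, (1, 224, 224, 3)).astype('float32')
probs = model.predict(preprocess_input(x))  # 1000-way class scores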

4. TTAch (PyTorch) & tta-wrapper (Keras)
Image test-time-augmentation (TTA) wrappers for classification, segmentation, and keypoint models. Wrap your inference model in a few lines of code to add transformations such as horizontal/vertical flip, scale, resize, and crop, and improve your model's performance.

https://github.com/qubvel/ttach
https://github.com/qubvel/tta_wrapper
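The core TTA idea these wrappers automate — augment, predict, de-augment, merge — can be sketched in plain NumPy, independent of either library (`predict` here is a hypothetical stand-in for a real model):

```python
import numpy as np

def predict(image):
    # Stand-in for a segmentation model: any deterministic
    # per-pixel function of the input works for this sketch.
    return image * 0.5

def tta_predict(image):
    """Average predictions over horizontal-flip test-time augmentation."""
    # 1. Predict on the original image.
    original = predict(image)
    # 2. Predict on the horizontally flipped image, then flip the
    #    prediction back so it aligns with the original pixels.
    flipped = predict(image[:, ::-1])[:, ::-1]
    # 3. Merge the aligned predictions (here: mean).
    return (original + flipped) / 2.0

image = np.arange(12, dtype=float).reshape(3, 4)
merged = tta_predict(image)  # same shape as a single prediction
```

With TTAch itself, the equivalent is wrapping the model, e.g. with `tta.SegmentationTTAWrapper` and one of the predefined transform sets, instead of writing the loop by hand.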

To join the projects, write to me in ODS Slack (@qubvel) or by email: qubvel@gmail.com.

We are looking for:
1) Active Keras / TF users to support the projects written in those frameworks
2) TensorRT / OpenVINO / other inference-framework users to push production-ready functionality forward
