TagLab: an image segmentation tool oriented to marine data analysis

Overview

TagLab was created to support the annotation of ortho-maps of benthic communities and the extraction of statistical data from them. The tool includes several CNN-based segmentation networks, specially trained either for agnostic recognition of corals (contours only) or for semantic recognition (contours plus species). TagLab is an ongoing project of the Visual Computing Lab, http://vcg.isti.cnr.it/.

Interaction

TagLab allows you to:

  • zoom and navigate a large map (zoom: mouse wheel; pan: 'Move' tool selected + left button). With any other tool selected, pan is activated with ctrl + left button
  • segment coral instances in a semi-automatic way by simply clicking at the corals' extreme points. This is achieved using the Deep Extreme Cut network fine-tuned on coral images. Deep Extreme Cut original code: https://github.com/scaelles/DEXTR-PyTorch/
  • assign a class with the 'Assign class' tool. Area and perimeter are now displayed in the segmentation info panel on the right
  • simultaneously turn off the visibility of one or more classes (ctrl + left button disables all but the selected class; shift + left button is the inverse operation), and change the class transparency using the slider above
  • perform boolean operations between existing labels (right button to open the menu)
  • refine the incorrect borders automatically with the Refine operation or manually with the 'Edit Border' tool
  • track coral changes across different time intervals
  • import depth information of the seafloor
  • import GeoTiff
  • draw coral internal cracks with the 'Create Crack' tool
  • make freehand measurements or measure the distance between centroids (Ruler tool).
  • save the annotations (as polygons) and import them into a new project
  • export a CSV table containing the data of each coral colony (a small Python example of reading the exported table back is shown after this list)
  • export a JPG file with the labels rendered fully opaque on a black background
  • export shapefiles
  • export a new dataset and train your network (!)
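
Once a CSV table has been exported, it can be inspected with a few lines of Python. The sketch below is only an illustration: the file name is a placeholder and the available columns depend on your project (pandas is assumed to be installed).

import pandas as pd

# Load the table exported by TagLab (the file name here is hypothetical)
colonies = pd.read_csv("coral_colonies.csv")

# List the exported attributes and print basic statistics for the numeric columns
print(colonies.columns.tolist())
print(colonies.describe())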

We are working hard to create a web site with detailed instructions about TagLab. Stay tuned(!)

Supported Platforms and Requirements

TagLab runs on Linux, Windows, and MacOS. To run TagLab, the main requirement is just Python 3.6.x or 3.7.x.

GPU-accelerated computations are not supported on MacOS or on any machine without an NVIDIA graphics card. To use them, you need to install the NVIDIA CUDA Toolkit (versions 9.2, 10.1, and 10.2 are supported). If you don't have an NVIDIA graphics card (or if you use MacOS), the CPU will be used.
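
Once the installation described below is complete, a quick way to check which device will actually be used is to query PyTorch from a Python prompt (a minimal sketch, assuming the torch package installed by TagLab):

import torch

# True if an NVIDIA GPU and a compatible CUDA toolkit are visible to PyTorch;
# otherwise TagLab's networks will run on the CPU.
print(torch.cuda.is_available())
print(torch.device("cuda" if torch.cuda.is_available() else "cpu"))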

Installing TagLab

Step 0: Dependencies

Before installing TagLab, be sure to have Python 3.6.x or 3.7.x installed and, if your machine supports it, the NVIDIA CUDA Toolkit. You can check whether they are properly installed by running the following commands in a shell (bash on Linux, PowerShell on Windows):

python3 --version
nvcc --version

If Python and CUDA are properly installed, both commands will print their versions.

Under Linux, if you don't use the APT package manager (i.e., a distribution not derived from Ubuntu or Debian), be sure to install the GDAL library manually. The same applies under MacOS if you don't use the Homebrew package manager. In both cases the command gdal-config --version should output the GDAL library version.

Under MacOS and Linux, cmake and a C++ compiler must also be installed.
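
On most systems you can verify that they are available with:

cmake --version
g++ --version

(under MacOS the default compiler is usually clang++, so check clang++ --version instead).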

Step 1: Clone the repository

Just click on the "Clone or Download" button at the top of this page and unzip the whole package in a folder of your choice.

Step 2: Install all the dependencies

Then open a shell (not a Python prompt!), change directory to the TagLab main directory, and run:

python3 install.py

or, on Windows:

python.exe install.py

The script will automatically install the remaining libraries required by TagLab and download the network weights. If the NVIDIA CUDA Toolkit is not supported by your machine, the script will ask to install the CPU version. You can bypass this step and force the installation of the CPU version directly by running:

python3 install.py cpu

or, on Windows:

python.exe install.py cpu

Step 3: Run

Just start TagLab.py from a command shell or your preferred Python IDE.

From a command shell simply write:

python3 taglab.py

or, on Windows:

python.exe taglab.py

To test if TagLab works correctly, try to open the sample project available in the projects folder.

Updating TagLab

If you have already installed TagLab and need to update it to a new version, just run the update.py script:

python3 update.py

or, on Windows:

python.exe update.py

The script will automatically update TagLab to the newest version available in this repository.

Updating from 0.2

If you are updating TagLab from version 0.2, please run the update.py script twice in order to also download the new networks:

python3 update.py
python3 update.py

Comments

  • Windows DLL Load Failure

    I am trying to run TagLab on a brand new installation of Windows 10 running Python 3.7.9 and CUDA 10.2. I have tried removing and reinstalling all packages to no avail. install.py runs fine. When running TagLab.py I get:

    Traceback (most recent call last):
      File "TagLab.py", line 34, in <module>
        from PyQt5.QtCore import Qt, QSize, QMargins, QDir, QPoint, QPointF, QRectF, QTimer, pyqtSlot, pyqtSignal, QSettings, QFileInfo, QModelIndex
    ImportError: DLL load failed: The specified module could not be found.

    I have tried separately installing PyQt5, both version 5.15.7 and 5.15.6. I can find the .dll files for PyQt5 and have added their location to my PC's PATH, but I still get the DLL load failure.

    Thank you, Chris

    opened by such-chris 7
  • No module named 'pycocotools' after UPDATE

    After the update, I cannot open TagLab again and it shows:

    Traceback (most recent call last):
      File "taglab.py", line 78, in <module>
        from source.NewDataset import NewDataset
      File "C:\Users\user\OneDrive\Documents\TagLab-main\source\NewDataset.py", line 20, in <module>
        from pycocotools import mask as maskcoco
    ModuleNotFoundError: No module named 'pycocotools'

    Has anyone else found this happening? How can I solve this problem?

    opened by ihuanh 6
  • Automatic classification crashes with Classifier Widget error

    I am trying to train a classifier for my images. I was able to export a training dataset, but when I clicked on the robot icon to select my classifier, the program crashed with the error below. It seems there is a problem displaying the QtClassifierWidget due to missing keys.

    Traceback (most recent call last):
      File "taglab.py", line 4420, in selectClassifier
        self.classifierWidget = QtClassifierWidget(self.available_classifiers, parent=self)
      File "C:\Program Files\TagLab-main\source\QtClassifierWidget.py", line 82, in __init__
        self.editClasses = QLineEdit(self.classes2str(classifiers[0]["Classes"]))
      File "C:\Program Files\TagLab-main\source\QtClassifierWidget.py", line 329, in classes2str
        for key in classes_dict.keys():
    AttributeError: 'list' object has no attribute 'keys'
    
    opened by mlopezp 4
  • Runtime Error: QPixmap

    Hello,

    I have been having trouble running TagLab today, even though I have opened it before. I have uninstalled and reinstalled all the packages on my computer and tried different Python environments. I have included the entire error message below. Is there anything I can do to get around this and run the program?

    Thanks.

    Traceback (most recent call last):
      File "/Users/jshao/Desktop/TagLab-main/TagLab.py", line 4578, in <module>
        tool = TagLab()
      File "/Users/jshao/Desktop/TagLab-main/TagLab.py", line 287, in __init__
        self.viewerplus = QtImageViewerPlus(self.taglab_dir)
      File "/Users/jshao/Desktop/TagLab-main/source/QtImageViewerPlus.py", line 128, in __init__
        self.tools = Tools(self)
      File "/Users/jshao/Desktop/TagLab-main/source/Tools.py", line 34, in __init__
        self.scribbles = Scribbles(self.scene)
      File "/Users/jshao/Desktop/TagLab-main/source/tools/Scribbles.py", line 32, in __init__
        self.setCustomCursor()
      File "/Users/jshao/Desktop/TagLab-main/source/tools/Scribbles.py", line 47, in setCustomCursor
        pxmap = QPixmap(cursor_size, cursor_size)
    TypeError: arguments did not match any overloaded call:
      QPixmap(): too many arguments
      QPixmap(int, int): argument 1 has unexpected type 'float'
      QPixmap(QSize): argument 1 has unexpected type 'float'
      QPixmap(str, format: str = None, flags: Union[Qt.ImageConversionFlags, Qt.ImageConversionFlag] = Qt.ImageConversionFlag.AutoColor): argument 1 has unexpected type 'float'
      QPixmap(List[str]): argument 1 has unexpected type 'float'
      QPixmap(QPixmap): argument 1 has unexpected type 'float'
      QPixmap(Any): too many arguments
    
    opened by jjshao 4
  • Error with requirement wheel

    Hi! I'm working on some intertidal/underwater imagery at Université Laval and have been having issues getting TagLab to run. I'm running it on Ubuntu 20.04.2 (installed on Windows 10) with Python 3.7.5 and NVCC 10.1. install.py fails because it cannot install wheel, but if I run pip3 install wheel directly, wheel is installed successfully. I'm wondering if it's a problem with my own system: I also have Python 3.8.5 installed, but I use Python 3.7.5 in a virtual environment and am running install.py with python3.7 install.py. I just thought I would reach out and see if you had any insights into what is happening. Thanks!

    Found NVCC version: 10.1
    Torch for CUDA 10.1
    GDAL version installed: 3.0.4
    Coraline built correctly.
    WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
    Collecting wheel
      WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/wheel/
      WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/wheel/
      WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/wheel/
      WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/wheel/
      WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/wheel/
      Could not fetch URL https://pypi.org/simple/wheel/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/wheel/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
      ERROR: Could not find a version that satisfies the requirement wheel (from versions: none)
    ERROR: No matching distribution found for wheel
    WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
    Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
    Traceback (most recent call last):
      File "install.py", line 164, in <module>
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])
      File "/usr/local/lib/python3.7/subprocess.py", line 363, in check_call
        raise CalledProcessError(retcode, cmd)
    subprocess.CalledProcessError: Command '['/usr/local/bin/python3.7', '-m', 'pip', 'install', 'wheel']' returned non-zero exit status 1.
    
    opened by jjshao 3
  • Issues with Shapely? I got you.

    If you run into an issue with Shapely after downloading and installing all of the requirements, uninstall the version of Shapely that you installed with the other packages using pip. Then install it with conda instead. If everything else installed correctly, you should be good.

    Cheers.

    opened by JordanMakesMaps 3
  • install script on M1 Mac hangups

    Just got through the install script on an M1 MacBook and had to manually install a few of the items, as well as run it under Rosetta. The most glaring problem I ran into was line 232 of the install script...

    # torch and torchvision
    subprocess.check_call([sys.executable, "-m", "pip", "install", torch_package, torchvision_package, 'torch_extra_argument1', 'torch_extra_argument2'])

    I got an invalid argument '' error during the install, so I guess these variables aren't being assigned values on my machine. Since the M1 requires running the install in CPU mode, I pasted the following parameters in place of the torch_extra_argument variables and finally got through this line.

    "--extra-index-url", "https://download.pytorch.org/whl/cpu"

    opened by kidconcept 2
  • 4-click and Positive/Negative segmentation not working since v2022.02.24

    After updating from TagLab v2022.02.13 to TagLab v2022.02.24, the click-based segmentation stopped working. The GUI still switches between tools, but no segmentation is performed. Also tested with the currently latest version, TagLab v2022.02.25, with the same behaviour. Fixed by just rolling back to the previously working version.

    Please let me know if you need more build/system info.

    opened by cappelletto 2
  • Error importing label images

    Before using TagLab we were manually tracing colonies in Photoshop. I was testing importing these label images into TagLab so we don't have to redo the work we have already done, and possibly use them to create the training set for our monitoring sites. I have tried exporting the image labels as both PNG and JPG, but the program crashes with this error:

    File "TagLab.py", line 3168, in importLabelMap
        created_blobs = self.activeviewer.annotations.import_label_map(filename, self.project.labels, self.activeviewer.img_map.width(),
      File "C:\Program Files\TagLab-main\source\Annotation.py", line 495, in import_label_map
        c = label.fill
    AttributeError: 'str' object has no attribute ‘fill'
    

    TagLab: 0.7, Windows: 10 Enterprise, Python: 3.8.10, nvcc: 11.3.58

    opened by mlopezp 2
  • GUI not recognizing mouse

    Hello. I'm a coral ecologist at the Hawaii Institute of Marine Biology trying out the TagLab project. I have managed to install everything on my Mac, and it runs. However:

    1. the GUI doesn't appear until I do something with the file menu (like "Load Map"); and
    2. once the GUI appears I can click between text inputs, but I can't click buttons (e.g., "Cancel"), so I'm kind of stuck...

    Notes: I have Python 3.7.7 installed. In addition to your installation instructions, I had to install "albumentations", and a (seemingly known) Qt issue preventing the GUI from starting meant that I had to take "opencv-python" back to version 4.1.2.30.

    opened by jmadin 2
  • Fix #24 Quickfix by inspecting for potentially empty objects

    Right Click -> Select All in the Note edit box was triggering an error because it notified about a change on a non-existing object. A quick fix is to inspect the activeviewer object in noteChanged(self) before querying the number of selected objects; if the object is empty (None), just return.
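
    A minimal sketch of the guard described above (names and placement are illustrative, not the exact TagLab code):

    def noteChanged(self):
        # Quickfix: bail out early if there is no active viewer yet,
        # instead of querying the number of selected objects on None.
        if self.activeviewer is None:
            return
        # ... original handling of the note change follows ...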

    opened by cappelletto 1
  • Installation issue: AttributeError: module 'numpy' has no attribute 'int'

    Another installation issue found.

    The installation succeeded, and running the update as a double check also succeeded. However, after running TagLab for the first time, the following comes out:

    TagLab 2022.11.3
    Traceback (most recent call last):
      File "taglab.py", line 4704, in <module>
        tool = TagLab()
      File "taglab.py", line 437, in __init__
        self.labels_widget.setLabels(self.project, None)
      File "...source\QtTableLabel.py", line 240, in setLabels
        self.data = project.create_labels_table(self.activeImg)
      File "...source\Project.py", line 492, in create_labels_table
        'Visibility': np.zeros(len(self.labels), dtype=np.int),
      File "...AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\__init__.py", line 284, in __getattr__
        raise AttributeError("module {!r} has no attribute "
    AttributeError: module 'numpy' has no attribute 'int'

    Windows 11, Python 3.8.0, NVCC 11.6

    Is there any way I can fix this issue?

    opened by ihuanh 4
  • Entering custom values in a new dictionary displays alert after every keystroke

    Hi, I was creating a new dictionary for our lab, and whenever I click into the boxes to define the RGB colors it displays an alert: "Please enter a number between 0 and 255". While a tooltip would be helpful here, something is triggering this alert after every keystroke, which makes it hard to enter the data.

    opened by mlopezp 1
  • Training Auto-segmentation

    Hi,

    I can no longer train a network; the error is in the picture below. I apologise for all the issues I'm posting, but I am a PhD student and this program is meant to help me with some significant data analysis. Sometimes the training gets further, sometimes less far, and sometimes it doesn't work at all. I would really appreciate the help.

    Thanks, Chris

    (screenshot of the training error attached)

    opened by Chris-Cooney 6
  • 4-click segmentation

    Hi there,

    After the most recent update the 4-click segmentation has stopped working for me. I have attempted to reinstall TagLab, and sadly it hasn't solved the issue. The error is what is shown in the terminal before Python stops working (see the attached screenshot).

    Thanks, Chris

    (screenshot of the terminal output attached)

    opened by Chris-Cooney 0
  • Question: Docker container in the works?

    Hello! Really interested in using TagLab, it looks great!

    I was wondering if there are any plans to develop a Dockerfile for running TagLab, and then just expose the GUI from within a running Docker container? This might help with installation issues for users on different environments.

    opened by Jordan-Pierce 1