High level network definitions with pre-trained weights in TensorFlow

Overview

TensorNets

High level network definitions with pre-trained weights in TensorFlow (tested with 2.1.0 >= TF >= 1.4.0).

Guiding principles

  • Applicability. Many people already have their own ML workflows and want to plug a new model into them. TensorNets can be easily integrated because it is designed as simple functional interfaces without custom classes.
  • Manageability. Models are written in tf.contrib.layers, which is lightweight like PyTorch and Keras and allows easy access to every weight and end-point. It is also easy to deploy and extend the collection of pre-processing functions and pre-trained weights.
  • Readability. With recent TensorFlow APIs, more factoring and less indenting are possible. For example, all the Inception variants are implemented in about 500 lines of code in TensorNets, compared with 2000+ lines in the official TensorFlow Models.
  • Reproducibility. You can always reproduce the original results with simple APIs, including feature extractions. Furthermore, you don't need to worry about your TensorFlow version because compatibility with various TensorFlow releases has been checked with Travis.

Installation

You can install TensorNets from PyPI (pip install tensornets) or directly from GitHub (pip install git+https://github.com/taehoonlee/tensornets.git).
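
After installing, a quick sanity check is to import the package and list the available network functions (a minimal sketch):

import tensorflow as tf
import tensornets as nets

print(tf.__version__)  # TensorNets is tested with 1.4.0 <= TF <= 2.1.0
print([name for name in dir(nets) if name[0].isupper()])  # e.g. ResNet50, YOLOv2, ...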

A quick example

Each network (see the full list) is not a custom class but a function that takes a tf.Tensor as input and returns a tf.Tensor as output. Here is an example of ResNet50:

import tensorflow as tf
# import tensorflow.compat.v1 as tf  # for TF 2
import tensornets as nets
# tf.disable_v2_behavior()  # for TF 2

inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
model = nets.ResNet50(inputs)

assert isinstance(model, tf.Tensor)

You can load an example image with nets.utils.load_img, which returns a np.ndarray in the NHWC format:

img = nets.utils.load_img('cat.png', target_size=256, crop_size=224)
assert img.shape == (1, 224, 224, 3)

Once your network is created, you can run it with regular TensorFlow APIs 😊 because all the networks in TensorNets always return tf.Tensor. Loading pre-trained weights and applying pre-processing are as easy as calling pretrained() and preprocess(), which reproduce the original results:

with tf.Session() as sess:
    img = model.preprocess(img)  # equivalent to img = nets.preprocess(model, img)
    sess.run(model.pretrained())  # equivalent to nets.pretrained(model)
    preds = sess.run(model, {inputs: img})

You can see the most probable classes:

print(nets.utils.decode_predictions(preds, top=2)[0])
[(u'n02124075', u'Egyptian_cat', 0.28067636), (u'n02127052', u'lynx', 0.16826575)]
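
If you prefer raw class indices and probabilities over the decoded tuples, the same top-2 lookup can be done with plain NumPy (a small sketch reusing the preds array from above):

import numpy as np

top2 = np.argsort(preds[0])[::-1][:2]  # indices of the two largest probabilities
print(top2, preds[0][top2])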

You can also easily obtain values of intermediate layers with middles() and outputs():

with tf.Session() as sess:
    img = model.preprocess(img)
    sess.run(model.pretrained())
    middles = sess.run(model.middles(), {inputs: img})
    outputs = sess.run(model.outputs(), {inputs: img})

model.print_middles()
assert middles[0].shape == (1, 56, 56, 256)
assert middles[-1].shape == (1, 7, 7, 2048)

model.print_outputs()
assert sum(sum((outputs[-1] - preds) ** 2)) < 1e-8
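
The middle endpoints are handy for feature extraction. For instance, global-average-pooling the last representative block of ResNet50 yields a 2048-d feature vector per image (a sketch; the endpoint shape follows the assertions above):

features = tf.reduce_mean(model.middles()[-1], axis=[1, 2])  # (?, 7, 7, 2048) -> (?, 2048)

with tf.Session() as sess:
    sess.run(model.pretrained())
    feats = sess.run(features, {inputs: img})
    assert feats.shape == (1, 2048)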

With save() and load(), your weight values can be saved and restored:

with tf.Session() as sess:
    model.init()
    # ... your training ...
    model.save('test.npz')

with tf.Session() as sess:
    model.load('test.npz')
    # ... your deployment ...
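
For example, pretrained() and save() can be combined to cache the downloaded weights as a local .npz once, and load() can restore them later without re-downloading (a sketch; the file name is arbitrary):

with tf.Session() as sess:
    sess.run(model.pretrained())         # downloads and assigns the original weights
    model.save('resnet50_imagenet.npz')

with tf.Session() as sess:
    model.load('resnet50_imagenet.npz')  # restores the cached weights, no download
    # ... your inference ...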

TensorNets enables you to deploy well-known architectures and benchmark their results faster ⚡️. For more information, check out the lists of utilities, examples, and architectures.

Object detection example

Each object detection model can be coupled with any network in TensorNets (see performance) and takes two arguments: a placeholder and a function acting as a stem layer. Here is an example of YOLOv2 for PASCAL VOC:

import tensorflow as tf
import tensornets as nets

inputs = tf.placeholder(tf.float32, [None, 416, 416, 3])
model = nets.YOLOv2(inputs, nets.Darknet19)

img = nets.utils.load_img('cat.png')

with tf.Session() as sess:
    sess.run(model.pretrained())
    preds = sess.run(model, {inputs: model.preprocess(img)})
    boxes = model.get_boxes(preds, img.shape[1:3])

Like other models, a detection model also returns tf.Tensor as its output. You can see the bounding box predictions (x1, y1, x2, y2, score) by using model.get_boxes(model_output, original_img_shape) and visualize the results:

from tensornets.datasets import voc
print("%s: %s" % (voc.classnames[7], boxes[7][0]))  # 7 is cat

import numpy as np
import matplotlib.pyplot as plt
box = boxes[7][0]
plt.imshow(img[0].astype(np.uint8))
plt.gca().add_patch(plt.Rectangle(
    (box[0], box[1]), box[2] - box[0], box[3] - box[1],
    fill=False, edgecolor='r', linewidth=2))
plt.show()
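
Since boxes is indexed by class (one array of (x1, y1, x2, y2, score) rows per VOC class), collecting every detection above a score threshold takes a few lines of NumPy. A small sketch reusing the voc import above; the 0.5 threshold is arbitrary:

detections = []
for class_idx, class_boxes in enumerate(boxes):
    for b in class_boxes:
        if b[4] > 0.5:  # keep confident predictions only
            detections.append((voc.classnames[class_idx], b))

for name, b in detections:
    print('%s: %s' % (name, b))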

More detection examples such as FasterRCNN on VOC2007 are here 😎 . Note that:

  • APIs of detection models are slightly different:

    • YOLOv3: sess.run(model.preds, {inputs: img}),
    • YOLOv2: sess.run(model, {inputs: img}),
    • FasterRCNN: sess.run(model, {inputs: img, model.scales: scale}),
  • FasterRCNN requires roi_pooling:

    • git clone https://github.com/deepsense-io/roi-pooling && cd roi-pooling && vi roi_pooling/Makefile and edit according to here,
    • python setup.py install.

Utilities

Besides pretrained() and preprocess(), the output tf.Tensor provides the following useful methods:

  • logits: returns the tf.Tensor logits (the values before the softmax),
  • middles() (=get_middles()): returns a list of all the representative tf.Tensor end-points,
  • outputs() (=get_outputs()): returns a list of all the tf.Tensor end-points,
  • weights() (=get_weights()): returns a list of all the tf.Tensor weight matrices,
  • summary() (=print_summary()): prints the numbers of layers, weight matrices, and parameters,
  • print_middles(): prints all the representative end-points,
  • print_outputs(): prints all the end-points,
  • print_weights(): prints all the weight matrices.

Example outputs of the print methods are:
>>> model.print_middles()
Scope: resnet50
conv2/block1/out:0 (?, 56, 56, 256)
conv2/block2/out:0 (?, 56, 56, 256)
conv2/block3/out:0 (?, 56, 56, 256)
conv3/block1/out:0 (?, 28, 28, 512)
conv3/block2/out:0 (?, 28, 28, 512)
conv3/block3/out:0 (?, 28, 28, 512)
conv3/block4/out:0 (?, 28, 28, 512)
conv4/block1/out:0 (?, 14, 14, 1024)
...

>>> model.print_outputs()
Scope: resnet50
conv1/pad:0 (?, 230, 230, 3)
conv1/conv/BiasAdd:0 (?, 112, 112, 64)
conv1/bn/batchnorm/add_1:0 (?, 112, 112, 64)
conv1/relu:0 (?, 112, 112, 64)
pool1/pad:0 (?, 114, 114, 64)
pool1/MaxPool:0 (?, 56, 56, 64)
conv2/block1/0/conv/BiasAdd:0 (?, 56, 56, 256)
conv2/block1/0/bn/batchnorm/add_1:0 (?, 56, 56, 256)
conv2/block1/1/conv/BiasAdd:0 (?, 56, 56, 64)
conv2/block1/1/bn/batchnorm/add_1:0 (?, 56, 56, 64)
conv2/block1/1/relu:0 (?, 56, 56, 64)
...

>>> model.print_weights()
Scope: resnet50
conv1/conv/weights:0 (7, 7, 3, 64)
conv1/conv/biases:0 (64,)
conv1/bn/beta:0 (64,)
conv1/bn/gamma:0 (64,)
conv1/bn/moving_mean:0 (64,)
conv1/bn/moving_variance:0 (64,)
conv2/block1/0/conv/weights:0 (1, 1, 64, 256)
conv2/block1/0/conv/biases:0 (256,)
conv2/block1/0/bn/beta:0 (256,)
conv2/block1/0/bn/gamma:0 (256,)
...

>>> model.summary()
Scope: resnet50
Total layers: 54
Total weights: 320
Total parameters: 25,636,712
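
As an example of using weights() in training code, an L2 weight-decay term can be built over the returned tensors (a sketch; the 1e-4 factor is arbitrary, and in practice you may want to exclude biases and batch-norm parameters):

weight_decay = 1e-4 * tf.add_n([tf.nn.l2_loss(w) for w in model.weights()])
# add `weight_decay` to your training loss before minimizing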

Examples

  • Comparison of different networks:
inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
models = [
    nets.MobileNet75(inputs),
    nets.MobileNet100(inputs),
    nets.SqueezeNet(inputs),
]

img = nets.utils.load_img('cat.png', target_size=256, crop_size=224)
imgs = nets.preprocess(models, img)

with tf.Session() as sess:
    nets.pretrained(models)
    for (model, img) in zip(models, imgs):
        preds = sess.run(model, {inputs: img})
        print(nets.utils.decode_predictions(preds, top=2)[0])
  • Transfer learning (a classifier-only fine-tuning variant is sketched after this list):
inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
outputs = tf.placeholder(tf.float32, [None, 50])
model = nets.DenseNet169(inputs, is_training=True, classes=50)

loss = tf.losses.softmax_cross_entropy(outputs, model.logits)
train = tf.train.AdamOptimizer(learning_rate=1e-5).minimize(loss)

with tf.Session() as sess:
    nets.pretrained(model)
    for (x, y) in your_NumPy_data:  # the NHWC and one-hot format
        sess.run(train, {inputs: x, outputs: y})
  • Using multi-GPU:
inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
models = []

with tf.device('gpu:0'):
    models.append(nets.ResNeXt50(inputs))

with tf.device('gpu:1'):
    models.append(nets.DenseNet201(inputs))

from tensornets.preprocess import fb_preprocess
img = nets.utils.load_img('cat.png', target_size=256, crop_size=224)
img = fb_preprocess(img)

with tf.Session() as sess:
    nets.pretrained(models)
    preds = sess.run(models, {inputs: img})
    for pred in preds:
        print(nets.utils.decode_predictions(pred, top=2)[0])
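
The transfer learning example above updates all layers. A common variant is to fine-tune only the final classifier; below is a sketch using plain TensorFlow variable filtering (the 'logits' scope name is an assumption and may differ across networks):

# reuses `inputs`, `outputs`, `model`, and `loss` from the transfer learning example
var_list = [v for v in tf.trainable_variables() if 'logits' in v.name]
train_last = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss, var_list=var_list)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # initializes the Adam slots and new layers
    nets.pretrained(model)                       # then overwrites with the pre-trained weights
    for (x, y) in your_NumPy_data:               # the NHWC and one-hot format
        sess.run(train_last, {inputs: x, outputs: y})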

Performance

Image classification

  • The top-k accuracies were obtained with TensorNets on the ImageNet validation set and may slightly differ from the original ones.
    • Input: the input size fed into the models
    • Top-1: single center crop, top-1 accuracy
    • Top-5: single center crop, top-5 accuracy
    • MAC: the rounded number of float operations, measured with tf.profiler
    • Size: the rounded number of parameters (with fully-connected layers)
    • Stem: the rounded number of parameters (without fully-connected layers)
  • The computation times were measured on an NVIDIA Tesla P100 (3584 cores, 16 GB global memory) with cuDNN 6.0 and CUDA 8.0.
    • Speed: milliseconds for inferences of 100 images
  • The summary plot is generated by this script.
Input Top-1 Top-5 MAC Size Stem Speed References
ResNet50 224 74.874 92.018 51.0M 25.6M 23.6M 195.4 [paper] [tf-slim] [torch-fb] [caffe] [keras]
ResNet101 224 76.420 92.786 88.9M 44.7M 42.7M 311.7 [paper] [tf-slim] [torch-fb] [caffe]
ResNet152 224 76.604 93.118 120.1M 60.4M 58.4M 439.1 [paper] [tf-slim] [torch-fb] [caffe]
ResNet50v2 299 75.960 93.034 51.0M 25.6M 23.6M 209.7 [paper] [tf-slim] [torch-fb]
ResNet101v2 299 77.234 93.816 88.9M 44.7M 42.6M 326.2 [paper] [tf-slim] [torch-fb]
ResNet152v2 299 78.032 94.162 120.1M 60.4M 58.3M 455.2 [paper] [tf-slim] [torch-fb]
ResNet200v2 224 78.286 94.152 129.0M 64.9M 62.9M 618.3 [paper] [tf-slim] [torch-fb]
ResNeXt50c32 224 77.740 93.810 49.9M 25.1M 23.0M 267.4 [paper] [torch-fb]
ResNeXt101c32 224 78.730 94.294 88.1M 44.3M 42.3M 427.9 [paper] [torch-fb]
ResNeXt101c64 224 79.494 94.592 0.0M 83.7M 81.6M 877.8 [paper] [torch-fb]
WideResNet50 224 78.018 93.934 137.6M 69.0M 66.9M 358.1 [paper] [torch]
Inception1 224 66.840 87.676 14.0M 7.0M 6.0M 165.1 [paper] [tf-slim] [caffe-zoo]
Inception2 224 74.680 92.156 22.3M 11.2M 10.2M 134.3 [paper] [tf-slim]
Inception3 299 77.946 93.758 47.6M 23.9M 21.8M 314.6 [paper] [tf-slim] [keras]
Inception4 299 80.120 94.978 85.2M 42.7M 41.2M 582.1 [paper] [tf-slim]
InceptionResNet2 299 80.256 95.252 111.5M 55.9M 54.3M 656.8 [paper] [tf-slim]
NASNetAlarge 331 82.498 96.004 186.2M 93.5M 89.5M 2081 [paper] [tf-slim]
NASNetAmobile 224 74.366 91.854 15.3M 7.7M 6.7M 165.8 [paper] [tf-slim]
PNASNetlarge 331 82.634 96.050 171.8M 86.2M 81.9M 1978 [paper] [tf-slim]
VGG16 224 71.268 90.050 276.7M 138.4M 14.7M 348.4 [paper] [keras]
VGG19 224 71.256 89.988 287.3M 143.7M 20.0M 399.8 [paper] [keras]
DenseNet121 224 74.972 92.258 15.8M 8.1M 7.0M 202.9 [paper] [torch]
DenseNet169 224 76.176 93.176 28.0M 14.3M 12.6M 219.1 [paper] [torch]
DenseNet201 224 77.320 93.620 39.6M 20.2M 18.3M 272.0 [paper] [torch]
MobileNet25 224 51.582 75.792 0.9M 0.5M 0.2M 34.46 [paper] [tf-slim]
MobileNet50 224 64.292 85.624 2.6M 1.3M 0.8M 52.46 [paper] [tf-slim]
MobileNet75 224 68.412 88.242 5.1M 2.6M 1.8M 70.11 [paper] [tf-slim]
MobileNet100 224 70.424 89.504 8.4M 4.3M 3.2M 83.41 [paper] [tf-slim]
MobileNet35v2 224 60.086 82.432 3.3M 1.7M 0.4M 57.04 [paper] [tf-slim]
MobileNet50v2 224 65.194 86.062 3.9M 2.0M 0.7M 64.35 [paper] [tf-slim]
MobileNet75v2 224 69.532 89.176 5.2M 2.7M 1.4M 88.68 [paper] [tf-slim]
MobileNet100v2 224 71.336 90.142 6.9M 3.5M 2.3M 93.82 [paper] [tf-slim]
MobileNet130v2 224 74.680 92.122 10.7M 5.4M 3.8M 130.4 [paper] [tf-slim]
MobileNet140v2 224 75.230 92.422 12.1M 6.2M 4.4M 132.9 [paper] [tf-slim]
MobileNet75v3large 224 73.754 91.618 7.9M 4.0M 2.7M 79.73 [paper] [tf-slim]
MobileNet100v3large 224 75.790 92.840 27.3M 5.5M 4.2M 94.71 [paper] [tf-slim]
MobileNet100v3largemini 224 72.706 90.930 7.8M 3.9M 2.7M 70.57 [paper] [tf-slim]
MobileNet75v3small 224 66.138 86.534 4.1M 2.1M 1.0M 37.78 [paper] [tf-slim]
MobileNet100v3small 224 68.318 87.942 5.1M 2.6M 1.5M 42.00 [paper] [tf-slim]
MobileNet100v3smallmini 224 63.440 84.646 4.1M 2.1M 1.0M 29.65 [paper] [tf-slim]
EfficientNetB0 224 77.012 93.338 26.2M 5.3M 4.0M 147.1 [paper] [tf-tpu]
EfficientNetB1 240 79.040 94.284 15.4M 7.9M 6.6M 217.3 [paper] [tf-tpu]
EfficientNetB2 260 80.064 94.862 18.1M 9.2M 7.8M 296.4 [paper] [tf-tpu]
EfficientNetB3 300 81.384 95.586 24.2M 12.3M 10.8M 482.7 [paper] [tf-tpu]
EfficientNetB4 380 82.588 96.094 38.4M 19.5M 17.7M 959.5 [paper] [tf-tpu]
EfficientNetB5 456 83.496 96.590 60.4M 30.6M 28.5M 1872 [paper] [tf-tpu]
EfficientNetB6 528 83.772 96.762 85.5M 43.3M 41.0M 3503 [paper] [tf-tpu]
EfficientNetB7 600 84.088 96.740 131.9M 66.7M 64.1M 6149 [paper] [tf-tpu]
SqueezeNet 224 54.434 78.040 2.5M 1.2M 0.7M 71.43 [paper] [caffe]


Object detection

  • The object detection models can be coupled with any network but mAPs could be measured only for the models with pre-trained weights. Note that:
    • YOLOv3VOC was trained by taehoonlee with this recipe modified as max_batches=70000, steps=40000,60000,
    • YOLOv2VOC is equivalent to YOLOv2(inputs, Darknet19),
    • TinyYOLOv2VOC: TinyYOLOv2(inputs, TinyDarknet19),
    • FasterRCNN_ZF_VOC: FasterRCNN(inputs, ZF),
    • FasterRCNN_VGG16_VOC: FasterRCNN(inputs, VGG16, stem_out='conv5/3').
  • The mAPs were obtained with TensorNets and may slightly differ from the original ones. The test input sizes were the numbers reported as the best in the papers:
    • YOLOv3, YOLOv2: 416x416
    • FasterRCNN: min_shorter_side=600, max_longer_side=1000
  • The computation times were measured on an NVIDIA Tesla P100 (3584 cores, 16 GB global memory) with cuDNN 6.0 and CUDA 8.0.
    • Size: the rounded number of parameters
    • Speed: milliseconds for a single network inference of a 416x416 or 608x608 image
    • FPS: 1000 / Speed
PASCAL VOC2007 test mAP Size Speed FPS References
YOLOv3VOC (416) 0.7423 62M 24.09 41.51 [paper] [darknet] [darkflow]
YOLOv2VOC (416) 0.7320 51M 14.75 67.80 [paper] [darknet] [darkflow]
TinyYOLOv2VOC (416) 0.5303 16M 6.534 153.0 [paper] [darknet] [darkflow]
FasterRCNN_ZF_VOC 0.4466 59M 241.4 3.325 [paper] [caffe] [roi-pooling]
FasterRCNN_VGG16_VOC 0.6872 137M 300.7 4.143 [paper] [caffe] [roi-pooling]
MS COCO val2014 mAP Size Speed FPS References
YOLOv3COCO (608) 0.6016 62M 60.66 16.49 [paper] [darknet] [darkflow]
YOLOv3COCO (416) 0.6028 62M 40.23 24.85 [paper] [darknet] [darkflow]
YOLOv2COCO (608) 0.5189 51M 45.88 21.80 [paper] [darknet] [darkflow]
YOLOv2COCO (416) 0.4922 51M 21.66 46.17 [paper] [darknet] [darkflow]

News 📰

  • The six variants of MobileNetv3 are released, 12 Mar 2020.
  • The eight variants of EfficientNet are released, 28 Jan 2020.
  • TensorNets is now available on TF 2, 23 Jan 2020.
  • MS COCO utils are released, 9 Jul 2018.
  • PNASNetlarge is released, 12 May 2018.
  • The six variants of MobileNetv2 are released, 5 May 2018.
  • YOLOv3 for COCO and VOC are released, 4 April 2018.
  • Generic object detection models for YOLOv2 and FasterRCNN are released, 26 March 2018.

Future work 🔥

Comments
  • TypeError: 'NoneType' object is not callable in results = yolov2_box(opts, np.array(outs[0], dtype=np.float32))

    In get_v2_boxes, results = yolov2_box(opts, np.array(outs[0], dtype=np.float32)) raises TypeError: 'NoneType' object is not callable.

    It seems that this step failed: from .darkflow_utils.get_boxes import yolov2_box. How can I solve it?

    opened by samanthawyf 13
  • Error when running pip install tensornets

    This is the output when running pip install tensornets

    Collecting tensornets
      Downloading https://files.pythonhosted.org/packages/b9/b7/dd956d687d5a45ccac1275dba1f522f6dedeaab43e5e540c2627fa4d6f9c/tensornets-0.3.4.tar.gz (576kB)
    Building wheels for collected packages: tensornets
      Running setup.py bdist_wheel for tensornets: started
      Running setup.py bdist_wheel for tensornets: finished with status 'error'
      Complete output from command c:\users\jmorales\appdata\local\programs\python\python35\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\jmorales\\AppData\\Local\\Temp\\pip-install-emz1qnku\\tensornets\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\jmorales\AppData\Local\Temp\pip-wheel-7vgpy5t9 --python-tag cp35:
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build\lib.win-amd64-3.5
      creating build\lib.win-amd64-3.5\tensornets
      copying tensornets\capsulenets.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\darknets.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\densenets.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\detections.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\inceptions.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\layers.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\middles.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\mobilenets.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\nasnets.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\ops.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\preprocess.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\pretrained.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\resnets.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\squeezenets.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\utils.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\vggs.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\zf.py -> build\lib.win-amd64-3.5\tensornets
      copying tensornets\__init__.py -> build\lib.win-amd64-3.5\tensornets
      creating build\lib.win-amd64-3.5\tensornets\datasets
      copying tensornets\datasets\imagenet.py -> build\lib.win-amd64-3.5\tensornets\datasets
      copying tensornets\datasets\voc.py -> build\lib.win-amd64-3.5\tensornets\datasets
      copying tensornets\datasets\__init__.py -> build\lib.win-amd64-3.5\tensornets\datasets
      creating build\lib.win-amd64-3.5\tensornets\references
      copying tensornets\references\rcnns.py -> build\lib.win-amd64-3.5\tensornets\references
      copying tensornets\references\rpn_utils.py -> build\lib.win-amd64-3.5\tensornets\references
      copying tensornets\references\yolos.py -> build\lib.win-amd64-3.5\tensornets\references
      copying tensornets\references\yolo_utils.py -> build\lib.win-amd64-3.5\tensornets\references
      copying tensornets\references\__init__.py -> build\lib.win-amd64-3.5\tensornets\references
      creating build\lib.win-amd64-3.5\tensornets\references\darkflow_utils
      copying tensornets\references\darkflow_utils\box.py -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils
      copying tensornets\references\darkflow_utils\__init__.py -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils
      running egg_info
      writing tensornets.egg-info\PKG-INFO
      writing top-level names to tensornets.egg-info\top_level.txt
      writing dependency_links to tensornets.egg-info\dependency_links.txt
      reading manifest file 'tensornets.egg-info\SOURCES.txt'
      reading manifest template 'MANIFEST.in'
      writing manifest file 'tensornets.egg-info\SOURCES.txt'
      copying tensornets\references\darkflow_utils\get_boxes.c -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils
      copying tensornets\references\darkflow_utils\nms.c -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils
      copying tensornets\datasets\voc.names -> build\lib.win-amd64-3.5\tensornets\datasets
      copying tensornets\references\coco.names -> build\lib.win-amd64-3.5\tensornets\references
      copying tensornets\references\voc.names -> build\lib.win-amd64-3.5\tensornets\references
      copying tensornets\references\darkflow_utils\__init__.pyc -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils
      copying tensornets\references\darkflow_utils\box.pyc -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils
      copying tensornets\references\darkflow_utils\get_boxes.pyx -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils
      copying tensornets\references\darkflow_utils\get_boxes.so -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils
      copying tensornets\references\darkflow_utils\nms.pxd -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils
      copying tensornets\references\darkflow_utils\nms.pyx -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils
      copying tensornets\references\darkflow_utils\nms.so -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils
      running build_ext
      building 'tensornets.references.darkflow_utils.nms' extension
      creating build\temp.win-amd64-3.5
      creating build\temp.win-amd64-3.5\Release
      creating build\temp.win-amd64-3.5\Release\tensornets
      creating build\temp.win-amd64-3.5\Release\tensornets\references
      creating build\temp.win-amd64-3.5\Release\tensornets\references\darkflow_utils
      C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\users\jmorales\appdata\local\programs\python\python35\lib\site-packages\numpy\core\include -Ic:\users\jmorales\appdata\local\programs\python\python35\include -Ic:\users\jmorales\appdata\local\programs\python\python35\include "-IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program Files (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tctensornets/references/darkflow_utils/nms.c /Fobuild\temp.win-amd64-3.5\Release\tensornets/references/darkflow_utils/nms.obj
      nms.c
      c:\users\jmorales\appdata\local\programs\python\python35\lib\site-packages\numpy\core\include\numpy\npy_1_7_deprecated_api.h(12) : Warning Msg: Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
      tensornets/references/darkflow_utils/nms.c(2380): warning C4244: '=': conversion from 'double' to 'float', possible loss of data
      tensornets/references/darkflow_utils/nms.c(2389): warning C4244: '=': conversion from 'double' to 'float', possible loss of data
      tensornets/references/darkflow_utils/nms.c(2439): warning C4244: '=': conversion from 'double' to 'float', possible loss of data
      tensornets/references/darkflow_utils/nms.c(22288): warning C4244: 'initializing': conversion from 'double' to 'float', possible loss of data
      tensornets/references/darkflow_utils/nms.c(22294): warning C4244: 'initializing': conversion from 'double' to 'float', possible loss of data
      C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\users\jmorales\appdata\local\programs\python\python35\libs /LIBPATH:c:\users\jmorales\appdata\local\programs\python\python35\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\lib\um\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" m.lib /EXPORT:PyInit_nms build\temp.win-amd64-3.5\Release\tensornets/references/darkflow_utils/nms.obj /OUT:build\lib.win-amd64-3.5\tensornets\references\darkflow_utils\nms.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Release\tensornets/references/darkflow_utils\nms.cp35-win_amd64.lib
      LINK : fatal error LNK1181: cannot open input file 'm.lib'
      error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1181
    
      ----------------------------------------
      Failed building wheel for tensornets
      Running setup.py clean for tensornets
    Failed to build tensornets
    Installing collected packages: tensornets
      Running setup.py install for tensornets: started
        Running setup.py install for tensornets: finished with status 'error'
    Command "c:\users\jmorales\appdata\local\programs\python\python35\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\jmorales\\AppData\\Local\\Temp\\pip-install-emz1qnku\\tensornets\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\jmorales\AppData\Local\Temp\pip-record-gqdxhlji\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\jmorales\AppData\Local\Temp\pip-install-emz1qnku\tensornets\
    

    This is my setup:

    Python version: 3.5
    OS: Windows 10 Home
    System type: 64-bit operating system, x64-based processor
    Processor: AMD A10-8700P Radeon R6

    opened by jafetmorales 10
  • Various questions

    First off, thanks very much for all your work on this - a really clean and understandable implementation of YOLO 😄. I have a few questions (probably very basic) about getting started with training and inference with tensornets, specifically for the YOLOv2 model:

    1. I am wanting to train YOLOv2 on the Open Images dataset. How can the tensornets implementation of YOLOv2 be used with a custom number of classes (545 in my case)?
    2. After training, how would one save the weights to a file and load them back in later for inference?
    3. Can layers be frozen during training (i.e. only train the last 4 layers) similar to what one can do in darknet with stopbackward=1?
    4. Is there support for training the YOLOv2 model with a TPU? If not, is there a somewhat straightforward path to adding that functionality?
    5. Can YOLOv2 be trained and used for inference with FP16 precision to speed things up - and can the existing pretrained weights be converted to FP16 so as to not necessitate training from scratch?

    Thanks very much!

    opened by abagshaw 8
  • Vgg16: Problems of Loading Pretrained Weights without fcs

    @taehoonlee I tried to use sess.run(net.pretrained()) for VGG16, but it failed because of a shape mismatch (I did not use the default input size). I read the source code and I am confused by line 320 in tensornets/utils.py. Is the code at line 321 designed to handle the shape mismatch of the fully-connected layers? If so, I think it has a problem with VGG16, because VGG16 has three fully-connected layers and this code seems to be suitable only for networks with one.

    opened by LynnHo 7
  • Anaconda python 2.7

    Command "C:\Users\User\Anaconda2\python.exe -u -c "import setuptools, tokenize;file='c:\users\user\appdata\local\temp\pip-install-jarbs8\pybluez\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record c:\users\user\appdata\local\temp\pip-record-rosdw0\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\user\appdata\local\temp\pip-install-jarbs8\pybluez\

    opened by luitcifer 6
  • YOLOv3 odd detection behaviour vs YOLOv2

    Here's the same image put through YOLOv2COCO and then YOLOv3COCO respectively (threshold set at 0.4 and input resolution at 416x416 for both):

    (screenshots: YOLOv2COCO and YOLOv3COCO detection results on the same image)

    YOLOv3 does pick up the smaller people (as one would expect) - but for some reason it seems to be predicting a much smaller bounding box than it should. I've done some experimenting and similar behavior is exhibited on quite a few images (sometimes the proper box appears and a smaller, inner one also appears - in this case, though, only the smaller one is appearing).

    It's possible that this is just due to a poor prediction on the model's part, but my guess is that it isn't, and that it is instead due to some problem with the NMS function (almost as if it's doing non-minimal suppression... if that's a thing 😄) or another post-processing step.

    opened by abagshaw 6
  • tensornets on windows 10

    Hello, I am facing difficulties installing tensornets.

    My environment is described below.

    OS: Windows 10, Windows 7
    Anaconda Python version: 3.5, 3.6
    TensorFlow version: 1.9, 1.10
    pip version: 18.1, 10.0.1, 19.x

    Both pip install tensornets and pip install git+https://github.com/taehoonlee/tensornets.git fail:

    Collecting git+https://github.com/taehoonlee/tensornets.git Cloning https://github.com/taehoonlee/tensornets.git to c:\users\admin\appdata\local\temp\pip-req-build-pslh3n1z Building wheels for collected packages: tensornets Building wheel for tensornets (setup.py) ... error Complete output from command c:\programdata\anaconda3\python.exe -u -c "import setuptools, tokenize;file='C:\Users\admin\AppData\Local\Temp\pip-req-build-pslh3n1z\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" bdist_wheel -d C:\Users\admin\AppData\Local\Temp\pip-wheel-kz3ibhta --python-tag cp35: running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.5 creating build\lib.win-amd64-3.5\tensornets copying tensornets\capsulenets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\darknets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\densenets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\detections.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\inceptions.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\layers.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\middles.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\mobilenets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\nasnets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\ops.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\preprocess.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\pretrained.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\resnets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\squeezenets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\utils.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\vggs.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\wavenets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\zf.py -> build\lib.win-amd64-3.5\tensornets copying tensornets_init_.py -> build\lib.win-amd64-3.5\tensornets creating build\lib.win-amd64-3.5\tensornets\datasets copying tensornets\datasets\coco.py -> build\lib.win-amd64-3.5\tensornets\datasets copying tensornets\datasets\imagenet.py -> build\lib.win-amd64-3.5\tensornets\datasets copying tensornets\datasets\voc.py -> build\lib.win-amd64-3.5\tensornets\datasets copying tensornets\datasets_init_.py -> build\lib.win-amd64-3.5\tensornets\datasets creating build\lib.win-amd64-3.5\tensornets\references copying tensornets\references\rcnns.py -> build\lib.win-amd64-3.5\tensornets\references copying tensornets\references\rpn_utils.py -> build\lib.win-amd64-3.5\tensornets\references copying tensornets\references\yolos.py -> build\lib.win-amd64-3.5\tensornets\references copying tensornets\references\yolo_utils.py -> build\lib.win-amd64-3.5\tensornets\references copying tensornets\references_init_.py -> build\lib.win-amd64-3.5\tensornets\references creating build\lib.win-amd64-3.5\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\box.py -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils_init_.py -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils running egg_info creating tensornets.egg-info writing top-level names to tensornets.egg-info\top_level.txt writing tensornets.egg-info\PKG-INFO writing dependency_links to tensornets.egg-info\dependency_links.txt writing manifest file 
'tensornets.egg-info\SOURCES.txt' reading manifest file 'tensornets.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'tensornets.egg-info\SOURCES.txt' copying tensornets\references\darkflow_utils\get_boxes.c -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\nms.c -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils copying tensornets\datasets\coco.names -> build\lib.win-amd64-3.5\tensornets\datasets copying tensornets\datasets\voc.names -> build\lib.win-amd64-3.5\tensornets\datasets copying tensornets\references\coco.names -> build\lib.win-amd64-3.5\tensornets\references copying tensornets\references\voc.names -> build\lib.win-amd64-3.5\tensornets\references copying tensornets\references\darkflow_utils\get_boxes.pyx -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\nms.pxd -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\nms.pyx -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils running build_ext building 'tensornets.references.darkflow_utils.nms' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools


    Failed building wheel for tensornets Running setup.py clean for tensornets Failed to build tensornets Installing collected packages: tensornets Running setup.py install for tensornets ... error Complete output from command c:\programdata\anaconda3\python.exe -u -c "import setuptools, tokenize;file='C:\Users\admin\AppData\Local\Temp\pip-req-build-pslh3n1z\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record C:\Users\admin\AppData\Local\Temp\pip-record-tgoomb1b\install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build\lib.win-amd64-3.5 creating build\lib.win-amd64-3.5\tensornets copying tensornets\capsulenets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\darknets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\densenets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\detections.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\inceptions.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\layers.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\middles.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\mobilenets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\nasnets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\ops.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\preprocess.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\pretrained.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\resnets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\squeezenets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\utils.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\vggs.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\wavenets.py -> build\lib.win-amd64-3.5\tensornets copying tensornets\zf.py -> build\lib.win-amd64-3.5\tensornets copying tensornets_init_.py -> build\lib.win-amd64-3.5\tensornets creating build\lib.win-amd64-3.5\tensornets\datasets copying tensornets\datasets\coco.py -> build\lib.win-amd64-3.5\tensornets\datasets copying tensornets\datasets\imagenet.py -> build\lib.win-amd64-3.5\tensornets\datasets copying tensornets\datasets\voc.py -> build\lib.win-amd64-3.5\tensornets\datasets copying tensornets\datasets_init_.py -> build\lib.win-amd64-3.5\tensornets\datasets creating build\lib.win-amd64-3.5\tensornets\references copying tensornets\references\rcnns.py -> build\lib.win-amd64-3.5\tensornets\references copying tensornets\references\rpn_utils.py -> build\lib.win-amd64-3.5\tensornets\references copying tensornets\references\yolos.py -> build\lib.win-amd64-3.5\tensornets\references copying tensornets\references\yolo_utils.py -> build\lib.win-amd64-3.5\tensornets\references copying tensornets\references_init_.py -> build\lib.win-amd64-3.5\tensornets\references creating build\lib.win-amd64-3.5\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\box.py -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils_init_.py -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils running egg_info writing dependency_links to tensornets.egg-info\dependency_links.txt writing tensornets.egg-info\PKG-INFO writing top-level names to tensornets.egg-info\top_level.txt reading manifest file 'tensornets.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' writing 
manifest file 'tensornets.egg-info\SOURCES.txt' copying tensornets\references\darkflow_utils\get_boxes.c -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\nms.c -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils copying tensornets\datasets\coco.names -> build\lib.win-amd64-3.5\tensornets\datasets copying tensornets\datasets\voc.names -> build\lib.win-amd64-3.5\tensornets\datasets copying tensornets\references\coco.names -> build\lib.win-amd64-3.5\tensornets\references copying tensornets\references\voc.names -> build\lib.win-amd64-3.5\tensornets\references copying tensornets\references\darkflow_utils\get_boxes.pyx -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\nms.pxd -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\nms.pyx -> build\lib.win-amd64-3.5\tensornets\references\darkflow_utils running build_ext building 'tensornets.references.darkflow_utils.nms' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools

    ----------------------------------------
    

    Command "c:\programdata\anaconda3\python.exe -u -c "import setuptools, tokenize;file='C:\Users\admin\AppData\Local\Temp\pip-req-build-pslh3n1z\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record C:\Users\admin\AppData\Local\Temp\pip-record-tgoomb1b\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\admin\AppData\Local\Temp\pip-req-build-pslh3n1z\

    Kindly help

    opened by rohtash0211 5
  • Extreme memory usage and bug with FP16 and darkflow_utils

    When using YOLOv2 COCO or VOC on tensornets (TF 1.7, CPU, Win 10), the memory usage with a tf.float32 input is massive (3 GB+), which is more than twice what darkflow uses for the same model. I'm not quite sure why.

    When trying to use tf.float16 with YOLOv2, memory usage is significantly reduced (so little memory is used that it could actually be due to a bug), but there seems to be a problem with the Cython utils ported from darkflow when they handle the FP16 output data.

    I get the following error:

    Traceback (most recent call last):
      File "blah\src\test.py", line 15, in <module>
        boxes = model.get_boxes(preds, img.shape[1:3])
      File "C:\Users\abags\AppData\Local\Programs\Python\Python36\lib\site-packages\tensornets\references\yolos.py", line 189, in _get_boxes
        return get_v2_boxes(opts('yolov2'), *args, **kwargs)
      File "C:\Users\abags\AppData\Local\Programs\Python\Python36\lib\site-packages\tensornets\references\yolo_utils.py", line 94, in get_v2_boxes
        results = yolov2_box(opts, outs[0].copy())
      File "tensornets\references\darkflow_utils\get_boxes.pyx", line 108, in tensornets.references.darkflow_utils.get_boxes.yolov2_box
    ValueError: Does not understand character buffer dtype format string ('e')

    opened by abagshaw 5
  • The output problem of sharing weights.

    I use code like the snippet below:

    ...
    resnet = partial(tensornets.ResNet50, reuse=tf.AUTO_REUSE)
    outputs_1 = resnet(x_1).get_outputs()
    outputs_2 = resnet(x_2).get_outputs() 
    ...
    

    outputs_2 ends up containing the outputs of x_2 appended to outputs_1.

    enhancement 
    opened by LynnHo 4
  • can't load the weights

    My code runs without any error on one device. When I copy the code to another device, it crashes. Here is the error:

    File "/media/profyuan/新加卷2/JD_Fashion_AI/JD_Fashion_AI_original/train.py", line 204, in main() File "/media/profyuan/新加卷2/JD_Fashion_AI/JD_Fashion_AI_original/train.py", line 200, in main run_training() File "/media/profyuan/新加卷2/JD_Fashion_AI/JD_Fashion_AI_original/train.py", line 152, in run_training sess.run(model.pretrained()) File "/usr/local/lib/python2.7/dist-packages/tensornets/pretrained.py", line 94, in _direct return fun(scope, return_fn=pretrained_initializer) File "/usr/local/lib/python2.7/dist-packages/tensornets/pretrained.py", line 186, in load_resnet101 return return_fn(scopes, values) File "/usr/local/lib/python2.7/dist-packages/tensornets/utils.py", line 292, in pretrained_initializer assert len(weights) == len(values), 'The sizes of symbolic and '
    AssertionError: The sizes of symbolic and actual weights do not match.

    here is my pip list:

    absl-py 0.2.1
    adium-theme-ubuntu 0.3.4
    astor 0.6.2
    backports-abc 0.5
    backports.shutil-get-terminal-size 1.0.0
    backports.weakref 1.0.post1
    bleach 1.5.0
    configparser 3.5.0
    cycler 0.9.0
    Cython 0.28.2
    decorator 4.0.6
    easydict 1.7
    entrypoints 0.2.3
    enum34 1.1.6
    funcsigs 1.0.2
    functools32 3.2.3.post2
    futures 3.2.0
    gast 0.2.0
    grpcio 1.12.0
    h5py 2.7.1
    html5lib 0.9999999
    ipykernel 4.8.2
    ipython 5.7.0
    ipython-genutils 0.2.0
    ipywidgets 7.2.1
    Jinja2 2.10
    jsonschema 2.6.0
    jupyter 1.0.0
    jupyter-client 5.2.3
    jupyter-console 5.2.0
    jupyter-core 4.4.0
    Keras 2.1.6
    Markdown 2.6.11
    MarkupSafe 1.0
    matplotlib 1.5.1
    mistune 0.8.3
    mock 2.0.0
    nbconvert 5.3.1
    nbformat 4.4.0
    nose 1.3.7
    notebook 5.5.0
    numpy 1.14.3
    opencv-python 3.4.0.12
    pandocfilters 1.4.2
    pathlib2 2.3.2
    pbr 4.0.2
    pexpect 4.5.0
    pickleshare 0.7.4
    Pillow 5.1.0
    pip 10.0.1
    prompt-toolkit 1.0.15
    protobuf 3.5.2.post1
    ptyprocess 0.5.2
    Pygments 2.2.0
    pyparsing 2.0.3
    Pyste 0.9.10
    python-dateutil 2.4.2
    pytz 2014.10
    PyYAML 3.12
    pyzmq 17.0.0
    qtconsole 4.3.1
    scandir 1.7
    scikit-image 0.10.1
    scikit-learn 0.19.1
    scipy 0.17.0
    Send2Trash 1.5.0
    setuptools 20.7.0
    simplegeneric 0.8.1
    singledispatch 3.4.0.3
    six 1.10.0
    sklearn 0.0
    tensorboard 1.8.0
    tensorflow 1.8.0
    tensorflow-gpu 1.4.0
    tensorflow-tensorboard 0.4.0
    tensornets 0.3.3
    termcolor 1.1.0
    terminado 0.8.1
    testpath 0.3.1
    tornado 5.0.2
    traitlets 4.3.2
    unity-lens-photos 1.0
    virtualenv 15.0.1
    wcwidth 0.1.7
    Werkzeug 0.14.1
    wheel 0.29.0
    widgetsnbextension 3.2.1

    opened by haddis3 4
  • Error When pip

    When I installed TensorNets from PyPI (pip install tensornets) , what happened as: Building wheels for collected packages: tensornets Running setup.py bdist_wheel for tensornets ... error Complete output from command e:\python\anaconda_setup\python.exe -u -c "import setuptools, tokenize;file='C:\Users\hp\AppData\Local\Temp\pip-install-u8ygwt_s\tensornets\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" bdist_wheel -d C:\Users\hp\AppData\Local\Temp\pip-wheel-9cd_yi__ --python-tag cp36: running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\tensornets copying tensornets\capsulenets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\darknets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\densenets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\detections.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\inceptions.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\layers.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\middles.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\mobilenets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\nasnets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\ops.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\preprocess.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\pretrained.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\resnets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\squeezenets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\unet.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\utils.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\vggs.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\wavenets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\zf.py -> build\lib.win-amd64-3.6\tensornets copying tensornets_init_.py -> build\lib.win-amd64-3.6\tensornets creating build\lib.win-amd64-3.6\tensornets\datasets copying tensornets\datasets\coco.py -> build\lib.win-amd64-3.6\tensornets\datasets copying tensornets\datasets\imagenet.py -> build\lib.win-amd64-3.6\tensornets\datasets copying tensornets\datasets\voc.py -> build\lib.win-amd64-3.6\tensornets\datasets copying tensornets\datasets_init_.py -> build\lib.win-amd64-3.6\tensornets\datasets creating build\lib.win-amd64-3.6\tensornets\references copying tensornets\references\rcnns.py -> build\lib.win-amd64-3.6\tensornets\references copying tensornets\references\rpn_utils.py -> build\lib.win-amd64-3.6\tensornets\references copying tensornets\references\yolos.py -> build\lib.win-amd64-3.6\tensornets\references copying tensornets\references\yolo_utils.py -> build\lib.win-amd64-3.6\tensornets\references copying tensornets\references_init_.py -> build\lib.win-amd64-3.6\tensornets\references creating build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\box.py -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils_init_.py -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils running egg_info writing tensornets.egg-info\PKG-INFO writing dependency_links to tensornets.egg-info\dependency_links.txt writing top-level names to tensornets.egg-info\top_level.txt reading manifest file 'tensornets.egg-info\SOURCES.txt' reading manifest template 
'MANIFEST.in' writing manifest file 'tensornets.egg-info\SOURCES.txt' copying tensornets\references\darkflow_utils\get_boxes.c -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\nms.c -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\datasets\coco.names -> build\lib.win-amd64-3.6\tensornets\datasets copying tensornets\datasets\voc.names -> build\lib.win-amd64-3.6\tensornets\datasets copying tensornets\references\coco.names -> build\lib.win-amd64-3.6\tensornets\references copying tensornets\references\voc.names -> build\lib.win-amd64-3.6\tensornets\references copying tensornets\references\darkflow_utils_init_.pyc -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\box.pyc -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\get_boxes.pyx -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\get_boxes.so -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\nms.pxd -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\nms.pyx -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\nms.so -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils running build_ext building 'tensornets.references.darkflow_utils.nms' extension creating build\temp.win-amd64-3.6 creating build\temp.win-amd64-3.6\Release creating build\temp.win-amd64-3.6\Release\tensornets creating build\temp.win-amd64-3.6\Release\tensornets\references creating build\temp.win-amd64-3.6\Release\tensornets\references\darkflow_utils D:\MSVSC\VC\Tools\MSVC\14.15.26726\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ie:\python\anaconda_setup\lib\site-packages\numpy\core\include -Ie:\python\anaconda_setup\include -Ie:\python\anaconda_setup\include -ID:\MSVSC\VC\Tools\MSVC\14.15.26726\ATLMFC\include -ID:\MSVSC\VC\Tools\MSVC\14.15.26726\include "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\cppwinrt" /Tctensornets/references/darkflow_utils/nms.c /Fobuild\temp.win-amd64-3.6\Release\tensornets/references/darkflow_utils/nms.obj nms.c e:\python\anaconda_setup\lib\site-packages\numpy\core\include\numpy\npy_1_7_deprecated_api.h(12) : Warning Msg: Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION tensornets/references/darkflow_utils/nms.c(2437): warning C4244: “=”: 从“double”转换到“float”,可能丢失数据 tensornets/references/darkflow_utils/nms.c(2446): warning C4244: “=”: 从“double”转换到“float”,可能丢失数据 tensornets/references/darkflow_utils/nms.c(2496): warning C4244: “=”: 从“double”转换到“float”,可能丢失数据 tensornets/references/darkflow_utils/nms.c(22776): warning C4244: “初始化”: 从“double”转换到“float”,可能丢失数据 tensornets/references/darkflow_utils/nms.c(22782): warning C4244: “初始化”: 从“double”转换到“float”,可能丢失数据 D:\MSVSC\VC\Tools\MSVC\14.15.26726\bin\HostX86\x64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO 
/LIBPATH:e:\python\anaconda_setup\libs /LIBPATH:e:\python\anaconda_setup\PCbuild\amd64 /LIBPATH:D:\MSVSC\VC\Tools\MSVC\14.15.26726\ATLMFC\lib\x64 /LIBPATH:D:\MSVSC\VC\Tools\MSVC\14.15.26726\lib\x64 "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\lib\um\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17134.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17134.0\um\x64" m.lib /EXPORT:PyInit_nms build\temp.win-amd64-3.6\Release\tensornets/references/darkflow_utils/nms.obj /OUT:build\lib.win-amd64-3.6\tensornets\references\darkflow_utils\nms.cp36-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.6\Release\tensornets/references/darkflow_utils\nms.cp36-win_amd64.lib LINK : fatal error LNK1181: 无法打开输入文件“m.lib” error: command 'D:\MSVSC\VC\Tools\MSVC\14.15.26726\bin\HostX86\x64\link.exe' failed with exit status 1181


    Failed building wheel for tensornets Running setup.py clean for tensornets Failed to build tensornets Installing collected packages: tensornets Running setup.py install for tensornets ... error Complete output from command e:\python\anaconda_setup\python.exe -u -c "import setuptools, tokenize;file='C:\Users\hp\AppData\Local\Temp\pip-install-u8ygwt_s\tensornets\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record C:\Users\hp\AppData\Local\Temp\pip-record-al4vrtuo\install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\tensornets copying tensornets\capsulenets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\darknets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\densenets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\detections.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\inceptions.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\layers.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\middles.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\mobilenets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\nasnets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\ops.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\preprocess.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\pretrained.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\resnets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\squeezenets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\unet.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\utils.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\vggs.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\wavenets.py -> build\lib.win-amd64-3.6\tensornets copying tensornets\zf.py -> build\lib.win-amd64-3.6\tensornets copying tensornets_init_.py -> build\lib.win-amd64-3.6\tensornets creating build\lib.win-amd64-3.6\tensornets\datasets copying tensornets\datasets\coco.py -> build\lib.win-amd64-3.6\tensornets\datasets copying tensornets\datasets\imagenet.py -> build\lib.win-amd64-3.6\tensornets\datasets copying tensornets\datasets\voc.py -> build\lib.win-amd64-3.6\tensornets\datasets copying tensornets\datasets_init_.py -> build\lib.win-amd64-3.6\tensornets\datasets creating build\lib.win-amd64-3.6\tensornets\references copying tensornets\references\rcnns.py -> build\lib.win-amd64-3.6\tensornets\references copying tensornets\references\rpn_utils.py -> build\lib.win-amd64-3.6\tensornets\references copying tensornets\references\yolos.py -> build\lib.win-amd64-3.6\tensornets\references copying tensornets\references\yolo_utils.py -> build\lib.win-amd64-3.6\tensornets\references copying tensornets\references_init_.py -> build\lib.win-amd64-3.6\tensornets\references creating build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\box.py -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils_init_.py -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils running egg_info writing tensornets.egg-info\PKG-INFO writing dependency_links to tensornets.egg-info\dependency_links.txt writing top-level names to tensornets.egg-info\top_level.txt reading manifest file 
'tensornets.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'tensornets.egg-info\SOURCES.txt' copying tensornets\references\darkflow_utils\get_boxes.c -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\nms.c -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\datasets\coco.names -> build\lib.win-amd64-3.6\tensornets\datasets copying tensornets\datasets\voc.names -> build\lib.win-amd64-3.6\tensornets\datasets copying tensornets\references\coco.names -> build\lib.win-amd64-3.6\tensornets\references copying tensornets\references\voc.names -> build\lib.win-amd64-3.6\tensornets\references copying tensornets\references\darkflow_utils_init_.pyc -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\box.pyc -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\get_boxes.pyx -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\get_boxes.so -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\nms.pxd -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\nms.pyx -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils copying tensornets\references\darkflow_utils\nms.so -> build\lib.win-amd64-3.6\tensornets\references\darkflow_utils running build_ext building 'tensornets.references.darkflow_utils.nms' extension creating build\temp.win-amd64-3.6 creating build\temp.win-amd64-3.6\Release creating build\temp.win-amd64-3.6\Release\tensornets creating build\temp.win-amd64-3.6\Release\tensornets\references creating build\temp.win-amd64-3.6\Release\tensornets\references\darkflow_utils D:\MSVSC\VC\Tools\MSVC\14.15.26726\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ie:\python\anaconda_setup\lib\site-packages\numpy\core\include -Ie:\python\anaconda_setup\include -Ie:\python\anaconda_setup\include -ID:\MSVSC\VC\Tools\MSVC\14.15.26726\ATLMFC\include -ID:\MSVSC\VC\Tools\MSVC\14.15.26726\include "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17134.0\cppwinrt" /Tctensornets/references/darkflow_utils/nms.c /Fobuild\temp.win-amd64-3.6\Release\tensornets/references/darkflow_utils/nms.obj nms.c e:\python\anaconda_setup\lib\site-packages\numpy\core\include\numpy\npy_1_7_deprecated_api.h(12) : Warning Msg: Using deprecated NumPy API, disable it by #defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION tensornets/references/darkflow_utils/nms.c(2437): warning C4244: “=”: 从“double”转换到“float”,可能丢失数据 tensornets/references/darkflow_utils/nms.c(2446): warning C4244: “=”: 从“double”转换到“float”,可能丢失数据 tensornets/references/darkflow_utils/nms.c(2496): warning C4244: “=”: 从“double”转换到“float”,可能丢失数据 tensornets/references/darkflow_utils/nms.c(22776): warning C4244: “初始化”: 从“double”转换到“float”,可能丢失数据 tensornets/references/darkflow_utils/nms.c(22782): warning C4244: “初始化”: 从“double”转换到“float”,可能丢失数据 D:\MSVSC\VC\Tools\MSVC\14.15.26726\bin\HostX86\x64\link.exe /nologo /INCREMENTAL:NO 
/LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:e:\python\anaconda_setup\libs /LIBPATH:e:\python\anaconda_setup\PCbuild\amd64 /LIBPATH:D:\MSVSC\VC\Tools\MSVC\14.15.26726\ATLMFC\lib\x64 /LIBPATH:D:\MSVSC\VC\Tools\MSVC\14.15.26726\lib\x64 "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\lib\um\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17134.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17134.0\um\x64" m.lib /EXPORT:PyInit_nms build\temp.win-amd64-3.6\Release\tensornets/references/darkflow_utils/nms.obj /OUT:build\lib.win-amd64-3.6\tensornets\references\darkflow_utils\nms.cp36-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.6\Release\tensornets/references/darkflow_utils\nms.cp36-win_amd64.lib LINK : fatal error LNK1181: 无法打开输入文件“m.lib” error: command 'D:\MSVSC\VC\Tools\MSVC\14.15.26726\bin\HostX86\x64\link.exe' failed with exit status 1181

    ----------------------------------------
    

    Command "e:\python\anaconda_setup\python.exe -u -c "import setuptools, tokenize;file='C:\Users\hp\AppData\Local\Temp\pip-install-u8ygwt_s\tensornets\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record C:\Users\hp\AppData\Local\Temp\pip-record-al4vrtuo\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\hp\AppData\Local\Temp\pip-install-u8ygwt_s\tensornets\

    My system is Windows 10, my Python is 3.6, and my TensorFlow is 1.12.0 (CPU-only). Can anyone help me? Please!

    opened by ScroogeMucDuck 3
  • Performance issue in /translations/tfslim.py (by P3)

    Performance issue in /translations/tfslim.py (by P3)

    Hello! I've found a performance issue in /translations/tfslim.py: with tf.Session() as sess is created repeatedly inside the loop for (net, shape, model_name) in models_list.

    Defining tf.Session repeatedly inside the loop adds incremental overhead. If you create the tf.Session once outside the loop and reuse (or pass) it inside, your program would be much more efficient; there is a Stack Overflow post supporting this.
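
    A minimal sketch of the suggested change (the loop variables come from this issue; models_list and the loop body are placeholders):

    import tensorflow as tf

    models_list = []  # placeholder; the real list lives in translations/tfslim.py

    # Reported pattern: a new tf.Session is created on every iteration.
    # for (net, shape, model_name) in models_list:
    #     with tf.Session() as sess:
    #         ...  # build and evaluate the model

    # Suggested refactor: create the Session once and reuse it inside the loop.
    with tf.Session() as sess:
        for (net, shape, model_name) in models_list:
            pass  # build and evaluate the model with the shared sess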

    Looking forward to your reply. By the way, I would be glad to create a PR to fix it if you are too busy.

    opened by DLPerf 1
  • Create a more robust pyproject.toml file

    Create a more robust pyproject.toml file

    ENVIRONMENT SETUP Windows 10 (64-bit), Anaconda 2.0.4, Python 3.8.10, Jupyter Notebook 6.4.3

    PROBLEM I've had numpy compatibility issues getting tensornets 0.4.6 installed.

    I eventually managed to get tensornets 0.4.5 installed and running with the following dependencies:

    • cython==0.29.24
    • numpy==1.19.5
    • tensorflow==2.6.0

    ISSUE The wheel build had trouble identifying the right dependencies when I tried to rebuild 0.4.6 from source.

    When I took a closer look at the current version, I noticed the pyproject.toml file is missing versions for cython and numpy, and does not include any dependencies for the tensorflow libraries or Python.

    RESOLUTION I believe a more robust pyproject.toml is needed, which might go some way toward helping developers identify installation issues. At a minimum, I think the following items should be defined in the pyproject.toml file along with version dependencies (a sketch follows after this list):

    • Python
    • various tensorflow packages
    • numpy (version dependency to be added)
    • cython (version dependency to be added)
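
    Below is a minimal sketch of the kind of pinning requested above, written as setuptools metadata in Python (rather than pyproject.toml) to match the rest of this document's examples; the pins simply mirror the versions listed earlier in this issue and are assumptions, not the project's official requirements:

    # setup.py (sketch): declare the dependency versions that were reported to work.
    from setuptools import setup, find_packages

    setup(
        name='tensornets',
        version='0.4.6',
        packages=find_packages(),
        python_requires='>=3.6',
        setup_requires=['cython>=0.29.24', 'numpy>=1.19.5'],
        install_requires=['numpy>=1.19.5', 'tensorflow>=2.6.0'],
    )
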
    opened by ascidian-ai 0
  • I tried to install tensornets on my Jenkins VM, which uses Ubuntu 20.04, Python 3.6.10, pip 20.1.1

    I tried to install tensornets on my Jenkins VM, which uses Ubuntu 20.04, Python 3.6.10, pip 20.1.1

    I tried this sudo pip install tensornets==0.4.1 Collecting tensornets==0.4.1 Using cached tensornets-0.4.1.tar.gz (587 kB) Building wheels for collected packages: tensornets Building wheel for tensornets (setup.py) ... error ERROR: Command errored out with exit status 1: command: /usr/bin/python3.6 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-j_ig232n/tensornets/setup.py'"'"'; file='"'"'/tmp/pip-install-j_ig232n/tensornets/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-35puvj3l cwd: /tmp/pip-install-j_ig232n/tensornets/ Complete output (73 lines): running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.6 creating build/lib.linux-x86_64-3.6/tensornets copying tensornets/detections.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/densenets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/layers.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/preprocess.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/utils.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/inceptions.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/resnets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/unet.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/capsulenets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/darknets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/vggs.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/squeezenets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/middles.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/pretrained.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/tnets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/init.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/zf.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/mobilenets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/wavenets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/ops.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/nasnets.py -> build/lib.linux-x86_64-3.6/tensornets creating build/lib.linux-x86_64-3.6/tensornets/datasets copying tensornets/datasets/imagenet.py -> build/lib.linux-x86_64-3.6/tensornets/datasets copying tensornets/datasets/voc.py -> build/lib.linux-x86_64-3.6/tensornets/datasets copying tensornets/datasets/coco.py -> build/lib.linux-x86_64-3.6/tensornets/datasets copying tensornets/datasets/init.py -> build/lib.linux-x86_64-3.6/tensornets/datasets creating build/lib.linux-x86_64-3.6/tensornets/references copying tensornets/references/yolo_utils.py -> build/lib.linux-x86_64-3.6/tensornets/references copying tensornets/references/rcnns.py -> build/lib.linux-x86_64-3.6/tensornets/references copying tensornets/references/yolos.py -> build/lib.linux-x86_64-3.6/tensornets/references copying tensornets/references/init.py -> build/lib.linux-x86_64-3.6/tensornets/references copying tensornets/references/rpn_utils.py -> build/lib.linux-x86_64-3.6/tensornets/references creating build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/box.py -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/init.py 
-> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils running egg_info writing tensornets.egg-info/PKG-INFO writing dependency_links to tensornets.egg-info/dependency_links.txt writing top-level names to tensornets.egg-info/top_level.txt reading manifest file 'tensornets.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'tensornets.egg-info/SOURCES.txt' copying tensornets/datasets/coco.names -> build/lib.linux-x86_64-3.6/tensornets/datasets copying tensornets/datasets/voc.names -> build/lib.linux-x86_64-3.6/tensornets/datasets copying tensornets/references/coco.names -> build/lib.linux-x86_64-3.6/tensornets/references copying tensornets/references/voc.names -> build/lib.linux-x86_64-3.6/tensornets/references copying tensornets/references/darkflow_utils/init.pyc -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/box.pyc -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/get_boxes.c -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/get_boxes.pyx -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/get_boxes.so -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/nms.c -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/nms.pxd -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/nms.pyx -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/nms.so -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils running build_ext building 'tensornets.references.darkflow_utils.nms' extension creating build/temp.linux-x86_64-3.6 creating build/temp.linux-x86_64-3.6/tensornets creating build/temp.linux-x86_64-3.6/tensornets/references creating build/temp.linux-x86_64-3.6/tensornets/references/darkflow_utils x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fdebug-prefix-map=/build/python3.6-qBAIHI/python3.6-3.6.10=. -specs=/usr/share/dpkg/no-pie-compile.specs -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/numpy/core/include -I/usr/include/python3.6m -c tensornets/references/darkflow_utils/nms.c -o build/temp.linux-x86_64-3.6/tensornets/references/darkflow_utils/nms.o tensornets/references/darkflow_utils/nms.c:26:10: fatal error: Python.h: No such file or directory 26 | #include "Python.h" | ^~~~~~~~~~ compilation terminated. error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

    ERROR: Failed building wheel for tensornets Running setup.py clean for tensornets Failed to build tensornets Installing collected packages: tensornets Running setup.py install for tensornets ... error ERROR: Command errored out with exit status 1: command: /usr/bin/python3.6 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-j_ig232n/tensornets/setup.py'"'"'; file='"'"'/tmp/pip-install-j_ig232n/tensornets/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-f_zwgh08/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.6/tensornets cwd: /tmp/pip-install-j_ig232n/tensornets/ Complete output (73 lines): running install running build running build_py creating build creating build/lib.linux-x86_64-3.6 creating build/lib.linux-x86_64-3.6/tensornets copying tensornets/detections.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/densenets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/layers.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/preprocess.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/utils.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/inceptions.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/resnets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/unet.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/capsulenets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/darknets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/vggs.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/squeezenets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/middles.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/pretrained.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/tnets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/init.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/zf.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/mobilenets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/wavenets.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/ops.py -> build/lib.linux-x86_64-3.6/tensornets copying tensornets/nasnets.py -> build/lib.linux-x86_64-3.6/tensornets creating build/lib.linux-x86_64-3.6/tensornets/datasets copying tensornets/datasets/imagenet.py -> build/lib.linux-x86_64-3.6/tensornets/datasets copying tensornets/datasets/voc.py -> build/lib.linux-x86_64-3.6/tensornets/datasets copying tensornets/datasets/coco.py -> build/lib.linux-x86_64-3.6/tensornets/datasets copying tensornets/datasets/init.py -> build/lib.linux-x86_64-3.6/tensornets/datasets creating build/lib.linux-x86_64-3.6/tensornets/references copying tensornets/references/yolo_utils.py -> build/lib.linux-x86_64-3.6/tensornets/references copying tensornets/references/rcnns.py -> build/lib.linux-x86_64-3.6/tensornets/references copying tensornets/references/yolos.py -> build/lib.linux-x86_64-3.6/tensornets/references copying tensornets/references/init.py -> build/lib.linux-x86_64-3.6/tensornets/references copying tensornets/references/rpn_utils.py -> build/lib.linux-x86_64-3.6/tensornets/references creating build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/box.py -> 
build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/init.py -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils running egg_info writing tensornets.egg-info/PKG-INFO writing dependency_links to tensornets.egg-info/dependency_links.txt writing top-level names to tensornets.egg-info/top_level.txt reading manifest file 'tensornets.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'tensornets.egg-info/SOURCES.txt' copying tensornets/datasets/coco.names -> build/lib.linux-x86_64-3.6/tensornets/datasets copying tensornets/datasets/voc.names -> build/lib.linux-x86_64-3.6/tensornets/datasets copying tensornets/references/coco.names -> build/lib.linux-x86_64-3.6/tensornets/references copying tensornets/references/voc.names -> build/lib.linux-x86_64-3.6/tensornets/references copying tensornets/references/darkflow_utils/init.pyc -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/box.pyc -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/get_boxes.c -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/get_boxes.pyx -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/get_boxes.so -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/nms.c -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/nms.pxd -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/nms.pyx -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils copying tensornets/references/darkflow_utils/nms.so -> build/lib.linux-x86_64-3.6/tensornets/references/darkflow_utils running build_ext building 'tensornets.references.darkflow_utils.nms' extension creating build/temp.linux-x86_64-3.6 creating build/temp.linux-x86_64-3.6/tensornets creating build/temp.linux-x86_64-3.6/tensornets/references creating build/temp.linux-x86_64-3.6/tensornets/references/darkflow_utils x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fdebug-prefix-map=/build/python3.6-qBAIHI/python3.6-3.6.10=. -specs=/usr/share/dpkg/no-pie-compile.specs -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/numpy/core/include -I/usr/include/python3.6m -c tensornets/references/darkflow_utils/nms.c -o build/temp.linux-x86_64-3.6/tensornets/references/darkflow_utils/nms.o tensornets/references/darkflow_utils/nms.c:26:10: fatal error: Python.h: No such file or directory 26 | #include "Python.h" | ^~~~~~~~~~ compilation terminated. 
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /usr/bin/python3.6 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-j_ig232n/tensornets/setup.py'"'"'; file='"'"'/tmp/pip-install-j_ig232n/tensornets/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-f_zwgh08/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.6/tensornets Check the logs for full command output.

    opened by yaswanth789 0
  • Confusing result when I load pretrained weights for mobilenetv3smallmini.

    Confusing result when I load pretrained weights for mobilenetv3smallmini.

    First I loaded the pretrained weights with is_training=True; the output float is the probability of the input's class:

    [email protected]:/workspace/lx_code_hub/classification# CUDA_VISIBLE_DEVICES=1 python3 test.py 
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
      _np_qint8 = np.dtype([("qint8", np.int8, 1)])
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
      _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
      _np_qint16 = np.dtype([("qint16", np.int16, 1)])
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
      _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
      _np_qint32 = np.dtype([("qint32", np.int32, 1)])
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
      np_resource = np.dtype([("resource", np.ubyte, 1)])
    WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
    Instructions for updating:
    Colocations handled automatically by placer.
    1.0
    2020-06-12 09:50:02.483822: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2020-06-12 09:50:02.644052: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x56f6a90 executing computations on platform CUDA. Devices:
    2020-06-12 09:50:02.644168: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): Tesla P4, Compute Capability 6.1
    2020-06-12 09:50:02.649138: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2099960000 Hz
    2020-06-12 09:50:02.651987: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x57e6590 executing computations on platform Host. Devices:
    2020-06-12 09:50:02.652247: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
    2020-06-12 09:50:02.653086: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: 
    name: Tesla P4 major: 6 minor: 1 memoryClockRate(GHz): 1.1135
    pciBusID: 0000:82:00.0
    totalMemory: 7.43GiB freeMemory: 5.31GiB
    2020-06-12 09:50:02.653177: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
    2020-06-12 09:50:02.654691: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
    2020-06-12 09:50:02.654734: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0 
    2020-06-12 09:50:02.654787: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N 
    2020-06-12 09:50:02.655474: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5138 MB memory) -> physical GPU (device: 0, name: Tesla P4, pci bus id: 0000:82:00.0, compute capability: 6.1)
    WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/losses/losses_impl.py:209: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
    Instructions for updating:
    Use tf.cast instead.
    WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
    Instructions for updating:
    Use tf.cast instead.
    epoch 0
    2020-06-12 09:50:08.470524: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
    0.0007623361
    Exception ignored in: <bound method BaseSession.__del__ of <tensorflow.python.client.session.Session object at 0x7fd7b85f4c18>>
    Traceback (most recent call last):
      File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 738, in __del__
    TypeError: 'NoneType' object is not callable
    

    Then I loaded the pretrained weights with is_training=False:

    [email protected]:/workspace/lx_code_hub/classification# CUDA_VISIBLE_DEVICES=1 python3 test.py 
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
      _np_qint8 = np.dtype([("qint8", np.int8, 1)])
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
      _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
      _np_qint16 = np.dtype([("qint16", np.int16, 1)])
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
      _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
      _np_qint32 = np.dtype([("qint32", np.int32, 1)])
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
      np_resource = np.dtype([("resource", np.ubyte, 1)])
    WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
    Instructions for updating:
    Colocations handled automatically by placer.
    1.0
    2020-06-12 09:50:32.524288: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2020-06-12 09:50:32.671292: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x54be810 executing computations on platform CUDA. Devices:
    2020-06-12 09:50:32.671360: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): Tesla P4, Compute Capability 6.1
    2020-06-12 09:50:32.676036: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2099960000 Hz
    2020-06-12 09:50:32.678501: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55ae310 executing computations on platform Host. Devices:
    2020-06-12 09:50:32.678571: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
    2020-06-12 09:50:32.679732: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: 
    name: Tesla P4 major: 6 minor: 1 memoryClockRate(GHz): 1.1135
    pciBusID: 0000:82:00.0
    totalMemory: 7.43GiB freeMemory: 5.31GiB
    2020-06-12 09:50:32.679949: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
    2020-06-12 09:50:32.681698: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
    2020-06-12 09:50:32.681776: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0 
    2020-06-12 09:50:32.681803: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N 
    2020-06-12 09:50:32.682511: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5138 MB memory) -> physical GPU (device: 0, name: Tesla P4, pci bus id: 0000:82:00.0, compute capability: 6.1)
    WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/losses/losses_impl.py:209: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
    Instructions for updating:
    Use tf.cast instead.
    WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
    Instructions for updating:
    Use tf.cast instead.
    epoch 0
    2020-06-12 09:50:38.539550: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
    0.67551297
    

    Why are they different?

    Code:

    # imports implied by the code below
    import numpy as np
    import tensorflow as tf
    import tensornets as nets

    inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
    outputs = tf.placeholder(tf.float32, [None, 1000])
    model = nets.MobileNet100v3smallmini(inputs, is_training=False, classes=1000)
    #model = nets.MobileNet100v3smallmini(inputs)
    saver = tf.train.Saver(tf.global_variables(), max_to_keep=300)
    
    
    assert isinstance(model, tf.Tensor)
    path = ['cat.jpg'] * 10
    label = np.zeros((1000,), dtype=np.float32)
    label[283] = 1.
    print(label[283])
    labels = np.row_stack((label,) * 10)
    
    img = nets.utils.load_img(path, target_size=(224, 224))
    assert img.shape == (10, 224, 224, 3)
    with tf.Session() as sess:
        #sess.run(tf.global_variables_initializer())
        sess.run(model.pretrained())  # equivalent to nets.pretrained(model)
        #initialize_uninitialized(sess)
        #exit(0)
        #with tf.name_scope('lx_train'):
        loss = tf.losses.softmax_cross_entropy(outputs, model.logits)
        train = tf.train.AdamOptimizer(learning_rate=1e-5).minimize(loss)
        #var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='lx_train')
        #print(var_list)
        #exit(0)
        #sess.run(tf.variables_initializer(tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='lx_train')))
        for i in range(50):
            print('epoch {}'.format(i))
            img = model.preprocess(img)  # equivalent to img = nets.preprocess(model, img)
            #preds = sess.run(model, {inputs: img})
            #print(preds[0][283])
            #exit(0)
            if i % 5 == 0:
                saver.save(sess, save_path='ckpt/model.ckpt', global_step=i)
            preds = sess.run(model, {inputs: img})
            #_, total_loss, preds = sess.run([train, loss, model], {inputs: img, outputs: labels})
            if i % 5 == 0:
                print(preds[0][283])
                exit(0)
                #print(total_loss, preds[0][283])
    #print(nets.utils.decode_predictions(preds, top=2))
    

    Thank you for helping me.

    opened by linkrain-a 0
  • importing weights from darknet yolov3

    importing weights from darknet yolov3

    Hello,

    Thank you for this fast and simple API. I find it very helpful and efficient.

    I was wondering if there was a way to import weights from a yolov3 model I trained in darknet to run it in tensornets.

    Thanks.

    opened by Galisela 0
  • Coco DataSet Builder

    Coco DataSet Builder

    The original doesn't contain a load_train function, so it doesn't return metadata with the images. This version of the code adds load_train and runs faster by creating image objects.

    opened by AhmedFakhry47 0
Releases(0.4.6)
  • 0.4.6(Mar 31, 2020)

  • 0.4.5(Mar 13, 2020)

    Areas of improvement

    None.

    API changes

    • Add init, load, and save.
    • Add MobileNetv3 with pretrained weights (#58); a loading sketch follows this list.
    • Add names as a parameter of middles, outputs, and weights.
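
    Loading one of the new MobileNetv3 variants follows the same functional pattern as the other models. A minimal sketch, assuming the variant name used in the mobilenetv3smallmini issue above:

    import tensorflow as tf
    import tensornets as nets

    inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
    model = nets.MobileNet100v3smallmini(inputs, is_training=False)

    with tf.Session() as sess:
        sess.run(model.pretrained())  # load the pretrained ImageNet weights
        # model can now be evaluated like the other classification models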

    Breaking changes

    None.

    Source code(tar.gz)
    Source code(zip)
  • 0.4.3(Jan 29, 2020)

  • 0.4.2(Jan 23, 2020)

    Areas of improvement

    • TF 2 support (#55).
    • Suppression of deprecation warnings for TF 1.14 and 1.15.
    • Unit tests / CI improvements.
    • Update of the performance table to follow the recent report format.

    API changes

    • Add reduce_max and swish.
    • Add weights_regularizer as a parameter of set_args.

    Breaking changes

    None.

    Source code(tar.gz)
    Source code(zip)
  • 0.4.1(Oct 13, 2019)

    Areas of improvement

    • Readability improvements.
    • A bug fix for pretrained_initializer with the latest NumPy.
    • Unit tests / CI improvements.
    • Addition of evaluation codes for all imagenet models (see #49).

    API changes

    • Add logits to access a logits tensor directly (see the sketch after this list).
    • Add middles() which is equivalent to get_middles().
    • Add outputs() which is equivalent to get_outputs().
    • Add weights() which is equivalent to get_weights().
    • Add summary() which is equivalent to print_summary().
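
    For reference, a minimal sketch of the new logits accessor, following the ResNet50 example earlier in this document; treating model.logits as the pre-softmax tensor is an assumption here:

    import tensorflow as tf
    import tensornets as nets

    inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
    model = nets.ResNet50(inputs)

    with tf.Session() as sess:
        sess.run(model.pretrained())
        img = model.preprocess(nets.utils.load_img('cat.png', target_size=256, crop_size=224))
        # the model tensor returns class probabilities; model.logits exposes the values before softmax
        logits, probs = sess.run([model.logits, model], {inputs: img})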

    Breaking changes

    None.

    Source code(tar.gz)
    Source code(zip)
  • 0.4.0(Mar 8, 2019)

    Areas of improvement

    • Reproducibility improvements.
      • Revise the pretrained weights of Inception3 (the top-5 accuracy increased by 0.038%).
      • Add a missing relu in Inception2 (the top-5 accuracy increased by 0.468%).
    • Bug fixes within variable_scope and name_scope (see #43).
    • Unit tests / CI improvements.
    • Documentation improvements.

    API changes

    None.

    Breaking changes

    None.

    Source code(tar.gz)
    Source code(zip)
  • 0.3.6(Nov 10, 2018)

  • 0.3.5(Sep 1, 2018)

    Areas of improvement

    • Reproducibility improvements.
      • Add missing relu in DenseNet (the top-5 accuracies increased by 2.58-3.2%).
      • Revise the zero paddings of pool1 in ResNets (the top-5 accuracies increased by 0.6-1.16%).
    • Unit tests / CI improvements.
      • Disable tests if only README has been changed.
    • New APIs: MS COCO utils.

    API changes

    • Add MS COCO utils.
    • Add training data loader for VOC.
    • Add training codes for YOLOv2.

    Breaking changes

    None.

    Source code(tar.gz)
    Source code(zip)
  • 0.3.4(May 12, 2018)

    Note that the following descriptions include changes for 0.3.3. Additionally, 0.3.2 doesn't exist due to PyPI conflicts.

    Areas of improvement

    • Bug fixes.
    • New APIs: image classification and object detection models.
    • Unit tests / CI improvements.
    • Documentation improvements.
    • Float16 supporting improvements (see #14).
    • Python 3 compatibility improvements.

    API changes

    • Add image classification models (Darknet19, MobileNet35v2, ..., MobileNet140v2, and PNASNetlarge).
    • Add references object detection models (YOLOv3VOC and YOLOv3COCO).
    • Add operations (local_flatten and upsample).

    Breaking changes

    None.

    Source code(tar.gz)
    Source code(zip)
  • 0.3.1(Mar 27, 2018)

  • 0.3.0(Mar 27, 2018)

    Areas of improvement

    • Bug fixes.
    • New APIs: object detection models and PASCAL VOC utils.
    • Unit tests / CI improvements.
    • Documentation improvements.

    API changes

    • Add generic object detection models (YOLOv2, TinyYOLOv2, and FasterRCNN); a usage sketch follows this list.
    • Add references object detection models (YOLOv2VOC, TinyYOLOv2VOC, FasterRCNN_ZF_VOC, and FasterRCNN_VGG16_VOC).
    • Add PASCAL VOC utils.
    • Add operations (stack: tf.stack, gather: tf.gather, srn: a spatial local response normalization).
    • Revise nets.pretrained to perform each model's pretrained.
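
    A rough usage sketch of the new detection interface; the Darknet19-stem second argument, the preprocess resizing, and the get_boxes helper are assumptions about the API rather than confirmed signatures:

    import tensorflow as tf
    import tensornets as nets

    inputs = tf.placeholder(tf.float32, [None, 416, 416, 3])
    model = nets.YOLOv2(inputs, nets.Darknet19)  # YOLOv2 head on a Darknet19 stem

    img = nets.utils.load_img('cat.png')  # original-size image; preprocess is assumed to resize it

    with tf.Session() as sess:
        sess.run(model.pretrained())
        preds = sess.run(model, {inputs: model.preprocess(img)})
        boxes = model.get_boxes(preds, img.shape[1:3])  # boxes mapped back to the original image size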

    Breaking changes

    None.

    Source code(tar.gz)
    Source code(zip)
Owner
Taehoon Lee
machine learning, deep learning 🇰🇷