AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation

Overview

AtlasNet [Project Page] [Paper] [Talk]

AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation
Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry
In CVPR, 2018.

🚀 New branch: AtlasNet + Shape Reconstruction by Learning Differentiable Surface Representations

(Figures: chair reconstruction examples, chair.png / chair.gif)

Install

This implementation uses Python 3.6, PyTorch 1.7.1, PyMesh, and CUDA 10.1.

# Copy/Paste the snippet in a terminal
git clone --recurse-submodules https://github.com/ThibaultGROUEIX/AtlasNet.git
cd AtlasNet 

# Dependencies
conda create -n atlasnet python=3.6 --yes
conda activate atlasnet
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.1 -c pytorch --yes
pip install --user --requirement requirements.txt # pip dependencies

Optional: Compile Chamfer (MIT) + Metro Distance (GPL3 licence)

# Copy/Paste the snippet in a terminal
python auxiliary/ChamferDistancePytorch/chamfer3D/setup.py install # MIT
cd auxiliary
git clone https://github.com/ThibaultGROUEIX/metro_sources.git
cd metro_sources; python setup.py --build # build the Metro distance # GPL3
cd ../..
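
A quick way to confirm that the environment matches the pinned versions above (an editorial sketch in plain PyTorch, not a script from the repository):

import torch

# Verify the install: PyTorch 1.7.1 built against CUDA 10.1, with a visible GPU.
print("torch:", torch.__version__)          # expected: 1.7.1
print("cuda build:", torch.version.cuda)    # expected: 10.1
print("gpu available:", torch.cuda.is_available())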

A note on data.

Data download should be automatic. However, due to Google Drive traffic caps, you may have to download the data manually. If you run into an error running the demo, refer to issue #61.

You can manually download the data from three sources (they are identical):

Please make sure to unzip the archives in the right places:

cd AtlasNet
mkdir data
unzip ShapeNetV1PointCloud.zip -d ./data/
unzip ShapeNetV1Renderings.zip -d ./data/
unzip metro_files.zip -d ./data/
unzip trained_models.zip -d ./training/
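
If you want to double-check the extraction, the sketch below lists the expected folders (the names are assumptions inferred from the archive names and unzip targets above):

from pathlib import Path

# Rough check that the archives were extracted where the scripts expect them.
expected = [
    Path("data/ShapeNetV1PointCloud"),
    Path("data/ShapeNetV1Renderings"),
    Path("data/metro_files"),
    Path("training/trained_models"),
]
for folder in expected:
    print(folder, "->", "found" if folder.is_dir() else "MISSING")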

Usage

  • Demo: python train.py --demo
  • Training: python train.py --shapenet13 (monitor training at http://localhost:8890/)
  • Latest refactoring (12-2019):
    - [x] Factorize single-view reconstruction and the autoencoder into the same class
    - [x] Factorize the square and sphere templates into the same class
    - [x] Add the latent vector as a bias after the first layer (30% speedup; see the sketch after this list)
    - [x] Remove the last tanh in the decoder
    - [x] Cache all point clouds in one large .pth tensor (drop the nasty Chunk_reader)
    - [x] Make it multi-GPU
    - [x] Add netvision visualization of the results
    - [x] Rewrite the main script in an object-oriented style
    - [x] Check that everything works in the latest PyTorch version
    - [x] Add more layers by default, plus flags for the number of layers and hidden neurons
    - [x] Add a flag to generate a mesh directly
    - [x] Add a python setup install
    - [x] Make sure the GPUs are used at 100%
    - [x] Add the F-score to the Chamfer computation and report it
    - [x] Get rid of the ShapeNet v2 data and use v1
    - [x] Fix path issues (no more sys.path.append)
    - [x] Preprocess ShapeNet 55 and add it to the dataloader
    - [x] Make dependencies minimal
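
The "latent vector as bias" item is easiest to see in code. Below is a small illustrative decoder (a sketch with assumed layer sizes, not the repository's module): the shape latent is projected once and added to the first layer's activations instead of being concatenated to every sampled template point. point_dim would be 2 for the square template and 3 for the sphere.

import torch
import torch.nn as nn

class TinyAtlasDecoder(nn.Module):
    """Illustrative sketch only, not the repository's exact module."""

    def __init__(self, latent_dim=1024, hidden=512, point_dim=3):
        super().__init__()
        self.first = nn.Conv1d(point_dim, hidden, 1)          # per-point linear layer
        self.latent_to_bias = nn.Linear(latent_dim, hidden)   # latent -> per-channel bias
        self.rest = nn.Sequential(
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, 1),
            nn.ReLU(),
            nn.Conv1d(hidden, 3, 1),                          # no trailing tanh ("remove last th")
        )

    def forward(self, template_points, latent):
        # template_points: (B, point_dim, N) sampled on the square/sphere template
        # latent: (B, latent_dim) shape code from the encoder
        x = self.first(template_points)
        x = x + self.latent_to_bias(latent).unsqueeze(2)      # latent enters as a bias, broadcast over N points
        return self.rest(x)                                   # (B, 3, N) points on the predicted surface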

Quantitative Results

Method                   | Chamfer (*1) | F-score (*2) | Metro (*3) | Total train time (min)
Autoencoder, 25 squares  | 1.35         | 82.3%        | 6.82       | 731
Autoencoder, 1 sphere    | 1.35         | 83.3%        | 6.94       | 548
Single-view, 25 squares  | 3.78         | 63.1%        | 8.94       | 1422
Single-view, 1 sphere    | 3.76         | 64.4%        | 9.01       | 1297
  • (*1) x1000. Computed between 2500 ground truth points and 2500 reconstructed points.
  • (*2) The threshold is 0.001
  • (*3) x100. Metro is run on unnormalized point clouds (which explains the difference from the paper's numbers).
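
For reference, here is a hedged re-implementation of how the Chamfer distance and the F-score are typically computed (the exact conventions, e.g. squared vs. unsquared distances for the F-score threshold, are assumptions, not a statement about the repository's code):

import torch

def chamfer_and_fscore(p_gt, p_rec, threshold=0.001):
    """p_gt: (N, 3) ground-truth points, p_rec: (M, 3) reconstructed points."""
    d = torch.cdist(p_gt.unsqueeze(0), p_rec.unsqueeze(0)).squeeze(0) ** 2  # (N, M) squared distances
    d_gt_to_rec = d.min(dim=1).values    # nearest reconstructed point for each GT point
    d_rec_to_gt = d.min(dim=0).values    # nearest GT point for each reconstructed point

    chamfer = (d_gt_to_rec.mean() + d_rec_to_gt.mean()) * 1000.0            # reported x1000 in the table

    precision = (d_rec_to_gt < threshold).float().mean()
    recall = (d_gt_to_rec < threshold).float().mean()
    fscore = 2 * precision * recall / (precision + recall + 1e-8)
    return chamfer.item(), (100.0 * fscore).item()                          # F-score as a percentage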

Related projects

Citing this work

@inproceedings{groueix2018,
  title={{AtlasNet: A Papier-M\^ach\'e Approach to Learning 3D Surface Generation}},
  author={Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan and Aubry, Mathieu},
  booktitle={Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}

Comments
  • RuntimeError: CUDA error: out of memory

    Thank you for the great work! I get this error below when I run: ./training/train_AE_AtlasNet.py

    I checked two more similar issues but this looks different. Any idea how to solve it? Any help appreciated!

    File "./training/train_AE_AtlasNet.py", line 151, in dist1, dist2 = distChamfer(points.transpose(2,1).contiguous(), pointsReconstructed) #loss function File "./training/train_AE_AtlasNet.py", line 64, in distChamfer P = (rx.transpose(2,1) + ry - 2*zz) RuntimeError: CUDA error: out of memory

    I run pytorch:0.4.1 / Ubuntu 18.04

    FULL CODE:

    (pytorch-atlasnet) [email protected]:~/AtlasNet$ python ./training/train_AE_AtlasNet.py --env $env --nb_primitives $nb_primitives |& tee ${env}.txt Setting up a new session... Namespace(accelerated_chamfer=0, batchSize=32, env='AE_AtlasNet', model='', nb_primitives=25, nepoch=120, num_points=2500, super_points=2500, workers=12) Random Seed: 314 {'plane': '02691156', 'bench': '02828884', 'cabinet': '02933112', 'car': '02958343', 'chair': '03001627', 'monitor': '03211117', 'lamp': '03636649', 'speaker': '03691459', 'firearm': '04090263', 'couch': '04256520', 'table': '04379243', 'cellphone': '04401088', 'watercraft': '04530566'} category 02691156 files 4044 0.999752781211372 % category 02828884 files 1813 0.9983480176211453 % category 02933112 files 1571 0.9993638676844784 % category 02958343 files 3514 0.46878335112059766 % category 03001627 files 6778 1.0 % category 03211117 files 1093 0.9981735159817352 % category 03636649 files 2309 0.9961173425366695 % category 03691459 files 1597 0.9870210135970334 % category 04090263 files 2373 1.0 % category 04256520 files 3173 1.0 % category 04379243 files 8436 0.9914208485133388 % category 04401088 files 1050 0.9980988593155894 % category 04530566 files 1939 1.0 % {'plane': '02691156', 'bench': '02828884', 'cabinet': '02933112', 'car': '02958343', 'chair': '03001627', 'monitor': '03211117', 'lamp': '03636649', 'speaker': '03691459', 'firearm': '04090263', 'couch': '04256520', 'table': '04379243', 'cellphone': '04401088', 'watercraft': '04530566'} category 02691156 files 4044 0.999752781211372 % category 02828884 files 1813 0.9983480176211453 % category 02933112 files 1571 0.9993638676844784 % category 02958343 files 3514 0.46878335112059766 % category 03001627 files 6778 1.0 % category 03211117 files 1093 0.9981735159817352 % category 03636649 files 2309 0.9961173425366695 % category 03691459 files 1597 0.9870210135970334 % category 04090263 files 2373 1.0 % category 04256520 files 3173 1.0 % category 04379243 files 8436 0.9914208485133388 % category 04401088 files 1050 0.9980988593155894 % category 04530566 files 1939 1.0 % training set 31747 testing set 7943 **Traceback (most recent call last): File "./training/train_AE_AtlasNet.py", line 151, in <module> dist1, dist2 = distChamfer(points.transpose(2,1).contiguous(), pointsReconstructed) #loss function File "./training/train_AE_AtlasNet.py", line 64, in distChamfer P = (rx.transpose(2,1) + ry - 2*zz) RuntimeError: CUDA error: out of memory**

    help wanted 
    opened by spha-code 15
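
    The failing line, P = (rx.transpose(2,1) + ry - 2*zz), appears to materialise the full pairwise distance matrix, so memory grows with batch_size * num_points^2; lowering the batch size (the log above shows batchSize=32) or the number of points is the usual workaround. For reference, a hedged sketch of the same computation in plain PyTorch (not the repository's CUDA kernel) makes the memory cost explicit:

    import torch

    def dist_chamfer(a, b):
        """Squared-L2 Chamfer terms between point-cloud batches.
        a: (B, N, 3), b: (B, M, 3). Builds a (B, N, M) matrix, so it runs out
        of memory in the same way -- reduce the batch size if that happens."""
        d = torch.cdist(a, b) ** 2       # (B, N, M) pairwise squared distances
        dist1 = d.min(dim=2).values      # for each point of a, nearest point of b
        dist2 = d.min(dim=1).values      # for each point of b, nearest point of a
        return dist1, dist2
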
  • To be honest, the latest code is very hard to understand

    I compare our method with AtlasNet several times, and I need to edit the source code each time. However, the latest code is very hard to understand because it is highly abstracted; it took me an hour to understand the relationships between the modules.

    help wanted 
    opened by hzxie 10
  • Stuck after launching visdom server

    I ran the demo successfully, but after launching the visdom server with

    python -m visdom.server -p 8888

    I am stuck and can't type any more commands in my Anaconda prompt. How do I continue? Thanks!

    help wanted 
    opened by spha-code 10
  • [BUG] Chamfer Distance is not Correct

    I tried to debug chamfer.cu by printing the values of the tensors. I created two point clouds containing 3 and 5 points, respectively. The values are shown below.

    (1,.,.) = 
     0.01 *
      0.0000  0.0000  0.0000
      -20.4838  4.4935  6.1395
      -3.7283 -0.7629  1.7736
    
    (2,.,.) = 
     0.01 *
      0.0000  0.0000  0.0000
      -17.4992  4.4902  5.0518
      -1.6003 -1.2430  0.8040
    [ Variable[CUDAType]{2,3,3} ]
    (1,.,.) = 
      0.0051  0.1850  0.0004
      0.0051  0.1850  0.0093
      0.0096  0.1850  0.0081
      0.0096  0.1850  0.0016
      0.0075  0.1850  0.0004
    
    (2,.,.) = 
     -0.1486 -0.0932 -0.0014
     -0.0406 -0.0932 -0.0017
     -0.2057 -0.0932 -0.0001
     -0.0915 -0.0932 -0.0001
      0.0103 -0.0932 -0.0001
    [ Variable[CUDAType]{2,5,3} ]
    

    I also added print statements in the CUDA functions and got the following output.

    2i = 0, n = 3, j = 0, k = 0, d = 0.03425420, x = (0.00000000 0.00000000 0.00000000) y = (0.00511124 0.18500790 0.00038808)
    2i = 0, n = 3, j = 1, k = 0, d = 0.06742091, x = (-0.20483765 0.04493479 0.06139540) y = (0.00511124 0.18500790 0.00038808)
    2i = 0, n = 3, j = 2, k = 0, d = 0.03920735, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00511124 0.18500790 0.00038808)
    2i = 1, n = 3, j = 0, k = 0, d = 0.03573948, x = (-0.08192606 0.01907521 0.02376382) y = (0.00749534 0.18500790 0.00928491)
    2i = 1, n = 3, j = 1, k = 0, d = 0.03405631, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00749534 0.18500790 0.00928491)
    2i = 1, n = 3, j = 2, k = 0, d = 0.03437031, x = (0.00000000 0.00000000 0.00000000) y = (0.00749534 0.18500790 0.00928491)
    2i = 0, n = 3, j = 0, k = 1, d = 0.03434026, x = (0.00000000 0.00000000 0.00000000) y = (0.00511124 0.18500790 0.00928491)
    2i = 0, n = 3, j = 1, k = 1, d = 0.06641452, x = (-0.20483765 0.04493479 0.06139540) y = (0.00511124 0.18500790 0.00928491)
    2i = 0, n = 3, j = 2, k = 1, d = 0.03897782, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00511124 0.18500790 0.00928491)
    2i = 1, n = 3, j = 0, k = 1, d = 0.03490656, x = (-0.08192606 0.01907521 0.02376382) y = (0.00231482 0.18500790 0.00713918)
    2i = 1, n = 3, j = 1, k = 1, d = 0.03394968, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00231482 0.18500790 0.00713918)
    2i = 1, n = 3, j = 2, k = 1, d = 0.03428425, x = (0.00000000 0.00000000 0.00000000) y = (0.00231482 0.18500790 0.00713918)
    2i = 0, n = 3, j = 0, k = 2, d = 0.03438481, x = (0.00000000 0.00000000 0.00000000) y = (0.00955979 0.18500790 0.00809300)
    2i = 0, n = 3, j = 1, k = 2, d = 0.06842789, x = (-0.20483765 0.04493479 0.06139540) y = (0.00955979 0.18500790 0.00809300)
    2i = 0, n = 3, j = 2, k = 2, d = 0.03939636, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00955979 0.18500790 0.00809300)
    2i = 1, n = 3, j = 0, k = 2, d = 0.03508088, x = (-0.08192606 0.01907521 0.02376382) y = (0.00231482 0.18500790 0.00253408)
    2i = 1, n = 3, j = 1, k = 2, d = 0.03389502, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00231482 0.18500790 0.00253408)
    2i = 1, n = 3, j = 2, k = 2, d = 0.03423970, x = (0.00000000 0.00000000 0.00000000) y = (0.00231482 0.18500790 0.00253408)
    2i = 0, n = 3, j = 0, k = 3, d = 0.03432181, x = (0.00000000 0.00000000 0.00000000) y = (0.00955979 0.18500790 0.00158027)
    2i = 0, n = 3, j = 1, k = 3, d = 0.06916460, x = (-0.20483765 0.04493479 0.06139540) y = (0.00955979 0.18500790 0.00158027)
    2i = 0, n = 3, j = 2, k = 3, d = 0.03956439, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00955979 0.18500790 0.00158027)
    2i = 1, n = 3, j = 0, k = 3, d = 0.03652760, x = (-0.08192606 0.01907521 0.02376382) y = (0.01075170 0.18500790 0.00364473)
    2i = 1, n = 3, j = 1, k = 3, d = 0.03404036, x = (-0.00152916 0.00097788 -0.00109852) y = (0.01075170 0.18500790 0.00364473)
    2i = 1, n = 3, j = 2, k = 3, d = 0.03435681, x = (0.00000000 0.00000000 0.00000000) y = (0.01075170 0.18500790 0.00364473)
    3i = 0, n = 3, j = 0, k = 4, d = 0.03428425, x = (0.00000000 0.00000000 0.00000000) y = (0.00749534 0.18500790 0.00038808)
    3i = 0, n = 3, j = 1, k = 4, d = 0.06842767, x = (-0.20483765 0.04493479 0.06139540) y = (0.00749534 0.18500790 0.00038808)
    3i = 0, n = 3, j = 2, k = 4, d = 0.03941518, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00749534 0.18500790 0.00038808)
    3i = 1, n = 3, j = 0, k = 4, d = 0.03643737, x = (-0.08192606 0.01907521 0.02376382) y = (0.01075170 0.18500790 0.00602855)
    3i = 1, n = 3, j = 1, k = 4, d = 0.03406866, x = (-0.00152916 0.00097788 -0.00109852) y = (0.01075170 0.18500790 0.00602855)
    3i = 1, n = 3, j = 2, k = 4, d = 0.03437987, x = (0.00000000 0.00000000 0.00000000) y = (0.01075170 0.18500790 0.00602855)
    i = 0, n = 3, j = 0, best = 0.03425420, best_i = 0
    i = 0, n = 3, j = 1, best = 0.06641452, best_i = 1
    i = 0, n = 3, j = 2, best = 0.03897782, best_i = 1
    i = 1, n = 3, j = 0, best = 0.03490656, best_i = 1
    i = 1, n = 3, j = 1, best = 0.03389502, best_i = 2
    i = 1, n = 3, j = 2, best = 0.03423970, best_i = 2
    

    For batch 0 (i = 0), everything seems correct. However, for batch 1 (i = 1), the printed point values do not appear in the input tensors. Is there something wrong with the code?

    chamfer 
    opened by hzxie 10
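
    One way to sanity-check this kind of mismatch is to compare the kernel's output against a naive PyTorch computation. A hedged reference sketch (not the repository's code):

    import torch

    def chamfer_reference(a, b):
        """Brute-force O(B*N*M) reference, only for cross-checking chamfer.cu.
        a: (B, N, 3), b: (B, M, 3)."""
        diff = a.unsqueeze(2) - b.unsqueeze(1)   # (B, N, M, 3) pairwise differences
        d = (diff ** 2).sum(-1)                  # (B, N, M) squared distances
        dist1, idx1 = d.min(dim=2)               # nearest point of b for each point of a
        dist2, idx2 = d.min(dim=1)               # nearest point of a for each point of b
        return dist1, idx1, dist2, idx2
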
  • Evaluate RGB image with pretrained model

    Hi, I am trying to evaluate the pretrained SVR AtlasNet model on an RGB image of a chair. My parameters are very similar to the demo's, but I get weird results (viewed in a Chrome 3D viewer). I used the demo grid generation. When I run your demo plane.jpg through my network, I get good results in the 3D viewer. (Attached: the demo plane and three weird results.) Can you please advise me on how to evaluate an RGB image?

    testing 
    opened by Itamare1982 9
  • Test set used as validation to choose best model

    In train_AE_Atlasnet.py, the test set is used as the validation set to choose the best model. The test set should never be used during training, and especially not to choose the best model, as this biases the results. It is probably more appropriate to report the results of the last training epoch if there is no validation set.

    bug 
    opened by lynetcha 9
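
    One way to address this is to carve a validation split out of the training set so that model selection never touches the test set. A hedged sketch (the dummy tensors stand in for the real point-cloud Dataset, and the 90/10 ratio is arbitrary):

    import torch
    from torch.utils.data import TensorDataset, random_split

    full_train = TensorDataset(torch.randn(1000, 2500, 3))   # placeholder for the real training Dataset
    n_val = len(full_train) // 10
    train_subset, val_subset = random_split(
        full_train,
        [len(full_train) - n_val, n_val],
        generator=torch.Generator().manual_seed(0),          # reproducible split
    )
    print(len(train_subset), len(val_subset))                # 900 100
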
  • The corresponding normalized mesh

    I downloaded the corresponding normalized meshes (only 58 MB) from the link you provided. I found that the number of meshes is much smaller than the number of corresponding point clouds. Could you please provide the full dataset of corresponding normalized meshes? Thank you!

    data 
    opened by wang-ps 9
  • Cannot download the point cloud data

    Hi! I'm trying to download the point cloud data provided in this link: https://cloud.enpc.fr/s/j2ECcKleA1IKNzk but the network fails every time I try to download.

    Do you know what's going on or how to download them?

    Thank you in advance!

    data 
    opened by jjpark 8
  • validation loss explodes

    (Screenshot of the training and validation loss curves.) I directly ran the script train_AE_Atlasnet.py without any modification. As shown above, performance is good on the training set but quite poor on the validation set: the validation loss increases quickly and does not decrease.

    pytorch 
    opened by AkonLau 8
  • About the point cloud dataset

    I found that some of the provided point cloud data are missing. Could you provide the full point cloud dataset, or tell me how to generate it? Thank you!

    data 
    opened by guoyan1991 7
  • Memory Leak

    I found that the unused self.dist1 and self.dist2 in the file "nndistance/functions/nnd.py" cause a memory leak in my environment (Python 3.5.2 with PyTorch 0.4.0).

    class NNDFunction(Function):
        def forward(self, xyz1, xyz2):
            dist1,dist2=cuda_compute_from(xyz1,xyz2)
            # following two lines cause memory leak
            self.dist1 = dist1
            self.dist2 = dist2
            return dist1, dist2
    
        def backward(self, graddist1, graddist2):
            gradxyz1,gradxyz2=grad_cuda_compute_from(graddist1,graddist2)
            return gradxyz1, gradxyz2
    
    chamfer pytorch 
    opened by liuyuan-pal 7
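
    The usual fix for this pattern is the static forward/backward API with ctx.save_for_backward, which releases saved tensors once backward has run. A hedged sketch reusing the issue's placeholder kernel names (cuda_compute_from and grad_cuda_compute_from are not real functions in this repository):

    from torch.autograd import Function

    class NNDFunction(Function):
        @staticmethod
        def forward(ctx, xyz1, xyz2):
            dist1, dist2 = cuda_compute_from(xyz1, xyz2)
            # If backward needs these tensors, register them with the context
            # instead of storing them on the Function instance; they are then
            # freed automatically after backward runs, which avoids the leak.
            ctx.save_for_backward(dist1, dist2)
            return dist1, dist2

        @staticmethod
        def backward(ctx, graddist1, graddist2):
            dist1, dist2 = ctx.saved_tensors  # retrieved here, then released
            gradxyz1, gradxyz2 = grad_cuda_compute_from(graddist1, graddist2)
            return gradxyz1, gradxyz2
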
  • Question about running

    Hi, I'm sorry to bother you again. When I ran the code as you explained, I got the following error: sh: 1: tmux: not found Setting up a new session... Exception in user code:

    Could you give me some advice? The full terminal output is as follows:

    /home/yukon/anaconda3/envs/pymesh/bin/python "/media/yukon/Extreme SSD/AtlasNet-master/train.py" anshu: Namespace(SVR=False, activation='relu', anisotropic_scaling=False, batch_size=32, batch_size_test=32, bottleneck_size=1024, class_choice=['airplane'], data_augmentation_axis_rotation=False, data_augmentation_random_flips=False, demo=True, demo_input_path='./doc/pictures/plane_input_demo.png', dir_name='', env='Atlasnet', hidden_neurons=512, http_port=8891, id='0', loop_per_epoch=1, lr_decay_1=120, lr_decay_2=140, lr_decay_3=145, lrate=0.001, multi_gpu=[0], nb_primitives=1, nepoch=150, no_learning=False, no_metro=False, normalization='UnitBall', num_layers=2, number_points=2500, number_points_eval=2500, random_rotation=False, random_seed=False, random_translation=False, reload_decoder_path='', reload_model_path='', remove_all_batchNorms=False, run_single_eval=False, sample=True, shapenet13=False, start_epoch=0, template_type='SPHERE', train_only_encoder=False, visdom_port=8890, workers=0) Loaded compiled 3D CUDA chamfer distance Launching new visdom instance in port 8890 TMUX=0 tmux new-session -d -s visdom_server ; send-keys "/home/yukon/anaconda3/envs/pymesh/bin/python -m visdom.server -p 8890 > /dev/null 2>&1" Enter sh: 1: tmux: not found Launching new HTTP instance in port 8891 TMUX=0 tmux new-session -d -s http_server ; send-keys "/home/yukon/anaconda3/envs/pymesh/bin/python -m http.server -p 8891 > /dev/null 2>&1" Enter sh: 1: tmux: not found Setting up a new session... Exception in user code:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 175, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/connection.py", line 95, in create_connection raise err File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/connection.py", line 85, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 710, in urlopen chunked=chunked, File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 398, in _make_request conn.request(method, url, **httplib_request_kw) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 239, in request super(HTTPConnection, self).request(method, url, body=body, headers=headers) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1291, in request self._send_request(method, url, body, headers, encode_chunked) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1337, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1286, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1046, in _send_output self.send(msg) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 984, in send self.connect() File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 205, in connect conn = self._new_conn() File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 187, in _new_conn self, "Failed to establish a new connection: %s" % e urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/adapters.py", line 450, in send timeout=timeout File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 788, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/retry.py", line 592, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8890): Max retries exceeded with url: /env/Atlasnetatlasnet_singleview_1_sphere_2atlasnet_singleview_1_sphere (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused',))

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/visdom/init.py", line 695, in _send data=json.dumps(msg), File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/visdom/init.py", line 656, in _handle_post r = self.session.post(url, data=data) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 577, in post return self.request('POST', url, data=data, json=json, **kwargs) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 529, in request resp = self.send(prep, **send_kwargs) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 645, in send r = adapter.send(request, **kwargs) File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/adapters.py", line 519, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8890): Max retries exceeded with url: /env/Atlasnetatlasnet_singleview_1_sphere_2atlasnet_singleview_1_sphere (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused',)) [Errno 111] Connection refused on_close() takes 1 positional argument but 3 were given New MLP decoder : hidden size 512, num_layers 2, activation relu Network weights loaded from ./training/trained_models/atlasnet_singleview_1_sphere/network.pth! Atlasnet generated mesh at ./doc/pictures/plane_input_demoAtlasnetReconstruction.ply!

    Process finished with exit code 0

    opened by tang-y-q 2
  • Question About Visulization

    Hey! Sorry to disturb you again!

    I want to know whether there are any effective Python tools to visualize an .obj file and save it to .png (other than MeshLab).

    Thanks for your reply!

    opened by yufeng9819 1
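
    One possible approach, sketched under the assumption that trimesh and pyglet are installed (offscreen rendering needs a working OpenGL context); the file name is just an example:

    import trimesh

    # Load a generated mesh (.obj or .ply) and render it to a PNG without MeshLab.
    mesh = trimesh.load("plane_input_demoAtlasnetReconstruction.ply")
    png_bytes = mesh.scene().save_image(resolution=(800, 600))
    with open("reconstruction.png", "wb") as f:
        f.write(png_bytes)
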
  • Compile Metro Distance (GPL3 Licence)

    Hi, I'm sorry to bother you again.
    When I used the code you gave me to build the Metro distance, I found that it could not be compiled successfully.
    It reports that the system path cannot be found. The results are shown below:

    (Screenshot attached.) Could you please give some advice on how to solve this problem? Thanks a lot!

    opened by tang-y-q 1
  • Question about train and test strategy

    Hi! Sorry to disturb you again.

    I want to ask about the train and test strategy. In your code, you set opt.shapenet13 == True. Does that mean you first train the network on all categories and then test on each class to get the per-class metrics?

    Looking forward to your reply!

    opened by yufeng9819 1
  • AtlasNet checkpoint not available

    Hi @ThibaultGROUEIX, thank you for sharing the code.

    When downloading the model checkpoint with trained_models/download_models.sh (https://cloud.enpc.fr/s/c27Df7fRNXW2uG3/download), for version 2.2 of the source code, the link seems to be broken or no longer available. Could you please help me with this?

    Thanks.

    opened by apicis 4
Releases (v3.0)
Owner: Thibault Groueix (also at https://bitbucket.org/ThibaultGROUEIX/)