NeuPy is a TensorFlow-based Python library for prototyping and building neural networks

Overview

NeuPy v0.8.2

NeuPy is a Python library for prototyping and building neural networks. NeuPy uses TensorFlow as a computational backend for deep learning models.

Installation

$ pip install neupy

User Guide

Articles and Notebooks

Growing Neural Gas

Growing Neural Gas is an algorithm that learns the topological structure of the data.

The code that generates the animation can be found in this IPython notebook.
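
For orientation, a minimal usage sketch (hedged: GrowingNeuralGas and its graph attribute follow NeuPy's GNG interface, but the exact parameter values here are illustrative assumptions):

import numpy as np
from neupy import algorithms, utils

utils.reproducible()

# Sample points from a circle; GNG should recover its ring topology.
angles = np.random.uniform(0, 2 * np.pi, size=500)
data = np.stack([np.cos(angles), np.sin(angles)], axis=1)

gng = algorithms.GrowingNeuralGas(
    n_inputs=2,
    max_nodes=100,
    verbose=True,
)
gng.train(data, epochs=100)

# The learned topology is available as a graph of nodes and edges.
print("Nodes in the learned graph:", len(gng.graph.nodes))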

Making Art with Growing Neural Gas

Growing Neural Gas is another example of an algorithm that follows a simple set of rules which, applied at a large scale, can generate complex patterns.

The image on the left is a great example of the art style that can be generated with a simple set of rules.

The main notebook that generates the image can be found here.

Self-Organizing Maps and Applications

Self-Organizing Maps (SOM or SOFM) is a very simple and powerful algorithm that has a wide variety of applications. The article covers some of them (a minimal usage sketch appears after this list), including:

  • Visualizing Convolutional Neural Networks
  • Data topology learning
  • High-dimensional data visualization
  • Clustering
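
Below is a minimal SOFM sketch (hedged: n_inputs, features_grid, step and learning_radius mirror the SOFM example quoted later on this page; the data and settings are illustrative, not the article's code):

import numpy as np
from neupy import algorithms, utils

utils.reproducible()

# Two noisy 2D blobs; the map's neurons spread themselves over the data.
data = np.concatenate([
    np.random.normal(0.0, 0.1, size=(100, 2)),
    np.random.normal(1.0, 0.1, size=(100, 2)),
])

sofm = algorithms.SOFM(
    n_inputs=2,
    features_grid=(4, 4),  # 16 neurons arranged in a 4x4 grid
    step=0.3,
    learning_radius=1,
    verbose=True,
)
sofm.train(data, epochs=50)

# Each prediction row one-hot encodes the winning neuron,
# so argmax gives a cluster index per sample.
clusters = sofm.predict(data).argmax(axis=1)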

Visualizing CNN based on Pre-trained VGG19

This notebook shows how you can easily explore the reasons behind a convolutional network's predictions and understand what types of features have been learned in different layers of the network.

In addition, this notebook shows how to use the neural network architectures available in NeuPy, like VGG19, with pre-trained parameters.
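
A hedged sketch of that workflow (vgg19 lives in NeuPy's architectures module per the v0.6.0 release notes; the HDF5 file name below is a placeholder for wherever the downloaded pre-trained parameters live):

from neupy import architectures, storage

# Build the VGG19 layer graph; parameters are randomly initialized here.
vgg19 = architectures.vgg19()

# Pre-trained parameters are distributed separately (HDF5 in NeuPy 0.7+)
# and loaded into the graph; the file name is a placeholder.
storage.load(vgg19, 'vgg19.hdf5')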

Visualize Algorithms based on the Backpropagation

The image on the left shows a comparison of the paths that different algorithms take during gradient descent. It's interesting to see how much information about an algorithm can be extracted from its trajectory alone. All of this is covered and explained in the article.

Hyperparameter optimization for Neural Networks

This article covers different approaches to hyperparameter optimization (a short illustrative sketch follows the list):

  • Grid Search
  • Random Search
  • Hand-tuning
  • Gaussian Process with Expected Improvement
  • Tree-structured Parzen Estimators (TPE)
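
To make one of these approaches concrete, here is a minimal random-search loop in plain Python. It is a generic illustration rather than code from the article, and train_and_score is a hypothetical function that trains a model and returns a validation score:

import random

def sample_config():
    # Log-uniform sampling for the step size, uniform for the layer width.
    return {
        'step': 10 ** random.uniform(-4, -1),
        'hidden_size': random.randint(50, 500),
    }

def random_search(train_and_score, n_trials=20):
    best_config, best_score = None, float('-inf')
    for _ in range(n_trials):
        config = sample_config()
        score = train_and_score(config)  # e.g. validation accuracy
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score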

The Art of SOFM

In this article, I just want to show how beautiful a neural network can sometimes be. I think it's quite rare that an algorithm can not only extract knowledge from the data, but also produce something beautiful using exactly the same set of training rules, without any modifications.

Discrete Hopfield Network

An article with extensive theoretical background on the Discrete Hopfield Network. It also has examples that show the advantages and limitations of the algorithm.

The image on the left is a visualization of the information stored in the network. This picture not only visualizes the network's memory; it shows everything the network knows about the world.
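
A minimal recall sketch (hedged: DiscreteHopfieldNetwork with synchronous updates follows NeuPy's API for this algorithm; the patterns are illustrative):

import numpy as np
from neupy import algorithms

# Two binary patterns stored in the network's weights.
patterns = np.array([
    [1, 1, 1, 0, 0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1, 1, 0, 0, 0],
])

dhnet = algorithms.DiscreteHopfieldNetwork(mode='sync')
dhnet.train(patterns)

# Recall from a corrupted copy of the first pattern;
# the network should converge back to the stored memory.
noisy = np.array([[1, 0, 1, 0, 0, 0, 0, 1, 0, 1]])
print(dhnet.predict(noisy))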

Create unique text-style with SOFM

This article describes a step-by-step solution that allows generating unique styles for arbitrary text.

Playing with MLP visualizations

This notebook shows interesting ways to look inside your MLP network.

Exploring world with Value Iteration Network (VIN)

One of the basic applications of the Value Iteration Network that learns how to find an optimal path between two points in the environment with obstacles.

Features learned by Restricted Boltzmann Machine (RBM)

A set of examples that use and explore knowledge extracted by a Restricted Boltzmann Machine.
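
A rough sketch of that workflow (hedged: RBM with n_visible/n_hidden and the visible_to_hidden method follow NeuPy's RBM interface; the data and sizes are illustrative):

import numpy as np
from neupy import algorithms

# Binary training vectors with 6 visible units.
data = np.array([
    [1, 0, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 1, 0, 1, 1, 1],
])

rbm = algorithms.RBM(n_visible=6, n_hidden=2, step=0.1, batch_size=2)
rbm.train(data, epochs=100)

# Hidden-unit activations act as the features the RBM has learned.
hidden_features = rbm.visible_to_hidden(data)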

Comments
  • Boston Pricing framework will not save network with dill

    I have been playing around with different things in the neupy library, so this network is arbitrary in purpose; however, it models the overall setup of the real ones. This code was just to get the dill dump/load figured out. The code is:

    import dill
    import csv
    import numpy as np
    from sklearn import datasets, preprocessing
    from sklearn.cross_validation import train_test_split
    from neupy import algorithms, layers
    from neupy.functions import rmsle
    
    np.random.seed(0)
    
    #variables
    EPOCHS = 200
    HIDDENLAYER = 17
    TRAIN = 0.7
    ROUND = 2
    STEP = 0.003
    TOL = 0.02
    
    with open('binary_conversion_dataset_input_2.csv','r') as dest1_f:
    
        data_iter = csv.reader(dest1_f, delimiter = ',', quotechar = '"')
        data = [data for data in data_iter]
        data_array1 = np.asarray(data, dtype = float)
        hitmiss_in = data_array1 #loads entire dataset from excel csv file
    
    with open('binary_conversion_dataset_target_2.csv','r') as dest2_f:
        data_iter = csv.reader(dest2_f, delimiter = ',', quotechar = '"')
        data = [data for data in data_iter]
        data_array2 = np.asarray(data, dtype = float)
        hitmiss_target = data_array2 #loads entire dataset from excel csv file
        hitmiss_input = hitmiss_in[:,:]
        hitmiss_target = hitmiss_target[:,:]
        hitmiss_predict = [0.53, 0.80, 0.40, 0.20, 0.07]
    
    #####break target set into singles
    hitmiss_target1a = hitmiss_target[:,0]
    hitmiss_target1b = hitmiss_target[:,1]
    hitmiss_target1c = hitmiss_target[:,2]
    hitmiss_target1d = hitmiss_target[:,3]
    hitmiss_target1e = hitmiss_target[:,4]
    
    ################################################Neural Network for hit miss
    
    x_train, x_test, y_train, y_test = train_test_split(
        hitmiss_input, hitmiss_target1a, train_size=TRAIN
    )
    
    cgnet = algorithms.ConjugateGradient(
        connection=[
            layers.TanhLayer(5),
            layers.TanhLayer(HIDDENLAYER),
            layers.OutputLayer(1),
        ],
        search_method='golden',
        tol = TOL,
        step = STEP,
        show_epoch=25,
        optimizations=[algorithms.LinearSearch],
    )
    
    cgnet.train(x_train, y_train, x_test, y_test, epochs=EPOCHS)
    hitmiss_final_A = cgnet.predict(hitmiss_predict).round(ROUND)
    
    with open('network-storage.dill', 'w') as net:
        dill.dump(cgnet, net)
    
    print(hitmiss_final_A)
    
    The full traceback produced is:
    
    Traceback (most recent call last):
      File "C:\Python27\save network script.py", line 74, in <module>
        dill.dump(cgnet, net)
      File "C:\Python27\lib\site-packages\dill\dill.py", line 115, in dump
        pik.dump(obj)
      File "C:\Python27\lib\pickle.py", line 224, in dump
        self.save(obj)
      File "C:\Python27\lib\pickle.py", line 331, in save
        self.save_reduce(obj=obj, *rv)
      File "C:\Python27\lib\pickle.py", line 401, in save_reduce
        save(args)
      File "C:\Python27\lib\pickle.py", line 286, in save
        f(self, obj) # Call unbound method with explicit self
      File "C:\Python27\lib\pickle.py", line 548, in save_tuple
        save(element)
      File "C:\Python27\lib\pickle.py", line 331, in save
        self.save_reduce(obj=obj, *rv)
      File "C:\Python27\lib\pickle.py", line 419, in save_reduce
        save(state)
      File "C:\Python27\lib\pickle.py", line 286, in save
        f(self, obj) # Call unbound method with explicit self
      File "C:\Python27\lib\site-packages\dill\dill.py", line 440, in save_module_dict
        StockPickler.save_dict(pickler, obj)
      File "C:\Python27\lib\pickle.py", line 649, in save_dict
        self._batch_setitems(obj.iteritems())
      File "C:\Python27\lib\pickle.py", line 681, in _batch_setitems
        save(v)
      File "C:\Python27\lib\pickle.py", line 331, in save
        self.save_reduce(obj=obj, *rv)
      File "C:\Python27\lib\pickle.py", line 419, in save_reduce
        save(state)
      File "C:\Python27\lib\pickle.py", line 286, in save
        f(self, obj) # Call unbound method with explicit self
      File "C:\Python27\lib\site-packages\dill\dill.py", line 440, in save_module_dict
        StockPickler.save_dict(pickler, obj)
      File "C:\Python27\lib\pickle.py", line 649, in save_dict
        self._batch_setitems(obj.iteritems())
      File "C:\Python27\lib\pickle.py", line 681, in _batch_setitems
        save(v)
      File "C:\Python27\lib\pickle.py", line 331, in save
        self.save_reduce(obj=obj, *rv)
      File "C:\Python27\lib\pickle.py", line 419, in save_reduce
        save(state)
      File "C:\Python27\lib\pickle.py", line 286, in save
        f(self, obj) # Call unbound method with explicit self
      File "C:\Python27\lib\site-packages\dill\dill.py", line 440, in save_module_dict
        StockPickler.save_dict(pickler, obj)
      File "C:\Python27\lib\pickle.py", line 649, in save_dict
        self._batch_setitems(obj.iteritems())
      File "C:\Python27\lib\pickle.py", line 681, in _batch_setitems
        save(v)
      File "C:\Python27\lib\pickle.py", line 286, in save
        f(self, obj) # Call unbound method with explicit self
      File "C:\Python27\lib\site-packages\dill\dill.py", line 526, in save_functor
        obj.keywords), obj=obj)
      File "C:\Python27\lib\pickle.py", line 401, in save_reduce
        save(args)
      File "C:\Python27\lib\pickle.py", line 286, in save
        f(self, obj) # Call unbound method with explicit self
      File "C:\Python27\lib\pickle.py", line 562, in save_tuple
        save(element)
      File "C:\Python27\lib\pickle.py", line 286, in save
        f(self, obj) # Call unbound method with explicit self
      File "C:\Python27\lib\site-packages\dill\dill.py", line 418, in save_function
        obj.func_closure), obj=obj)
      File "C:\Python27\lib\pickle.py", line 401, in save_reduce
        save(args)
      File "C:\Python27\lib\pickle.py", line 286, in save
        f(self, obj) # Call unbound method with explicit self
      File "C:\Python27\lib\pickle.py", line 562, in save_tuple
        save(element)
      File "C:\Python27\lib\pickle.py", line 286, in save
        f(self, obj) # Call unbound method with explicit self
      File "C:\Python27\lib\pickle.py", line 548, in save_tuple
        save(element)
      File "C:\Python27\lib\pickle.py", line 286, in save
        f(self, obj) # Call unbound method with explicit self
      File "C:\Python27\lib\site-packages\dill\dill.py", line 584, in save_cell
        pickler.save_reduce(_create_cell, (obj.cell_contents,), obj=obj)
      File "C:\Python27\lib\pickle.py", line 401, in save_reduce
        save(args)
      File "C:\Python27\lib\pickle.py", line 286, in save
        f(self, obj) # Call unbound method with explicit self
      File "C:\Python27\lib\pickle.py", line 548, in save_tuple
        save(element)
      File "C:\Python27\lib\pickle.py", line 286, in save
        f(self, obj) # Call unbound method with explicit self
      File "C:\Python27\lib\site-packages\dill\dill.py", line 418, in save_function
        obj.func_closure), obj=obj)
      File "C:\Python27\lib\pickle.py", line 405, in save_reduce
        self.memoize(obj)
      File "C:\Python27\lib\pickle.py", line 244, in memoize
        assert id(obj) not in self.memo
    AssertionError
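
    For reference, dill's documented round trip uses binary file modes, unlike the text-mode open() above; a minimal sketch follows, though whether the file mode alone resolves this AssertionError is not established here:

    # Binary mode is the safe choice for pickle/dill files.
    with open('network-storage.dill', 'wb') as net:
        dill.dump(cgnet, net)

    with open('network-storage.dill', 'rb') as net:
        cgnet_restored = dill.load(net)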
    
    question 
    opened by paperstsoap 26
  • Not getting repeatable results

    For some reason, when I combine multiple pairs of layers (see the attached script), the result I get is not the same as when I run the same script with a single pair of layers. For convenience, in addition to the script, I also attached the input files.

    Many thanks for your help! Ivan

    Script_Input_Files.zip

    bug 
    opened by ivan-marroquin 16
  • Quasi Newton methods failing with GPU calculations

    Hi, I've been training my network with the Quasi-Newton algorithm (bfgs mostly) on my CPU and it works really well. When I switched Theano to run on my GPU, both bfgs and sr1 (I haven't tried the other ones) return NaN after the first epoch. I tracked the NaN down to self.predict(input_test) = array([[ nan], [ nan], [ nan], ..., [ nan], [ nan], [ nan]], dtype=float32)

    What I see in the console output is that the first train loss is ok (a number of the magnitude I expect), but with a NaN (shown in the console as a "-") as the test loss. Every epoch afterwards also has "-" as both the loss and the error loss. I am happy to run some tests to help pinpoint the problem, but I don't know how to debug within a Theano function, so I haven't been able to track the NaN backwards from the output layer towards the inputs.

    Let me know what more information I can provide to help with this bug.

    bug 
    opened by luk-f-a 14
  • Custom activation function in Neupy (floating point to fixed point)

    I am trying to create a custom activation layer based on NeuPy; however, once I apply my custom layer to the network, the training and validation error stay the same in each epoch. For my custom function, I want to convert the input value from a floating-point value to a fixed-point value for both the ReLU and Softmax functions (same as the code below). Therefore, I created a function called "float_limit", which helps me change a floating-point value to a fixed-point value. My first idea was to use an int() function within my float_limit function. However, it shows a type error, since int() cannot be used on a tensor variable. So I changed the int() function to T.floor(), which can do the same work as int(). But then the result of the network ends up as a straight line. May I ask how I can fix this problem?
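
    To make the quantization concrete: with b = 8, float_limit maps n to floor(n * 256) / 256, so for example n = 0.53 becomes floor(135.68) / 256 = 135 / 256 ≈ 0.5273.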

    Thank you very much

    This is my code:

    from sklearn import datasets, model_selection
    from sklearn.preprocessing import OneHotEncoder
    from neupy import environment,algorithms, layers
    import numpy as np
    from sklearn.model_selection import train_test_split
    import theano
    import theano.tensor as T
    
    # load data
    mnist = datasets.fetch_mldata('MNIST original')
    data, target = mnist.data, mnist.target
    
    # normalize the data
    data = data / 255.
    data = data - data.mean(axis=0)
    
    target_scaler = OneHotEncoder()
    target = target_scaler.fit_transform(target.reshape((-1, 1)))
    target = target.todense()
    
    # print(target)
    
    # split data for training and testing
    environment.reproducible()
    
    x_train, x_test, y_train, y_test = train_test_split(
        data.astype(np.float32),
        target.astype(np.float32),
        train_size=(6. / 7)
    )
    
    # Theano is a main backend for the Gradient Descent based algorithms in NeuPy.
    theano.config.floatX = 'float32'
    
    ################# float limit #####################
    def float_limit(n, b):
        d = 2 ** b
        return T.floor(n * d) / d
    
    # def float_limit(n, b):
    #     d = 2 ** b
    #     return T.int(n * d) / d
    ############### custom function #################
    
    ############### relu ##################
    def relu(x, alpha=0):
        # x = float_limit(x, 8)
        result = 0.5 * (x + abs(x))
        return result
    
    class custom_relu(layers.ActivationLayer):
        def activation_function(self, input_value):
            # result = float_limit(relu(input_value), 8)
            # return result
            return relu(input_value)
    #################### softmax ########################
    # def softmax(z):
    #     z -= np.max(z)
    #     sm = (np.exp(z).T / np.sum(np.exp(z),axis=1)).T
    #     return sm
    
    class custom_softmax(layers.ActivationLayer):
        def activation_function(self, input_value):
            input_value = float_limit(input_value,8)
            input_value -= T.max(input_value)
            result = (T.exp(input_value).T / T.sum(T.exp(input_value), axis=1)).T
            limit_result = result
            return result
    
    class custom_softmax_1(layers.ActivationLayer):
        def activation_function(self, input_value):
            input_value = float_limit(input_value,8)
            return T.nnet.softmax(input_value)
    ########### start the model architecture ############
    
    network = algorithms.Momentum(
        [
            layers.Input(784),
            custom_relu(500), #Relu
            custom_relu(300),
            # layers.Relu(300), #Relu
            custom_softmax_1(10)  # this is custom_softmax_1 which apply float_limit to original softmax
        ],
        error='categorical_crossentropy',
        step=0.01,
        verbose=True,
        shuffle_data=True,
        momentum=0.99,
        nesterov=True,
    )
    # print the architecture(Input shape, Layer Type, Output shape)
    network.architecture()
    
    # train the network
    network.train(x_train, y_train, x_test, y_test, epochs=20)
    # network.train(x_train, y_train, epochs=20)
    
    # show the accuracy
    from sklearn import metrics
    
    y_predicted = network.predict(x_test).argmax(axis=1)
    y_test = np.asarray(y_test.argmax(axis=1)).reshape(len(y_test))
    print("y_predicted",y_predicted)
    print("y_test",y_test)
    
    print(metrics.classification_report(y_test, y_predicted))
    
    score = metrics.accuracy_score(y_test, y_predicted)
    print("Validation accuracy: {:.2%}".format(score))
    
    # plot the image
    from neupy import plots
    plots.error_plot(network)
    
    question 
    opened by kaichi040696 11
  • LevenbergMarquardt speed

    First of all, congrats, I like the design of this library very much. It seems that there is something strange (perhaps buggy) going on with the LevenbergMarquardt algorithm, or else I am doing something wrong. I am using the following to test it out:

    import theano
    import pandas as pd
    import numpy as np
    
    from sklearn.preprocessing import robust_scale
    from sklearn.cross_validation import train_test_split
    from neupy import algorithms, layers
    
    #Load the data
    data = pd.DataFrame(np.array(range(10000)), dtype='float32')
    target = pd.DataFrame([x**2 for x in data[0]], dtype='float32')
    
    theano.config.floatX = 'float32'
    
    #the main split
    mask = (np.random.rand(len(data)) < 0.8)
    traindata = data[mask]
    traintarget = target[mask]
    testdata = data[~mask]
    testtarget = target[~mask]
    
    #split training data set into validation
    x_train, x_test, y_train, y_test = train_test_split(
        traindata.astype(np.float32),
        traintarget.astype(np.float32),
        train_size=(.8)
    )
    
    network = algorithms.LevenbergMarquardt(
        [
            layers.Input(1),
            layers.Relu(32),
            layers.Relu(16),
            layers.Sigmoid(1),
        ],
        error='mse',
        step=0.01,
        verbose=True,
        shuffle_data=True,
        #momentum=0.99,
        #nesterov=True,
    )
    
    #Train the network with validation
    network.train(x_train, y_train, x_test, y_test, epochs=100)
    
    #predict new samples
    p = network.predict(testdata)
    e2 = (testtarget - p) ** 2
    mse = np.mean(e2)
    meanobs = np.mean(testtarget)
    ssobs = (testtarget - meanobs) ** 2
    print("MSE: ")
    print(mse)
    print("R2: ")
    print(1 - (mse/np.mean(ssobs)))
    

    Each training iteration takes about 50 seconds, while most of the algorithms take just a few ms.

    enhancement wontfix 
    opened by rmlopes 10
  • 'Layer' object has no attribute 'activation_function'

    Hi,

    Here is my setup:

    from neupy import algorithms, layers, environment
    
    environment.reproducible()
    
    cgnet = algorithms.ConjugateGradient(
        connection=[
            layers.Layer(5),
            layers.Sigmoid(100),
            layers.Output(1),
        ],
        shuffle_data = False,
        show_epoch=300,
        verbose=True,
        search_method='golden',
        update_function='fletcher_reeves',
        addons=[algorithms.LinearSearch],
    )
    

    Do you know why I get the following error:

    [THEANO] Initializing Theano variables and functions. AttributeError: 'Layer' object has no attribute 'activation_function'

    Thanks.

    question 
    opened by denisvolokh 10
  • Crashes my RAM

    I am using Oja's rule on a dataset of size 400x156300. It seems to crash my RAM. I am not sure what is causing this. Please help. I have 12 GB of RAM. I tried using memmap; it still crashes!

    Convert to memmap and reduce precision:

    [num_sample, num_feat] = train_data.shape
    filename = path.join(mkdtemp(), 'train_data.dat')
    memmap_train = np.memmap(filename, dtype='float32', mode='w+', shape=(num_sample, num_feat))
    memmap_train[:] = train_data[:]
    del train_data, test_data

    Apply Oja's rule:

    ojanet = algorithms.Oja(minimized_data_size=1250, step=1e-10, verbose=True, show_epoch=1)
    ojanet.train(memmap_train, epsilon=1e-3, epochs=10000)
    red_train_data = ojanet.predict(memmap_train)
    ojanet.plot_errors(logx=False)
    pdb.set_trace()
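
    For scale (a back-of-envelope estimate, assuming float64 arrays before the memmap conversion): the 400 x 156300 input is 400 * 156300 * 8 bytes ≈ 0.5 GB, and an Oja weight matrix of shape 156300 x 1250 alone is 156300 * 1250 * 8 bytes ≈ 1.56 GB, so a handful of arrays of this size held simultaneously can plausibly exhaust 12 GB of RAM.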

    bug 
    opened by abhigenie92 10
  • modified mse for BR

    After fiddling with it (https://github.com/itdxer/neupy/issues/236#issuecomment-476601598), I can't get the values of the weights and biases from the library. Here's my code:

    from numpy import genfromtxt
    import numpy as np
    import neupy.algorithms.gd.lev_marq as lm
    from neupy import algorithms, storage
    from neupy.layers import *
    from neupy.exceptions import StopTraining
    from neupy import utils
    from sklearn import preprocessing
    from sklearn.model_selection import train_test_split
    import scipy as sci
    import time
    import tensorflow as tf
    
    start = time.time()
    
    def on_epoch_end(optimizer):
        if optimizer.errors.valid[-1] < 0.01:
            raise StopTraining("Training has been interrupted")
    
    dataset = genfromtxt('cuaca_fulldata.csv', delimiter=',')
    data = dataset[:,:-1]
    target = dataset[:,len(dataset[0])-1]
    print(data)
    print(target)
    N = np.size(data) * 0.75
    data_scaler = preprocessing.MinMaxScaler()
    
    data = data_scaler.fit_transform(data)
    
    utils.reproducible()
    
    x_train, x_test, y_train, y_test = train_test_split(
        data, target, test_size=0.15
    )
    
    def modified_mse(actual, expected, parameters):
        # Loss function can be modified here
        sum_error = 0
        sum_weight = []
        hessian  = lm.setup_parameter_updates()
        trace = sci.inv(hessian)
        sum_error += sum([(expected[i] - actual[i]) ** 2 for i in range(len(expected))])
        sum_weight += sum((parameters) ** 2)
        alpha = sum(parameters) / (2 * sum_weight + trace);
        gamma = sum(parameters) - (alpha * trace);
        beta = abs((N - gamma) / (2 * sum_error));
        C = (beta*sum_error) + (alpha*sum_weight)
        return tf.reduce_mean(tf.square(actual - expected))
    
    
    class LMwithBR(algorithms.LevenbergMarquardt):
        loss = None
    
        def __init__(self, *args, **kwargs):
            self.loss = modified_mse
            super(LMwithBR, self).__init__(*args, **kwargs)
    
    n_inputs = 6
    n_hidden = 3
    n_outputs = 1
    
    network = join(
        Input(n_inputs),
        Sigmoid(n_hidden),
        Linear(n_outputs),
    )
    
    optimizer = LMwithBR(
        network,
        verbose=True,
        show_epoch=5,
        mu=0.05,
        #signals=on_epoch_end,
    )
    
    question 
    opened by Ohayoosan 9
  • Hessian algorithm produces NaN values during the training procedure

    RE https://github.com/itdxer/neupy/issues/118#issuecomment-245899060

    ------------------------------------------------
    | Epoch # | Train err | Valid err | Time       |
    ------------------------------------------------
    | 1       | 0.162159  | 0.095479  | 00:00:26   |
    | 2       | 0.096515  | 0.048322  | 00:00:25   |
    | 3       | 0.048930  | 0.024670  | 00:00:26   |
    | 4       | 0.025032  | 0.014477  | 00:00:25   |
    | 5       | 0.014700  | 0.010096  | 00:00:26   |
    | 6       | 0.010240  | 0.008018  | 00:00:31   |
    | 7       | 0.008117  | 0.006877  | 00:00:26   |
    | 8       | 0.006950  | nan       | 00:00:25   |
    | 9       | nan       | nan       | 00:00:26   |
    
    bug 
    opened by itdxer 8
  • Feature request:

    Give plot_optimizer_errors in neupy.algorithms.plots an image flag, similar to the show flag, that triggers a PIL image to be returned.

    location: https://github.com/itdxer/neupy/blob/master/neupy/algorithms/plots.py

    opened by DPR-Sanchez 7
  • File dependencies

    In [1]:

    %matplotlib tk
    import matplotlib.pyplot as plt
    from sklearn import datasets
    from neupy import algorithms, utils

    from helpers import plot_2d_grid


    plt.style.use('ggplot')
    utils.reproducible()


    if __name__ == '__main__':
        GRID_WIDTH = 20
        GRID_HEIGHT = 1

        data, targets = datasets.make_moons(n_samples=400, noise=0.1)
        data = data[targets == 1]

        sofm = algorithms.SOFM(
            n_inputs=2,
            features_grid=(GRID_HEIGHT, GRID_WIDTH),

            verbose=True,
            shuffle_data=True,

            # The winning neuron will be selected based on the
            # Euclidean distance. For this task it's important
            # that distance is Euclidean. Other distances will
            # not give us the same results.
            distance='euclid',

            learning_radius=2,
            # Reduce learning radius by 1 after every 20 epochs.
            # Learning radius will be equal to 2 during first
            # 20 epochs and on the 21st epoch it will be equal to 1.
            reduce_radius_after=20,

            # 2 means that neighbour neurons will have high learning
            # rates during the first iterations
            std=2,
            # Defines a rate at which parameter std will be reduced.
            # Reduction is monotonic and reduces after each epoch.
            # In 50 epochs std = 2 / 2 = 1 and after 100 epochs
            # std = 2 / 3 and so on.
            reduce_std_after=50,

            # Step (or learning rate)
            step=0.3,
            # Defines a rate at which parameter step will be reduced.
            # Reduction is monotonic and reduces after each epoch.
            # In 50 epochs step = 0.3 / 2 = 0.15 and after 100 epochs
            # step = 0.3 / 3 = 0.1 and so on.
            reduce_step_after=50,
        )
        sofm.train(data, epochs=20)

        red, blue = ('#E24A33', '#348ABD')

        plt.scatter(*data.T, color=blue)
        plt.scatter(*sofm.weight, color=red)

        weights = sofm.weight.reshape((2, GRID_HEIGHT, GRID_WIDTH))
        plot_2d_grid(weights, color=red)

        plt.show()

    ModuleNotFoundError                       Traceback (most recent call last)
    in
          4 from neupy import algorithms, utils
          5
    ----> 6 from helpers import plot_2d_grid
          7
          8

    ModuleNotFoundError: No module named 'helpers'

    I did not move any files from where the examples were installed. I did not find a "helpers" file related to neupy on my disk. I read all the fixes for that type of problem. After 4 hours I could not make this file work.

    opened by rpantera 7
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks to see if all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.

    If you have further questions you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
  • Bump pillow from 3.0.0 to 9.3.0 in /requirements

    Bumps pillow from 3.0.0 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • cannot import name 'MutableMapping' from 'collections'

    Hi. I installed Spyder (standalone version) on a new MacBook Pro and installed neupy with "pip install neupy". The installation went fine, but my code gets this error when I run it: "cannot import name 'MutableMapping' from 'collections'". I followed the solution in this video, but it did not solve my problem.
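
    For context, Python 3.10 removed the abstract container classes from the top-level collections module (they live only in collections.abc now), so libraries that still do "from collections import MutableMapping" fail to import. A hedged workaround sketch (an interim hack, not an official fix):

    import collections
    import collections.abc

    # Python 3.10 removed these aliases from `collections`; restore them
    # before importing libraries that still expect the old location.
    for name in ('MutableMapping', 'Mapping', 'Sequence', 'Iterable'):
        if not hasattr(collections, name):
            setattr(collections, name, getattr(collections.abc, name))

    import neupy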

    Any thoughts? Thank you.

    opened by csunlab 0
  • ART1: calling predict() multiple times does not give same result

    The ART1 documentation says that calling predict/train on the same model multiple times trains a new model: http://neupy.com/modules/generated/neupy.algorithms.ART1.html Thus I would expect the calls to give the same result. However, different calls of predict() on the same model give different results. With my data, the result is different on the second run and then stays the same. Code:

    from neupy.algorithms import ART1
    artnet = ART1(
        step=0.1,
        rho=0.5,
        n_clusters=5,
    )
    for i in range(10):
        clusters = artnet.predict(data)
        print(clusters)
    

    yields result:

    [0. 1. 2. ... 1. 1. 1.]
    [0. 1. 1. ... 1. 1. 1.]
    [0. 1. 1. ... 1. 1. 1.]
    [0. 1. 1. ... 1. 1. 1.]
    [0. 1. 1. ... 1. 1. 1.]
    [0. 1. 1. ... 1. 1. 1.]
    [0. 1. 1. ... 1. 1. 1.]
    [0. 1. 1. ... 1. 1. 1.]
    [0. 1. 1. ... 1. 1. 1.]
    [0. 1. 1. ... 1. 1. 1.]
    

    The data is 2700 data points of about 200 features. If needed I can supply the data.

    opened by peterdekker 4
  • Not all documented ART1 parameters actually used

    Not all parameters that are documented for ART1 are, as far as I can see, actually used: http://neupy.com/modules/generated/neupy.algorithms.ART1.html Three parameters that come from BaseNetwork (show_epoch, shuffle_data and signals) are not used.

    If I look at the source code, these parameters are used in the train() method of BaseNetwork. However, ART1 has its own train() method, which does not call the train() method of BaseNetwork and does not use these parameters. Would it be good to remove these parameters from the documentation for ART1?

    opened by peterdekker 0
  • Issue with LevenbergMarquardt optimiser algorithm (scan_perform)

    I am trying to build a simple feedforward network with the Levenberg-Marquardt optimiser algorithm, but it won't compile, as it seems to be missing the scan_perform module. If I change the optimiser algorithm to GradientDescent, then it works fine.

    The code needed to reproduce the error is:

    from neupy import algorithms
    from neupy.layers import *
    
    net = Input(2) >> Sigmoid(2) >> Linear(1)
    opt = algorithms.LevenbergMarquardt(net)
    

    and the error I get is:

    You can find the C code in this temporary file: /tmp/theano_compilation_error_8yf9iyd4
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:11860:69: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12617:21: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12618:22: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12619:19: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12626:24: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12627:25: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12628:22: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12629:13: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12630:13: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12631:13: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12686:24: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12687:25: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12688:22: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12689:13: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12690:13: is not found.
    library /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12691:13: is not found.
    Traceback (most recent call last):
      File "/home/gus/anaconda3/lib/python3.8/site-packages/theano/scan_module/scan_perform_ext.py", line 48, in <module>
        raise ImportError()
    ImportError
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/gus/anaconda3/lib/python3.8/site-packages/theano/scan_module/scan_perform_ext.py", line 63, in <module>
        raise ImportError()
    ImportError
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/gus/Documents/UNSW_work/code/python/latent_variables/nn_test_neupy2.py", line 12, in <module>
        opt = algorithms.LevenbergMarquardt(net)
      File "/home/gus/anaconda3/lib/python3.8/site-packages/neupy/algorithms/gd/base.py", line 113, in __init__
        super(GradientDescent, self).__init__(connection, **options)
      File "/home/gus/anaconda3/lib/python3.8/site-packages/neupy/algorithms/constructor.py", line 291, in __init__
        super(ConstructibleNetwork, self).__init__(*args, **kwargs)
      File "/home/gus/anaconda3/lib/python3.8/site-packages/neupy/algorithms/constructor.py", line 163, in __init__
        self.init_methods()
      File "/home/gus/anaconda3/lib/python3.8/site-packages/neupy/algorithms/constructor.py", line 338, in init_methods
        train_epoch=theano.function(
      File "/home/gus/anaconda3/lib/python3.8/site-packages/theano/compile/function.py", line 306, in function
        fn = pfunc(params=inputs,
      File "/home/gus/anaconda3/lib/python3.8/site-packages/theano/compile/pfunc.py", line 483, in pfunc
        return orig_function(inputs, cloned_outputs, mode,
      File "/home/gus/anaconda3/lib/python3.8/site-packages/theano/compile/function_module.py", line 1841, in orig_function
        fn = m.create(defaults)
      File "/home/gus/anaconda3/lib/python3.8/site-packages/theano/compile/function_module.py", line 1714, in create
        _fn, _i, _o = self.linker.make_thunk(
      File "/home/gus/anaconda3/lib/python3.8/site-packages/theano/gof/link.py", line 697, in make_thunk
        return self.make_all(input_storage=input_storage,
      File "/home/gus/anaconda3/lib/python3.8/site-packages/theano/gof/vm.py", line 1087, in make_all
        thunks.append(node.op.make_thunk(node,
      File "/home/gus/anaconda3/lib/python3.8/site-packages/theano/scan_module/scan_op.py", line 925, in make_thunk
        from . import scan_perform_ext
      File "/home/gus/anaconda3/lib/python3.8/site-packages/theano/scan_module/scan_perform_ext.py", line 120, in <module>
        cmodule.GCC_compiler.compile_str(dirname, code, location=loc,
      File "/home/gus/anaconda3/lib/python3.8/site-packages/theano/gof/cmodule.py", line 2398, in compile_str
        raise Exception('Compilation failed (return status=%s): %s' %
    Exception: ('The following error happened while compiling the node', for{cpu,scan_fn}(Elemwise{Mul}[(0, 0)].0, Subtensor{int64:int64:int8}.0, Elemwise{Mul}[(0, 0)].0, Elemwise{Mul}[(0, 0)].0, Elemwise{Mul}[(0, 0)].0, Elemwise{Mul}[(0, 0)].0, Reshape{1}.0, Elemwise{sub,no_inplace}.0, Elemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)].0, InplaceDimShuffle{1,0}.0, Elemwise{sub,no_inplace}.0, MakeVector{dtype='int64'}.0, Alloc.0, InplaceDimShuffle{1,0}.0, InplaceDimShuffle{1,0}.0), '\n', 'Compilation failed (return status=1): /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp: In function ‘PyObject* __Pyx_PyCFunction_FastCall(PyObject*, PyObject**, Py_ssize_t)’:. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:11860:69: error: too many arguments to function. 11860 |     return (*((__Pyx_PyCFunctionFast)meth)) (self, args, nargs, NULL);.       |                                                                     ^. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp: In function ‘void __Pyx__ExceptionSave(PyThreadState*, PyObject**, PyObject**, PyObject**)’:. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12617:21: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_type’; did you mean ‘curexc_type’?. 12617 |     *type = tstate->exc_type;.       |                     ^~~~~~~~.       |                     curexc_type. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12618:22: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_value’; did you mean ‘curexc_value’?. 12618 |     *value = tstate->exc_value;.       |                      ^~~~~~~~~.       |                      curexc_value. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12619:19: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_traceback’; did you mean ‘curexc_traceback’?. 12619 |     *tb = tstate->exc_traceback;.       |                   ^~~~~~~~~~~~~.       |                   curexc_traceback. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp: In function ‘void __Pyx__ExceptionReset(PyThreadState*, PyObject*, PyObject*, PyObject*)’:. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12626:24: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_type’; did you mean ‘curexc_type’?. 12626 |     tmp_type = tstate->exc_type;.       |                        ^~~~~~~~.       |                        curexc_type. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12627:25: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_value’; did you mean ‘curexc_value’?. 12627 |     tmp_value = tstate->exc_value;.       |                         ^~~~~~~~~.       |                         curexc_value. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12628:22: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_traceback’; did you mean ‘curexc_traceback’?. 12628 |     tmp_tb = tstate->exc_traceback;.       |                      ^~~~~~~~~~~~~.       
|                      curexc_traceback. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12629:13: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_type’; did you mean ‘curexc_type’?. 12629 |     tstate->exc_type = type;.       |             ^~~~~~~~.       |             curexc_type. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12630:13: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_value’; did you mean ‘curexc_value’?. 12630 |     tstate->exc_value = value;.       |             ^~~~~~~~~.       |             curexc_value. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12631:13: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_traceback’; did you mean ‘curexc_traceback’?. 12631 |     tstate->exc_traceback = tb;.       |             ^~~~~~~~~~~~~.       |             curexc_traceback. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp: In function ‘int __Pyx__GetException(PyThreadState*, PyObject**, PyObject**, PyObject**)’:. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12686:24: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_type’; did you mean ‘curexc_type’?. 12686 |     tmp_type = tstate->exc_type;.       |                        ^~~~~~~~.       |                        curexc_type. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12687:25: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_value’; did you mean ‘curexc_value’?. 12687 |     tmp_value = tstate->exc_value;.       |                         ^~~~~~~~~.       |                         curexc_value. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12688:22: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_traceback’; did you mean ‘curexc_traceback’?. 12688 |     tmp_tb = tstate->exc_traceback;.       |                      ^~~~~~~~~~~~~.       |                      curexc_traceback. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12689:13: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_type’; did you mean ‘curexc_type’?. 12689 |     tstate->exc_type = local_type;.       |             ^~~~~~~~.       |             curexc_type. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12690:13: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_value’; did you mean ‘curexc_value’?. 12690 |     tstate->exc_value = local_value;.       |             ^~~~~~~~~.       |             curexc_value. /home/gus/.theano/compiledir_Linux-5.4--lowlatency-x86_64-with-glibc2.10-x86_64-3.8.3-64/scan_perform/mod.cpp:12691:13: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘exc_traceback’; did you mean ‘curexc_traceback’?. 12691 |     tstate->exc_traceback = local_tb;.       |             ^~~~~~~~~~~~~.       |             curexc_traceback. ')
    

    As I said above, GradientDescent works fine; it's just LevenbergMarquardt, Hessian and the other Newton's-method-based ones.

    Any idea how I can fix this?

    opened by AngusKenny 0
Releases (v0.8.2)
  • v0.8.2(Apr 4, 2019)

    Main changes:

    • Improved functionality for layer/network copying
    • Added repeat function
    • Fixed output shape for the conv layers when input is unknown
    • Typo fixes
  • v0.8.1(Mar 30, 2019)

  • v0.8.0(Jan 28, 2019)

    Main changes:

    • New design for the layer graphs and layers
    • Changed inline operators: > (still supported) became >>, and lists were replaced by the | operator for parallel connections
    • Added 2 new step decay algorithms, namely exponential_decay and polynomial_decay
    • New regularizers
    • New signals functionality, can be extended with classes (signals attribute)
    • Tensorflow optimizers are used instead of self-implemented functions
    • Added GroupNorm layer
    • Changed behavior of the predict methods
    • Added show method to the networks
    • Added plot_errors for the optimizers with better and more accurate visualizations

    Small changes:

    • The error parameter for optimizers was renamed to loss
    • Changed summary format in the logs
    • Removed training_errors and validation_errors; they were replaced with the new errors attribute
    • Renamed the prediction_error method to score

    Misc:

    • Big code refactoring
    • Better exceptions

    Removed

    • Add-ons were removed
    • Tuple of integer as network configuration
    • Removed RBFKmeans
    • Removed train_end_signal and epoch_end_signal
    • Removed plots.error_plot function
    • Removed plots.network_structure function
  • v0.7.3(Jan 17, 2019)

    • Fixed a problem that occurred when GNG was loaded from a pickled file. Issue https://github.com/itdxer/neupy/issues/229
    • Improved representation of the GNG graph elements
    • Added extra exceptions to the GNG graph methods in order to fail informatively when an unexpected action is applied
  • v0.7.2(Dec 13, 2018)

    Changes:

    • Added missing dilation rates for ResNet-50 architecture
    • Fixed fan computation for weight initializers
    • Added variable initializer in the predict method
    • Changed the default initializer from XavierNormal to HeNormal
  • v0.7.1(Dec 10, 2018)

    Changes:

    • Sped up network initialization via lazy parameter initialization, using tensorflow's tensors instead of numpy's arrays
    • The global pooling layer accepts two string arguments that point to different tensorflow functions.
    • Fixes for the reshape layer when used with unknown input shape
    • Fixed cross entropy loss functions for spatial inputs
    • Removed input blocker during the training
    • Combined GradientDescent and MinibatchGradientDescent into one class GradientDescent
  • v0.7.0(Dec 2, 2018)

    Changes:

    • Backend was moved to Tensorflow
    • Pickle storage for the weights has been replaced with HDF5
    • Changed order of the dimensions for the convolutional filter (channel expected to be in the last dimension)
    • The compile method was removed
    • Added wolfe search to conjugate gradient
    • Fixes for the training algorithms

    Removed:

    • Linear models
    • Quickprop training algorithm
    • Ensemble algorithms
  • v0.6.5(Oct 12, 2018)

  • v0.6.4(Mar 26, 2018)

    Features:

    • Added Growing Neural Gas (GNG) algorithm
    • Added 2 new improvements to the base GNG algorithms

    Bug:

    • Set up a default value for epoch_time in order to prevent crashes when training was stopped on the first iteration.
  • v0.6.3(Feb 3, 2018)

    Fixes:

    • Use the old table style, since otherwise the table breaks for some of the font styles and doesn't look good in ipython notebooks
    • Adam optimizer fix for GPU (#200)
  • v0.6.2(Dec 11, 2017)

  • v0.6.1(Dec 3, 2017)

    Enhancement:

    • Switched to Theano version 1.0
    • Use tableprint library instead of neupy table module
    • Use progressbar2 library instead of neupy progressbar module
    • Removed logic that controls number of outputs in terminal during the training

    Bugs:

    • Convert output from the Step layer to an integer explicitly in order to avoid boolean outputs
    • Fixed an issue where the message about training interruption would break the result table
  • v0.6.0(Sep 10, 2017)

    Features:

    • Added module that contains popular DNN architectures, namely Resnet50, VGG19, VGG16, SqueezeNet, AlexNet
    • Pre-trained parameters for the new DNN architectures
    • Changed the format in which neupy stores architectures.
    • In addition to the existing pickle format, added support for a few new formats, namely hdf5, json and python dictionary

    Enhancement:

    • Changed the API for the mixture-of-experts ensemble. Now it works from the architectures module.
    • Save pickle files using protocol compatible with python 2 and 3 versions
    • New error messages that explain failures during parameter loading in storage module
    • Use different parameter loading strategies in storage module

    Bugs:

    • Fixed issue with PNN class mapping (#177)
    • Added SOFM weight normalization to the cosine similarity measurement
  • v0.5.2(May 29, 2017)

    Features:

    • Pickle serializer for networks with fixed architectures
    • SOFM weight initialization with PCA
    • Added hexagon shaped grid types to SOFM
    • Added parameter reduction over time for SOFM
    • Possibility to set up different step sizes for different neighbour neurons in SOFM
    • Added support for N-dimensional grid shapes for SOFM
    • Max-norm regularization algorithm

    Enhancement:

    • Made n_outputs an optional parameter for SOFM when feature_grid is specified (and vice versa)

    Bugs:

    • Fixed problem with more complicated cases for inline connections
  • v0.5.1(Mar 12, 2017)

    Features:

    • Added LVQ algorithm
    • Added LVQ2 algorithm
    • Added LVQ2.1 algorithm
    • Added LVQ3 algorithm
    • Added step reduction algorithm into all LVQ versions

    Bugs:

    • Changed SciPy version in order to fix problem with golden search algorithm
  • v0.5.0(Feb 5, 2017)

    Features:

    • Added LSTM layer
    • Added GRU layer
    • Added customizable weight and bias initialization for LSTM and GRU layers
    • Created pointer to all layers in the network and connection
    • Ability to extract layer by its name from the connection

    Enhancement:

    • Theano 0.9.0 support
    • Fixed issues related to float16 data type

    Bugs:

    • Modified algorithm for layer name generation
    • Solved problem with Dropout in Wolfe Search algorithm
    • Fixed a few minor bugs
  • v0.4.2(Jan 10, 2017)

    Bugs:

    • Fixed input and output layers duplication in the connection
    • Fixed training issues for networks with shared weights
    • Make valid input order for the compile method
    • Fixed progress bar appearance for the training with multiple inputs
    • Fixed shuffle data option for networks with multiple inputs
  • v0.4.1(Jan 4, 2017)

    Features:

    • Added Leaky Relu layer
    • Saliency Map plot
    • Ability to specify input and output layers for layer connections (start and end methods)
    • Added method that compiles network (compile method)

    Enhancements:

    • Validate that layer name is unique in the network
    • Added ability to train networks with multiple inputs
  • v0.4.0(Dec 11, 2016)

    Features:

    • Added Global Pooling layer
    • Added Concatenate layer
    • Added Element-wise layer
    • Added Embedding layer
    • Added Local Response Normalization layer
    • Added Discrete digits dataset
    • Parallel connections
    • Added layer_structure plot to visualize relations between layers
    • Added an ability to save and load weights from the pickle file

    Enhancement:

    • Improved and modified layer connection API
    • Developed graph structure that stores relations between layers
    • Set up bias as an optional parameter
    • Skip layers for the layer_structure function
    • Assign unique identifier for each layer

    Bugs:

    • Fixed lots of small bugs in the layer connection module
    • Fixed bugs with Hessian algorithm
    • Fixed summary table
  • v0.3.1(Sep 5, 2016)

    Features:

    • Added Restricted Boltzmann Machine (RBM) (#64)

    Enhancement:

    • PNN mini-batch prediction (#72)
    • Check that it's possible to connect two layers during layer connection procedure (#114)
    • Add function for RBM that makes Gibbs sampling from the visible input for multiple iterations (#115)
    • Add more flexible way to initialize network parameters (#110)

    Refactoring:

    • Use ParameterProperty class instead of ArrayProperty (#111)

    Bugs:

    • Fixed the Quasi-Newton algorithm for GPU training (#108)
    • NeuPy shows NaN output values in summary tables as a dash symbol (#109)
  • v0.3.0.b1(Jun 13, 2016)

    Features:

    • Neural Network Surgery module (#98)
    • Elu layer (#62)
    • PReLu layer (#90)
    • Leaky Relu layer (#89)
    • Unpooling layer (#88)
    • Batch Normalization layer (#87)
    • Hinge loss function (#76)
    • Layer that adds Gaussian Noise to the input (#93)

    Enhancement:

    • Improved layers API (#78)
    • Get rid of output layers (#96)
    • Get rid of commands module (#100)

    Bugs:

    • Added valid parameters to the network constructor class (https://github.com/itdxer/neupy/commit/de09f5abd6667824f14806709de2afa1ac5daa09#commitcomment-17833269)

    Misc:

    • Progress bar module refactoring (#104)
    • Added more tests
    • Updated docs
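
    A sketch combining several of the layers introduced in this release. Layer names follow the NeuPy documentation; sizes and argument values are arbitrary assumptions:

        from neupy import layers

        network = layers.join(
            layers.Input(784),
            layers.Relu(256),
            layers.BatchNorm(),             # new Batch Normalization layer
            layers.PRelu(128),              # new PReLu layer
            layers.GaussianNoise(std=0.1),  # new Gaussian noise layer
            layers.Softmax(10),
        )
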
  • v0.2.3(May 24, 2016)

  • v0.2.2(May 13, 2016)

  • v0.2.1(May 1, 2016)

    Features:

    • Added padding option for pooling layers
    • Added shape option for the Reshape layer
    • Added the ability to log using simple one-line messages (#83)

    Enhancements:

    • Moved the error plot into a separate module and removed the plot_errors method from the NN class

    Bugs:

    • Fixed issue with GPU training (#85)
    • Fixed problems with training interruption (#82)
    • Fixed issue with termios crashes (#80)
  • v0.2.1b1(Apr 10, 2016)

    Features:

    • Implemented convolutional layer (#56; see the sketch below)
    • Implemented Max pooling layer (#56)
    • Implemented Average pooling layer (#56)
    • Implemented reshape layer (#56)

    Enhancements:

    • Implemented prediction using batches (#58)
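
    A sketch of a small convolutional stack built from the new layers. Filter and pooling shapes assume the Theano-era channels-first convention, and the join and Input helpers from later versions are used for readability:

        from neupy import layers

        network = layers.join(
            layers.Input((1, 28, 28)),       # 1 channel, 28x28 image
            layers.Convolution((32, 3, 3)),  # 32 filters of size 3x3
            layers.Relu(),                   # activation only, no extra weights
            layers.MaxPooling((2, 2)),
            layers.Reshape(),                # flatten before the dense layer
            layers.Softmax(10),
        )
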
  • v0.2.0(Apr 10, 2016)

    Fixed bugs:

    • Fixed model recovery issues (#75)
    • Set up minimum positive value based on the floatX number (#74)
    • Get rid of curses module (#73)
    • Fixed RuntimeError (#71)
    • Fixed training interruption issue (#70)

    Tests:

    • Ran tests for the 32-bit and 64-bit float configurations separately
  • v0.2.0b1(Mar 31, 2016)

    Main features:

    • Set up Theano as the main backend for algorithms based on backpropagation
    • Sped up many algorithms
    • Added new algorithms: Adam, Adamax, Nesterov Momentum, RMSProp, Adadelta, Adagrad (see the sketch below)
    • Added batch learning

    Enhancement:

    • Set up limits for the weight delta before updating it in the Quickprop algorithm
    • Added the ability to initialize a custom error function for the network
    • Identified required properties in already implemented algorithms
    • Extracted shared documentation into a separate class
    • Implemented functionality that detects whether the user's terminal supports styles and switches behaviour when it doesn't
    • Added different weight initialization methods
    • Added validation for algorithms that don't support certain learning rate update rules
    • Ignored keypress events in the terminal during the training procedure

    Bugs:

    • Fixed a few algorithms
    • Investigated all problems related to second-order backpropagation algorithms
    • Fixed shared docs for init_method in the Layer class
    • Fixed lots of small bugs

    Docs:

    • Added more information to the documentation
    • Documented lots of functions and classes
    • Modified tutorials and made them compatible with the new version
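
    A sketch of one of the new optimizers combined with batch learning. The step (learning rate) and batch_size parameter names follow later NeuPy documentation, and the connection syntax shown here also postdates this release:

        from neupy import algorithms, layers

        network = layers.join(
            layers.Input(784),
            layers.Relu(100),
            layers.Softmax(10),
        )

        # Adam with mini-batches of 128 samples.
        optimizer = algorithms.Adam(network, step=0.001, batch_size=128, verbose=True)
        # optimizer.train(x_train, y_train, epochs=10)  # x_train/y_train: your data
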
  • v0.1.4(Nov 3, 2015)

    Enhancement:

    • Removed the use_raw_predict_at_error option from the Feedforward class
    • Controlled the show_epoch option when a training epoch runs too fast
    • Memory optimizations for the Oja and RPROP algorithms
    • Small modifications related to the raw_predict method

    Bugs:

    • Fixed problems with neural network storage for algorithms inherited from the Backpropagation class
    • Fixed issues related to the epsilon option in the train method

    Docs:

    • Added the raw_predict method to the algorithm documentation
    • Small update to the documentation related to library changes
  • v0.1.3(Oct 25, 2015)

    Enhancement:

    • Added a derivative function for ReLU
    • Set up a maximum number of iterations for algorithms that converge
    • Added more information about training procedures that converge to some error difference

    Bug fixes:

    • Fixed a problem with an iterator that converges
    • Fixed a problem with the always-enabled optimizations option for the Backpropagation algorithms