Machine learning algorithms for many-body quantum systems

Overview

NetKet

[badges: Release · Anaconda-Server · Paper · License · Code style: black · codecov · Slack]

NetKet is an open-source project delivering cutting-edge methods for the study of many-body quantum systems with artificial neural networks and machine learning techniques. It is a Python library built on JAX.

Installation and Usage

NetKet runs on macOS and Linux. We recommend installing NetKet with pip, but it can also be installed with conda. For instructions on how to install the latest stable/beta release of NetKet, see the Getting Started section of our website or run the following command (Apple M1 users, follow that link for more instructions):

pip install --upgrade netket

If you wish to install the current development version of NetKet, which is the master branch of this GitHub repository, together with the additional dependencies, you can run the following command:

pip install 'git+https://github.com/netket/netket.git#egg=netket[all]'

To speed up NetKet computations, even on a single machine, you can install the MPI-related dependencies by adding the [mpi] extra:

pip install --upgrade "netket[mpi]"

We recommend installing NetKet with all its extra dependencies, which are documented below. However, if you do not have a working MPI compiler in your PATH, this installation will most likely fail, because it will attempt to install mpi4py, which enables MPI support in NetKet.

The latest release of NetKet is always available on PyPI and can be installed with pip. NetKet is also available on conda-forge; however, the version available through conda install can be slightly out of date compared to PyPI. To check the latest version released on both distributions, inspect the badges at the top of this readme.

Extra dependencies

When installing netket with pip, you can pass the following extra variants in square brackets. You can install several of them by separating them with a comma, as shown in the example after this list.

  • "[dev]": installs development-related dependencies such as black, pytest and testing dependencies
  • "[mpi]": Installs mpi4py to enable multi-process parallelism. Requires a working MPI compiler in your path
  • "[extra]": Installs tensorboardx to enable logging to tensorboard, and openfermion to convert the QubitOperators.
  • "[all]": Installs all extra dependencies

MPI Support

To enable MPI support you must install mpi4jax. Please note that we advise installing mpi4jax with the same tool (conda or pip) with which you installed its dependency mpi4py.
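
Both packages are distributed on PyPI, so for example:

pip install --upgrade mpi4py mpi4jax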

To check whether MPI support is enabled, inspect the flag:

>>> import netket
>>> netket.utils.mpi.available
True

Getting Started

To get started with NetKet, we recommend having a look at our tutorials and running them on your computer. There are also many example scripts that you can download, run and edit, showcasing some use cases of NetKet, although they are not commented.

If you want to get in touch with us, feel free to open an issue or a discussion here on GitHub, or join the MLQuantum Slack group where several people involved with NetKet hang out. To join the Slack channel, just accept this invitation.

License

Apache License 2.0

Comments
  • Group algorithms and space-group symmetries for `Lattice`

    This PR implements most of the features discussed in #703 pertaining to symmorphic space groups.

    New stuff and changes

    Groups

    • New module netket.utils.group to replace semigroup.py: it takes care of everything related to symmetry groups
      • contains SemiGroup, identity, Element, PermutationGroup, Permutation with unchanged API
      • some new features, e.g. Permutation objects can carry an arbitrary name
    • New class Group: base class for all group-like objects that are guaranteed to satisfy group axioms
      • methods to calculate inverse mapping, times table, conjugacy classes, character tables (using Burnside's algorithm) for a generic group
      • equality checking via the function _canonical() provided by subclasses: this must return an integer array for all group elements such that equal arrays imply equal group elements (this is a property of the specific group classes to allow them to handle Identity as they see fit)
    • New class PGSymmetry: represents a point group symmetry around the origin, specified by a transformation matrix
      • autogenerated name describes transformation in human-readable form
    • New class PointGroup: stores PGSymmetry objects
    • All crystallographically relevant point groups provided in submodules planar (2D), axial, cubic (3D)
    • New class SpaceGroupBuilder (in netket.graph)
      • translates PointGroups into PermutationGroups acting on a particular Lattice
      • generates the translation group of a Lattice as PermutationGroups (with sensible names attached)
      • hence generates space groups as PermutationGroups
      • helps calculate the character table of the space group intuitively (i.e., calculates irreps consistent with a given wave vector)

    Graphs

    • The custom rotation etc. groups of Lattice are removed in favour of using the above machinery

    • ~NetworkX.automorphisms() changed to only return an array of permutation indices; the PermutationGroup is made by a free-floating method in symmetry. (SpaceGroupBuilder needs to use Lattice, which means that having a reference to anything in symmetry within Lattice produces a circular import via symmetry/__init__.py.) It is also deprecated and should eventually be replaced by a hidden method that feeds into symmetry.automorphism_group(): it makes little sense to have this single piece of symmetry functionality outside symmetry. Alternatively, SpaceGroupBuilder could live in the graph module and be blended into the functionality of Lattice, similar to how automorphisms() behaves now.~

    • Lattice is given several new methods:

      • space_group_builder() returns a SpaceGroupBuilder object (see above) corresponding to the translations of the lattice and the supplied point group. The Lattice constructor also takes a PointGroup argument that is cached as a default point group.
      • point_group() returns the representation of its PointGroup argument or the default point group as a PermutationGroup
      • rotation_group() picks out the rotations (determinant of rotation matrix is +1)
      • translation_group() returns the group of lattice translations as a PermutationGroup. It takes an optional argument to specify the axes along which to translate
      • space_group() is the semidirect product translation_group() @ point_group().

      All of these are convenience wrappers around methods of space_group_builder(); see the sketch after this list.

    • The "hashing logic" in Lattice is tidied up and extended to wave vectors (needed in SpaceGroupBuilder). It now honours periodic and open BCs.

    • The Grid class is removed and replaced by functions of the same calling sequence that return Lattices. (The space-group functionality is built around Lattices, so it is better to focus on improving that one API rather than developing several independent ones.)

      • This breaks Grid's ability to colour its edges by direction. A more flexible constructor for Lattice will solve this problem.
      • planar_rotation() and axis_reflection() are dropped as they were in Lattice
      • The name space_group() was used incorrectly: instead of that and lattice_group(), we have Lattice.point_group() and Lattice.space_group(). Deprecation would be hard, since one of the names is reused in a different meaning.
      • point_group() only returns symmetries that leave the origin in place. This is different from the original behaviour for open BC axes (could be fixed by allowing nonsymmorphic point groups, but I need more reason than this to implement those).
    • Specialised constructors for triangle, honeycomb and kagome lattices are added.
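
    A quick sketch of how these methods combine, using the API described above (names as in this PR):

    import netket as nk
    from netket.utils import group

    graph = nk.graph.Lattice(basis_vectors=[[1, 0], [0, 1]], extent=(4, 4))
    translations = graph.translation_group()          # all lattice translations
    point_gr = graph.point_group(group.planar.D(4))   # D_4 as a PermutationGroup
    space_gr = graph.space_group(group.planar.D(4))   # == translations @ point_gr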

    Odds and ends

    • The "hashing logic" (used both in PointGroup and Lattice) is moved into netket.utils.float~_utils~ and fine-tuned. It also supports a mix of periodic and open boundary conditions. I've also added
      • a function that prunes nearly-zero real and imaginary parts from an array;
      • a function that checks whether elements of an array are nearly integers.
    • netket.jax.logsumexp is added, which extends the functionality of JAX's logsumexp to handle complex numbers well (i.e., it forces the output to be complex and doesn't error out on complex inputs/outputs); see the sketch after this list.
    • Functions to project the outputs of DenseSymm and DenseEquivariant onto irreps using their characters. The default flavour uses logsumexp, but there is one with plain sums too.
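
    A minimal sketch of the complex-friendly logsumexp behaviour described above (not the actual implementation):

    import jax.numpy as jnp

    def logsumexp_cplx(x):
        # force a complex output, then stabilise with the max of the real part
        x = x.astype(jnp.result_type(x.dtype, jnp.complex64))
        m = jnp.max(x.real)
        return jnp.log(jnp.sum(jnp.exp(x - m))) + m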

    Typical workflows

    A simple workflow, without character tables

    We just want to generate the space group of a Lattice given a point group we know it's invariant under:

    from netket.utils import group
    from netket.graph import Lattice
    
    graph = Lattice(basis_vectors = [[1,0],[0.5,0.75**0.5]], extent = (6,6)) # triangle lattice
    space_group = graph.space_group(group.planar.D(6))
    

    The resulting space_group is a PermutationGroup that can be used directly in a GCNN, for instance. Alternatively, we can use the premade triangular lattice that is loaded with the D_6 group:

    from netket.graph import TriangularLattice
    
    graph = TriangularLattice([6,6])
    space_group = graph.space_group()
    

    Using character tables

    For this, one needs a basic appreciation of how crystallographic character tables are constructed. They can be described in terms of a wave vector (or rather a star of symmetry-related wave vectors) and the irreps of the corresponding little group (the subgroup of the point group that leaves the wave vector unchanged). The latter can be read off from a human-readable character table one can generate in an interactive session:

    from netket.utils import group
    from netket.graph import TriangularLattice
    from math import pi
    
    graph = TriangularLattice([6,6]) 
    sgb = graph.space_group_builder()
    
    k = [4*pi/3,0] # corner of the hexagonal BZ
    sgb.little_group(k) 
    
    > PointGroup(elems=[Id(), Rot(120°), Rot(-120°), Refl(0°), Refl(-60°), Refl(60°)], ndim=2)
    
    sgb.little_group(k).character_table_readable()
    
    > (['1xId()', '2xRot(120°)', '3xRefl(0°)'], 
    array([[ 1.,  1.,  1.],
           [ 1.,  1., -1.],
           [ 2., -1.,  0.]]))
    

    The first output confirms that the little group of D_6 at the corner of the Brillouin zone is D_3, whose well-known character table is generated by the second command. Given the labels in the first part of the output, the rows are easy to match to the characters in standard tables, so we can look up their physical/geometrical meaning. Any of these can be turned into an irrep of the full space group using SpaceGroupBuilder: in fact, it generates all of them as a 2D array (in the same order as the irreps printed above), so we'd write something like

    chi = sgb.space_group_irreps(k)[2] # [2] selects the "E" irrep
    # ...
    # in the definition of the GCNN
    return irrep_project_logsumexp(output, chi)
    

    To do

    • ~Writing tests. I tested most of the stuff manually and it seems to work in all cases, but of course it has to be more systematic.~
    • Writing docs. Probably the best place for the kind of workflow docs you see above would be in @chrisrothUT's tutorial on GCNNs, and #700 will be updated with how the abstract stuff gets implemented here.
    • ~Checking if the stuff that got caught up in an earlier git-rebase (see first 2 commits) affects the behaviour of struct.dataclass. Everything seems to work fine, so I'm not too worried, but @PhilipVinc could you perhaps check and suggest what I should do?~
    • Non-symmorphic groups? I've given some thought to it, PointGroup wouldn't be too hard to extend, the main question is whether the automatic construction of character tables generalises nicely. I have a hunch that it does, but I would need some downtime with a group theory textbook to make sure. Probably left for another PR
    • Extending Lattice so it can have further-neighbour and coloured edges (this is functionality that is lost from the new Grid for instance). I just flag this up, but it can wait.

    An Easter egg

    The NetworkX algorithm really looks for all automorphisms:

    lattice = nk.graph.Square(4)
    len(symmetry.automorphism_group(lattice))
    > 384
    len(symmetry.space_group(lattice, symmetry.planar.D(4)))
    > 128
    

    It turns out that a 4x4 square lattice with PBC is isomorphic to a 2^4 hypercube, which has many more symmetries. E.g., you can check that this maps nearest neighbours to nearest neighbours without making any geometrical sense:

     0,  1,  5,  4
     3,  2,  6,  7
    15, 14, 10, 11
    12, 13,  9,  8
    

    This is an interesting caveat for using NetworkX graph matching. PS. The 3×3 triangular lattice turns out to have 1296 automorphisms, of which only 108 are space-group symmetries!

    opened by attila-i-szabo 121
  • [WIP] Split SR into QGTMatrix and solver.

    I put this here in case anybody wants to see the result of our discussion the other day. It implements the design described in #649.

    I'm not done yet but shouldn't be too far off. I think the only thing left for me to do is to rewrite nk.optimizer.SR to detect whether the old or new API is being used, convert to the new API, and print deprecation warnings, plus tests.

    v3.0 
    opened by PhilipVinc 102
  • NetKet V2.0b1

    Summary for version 2.0

    This PR introduces the first beta version for NetKet 2.0.

    Major changes in version 2.0

    1. NetKet now fully exposes its internal types and classes, thus becoming a full-fledged python library built on a monolithic c++ core
    2. Python Bindings are provided using pybind11, thus fully addressing issue #8
    3. As a result of this transformation, JSON input is no longer accepted and the netket executable no longer exists
    4. A great deal of flexibility in the usage of NetKet comes with these changes, removing the majority of constraints coming from the JSON-based input and usage pattern
    5. Bindings are provided for the great majority of NetKet classes, with the exception of some internals, that don't need python exposure.

    Reasons behind these changes

    1. The most common current usage pattern of NetKet is not to write the JSON file directly, but to use the python scripts we have in the Tutorials/ folder. That's why moving to a python library is a desirable goal, improving the user experience and opening the way to a number of applications previously hard or impossible to perform
    2. With a full-fledged python library it is also much easier to submit multiple jobs with different parameters, and have more flexibility on the output
    3. There is a substantial overhead for maintaining the JSON support. Supporting all the possible ways of putting together the NetKet classes, in a consistent way, does not scale well given the increasing size of this project.
    4. It becomes easier to interface to a whole spectrum of Python-only libraries, including advanced visualization tools and state-of-the-art machine learning libraries (pytorch, tensorflow etc)
    5. New generations of students (unfortunately?) are not very familiar with c++, and are eager for a pure Python library

    Installing, Tutorials, Examples

    To install the latest development version do

    pip install . 
    

    The directory /Tutorials/ contains Jupyter notebook tutorials. More will be added. Several example codes showcasing specific applications are contained in /Examples.

    Other remarks

    • Version 2.0 removes all "glue" classes, such as Graph, Hamiltonian, etc. The corresponding abstract interfaces remain, with few changes.

    • We introduce the concept of Operator, which generalizes Hamiltonian, Observable, etc. Hamiltonian and Observable disappear.

    • We introduce a LocalOperator, and all associated overloaded operations, for example

    hi = nk.Spin(s=0.5, graph=g)
    X = [[0, 1], [1, 0]]
    o = nk.LocalOperator(hi, X, [0]) * nk.LocalOperator(hi, X, [1])
    

    defines the quantum operator X(0)*X(1), automatically performing the tensor product. Other operations, such as multiplication by a scalar and addition of local operators, are also supported.
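
    For instance, building on the objects above, multiplication by a scalar and addition compose in the same way (a small sketch):

    h = 2.0 * nk.LocalOperator(hi, X, [0]) + nk.LocalOperator(hi, X, [1])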

    opened by gcarleo 77
  • Netket v3

    TLDR

    I'd like some feedback on this proposal. It's not yet done, though a big part is complete (I need to put a few @jit calls in the right place). Download it with pip install git+https://github.com/netket/[email protected] and play around a bit.

    If you want to review the PR, since it's huge, I'd suggest looking at it commit by commit. Every chunk of changes is in a separate commit. Skip the first commit, which is not really important anymore.

    See how VMC becomes easier and more generic, or this gist for an example of the API.

    Netket v3.0 master plan

    For the so-far unreleased version of netket v3.0 we have, until now, transitioned all of our infrastructure to python and removed all C++ code. This was done by temporarily using numpy and later converting most code to jax.

    Still, the API has largely remained unchanged (except for a few changes to the construction of hilbert spaces and graphs, mostly aesthetic).

    v3.0 is still unreleased because @gcarleo and I wanted to get the API right before we commit to it; however, months have passed (almost a year) and it is still unreleased.

    The big issues that I would like to address are the following:

    1. #437 Add an extensible VariationalState and use it in the drivers, making it easier to develop new applications.
    2. #525 Make it easy and intuitive to write custom Metropolis-Hastings transition rules and use them.
    3. Support non-homogeneous hilbert spaces.
    4. Make it easier to define new arbitrary networks, with real or complex weights.
    5. Actually support arbitrary networks that mix real and complex layers (right now they aren't supported)
    6. Support models with a state that can change (but should not trigger recompilation)
    7. Remove legacy code.

    In particular, while giving some lectures on netket, I recently ran up against Points 2 and 4: it's quite messy to define those things.

    Points 3/5 can be resolved by moving to flax, where (as I will argue below) defining models is much more compact and intuitive.

    The following is my proposal for netket v3.0 API

    A somewhat central point of this API stems from the discussion in #480, namely my proposal, which seemed broadly accepted, to remove the numpy and torch backends and convert netket to a pure jax package. This was already quite well received, and it seems to me that a) jax now has complete support from Google and is a stable project, and b) it allows us to write more compact, simpler code.

    This is my proposal for netket v3.0:

    Roughly:

    • add new requirements: jax and flax. ~This will also bump the minimum required version to python 3.7. This should not be an issue as jax is also discussing dropping python 3.6.~ EDIT: it's possible to still support 3.6.

    Add to netket the following sub-packages:

    • netket.nn : re-exporting flax.nn but wrapping some functions in order to make them work better with complex numbers.

      • Jax/Flax already supports complex numbers, but some activation functions do not (for various reasons), and the kwargs necessary to use complex weights in a layer are a bit complicated to use. The main reason to re-export and wrap is to make our exported flax API work out of the box with complex numbers. Slowly, I hope to get some PRs merged into flax itself so we can drop our own code.

      • Example:

        >>> import jax
        >>> import flax
        >>> import numpy as np
        >>> import netket as nk
        >>> from jax import numpy as jnp
        >>> from jax import random

        >>> x = jax.random.normal(jax.random.PRNGKey(0), (2, 2), dtype=jnp.complex64)
        # flax does not work
        >>> flax.nn.activation.softplus(x)
        TypeError: add requires arguments to have the same dtypes, got complex64, float32.
        # netket.nn works
        >>> nk.nn.activation.softplus(x)
        DeviceArray([[1.9650044+0.14860448j, 0.362084 -0.14605658j],
                     [0.6844594+0.72896177j, 0.4604799-0.02623011j]], dtype=complex64)

        # To use flax, we need to define our own complex init function
        def complex_kernel_init(rng, shape):
          fan_in = np.prod(shape) // shape[-1]
          x = random.normal(random.PRNGKey(0), shape) + 1j * random.normal(random.PRNGKey(0), shape)
          return x * (2 * fan_in) ** -0.5

        complex_bias_init = lambda _, shape: jnp.zeros(shape, jnp.complex64)

        # complex-valued dense layer, flax version
        m = flax.nn.Dense(features=3, dtype=jnp.complex64, kernel_init=complex_kernel_init, bias_init=complex_bias_init)

        # netket version
        m = nk.nn.Dense(features=3, dtype=jnp.complex64)
        

        I hope the above convinces you that having flax working out of the box with complex values is handy.

      • Since people might still have pure-jax machines around, we provide a very simple function wrapping a jax module into a flax one so that everything works out of the box (nk.nn.wrap_jax)

    • netket.jax : wrapping some functions in order to support complex numbers and functions that may be any of R->R, R->C, C->C with the same syntax. Jax does not and will not support this out of the box. Notably, this will have our own version of jax.vjp and jax.grad based on the code we already have in jax_utils, plus a few other utilities.

    • netket.optim: (What was netket.optimizer).

      • I propose to change the name (with a slow deprecation so as not to break code) because optim is the default in the jax and pyro world; tensorflow uses optimizers, and we use optimizer. Let's pick one and be consistent.
      • the optimisers are simply re-exported from flax. No code.
      • We also export SR
        • SR is rewritten with a new interface; see below.

    Remove AbstractMachine and its implementations

    • Functionally replaced by pure flax modules, which will be objects that contain no state (parameters) but only two functions: init_params, returning the pytree of params, and apply, computing the forward pass.
      • Also support pure jax modules and make it easy to support other jax frameworks (optax, for example).

      • While those modules do not contain the hilbert space, provide an easy-to-use constructor that accepts a hilbert space and extracts the size.

      • By default use np.float32 and not np.float64, but depending on what the user uses, anything is supported

      • An advantage of this is that we can now copy-paste any machine written for jax/flax and they will work out of the box, regardless of what they do! (modulo replacing flax.nn -> netket.nn for complex-number compatibility until things are fixed upstream).

      • See for example how to define a ConvNet or an RBM with spin and phase: it's very easy. Compare it with our old jax code for RBMModPhase:

      from typing import Any, Union

      import numpy as np
      import jax.numpy as jnp

      import netket as nk
      from netket import nn
      import netket.nn as nknn

      class RBMModPhase(nn.Module):
          dtype : Any = np.float32
          activation : Any = nknn.logcosh
          alpha : Union[float, int] = 1
          use_bias : bool = True

          @nn.compact
          def __call__(self, x):
              re = nknn.Dense(features=self.alpha*x.shape[-1], dtype=self.dtype, use_bias=self.use_bias)(x)
              re = self.activation(re)
              re = jnp.sum(re, axis=-1)

              im = nknn.Dense(features=self.alpha*x.shape[-1], dtype=self.dtype, use_bias=self.use_bias)(x)
              im = self.activation(im)
              im = jnp.sum(im, axis=-1)

              return re + 1j * im


      class CNN(nn.Module):
        @nn.compact
        def __call__(self, x):
          x = nn.Conv(features=32, kernel_size=(3, 3))(x)
          x = nn.relu(x)
          x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
          x = x.reshape((x.shape[0], -1))  # flatten
          x = nn.Dense(features=256)(x)
          x = nn.relu(x)
          x = nn.log_softmax(x)
          return x

      machine = CNN(hilbert)
      W = machine.init_params(rng_key)  # rng_key either an int/uint or jax.random.PRNGKey
      
      • We wish to support non-differentiable variables in machines (for RNNs, batchnorm or other things). To do so, we adopt the flax standard: the parameters pytree has the shape {'params': params_pytree, **other_state}.

    Rewrite the samplers to be functional. Mainly:

    • Samplers are now datastructures containing only the parameters of the sampler: no state, no machine.

      • Using datastructures allows us to pass those objects straight to jitted functions without issues.
      • The state of a sampler is stored in a separate struct.
        • The state mainly carries the current rng state, plus additional stuff if needed.
      • The api will be comprised of the following functions:
        • netket.sampler.init_state(sampler, machine, params) or sampler.init_state(), creating the state
        • netket.sampler.sample(sampler, machine, params, chain_length=XXX, state=None) or sampler.sample(...), drawing the samples
          • If state is None, then init_state is used to create a new state
          • Returns the modified state and sampled values.
          • I chose chain_length instead of the old n_samples because that's technically what it is.
    • Example:

      >>>import netket as nk
      >>>from netket import nn
      >>>sampler = nk.sampler.ExactSampler(hilbert, seed=0)
      ExactSampler(
        hilbert = Spin(s=1/2, N=4),
        seed = [         0 2385058908],
        n_batches = 8,
        machine_power = 2)
      # notice the seed
      
      # create the state for the sampler:
      state = nk.sampler.init_state(sampler, machine, params)
      # reset the chain (could also have passed state = None, and it would create it)
      state = nk.sampler.reset(sampler, machine, params, state)
      
      samples, state = nk.sampler.sample(sampler, machine, params, chain_length = 1000, state=state)
      >>> samples
      DeviceArray([[[-1., -1., -1., -1.],
                    [-1.,  1.,  1.,  1.],
                    [-1., -1., -1., -1.],
                    ...,
      >>> state
      ExactSamplerState(pdf=DeviceArray([3.8351530e-01, 2.0624077e-02, 7.7783165e-04, 7.2943498e-03,
                   4.4462629e-02, 1.9844480e-04, 9.5614446e-03, 3.3565965e-02,
                   3.3565965e-02, 9.5614446e-03, 1.9844480e-04, 4.4462629e-02,
                   7.2943498e-03, 7.7783165e-04, 2.0624077e-02, 3.8351530e-01], dtype=float32), 
                  rng=DeviceArray([1255698341, 3859703708], dtype=uint32))
                  # notice the rng
      
      • We store the seed in the sampler, so that if you reuse the same sampler without carrying the state along, you will get the same samples again. I had a brief discussion about this with @gcarleo and it seems the most sensible thing to do.
      # If state is not passed in, the state is automatically created...
      samples, state = nk.sampler.sample(sampler, machine, params, chain_length = 1000)
      >>> samples
      DeviceArray([[[-1., -1., -1., -1.],
                    [-1.,  1.,  1.,  1.],
                    [-1., -1., -1., -1.],
                    ...,
      >>> state.rng
      DeviceArray([1255698341, 3859703708], dtype=uint32)                
      # notice the rng: it's the same as before
      
      • While the interface is fully functional, you can also call those methods as my_sampler.sample(...) and so on.
      • This is all pure jax, even the samplers themselves, so everything can be jitted through.
        • If you create a new sampler, the function is not recompiled. The only things triggering recompilation are changing n_batches, the hilbert space, or other things declared static.

    Implement a VariationalState

    • A VariationalState has the following interface (a usage sketch follows at the end of this list):
      • vs.parameters : (or params?) returns the PyTree of the variational parameters one may want to optimise.

      • vs.expect(operator) : computes expectation value of operator

      • vs.expect_and_grad(operator, is_hermitian=auto/True/False) computes the expectation value of operator, and the gradient of it.

        • is_hermitian can be used to decide whether to use the simpler form we currently use for the energy gradient, or the more standard formula.
      • vs.QGT() -> Callable[Grad, Grad] : returns the quantum geometric tensor/ S matrix.

        • The returned object should be a lazy object (a la scipy's LinearOperator) that takes a gradient as input and returns another gradient.
      • vs.reset() : resets the internal state among iterations

      • save/load

      • For a machine/sampler pair, the ClassicalVariationalState will be the first implementation of this interface.

      • It will provide some functionality similar to a sampler/machine from before, with a bunch of extra tricks:

        • A ClassicalVariationalState is constructed by taking a
          • hilbert,
          • Machine/Module
          • Sampler
          • optional SR object? (~maybe~ probably)
          • some configuration data
            • chain_length
          • the api is the one described above, plus:
            • vs.sample(chain_length) : sample and store the samples internally until reset() is called
            • vs.samples : access the samples; if the state was reset, resample.
            • vs.model_state : returns the PyTree of any other parameters that might change but which we don't want to differentiate against (think batchnorm, rnn...). Might also put it in the common api; I'm not sure.
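
    A sketch of how this interface might be used end to end (the constructor signature and module path here are assumptions, not a final API):

    # hypothetical names, following the interface described above
    vs = nk.variational.ClassicalVariationalState(hilbert, machine, sampler, chain_length=1000)
    energy = vs.expect(hamiltonian)
    energy, grad = vs.expect_and_grad(hamiltonian)
    S = vs.QGT()               # lazy object: takes a gradient, returns a gradient
    natural_grad = S(grad)
    vs.reset()                 # invalidate cached samples between iterations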

    Minor changes:

    • the file netket/utils.py has been moved into a subfolder and utils is now a full-fledged submodule. I moved here our logic for detecting whether MPI is installed, deprecation warnings, and so on.
    • I check more carefully that the discoverable names in every module are relevant. For example, before, every module like netket.hilbert contained both Spin (the class) and spin (the module generated by the file). There is a small utility in netket.utils to hide all file-generated modules that we don't want, and I use it extensively.
    • netket.hilbert.Boson has been renamed to netket.hilbert.Fock, as I find it more accurate. However, there is a deprecated constructor called Boson that forwards to Fock.
    • Maybe we should rename all Samplers from MetropolisSampler, ExactSampler to Metropolis, Exact ? As we usually use them from their module (netket.sampler) it might make sense, and makes the API lighter.
    • The old API to create samplers is deprecated (I'd like to remove it) but it is still there. If you try to construct a MetropolisSampler with a machine, it will give you the same objects as before.
    • I'd like to remove netket.random. There is nothing of interest there anymore.

    People

    @inailuig If you have some time I'd like to know if this can solve your problem of recompiling. I think it should.

    opened by PhilipVinc 61
  • Introduced Lanczos ED from IETL library

    Dear all,

    I wrote a wrapper for using the IETL Lanczos code (http://alps.comp-phys.org/static/doc2.1.0/ietl.html). The ground state energy computation in NetKet would now be done with this library. We should probably discuss a more detailed user interface for the algorithm (how to set parameters, like the precision for the Lanczos iteration). There are also some tests, which could be extended.

    Best, Alex

    opened by awietek 54
  • SR with precomputed gradients

    I have implemented the SR algorithm with precomputed gradients. This is less elegant than the lazy vjp/jvp-based implementation in LazySMatrix, but has practical advantages:

    • It has minimal memory overhead: precomputing the gradients requires the same amount of memory as a single pass of vjp since the different samples on which the neural network is run are independent, so we only need to compute one of them to get a row of the Jacobian. (I.e., instead of looping through vjp with vmap as jacrev does, we can loop through grad.) Storing the matrix of gradients is guaranteed to take up less memory: backpropagation has to store a lot of internal information about the forward pass, which takes up many times the memory needed for the gradients in a deep network (In my experiments, they were 40 MB vs 1.2 GB). The bottom line is that if there is enough memory for a single vjp, there is enough memory for this too.
    • It yields a massive speedup: the gradient matrix can again be calculated in the same asymptotic time as a single vjp (it will of course be a bit slower because vmap is used, but not by large factors); afterwards, we only need matrix-vector multiplications, which are way faster than a full backpropagation of a complex neural network. In my experiments (on 20 CPUs), I could reduce the time of an SR step from 8x that with Adam to about 1.2x.
    • It allows for regularising the S matrix in a scale-invariant way by factoring out the magnitude of diagonal elements, as described in Becca & Sorella, p. 143. This has minimal overhead and can be crucial for heterogeneous networks that may have very different gradients in different parts.

    Unfortunately, calculating the full Jacobian cannot be done as dtype-agnostically as a VJP, so some information about the network structure needs to be passed. This is done through the jacobian parameter, which is implemented for the values "R2R" (both the network parameters and the wave function are real), "R2C" (real parameters, complex wave function), "holomorphic" (holomorphic function of complex parameters), and None (uses a LazySMatrix).

    • The parameter could use a better name, but jacobian also signals that it is a switch between this code and the original one
    • It may be possible to automate this choice, although it's not trivial (e.g., R2R is the right choice even if the wave function has negative entries, which makes the output complex...)
    • A few more cases might be implemented, e.g. non-holomorphic C2C, although that is just sugar for R2C...

    The boolean parameter rescale_shift specifies whether the scale-invariant regularisation should be used.
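
    To illustrate the core idea, here is a minimal JAX sketch (assuming flat real parameters and a real-valued logpsi(theta, x); this is not the PR's actual code): build the Jacobian O_ki = d log psi(x_k)/d theta_i by vmapping grad over the samples, then apply the centred S matrix using only matrix-vector products.

    import jax

    def s_matrix_apply(logpsi, theta, samples, v):
        # one gradient per sample, vmapped: same asymptotic cost as a single vjp
        jac = jax.vmap(jax.grad(logpsi), in_axes=(None, 0))(theta, samples)
        jac = jac - jac.mean(axis=0)                 # centre the gradients
        return jac.T @ (jac @ v) / samples.shape[0]  # S v = O^T (O v) / N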

    opened by attila-i-szabo 49
  • Implementation of fermionic hilbert and operator

    This is an implementation of discrete fermionic degrees of freedom by @imi-hub and myself. It contains the hilbert space and an operator. The operator handles the minus signs coming from the exchange antisymmetry, while the hilbert space stores occupation numbers only. This follows e.g. the way qiskit and openfermion implement things. For the operator, we follow the implementation of openfermion and added a from_openfermion function to create operators. This allows us to use all their hamiltonians, such as Fermi-Hubbard. The hilbert space is not optimized for a fixed number of fermions (since it inherits from HomogeneousHilbert); for that we might need to add another Hilbert implementation.
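
    A hypothetical sketch of the conversion path described above (the openfermion side is real; the exact NetKet entry point and its signature are assumptions):

    from openfermion.hamiltonians import fermi_hubbard

    of_ham = fermi_hubbard(2, 2, tunneling=1.0, coulomb=4.0)  # 2x2 Fermi-Hubbard model
    # ha = <fermion operator class>.from_openfermion(hi, of_ham)  # assumed NetKet call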

    FYI: formatting will fail because flake8 and flakehell are giving me hell.

    opened by jwnys 45
  • Symmetry Operations on Lattice.py

    This implements translations, rotations and reflections on graphs generated by Lattice.py.

    Let’s define a square grid to show how we use this

    graph = nk.graph.Lattice(basis_vectors = [[1,0],[0,1]], extent=[5,5])

    We can generate a SymmGroup with translations by the basis vectors as follows

    graph.basis_translations()

    In order to generate rotations we need to specify the period. This will generate the C4 rotational symmetry group

    graph.planar_rotations(period=4,axes=(0,1))

    If we specify a period that doesn't map the lattice to itself, we get a ValueError. Finally, we can generate the reflections about a plane

    graph.reflections(axis=0)

    This should work for arbitrary dimensions (I only tested a cube). @femtobit @PhilipVinc @attila-i-szabo can you help me bug hunt?


    opened by chrisrothUT 42
  • Fix Selu Activation Function

    @attila-i-szabo noticed that jax.nn.selu loses its self-normalizing property with complex numbers. Namely, it applies selu(x) when it should be applying selu(Re(x)) + 1j*selu(Im(x)). Making this small change improves performance tremendously. I introduce the correct non-linearity as piecewise_selu.
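
    A minimal sketch of the corrected non-linearity (piecewise_selu is the name introduced by this PR; the body is the formula stated above):

    import jax

    def piecewise_selu(x):
        # apply selu separately to the real and imaginary parts
        return jax.nn.selu(x.real) + 1j * jax.nn.selu(x.imag)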

    opened by chrisrothUT 39
  • [RFC] Redesigning SR interface (3.0 or later?)

    @gcarleo and I were recently discussing the fact that we should also redesign the interface used to access the S matrix and to perform Stochastic Reconfiguration / Natural Gradient.

    The aim is to achieve easier extensibility (so that anyone can write their own version of the S matrix if they want) and composability (playing well with solvers from jax/scipy and others, possibly without requiring them to be wrapped, as is needed now).

    This Issue wants to discuss two items:

    • Whether we should do this redesign for v3.0, when to deprecate the old interface, or whether we should put it on hold for a v3.1 or 3.2 later.
    • The design of the new system

    To recap, the current interface is the following:

    • An SR object holds settings for the type of S matrix representation used (dense, lazy-onthefly, lazy-jacobian...) and the algorithm used to solve Sx=F. Every object should correspond to only one set of choices (lazy-onthefly+cg, lazy-onthefly+gmres, etc.). In netket v3.0b1 there is only one possible representation of the S matrix, so that was not a big issue.

      • As shown in PR #648, this design has issues, as adding a new type of S matrix representation (lazy-jacobian in the PR) requires duplicating all the types corresponding to solvers, which is inconvenient.
      • The natural thing to do would be to split the S matrix representation from the solver used.
    • The SR object can build the S matrix itself if it's given a variational state S = sr.create_S(state).

    • The S matrix keeps a reference to the SR object, so that you can do S.solve(F) and it will use the parameters from the sr object.

    sr = nk.optimizer.sr.LazyCG(diag_shift=0.01, maxiters=100)
    S = vstate.quantum_geometric_tensor(sr) # equivalent to S = sr.create(vstate)
    _, F = vstate.expect_and_grad(ham)
    
    x, info = S.solve(F) # uses an inner field S.sr for the parameters and solve function
    

    The system described above, where all configuration is stored in a single structure, makes it rather easy to use SR inside a driver, as we can simply pass

    gs = nk.VMC(ham, optim, variational_state=vstate, sr=sr)
    

    and the driver will use this object internally as shown above.

    --

    Tentative new design:

    Goal: be able to use scipy/jax solvers out of the box.

    • Every kind of S matrix has its own constructor: SMatrixOnTheFly(vstate), SMatrixJacobian(vstate), etc., that can be used explicitly.

    • A helper function SMatrix(SMatrixType, vstate) = SMatrixType(vstate) is provided. If SMatrixType is not passed, some sensible default representation that always works is used.

    • vstate.quantum_geometric_tensor() will now take as input the SMatrix type and relay the call to the type. If no type is passed, a sensible default is used.

    • All S matrix implementations must have the methods __matmul__ and __call__, supporting both PyTrees and dense vectors, so that the matrix can be passed to sparse solvers.

    With this, the snippet below should work.

    S = SMatrixOnTheFly(vstate)
    x, info = jax.scipy.sparse.cg(S, F, maxiter=100)
    

    A question arises: we often regularise the S matrix with a shift on the diagonal. To support this under this API, we should also ask all implementations of the S matrix to support __add__(self, x: Number) and keep the diagonal shift in memory (if it's a lazy representation) or simply add it to the dense matrix if it's a dense representation.

    Some implementations might even support __mul__ or other conditionings.

    We will then be able to do

    S = SMatrixOnTheFly(vstate)
    S = S + 0.01 # equivalent to S.diag_shift+=0.01
    x, info = jax.scipy.sparse.cg(S, F, maxiter=100)
    

    (Note: this is inconsistent with numpy api, where adding a number to a matrix adds it to all the entries in the matrix, but is consistent with our implementation of LocalOperators where adding a number to a local operator only adds it to the diagonal).

    We could even get this to work with scipy (not jax) sparse solvers if we also implement S.shape to report the number of parameters.

    S = SMatrixOnTheFly(vstate)
    # assuming S.shape = (vstate.n_parameters, vstate.n_parameters)
    S = S + 0.01 # equivalent to S.diag_shift+=0.01
    F_dense, F_unravel = nk.jax.ravel(F)
    x, info = scipy.sparse.cg(S, F_dense, maxiter=100)
    

    or even

    S = SMatrixOnTheFly(vstate)
    S = S + 0.01 # equivalent to S.diag_shift+=0.01
    Sm1 = np.linalg.pinv(S.to_dense)
    x = Sm1 @ F_dense
    

    So all seems great! The only thing we need to think about is how to make all this play with the Driver API.

    How could we support this? We could accept two kwargs in the drivers:

     - S_type: Optional[SMatrixType] = The type of the S Matrix you want to use, that should support doing `S = S_type(vstate)` 
     - SR_solver: Optional[Callable] = The function to solve the linear system Sx=F. It must have signature SR_solver(S:SMatrix, F:PyTree, **kwargs) -> Tuple[x:PyTree, info:Any] 
    

    If both are None, SR is not used. If one of the two is passed, we use SR and the unspecified kwarg falls back to a default. Internally we could do something like

    def __init__(self, S_type=None, SR_solver=None):
       if S_type is None and SR_solver is None:
         self.use_sr = False
       else:
         self.use_sr = True

       if self.use_sr and S_type is None:
         S_type = default
       ...
    
    
    def _forward_and_backward(self):
        """
        Performs a number of VMC optimization steps.
        """
    
        self.state.reset()
    
        # Compute the local energy estimator and average Energy
        self._loss_stats, self._loss_grad = self.state.expect_and_grad(self._ham)
    
        if self.use_sr:
            self._S = self.S_type(self.vstate)
    
            # use the previous solution as an initial guess to speed up the solution of the linear system
            x0 = self._dp if self.sr_restart is False else None
            self._dp, self._sr_info = self.SR_solver(self._S, self._loss_grad, x0=x0)
    

    For the user to specify kwargs of the solver like we do now, they would need to consult the docs of that solver and specify them with a functools.partial. Example:

    from functools import partial
    
    SR_solver = partial(jax.scipy.sparse.gmres, maxiter=300, restart=10)
    
    # use default S_matrix type
    gs = nk.VMC(ham, optim, variational_state=vstate, SR_solver= SR_solver) 
    # or
    gs = nk.VMC(ham, optim, variational_state=vstate, SR_solver= SR_solver, S_matrix=nk.optimizer.sr.SJacobian) 
    

    However, how would one include the diagonal shift? One would have to do

    def srsolver(S,F,**kwargs):
       return jax.scipy.sparse.gmres(S+0.01, F, **kwargs)
    
    SR_solver = partial(srsolver, maxiter=300, restart=10)
    

    which is not too clean...

    opened by PhilipVinc 39
  • Add invariant and constant diagonal shift at the same time in `QGTJacobian*`

    This PR allows specifying both a scale-invariant and a constant diagonal offset in QGTJacobian* at the same time.

    @chrisrothUT and I discovered that adding a diagonal shift of the form diag_shift + diag_scale * S_ii is much more stable than a purely scale-invariant shift, but remains faster and better-converging than simply using diag_shift. To accommodate both, this PR deprecates rescale_shift and introduces diag_scale, the coefficient of S_ii in the shift.

    Defaults:

    • If nothing is specified, diag_shift=0.01, diag_scale=0.0 to recover the original behaviour
    • If only diag_scale is specified, diag_shift=0.0
    • If rescale_shift is specified, it behaves as it used to, but there is a deprecation warning
    • Specifying rescale_shift and diag_scale together leads to an error

    Internally, diag_scale is implemented the same way rescale_shift once was; diag_shift is added to it by adding offset=diag_shift/diag_scale to the scale factors used to rescale rows/columns of the S matrix.
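
    In effect, the regularised matrix is S_ij + delta_ij * (diag_shift + diag_scale * S_ii); a dense sketch of that definition (not the actual QGTJacobian code, which works on rescaled rows/columns as described above):

    import jax.numpy as jnp

    def regularize(S, diag_shift=0.01, diag_scale=0.0):
        # add diag_shift + diag_scale * S_ii to each diagonal element
        return S + jnp.diag(diag_shift + diag_scale * jnp.diag(S))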

    There are some bits to iron out, most importantly whether the defaults above are good and diag_scale is a good name for the new parameter.

    opened by attila-i-szabo 35
  • Bump sphinx from 4.5.0 to 6.1.1

    Bumps sphinx from 4.5.0 to 6.1.1.

    Release notes

    Sourced from sphinx's releases.

    Versions v5.0.0b1 through v6.1.1; each release links to the changelog at https://www.sphinx-doc.org/en/master/changes.html (v5.0.0 provided no release notes).

    Changelog

    Sourced from sphinx's changelog.

    Release 6.1.1 (released Jan 05, 2023)

    Bugs fixed

    • #11091: Fix util.nodes.apply_source_workaround for literal_block nodes with no source information in the node or the node's parents.

    Release 6.1.0 (released Jan 05, 2023)

    Dependencies

    Incompatible changes

    • #10979: gettext: Removed support for pluralisation in get_translation. This was unused and complicated other changes to sphinx.locale.

    Deprecated

    • sphinx.util functions:

      • Renamed sphinx.util.typing.stringify() to sphinx.util.typing.stringify_annotation()
      • Moved sphinx.util.xmlname_checker() to sphinx.builders.epub3._XML_NAME_PATTERN

      Moved to sphinx.util.display:

      • sphinx.util.status_iterator
      • sphinx.util.display_chunk
      • sphinx.util.SkipProgressMessage
      • sphinx.util.progress_message

      Moved to sphinx.util.http_date:

      • sphinx.util.epoch_to_rfc1123
      • sphinx.util.rfc1123_to_epoch

      Moved to sphinx.util.exceptions:

      • sphinx.util.save_traceback

    ... (truncated)


    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    dependencies 
    opened by dependabot[bot] 1
  • Update nbsphinx requirement from ~=0.8.10 to ~=0.8.11

    Updates the requirements on nbsphinx to permit the latest version.

    Release notes

    Sourced from nbsphinx's releases.

    nbsphinx 0.8.11

    https://pypi.org/project/nbsphinx/0.8.11/

    • LaTeX: apply code cell border style to all code blocks
    Changelog

    Sourced from nbsphinx's changelog.

    Version 0.8.11 -- 2022-12-29 -- PyPI__ -- diff__

    • LaTeX: apply code cell border style to all code blocks

    __ https://pypi.org/project/nbsphinx/0.8.11/ __ https://github.com/spatialaudio/nbsphinx/compare/0.8.10...0.8.11

    Version 0.8.10 -- 2022-11-13 -- PyPI__ -- diff__

    • Fix handling of source_suffix
    • A few LaTeX fixes

    __ https://pypi.org/project/nbsphinx/0.8.10/ __ https://github.com/spatialaudio/nbsphinx/compare/0.8.9...0.8.10

    Version 0.8.9 -- 2022-06-04 -- PyPI__ -- diff__

    • CSS: support tables in widgets
    • Avoid empty "raw" directive

    __ https://pypi.org/project/nbsphinx/0.8.9/ __ https://github.com/spatialaudio/nbsphinx/compare/0.8.8...0.8.9

    Version 0.8.8 -- 2021-12-31 -- PyPI__ -- diff__

    • Support for the sphinx_codeautolink extension
    • Basic support for the text builder

    __ https://pypi.org/project/nbsphinx/0.8.8/ __ https://github.com/spatialaudio/nbsphinx/compare/0.8.7...0.8.8

    Version 0.8.7 -- 2021-08-10 -- PyPI__ -- diff__

    • Fix assertion error in LaTeX build with Sphinx 4.1.0+

    __ https://pypi.org/project/nbsphinx/0.8.7/ __ https://github.com/spatialaudio/nbsphinx/compare/0.8.6...0.8.7

    Version 0.8.6 -- 2021-06-03 -- PyPI__ -- diff__

    • Support for Jinja2 version 3

    __ https://pypi.org/project/nbsphinx/0.8.6/ __ https://github.com/spatialaudio/nbsphinx/compare/0.8.5...0.8.6

    Version 0.8.5 -- 2021-05-12 -- PyPI__ -- diff__

    • Freeze Jinja2 version to 2.11 (for now, until a bugfix is found)
    • Add theme_comparison.py tool for creating multiple versions (with different HTML themes) of the docs at once

    __ https://pypi.org/project/nbsphinx/0.8.5/ __ https://github.com/spatialaudio/nbsphinx/compare/0.8.4...0.8.5

    Version 0.8.4 -- 2021-04-29 -- PyPI__ -- diff__

    • Support for mathjax3_config (for Sphinx >= 4)
    • Force loading MathJax on HTML pages generated from notebooks

    ... (truncated)

    Commits
    • fe3f1c1 Release 0.8.11
    • 3ee3995 DOC: use "booktabs" table style for LaTeX
    • dc076bd LaTeX: apply code cell border style to all code blocks
    • 936b1b6 LaTeX: disable rounded corners for code cells
    • dd7288d CircleCI: install binutils
    • 7137bb3 DOC: disallow ipython 8.7.0
    • See full diff in compare view

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    dependencies 
    opened by dependabot[bot] 1
  • Passing masks to a GCNN has no effect

    Hi, first of all I'm sorry if this actually belongs in the discussions section; I've never worked with GitHub before and don't know which problems are Issues-worthy.

    I noticed that after the new implementation of the masked GCNNs, passing masks to the GCNN does not have any effect; e.g., for a model with only 2nd-nearest-neighbour convolutions, the number of parameters does not change. So

    import numpy as np
    import jax.numpy as jnp
    import netket as nk

    # Define system
    L = 8
    lattice = nk.graph.Square(length=L)
    hi = nk.hilbert.Spin(s=1 / 2, total_sz=0, N=lattice.n_nodes)
    
    # Define Metropolis-Hastings sampler
    sampler = nk.sampler.MetropolisExchange(hilbert=hi, graph=lattice)
    
    # Define the model: masked GCNN
    input_mask = np.zeros([L, L])
    for i in range(-1, 2):
        for j in range(-1, 2):
            input_mask[i][j] = 1
    input_mask = input_mask.ravel()
    hidden_mask = np.repeat(np.expand_dims(input_mask, 1), repeats=8, axis=1).ravel()
    
    machine = nk.models.GCNN(symmetries=lattice, layers=2, features=(4, 2), param_dtype=jnp.complex128,
                             input_mask=input_mask, hidden_mask=hidden_mask)
    vstate = nk.vqs.MCState(sampler=sampler, model=machine)
    print("number of parameters:", vstate.n_parameters)
    

    returns number of parameters: 4358 instead of the desired number of parameters: 618

    I think the error is that in netket.models.equivariant , the masks are not passed on from the general GCNN method to the constructors of the different modes (FFT, irreps).

    Also, if I use the DenseSymm layer with a mask, I get an error: changing the model in the above code to machine = nk.nn.DenseSymm(symmetries=lattice, mode="matrix", mask=HashableArray(input_mask), features=2) gives TypeError: nonzero requires ndarray or scalar arguments, got <class 'netket.utils.array.HashableArray'> at position 0.

    It should work if self.kernel_indices = jnp.nonzero(self.mask)[0] in the DenseSymmMatrix class is changed to (self.kernel_indices,) = np.nonzero(self.mask), as in the other modes.

    opened by jobdky 0
  • LocalOperator should return error when same site is used twice

    Simple example of computing σ^2 for a single site.

    >>> hi = nk.hilbert.Spin(s=0.5, total_sz = 0, N=4)
    >>> heisenberg = np.array([[1, 0, 0, 0],[0, -1, 2, 0],[0, 2, -1, 0],[0, 0, 0, 1]])
    >>> ha = nk.operator.LocalOperator(hilbert=hi,operators=[heisenberg],acting_on=[(0,0)])
    >>> vstate.expect(ha)
    1
    

    The answer should be 3, of course, but NetKet returns 1 because it doesn't account for SxSx and SySy returning the state back to itself.
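
    A quick numpy check of the expected value: the Heisenberg matrix above is the two-site operator sigma_i . sigma_j, and contracting both legs onto the same site should give sigma^2 = 3 * identity:

    import numpy as np

    sx = np.array([[0, 1], [1, 0]])
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]])
    print(sx @ sx + sy @ sy + sz @ sz)  # 3 * identity, so the expectation is 3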

    opened by chrisrothUT 8
  • Update jax requirement from <0.4,>=0.3.16 to >=0.3.16,<0.5

    Updates the requirements on jax to permit the latest version.

    Release notes

    Sourced from jax's releases.

    Jax release v0.4.1

    • Changes
      • Support for Python 3.7 has been dropped, in accordance with JAX's version support policy.
      • We introduce jax.Array, a unified array type that subsumes the DeviceArray, ShardedDeviceArray, and GlobalDeviceArray types in JAX. The jax.Array type helps make parallelism a core feature of JAX, simplifies and unifies JAX internals, and allows us to unify jit and pjit. jax.Array has been enabled by default in JAX 0.4 and makes some breaking changes to the pjit API. The jax.Array migration guide can help you migrate your codebase to jax.Array. You can also look at the Distributed arrays and automatic parallelization tutorial to understand the new concepts.
      • PartitionSpec and Mesh are now out of experimental. The new API endpoints are jax.sharding.PartitionSpec and jax.sharding.Mesh. jax.experimental.maps.Mesh and jax.experimental.PartitionSpec are deprecated and will be removed in 3 months.
      • with_sharding_constraint's new public endpoint is jax.lax.with_sharding_constraint.
      • If using ABSL flags together with jax.config, the ABSL flag values are no longer read or written after the JAX configuration options are initially populated from the ABSL flags. This change improves the performance of reading jax.config options, which are used pervasively in JAX.
      • The jax2tf.call_tf function now uses, for TF lowering, the first TF device of the same platform as used by the embedding JAX computation. Before, it was using the 0th device for the JAX-default backend.
      • A number of jax.numpy functions now have their arguments marked as positional-only, matching NumPy.
      • jnp.msort is now deprecated, following the deprecation of np.msort in numpy 1.24. It will be removed in a future release, in accordance with the API compatibility policy. It can be replaced with jnp.sort(a, axis=0).
    Changelog

    Sourced from jax's changelog.

    jaxlib 0.4.1 (Dec 13, 2022)

    • Changes
      • Support for Python 3.7 has been dropped, in accordance with JAX's version support policy.
      • The behavior of XLA_PYTHON_CLIENT_MEM_FRACTION=.XX has been changed to allocate XX% of the total GPU memory instead of the previous behavior of using currently available GPU memory to calculate preallocation. Please refer to GPU memory allocation for more details.
      • The deprecated method .block_host_until_ready() has been removed. Use .block_until_ready() instead.

    jax 0.4.0 (Dec 12, 2022)

    • The release was yanked.

    ... (truncated)

    Commits
    • c4d590b Update values for release 0.4.1
    • 17c6796 Merge pull request #13619 from jakevdp:sparse-validate
    • dc8ead0 Update CHANGELOG to indicate that 0.4.0 was yanked.
    • 71569e1 Remove the specialized sm versions for testing. It caused release wheels to s...
    • 0bdb7ec Finish jax and jaxlib release 0.4.0
    • d491d9f Remove the cached check in aot compiled call in MeshExecutable because a fast...
    • e9cc523 [sparse] validate BCOO on instantiation
    • 23001ae Merge pull request #13603 from gnecula:native_unused
    • 5e8c0ec Merge pull request #13614 from hawkinsp:cuda
    • b868cf7 Merge pull request #13616 from jakevdp:fix-sparse-error
    • Additional commits viewable in compare view

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 3
  • Bump flake8 from 5.0.4 to 6.0.0

    Bump flake8 from 5.0.4 to 6.0.0

    Bumps flake8 from 5.0.4 to 6.0.0.

    dependencies 
    opened by dependabot[bot] 0
Releases(v3.6)
  • v3.6(Nov 6, 2022)

    New features

    • Added a new 'Full statevector' model netket.models.LogStateVector that stores the exponentially large state and can be used as an exact ansatz #1324.
    • Added a new experimental netket.experimental.driver.TDVPSchmitt driver, implementing the signal-to-noise-ratio TDVP regularisation by Schmitt and Heyl #1306.
    • QGT classes accept a chunk_size parameter that overrides the chunk_size set by the variational state object #1347.
    • netket.optimizer.qgt.QGTJacobianPyTree and netket.optimizer.qgt.QGTJacobianDense support diagonal-entry regularisation with constant and scale-invariant contributions. They accept a new diag_scale argument to pass the scale-invariant component #1352.
    • The netket.optimizer.SR preconditioner now supports scheduling of the diagonal shift and scale regularisations #1364; see the sketch after this list.
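
    A minimal sketch of how these options combine, assuming the argument names from the entries above (check the API reference for the exact signatures):

    import netket as nk
    import optax

    # Scale-invariant diagonal regularisation of the QGT via diag_scale (#1352)
    qgt = nk.optimizer.qgt.QGTJacobianDense(diag_scale=1e-2)

    # The diagonal shift of SR can now be a schedule, i.e. a function of the
    # optimisation step (#1364)
    sr = nk.optimizer.SR(qgt=qgt, diag_shift=optax.linear_schedule(1e-2, 1e-4, 100))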

    Improvements

    • netket.vqs.ExactState.expect_and_grad now returns a nk.stats.Stats object that also contains the variance, as MCState does #1325.
    • Experimental RK solvers now store the error of the last timestep in the integrator state #1328.
    • netket.operator.PauliStrings can now be constructed by passing a single string, instead of the previous requirement of a list of strings #1331.
    • flax.core.frozen_dict.FrozenDict can now be logged to netket's loggers, meaning that one no longer needs to unfreeze the parameters before logging them #1338.
    • Fermion operators are much more efficient and generate fewer connected elements #1279.
    • NetKet is now fully PEP 621 compliant and no longer has a setup.py, in favour of a pyproject.toml based on hatchling. To install NetKet you should use a recent version of pip or a compatible tool such as poetry/hatch/flit #1365.
    • netket.optimizer.qgt.QGTJacobianDense can now be used with netket.vqs.ExactState #1358.

    Bug Fixes

    • netket.vqs.ExactState.expect_and_grad returned a scalar while netket.vqs.ExactState.expect returned a nk.stats.Stats object with 0 error. The inconsistency has been addressed and now they both return a Stats object. This changes the format of the files logged when running VMC, which will now store the average under Mean instead of value #1325.

    Deprecations

    • The rescale_shift argument of netket.optimizer.qgt.QGTJacobianPyTree and netket.optimizer.qgt.QGTJacobianDense is deprecated in favour of the more flexible syntax with diag_scale. rescale_shift=False should be removed. rescale_shift=True should be replaced with diag_scale=old_diag_shift. #1352.
    • The call signature of preconditioners passed to netket.driver.VMC and other drivers has changed as a consequence of scheduling, and preconditioners should now accept an extra optional argument step. The old signature is still supported but is deprecated and will eventually be removed #1364.
    Source code(tar.gz)
    Source code(zip)
  • v3.5.2(Oct 30, 2022)

    This release addresses a major performance degradation of LocalOperator that arose in NetKet v3.4. We encourage everyone to upgrade as soon as possible.

    Source code(tar.gz)
    Source code(zip)
  • v3.5.1(Sep 6, 2022)

    New features

    • Added a new configuration option nk.config.netket_experimental_disable_ode_jit to disable jitting of the ODE solvers. This can be useful to avoid hangs that might happen when working on GPUs with some particular systems #1304.

    Bug Fixes

    • Continuous operators now work correctly when chunk_size != None. This was broken in v3.5 #1316.
    • Fixed a bug (#1101) that crashed NetKet when trying to take the product of two different Hilbert spaces. It happened because the logic to build a TensorHilbert was ending in an endless loop. #1321.
    Source code(tar.gz)
    Source code(zip)
  • v3.5(Aug 18, 2022)

    GitHub commits.

    This release adds support and needed functions to run TDVP for neural networks with real/non-holomorphic parameters, an experimental HDF5 logger, and an MCState method to compute the local estimators of an observable for a set of samples.

    This release also drops support for older versions of flax, while adopting the new interface which fully supports complex-valued neural networks. Deprecation warnings might be raised if you were using some layers from netket.nn that are now available in flax.

    A new, more accurate, estimation of the autocorrelation time has been introduced, but it is disabled by default. We welcome feedback.

    New features

    • The method nk.vqs.MCState.local_estimators has been added, which returns the local estimators O_loc(s) = 〈s|O|ψ〉 / 〈s|ψ〉 (which are known as local energies if O is the Hamiltonian); see the usage sketch after this list. #1179
    • The permutation equivariant nk.models.DeepSetRelDistance for use with particles in periodic potentials has been added together with an example. #1199
    • The class HDF5Log has been added to the experimental submodule. This logger writes log data and variational state variables into a single HDF5 file. #1200
    • Added a new method nk.logging.RuntimeLog.serialize to store the content of the logger to disk #1255.
    • New nk.callbacks.InvalidLossStopping which stops optimisation if the loss function reaches a NaN value. An optional patience argument can be set. #1259
    • Added a new method nk.graph.SpaceGroupBuilder.one_arm_irreps to construct GCNN projection coefficients to project on single-wave-vector components of irreducible representations. #1260.
    • New method nk.vqs.MCState.expect_and_forces has been added, which can be used to compute the variational forces generated by an operator, instead of only the (real-valued) gradient of an expectation value. This in general is needed to write the TDVP equation or other similar equations. #1261
    • TDVP now works for real-parametrized wavefunctions as well as non-holomorphic ones because it makes use of nk.vqs.MCState.expect_and_forces. #1261
    • New method nk.utils.group.Permutation.apply_to_id can be used to apply a permutation (or a permutation group) to one or more lattice indices. #1293
    • It is now possible to disable MPI by setting the environment variable NETKET_MPI. This is useful in cases where mpi4py crashes upon load #1254.
    • The new function nk.nn.binary_encoding can be used to encode a set of samples according to the binary shape defined by a Hilbert space. It should be used similarly to flax.linen.one_hot and works with non-homogeneous Hilbert spaces #1209.
    • A new method to estimate the correlation time in Markov chain Monte Carlo (MCMC) sampling has been added to the nk.stats.statistics function, which uses the full FFT transform of the input data. The new method is not enabled by default, but can be turned on by setting the NETKET_EXPERIMENTAL_FFT_AUTOCORRELATION environment variable to 1. In the future we might turn this on by default #1150.
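
    A minimal usage sketch of the new local-estimator method; the graph, operator and model below are illustrative, not prescribed by the release:

    import netket as nk

    g = nk.graph.Chain(length=8)
    hi = nk.hilbert.Spin(s=1 / 2, N=g.n_nodes)
    ha = nk.operator.Ising(hilbert=hi, graph=g, h=1.0)
    vstate = nk.vqs.MCState(nk.sampler.MetropolisLocal(hi), nk.models.RBM(alpha=1))

    # One local value O_loc(s) = 〈s|O|ψ〉/〈s|ψ〉 per Monte Carlo sample;
    # here O is the Hamiltonian, so these are the local energies.
    O_loc = vstate.local_estimators(ha)
    print(O_loc.shape)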

    Dependencies

    • NetKet now requires at least Flax v0.5

    Deprecations

    • nk.nn.Module and nk.nn.compact have been deprecated. Please use flax.linen.Module and flax.linen.compact instead.
    • nk.nn.Dense(dtype=mydtype) and related Modules (Conv, DenseGeneral and ConvGeneral) are deprecated. Please use flax.linen.***(param_dtype=mydtype) instead. Before flax v0.5 they did not support complex numbers properly within their modules, but starting with flax 0.5 they do, so we have removed our linear module wrappers and encourage you to use them. Please note that the dtype argument previously used by netket should be changed to param_dtype to maintain the same effect; see the migration sketch after this list. #...
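
    A migration sketch for the layer deprecation above: the plain flax layer with param_dtype replaces the old netket wrapper.

    import flax.linen as nn
    import jax.numpy as jnp

    # Before (deprecated): nk.nn.Dense(features=16, dtype=jnp.complex128)
    # After: the flax layer, with dtype renamed to param_dtype
    layer = nn.Dense(features=16, param_dtype=jnp.complex128)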

    Bug Fixes

    • Fixed bug where a nk.operator.LocalOperator representing the identity would lead to a crash. #1197
    • Fix a bug where fermionic operators nkx.operator.FermionOperator2nd would not be hermitian even when they should have been. #1233
    • Fix serialization of some arrays with complex dtype in RuntimeLog and JsonLog #1258
    • Fixed bug where the nk.callbacks.EarlyStopping callback would not work as intended when hitting a local minimum. #1238
    • chunk_size and the random seed of Monte Carlo variational states are now serialised. States serialised prior to this change can no longer be deserialised #1247
    • Continuous-space Hamiltonians now work correctly with neural networks with complex parameters #1273.
    • NetKet now works under MPI with recent versions of jax (>=0.3.15) #1291.
    Source code(tar.gz)
    Source code(zip)
  • v3.4.3(May 24, 2022)

  • v3.4.2(May 8, 2022)

    GitHub commits.

    This release fixes a critical bug affecting operators acting on non-homogeneous Hilbert spaces.

    Internal Changes

    • Several deprecation warnings related to jax.experimental.loops being deprecated have been resolved by changing those calls to jax.lax.fori_loop. Jax should feel more tranquillo now. #1172

    Bug Fixes

    • Several type-promotion bugs that would end up promoting single-precision models to double precision have been squashed. Those involved nk.operator.Ising and nk.operator.BoseHubbard #1180, nkx.TDVP #1186 and continuous-space samplers and operators #1187.
    • nk.operator.Ising, nk.operator.BoseHubbard and nk.operator.LocalLiouvillian now return connected samples with the same precision (dtype) as the input samples. This allows preserving low precision throughout the computation when using those operators. #1180
    • nkx.TDVP now updates the expectation value displayed in the progress bar at every time step. #1182
    • Fixed bug #1192 that affected most operators (nk.operator.LocalOperator) constructed on non-homogeneous Hilbert spaces. This bug was first introduced in version 3.3.4 and affects all subsequent versions until 3.4.2. #1193
    • It is now possible to add an operator and its lazy transpose/hermitian conjugate #1194
    Source code(tar.gz)
    Source code(zip)
  • v3.4.1(Apr 12, 2022)

    Internal Changes

    • Several deprecation warnings related to jax.tree_util.tree_multimap being deprecated have been resolved by changing those calls to jax.tree_util.tree_map. Jax should feel more tranquillo now. #1156

    Bug Fixes

    • TDVP now supports model with real parameters such as RBMModPhase. #1139
    • An error is now raised when user attempts to construct a LocalOperator with a matrix of the wrong size (bug #1157). #1158
    • A bug where QGTJacobian could not be used with models in single precision has been addressed (bug #1153). #1155
    Source code(tar.gz)
    Source code(zip)
  • v3.4(Apr 6, 2022)

    GitHub commits.

    This should be the first version easily installable on MacOS M1, following the instructions in the Getting Started section of our website.

    New features

    • Lattice supports specifying arbitrary edge content for each unit cell via the kwarg custom_edges. A generator for hexagonal lattices with coloured edges is implemented as nk.graph.KitaevHoneycomb. nk.graph.Grid again supports colouring edges by direction. #1074
    • Fermionic Hilbert space (nkx.hilbert.SpinOrbitalFermions) and fermionic operators (nkx.operator.fermion) to treat systems with a finite number of orbitals have been added to the experimental submodule. The operators are also integrated with OpenFermion. Those functionalities are still in development and we would welcome feedback. #1090
    • It is now possible to change the integrator of a TDVP object without reconstructing it. #1123
    • Easy install on MacOS M1

    Breaking Changes

    • The gradient for models with real parameters is now multiplied by 2. If your model has real parameters you might need to halve the learning rate. Conceptually this is a bug fix, as the value returned before was wrong (see the Bug Fixes section below for additional details) #1069
    • In the statistics returned by netket.stats.statistics, the .R_hat diagnostic has been updated to be able to detect non-stationary chains via the split-Rhat diagnostic (see, e.g., Gelman et al., Bayesian Data Analysis, 3rd edition). This changes (generally increases) the numerical values of R_hat for existing simulations, but should strictly improve its capabilities to detect MCMC convergence failure. #1138

    Bug Fixes

    • The gradient obtained with VarState.expect_and_grad for models with real parameters was off by a factor of $1/2$ from the correct value. This has now been corrected; as a consequence, the correct gradient for real-parameter models is equal to the old one times 2. If your model has real parameters you might need to halve the learning rate. #1069
    • Support for coloured edges in nk.graph.Grid, removed in #724, is now restored. #1074
    • Fixed bug that prevented calling .quantum_geometric_tensor on netket.vqs.ExactState. #1108
    • Fixed bug where the gradient of C->C models (complex parameters, complex output) was computed incorrectly with nk.vqs.ExactState. #1110
    • Fixed bug where QGTJacobianDense.state and QGTJacobianPyTree.state would not correctly transform the starting point x0 if holomorphic=False. #1115
    • The gradient of the expectation value obtained with VarState.expect_and_grad for SquaredOperators was off by a factor of 2 in some cases, and wrong in others. This has now been fixed. #1065.
    Source code(tar.gz)
    Source code(zip)
  • v3.3.3(Mar 25, 2022)

  • v3.3.2.post2(Mar 1, 2022)

    GitHub commits.

    Internal Changes

    • Support for Python 3.10 #952.
    • The minimum optax version is now 0.1.1, which finally correctly supports complex numbers. The internal implementation of Adam which was introduced in 3.3 (#1069) has been removed. If an older version of optax is detected, an import error is thrown to avoid providing wrong numerical results. Please update your optax version! #1097

    Bug Fixes

    • Allow operator @ vector products for lazy operators such as Adjoint, Transpose and Squared. #1068
    • The logic to update the progress bar in nk.experimental.TDVP has been improved, and it should now display updates even if there are very sparse save_steps. #1084
    • The nk.logging.TensorBoardLog is now lazily initialized to better work in an MPI environment. #1086
    • Converting a nk.operator.BoseHubbard to a nk.operator.LocalOperator multiplied the nonlinearity U by 2. This has now been fixed. #1102

    Notes:

    • .post1: support for python 3.10 and jax 0.3
    • .post2: remove spurious print statement
    Source code(tar.gz)
    Source code(zip)
  • v3.3.1(Jan 11, 2022)

    GitHub commits.

    • Initialisation of all implementations of DenseSymm, DenseEquivariant, GCNN now defaults to truncated normals with Lecun variance scaling. For layers without masking, there should be no noticeable change in behaviour. For masked layers, the same variance scaling now works correctly. #1045
    • Fix bug that prevented gradients of non-hermitian operators to be computed. The feature is still marked as experimental but will now run (we do not guarantee that results are correct). #1053
    • Common lattice constructors such as Honeycomb now accept the same keyword arguments as Lattice. #1046
    • Multiplying a QGTOnTheFly that represents the real part of the QGT (which shows up when the ansatz has real parameters) by a complex vector now throws an error. Previously the result would be wrong, as the imaginary part was cast away. #885
    Source code(tar.gz)
    Source code(zip)
  • v3.3(Dec 20, 2021)

    GitHub commits.

    New features

    • The interface to define expectation and gradient functions of arbitrary custom operators is now stable. If you want to define it for a standard operator that can be written as an average of local expectation terms, you can now define a dispatch rule for netket.vqs.get_local_kernel_arguments and netket.vqs.get_local_kernel. The old mechanism is still supported, but we encourage you to use the new mechanism as it is more terse. #954
    • nk.optimizer.Adam now supports complex parameters, and you can use nk.optimizer.split_complex to make optimizers process complex parameters as if they are pairs of real parameters. #1009
    • Chunking of MCState.expect and MCState.expect_and_grad computations is now supported, which allows one to bound the memory cost in exchange for a minor increase in computation time; see the sketch after this list. #1006 (and discussions in #918 and #830)
    • A new variational state that performs exact summation over the whole Hilbert space has been added. It can be constructed with nk.vqs.ExactState and supports the same Jax neural networks as nk.vqs.MCState. #953
    • DenseSymm allows multiple input features. #1030
    • [Experimental] A new time-evolution driver nk.experimental.TDVP using the time-dependent variational principle (TDVP) has been added. It works with time-independent and time-dependent Hamiltonians and Liouvillians. #1012
    • [Experimental] A set of JAX-compatible Runge-Kutta ODE integrators has been added for use together with the new TDVP driver. #1012
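
    A minimal sketch of chunking, assuming the chunk_size keyword of MCState referenced above (the system and model are illustrative):

    import netket as nk

    g = nk.graph.Chain(length=10)
    hi = nk.hilbert.Spin(s=1 / 2, N=g.n_nodes)
    vstate = nk.vqs.MCState(
        nk.sampler.MetropolisLocal(hi),
        nk.models.RBM(alpha=1),
        n_samples=4096,
        chunk_size=512,  # evaluate the network on at most 512 samples at a time
    )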

    Breaking Changes

    • The method sample_next in Sampler and exact samplers (ExactSampler and ARDirectSampler) is removed, and it is only defined in MetropolisSampler. The module function nk.sampler.sample_next also only works with MetropolisSampler. For exact samplers, please use the method sample instead. #1016
    • The default value of n_chains_per_rank in Sampler and exact samplers is changed to 1, and specifying n_chains or n_chains_per_rank when constructing them is deprecated. Please change chain_length when calling sample. For MetropolisSampler, the default value is changed from n_chains = 16 (across all ranks) to n_chains_per_rank = 16. #1017
    • GCNN_Parity allowed biasing both the parity-preserving and the parity-flip equivariant layers. These enter into the network output the same way, so having both is redundant and makes QGTs unstable. The biases of the parity-flip layers are now removed. The previous behaviour can be restored using the deprecated extra_bias switch; we only recommend this for loading previously saved parameters. Such parameters can be transformed to work with the new default using nk.models.update_GCNN_parity. #1030
    • Kernels of DenseSymm are now three-dimensional, not two-dimensional. Parameters saved from earlier implementations can be transformed to the new convention using nk.nn.update_dense_symm. #1030

    Deprecations

    • The method Sampler.samples is added to return a generator of samples. The module functions nk.sampler.sampler_state, reset, sample, samples, and sample_next are deprecated in favor of the corresponding class methods. #1025
    • Kwarg in_features of DenseEquivariant is deprecated; the number of input features are inferred from the input. #1030
    • Kwarg out_features of DenseEquivariant is deprecated in favour of features. #1030

    Internal Changes

    • The definitions of MCState and MCMixedState have been moved to an internal module, nk.vqs.mc that is hidden by default. #954
    • Custom deepcopy for LocalOperator to avoid building LocalOperator from scratch each time it is copied #964

    Bug Fixes

    • The constructor of TensorHilbert (which is used by the product operator * for inhomogeneous spaces) no longer fails when one of the component spaces is non-indexable. #1004
    • The nk.hilbert.random.flip_state method used by MetropolisLocal now throws an error when called on a nk.hilbert.ContinuousHilbert Hilbert space instead of entering an endless loop. #1014
    • Fixed bug in conversion to qutip for MCMixedState, where the resulting shape (hilbert space size) was wrong. #1020
    • Setting MCState.sampler now recomputes MCState.chain_length according to MCState.n_samples and the new sampler.n_chains. #1028
    • GCNN_Parity allowed biasing both the parity-preserving and the parity-flip equivariant layers. These enter into the network output the same way, so having both is redundant and makes QGTs unstable. The biases of the parity-flip layers are now removed. #1030
    Source code(tar.gz)
    Source code(zip)
  • v3.2(Nov 26, 2021)

    GitHub commits.

    New features

    • GraphOperator (and Heisenberg) now support passing a custom mapping of graph nodes to Hilbert space sites via the new acting_on_subspace argument. This makes it possible to create GraphOperators that act on a subset of sites, which is useful in composite Hilbert spaces. #924
    • PauliString now supports any Hilbert space with local size 2. The Hilbert space is now the optional first argument of the constructor. #960
    • PauliString now can be multiplied and summed together, performing some simple algebraic simplifications on the strings they contain. They also lazily initialize their internal data structures, making them faster to construct but slightly slower the first time that their matrix elements are accessed. #955
    • PauliStrings can now be constructed starting from an OpenFermion operator. #956
    • In addition to nearest-neighbor edges, Lattice can now generate edges between next-nearest and, more generally, k-nearest neighbors via the constructor argument max_neighbor_order; see the sketch after this list. The edges can be distinguished by their color property (which is used, e.g., by GraphOperator to apply different bond operators). #970
    • Two continuous-space operators (KineticEnergy and PotentialEnergy) have been implemented. #971
    • Heisenberg Hamiltonians support different coupling strengths on Graph edges with different colors. #972.
    • The little_group and space_group_irreps methods of SpaceGroupBuilder take the wave vector as either varargs or iterables. #975
    • A new netket.experimental submodule has been created and all experimental features have been moved there. Note that in contrast to the other netket submodules, netket.experimental is not imported by default. #976
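
    A minimal sketch of k-nearest-neighbour edges, assuming Square forwards Lattice keyword arguments (see the v3.3.1 note above):

    import netket as nk

    g1 = nk.graph.Square(length=4)                        # nearest neighbours only
    g2 = nk.graph.Square(length=4, max_neighbor_order=2)  # adds next-nearest edges
    print(g1.n_edges, g2.n_edges)  # the extra edges carry a different color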

    Breaking Changes

    • Moved nk.vqs.variables_from_*** to the nk.experimental.vqs module. Also moved the experimental samplers nk.sampler.MetropolisPt and nk.sampler.MetropolisPmap to nk.experimental.sampler. #976
    • operator.size has been deprecated. If you were using this function, please transition to operator.hilbert.size. #985

    Bug Fixes

    • A bug where LocalOperator.get_conn_flattened would read out-of-bounds memory has been fixed. It is unlikely that the bug was causing problems, but it triggered warnings when running Numba with boundscheck activated. #966
    • The dependency python-igraph has been updated to igraph following the rename of the upstream project in order to work on conda. #986
    • netket.vqs.MCState.n_samples_per_rank was returning wrong values and has now been fixed. #987
    • The DenseSymm layer now also accepts objects of type HashableArray as symmetries argument. #989
    • A bug where VMC.info() was erroring has been fixed. #984
    Source code(tar.gz)
    Source code(zip)
  • v3.1.2(Nov 19, 2021)

  • v3.1.1(Nov 17, 2021)

  • v3.1(Oct 20, 2021)

    GitHub commits.

    New features

    • Added conversion methods to_qobj() to operators and variational states, which produce QuTiP Qobj objects; see the sketch after this list.
    • A function nk.nn.activation.reim has been added that transforms a nonlinearity to act separately on the real and imaginary parts.
    • Nonlinearities reim_selu and reim_relu have been added
    • Autoregressive Neural Networks (ARNN) now have a machine_pow field (defaults to 2) used to change the exponent used for the normalization of the wavefunction. #940.
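
    A minimal sketch of the QuTiP conversion; qutip must be installed, and the operator choice below is illustrative:

    import netket as nk

    hi = nk.hilbert.Spin(s=1 / 2, N=2)
    op = nk.operator.spin.sigmax(hi, 0)
    qobj = op.to_qobj()  # a qutip.Qobj representing the same operator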

    Breaking Changes

    • The default initializer for netket.models.GCNN has been changed from jax.nn.selu to netket.nn.reim_selu #892
    • netket.nn.initializers has been deprecated in favor of jax.nn.initializers #935.
    • Subclasses of AbstractARNN must define the field machine_pow #940
    • nk.hilbert.HilbertIndex and nk.operator.spin.DType are now unexported (they were never intended to be visible). #904
    • AbstractOperators have been renamed to DiscreteOperators. AbstractOperators still exist, but have almost no functionality and are intended as the base class for more arbitrary (e.g. continuous-space) operators. If you have defined a custom operator inheriting from AbstractOperator you should change it to derive from DiscreteOperator. #929

    Internal Changes

    • PermutationGroup.product_table now consumes less memory and is more performant. This is helpful when working with large symmetry groups. #884 #891
    • Added a size check to DiscreteOperator.get_conn that throws helpful error messages if the sizes do not match. #927
    • The internal numba4jax module has been factored out into a standalone library, named (how original) numba4jax. This library was never intended to be used by external users, but if for any reason you were using it, you should switch to the external library. #934
    • netket.jax now includes several batching utilities like batched_vmap and batched_vjp. Those can be used to build memory efficient batched code, but are considered internal, experimental and might change without warning. #925.

    Bug Fixes

    • Autoregressive networks now work with Qubit Hilbert spaces. #937
    Source code(tar.gz)
    Source code(zip)
  • v3.0.4(Sep 24, 2021)

  • v3.0.3(Sep 7, 2021)

  • v3.0.2(Sep 7, 2021)

  • v3.0.1(Aug 30, 2021)

  • v3.0(Aug 23, 2021)

    NetKet 3.0 (23 august 2021)

    GitHub commits.

    Complete changelog of beta versions available at https://www.netket.org/docs/changelog.html Below only changes from the last beta release are reported

    Breaking Changes

    • The default initializer for netket.nn.Dense layers now matches the same default as flax.linen, and it is lecun_normal instead of normal(0.01) #869
    • The default initializer for netket.nn.DenseSymm layers is now chosen in order to give variance 1 to every output channel, therefore defaulting to lecun_normal #870
    Source code(tar.gz)
    Source code(zip)
  • v3.0b4(Aug 16, 2021)

    GitHub commits.

    New features

    • DenseSymm now accepts a mode argument to specify whether the symmetries should be computed with a full dense matrix or FFT. The latter method is much faster for sufficiently large systems. Other kwargs have been added to satisfy the interface. The API changes are also reflected in RBMSymm and GCNN. #792

    Breaking Changes

    • The so-called legacy netket in netket.legacy has been removed. #773

    Internal Changes

    • The methods expect and expect_and_grad of MCState now use dispatch to select the relevant implementation of the algorithm. They can therefore be expanded and overridden without editing NetKet's source code. #804
    • netket.utils.mpi_available has been moved to netket.utils.mpi.available to have a more consistent api interface (all mpi-related properties in the same submodule). #827
    • netket.logging.TBLog has been renamed to netket.logging.TensorBoardLog for better readability. A deprecation warning is now issued if the older name is used #827
    • When MCState initializes a model by calling model.init, the call is now jitted. This should speed it up for non-trivial models but might break non-jit invariant models. #832
    • operator.get_conn_padded now supports arbitrarily-dimensioned bitstrings as input and reshapes the output accordingly. #834
    • NetKet's implementation of dataclasses now supports pytree_node=True/False on cached properties. #835
    • Plum version has been bumped to 1.5.1 to avoid broken versions (1.4, 1.5). #856.
    • Numba version 0.54 is now allowed #857.

    Bug Fixes

    • Fix Progress bar bug. #810
    • Make the repr/printing of history objects nicer in the REPL. #819
    • The field MCState.model is now read-only, to prevent user errors. #822
    • The order of the operators in PauliString no longer influences the estimate of the number of non-zero connected elements. #836
    Source code(tar.gz)
    Source code(zip)
  • v3.0b3.post2(Jul 14, 2021)

  • v3.0b3(Jul 9, 2021)

    NetKet 3.0b3 (published on July 9 2021)

    GitHub commits.

    New features

    • The utils.group submodule provides utilities for geometrical and permutation groups. Lattice (and its specialisations like Grid) use these to automatically construct the space groups of lattices, as well as their character tables for generating wave functions with broken symmetry. #724
    • Autoregressive neural networks, samplers, and masked linear layers have been added to models, sampler and nn #705.

    Breaking Changes

    • The graph.Grid class has been removed. graph.Grid will now return an instance of graph.Lattice supporting the same API but with new functionalities related to spatial symmetries. The color_edges optional keyword argument has been removed without deprecation. #724
    • MCState.n_discard has been renamed MCState.n_discard_per_chain and the old binding has been deprecated #739.
    • nk.optimizer.qgt.QGTOnTheFly option centered=True has been removed because we are now convinced the two options yielded equivalent results. QGTOnTheFly now always behaves as if centered=False #706.

    Internal Changes

    • networkX has been replaced by igraph, yielding a considerable speedup for some graph-related operations #729.
    • The netket.hilbert.random module now uses plum-dispatch (through netket.utils.dispatch) to select the correct implementation of random_state and flip_state. This makes it easy to define new Hilbert states and extend their functionality. #734.
    • The AbstractHilbert interface is now much smaller in order to also support continuous Hilbert spaces. Any functionality specific to discrete Hilbert spaces (what was previously supported) has been moved to a new abstract type nk.hilbert.DiscreteHilbert. Any Hilbert space previously subclassing nk.hilbert.AbstractHilbert should be modified to subclass nk.hilbert.DiscreteHilbert #800.

    Bug Fixes

    • nn.to_array and MCState.to_array, if normalize=False, do not subtract the logarithm of the maximum value from the state #705.
    • Autoregressive networks now work with Fock spaces and give correct errors if the Hilbert space is not supported #806.
    • Autoregressive networks are now much (x10-x100) faster #705.
    • Do not throw errors when calling operator.get_conn_flattened(states) with a jax array #764.
    • Fix bug with the driver progress bar when step_size != 1 #747.
    Source code(tar.gz)
    Source code(zip)
  • v3.0b2(May 31, 2021)

    NetKet 3.0b2 (published on 31 May 2021)

    GitHub commits.

    New features

    • Group Equivariant Neural Networks have been added to models #620
    • Permutation invariant RBM and Permutation invariant dense layer have been added to models and nn.linear #573
    • Add the property acceptance to MetropolisSampler's SamplerState, computing the MPI-enabled acceptance ratio. #592.
    • Add StateLog, a new logger that stores the parameters of the model during the optimization in a folder or in a tar file. #645
    • A warning is now issued if NetKet detects it is running under mpirun but MPI dependencies are not installed #631
    • operator.LocalOperators now do not return a zero matrix element on the diagonal if the whole diagonal is zero. #623.
    • logger.JSONLog now automatically flushes at every iteration if it does not consume significant CPU cycles. #599
    • The interface of Stochastic Reconfiguration has been overhauled and made more modular. You can now specify the solver you wish to use, NetKet provides some dense solvers out of the box, and there are 3 different ways to compute the Quantum Geometric Tensor. Read the documentation to learn more about it, and see the sketch after this list. #674
    • Unless you specify the QGT implementation you wish to use with SR, we use an automatic heuristic based on your model and the solver to pick one. This might affect SR performance. #674
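
    A minimal sketch of the modular interface, assuming an iterative solver from jax.scipy (any compatible solver works):

    import jax
    import netket as nk

    # Choose the QGT implementation and the linear solver explicitly (#674);
    # if qgt is omitted, NetKet picks one with an automatic heuristic.
    sr = nk.optimizer.SR(
        qgt=nk.optimizer.qgt.QGTJacobianPyTree,
        solver=jax.scipy.sparse.linalg.cg,
    )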

    Breaking Changes

    • For all samplers, n_chains now sets the total number of chains across all MPI ranks. This is a breaking change compared to the old API, where n_chains would set the number of chains on a single MPI rank. It is still possible to set the number of chains per MPI rank by specifying n_chains_per_rank instead of n_chains. This change, while breaking, allows us to be consistent with the interface of variational.MCState, where n_samples is the total number of samples across MPI nodes.
    • MetropolisSampler.reset_chain has been renamed to MetropolisSampler.reset_chains. Likewise in the constructor of all samplers.
    • Briefly, during development releases, MetropolisSamplerState.acceptance_ratio returned the percentage (not the ratio) of acceptance. acceptance_ratio is now deprecated in favour of the correct acceptance.
    • models.Jastrow now internally symmetrizes the matrix before computing its value #644
    • MCState.evaluate has been renamed to MCState.log_value #632
    • nk.optimizer.SR no longer accepts keyword arguments relative to the sparse solver. Those should be passed inside the closure or the functools.partial passed as the solver argument.
    • nk.optimizer.sr.SRLazyCG and nk.optimizer.sr.SRLazyGMRES have been deprecated and will soon be removed.
    • Parts of the Lattice API have been overhauled, with deprecations of several methods in favor of a consistent usage of Lattice.position for real-space location of sites and Lattice.basis_coords for location of sites in terms of basis vectors. Lattice.sites has been added, which provides a sequence of LatticeSite objects combining all site properties. Furthermore, Lattice now provides lookup of sites from their position via id_from_position using a hashing scheme that works across periodic boundaries. #703 #715
    • nk.variational has been renamed to nk.vqs; the old name is deprecated and will be removed in a future release.

    Bug Fixes

    • Fix operator.BoseHubbard usage under jax Hamiltonian Sampling #662
    • Fix SROnTheFly for R->C models with non-homogeneous parameters #661
    • Fix MPI Compilation deadlock when computing expectation values #655
    • Fix bug preventing the creation of a hilbert.Spin Hilbert space with odd sites and even S. #641
    • Fix bug #635, preventing the usage of NumpyMetropolisSampler with MCState.expect
    • Fix bug #635 where graph.Lattice was not correctly computing neighbours because of floating-point issues. #633
    • Fix a bug in the Y Pauli matrix, which was stored as its conjugate. #618 #617 #615
    Source code(tar.gz)
    Source code(zip)
  • v3.0b1.post9(May 7, 2021)

  • v3.0b1.post8(Apr 28, 2021)

  • v3.0b1.post7(Apr 22, 2021)

  • v3.0b1.post6(Apr 21, 2021)

Owner
NetKet
Open-source project for the development of machine intelligence for many-body quantum systems.