A PyTorch-centric hybrid classical-quantum machine learning framework

Overview

torchquantum

A PyTorch-centric hybrid classical-quantum dynamic neural network framework.

MIT License

News

  • Added a simple example script that uses quantum gates to do MNIST classification.
  • v0.0.1 is available. Feedback is highly welcome!

Installation

git clone https://github.com/Hanrui-Wang/pytorch-quantum.git
cd pytorch-quantum
pip install --editable .
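
A quick smoke test of the editable install (illustrative; it just allocates a 2-wire device and prints the shape of its statevector, assuming QuantumDevice initializes its state on construction):

python -c "import torchquantum as tq; print(tq.QuantumDevice(n_wires=2).states.shape)"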

Usage

Constructing a quantum NN model is as simple as constructing a normal PyTorch model.

import torch.nn as nn
import torch.nn.functional as F 
import torchquantum as tq
import torchquantum.functional as tqf

class QFCModel(nn.Module):
  def __init__(self):
    super().__init__()
    self.n_wires = 4
    self.q_device = tq.QuantumDevice(n_wires=self.n_wires)
    self.measure = tq.MeasureAll(tq.PauliZ)
    
    self.encoder_gates = [tqf.rx] * 4 + [tqf.ry] * 4 + \
                         [tqf.rz] * 4 + [tqf.rx] * 4
    self.rx0 = tq.RX(has_params=True, trainable=True)
    self.ry0 = tq.RY(has_params=True, trainable=True)
    self.rz0 = tq.RZ(has_params=True, trainable=True)
    self.crx0 = tq.CRX(has_params=True, trainable=True)

  def forward(self, x):
    bsz = x.shape[0]
    # down-sample the image
    x = F.avg_pool2d(x, 6).view(bsz, 16)
    
    # reset qubit states
    self.q_device.reset_states(bsz)
    
    # encode the classical image to quantum domain
    for k, gate in enumerate(self.encoder_gates):
      gate(self.q_device, wires=k % self.n_wires, params=x[:, k])
    
    # add some trainable gates (need to instantiate ahead of time)
    self.rx0(self.q_device, wires=0)
    self.ry0(self.q_device, wires=1)
    self.rz0(self.q_device, wires=3)
    self.crx0(self.q_device, wires=[0, 2])
    
    # add some more non-parameterized gates (add on-the-fly)
    tqf.hadamard(self.q_device, wires=3)
    tqf.sx(self.q_device, wires=2)
    tqf.cnot(self.q_device, wires=[3, 0])
    tqf.qubitunitary(self.q_device, wires=[1, 2],
                     params=[[1, 0, 0, 0],
                             [0, 1, 0, 0],
                             [0, 0, 0, 1j],
                             [0, 0, -1j, 0]])
    
    # perform measurement to get expectations (back to classical domain)
    x = self.measure(self.q_device).reshape(bsz, 2, 2)
    
    # classification
    x = x.sum(-1).squeeze()
    x = F.log_softmax(x, dim=1)

    return x
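
The model above trains like any other PyTorch module. A minimal sketch (the random tensors stand in for real MNIST batches; the optimizer and learning rate are illustrative):

import torch
import torch.nn.functional as F

model = QFCModel()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-3)

images = torch.rand(32, 1, 28, 28)    # dummy batch in place of MNIST images
targets = torch.randint(0, 2, (32,))  # the head above produces 2 classes

logits = model(images)                # encode, apply gates, measure
loss = F.nll_loss(logits, targets)    # pairs with the log_softmax output
optimizer.zero_grad()
loss.backward()                       # gradients flow through the simulated circuit
optimizer.step()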

Features

  • Easy construction of parameterized quantum circuits in PyTorch.
  • Support for batch-mode inference and training on CPU/GPU.
  • Support for dynamic computation graphs for easy debugging.
  • Support for easy deployment on real quantum devices such as IBMQ.

TODOs

  • Support more gates
  • Support compiling a unitary from a description to speed up training
  • Support measurement methods other than the analytic method
  • In einsum, support multiple qubits sharing one letter so that more than 26 qubits can be simulated
  • Support a bmm-based implementation to solve the scalability issue
  • Support conversion from torchquantum to qiskit

Dependencies

  • Python >= 3.7
  • PyTorch >= 1.8.0
  • configargparse >= 0.14
  • GPU model training requires NVIDIA GPUs

MNIST Example

Train a quantum circuit to perform the MNIST classification task and deploy it on the real IBM Yorktown quantum computer, as in the mnist_example.py script:

python mnist_example.py

Files

File              Description
devices.py        QuantumDevice class which stores the statevector
encoding.py       Encoding layers to encode classical values to quantum domain
functional.py     Quantum gate functions
operators.py      Quantum gate classes
layers.py         Layer templates such as RandomLayer
measure.py        Measurement of quantum states to get classical values
graph.py          Quantum gate graph used in static mode
super_layer.py    Layer templates for SuperCircuits
plugins/qiskit*   Converters and processors for easy deployment on IBMQ
examples/         More examples for training QML and VQE models
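
For example, the RandomLayer template from layers.py can replace a hand-written gate list; a small sketch (the keyword names n_ops and wires follow the upstream examples but should be treated as assumptions):

import torchquantum as tq

q_device = tq.QuantumDevice(n_wires=4)
q_device.reset_states(bsz=1)
random_layer = tq.RandomLayer(n_ops=20, wires=list(range(4)))  # 20 randomly sampled gates
random_layer(q_device)                                         # apply them to the device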

More Examples

The examples/ folder contains more examples for training QML and VQE models. Example usage for a QML circuit:

# train the circuit with 36 params in the U3+CU3 space
python examples/train.py examples/configs/mnist/four0123/train/baseline/u3cu3_s0/rand/param36.yml

# evaluate the circuit with torchquantum
python examples/eval.py examples/configs/mnist/four0123/eval/tq/all.yml --run-dir=runs/mnist.four0123.train.baseline.u3cu3_s0.rand.param36

# evaluate the circuit with real IBMQ-Yorktown quantum computer
python examples/eval.py examples/configs/mnist/four0123/eval/x2/real/opt2/300.yml --run-dir=runs/mnist.four0123.train.baseline.u3cu3_s0.rand.param36

Example usage for a VQE circuit:

# Train the VQE circuit for h2
python examples/train.py examples/configs/vqe/h2/train/baseline/u3cu3_s0/human/param12.yml

# evaluate the VQE circuit with torchquantum
python examples/eval.py examples/configs/vqe/h2/eval/tq/all.yml --run-dir=runs/vqe.h2.train.baseline.u3cu3_s0.human.param12/

# evaluate the VQE circuit with real IBMQ-Yorktown quantum computer
python examples/eval.py examples/configs/vqe/h2/eval/x2/real/opt2/all.yml --run-dir=runs/vqe.h2.train.baseline.u3cu3_s0.human.param12/

Detailed documentation coming soon.

Contact

Hanrui Wang ([email protected])

Comments
  • Cannot use qiskit simulation when running mnist_example.py

    I tried to run mnist_example.py with an IBM Q token already set, but I ran into trouble when doing qiskit simulation. The line is

    valid_test(dataflow, 'test', model, device, qiskit=True)
    

    I think the execution should be fast, but it got stuck after the following messages:

    Test with Qiskit Simulator
    [2022-03-22 22:36:14.573] Before transpile: {'depth': 32, 'size': 77, 'width': 8, 'n_single_gates': 62, 'n_two_gates': 11, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 19, 'rz': 24, 'rx': 17, 'cx': 10, 'crx': 1, 'h': 1, 'sx': 1, 'measure': 4}}
    [2022-03-22 22:36:14.864] After transpile: {'depth': 23, 'size': 49, 'width': 8, 'n_single_gates': 33, 'n_two_gates': 12, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 8, 'rz': 7, 'rx': 4, 'cx': 12, 'u1': 2, 'u3': 11, 'u2': 1, 'measure': 4}}
    

    I interrupted the program using Ctrl+C after 2-3 minutes, getting a very long error log for interrupting multiprocessing. I want to know what causes the trouble and how to deal with it.

    Thanks!


    I am using the torchquantum master branch with qiskit 0.19.2.

    error_log.txt

    opened by royess 10
  • Always print “The qiskit parameterization bug is already fixed!” when running mnist_example.py

    I tried to run mnist_example.py and commented out the later part about Qiskit.

    I tried to run it with the following command:

    python mnist_example.py --epochs 1

    But it always prints "The qiskit parameterization bug is already fixed!" in the terminal. I wish there were a way to stop the printing, but I haven't found one yet.

    Thanks!

    print_log.txt

    opened by ex193170 5
  • Testing of mnist_example_no_binding.py file produces error

    Hi, I tried testing the mnist_example_no_binding.py file. I keep getting the error below:

      File "C:\Users\manuc\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\__init__.py", line 126, in <module>
        raise err
    OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\manuc\AppData\Local\Programs\Python\Python39\
    lib\site-packages\torch\lib\cusparse64_11.dll" or one of its dependencies.
    Traceback (most recent call last):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "<string>", line 1, in <module>
    
    

    Please suggest a way to resolve this. Because of this error, my testing is not yet complete.

    opened by manu123416 4
  • Cannot use Qiskit simulation when running example1

    I tried to run torchquantum-master\artifact\example1\mnist_example.py, but I ran into some trouble when doing qiskit simulation, too.

    Because the file "examples.core.datasets" is missing, I copied it from https://zenodo.org/record/5787244#.YbunmBPMJhE (\torchquantum-master\examples\core\datasets). To avoid a BrokenPipeError, I set "num_workers" in line 152 to 0.

    After these messages:

    Test with Qiskit Simulator
    [2022-10-20 14:33:05.579] Before transpile: {'depth': 36, 'size': 77, 'width': 8, 'n_single_gates': 52, 'n_two_gates': 21, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 19, 'rz': 18, 'rx': 13, 'cx': 20, 'crx': 1, 'h': 1, 'sx': 1, 'measure': 4}}
    [2022-10-20 14:33:06.257] After transpile: {'depth': 31, 'size': 61, 'width': 8, 'n_single_gates': 37, 'n_two_gates': 20, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 11, 'rz': 7, 'rx': 6, 'cx': 20, 'u3': 11, 'u1': 2, 'measure': 4}}
    

    An error occurred, saying "need at least one array to stack". The details are in the file "errorlog1".

    I also tried to add ", parallel=False" and modify the file qiskit/assembler/assemble_circuits.py as in Issue #9, but another error occurred; the details are in the file "errorlog2".

    The version information is as follows:

    >>> import qiskit
    >>> qiskit.version.QiskitVersion()
    {'qiskit-terra': '0.19.2', 'qiskit-aer': '0.10.3', 'qiskit-ignis': '0.7.0', 'qiskit-ibmq-provider': '0.18.3', 'qiskit-aqua': '0.9.5', 'qiskit': '0.34.2', 'qiskit-nature': None, 'qiskit-finance': None, 'qiskit-optimization': None, 'qiskit-machine-learning': None}
    

    and I'm running the code under python 3.9.

    By the way, I tried the code from https://zenodo.org/record/5787244#.YbunmBPMJhE. The same problem occurred. I wonder how to deal with it. Any help would be greatly appreciated.

    errorlog1.txt errorlog2.txt

    opened by yzr-mint 2
  • Code for reproducing QuantumNAS results missing

    The .py and .yml files referenced in the shell scripts used to reproduce the results from the QuantumNAS paper seem to be missing - when running the Colab notebooks, I always run into this error:

    can't open file 'examples/train.py': [Errno 2] No such file or directory

    I tried searching for the files in the repo manually, but could not find the .py or the .yml files anywhere.

    opened by SashwatAnagolum 2
  • GPU is not utilized during VQE training

    I tried to use the code in the VQE examples but found that the GPU was not utilized, even though 2 GB of GPU memory is in use.

    My configuration:

    [2022-05-31 13:54:52.758] /home/yuxuan/.julia/conda/3/envs/qtorch39/bin/python  examples/vqe/xxz_noncritical_configs.yml --gpu 0
    [2022-05-31 13:54:52.758] Training started: "runs/vqe.xxz_noncritical_configs".
    dataset:
      name: vqe
      input_name: input
      target_name: target
    trainer:
      name: params_shift_trainer
    run:
      steps_per_epoch: 10
      workers_per_gpu: 8
      n_epochs: 10
      bsz: 1
      device: gpu
    model:
      transpile_before_run: False
      load_op_list: False
      hamil_filename: examples/vqe/h2.txt
      arch:
        n_wires: 6
        n_layers_per_block: 6
        q_layer_name: seth_0
        n_blocks: 6
      name: vqe_0
    qiskit:
      use_qiskit: False
      use_qiskit_train: True
      use_qiskit_valid: True
      use_real_qc: False
      backend_name: ibmq_quito
      noise_model_name: None
      n_shots: 8192
      initial_layout: None
      optimization_level: 0
      max_jobs: 1
    ckpt:
      load_ckpt: False
      load_trainer: False
      name: checkpoints/min-loss-valid.pt
    debug:
      pdb: False
      set_seed: False
    optimizer:
      name: adam
      lr: 0.05
      weight_decay: 0.0001
      lambda_lr: 0.01
    criterion:
      name: minimize
    scheduler:
      name: cosine
    callbacks: [{'callback': 'InferenceRunner', 'split': 'valid', 'subcallbacks': [{'metrics': 'MinError', 'name': 'loss/valid'}]}, {'callback': 'MinSaver', 'name': 'loss/valid'}, {'callback': 'Saver', 'max_to_keep': 10}]
    regularization:
      unitary_loss: False
    legalization:
      legalize: False
    

    GPU status from Nvitop (the last line is the VQE training process):

    [Nvitop screenshot]

    Version information:

    python                    3.9.11               h12debd9_2
    tensorboard               2.9.0                    pypi_0    pypi
    tensorboard-data-server   0.6.1                    pypi_0    pypi
    tensorboard-plugin-wit    1.8.1                    pypi_0    pypi
    tensorflow                2.9.1                    pypi_0    pypi
    tensorflow-estimator      2.9.0                    pypi_0    pypi
    tensorflow-io-gcs-filesystem 0.26.0                   pypi_0    pypi
    tensorpack                0.11                     pypi_0    pypi
    torch                     1.11.0+cu113             pypi_0    pypi
    torchaudio                0.11.0+cu113             pypi_0    pypi
    torchpack                 0.3.1                    pypi_0    pypi
    torchquantum              0.1.0                     dev_0    <develop>
    torchvision               0.12.0+cu113             pypi_0    pypi
    
    opened by royess 2
  • inconvenient to run VQE example

    I find it not very convenient to run the VQE example. If I run python examples/vqe/train.py directly, I get the error message

    ......
        from examples.vqe import builder
    ModuleNotFoundError: No module named 'examples.vqe'
    

    But python -c "from examples.vqe import builder" works fine, which is strange to me.

    And my current way to run the script is by opening a python REPL and running

    from examples.vqe import train
    import sys
    sys.argv.append('examples/vqe/vqe_configs.yml')
    train.main()
    

    I wonder whether I can do it in a simpler way, or whether the code needs to be modified.

    opened by royess 2
  • Got stuck while running .\artifact\example2

    I tried to run torchquantum-master\artifact\example2\quantumnas\1_train_supercircuit.sh, but I got stuck.

    The program seems to get stuck after it begins to train. After the message "0% 0/92 [00:00<?, ?it/s]" came out, I waited for hours but nothing happened. The output is in the file "errorlog.log".

    The version information is as follows:

    >>> import qiskit
    >>> qiskit.version.QiskitVersion()
    {'qiskit-terra': '0.19.2', 'qiskit-aer': '0.10.3', 'qiskit-ignis': '0.7.0', 'qiskit-ibmq-provider': '0.18.3', 'qiskit-aqua': '0.9.5', 'qiskit': '0.34.2', 'qiskit-nature': None, 'qiskit-finance': None, 'qiskit-optimization': None, 'qiskit-machine-learning': None}
    

    and I'm running the code under python 3.9.

    I wonder how to deal with it. Any help would be greatly appreciated.

    errorlog.log

    opened by yzr-mint 1
  • Apple Silicon Mac needs one more step: install hdf5

    git clone https://github.com/mit-han-lab/torchquantum.git
    cd torchquantum
    brew install hdf5
    export HDF5_DIR="$(brew --prefix hdf5)"
    pip install --editable .
    python fix_qiskit_parameterization.py
    

    reference: https://stackoverflow.com/questions/66741778/how-to-install-h5py-needed-for-keras-on-macos-with-m1

    opened by frogcjn 1
  • (feat/github-ci) ensure python style consistency, add pre-commit hooks

    Hello 👋

    This small PR adds new GitHub CI workflows and a flake8 configuration to the torchquantum project to ensure Python style consistency and prevent common mistakes in the codebase.

    opened by almostagi 1
  • How to save the QNN model like a normal pytorch model?

    Hi,

    How can I save the QNN model in such a way that it can be loaded back the same way we load a normal PyTorch model? Basically, I want to load it for this use case.

    I did check the saving example from the examples section, but it doesn't save the entire model, just a checkpoint.
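
    Since TorchQuantum models are ordinary torch.nn.Module subclasses, the standard PyTorch mechanisms should apply; a minimal sketch (file names are illustrative; QFCModel is the model class from the Usage section above):

    import torch

    model = QFCModel()
    torch.save(model.state_dict(), 'qfc_model.pt')        # save just the weights

    model2 = QFCModel()                                   # rebuild the architecture
    model2.load_state_dict(torch.load('qfc_model.pt'))    # then restore the weights

    torch.save(model, 'qfc_model_full.pt')                # or pickle the entire module in one file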

    opened by sauravtii 1
  • A simple way to improve the regression example

    https://github.com/mit-han-lab/torchquantum/blob/7122388a3d58d5b6c48db44fbd4b27198941ed2f/examples/regression/run_regression.py#L138

    In this example, after the measurement you get 3 numbers (output_all.shape = [bsz, 3]). However, the loss function only uses the 2nd number, i.e., [:, 1], which leads to poor performance. A simple fix significantly improves the performance (I already tested it); see the sketch after the two steps below.

    1. Add res = self.linear(res) as the last step of self.forward(), where self.linear = torch.nn.Linear(self.n_wires, 1)
    2. Unsqueeze the targets with unsqueeze(dim=-1) so that the dimensions of outputs_all and targets match
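
    A runnable sketch of the suggested fix (shapes and names are illustrative; n_wires = 3 as in the example):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    linear = nn.Linear(3, 1)                     # step 1: mix all 3 measured values into one output

    measured = torch.rand(8, 3)                  # stand-in for the [bsz, 3] measurement results
    targets = torch.rand(8)                      # stand-in for the regression targets

    outputs_all = linear(measured)               # shape [8, 1]
    loss = F.mse_loss(outputs_all, targets.unsqueeze(dim=-1))  # step 2: shapes now match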

    BTW: I have been playing around with torchquantum recently. It is a very good tool.

    opened by caitaozhan 0
  • Support for fake backends

    First of all, I appreciate your effort! This framework is so helpful for new learners!

    I think it would be great if this framework supported fake backends as well, for reproducibility!

    Thank you.

    opened by j0807s 1
  • Train data label and image are different

    Hi, I tried testing your quantum neural network code in a Jupyter notebook. I think there is a bug in the training data.

    dataset = MNIST(root='../Data_Manu',
                    train_valid_split_ratio=[0.9, 0.1],
                    digits_of_interest=[3, 5],
                    # n_train_samples=75,
                    n_test_samples=75)

    data_train = dataset['train'][0]

    {'image': tensor([[[-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.3733, -0.1696,
               -0.1696,  0.8868,  1.4468,  2.1087,  0.0213, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242,  0.1995,  0.6704,  1.8032,  1.9560,  2.7960,
                2.8088,  2.7960,  2.7960,  2.7960,  2.4142, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
                1.6250,  2.1723,  1.3068,  1.3068,  1.3196,  1.3068,  1.3068,
                1.4978,  2.5542,  2.7069,  2.7960,  2.7960,  2.7960,  2.7960,
                2.8088,  2.7960,  2.7960,  2.3251,  1.8160, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
                2.2996,  2.7960,  2.7960,  2.7960,  2.8088,  2.7960,  2.7960,
                2.7960,  2.7960,  2.8088,  2.7960,  2.7960,  2.7960,  2.7960,
                2.0323,  0.9886,  0.3140, -0.3606, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
                0.7977,  2.8088,  2.8088,  2.8088,  2.8215,  2.8088,  2.8088,
                2.8088,  2.8088,  2.6433,  1.9560,  0.8232,  0.6322, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
                0.5049,  2.7960,  2.7960,  2.7960,  2.8088,  2.4778,  1.2941,
                0.6322,  0.0722, -0.0424, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,  0.4795,
                2.6433,  2.7960,  2.7960,  2.7960,  1.0523, -0.2715, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,  2.2360,
                2.7960,  2.7960,  2.5160,  0.5813, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,  2.8088,
                2.7960,  2.7960,  2.4906,  0.8232,  0.8359,  0.8232,  0.8232,
                0.8232, -0.1315, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,  2.0451,
                2.8088,  2.8088,  2.8088,  2.8088,  2.8215,  2.8088,  2.8088,
                2.8088,  2.8088,  2.0451,  0.1740, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,  0.1740,
                1.8796,  2.5415,  2.5415,  2.6433,  2.5542,  2.5415,  2.5415,
                2.5415,  2.6433,  2.8088,  2.3760, -0.2460, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.0424, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.0424,  2.4142,  2.7960,  2.1087, -0.3478, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.2206,  2.2360,  2.7960,  2.7960, -0.1824, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.0296,
                1.4850,  2.3378,  2.8088,  2.7960,  2.7960,  0.3013, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242,  0.6195,  1.6505,  2.8088,
                2.8088,  2.8088,  2.8215,  2.8088,  1.9051, -0.3224, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.3351,  0.5049,  2.0196,  2.8088,  2.7960,  2.7960,
                2.7960,  2.7960,  2.4524,  1.2050, -0.2715, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.3351,  1.7269,  2.7960,  2.7960,  2.8088,  2.7960,  2.7960,
                2.6306,  1.7905,  0.3395, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.3097,  1.4341,  2.2869,  2.2869,  2.2996,  1.7141,  1.0650,
               -0.1315, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242],
              [-0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242,
               -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242, -0.4242]]]),
     'digit': 1}
    
    The tensor matrix shows a 5, but the label says 1?
    
    opened by manu123416 3
  • Density matrix and mixed state

    Hi,

    we are currently using Torchquantum to implement hybrid models, and we're wondering whether Torchquantum plans to support mixed states and density-matrix simulation in the near future, since we'd like to implement something like qiskit.quantum_info.partial_trace.

    Without density matrices/mixed states, is something like https://quantumai.google/reference/python/cirq/partial_trace_of_state_vector_as_mixture currently doable with Torchquantum?
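
    For a pure state this is doable with plain tensor algebra on the simulator's statevector; a minimal sketch (assuming the state of a single sample is available as a complex tensor with one axis per wire):

    import torch

    def reduced_density_matrix(state, keep, n_wires):
        # reshape the statevector to one axis per wire
        psi = state.reshape([2] * n_wires)
        traced = [w for w in range(n_wires) if w not in keep]
        # group kept wires in front, traced wires behind, then flatten to a matrix
        psi = psi.permute(*(list(keep) + traced)).reshape(2 ** len(keep), -1)
        return psi @ psi.conj().t()  # rho_keep = Tr_traced |psi><psi|

    # example: either qubit of a Bell state is maximally mixed
    bell = torch.tensor([1, 0, 0, 1], dtype=torch.cfloat) / 2 ** 0.5
    print(reduced_density_matrix(bell, keep=[0], n_wires=2))  # 0.5 * identity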

    Thanks for making such an awesome library available!

    opened by wcqc 6
Releases (v0.1.5)
  • v0.1.2 (Sep 14, 2022)

    1. Added support for state.gate calls such as state.h
    2. Added more examples

    What's Changed

    • RZ gate by @jessding in https://github.com/mit-han-lab/torchquantum/pull/1
    • [major] merge pruning branch to master by @Hanrui-Wang in https://github.com/mit-han-lab/torchquantum/pull/2
    • Jiaqi by @JeremieMelo in https://github.com/mit-han-lab/torchquantum/pull/3
    • Jiaqi by @JeremieMelo in https://github.com/mit-han-lab/torchquantum/pull/4
    • Params shift by @abclzr in https://github.com/mit-han-lab/torchquantum/pull/6
    • Corrected module name to import MNIST by @googlercolin in https://github.com/mit-han-lab/torchquantum/pull/23
    • modify doc conf to init docstring publish task by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/24
    • refine class template by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/25
    • [minor] update format and theme by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/26
    • [minor] adjust dark theme code block and add function template by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/27
    • add customized furo doc theme by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/28
    • [doc] add ipynb and md support into doc, add one example by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/29
    • [major] fix bugs in torchquantum/measure.py by @abclzr in https://github.com/mit-han-lab/torchquantum/pull/30
    • [doc] Fix examples page in examples/index.rst by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/31

    New Contributors

    • @jessding made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/1
    • @Hanrui-Wang made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/2
    • @JeremieMelo made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/3
    • @abclzr made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/6
    • @googlercolin made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/23
    • @frogcjn made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/24

    Full Changelog: https://github.com/mit-han-lab/torchquantum/commits/v0.1.2

    Source code(tar.gz)
    Source code(zip)
    torchquantum-0.1.2-py3-none-any.whl(107.24 KB)
    torchquantum-0.1.2.tar.gz(93.23 KB)
Owner
MIT HAN Lab
Accelerating Deep Learning Computing