A Peer-to-peer Platform for Secure, Privacy-preserving, Decentralized Data Science

Overview

PyGrid logo

Run Tests Docker build

PyGrid is a peer-to-peer network of data owners and data scientists who can collectively train AI models using PySyft. PyGrid is also the central server for conducting both model-centric and data-centric federated learning.

A quick note about PySyft 0.3.x: Currently, PyGrid is designed to work with the PySyft 0.2.x product line only. We are working on support for 0.3.x and hope to have this released by early 2021. Thanks for your patience!

Architecture

The PyGrid platform is composed of three different components.

  • Network - A Flask-based application used to manage, monitor, control, and route instructions to various PyGrid Nodes.
  • Node - A Flask-based application used to store private data and models for federated learning, as well as to issue instructions to various PyGrid Workers.
  • Worker - An ephemeral instance, managed by a PyGrid Node, that is used to compute data.

Use Cases

Federated Learning

Simply put, federated learning is machine learning where the data and the model are initially located in two different locations. The model must travel to the data in order for training to take place in a privacy-preserving manner. Depending on what you're looking to accomplish, there are two types of federated learning that you can perform with the help of PyGrid.

Model-centric FL

Model-centric FL is when the model is hosted in PyGrid. This is really useful when you have data located at an "edge device" like a person's mobile phone or web browser. Since the data is private, we should respect that and leave it on the device. The following workflow will take place:

  1. The device will request to train a model
  2. The model and a training plan may be sent to that device
  3. The training will take place with private data on the device itself
  4. Once training is completed, a "diff" is generated between the new and the original state of the model
  5. The diff is reported back to PyGrid and it's averaged into the model

This can take place with hundreds or thousands of devices simultaneously. For model-centric federated learning, you only need to run a Node; Networks and Workers are irrelevant for this specific use-case.

Note: For posterity's sake, we previously referred to this process as "static federated learning".
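The "diff" in steps 4 and 5 above is just a parameter delta that gets averaged into the hosted model. The following is a rough, hypothetical illustration of that idea only, using NumPy arrays in place of real model tensors and assuming a diff = trained - original sign convention (PyGrid's actual internals may differ):

import numpy as np

# Hypothetical illustration only: a flat NumPy array stands in for the model's parameters.
original = np.array([0.10, -0.25, 0.40])

# Steps 3-4: each device trains locally and produces a diff against the original model.
trained_a = np.array([0.12, -0.20, 0.38])
trained_b = np.array([0.09, -0.27, 0.43])
diff_a = trained_a - original
diff_b = trained_b - original

# Step 5: the reported diffs are averaged into the hosted model.
updated = original + np.mean([diff_a, diff_b], axis=0)
print(updated)  # the new global model parameters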

Cycled MCFL

Data-centric FL

To view the current roadmap for data-centric FL, please click here.

Data-centric FL is the same problem as model-centric FL, but from the opposite perspective. The most likely scenario for data-centric FL is where a person or organization has data they want to protect in PyGrid (instead of hosting the model, they host data). This would allow a data scientist who is not the data owner, to make requests for training or inference against that data. The following workflow will take place:

  1. A data scientist searches for data they would like to train on (they can search either an individual Node, or a Network of Nodes)
  2. Once the data has been found, they may write a training plan and optionally pre-train a model
  3. The training plan and model are sent to the PyGrid Node in the form of a job request
  4. The PyGrid Node will gather the appropriate data from its database and send the data, the model, and the training plan to a Worker for processing
  5. The Worker performs the plan on the model using the data
  6. The result is returned to the Node
  7. The result is returned to the data scientist

For the last step, we're working on adding privacy budget tracking, which will allow a data owner to "sign off" on whether or not a trained model should be released.

Note: For posterity's sake, we previously referred to this process as "dynamic federated learning".
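To make the workflow above more concrete, here is a rough, hypothetical sketch of what a data scientist's job request might look like. The endpoint path and field names below are illustrative assumptions, not PyGrid's actual API:

import requests

NODE_URL = "http://alice:5000"  # assumed address of a PyGrid Node

# Hypothetical job request (step 3): the path and field names are illustrative only.
job_request = {
    "data_tags": ["#mnist", "#images"],           # what data to train on (step 1)
    "training_plan": "SERIALIZED TRAINING PLAN",  # written by the data scientist (step 2)
    "model": "SERIALIZED, OPTIONALLY PRE-TRAINED MODEL",
}

# Steps 4-7: the Node gathers the matching data, hands everything to a Worker,
# and eventually returns the result to the data scientist.
response = requests.post(f"{NODE_URL}/data-centric/job-request", json=job_request)
print(response.status_code)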

Node-only data-centric FL

Technically speaking, it isn't required to run a Network when performing data-centric federated learning. Alternatively, as a data owner, you may opt to only run a Node, but participate in a Network hosted by someone else. The Network host will not have access to your data.

Node-only DCFL

Network-based data-centric FL

Many times you will want to use a Network to allow multiple Nodes to be connected together. As a data owner, it's not strictly necessary to own and operate multiple Nodes. PyGrid doesn't prescribe one way to organize Nodes and Networks, but we expose both applications so that you and the relevant stakeholders can make the right decisions about your infrastructure needs.

Network-based DCFL

Getting started

Currently, we suggest two ways to run PyGrid locally: with Docker, or manually from source. With Docker, we can organize all the services we'd like to use and then boot them all with one command. When running manually from source, each service has to be started as a separate task.

Docker

To install Docker, just follow the docker documentation.

1. Set up your hosts file

Before starting the grid platform locally using Docker, we need to set up the domain names used by the bridge network. In order to use these nodes from outside of the container context, you should add the following domain names to your /etc/hosts file:

127.0.0.1 network
127.0.0.1 alice
127.0.0.1 bob
127.0.0.1 charlie
127.0.0.1 dan

Note that you're not restricted to running 4 nodes and a network. You could instead run just a single node if you'd like - this is often all you need for model-centric federated learning. For the sake of our example, we'll use the network running 4 nodes underneath but you're welcome to modify it to your needs.

2. Run Docker Images

The latest PyGrid Network and Node images are also available on Docker Hub.

To set up and start the PyGrid platform, you just need to start the docker-compose process:

$ docker-compose up

This will download the latest OpenMined Docker images and start a grid platform with a network and 4 nodes. You can modify this setup by changing the docker-compose.yml file.

3. Optional - Build your own images

If you want to build your own custom images, you may do so using the following command for the Node:

docker build ./apps/node --file ./apps/node/Dockerfile --tag openmined/grid-node:mybuildname

Or for the Network:

docker build ./apps/network --file ./apps/network/Dockerfile --tag openmined/grid-network:mybuildname

Manual Start

Running a Node

Installation

First, install poetry and run poetry install in apps/node.

To start the PyGrid Node manually, run:

cd apps/node
./run.sh --id bob --port 5000 --start_local_db

You can pass the arguments or use environment variables to set the Node configs.

Arguments

  • -h, --help - Shows the help message and exit
  • -p [PORT], --port [PORT] - Port to run server on (default: 5000)
  • --host [HOST] - The Node host
  • --num_replicas [NUM] - The number of replicas to provide fault tolerance to model hosting
  • --id [ID] - The ID of the Node
  • --start_local_db - If this flag is used, a SQLAlchemy DB URI is generated to use a local database

Environment Variables

  • GRID_NODE_PORT - Port to run server on
  • GRID_NODE_HOST - The Node host
  • NUM_REPLICAS - Number of replicas to provide fault tolerance to model hosting
  • DATABASE_URL - The Node database URL
  • SECRET_KEY - The secret key

Running a Network

To start the PyGrid Network manually, run:

cd apps/network
./run.sh --port 7000 --start_local_db

You can pass the arguments or use environment variables to set the network configs.

Arguments

  • -h, --help - Shows the help message and exit
  • -p [PORT], --port [PORT] - Port to run server on (default: 7000)
  • --host [HOST] - The Network host
  • --start_local_db - If this flag is used, a SQLAlchemy DB URI is generated to use a local database

Environment Variables

  • GRID_NETWORK_PORT - Port to run server on
  • GRID_NETWORK_HOST - The Network host
  • DATABASE_URL - The Network database URL
  • SECRET_KEY - The secret key

PyGrid CLI

The OpenMined PyGrid CLI is used for infrastructure management: deploying the various PyGrid components to cloud providers (AWS, GCP, Azure).

To get started, install the CLI first through this command:

pip install -e .

Running CLI

Install Terraform

Check Instructions here: https://learn.hashicorp.com/tutorials/terraform/install-cli

Deploy a Node to AWS

pygrid deploy --provider aws --app node

Deploy a Network to Azure

pygrid deploy --provider azure --app network

Contributing

If you're interested in contributing, check out our Contributor Guidelines.

Support

For support in using this library, please join the #lib_pygrid Slack channel. If you’d like to follow along with any code changes to the library, please join the #code_pygrid Slack channel. Click here to join our Slack community!

License

Apache License 2.0

Comments
  • Add API worker websocket and HTTP endpoints for FL to PyGrid

    Add API worker websocket and HTTP endpoints for FL to PyGrid

    The various worker libraries will need to communicate with PyGrid according to an API that's defined in PyGrid. I currently believe that we should aim to support both Websocket messages as well as HTTPS endpoints to accomplish this - hopefully with this philosophy becoming a standard of PyGrid.

    All socket calls should follow the format of:

    {
      "type": "the type of the message",
      "data": {}
    }
    

    I'd like for the following endpoints to be added:

    Authentication with PyGrid

    Method in worker library:

    const worker = new syft({
      url: 'https://localhost:3000',
      auth_token: MY_AUTH_TOKEN
    });
    

    HTTP endpoint: POST /federated/authenticate
    Socket "type": federated/authenticate
    Request data:

    {
      "auth_token": "MY_AUTH_TOKEN"
    }
    

    Note that auth_token supplied above is an optional argument depending on the setup of PyGrid.

    This endpoint is where the worker library (syft.js, KotlinSyft, or SwiftSyft) is attempting to authenticate as a worker with PyGrid.

    In order to guarantee the identity of a worker, it's important to have some sort of authentication workflow present. While this isn't strictly required, it will prove an important mechanism in our federated learning workflow for preventing a variety of attacks, most notably a "Sybil attack". This would happen when a worker generates multiple versions of itself, thus steering all model training to be done by the same worker on the same data, but with unique worker IDs - which would overfit the model. To prevent this, we strongly suggest that every deployment of PyGrid's FL system implement some sort of OAuth 2.0 protocol.

    In this circumstance, a worker would be logged in to their application via OAuth and would be given an authentication token with which to make secure web requests inside the app. Assuming that PyGrid has also been set up to include this same OAuth mechanism, a worker could forward this auth_token to PyGrid, which then validates that token as an actual user with the same OAuth provider. This is important because it avoids building our own authentication system into PyGrid, and instead delegates this responsibility to a third-party system.

    In the event that the administrator of the PyGrid gateway does not want to add OAuth support, or there is no login capability within the web or mobile app the worker is running on, then this authentication process is skipped and a worker_id is assigned anyway. This is insecure and open to attacks - it's not recommended, but supporting it is required as part of our system.

    There are three possible responses, one success and two error responses:

    Success - triggered when there is no OAuth flow required by PyGrid, OR when there is a required OAuth flow in PyGrid and the auth_token sent by the worker is validated as an actual user by the third-party provider

    {
      "worker_id": "ID OF THE WORKER"
    }
    

    Error - triggered when there is an OAuth flow required by PyGrid and no auth_token is sent

    {
      "error": "Authentication is required, please pass an 'auth_token'."
    }
    

    Error - triggered when there is an OAuth flow required by PyGrid and the auth_token that was sent is invalid

    {
      "error": "The 'auth_token' that you passed is invalid."
    }
    

    The success response will include a worker_id which should be cached for long-term use. This will be passed with all subsequent calls to PyGrid.
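    For illustration, here is a minimal Python sketch of this call using requests; the base URL is an assumption, while the endpoint and payload are as specified above:

    import requests

    PYGRID_URL = "http://localhost:5000"  # assumed address of the PyGrid instance

    # POST /federated/authenticate with an (optional) auth_token.
    response = requests.post(
        f"{PYGRID_URL}/federated/authenticate",
        json={"auth_token": "MY_AUTH_TOKEN"},
    )
    body = response.json()

    if "error" in body:
        raise RuntimeError(body["error"])

    # Cache the worker_id; it is passed with all subsequent calls to PyGrid.
    worker_id = body["worker_id"]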

    Connection Speed Test

    Method in worker library: job.start()
    HTTP endpoint: GET /federated/speed-test and POST /federated/speed-test
    Socket "type": N/A
    Query string: ?random=RANDOM HASH VALUE&worker_id=ID OF THE WORKER

    This endpoint is HTTP only.

    We need some way of getting a reliable average upload and download speed for a worker in order to potentially qualify them for joining an FL worker cycle. In order to do this, we need two endpoints at the same location: a GET route for testing worker download speed and a POST route for testing worker upload speed. In each route, a random query string value must be appended onto the end of the request to prevent the server or the worker from caching the result after multiple rounds.

    When performing the download speed test, PyGrid will generate a random file of a certain size (to be determined) which the worker may download. The time it takes the worker to download will be captured by the worker and stored.

    When performing the upload speed test, the worker will generate a random file of a certain size (to be determined) which will be uploaded to PyGrid (and then discarded). The time it takes the worker to upload will also be captured by the worker and stored.

    Note: The above is merely a proposal of how this workflow should work. The real-world solution should be determined and this document will be modified to fit the best solution we come up with. This paradigm should be heavily tested against real-world connection speed tests to ensure a reliable result. @Prtfw please do some extra research on this to cover our bases.
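    Since this flow is still a proposal, the following is only a rough sketch of how a worker might time the two routes; the base URL, payload size, and Mbps arithmetic are assumptions:

    import secrets
    import time

    import requests

    PYGRID_URL = "http://localhost:5000"  # assumed address of the PyGrid instance
    worker_id = "ID OF THE WORKER"

    # Random query value so that neither side serves a cached result.
    params = {"random": secrets.token_hex(16), "worker_id": worker_id}

    # Download speed: time a GET of the server-generated random file.
    start = time.monotonic()
    download = requests.get(f"{PYGRID_URL}/federated/speed-test", params=params)
    download_mbps = len(download.content) * 8 / 1e6 / (time.monotonic() - start)

    # Upload speed: time a POST of a random payload (1 MB chosen arbitrarily here).
    payload = secrets.token_bytes(1_000_000)
    start = time.monotonic()
    requests.post(f"{PYGRID_URL}/federated/speed-test", params=params, data=payload)
    upload_mbps = len(payload) * 8 / 1e6 / (time.monotonic() - start)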

    FL Worker Cycle Request

    Method in worker library: Also part of job.start() behind the scenes
    HTTP endpoint: POST /federated/cycle-request
    Socket "type": federated/cycle-request
    Request data:

    {
      "worker_id": "ID OF THE WORKER",
      "model": "my-federated-model",
      "version": "0.1.0",
      "ping": "8ms",
      "download": "46.3mbps",
      "upload": "23.7mbps"
    }
    

    Note that version supplied above is an optional argument.

    This endpoint is where the worker library (syft.js, KotlinSyft, or SwiftSyft) attempts to join an active federated learning cycle. PyGrid decides whether to accept or reject the worker depending on the current state of the cycle, the speed of the worker's connection, and how many workers have already been chosen.

    Given this information, PyGrid will send one of two responses:

    Rejection

    {
      "status": "rejected",
      "timeout": 2700,
      "model": "my-federated-model",
      "version": "0.1.0"
    }
    

    This means that the worker was rejected from the current cycle and asked to request to join another cycle in 2700 seconds. The number of seconds will depend on when the next cycle is expected to start. If a timeout is not sent, this means that it's the last cycle and there will not be another one to join.

    Accepted

    {
      "status": "accepted",
      "model": "my-federated-model",
      "version": "0.1.0",
      "request_key": "LONG HASH VALUE",
      "plans": { "training_plan": "ID OF THE TRAINING PLAN", "another_plan": "ID OF ANOTHER PLAN" },
      "client_config": "CLIENT CONFIG OBJECT",
      "protocols": { "secure_agg_protocol": "ID OF THE PROTOCOL" },
      "model_id": "ID OF THE MODEL"
    }
    

    In the event that the worker is accepted into the current cycle, they will be sent a named list of the IDs of the various plans they need to execute, a named list of the IDs of the various protocols they need to execute, the ID of the model, and the client config. The plans, protocols, and model will not be downloaded in this response. Instead, the worker will need to make an additional request to receive them (due to the size constraints of the response). They will pass the request_key given above as a form of "authenticating" the download request. This key is specific to the relationship between the worker AND the cycle and cannot be reused for future cycles or other workers. This is detailed in the "Plan Download" section below.

    Note that it is not possible for a worker to participate in the same cycle multiple times. The client creates a "job" request. If they are accepted, they should not be allowed to submit another job request for the same cycle.
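    As a sketch, a worker might issue the cycle request and branch on the two documented responses like this (the base URL is an assumption):

    import requests

    PYGRID_URL = "http://localhost:5000"  # assumed address of the PyGrid instance

    cycle_request = {
        "worker_id": "ID OF THE WORKER",
        "model": "my-federated-model",
        "version": "0.1.0",  # optional
        "ping": "8ms",
        "download": "46.3mbps",
        "upload": "23.7mbps",
    }

    response = requests.post(
        f"{PYGRID_URL}/federated/cycle-request", json=cycle_request
    ).json()

    if response["status"] == "accepted":
        # The request_key authenticates the follow-up plan/protocol/model downloads.
        request_key = response["request_key"]
        plan_ids = response["plans"]
        model_id = response["model_id"]
    else:
        # Rejected: retry after `timeout` seconds; no timeout means there are no further cycles.
        retry_in_seconds = response.get("timeout")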

    Plan Download

    Method in worker library: Also part of job.start() behind the scenes
    HTTP endpoint: GET /federated/get-plan
    Socket "type": N/A
    Query string: ?worker_id=ID OF THE WORKER&request_key=LONG HASH VALUE&plan_id=ID OF THE PLAN&receive_operations_as=list

    This endpoint is HTTP only.

    This method will allow a worker that has been accepted into a cycle to request the download of a plan from PyGrid. They need to submit their request_key provided in the cycle request call above. This provides an extra means of authentication for PyGrid to ensure it's sending data to the right worker.

    The worker also needs to specify how it would like to receive plans: either as a list of operations ("list") or as TorchScript ("torchscript"), depending on the type of worker making the request (https://github.com/OpenMined/PyGrid/issues/437). This is set via the receive_operations_as key of the request data.

    Response: This downloads the plan to the worker.
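    A minimal sketch of the download call (base URL assumed; query parameters as documented above):

    import requests

    PYGRID_URL = "http://localhost:5000"  # assumed address of the PyGrid instance

    params = {
        "worker_id": "ID OF THE WORKER",
        "request_key": "LONG HASH VALUE",      # from the cycle-request response
        "plan_id": "ID OF THE TRAINING PLAN",
        "receive_operations_as": "list",       # or "torchscript"
    }

    plan_bytes = requests.get(f"{PYGRID_URL}/federated/get-plan", params=params).content

    # /federated/get-protocol and /federated/get-model below follow the same pattern,
    # with protocol_id / model_id in place of plan_id (and no receive_operations_as).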

    Protocol Download

    Method in worker library: Also part of job.start() behind the scenes
    HTTP endpoint: GET /federated/get-protocol
    Socket "type": N/A
    Query string: ?worker_id=ID OF THE WORKER&request_key=LONG HASH VALUE&protocol_id=ID OF THE PROTOCOL

    This endpoint is HTTP only.

    This method will allow a worker that has been accepted into a cycle to request the download of a protocol from PyGrid. They need to submit their request_key provided in the cycle request call above. This provides an extra means of authentication for PyGrid to ensure it's sending data to the right worker.

    Response: This downloads the protocol to the worker.

    Model Download

    Method in worker library: Also part of job.start() behind the scenes
    HTTP endpoint: GET /federated/get-model
    Socket "type": N/A
    Query string: ?worker_id=ID OF THE WORKER&request_key=LONG HASH VALUE&model_id=ID OF THE MODEL

    This endpoint is HTTP only.

    This method will allow a worker that has been accepted into a cycle to request the download of a model from PyGrid. They need to submit their request_key provided in the cycle request call above. This provides an extra means of authentication for PyGrid to ensure it's sending data to the right worker.

    Response: This downloads the model to the worker.

    Report

    Method in worker library: job.report()
    HTTP endpoint: POST /federated/report
    Socket "type": federated/report
    Request data:

    {
      "worker_id": "ID OF THE WORKER",
      "request_key": "LONG HASH VALUE",
      "diff": "FINAL MODEL DIFF FROM TRAINING"
    }
    

    This method will allow a worker that has been accepted into a cycle and finished training a model on their device to upload the resulting model diff.

    If the worker is not executing a protocol after the plan(s) have been run, then they will simply submit their entire model diff. If they want to manually add noise to this diff as a layer of protection, they may do so at the developer's discretion from inside the worker implementation.

    If the worker did execute a protocol and has finished the secure aggregation protocol with other workers, they will now hold a share of the resulting securely aggregated model diff. In this case, they will submit that share, rather than their original model diff. PyGrid will handle the decryption of the shares once they're all submitted.

    Response: { "status": "success" }

    A success response is sent if the status code is 200. The worker should not be informed whether the model diff was accepted or denied as part of the global model update.
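    As a sketch, the reporting call might look like this (base URL assumed; payload as documented above):

    import requests

    PYGRID_URL = "http://localhost:5000"  # assumed address of the PyGrid instance

    report = {
        "worker_id": "ID OF THE WORKER",
        "request_key": "LONG HASH VALUE",
        "diff": "FINAL MODEL DIFF FROM TRAINING",  # or a secure-aggregation share
    }

    response = requests.post(f"{PYGRID_URL}/federated/report", json=report)
    assert response.json() == {"status": "success"}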

    Type: Epic :call_me_hand: 
    opened by cereallarceny 22
  • Implementing 'fit' for Torch

    Implementing 'fit' for Torch

    We need to make a decision on whether utils will carry serialization/deserialization or pubsub will carry it.

    TODO

    • [ ] Create python Notebook to test Torch Integration
    work-in-progress 
    opened by mjvypr 16
  • Jupyter Notebook examples and tutorials do not work with docker-compose

    Jupyter Notebook examples and tutorials do not work with docker-compose

    Describe the bug: I launched the gateway and nodes with the provided docker-compose and tried to connect with a locally running JupyterLab as well as with a JupyterLab running inside the docker-compose. In both cases I cannot successfully execute the examples from two different sources: pygrid in pysyft (here Part 1 worked and the nodes seem to know each other) nor the pygrid examples.

    I also tried changing localhost to gateway, alice, bob, etc.; however, this did not change anything. Before I launched the docker-compose I added all the hosts to the /etc/hosts file:

    127.0.0.1       gateway
    127.0.0.1       bob
    127.0.0.1       alice
    127.0.0.1       bill
    127.0.0.1       james
    

    In the local JupyterLab as well as in the JupyterLab running in the Docker container, I get the following error:

    Websocket connection closed (worker: Bob)
    Created new websocket connection
    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-5-12b0ebb6e5f3> in <module>
          7 and storing a pointer plan that manages all remote references.
          8 '''
    ----> 9 cloud_grid_service.serve_model(model,id=model.id,allow_remote_inference=True, mpc=True) # If mpc flag is False, It will host a unencrypted model.
    
    /opt/conda/lib/python3.7/site-packages/syft/grid/public_grid.py in serve_model(self, model, id, mpc, allow_remote_inference, allow_download, n_replica)
         64             self._serve_unencrypted_model(model, id, allow_remote_inference, allow_download)
         65         else:
    ---> 66             self._serve_encrypted_model(model)
         67 
         68     def query_model_hosts(
    
    /opt/conda/lib/python3.7/site-packages/syft/grid/public_grid.py in _serve_encrypted_model(self, model)
        162 
        163                     # SMPC Share
    --> 164                     model.fix_precision().share(*smpc_workers, crypto_provider=crypto_provider)
        165 
        166                     # Host model
    
    /opt/conda/lib/python3.7/site-packages/syft/execution/plan.py in share_(self, *args, **kwargs)
        526 
        527     def share_(self, *args, **kwargs):
    --> 528         self.state.share_(*args, **kwargs)
        529         return self
        530 
    
    /opt/conda/lib/python3.7/site-packages/syft/execution/state.py in share_(self, *args, **kwargs)
        100         for tensor in self.tensors():
        101             self.create_grad_if_missing(tensor)
    --> 102             tensor.share_(*args, **kwargs)
        103 
        104     def get_(self):
    
    /opt/conda/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/native.py in share_(self, *args, **kwargs)
        896                 kwargs["requires_grad"] = False
        897 
    --> 898             shared_tensor = self.child.share_(*args, **kwargs)
        899 
        900             if requires_grad and not isinstance(shared_tensor, syft.PointerTensor):
    
    /opt/conda/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/precision.py in share_(self, *args, **kwargs)
        929         contrary to the classic share version version
        930         """
    --> 931         self.child = self.child.share_(*args, no_wrap=True, **kwargs)
        932         return self
        933 
    
    /opt/conda/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/native.py in share_(self, *args, **kwargs)
        904             return self
        905         else:
    --> 906             return self.share(*args, **kwargs)  # TODO change to inplace
        907 
        908     def combine(self, *pointers):
    
    /opt/conda/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/native.py in share(self, field, crypto_provider, requires_grad, no_wrap, *owners)
        875                 )
        876                 .on(self.copy(), wrap=False)
    --> 877                 .init_shares(*owners)
        878             )
        879 
    
    /opt/conda/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/additive_shared.py in init_shares(self, *owners)
        198         shares_dict = {}
        199         for share, owner in zip(shares, owners):
    --> 200             share_ptr = share.send(owner, **no_wrap)
        201             shares_dict[share_ptr.location.id] = share_ptr
        202 
    
    /opt/conda/lib/python3.7/site-packages/syft/frameworks/torch/tensors/interpreters/native.py in send(self, inplace, user, local_autograd, preinitialize_grad, no_wrap, garbage_collect_data, *location)
        417                 local_autograd=local_autograd,
        418                 preinitialize_grad=preinitialize_grad,
    --> 419                 garbage_collect_data=garbage_collect_data,
        420             )
        421 
    
    /opt/conda/lib/python3.7/site-packages/syft/workers/base.py in send(self, obj, workers, ptr_id, garbage_collect_data, create_pointer, **kwargs)
        385 
        386         # Send the object
    --> 387         self.send_obj(obj, worker)
        388 
        389         # If we don't need to create the pointer
    
    /opt/conda/lib/python3.7/site-packages/syft/workers/base.py in send_obj(self, obj, location)
        678                 receive the object.
        679         """
    --> 680         return self.send_msg(ObjectMessage(obj), location)
        681 
        682     def request_obj(
    
    /opt/conda/lib/python3.7/site-packages/syft/workers/base.py in send_msg(self, message, location)
        285 
        286         # Step 2: send the message and wait for a response
    --> 287         bin_response = self._send_msg(bin_message, location)
        288 
        289         # Step 3: deserialize the response
    
    /opt/conda/lib/python3.7/site-packages/syft/workers/virtual.py in _send_msg(self, message, location)
         13             sleep(self.message_pending_time)
         14 
    ---> 15         return location._recv_msg(message)
         16 
         17     def _recv_msg(self, message: bin) -> bin:
    
    /opt/conda/lib/python3.7/site-packages/syft/workers/websocket_client.py in _recv_msg(self, message)
        103             if not self.ws.connected:
        104                 raise RuntimeError(
    --> 105                     "Websocket connection closed and creation of new connection failed."
        106                 )
        107         return response
    
    RuntimeError: Websocket connection closed and creation of new connection failed.
    

    In the docker-compose log I get the following; however, I'm not sure if the error occurs at the same time:

    bob_1      | Traceback (most recent call last):
    bob_1      |   File "/usr/local/lib/python3.7/site-packages/gevent/pywsgi.py", line 976, in handle_one_response
    bob_1      |     self.run_application()
    bob_1      |   File "/usr/local/lib/python3.7/site-packages/geventwebsocket/handler.py", line 75, in run_application
    bob_1      |     self.run_websocket()
    bob_1      |   File "/usr/local/lib/python3.7/site-packages/geventwebsocket/handler.py", line 52, in run_websocket
    bob_1      |     list(self.application(self.environ, lambda s, h, e=None: []))
    bob_1      |   File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2463, in __call__
    bob_1      |     return self.wsgi_app(environ, start_response)
    bob_1      |   File "/usr/local/lib/python3.7/site-packages/flask_sockets.py", line 45, in __call__
    bob_1      |     handler(environment, **values)
    bob_1      |   File "/app/app/main/events/__init__.py", line 57, in socket_api
    bob_1      |     response = route_requests(message)
    bob_1      |   File "/app/app/main/events/__init__.py", line 37, in route_requests
    bob_1      |     return forward_binary_message(message)
    bob_1      |   File "/app/app/main/auth/__init__.py", line 60, in wrapped
    bob_1      |     return f(*args, **kwargs)
    bob_1      |   File "/app/app/main/events/syft_events.py", line 27, in forward_binary_message
    bob_1      |     decoded_response = current_user.worker._recv_msg(message)
    bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/workers/virtual.py", line 19, in _recv_msg
    bob_1      |     return self.recv_msg(message)
    bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/workers/base.py", line 314, in recv_msg
    bob_1      |     msg = sy.serde.deserialize(bin_message, worker=self)
    bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/serde/serde.py", line 69, in deserialize
    bob_1      |     return strategy(binary, worker)
    bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/serde/msgpack/serde.py", line 381, in deserialize
    bob_1      |     return _deserialize_msgpack_simple(simple_objects, worker)
    bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/serde/msgpack/serde.py", line 372, in _deserialize_msgpack_simple
    bob_1      |     return _detail(worker, simple_objects)
    bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/serde/msgpack/serde.py", line 472, in _detail
    bob_1      |     return detailers[obj[0]](worker, obj[1], **kwargs)
    bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/messaging/message.py", line 252, in detail
    bob_1      |     return ObjectMessage(sy.serde.msgpack.serde._detail(worker, msg_tuple[0]))
    bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/serde/msgpack/serde.py", line 472, in _detail
    bob_1      |     return detailers[obj[0]](worker, obj[1], **kwargs)
    bob_1      |   File "/usr/local/lib/python3.7/site-packages/syft/serde/msgpack/torch_serde.py", line 192, in _detail_torch_tensor
    bob_1      |     ) = tensor_tuple
    bob_1      | ValueError: not enough values to unpack (expected 9, got 7)
    bob_1      | 2020-04-18T16:19:18Z {'REMOTE_ADDR': '172.22.0.8', 'REMOTE_PORT': '60776', 'HTTP_HOST': 'bob:3000', (hidden keys: 26)} failed with ValueError
    

    Here is the docker-compose file I used:

    version: '3'
    services:
        gateway:
            image: openmined/grid-gateway:latest
            build: .
            environment:
                - PORT=5000
                - SECRET_KEY=ineedtoputasecrethere
                - DATABASE_URL=sqlite:///databasegateway.db
            ports:
            - 5000:5000
        redis:
            image: redis:latest
            volumes:
                - ./redis-data:/data
            expose:
            - 6379
            ports:
            - 6379:6379
        jupyter:
            image: openmined/pysyft-notebook
            environment:
                - WORKSPACE_DIR=/root
            volumes:
                - .:/root
            depends_on:
                - "gateway"
                - "redis"
                - "bob"
                - "alice"
                - "bill"
                - "james"
            entrypoint: ["jupyter", "notebook", "--allow-root", "--ip=0.0.0.0", "--port=8888", "--notebook-dir=/root"]
            expose:
            - 8888
            ports:
            - 8888:8888
        bob:
            image: openmined/grid-node:latest
            environment:
                - GRID_NETWORK_URL=http://gateway:5000
                - ID=Bob
                - ADDRESS=http://bob:3000/
                - REDISCLOUD_URL=redis://redis:6379
                - PORT=3000
            depends_on:
                - "gateway"
                - "redis"
            expose:
                - 3000
            ports:
            - 3000:3000
        alice:
            image: openmined/grid-node:latest
            environment:
                - GRID_NETWORK_URL=http://gateway:5000
                - ID=Alice
                - ADDRESS=http://alice:3001/
                - REDISCLOUD_URL=redis://redis:6379
                - PORT=3001
            depends_on:
                - "gateway"
                - "redis"
            expose:
                - 3001
            ports:
            - 3001:3001
        bill:
            image: openmined/grid-node:latest
            environment:
                - GRID_NETWORK_URL=http://gateway:5000
                - ID=Bill
                - ADDRESS=http://bill:3002/
                - REDISCLOUD_URL=redis://redis:6379
                - PORT=3002
            depends_on:
                - "gateway"
                - "redis"
            expose:
                - 3002
            ports:
            - 3002:3002
        james:
            image: openmined/grid-node:latest
            environment:
                - GRID_NETWORK_URL=http://gateway:5000
                - ID=James
                - ADDRESS=http://james:3003/
                - REDISCLOUD_URL=redis://redis:6379
                - PORT=3003
            depends_on:
                - "gateway"
                - "redis"
            expose:
                - 3003
            ports:
            - 3003:3003
    

    To Reproduce Steps to reproduce the behavior:

    1. Run the docker-compose file
    2. Launch a JupyterLab-notebook locally
    3. Test the different tutorials

    Expected behavior: Successful execution of the Jupyter notebooks.

    Desktop (please complete the following information):

    • OS: Ubuntu 18.04

    Additional context In the future we want to transfer the whole docker-compose setup to Kubernetes

    opened by KadKla 15
  • setup configuration

    setup configuration

    Setup configuration

    Description

    It is recommended to have a setup.py for proper versioning during development. It is related to #542.

    Fixes #542

    Type of change

    Please mark options that are relevant.

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [x] This change requires a documentation update

    How Has This Been Tested?

    The existing tests should run in the same way. The setup.py should generate a package.

    Checklist:

    • [x] I did follow the contribution guidelines
    • [ ] New Unit tests added
    • [x] Unit tests pass locally with my changes

    Progress:

    I have followed the same guidelines from PySyft repository

    • [x] File setup.py with description
    • [x] Update of:
      • [x] tests. Now the tests use the grid package: import grid.app
      • [x] Github actions. New line running python setup.py install
      • [x] Makefile. Update of make venv and make test in order to catch all requirements and run the test using coverage
    • [x] Update documentation about how to install and run
    opened by jmaunon 14
  • Tests for JWT in static FL

    Tests for JWT in static FL

    Is your feature request related to a problem? Please describe. the tests needed are spec-ed out here: https://github.com/OpenMined/PyGrid/pull/539/files#diff-467d6d48965918b3a2013825250e9924R331

    Describe the solution you'd like Please see the above

    Describe alternatives you've considered we must write tests... no alternatives

    Additional context Please contact @Prtfw @hericlesme or @IonesioJunior if you get blocked or need help... (on slack)

    Good first issue :mortar_board: Type: Testing :test_tube: Priority: 2 - High :cold_sweat: Severity: 4 - Low :sunglasses: Status: Available :wave: 
    opened by Prtfw 14
  • Empty search results

    Empty search results

    Question

    After launching the docker-compose and sending a tensor to a node, I still cannot find the sent tensor when searching.

    Further Information

    I use two PCs. One PC's IP is 192.168.51.189 and it is launched with sudo docker-compose up at PyGrid commit sha 996b6ab. The other PC's IP is 192.168.51.177 and its environment is the Docker image openmined/pysyft-notebook. I run the code on the second PC as shown in the attached screenshot. Is there a step I missed?

    Thanks very much for helping!

    Type: Question :grey_question: 
    opened by b02202050 11
  • are 2 types of grid clients really needed?

    are 2 types of grid clients really needed?

    Currently, we support 2 types of grid clients: GridClients and WebsocketGridClients. Do we really need to support both of these types of clients? Is there a scenario where we want a client to only use HTTP to communicate with other clients?

    If not, I propose two options:

    1. We could make GridClient an abstract class
    2. We could rename WebsocketGridClient to GridClient.
    code quality Type: Discussion :speaker: 
    opened by hereismari 11
  • Grid Tree data adapters are very limited

    Grid Tree data adapters are very limited

    Right now tree mode is limited in what it can do (MNIST).

    When data scientists specify a task, they don't specify what format the data must be in, just what the task should accomplish. Whenever scientists propose an architecture, they specify the input shape, but they don't really specify what format a node has to have the data in.

    In the MNIST demo, we propose an architecture where the first layer takes an input of shape 784 and outputs 10. A node must have data in the directory data/mnist which is specified in the task. However, the file format is completely arbitrary. The demo uses the .npz format, which is common for MNIST, but what do we do for arbitrary tasks?

    Does the data scientist provide an adapter as well as a spec around the data they are speculating on?

    TREE 
    opened by bendecoste 10
  • Repository Reorganization

    Repository Reorganization

    Description

    Due to the need to add CI integration tests between different PyGrid components, we're reorganizing our repository to satisfy the requirements of #611.

    Fix #611

    Checklist

    • [x] Create a proper path structure to network/node/worker apps
      • [x] Move PyGridNode App to PyGrid
      • [x] Move GridNetwork to PyGrid
      • [x] Create GridWorker
    • [x] Split tests into unit tests/integration tests
      • [x] Update tests references
      • [x] Split current tests between events/routes/database
      • [x] Update github actions configs
    • [x] Manage dependencies
      • [x] Manage dependencies using poetry
      • [x] Update github actions
    • [x] Remove outdated config files
    • [x] Update release settings
    • [x] Update Dockerfile references
    • [X] I have followed the Contribution Guidelines and Code of Conduct
    • [X] I have commented on my code following the OpenMined Styleguide
    • [X] I have labeled this PR with the relevant Type labels
    • [x] My changes are covered by tests
    Type: Epic :call_me_hand: 
    opened by IonesioJunior 9
  • [Notebook] Federated Word Vectors

    [Notebook] Federated Word Vectors

    Reproduce Federated Word Vectors using PyGrid Architecture.

    Acceptance criteria:

    • Keep the same structure used at Federated Word Vectors.
    • Keep the author's name and add your name.
    • Create a notebook to register dataset tensors with tags.
    • Create a different notebook using PyGrid network to search datasets by tags.
    • Perform Federated training using the found pointers.
    Good first issue :mortar_board: PyGrid stale 
    opened by IonesioJunior 9
  • Create a better Readme

    Create a better Readme

    opened by cereallarceny 8
  • Bump certifi from 2020.12.5 to 2022.12.7 in /apps/network

    Bump certifi from 2020.12.5 to 2022.12.7 in /apps/network

    Bumps certifi from 2020.12.5 to 2022.12.7.

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump pillow from 8.2.0 to 9.3.0 in /apps/domain

    Bump pillow from 8.2.0 to 9.3.0 in /apps/domain

    Bumps pillow from 8.2.0 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits


    dependencies 
    opened by dependabot[bot] 0
  • Bump pillow from 8.2.0 to 9.3.0 in /apps/worker

    Bump pillow from 8.2.0 to 9.3.0 in /apps/worker

    Bumps pillow from 8.2.0 to 9.3.0.


    dependencies 
    opened by dependabot[bot] 0
  • Bump pillow from 8.1.2 to 9.3.0 in /apps/network

    Bump pillow from 8.1.2 to 9.3.0 in /apps/network

    Bumps pillow from 8.1.2 to 9.3.0.


    dependencies 
    opened by dependabot[bot] 0
  • Bump protobuf from 3.15.8 to 3.18.3 in /apps/worker

    Bump protobuf from 3.15.8 to 3.18.3 in /apps/worker

    Bumps protobuf from 3.15.8 to 3.18.3.

    Release notes

    Sourced from protobuf's releases.

    Protocol Buffers v3.18.3

    C++

    Protocol Buffers v3.16.1

    Java

    • Improve performance characteristics of UnknownFieldSet parsing (#9371)

    Protocol Buffers v3.18.2

    Java

    • Improve performance characteristics of UnknownFieldSet parsing (#9371)

    Protocol Buffers v3.18.1

    Python

    • Update setup.py to reflect that we now require at least Python 3.5 (#8989)
    • Performance fix for DynamicMessage: force GetRaw() to be inlined (#9023)

    Ruby

    • Update ruby_generator.cc to allow proto2 imports in proto3 (#9003)

    Protocol Buffers v3.18.0

    C++

    • Fix warnings raised by clang 11 (#8664)
    • Make StringPiece constructible from std::string_view (#8707)
    • Add missing capability attributes for LLVM 12 (#8714)
    • Stop using std::iterator (deprecated in C++17). (#8741)
    • Move field_access_listener from libprotobuf-lite to libprotobuf (#8775)
    • Fix #7047 Safely handle setlocale (#8735)
    • Remove deprecated version of SetTotalBytesLimit() (#8794)
    • Support arena allocation of google::protobuf::AnyMetadata (#8758)
    • Fix undefined symbol error around SharedCtor() (#8827)
    • Fix default value of enum(int) in json_util with proto2 (#8835)
    • Better Smaller ByteSizeLong
    • Introduce event filters for inject_field_listener_events
    • Reduce memory usage of DescriptorPool
    • For lazy fields copy serialized form when allowed.
    • Re-introduce the InlinedStringField class
    • v2 access listener
    • Reduce padding in the proto's ExtensionRegistry map.
    • GetExtension performance optimizations
    • Make tracker a static variable rather than call static functions
    • Support extensions in field access listener
    • Annotate MergeFrom for field access listener
    • Fix incomplete types for field access listener
    • Add map_entry/new_map_entry to SpecificField in MessageDifferencer. They record the map items which are different in MessageDifferencer's reporter.
    • Reduce binary size due to fieldless proto messages
    • TextFormat: ParseInfoTree supports getting field end location in addition to start.

    ... (truncated)

    Commits


    dependencies 
    opened by dependabot[bot] 0
  • Bump protobuf from 3.15.6 to 3.18.3 in /apps/network

    Bumps protobuf from 3.15.6 to 3.18.3; a minimal runtime check for the new version floor is sketched after this issue list.


    dependencies 
    opened by dependabot[bot] 0
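
Both protobuf bumps above raise the pinned version to the patched 3.18.3 release. As a quick way to confirm that an environment has actually picked up the new floor, a minimal check along the following lines works; the (3, 18, 3) floor simply mirrors the bumps above, and the snippet assumes a plain "X.Y.Z" version string with no pre-release suffix.

    # Minimal sanity check that the installed protobuf meets the patched floor.
    # The (3, 18, 3) floor mirrors the Dependabot bumps above; this assumes a
    # plain "X.Y.Z" version string (no pre-release suffix).
    from importlib.metadata import version

    MIN_PROTOBUF = (3, 18, 3)

    installed = tuple(int(part) for part in version("protobuf").split(".")[:3])
    if installed < MIN_PROTOBUF:
        raise RuntimeError(f"protobuf {'.'.join(map(str, installed))} is older than 3.18.3")
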
Releases (0.5.0rc1)
  • 0.5.0rc1 (Apr 1, 2021)

    Here's the initial changelog for this release (a short sketch of the data broadcast search flow follows the list):

    • Network
      • Node Association
      • Data Broadcast Search
      • Users
      • Roles
    • Domain
      • Node Association
      • Data-Centric
        • Network Association
        • Users
        • Roles
        • Groups
        • Datasets
        • Tensors
        • Data access requests
        • Worker Deployment
          • AWS
          • Azure
          • GCP
      • Vanity API
        • TenSEAL 0.5.0 support
        • PySyft 0.5.0 support
      • Model-Centric
        • SwiftSyft 0.5.0 compatibility (still in development)
        • KotlinSyft 0.5.0 compatibility
    • Worker
      • PySyft 0.5.0 compatibility
      • TenSEAL 0.5.0 compatibility
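
    As a rough illustration of the data broadcast search listed above, the sketch below posts a tag query to a Network node over HTTP and prints the response. The host, route, and payload shape are assumptions made for illustration only, not the documented PyGrid 0.5.0 API; consult the PyGrid API documentation for the real routes.

        # Illustrative sketch of a data broadcast search against a PyGrid Network.
        # NOTE: the host, route, and payload below are assumptions for illustration,
        # not the documented PyGrid 0.5.0 API.
        import requests

        NETWORK_URL = "http://localhost:7000"  # hypothetical Network address

        def search_datasets(tags):
            """Ask the Network to broadcast a tag search to its associated Domain nodes."""
            response = requests.post(
                f"{NETWORK_URL}/search",     # hypothetical route
                json={"query": list(tags)},  # hypothetical payload shape
                timeout=30,
            )
            response.raise_for_status()
            return response.json()

        # e.g. find Domains advertising MNIST-tagged tensors
        print(search_datasets(["#mnist"]))
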
Owner
OpenMined
We're on a mission to align and incentivise all institutions to only serve the best interests of humanity.