Bittensor - an open, decentralized, peer-to-peer network that functions as a market system for the development of artificial intelligence

Overview

Bittensor



Internet-scale Neural Networks

Discord • Docs • Network • Research • Code

At Bittensor, we are creating an open, decentralized, peer-to-peer network that functions as a market system for the development of artificial intelligence. Our purpose is not only to accelerate the development of AI by creating an environment optimally conducive to its evolution, but to democratize the global production and use of this valuable commodity. Our aim is to disrupt the status quo: a system that is centrally controlled, inefficient and unsustainable. In developing the Bittensor API, we are allowing standalone engineers to monetize their work, gain access to sophisticated machine intelligence models and join our community of creative, forward-thinking individuals. For more info, read our paper.

1. Documentation

https://app.gitbook.com/@opentensor/s/bittensor/

2. Install

There are two ways to install Bittensor:

  1. Through installer (recommended):
$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/opentensor/bittensor/master/scripts/install.sh)"
  2. Through pip (Advanced):
$ pip3 install bittensor
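
Either way, you can verify the installation by importing the package and printing its version:

$ python3 -c "import bittensor; print( bittensor.__version__ )"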

3. Using Bittensor

The following examples showcase how to use the Bittensor API for three separate purposes.

3.1. Client

For users that want to explore what is possible on the Bittensor network.


import bittensor
import torch
wallet = bittensor.wallet().create()
graph = bittensor.metagraph().sync()
representations, _ = bittensor.dendrite( wallet = wallet ).forward_text (
    endpoints = graph.endpoints,
    inputs = "The quick brown fox jumped over the lazy dog"
)
# representations = N tensors with shape (1, 9, 1024)
...
# Distill model.
...
loss.backward() # Accumulate gradients on endpoints.
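
The distillation step above is elided; here is a minimal sketch of one way to fill it in (the student head below is a hypothetical stand-in, not part of the Bittensor API):

import torch

# Average the N peer representations into one joint representation.
joint = torch.mean( torch.stack( representations ), dim = 0 )  # (1, 9, 1024)

# Hypothetical local student head trained to match the network's output.
student = torch.nn.Linear( bittensor.__network_dim__, bittensor.__network_dim__ )
loss = torch.nn.functional.mse_loss( student( joint.detach() ), joint )

# Backward flows through `joint` and accumulates gradients on the endpoints.
loss.backward()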

3.2. Server

For users that want to serve a custom model on the Bittensor network.


import bittensor
import torch
from transformers import BertModel, BertConfig

model = BertModel( BertConfig(vocab_size = bittensor.__vocab_size__, hidden_size = bittensor.__network_dim__) )
optimizer = torch.optim.SGD( [ {"params": model.parameters()} ], lr = 0.01 )

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)

def forward_text( pubkey, inputs_x ):
    # Return the model's last hidden state as the representation served to peers.
    return model( inputs_x.to(device) ).last_hidden_state

def backward_text( pubkey, inputs_x, grads_dy ):
    # Recompute the forward pass and apply the gradients sent by the caller.
    with torch.enable_grad():
        outputs_y = model( inputs_x.to(device) ).last_hidden_state
        torch.autograd.backward (
            tensors = [ outputs_y ],
            grad_tensors = [ grads_dy.to(device) ]
        )
        optimizer.step()
        optimizer.zero_grad()

wallet = bittensor.wallet().create()
axon = bittensor.axon (
    wallet = wallet,
    forward_text = forward_text,
    backward_text = backward_text
).start().subscribe()

3.3. Validator

For users that want to validate the models currently running on the Bittensor network.


import bittensor
import torch

graph = bittensor.metagraph().sync()
dataset = bittensor.dataset()
chain_weights = torch.ones( [graph.n.item()], dtype = torch.float32 )

for batch in dataset.dataloader( 10 ):
    ...
    # Train chain_weights.
    ...
bittensor.subtensor().set_weights (
    weights = chain_weights,
    uids = graph.uids,
    wait_for_inclusion = True,
    wallet = bittensor.wallet(),
)
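
The weight-training step inside the loop above is elided; a minimal sketch of one way to fill it in (the random scores and smoothing factor below are illustrative assumptions, not the validator's actual scoring rule):

# Inside the loop: score each peer (random stand-in for e.g. inverse
# validation loss) and smooth chain_weights toward the normalized scores.
scores = torch.rand( graph.n.item() )
chain_weights = 0.9 * chain_weights + 0.1 * torch.nn.functional.normalize( scores, p = 1, dim = 0 )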

4. Features

4.1. Creating a bittensor wallet

$ bittensor-cli new_coldkey --wallet.name <WALLET NAME>
$ bittensor-cli new_hotkey --wallet.name <WALLET NAME> --wallet.hotkey <HOTKEY NAME>
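
The same can be done from Python (the wallet and hotkey names below are placeholders):

import bittensor

# Creates the coldkey and hotkey under ~/.bittensor/wallets/.
wallet = bittensor.wallet( name = 'my_wallet', hotkey = 'my_hotkey' ).create()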

4.2. Selecting the network to join

There are two open Bittensor networks: Kusanagi and Akatsuki.

  • Kusanagi is the test network. Use Kusanagi to get familiar with Bittensor without worrying about losing valuable tokens.
  • Akatsuki is the main network. The main network reopens on Bittensor-akatsuki in November 2021.
$ export NETWORK=akatsuki 
$ python (..) --subtensor.network $NETWORK
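
The network can also be selected programmatically when creating the subtensor connection:

import bittensor

# Connect to the Akatsuki main network instead of the default.
subtensor = bittensor.subtensor( network = 'akatsuki' )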

4.3. Running a template miner

The following command will run Bittensor's template miner:

$ python ~/.bittensor/bittensor/miners/text/template_miner.py

or with customized settings:

$ python ~/.bittensor/bittensor/miners/text/template_miner.py --wallet.name <WALLET NAME> --wallet.hotkey <HOTKEY NAME>

For the full list of settings, please run

$ python ~/.bittensor/bittensor/miners/text/template_miner.py --help

4.4. Running a template server

The template server follows a similar structure to the template miner.

$ python ~/.bittensor/bittensor/miners/text/template_server.py --wallet.name <WALLET NAME> --wallet.hotkey <HOTKEY NAME>

For the full list of settings, please run

$ python ~/.bittensor/bittensor/miners/text/template_server.py --help

4.5. Subscription to the network

Subscription to the Bittensor network is done using the axon. We must first create a Bittensor wallet and an axon in order to subscribe.

import bittensor

wallet = bittensor.wallet().create()
axon = bittensor.axon (
    wallet = wallet,
    forward_text = forward_text,    # request handlers, as defined in section 3.2
    backward_text = backward_text
).start().subscribe()

4.6. Syncing with the chain/ Finding the ranks/stake/uids of other nodes

Information from the chain is collected by the metagraph.

import bittensor

meta = bittensor.metagraph()
meta.sync()

# --- uid ---
print(meta.uids)

# --- hotkeys ---
print(meta.hotkeys)

# --- ranks ---
print(meta.R)

# --- stake ---
print(meta.S)
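
Combining these fields, a small sketch that lists the five highest-stake nodes (assuming the graph holds at least five):

import torch

# Sort uids by stake, descending, and print the top five (uid, stake) pairs.
top = torch.argsort( meta.S, descending = True )[:5]
for uid in top:
    print( meta.uids[uid].item(), meta.S[uid].item() )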

4.7. Finding and creating the endpoints for other nodes in the network

import bittensor

meta = bittensor.metagraph()
meta.sync()

### Endpoint tensor for the node with uid 0
endpoint_tensor = meta.endpoints[0]
endpoint = bittensor.endpoint.from_tensor(endpoint_tensor)

4.8. Querying others in the network

import bittensor

meta = bittensor.metagraph()
meta.sync()

### Endpoint tensor for the node with uid 0
endpoint_tensor = meta.endpoints[0]

### Creating the endpoint, wallet, and dendrite
endpoint = bittensor.endpoint.from_tensor(endpoint_tensor)
wallet = bittensor.wallet().create()
den = bittensor.dendrite(wallet = wallet)

representations, _ = den.forward_text (
    endpoints = endpoint,
    inputs = "Hello World"
)
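
Several endpoints can be queried in a single call, returning one representation tensor per endpoint (a sketch assuming the graph contains at least three nodes):

endpoints = [ bittensor.endpoint.from_tensor( meta.endpoints[i] ) for i in range(3) ]
representations, _ = den.forward_text (
    endpoints = endpoints,
    inputs = "Hello World"
)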

4.9. Creating a Priority Thread Pool for the axon

import concurrent.futures
import bittensor
import torch
from nuclei.server import server

# `config` is assumed to be a configuration object created elsewhere.
model = server(config=config,model_name='bert-base-uncased',pretrained=True)
optimizer = torch.optim.SGD( [ {"params": model.parameters()} ], lr = 0.01 )
threadpool = bittensor.prioritythreadpool(config=config)
metagraph = bittensor.metagraph().sync()

def forward_text( pubkey, inputs_x ):
    def call(inputs):
        return model.encode_forward( inputs )

    uid = metagraph.hotkeys.index(pubkey)
    priority = metagraph.S[uid].item()
    future = threadpool.submit(call,inputs=inputs_x,priority=priority)
    try:
        return future.result(timeout= model.config.server.forward_timeout)
    except concurrent.futures.TimeoutError:
        raise TimeoutError('Axon forward request timed out.')
  

wallet = bittensor.wallet().create()
axon = bittensor.axon (
    wallet = wallet,
    forward_text = forward_text,
).start().subscribe()

5. License

The MIT License (MIT) Copyright © 2021 Yuma Rao

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

6. Acknowledgments

learning-at-home/hivemind

Comments
  • [feature] [BIT 578] speed up metagraph storage query

    This PR speeds up the storage call made by subtensor.neurons in the subtensor.use_neurons_fast function.

    This feature works by bundling a nodejs binary with the polkadotjs API.
    This binary is a CLI that implements the sync_and_save --filename <default:~/.bittensor/metagraph.json> --block_hash <default:latest> command.
    This syncs the metagraph at the blockhash and saves it to a json file.

    The speed-up is quite significant; below is a test run of the manual sync without the fix, with the IPFS cache, and with the fix.
    [Image: benchmark output]

    And below is the IPFS cache sync versus the manual sync (with fix)

    [Image: IPFS cache sync vs fixed manual sync]

    A pro of this is that it removes the need for a centralized IPFS cache of the metagraph.

    A downside of this fix is that the binaries with nodejs bundled use ~50MB each (one linux, one macos).
    There is currently no binary for windows, but I'm not certain this should be included anyway, as we only support linux/macos.

    Another pro of this fix is it works on both nobunaga and nakamoto, and can be adapted to any network. This also leaves room for adding other large substrate queries and working further with the polkadot js api.

    do not merge 
    opened by camfairchild 9
  • Fix Docker Build and Docker-Compose

    This PR:

    - Irons out the Python version inconsistency between the install scripts and the Dockerfile

    Right now, the Docker image builds with Python 3.7, but the install.sh script per the install instructions installs Python 3.8. This is because a generic python3 is installed by the install script, while the Dockerfile specifically pins 3.7. This makes the Docker container incompatible with the new keccak update, breaking it.

    - Fixes the Docker Compose spec to have the command correctly route and find the Bittensor Python library

    Previously, we were not specifying the path to the Bittensor Python library. Setting the PYTHONPATH environment variable tells the container where to find it.

    opened by rmaronn 7
  • Add signature v2 format

    Summary

    References https://github.com/opentensor/bittensor/pull/976

    This PR introduces a new signature format for requests, in order to avoid situations in which validator requests are fulfilled by nodes which are not targeted by the RPC.

    This PR is based on #976 in order to avoid merge conflicts - only the last two commits are relevant here.

    Changes

    • Remove the static header check. Receptors will still keep adding it, but it is ignored from now on.
    • Add v2 signature format, which also takes into account the target axon hotkey. The v2 signature ensures that the signature cannot be faked by an intermediary that is not a validator.
    • Ensure that nonces cannot be replayed by disallowing equality. Allowing nonce equality renders the nonce moot.
    • On chain parameters of an axon are now always updated. Previously they would be updated only on IP/port changes.
    enhancement feature release/3.6.0 
    opened by adriansmares 6
  • Fallback subtensor

    Adds fallback endpoints to bittensor.subtensor allowing the user to pass a list of fallback endpoints in the event that the default fails.

    btcli overview --subtensor.chain_endpoint badendpoint --subtensor.fallback_endpoints AtreusLB-2c6154f73e6429a9.elb.us-east-2.amazonaws.com:9944 --no_prompt --logging.debug

    opened by unconst 6
  • Support arbitrary gRPC request metadata order

    Summary

    This PR fixes the gRPC server interceptor logic such that it can handle any metadata order. gRPC request metadata is a set of key-value pairs - it has no order. Depending on the sender's ordering being preserved at the receiver is therefore not correct.

    gRPC metadata is sent over as HTTP headers, and the order of HTTP headers may be changed by intermediary proxies.
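
    A minimal sketch of order-independent handling (the header key below is hypothetical, for illustration only):

    # Build an unordered mapping from the invocation metadata instead of
    # relying on the positional order of the key-value pairs.
    def signature_from( handler_call_details ):
        metadata = dict( handler_call_details.invocation_metadata )
        return metadata.get( 'bittensor-signature' )  # hypothetical header key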

    Changes

    • Format the AuthInterceptor using black.
    • Use the invocation metadata as a dictionary, not a list of key-value pairs.
    • Do not trust the user provided request_type and use the gRPC method instead.
    • Fix request_type provided for backward calls.

    Testing

    Tested locally by proxying the axon traffic using Traefik.

    enhancement feature 
    opened by adriansmares 4
  • changed local train and remote train descriptions

    Local and remote train sometimes cause bugs if set to false. The default is already false, so an extra note was added to remind users to only pass this flag when setting it to true.

    do not merge 
    opened by quac88 4
  • [feature] [CUDA solver] Add multi-GPU and ask for CUDA during btcli run

    This PR adds

    • Multi-GPU registration capability (one process per GPU + master)
      • btcli register --cuda.dev_id 0 1 2 3 --cuda.use_cuda
    • A prompt for CUDA registration during btcli run
      • can skip this prompt with btcli run --wallet.reregister false
    enhancement feature 
    opened by camfairchild 4
  • No Serve

    Adds a flag --neuron.no_serve that stops an axon from being served for a validator.

    Useful for those running validators and servers from the same hotkey who need the axon port to be used by their miner.

    opened by CreativeBuilds 3
  • Release 3.4.2

    • [Fix] promo change to axon and dendrite https://github.com/opentensor/bittensor/pull/981
    • [Feature] no version checking flag https://github.com/opentensor/bittensor/pull/974
    • [Fix] add perpet hash rate and adjust alpha https://github.com/opentensor/bittensor/pull/960
    • [Fix] stake conversion issue https://github.com/opentensor/bittensor/pull/958
    • [Feature] Dendrite asyncio https://github.com/opentensor/bittensor/pull/967
    • [Fix] Bit 590 backward fix https://github.com/opentensor/bittensor/pull/957
    • [Feature] No set weights https://github.com/opentensor/bittensor/pull/959
    • [Feature] Improve dataloader performance https://github.com/opentensor/bittensor/pull/950
    • [Fix] Remove locals from cli and bittensor common https://github.com/opentensor/bittensor/pull/947
    • [Fix] Minor fixes https://github.com/opentensor/bittensor/pull/955

    opened by unconst 3
  • [Fix] multi cuda fix

    This PR addresses the issues with CUDA registration introduced by v3.3.4. This fixes:

    • Improper termination

    And this should fix:

    • Reduced actual hashrate (does not match reported), by changing
      nonce_start += self.update_interval * self.num_proc
      to
      nonce_start += self.update_interval * self.TPB
    opened by camfairchild 3
  • [hotfix] fix flags for run command, fix hotkeys flag for overview, and [feature] CUDA reg

    #876 Fixes the flags

    The CPU register optimization introduced a bug where the btcli run command lacked some config flags that were used in CLI.register. This PR adds them in.

    Edit:

    This PR also adds #868 and #867

    opened by camfairchild 3
  • Remove support for huge models

    Describe the bug The scaling law upgrade is actually backfiring on people running fine-tuned miners, because they are hitting a "glass ceiling" when it comes to loss. Scaling laws seem to prefer small miners with low loss.

    To Reproduce Steps to reproduce the behavior:

    1. Execute the following code with a loss of 2: https://github.com/opentensor/bittensor/blob/d532ad595d39f287f8bef445cc1823e6fcdadc3c/bittensor/_neuron/text/core_validator/__init__.py#L1000
    2. Execute the same code with a loss of 1.69.
    3. Execute the same code with a loss of 1.5.
    4. Execute the same code with a loss of 1.
    5. Execute the same code with a loss of 0.

    Expected behavior The reproduction should return a higher value each time it is run. A loss of 0 is theoretically possible.

    Environment:

    • OS and Distro: N/A
    • Bittensor Version: 3.5.0

    Additional context A fine-tuned 2.7B receives the same weights as an untuned 6B and an untuned 20B. This triggered an investigation into why this would be the case. It turns out the scaling law has a minimum of 1.69, which is not mentioned in the corresponding paper and is known by some to be an incorrect estimation. The paper can be disproven by fact.

    opened by mrseeker 2
  • Add Mask to TextLastHiddenState

    Adds masking to the TextLastHiddenState Synapse.

    For a return tensor of size (batch_size, sequence_length, hidden_state), we can now optionally pass a mask as: bittensor.synapse.TextLastHiddenState(mask: Optional[List[int]]). The mask applies across the sequence_length dimension, i.e. if mask == [0], only the hidden state for the 0th element is filled. Explicitly, return_tensor[:, 1:, :] would be 0.

    Alternatively a list of non-consecutive integers can be passed, e.g. bittensor.synapse.TextLastHiddenState(mask = [0, 5, 3]) would only return non-empty tensors for sequence indices 0, 5, and 3.

    This change can drastically decrease the network bandwidth.

    Alternatively, the user can specify a random mask, d.text_last_hidden_state( ... mask = random.choice( seq_len, k ), ... ). This random masking can be used for adversarial resistance or JEPA-style training.
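
    A small illustration of the mask semantics described above (shapes and values are arbitrary):

    import torch

    batch_size, sequence_length, hidden_state = 2, 8, 1024
    hidden = torch.randn( batch_size, sequence_length, hidden_state )
    mask = [0, 5, 3]                       # only these sequence positions are returned
    out = torch.zeros_like( hidden )
    out[:, mask, :] = hidden[:, mask, :]   # every other position stays zero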

    opened by unconst 0
  • Way to monitor the distribution of the loss in live server.

    I am having difficulty deciding which models are performing best and worst. I need to monitor the loss distribution over a whole period of time.

    I am unable to get the loss variable without local and remote train disabled in https://github.com/opentensor/bittensor/tree/master/bittensor/_neuron/text/core_server/run.py

    Describe alternatives you've considered I want to know where the loss variable in the live server is located. I may save it to a DB and plot the distribution, or monitor the loss distribution to compare live runs of different models.


    opened by ALI7861111 0
  • Bit 578 speed up metagraph storage query

    Note: this feature requires using https://github.com/opentensor/subtensor/pull/26 in the subtensor node that you query, as the buffer size needs to be changed for the fast sync to work.


    This PR adds the subtensorapi package to grab live storage from the chain in a faster manner.

    The current live-sync takes around ~8 minutes (see #933) using only pure Python.

    The subtensorapi package wraps a nodejs binary in python and utilizes the @polkadot/api npm library to sync from the chain.

    This sync outputs as JSON to ~/.bittensor/metagraph.json (by default) and then is read into python before being returned to the bittensor package.

    Below is a current graph of the performance of subtensorapi (sapi) vs the ipfs sync (current cached-sync).
    The results may be worse than average as the request times are very node-dependent. This node is hosted on a cheap contabo VPS with heavy traffic. I expect request times to be similar to this, if not better.

    [Image: performance graph of subtensorapi (sapi) vs the IPFS cached-sync]

    Further, subtensorapi can be extended to support other storage values. Currently it also supports the subtensorModule.blockAtRegistration map using Subtensor.blockAtRegistration_all_fast().

    enhancement feature do not merge 
    opened by camfairchild 6
Releases (latest: v3.6.1)
  • v3.6.1(Dec 21, 2022)

    What's Changed

    • V3.6.0 nobunaga merge by @Eugene-hu in https://github.com/opentensor/bittensor/pull/1028
    • Integration dendrite test fixes by @Eugene-hu in https://github.com/opentensor/bittensor/pull/1029
    • Adding 3.6.0 release notes to CHANGELOG by @eduardogr in https://github.com/opentensor/bittensor/pull/1032
    • [BIT-612] Validator robustness improvements by @opentaco in https://github.com/opentensor/bittensor/pull/1034
    • [Hotfix 3.6.1] Validator robustness by @opentaco in https://github.com/opentensor/bittensor/pull/1035

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.6.0...v3.6.1

  • v3.6.0(Dec 13, 2022)

    What's Changed

    • Removal of dendrite multiprocessing by @Eugene-hu in https://github.com/opentensor/bittensor/pull/1017
    • Merging back 3.5.1 fix to nobunaga by @eduardogr in https://github.com/opentensor/bittensor/pull/1018
    • Release/3.5.0 post release by @eduardogr in https://github.com/opentensor/bittensor/pull/1010
    • Fixes issue with --neuron.no_set_weights by @camfairchild in https://github.com/opentensor/bittensor/pull/1020
    • Removing GitHub workflow push docker by @eduardogr in https://github.com/opentensor/bittensor/pull/1011
    • [Fix] fix max stake for single by @camfairchild in https://github.com/opentensor/bittensor/pull/996
    • [Feature] mention balance if not no prompt by @camfairchild in https://github.com/opentensor/bittensor/pull/995
    • Add signature v2 format by @adriansmares in https://github.com/opentensor/bittensor/pull/983
    • Improving the way we manage requirements by @eduardogr in https://github.com/opentensor/bittensor/pull/1003
    • [BIT-601] Scaling law on EMA loss by @opentaco in https://github.com/opentensor/bittensor/pull/1022
    • [BIT-602] Update scaling power from subtensor by @opentaco in https://github.com/opentensor/bittensor/pull/1027
    • Release 3.6.0 by @eduardogr in https://github.com/opentensor/bittensor/pull/1023

    New Contributors

    • @adriansmares made their first contribution in https://github.com/opentensor/bittensor/pull/976

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.5.1...v3.6.0

  • v3.5.1(Nov 25, 2022)

    What's Changed

    • [hotfix] pin scalecodec lower by @camfairchild in https://github.com/opentensor/bittensor/pull/1013

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.5.0...v3.5.1

  • v3.5.0(Nov 24, 2022)

    What's Changed

    • [Fix] allow synapse all (https://github.com/opentensor/bittensor/pull/988)

      • allow set synapse All using flag
      • add test
      • use dot get
    • [Feature] Mark registration threads as daemons (https://github.com/opentensor/bittensor/pull/998)

      • make solver processes daemons
    • [Feature] Validator debug response table (https://github.com/opentensor/bittensor/pull/999)

      • Add response table to validator debugging
    • [Feature] Validator weight setting improvements (https://github.com/opentensor/bittensor/pull/1000)

      • Remove responsive prioritization from validator weight calculation
      • Move metagraph_sync just before weight setting
      • Add metagraph register to validator
      • Update validator epoch conditions
      • Log epoch while condition details
      • Consume validator nucleus UID queue fully
      • Increase synergy table display precision
      • Round before casting to int in phrase_cross_entropy
    • small fix for changelog and version by @Eugene-hu in https://github.com/opentensor/bittensor/pull/993

    • release/3.5.0 by @eduardogr in https://github.com/opentensor/bittensor/pull/1006

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.4.3...v3.5.0

  • v3.4.3(Nov 15, 2022)

    What's Changed

    • [Hotfix] Synapse security update by @opentaco in https://github.com/opentensor/bittensor/pull/991

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.4.2...v3.4.3

  • v3.4.2(Nov 9, 2022)

    What's Changed

    • Adding 3.4.0 changelog to CHANGELOG.md by @eduardogr in https://github.com/opentensor/bittensor/pull/953
    • Release 3.4.2 by @unconst in https://github.com/opentensor/bittensor/pull/970

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.4.1...v3.4.2

  • v3.4.1(Oct 13, 2022)

    What's Changed

    • [Hotfix] Fix CUDA Reg update block by @camfairchild in https://github.com/opentensor/bittensor/pull/954

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.4.0...v3.4.1

  • v3.4.0(Oct 13, 2022)

    What's Changed

    • Parameters update by @Eugene-hu #936
    • Bittensor Generate by @unconst #941
    • Prometheus by @unconst #928
    • [Tooling][Release] Adding release script by @eduardogr in https://github.com/opentensor/bittensor/pull/948

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.3.4...v3.4.0

  • v3.3.4(Oct 3, 2022)

    What's Changed

    • [hot-fix] fix indent again. add test by @camfairchild in https://github.com/opentensor/bittensor/pull/907
    • Delete old gitbooks by @quac88 in https://github.com/opentensor/bittensor/pull/924
    • Release/3.3.4 by @Eugene-hu in https://github.com/opentensor/bittensor/pull/927

    New Contributors

    • @quac88 made their first contribution in https://github.com/opentensor/bittensor/pull/924

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.3.3...v3.3.4

  • v3.3.3(Sep 6, 2022)

    What's Changed

    • [feature] cpu register faster by @camfairchild in https://github.com/opentensor/bittensor/pull/854
    • [hotfix] fix flags for multiproc register limit by @camfairchild in https://github.com/opentensor/bittensor/pull/876
    • Fix/diff unpack bit shift by @camfairchild in https://github.com/opentensor/bittensor/pull/878
    • [Feature] [cubit] CUDA registration solver by @camfairchild in https://github.com/opentensor/bittensor/pull/868
    • Fix/move overview args to cli by @camfairchild in https://github.com/opentensor/bittensor/pull/867
    • Add/address CUDA reg changes by @camfairchild in https://github.com/opentensor/bittensor/pull/879
    • [Fix] --help command by @camfairchild in https://github.com/opentensor/bittensor/pull/884
    • Validator hotfix min allowed weights by @Eugene-hu in https://github.com/opentensor/bittensor/pull/885
    • [BIT-552] Validator improvements (nucleus permute, synergy avg) by @opentaco in https://github.com/opentensor/bittensor/pull/889
    • Bit 553 bug fixes by @isabella618033 in https://github.com/opentensor/bittensor/pull/886
    • add check to add ws:// if needed by @camfairchild in https://github.com/opentensor/bittensor/pull/896
    • [BIT-572] Exclude lowest quantile from weight setting by @opentaco in https://github.com/opentensor/bittensor/pull/895
    • [BIT-573] Improve validator epoch and responsives handling by @opentaco in https://github.com/opentensor/bittensor/pull/901
    • Nobunaga Release V3.3.3 by @Eugene-hu in https://github.com/opentensor/bittensor/pull/899

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.3.2...v3.3.3

  • v3.3.2(Aug 23, 2022)

    SynapseType fix in dendrite

    What's Changed

    • SynapseType fix in dendrite by @robertalanm in https://github.com/opentensor/bittensor/pull/874

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.3.1...v3.3.2

  • v3.3.1(Aug 23, 2022)

    What's Changed

    • [hotfix] Fix GPU reg bug. bad indent by @camfairchild in https://github.com/opentensor/bittensor/pull/883

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.3.0...v3.3.1

  • v3.3.0(Aug 23, 2022)

    CUDA registration

    This release adds the ability to complete the registration using a CUDA-capable device.
    See https://github.com/opentensor/cubit/releases/tag/v1.0.5 for the required cubit v1.0.5 release

    Also a few bug fixes for the CLI

    What's Changed

    • [hotfix] fix flags for run command, fix hotkeys flag for overview, and [feature] CUDA reg by @camfairchild in https://github.com/opentensor/bittensor/pull/877

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.2.0...v3.3.0

  • v3.2.0(Aug 23, 2022)

    Validator saving and responsive-priority weight-setting

    What's Changed

    • [BIT-540] Choose responsive UIDs for setting weights in validator + validator save/load by @opentaco in https://github.com/opentensor/bittensor/pull/872

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.1.0...v3.2.0

  • v3.1.0(Aug 23, 2022)

    Optimizing multi-processed CPU registration

    This release refactors the registration code for CPU registration to improve solving performance.

    What's Changed

    • [feature] cpu register faster (#854) by @camfairchild in https://github.com/opentensor/bittensor/pull/875

    Full Changelog: https://github.com/opentensor/bittensor/compare/v3.0.0...v3.1.0

  • v3.0.0(Aug 8, 2022)

  • v2.1.0(Aug 5, 2022)
